arXiv: 2308.02527
Title: Multiobjective Evolutionary Component Effect on Algorithm behavior
Authors: Yuri Lavinas, Marcelo Ladeira, Gabriela Ochoa, Claus Aranha
Published: 2023-07-31T16:02:56Z
Link: http://arxiv.org/abs/2308.02527v1
# Multiobjective Evolutionary Component Effect on Algorithm behavior

## Abstract

The performance of multiobjective evolutionary algorithms (MOEAs) varies across problems, making it hard to develop new algorithms or apply existing ones to new problems. To simplify the development and application of new multiobjective algorithms, there has been an increasing interest in their automatic design from their components. These automatically designed metaheuristics can outperform their human-developed counterparts. However, it is still unknown which components are the most influential in producing performance improvements. This study specifies a new methodology to investigate the effects of the final configuration of an automatically designed algorithm. We apply this methodology to a tuned Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D) designed by the iterated racing (irace) configuration package on constrained problems of three groups: (1) analytical real-world problems, (2) analytical artificial problems and (3) simulated real-world problems. We then compare the impact of the algorithm components in terms of their Search Trajectory Networks (STNs), the diversity of the population, and the anytime hypervolume values. Looking at the objective space behavior, the MOEAs studied converged before half of the search to generally good HV values in the analytical artificial problems and the analytical real-world problems. For the simulated problems, the HV values are still improving at the end of the run. In terms of decision space behavior, we see a diverse set of trajectories in the STNs of the analytical artificial problems. These trajectories are more similar and frequently reach optimal solutions in the other problems.
**Keywords:** Algorithm analysis, continuous optimization, automatic algorithm configuration, multiobjective optimization

## 1. Introduction

In this work, we investigate the question of how to measure the contribution of specific components in a MOEA. We focus this investigation on understanding the changes in _anytime behavior_ that one or more components effect on the algorithm. We define anytime behavior as the combination of the algorithm's performance in terms of objective space convergence and the choices made during the exploration of the decision space. We highlight that this definition focuses on the behavior of the algorithm when optimizing a problem, thus dissociating it from _landscape analysis_, which focuses on the structure of the search space.

To analyze the contributions of specific components, we perform a case study on a variant of the MOEA/D algorithm [45] created by an algorithm configurator (irace). MOEA/D is a popular and efficient algorithm for solving MOPs and can modify its search behavior on different MOPs. To understand the contribution of each component, we create modifications of the above automatically designed MOEA/D (auto-MOEA/D), by either removing a component or replacing it with the corresponding original in MOEA/D, as appropriate. For each modification, we investigate its performance on a set of problems to identify the contribution of the component. The methodology is an extension of our work introduced in [30], with the inclusion of objective space behavior analysis in the form of anytime hypervolume analysis.

This investigation takes the form of a case study on six real-world analytical continuous benchmark problems, compiled together by Tanabe et al. [41], and two simulated continuous benchmark problems: (1) the problem of selecting landing sites for a lunar exploration robot [35] and (2) the problem of optimizing car designs [26]. We conduct our analysis focusing on how these metaheuristics explore both the objective and decision space. Furthermore, we contrast the automatically designed MOEA (auto-MOEA/D) against each of the variants in terms of their Search Trajectory Networks (STNs) [28; 37], the diversity of their populations, and traditional performance metrics. Moreover, we compare the analytical and simulated problems in terms of the overall metrics of auto-MOEA/D. Finally, we analyze the similarities between the benchmark problems used for designing auto-MOEA/D and the real-world problems. To the best of our knowledge, this is the first component-wise analysis of MOEAs on the objective and decision space dynamics in real-world constrained MOPs. For reproducibility purposes, all the code and experimental scripts are available online at [https://doi.org/10.5281/zenodo.8192256](https://doi.org/10.5281/zenodo.8192256).

In this paper, our contributions can be summarised as follows:

1. We extend our previous study on behavior analysis to a more thorough investigation, by considering the differences in behavior in both the objective space and the decision space during the search progress.
2. We study the behavior of the components of MOEA/D in analytical and simulated real-world problems.
3. We analyze the similarities between the benchmark problems used for designing auto-MOEA/D and the real-world problems.

The paper is organized as follows. Section 2 overviews previous work related to the automated design of algorithms and constrained problems. Section 3 introduces relevant concepts, followed by Section 4, which explains the details of the methodology used in this work. The automatic design of the MOEA is shown in Section 5.
Then, the comparison of the components set-up is presented in Section 6, and the analysis of the search behavior dynamics of the different MOEA/D variants is shown in Section 7. Finally, Section 8 outlines our main findings, limitations and suggestions for future work.

## 2. Related Work

### Analysis of Algorithm Behavior

Convergence analysis (Krause et al., 2017) is one way to describe algorithm behavior, by illustrating the trade-off between exploration and exploitation in evolutionary algorithms. However, knowing whether a population has converged does not reveal the location of this convergence, and hence does not allow the user to know whether the convergence is premature. In this sense, understanding the behavior of search and optimization algorithms remains a challenge. Another way to understand the behavior of an algorithm, especially in the case of multiobjective optimization, is to visualize and contrast the Pareto front achieved by the algorithm (Krause et al., 2017; Krause et al., 2018; Krause et al., 2019; Krause et al., 2020). However, in general this approach focuses on detecting increments in performance, and is limited to observing changes in the _objective space dynamics_. We argue that analysing the decision space might expand our understanding of the behavior of multiobjective optimisation solvers. Finally, another way to understand algorithm behavior is through Search Trajectory Networks (STNs) (Sutton et al., 2016), which illustrate as a graph how the algorithm explores the decision space. Recently, STNs were generalized to multiobjective algorithms (Sutton et al., 2018), and we use these MOP-STN models as one of the tools to discriminate behavior changes in MOEA components.
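Concretely, an STN reduces to visit counts over a partition of the decision space. The following is a minimal Python sketch (the paper's tooling is R-based; the coordinate-rounding `location` mapping used here is an assumed stand-in for the actual partitioning scheme):

```python
from collections import Counter

def location(x, precision=1):
    """Map a solution to a coarse decision-space location (one partition cell).
    Rounding coordinates is an illustrative partitioning, not the paper's."""
    return tuple(round(v, precision) for v in x)

def stn_from_trajectories(trajectories, precision=1):
    """Build an STN: nodes are visited locations, edges connect consecutive
    locations; counts record visit frequencies aggregated over runs."""
    nodes, edges = Counter(), Counter()
    for run in trajectories:
        locs = [location(x, precision) for x in run]
        nodes.update(locs)
        edges.update(zip(locs, locs[1:]))
    return nodes, edges

# Two toy runs of a two-variable search
runs = [
    [(0.11, 0.52), (0.14, 0.48), (0.31, 0.29)],
    [(0.12, 0.49), (0.31, 0.29)],
]
nodes, edges = stn_from_trajectories(runs)
print(len(nodes), len(edges))  # 2 2: two unique locations, two unique transitions
```

Aggregating several runs this way is what makes node size and edge width meaningful in the visualizations described later.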
### Automatic Design of Evolutionary Algorithms

Most approaches to the automatic design of evolutionary algorithms focus on creating templates that can instantiate many algorithms and their parameter settings for performance improvements. For example, there have been studies to automatically design NSGA-II (Sutton et al., 2016) and MOEA/D (Dai et al., 2017) on commonly used benchmark sets. Moreover, two seminal examples are the works of Bezerra et al. (Bezerra et al., 2016), which proposed a component-wise MOEA template that instantiates multiple existing frameworks for continuous and combinatorial optimisation MOPs. Their research efforts mainly focus on exploiting automatic configuration to increase the performance of multiobjective algorithms on benchmark problems without constraints. We also highlight the work of Radulescu et al. (Radulescu et al., 2017), which focuses on improving the performance of multiobjective metaheuristics. These works are insightful approaches; however, they concentrate on finding well-performing configurations of multiobjective algorithms. On the other hand, there are few studies in the context of the automatic design of algorithms that focus on _the effect of the different components_ on the performance of the algorithm.

### Behavior Analysis in the Continuous Multiobjective Domain

There are few works in the continuous multiobjective domain related to behavior analysis. Among the few examples are works on the relations between behavior and population size (Sutton et al., 2016) and between behavior, solution quality and time (Sutton et al., 2016). In contrast, more works have studied the behavior of multiobjective algorithms in the combinatorial domain.
Two of these works focus on understanding selection and population size effects on the algorithms' ability in terms of dominance status, membership to the Pareto optimal set, recentness of discovery, and how their numbers change generation by generation (Bezerra et al., 2016; Krause et al., 2019). Another work explores how already known and controllable structures, such as modality and ruggedness, relate to the working principles, behavior, and performance of MOEAs (Krause et al., 2019). Finally, few studies have considered the contribution of individual components to MOEAs' performance (Bezerra et al., 2016). Furthermore, in most cases, performance is evaluated on unconstrained problems or problems where constraints are simple to address (Kalinin et al., 2017; Li et al., 2018; Li et al., 2019). Constraints invalidate some solutions, which makes finding a set of feasible solutions a challenging task.

## 3. Preliminaries

MOEA/D (Mohammad et al., 2017) is a popular and efficient algorithm for finding good sets of trade-off solutions for MOPs. The key idea of MOEA/D is to create a linear decomposition of the MOP into a set of single-objective subproblems. Decomposing the MOP into various single-objective subproblems makes the algorithm very flexible for dealing with constraints, because adding a penalty value is straightforward: MOEA/D adds a penalty value related to the amount of violation of the constraint to each of the subproblems. Given the nature of the single-objective subproblems, MOEA/D can easily use multiple constraint handling techniques (CHTs). The MOEA/D template we propose for instantiating and designing variants of this metaheuristic is shown in Algorithm 1. We use the generational version of MOEA/D augmented with the Unbounded External Archive (UEA). The UEA is used to keep all nondominated solutions found by a multiobjective optimizer during the search process.
Solutions in the archive are only used as the output of the algorithm and are stored in such a way that they do not affect the search run (Kalinin et al., 2017; Li et al., 2019).

### Automatic Design Configurator

For automated design, we use Iterated Racing (irace) (Li et al., 2019). The goal of using irace is to tune the set of components of an algorithm over a set of optimization problems, to find a configuration that performs well on average across all problems. After fine-tuning the MOEA/D with irace, we conduct an ablation analysis (Kalinin et al., 2017; Li et al., 2019) to help us understand the choice of component values and whether each of these choices effectively improves the MOEA/D performance. This analysis investigates the differences between configurations. We conduct an ablation analysis between a selected target configuration (Footnote 1) and the best configuration found by irace.

Footnote 1: In our case, we select the first configuration tried by irace during the tuning process.

### Search Trajectory Networks (STNs) for MOPs

We use Search Trajectory Networks as a tool for visualization, following the method described in (Kalinin et al., 2017; Li et al., 2019). In an STN model, each solution in the decision space is mapped to a location. Similar solutions are generally mapped to the same location, as the locations represent a partition of the decision space. The network models are extracted from data obtained during several runs of the studied algorithm(s). A network model requires defining its nodes and edges. In an STN model, _nodes_ are the locations in the search trajectories visited by a given algorithm, and _edges_ connect two consecutive locations in the trajectory. A strength of network models is that they can be visualized. When decorating the networks for visualization, it is possible to highlight attributes of the nodes and edges that are relevant to the search process.
In these visualizations, the size of nodes and the width of edges are proportional to how many times the algorithms visited them during the aggregation of runs used to extract the model. Visualizations use _force-directed_ graph layout algorithms as implemented in the R package igraph (Garon et al., 2017). The key idea of this method is that we keep track of a small number of decomposition vectors, match a representative solution to each vector, and then merge the trajectories of each vector into a single multiobjective STN. The merged STN model merges the \(n\) STNs of the decomposition vectors and is obtained by the graph union of the \(n\) individual graphs. The merged graph contains the nodes and edges present in at least one of the vectors' graphs. Attributes are kept for the nodes and edges, indicating whether they were visited by both algorithms (shared) or by only one of them. Finally, we have the merged STN models, where different MOEAs are combined into one single merged STN model. The merged STNs allow us to directly visually compare how distinct variants explore the decision space (Shen et al., 2017). In the merged models, there is the notion of _shared nodes_, which are nodes visited by more than one algorithm and are indicated in grey colour in the network visualization.

### Network and Performance Metrics

We use the following STN metrics to assess the global structure of the trajectories and bring insight into the behavior of the MOEAs modelled: (1) the number of unique nodes, (2) the number of unique edges, (3) the number of shared nodes between vectors, and (4) the number of solutions in the Pareto front. For reference, we use the following criterion to compare the results of the different strategies, based on the metric analysis done in the work of (Shen et al., 2017): the final approximation hypervolume (HV), the volume of the n-dimensional region enclosed by the reference point and the solutions.
It is worth noting that additional network and MOP metrics could also be considered. These metrics are summarised in Table 1. We also use the population variance metric.

| Metric | Description |
| --- | --- |
| Nodes | Total number of nodes, which corresponds to the number of unique locations visited. |
| Edges | Total number of edges, the number of unique search transitions. |
| Variance | Dispersion of the population in the decision space. |
| #PF | Number of solutions in the theoretical Pareto front or in the best approximation to the Pareto front. |

Table 1. Description of decision space metrics.

## 4. Behavior Analysis Methodology

Here we define a methodology to measure the differences in decision and objective space dynamics of variants of an algorithm that alter a single component from the base algorithm. Our reason for using such a methodology is to identify the most influential components of an algorithm and how they affect the search space dynamics in different problems. Thus, we can better identify the influence of the considered component, even if such a method explores a reduced part of the possible algorithm versions. First, we use irace to automatically design a tailor-suited MOEA (auto-MOEA) that performs well on a set of problems. Secondly, we modify auto-MOEA to create many variants, each of which has only one component that differs from auto-MOEA. These variants are obtained by: (1) removing a component if possible; (2) otherwise, replacing this component with the corresponding one from the original MOEA. Thus, we have multiple MOEAs: the auto-MOEA plus one variant for each component. The idea here is to be able to capture the effects of each one of the components individually. This step to create variants is done manually, based on the user's expertise about the components.
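The variant-creation step above can be sketched as follows (illustrative Python; the component names and values mirror Table 4, while `REMOVABLE` and `DEFAULTS` are assumptions of this sketch, not the authors' code):

```python
# Base configuration found by the configurator (values mirror Table 4).
AUTO = {
    "aggregation": "AWT",
    "update": ("Best", 9),
    "restart": True,
    "ra": ("Partial", 0.05),
}
# Components that can simply be switched off, and the traditional MOEA/D-DE
# settings used when removal is not possible (assumed for this sketch).
REMOVABLE = {"restart", "ra"}
DEFAULTS = {"aggregation": "WT", "update": ("Restricted", 2)}

def make_variants(base):
    """One variant per component: remove the component if possible,
    otherwise fall back to the traditional MOEA/D setting."""
    variants = {}
    for comp in base:
        v = dict(base)  # copy, so only one component differs
        v[comp] = False if comp in REMOVABLE else DEFAULTS[comp]
        variants[comp] = v
    return variants

variants = make_variants(AUTO)
print(len(variants))                   # 4 variants, one per component
print(variants["restart"]["restart"])  # False: the component was removed
```

Each resulting configuration differs from the base in exactly one component, which is what lets the later metric deltas be attributed to that component.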
Then, we collect log data of the executions of all variants and the base algorithm during their run on the problems of interest. This log data contains the parameters of the non-dominated solutions of each generation, as well as their objective space values and the solutions' feasibility (Footnote 2). This data is processed following the approach described in Subsection 3.2 for analysing how the components affect the decision space exploration of the MOEA. To measure the different behaviors, we use metrics and visuals from the Search Trajectory Networks (STNs) (Sutton et al., 2016; Wang et al., 2017) combined with a population diversity metric.

Footnote 2: For technical reasons we also keep the execution number.

We also use the log data to calculate the anytime HV performance of the algorithm over the evaluations and the number of solutions in the Pareto front (Footnote 3). To verify the impact of a variant, we calculate the difference between the metric values of that variant and those of the auto-MOEA.

Footnote 3: We use the theoretical Pareto front if it is available. Otherwise, we use the approximation to the Pareto front.

## 5. Designing Auto-MOEA/D

We analyse the components of a MOEA/D instance that was automatically designed. This design process was done in a component-wise framework, similar to the protocols used by Bezerra et al. (2017) and Campelo et al. (2018). We extend the MOEADr package (Bezerra et al., 2018) to introduce options for population restart and the most representative Resource Allocation (RA) method, called the partial update of the population (Sutton et al., 2016; Wang et al., 2017).

### Variable Components Search Space

The configuration space used in our experiments contains the algorithm components and numerical parameters of the MOEA/D framework. These are shown in Table 2. Special attention is required with the variation operators: Differential Evolution (DE) mutation and polynomial mutation.
They are always performed sequentially, first DE and then the polynomial mutation. Thus, the order of the stack of operators is kept fixed, but the parameter values are variable. Similar attention should be given to the restart strategy, where only the choice of whether to use this strategy is explored. We choose to fix some MOEA/D components to reduce the search space for the irace configurator. The fixed components are the computational budget, the objective scaling and the constraint handling technique (CHT). These fixed components are always present in every configuration of the MOEA/D that irace generates: (1) the number of function evaluations is set to \(100000\) in order to grasp all possible behaviors of the automatically designed algorithm during the run; (2) all objectives are linearly scaled at every iteration to the interval \([0,1]\); (3) we use the Dynamic CHT (Sutton et al., 2016), which starts with a small penalty value and increases it across the iterations, first focusing on the diversity of feasible solutions and later on the convergence of those solutions. It is defined by \(f^{agg}_{penalty}(x) = f^{agg}(x) + (C \cdot t)^{\alpha} \cdot v(x)\), where \(C=5\) and \(\alpha=2\) are constants we defined based on the following works (Sutton et al., 2016; Wang et al., 2017), \(t\) is the generation number and \(v\) is the total violation of a solution.
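The dynamic CHT can be written down directly from its definition (a Python sketch; the weighted Tchebycheff scalarisation standing in for \(f^{agg}\) is a common MOEA/D choice and an assumption here, not necessarily the configured aggregation):

```python
def tchebycheff(objs, weights, ideal):
    """Weighted Tchebycheff aggregation, one common MOEA/D scalarisation
    (illustrative stand-in for the configured aggregation function)."""
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

def dynamic_penalty(f_agg, t, violation, C=5, alpha=2):
    """Dynamic CHT: f_agg + (C * t)**alpha * v(x), with C=5 and alpha=2
    as in the text; the penalty grows quadratically with generation t."""
    return f_agg + (C * t) ** alpha * violation

f = tchebycheff((0.4, 0.8), (0.5, 0.5), (0.0, 0.0))
print(f)  # 0.4
print(dynamic_penalty(f, t=2, violation=0.01))  # 0.4 + (5*2)**2 * 0.01, about 1.4
```

Because the penalty term starts near zero and grows with \(t\), early generations tolerate infeasible but diverse solutions, while late generations push toward feasible convergence.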
### Configurator Setup

We use the irace configurator.

The real-world simulated MOPs used are two recently proposed problems from the Japanese evolutionary computing society: (1) the Mazda benchmark problem (MAZDA), a discrete optimization problem for designing Mazda cars, where the objectives are to maximize the number of parts common to three different cars and to minimize the total weight of the car (Marcelo et al., 2017); (2) lunar landing site selection (MOON), where the goal is to select landing site coordinates (x, y) for a lunar exploration robot with the objectives of minimizing the number of continuous shaded days, minimizing the inverse of the total communication time (Footnote 4), and minimizing the tilt angle (Marcelo et al., 2017).

Footnote 4: i.e., maximizing the total communication time.

### Evaluation Metrics for the Automatic Design

Analysing MOP solvers considering only their final approximation provides limited information about these algorithms' performance, since any MOP solver should return a suitable set of solutions at any time during the search (Marcelo et al., 2017; Sridharan et al., 2018). Here, we analyse the anytime performance effects in terms of hypervolume (HV) values to investigate the impact of different configurations of MOEA/D on their Unbounded External Archive. We run auto-MOEA/D 10 times on each of the problems. We use the following method to compare the results of the different strategies: we calculate the cumulative HV over the search progress to quantify the anytime HV performance. At every 1000 evaluations, we calculate the HV of the solutions in the UEA at that iteration, using as reference point 1.1 repeated over the number of objectives, following the work of Bezerra et al. (2018). Then, we sum all values to obtain an approximated evaluation of the anytime HV curve.

Figure 1. irace output with the frequency of the different choices of components and parameters.
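The anytime HV measurement described above can be sketched as follows (Python; the two-objective HV routine and the toy archive snapshots are illustrative, with objectives assumed already scaled to \([0,1]\) and the reference point at 1.1 per objective):

```python
def hv_2d(front, ref=(1.1, 1.1)):
    """Hypervolume of a two-objective nondominated front (minimisation),
    computed by sweeping the front sorted on the first objective."""
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def anytime_hv(snapshots):
    """Cumulative anytime HV: one archive snapshot per checkpoint
    (every 1000 evaluations in the text), summed into a single score."""
    return sum(hv_2d(s) for s in snapshots)

# Toy run: the archive improves between two checkpoints.
early = [(0.6, 0.6)]
late = [(0.2, 0.8), (0.6, 0.3)]
print(hv_2d(early))              # (1.1-0.6)*(1.1-0.6), about 0.25
print(anytime_hv([early, late])) # about 0.25 + 0.52 = 0.77
```

Summing per-checkpoint HV rewards configurations that reach good fronts early, which is exactly the anytime property the comparison targets.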
Figure 1 shows the frequency of the different choices of components and parameters after the tuning and ablation design is performed. As in (Shi et al., 2018), the ablation analysis is done to better use the available computational resources: it focuses on the better-performing configurations found by an automatic configuration method and proceeds by changing one parameter at a time, concentrating on well-performing configurations rather than on merely common ones. There is a consensus over the components and parameters studied here. This suggests that irace is confident that the choice of components used in the auto-MOEA/D design has at least an adequate overall performance on the problems used during the configuration process.

### Performance of auto-MOEA/D

Here, we briefly describe the performance of auto-MOEA/D in terms of HV (Footnote 5) and variance of the final population, and the STN metrics: number of nodes, edges and shared nodes. For the calculation of HV, the objective function was scaled to the \([0,1]\) interval, with the reference point of 1.1 repeated over the number of objectives. Thus, after scaling the values, the maximum HV is 1.21 for MOPs with two objectives and 1.331 for MOPs with three objectives. Instead of showing the achieved HV value of auto-MOEA/D, we calculate how close the performance of this algorithm is to the maximum possible HV value (max(HV)). Values close to 0 indicate no HV performance (Footnote 6), while values close to 1 indicate high performance.

Footnote 5: Higher is better.

Footnote 6: Probably the approximation found by auto-MOEA/D is out of the range of the reference point.

The different metric values for auto-MOEA/D are shown in Table 3. We can see that higher HV values correspond to a high number of nodes and edges and lower final populational variance values for the DASCMOP1, DASCMOP2, DASCMOP3 and DASCMOP9 problems. In contrast, the opposite happens for the other DASCMOP problems.
Given the low HV performance, we speculate that auto-MOEA/D converges prematurely, probably to local optima areas of the decision space.

| MOP | HV/max(HV) | SD | Nodes | Edges | Variance | #PF |
| --- | --- | --- | --- | --- | --- | --- |
| DASCMOP1 | 0.931 | 0.002 | 4268 | 4812 | 0.514 | 212 |
| DASCMOP2 | 0.964 | 0.002 | 4821 | 5480 | 0.45 | 279 |
| DASCMOP3 | 0.962 | 0.007 | 7175 | 8595 | 0.494 | 44 |
| DASCMOP4 | 0 | 0 | 2316 | 2890 | 4.812 | 0 |
| DASCMOP5 | 0.095 | 0.362 | 2352 | 2998 | 5.089 | 0 |
| DASCMOP6 | 0 | 0 | 2313 | 2901 | 5.022 | 0 |
| DASCMOP7 | 0.035 | 0 | 2388 | 3139 | 5.035 | 0 |
| DASCMOP8 | 0 | 0 | 2465 | 3235 | 4.372 | 0 |
| DASCMOP9 | 0.849 | 0.397 | 15756 | 19233 | 0.437 | 10 |
| MOON | 0.392 | 0.452 | 1375 | 3012 | 0.007 | 2 |
| MAZDA | 0.016 | 0.026 | 5443 | 5531 | 5.366 | 0 |
| CRE21 | 0.989 | 0.011 | 4774 | 6370 | 0.1 | 0 |
| CRE22 | 0.995 | 0.001 | 2047 | 2192 | 0.119 | 12 |
| CRE23 | 0.608 | 0.003 | 7774 | 10369 | 0.335 | 1 |
| CRE31 | 0.567 | 0.002 | 5259 | 5497 | 0.72 | 2 |
| CRE32 | 0.756 | 0.011 | 14976 | 17743 | 0.331 | 0 |

Table 3. Hypervolume ratio (HV/max(HV)), HV standard deviation (SD), nodes, edges, population variance, and the number of solutions in the theoretical Pareto front (#PF) of auto-MOEA/D for the benchmark problems.

The HV performance of auto-MOEA/D deteriorates for the simulated MOON and MAZDA problems. There is no agreement in the values of the number of nodes, edges and variance found for auto-MOEA/D in this set of problems, suggesting that each problem has different features in relation to the others. Interestingly, the number of nodes and the variance metric are the lowest in the MOON problem, which might suggest that the population gets trapped in local optima during the run. This could also partially explain why the HV performance is low.
For the MAZDA problem, auto-MOEA/D also has poor HV performance, but a higher number of nodes, edges and variance in comparison to the MOON problem. For the CRE problems, we see that the HV performance is close to that of the DASCMOP1, DASCMOP2, DASCMOP3 and DASCMOP9 problems. In general, the other metrics follow a similar comparison trend between the CRE problems and DASCMOP. Thus, we understand that there are similarities among the artificial DASCMOP problems and the CRE problems, and limited similarities with the more challenging simulated MOON and MAZDA problems. Finally, we comment on the results of the constraint difficulty change between our previous work and this current study. Contrary to our expectations, having different constraint difficulty levels led to improvements in the HV performance of auto-MOEA/D on the easy problems (DASCMOP1-3 and 9), but no clear increments in performance for the hard set of problems (DASCMOP4-8).

## 6. Comparison of the Components

Here we use our methodology to investigate the effects of the final configuration of a machine-designed multiobjective algorithm. This analysis aims to measure the differences in the decision and objective space dynamics among several variants of the MOEA and, through these measures, identify the most influential components of the automatically designed algorithm.

| Component | auto-MOEA/D | Component variant |
| --- | --- | --- |
| Decomposition + pop. size | Sobol, 100 | SLD, 300 |
| Aggregation function | AWT | WT |
| Update | Best, \(nr = 9\) | Restricted, \(nr = 2\) |
| Neighbourhood pars. | \(T = 22\), \(Delta = 0.9822\) | \(T = 20\), \(Delta = 0.9\) |
| Operators pars. | DE: \(F = 0.4908\); Polynomial: \(\eta_m = 80.9844\), \(prob = 0.4556\) | DE: \(F = 0.5\); Polynomial: \(\eta_m = 20\), \(prob = 0.3\) |
| Restart | True | False |
| RA | Partial, 5% | False |

Table 4. Auto-MOEA/D setup and the variants under analysis. For each variant, only _one component_ is changed, while the other components are the same as in auto-MOEA/D.

To analyse the behavioral effects of the different variants, we compare the auto-MOEA/D described in Section 5 against variants with at most one single component altered. We obtain these variants by changing or removing a single component of the auto-MOEA/D at a time. This is done by either (1) removing the component from the algorithm when possible or (2) changing its parameters to its counterpart in the traditional MOEA/D. Table 4 lists the auto-MOEA/D setup, which is generated by the process in Section 4, and the variants. These variants are not necessarily components generated by auto-MOEA/D. When total removal of the component is not possible, we use the standard version of these components introduced by Hui Li and Qingfu Zhang in MOEA/D-DE (Li et al., 2019), commonly found in the literature. Thus, there are _seven_ variants to analyse, which, added to auto-MOEA/D itself, leads to a total of _eight_ algorithmic variations of the MOEA/D framework. These variations are compared quantitatively and visually in terms of their STN models to detect which components produce the most extensive changes.
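The merged STN models used for these pairwise comparisons (Section 3) amount to a graph union that tags shared elements; a dependency-free Python sketch with toy trajectories:

```python
def union_stn(stn_a, stn_b):
    """Merge two STNs, each given as (set of nodes, set of edges): the merged
    model keeps everything visited by either algorithm and marks the nodes
    visited by both (drawn in grey in the paper's visualizations)."""
    nodes_a, edges_a = stn_a
    nodes_b, edges_b = stn_b
    merged_nodes = nodes_a | nodes_b
    merged_edges = edges_a | edges_b
    shared_nodes = nodes_a & nodes_b
    return merged_nodes, merged_edges, shared_nodes

# Toy STNs for two algorithm variants visiting overlapping locations
a = ({"s1", "s2", "s3"}, {("s1", "s2"), ("s2", "s3")})
b = ({"s1", "s4", "s3"}, {("s1", "s4"), ("s4", "s3")})
nodes, edges, shared = union_stn(a, b)
print(len(nodes), len(edges), len(shared))  # 4 4 2
```

The count of shared nodes is then usable directly as one of the comparison metrics, since a large overlap means two variants traverse similar decision-space regions.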
## 7. Behavioral Dynamics

To quantitatively analyse the dynamics of the search progress of the different variants of MOEA/D, we model the search dynamics using STNs for each pair of auto-MOEA/D and an auto-MOEA/D variant, leading to seven different pairs. We highlight that, given the lower impact on performance of the decomposition and population size observed in the original paper (pre-extension, see (Zhou et al., 2019)), and since most works in the literature focus on population size and decomposition methods (to name only a few: (Beng et al., 2016; Li et al., 2019)), we combine these components into one and direct our studies towards the other components. We base our quantitative analysis on the traditional multiobjective metric hypervolume (HV); the number of nodes, edges and shared nodes of the STN models; and the populational variance. For the HV, we use the reference point of 1.1, repeated over the number of objectives. We linearly scale all objectives to the interval \([0,1]\) for a more straightforward comparison among the algorithms. We run each variant 10 times on each of the problems.

Figure 2. Correlation matrix among the different metrics. We find some correlation between HV and the **variance** of the final population, and correlation between the number of nodes and edges.

### Metrics Analysis

Figure 2 shows the correlation matrix among the different metrics studied, considering the results of the MOEA/D variants for the DASCMOP, simulated (Footnote 7) and analytical (Footnote 8) problems. Since the number of nodes and the number of edges have a high correlation, we remove the number of edges metric from our analysis. The other STN metrics, together with the final population variance, concern the decision space; thus, we use them in our following analysis to strengthen the study. Furthermore, we see a correlation between the HV metric and the population variance.
This correlation suggests a link between these decision space metrics and improvements in HV performance. Moreover, we understand that there might exist a connection between solution diversity, represented by the variance, and overall performance, and that this connection is problem-independent.

Footnote 7: MOON and MAZDA. Footnote 8: CRE family. Footnote 9: All figures for the anytime HV performance are available in Zenodo [https://zenodo.org/XXXX/](https://zenodo.org/XXXX/)

### Objective Space Behavior

Here, we analyse the anytime performance effects in terms of HV values to investigate the impact of the different component variants and auto-MOEA/D in analytical and simulated real-world problems. We start our analysis with DASCMOP1-3 and 9 (Figures 3-6), as auto-MOEA/D has very poor performance in the other problems. We can see that for these problems, increments in HV values are followed by periods without changes in performance for almost all of the variants. We can also see that the variants converge before half of the search, except for the update variant in the DASCMOP1-2 problems. For DASCMOP9, convergence happens a little later, but the HV curves for this problem are overall similar to those of the other problems.

For the CRE problems, all auto-MOEA/D variants converge at around 25000 evaluations, as we can see in Figures 7, 8, 9, 10 and 11. The exception is the CRE21 problem, in Figure 7, where not using restart converged to a lower HV value than the other variants. This is similar to the convergence behavior we found for the DASCMOP problems, although the curves here are more balanced. It seems that DASCMOP1-3 and 9 and the CRE problems have little impact on the ability of the auto-MOEA/D variants to perform well in terms of anytime HV performance.

Figure 3.
DASCMOP1 - HV anytime performance of auto-MOEA/D and its variants. The update strategy and not using restart lead to worse performance.

The same observation cannot be made for the MAZDA and MOON problems. We can see in Figures 12 and 13 that most of the variants have trouble improving the HV over the evaluations and that the best variant is problem-dependent. For both problems, increments in HV values are followed by periods without changes in performance for almost all of the variants. This behavior is similar to that of the DASCMOP problems; however, here the periods without increments are much longer. Interestingly, for the MAZDA problem, the no-restart variant generally has very low performance during almost all of the search, only improving substantially at the very end.

### Decision Behavior Dynamics

Based on the results shown above, we compare auto-MOEA/D and its variants in terms of HV, final population variance, and the STN metrics: the number of nodes, the number of shared nodes, and the number of solutions in the best approximation to the Pareto front. For a more straightforward comparative analysis, we calculate the difference between the results found by the variants and the results found by auto-MOEA/D (Table 3). We show the \(\Delta\)HV, \(\Delta\)nodes, and \(\Delta\)variance 11. For all of these metrics, positive values indicate larger values in relation to the base algorithm, while negative values indicate the opposite. The number of shared nodes and the number of solutions in the Pareto front are the two absolute metrics.

Footnote 11: metric value of auto-MOEA/D minus metric value of the variant.

Figure 4. DASCMOP2 - all variants converge to the maximum HV, with the exception of the update strategy.

Figure 5. DASCMOP3 - all variants converge to the maximum HV before half of the search.

The different metrics for the variants are shown in Table 5.
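The \(\Delta\)-metrics of Table 5, together with the population variance, can be sketched as follows. The mean per-dimension variance is my assumption of how the "final population variance" is computed, and the sign convention follows footnote 11:

```python
# Sketch: per-problem differences between auto-MOEA/D and a variant.
# Sign convention per footnote 11: delta = value(auto-MOEA/D) - value(variant).
def delta_metrics(auto_metrics, variant_metrics):
    return {m: auto_metrics[m] - variant_metrics[m] for m in auto_metrics}

def population_variance(pop):
    """Mean per-dimension variance of a final population of decision vectors
    (one plausible reading of the paper's 'final population variance')."""
    n, d = len(pop), len(pop[0])
    means = [sum(x[i] for x in pop) / n for i in range(d)]
    return sum(sum((x[i] - means[i]) ** 2 for x in pop) / n
               for i in range(d)) / d

deltas = delta_metrics({"hv": 0.90, "nodes": 120, "variance": 0.040},
                       {"hv": 0.85, "nodes": 150, "variance": 0.030})
# Under this convention, deltas["nodes"] is negative when the variant
# visited more decision-space locations than auto-MOEA/D.
```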
Given that we already discussed the HV values above, we focus here on the other metrics shown in this table. For the DASCMOP problems, all auto-MOEA/D variants find solutions in the Pareto front, suggesting that reaching the theoretical Pareto front of these problems is not a challenge. The update and the operators variants generally have a lower number of nodes and a lower final population variance. Interestingly, the aggregation function variant seems to have little effect on most of the metrics, and we highlight that its number of shared nodes is one or two orders of magnitude higher than that of the other variants for DASCMOP3-9. Changing the decomposition method and the population size leads to more differences in the areas of the decision space explored but does not much affect the overall HV performance. The same is true for the no-RA variant. This indicates that changing how MOEA/D works with the population during the search can affect how the decision space is explored but does not correspond to increments in HV performance for these problems.

Figure 6: DASCMOP9 - the aggregation variant converges to high HV values faster than the other variants.

Figure 7: CRE21 - the variants converge to about the same HV value, with the exception of the no-restart.

However, for the simulated and analytical real-world problems, we can observe few similarities among the five problems analyzed. This suggests that, unlike the DASCMOP set, the features of each of the simulated and analytical problems impact auto-MOEA/D differently. For the MOON problem, we can see that the no-restart variant explored fewer areas of the decision space, as shown by the higher difference in the number of nodes. Its number of shared nodes is the lowest among all variants, which suggests that auto-MOEA/D and the no-restart variant visit different areas of the decision space. That said, this variant was still able to find solutions in the approximated Pareto front.
This suggests that the initial population has a high impact on the search exploration and that all methods can follow a path to optimal solutions. We understand that this problem could be seen as a multimodal problem with at least one funnel to optimal solutions; however, more work needs to be done to validate this.

Figure 8: CRE22 - all variants converge to maximum HV at about one third of the search.

Figure 9: CRE23 - the variants converge to sub-optimal HV values at about one third of the search.

Now, moving to the results of the MAZDA problem. For this problem, the variant with the fewest shared nodes (only 5), the highest number of nodes, and the highest final population variance is the decomposition+pop. size variant. This indicates that this variant can visit more areas of the decision space that are not explored by auto-MOEA/D while also having the final population spread over many different areas (given the higher variance). This could mean that choosing the right decomposition method and population size is critical for this problem. To our surprise, this is the only problem where no-restart leads to a noteworthy amount of solutions in the approximation to the Pareto front. Overall, these results show that these problems might have a set of unique characteristics in comparison to the other problems studied here.

Finally, the results of the CRE problems show that, in terms of the number of nodes, the no-restart variant has the biggest differences, exploring the decision space less, and that not using RA increased the number of solutions in the approximation to the Pareto front, with little difference in the other metrics. That is, the aggregation function variant generally has the highest number of shared solutions in the two sets of problems, and the no-restart variant has about the same difference in terms of the number of nodes. This is in agreement with Table 3. Thus, we believe that DASCMOP1-3 and 9 might share similar problem characteristics with the CRE problem set.

Figure 10. CRE31 - similarly to CRE23, the variants converge to sub-optimal HV values at about one third of the search.

Figure 11. CRE32 - better performance than for CRE31, but the variants again converge early to sub-optimal values.

### STNs Extension for Pairs of MOEAs

To create merged STN models of pairs of MOEAs, we first need to create one STN for each algorithm. To create the STN of a single algorithm, we follow a recently proposed methodology (Zhou et al., 2017). As discussed (Section 3), we extend this approach by merging the trajectories of two of these STNs, joining the two STN graphs. This merged STN model contains the nodes and edges present in the STN of at least one algorithm. Attributes are kept for the nodes and edges, indicating whether they were visited by both algorithms (shared) or by one of them only.

Figure 12. MOON - the HV values differ considerably among the variants, with the operators variant achieving the best results and not restarting having a big negative impact.

Figure 13. MAZDA - all variants perform badly, with auto-MOEA/D achieving the highest HV value.

Moving on to the STN visualisations, Figures 14, 15, 16 and 17. We selected visualisations of a representative variant for each problem. Considering the colours used in the STN visualisations, yellow squares indicate the start of trajectories, and black triangles indicate the end of trajectories. The red colour shows the Pareto optimal solutions, and light grey circles show shared locations visited by both algorithms in that MOP. Finally, the trajectories of each algorithm are shown in different colours: purple for auto-MOEA/D and green for the variant.

Figure 14: STN of auto-MOEA/D and different variants on the easy DASCMOP group.
We see a diverse set of behaviors, from interlinked trajectories on the upper side to trajectories that visit distinct regions of the decision space on the bottom side.

Figure 15. STNs of auto-MOEA/D and two variants on the MOON (left) and MAZDA (right) problems. The MAZDA problem has a bigger effect on the trajectories of the variants.

Figure 16. STNs of auto-MOEA/D and two variants on the CRE22 (left) and CRE23 (right) problems. The variants reach the approximation to the Pareto front.

First of all, we comment on the overall differences among the STNs of the DASCMOP problems (Figure 14) and the simulated and analytical real-world problems (Figures 15, 16 and 17). Unlike the anytime HV performance, which showed a similar objective space behavior between the DASCMOP and CRE problems, we can see that the decision space behavior of the DASCMOP STNs is more diverse compared to the behavior shown by the STNs for the real-world problems. This finding demonstrates how essential it is to explore the decision space behavior of algorithms. For the simulated MOON and MAZDA problems in Figure 15, we can see for MOON that the trajectories are similar, with multiple shared nodes. That is, the trajectories of the STNs of auto-MOEA/D and each variant overlap, visiting similar regions in the decision space. We associate this behavior with the number of shared nodes for this problem being high for all variants. Although we only show the STNs of one pair, an identical trend occurs for all pairs of the auto-MOEA/D variants for the MOON problem. We understand this indicates that the features of this simulated problem affect all MOEAs studied here in a similar fashion. The opposite happens for the MAZDA problem, where the trajectories shown in the figure visit unrelated areas of the decision space. For the pair selected, the trajectories do not overlap, and the number of shared nodes is much smaller.
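Returning to the merged-STN construction described at the start of this subsection (union of the two trajectory graphs, with locations tagged as shared or algorithm-specific), a minimal sketch is shown below. Representing the graphs as plain Python sets is a simplification of the actual STN models:

```python
# Sketch: merge the STNs of two algorithms into a single model whose nodes
# are tagged as visited by both trajectories ("shared") or by one only.
def merge_stns(nodes_a, edges_a, nodes_b, edges_b):
    nodes = {}
    for n in nodes_a | nodes_b:
        if n in nodes_a and n in nodes_b:
            nodes[n] = "shared"
        elif n in nodes_a:
            nodes[n] = "algo_a"
        else:
            nodes[n] = "algo_b"
    edges = edges_a | edges_b          # an edge is kept if either STN has it
    return nodes, edges

nodes, edges = merge_stns({"s1", "s2"}, {("s1", "s2")},
                          {"s2", "s3"}, {("s2", "s3")})
# "s2" was visited by both algorithms, so it is tagged as shared.
```

The count of nodes tagged "shared" corresponds to the shared-nodes metric compared across variants in Table 5.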
However, we can see in Table 5 that there is no agreement among the different variants in these metrics, indicating that the problem difficulty has a high impact on the search behavior, with the different variants exploring the problem differently. Moving now to the analytical CRE problems in Figures 16 and 17. We highlight that the trajectories of all variants for these problems overlap each other. Thus, we understand that the auto-MOEA/D variants visit similar regions in the decision space. Although the number of shared nodes is high for most of the variants in these problems, we can see that the variants are affected differently depending on the problem in question, indicating a contrasting set of features among the problems. In terms of best Pareto optimal solutions, we can see that there are many more solutions in the approximation to the Pareto front for the CRE22 problem than for CRE23. We can see in Table 5 that an identical trend occurs for all pairs of the auto-MOEA/D variants for both problems. For the three-objective problems, CRE31 and CRE32, we can see that the number of Pareto optimal solutions is reduced substantially.

Figure 17. STNs of auto-MOEA/D and two variants on the CRE31 (left) and CRE22 (right) problems.

## 8. Conclusion

This work defines a new methodology to investigate the effects of algorithmic components, based on our previous work (Srivastava et al., 2017), that takes into account the decision space dynamics as well as the objective space dynamics of the components. This methodology allows the user to investigate the impacts of the components of multiobjective algorithms on analytical and simulated problems with constraints. We contrasted the behavior of these configurations in terms of how they explore the decision space, by comparing their Search Trajectory Networks (STNs) and their population diversity, and in terms of how they behave in the objective space, by exploring the anytime performance effects in terms of HV values.
This analysis allowed us to identify the most influential components in the different problems we studied here. Interestingly, the results shown in this paper differ from the ones found in our previous work. This is mainly because the training method used in the original paper used a set of constraints that were too hard for MOEA/D to deal with. Since we wanted to improve the overall quality of the tuned MOEA/D on the hard set of problems, we used a set of constraint difficulty triples of the DASCMOP benchmark set. That led to improvements in the performance of the tuned MOEA/D on the problems where it was already performing well, but did not affect its performance on the problems it had difficulties with. Thus, we understand that the results and conclusions are not accidental, but dependent on the particular algorithm configuration selected. We applied this methodology to auto-MOEA/D, a tuned MOEA/D designed by the irace package, and the subsequently derived variants that differ from this machine-designed MOEA by a single component. Our results showed that the most potentially influential variants differ given the set of problems: (1) for the DASCMOP problems, the update variant showed more different behaviors in the objective and decision spaces; (2) the no-restart variant was more affected by the features of the MOON problem, while the decomposition+pop. size variant was the one most affected by the features of the MAZDA problem; and (3) for the CRE family, the variants that caused more changes in the decision space exploration behavior are the aggregation function and the no-restart. However, it is still necessary to establish the generalization of such results and how we can extrapolate them to characterize the behavior of components independently of the scenarios observed. We found that analysing the objective and the decision space simultaneously provides complementary information about how algorithms behave as the search progresses.
In addition, the decision space behavior analysis was able to contribute slightly to the characterization of problems. For example, given how the trajectories of the STNs of auto-MOEA/D and each variant for the MOON problem frequently overlap, this problem could be seen as a multimodal problem with at least one funnel to optimal solutions; but more work needs to be done to validate this. Moreover, this finding demonstrates how essential it is to explore the decision space behavior of algorithms. In summary, this study strengthens the view that characterizing the effects of MOEA/D algorithm components could help in developing even more effective MOEAs. Taken together, these findings suggest a role for this methodology in promoting the study of specific components to develop new and better ones. We understand that our results are of interest to the broad multiobjective evolutionary computation community. One limitation of this methodology is that the search space of possible algorithm configurations is limited by the choices of components and component parameters. However, with a careful selection of those, automatic composition can be a powerful tool to explore this possibility space.
C. Villforth
2023-09-06T18:00:03Z
http://arxiv.org/abs/2309.03276v2
A Complete Catalogue of Merger Fractions in AGN Hosts: No Evidence for an Increase in Detected Merger Fraction with AGN Luminosity

###### Abstract

Despite the importance of Active Galactic Nuclei (AGN) in galaxy evolution, the mechanisms that fuel AGN activity remain poorly understood. Theoretical models suggest that major mergers of galaxies contribute strongly to AGN fuelling, particularly at high AGN luminosities. The connection between mergers and AGN activity has therefore been widely studied, although with contradictory results. Some studies find a strong connection between mergers and AGN, while others find merger fractions in AGN hosts to match those in the inactive galaxy population. To address these apparent contradictions, I present a complete and systematic analysis of detected merger fractions in AGN hosts from the literature. I assess if discrepancies between studies are indicative of systematic uncertainties and biases and analyse the detected merger fraction as a function of luminosity, redshift, and AGN selection method. X-ray selected AGN samples show comparable detected merger fractions across studies and major mergers do not dominate triggering in this AGN population. On the other hand, signatures of significant merger contribution to the AGN population are observed in a small fraction of primarily radio selected and reddened AGN samples. It is unclear if this is due to observational biases or physical differences in the host galaxies. There is no correlation between the detected merger fraction and AGN luminosity. This lack of correlation between detected merger fraction and AGN luminosity, which has previously been reported in the literature, cannot be explained by systematic uncertainties and observational biases.

Accepted for publication in The Open Journal of Astrophysics.

## 1. Introduction

Supermassive black holes are found in the centres of practically all massive galaxies (Kormendy & Ho, 2013).
The majority of supermassive black holes are quiescent; only a small fraction of black holes are observed to be actively accreting gas. These objects are known as Active Galactic Nuclei (AGN). Observations have suggested a close link between the growth of supermassive black holes and the galaxies they reside in. Throughout the history of the Universe, the black hole accretion rate density closely traces the star formation rate density (e.g. Madau & Dickinson, 2014; Aird et al., 2010), and at low redshift, the black hole mass is correlated with the properties of the host galaxy (e.g. Kormendy & Ho, 2013; Gebhardt et al., 2000; Tremaine et al., 2002; Novak et al., 2006). These data suggest a physical co-evolution between supermassive black holes and their host galaxies, although a purely statistical co-evolution is also consistent with observations (Jahnke & Maccio, 2011). A popular theoretical model for black hole-galaxy evolution was suggested initially by Sanders et al. (1988). In this model, black holes and galaxies experience significant growth during major mergers of galaxies. The merger initiates a central starburst; gas is then further funneled toward the central black hole. Accretion onto the black hole starts during an initially obscured phase. As the black hole accretion rate rises, surrounding gas is expelled through AGN feedback, revealing an unobscured AGN. As the feedback clears the surrounding gas, star formation is shut down. This co-evolution model has been popular in the literature and is supported by simulations (e.g. Hopkins & Hernquist, 2009; Di Matteo et al., 2005; Somerville et al., 2008; Alexander & Hickox, 2012, and references therein). Studies of host galaxies of AGN have therefore often focussed on identifying a possible link between black hole growth and major galaxy mergers (e.g.
Bahcall et al., 1997; Canalizo & Stockton, 2001; Veilleux et al., 2009; Kocevski et al., 2012; Schawinski et al., 2010, 2012; Grogin et al., 2005; Villforth et al., 2014; Ellison et al., 2011, 2013, 2015, 2019; Koss et al., 2010; Villforth et al., 2017, 2019; Mechtley et al., 2016; Treister et al., 2012; Glikman et al., 2015; Urrutia et al., 2008; Boehm et al., 2012). Early studies of host galaxies of local luminous AGN with ongoing starbursts found high incidences of merger features (Canalizo & Stockton, 2001). Veilleux et al. (2009a) showed that in the local population of ultra-luminous infrared galaxies (ULIRGs) and AGN, mergers are prevalent in sources showing strong starbursts, but merger fractions are low in the AGN population as a whole. Later studies found low fractions of mergers in AGN hosts (Dunlop et al., 2003), although some studies showed that merger features became prevalent in deeper imaging (Bennert et al., 2008). Further studies targeted a large number of moderate luminosity AGN in deep fields (e.g. Georgakakis et al., 2009; Kocevski et al., 2012; Villforth et al., 2014; Hewlett et al., 2017) or targeted high luminosity or other rare AGN (Urrutia et al., 2008; Chiaberge et al., 2015; Villforth et al., 2017; Mechtley et al., 2016; Marian et al., 2019; Villforth et al., 2019), comparing detected merger fractions in AGN hosts to those in control samples of inactive galaxies. Merger fractions in AGN hosts are mostly found to be consistent with those of matched control galaxies (e.g. Kocevski et al., 2012; Villforth et al., 2014, 2017; Mechtley et al., 2016; Marian et al., 2019), suggesting that mergers are not closely linked to AGN activity. Other studies have found extremely high detected merger fractions in AGN host galaxies, unlikely to be consistent with merger fractions in the general galaxy population (Urrutia et al., 2008; Glikman et al., 2015; Chiaberge et al., 2015). 
These seemingly contradictory results have raised the question of how major mergers are linked to AGN triggering and if differences in triggering mechanisms exist between AGN of different luminosities and physical properties. Some theoretical models have suggested that major mergers become prevalent only at the highest AGN luminosities (Hopkins & Hernquist, 2009; Hopkins et al., 2013) since the fuel mass for low luminosity AGN can easily be supplied by secular processes, whereas fuel masses for high luminosity AGN exceed the rates of inflow possible in dynamically stable galaxies. Some observational work has found an increase of merger signatures with AGN luminosity in compilations of literature data (Treister et al., 2009; Fan et al., 2016; Glikman et al., 2015). However, several studies of high luminosity AGN found their merger fractions to be relatively low and consistent with those of control galaxies, suggesting that mergers are not strongly connected to even the highest luminosity AGN (Mechtley et al., 2016; Villforth et al., 2017; Marian et al., 2019). Theoretical models also suggest a correlation between obscuration and merger incidence (see e.g. Sanders et al., 1988; Di Matteo et al., 2005; Hopkins et al., 2008; Alexander & Hickox, 2012): "young" obscured AGN are predicted to have higher detected merger fractions since they appear closer to the merger, while "old" unobscured AGN appear late and show weaker merger features (e.g. Kocevski et al., 2015; Fan et al., 2016; Glikman et al., 2015). A significant amount of work has been done to date analyzing the incidence of merger features in AGN host galaxies across a wide range of AGN luminosities, redshift, and for a wide range of AGN types. However, no complete and systematic analysis of detected merger fractions exists to date. A joint analysis of literature data is made difficult by several factors. The "detected merger fractions" reported in the literature are not a clearly defined quantity. 
Merger fractions can be measured either quantitatively (e.g. Conselice et al., 2000; Pawlik et al., 2016) or qualitatively through visual inspection (Kartaltepe et al., 2014). These different methods can yield results that show large discrepancies based on the stage of the merger and the merger mass ratio (Lotz et al., 2010a,b; Pawlik et al., 2016). Additionally, the data used can cover a wide range of depth, resolution, and rest wavelength, meaning studies do not have the same sensitivity to merger features. Detected merger fractions therefore carry significant systematic uncertainties, and potential systematic biases between studies need to be addressed. Despite these difficulties, a systematic analysis is needed, first of all, to assess if detected merger fractions as reported in the literature are consistent across similar samples and, secondly, to determine if the wealth of data collected so far shows any evidence of trends with luminosity, redshift or AGN selection methods, as suggested in both theory and previous collections of observational data. In this paper, I will collect all available literature data to create a complete catalogue of merger fractions in AGN host galaxies. I will analyze if differences reported in the literature are likely due to differences in methodology or physical differences in samples. I will study the incidence of merger features in AGN as a function of bolometric luminosity, redshift, as well as selection method. The collection of data from the literature, as well as the derivation of merger fractions and bolometric luminosities, is explained in Section 2. The results are presented in Section 3. I discuss the limitations of this approach and discrepancies across studies in Section 4, followed by conclusions in Section 5. A detailed summary of how the data were extracted for individual studies is given in Appendix A.

## 2. Data

In this paper, I will collect a complete catalogue of detected merger fractions in the AGN population from the literature and analyze the difference between samples as well as trends with redshift, luminosity, and selection methods. The aim is to collect the data as consistently as possible, calculate bolometric luminosities in a consistent fashion, and create a complete catalogue of merger fractions in AGN host galaxies. I collect from the literature all studies that report merger fractions in AGN host galaxies. I do not include studies that analyze the incidence of AGN in mergers (e.g. Ellison et al., 2011, 2013, 2015; Satyapal et al., 2014; Sabater et al., 2015), since the enhancement of AGN activity triggered during mergers cannot be translated into merger fractions in AGN samples, but will discuss those results in Section 4. I consider all papers included in a previous study with the same aim (Treister et al., 2012), and check if the required data is available in the paper. The following papers are considered, listed in alphabetical order: Bahcall et al. (1997); Bennert et al. (2008); Boehm et al. (2012); Cales et al. (2011); Canalizo & Stockton (2001); Chiaberge et al. (2015); Cisternas et al. (2011); Del Moro et al. (2016); Donley et al. (2018); Dunlop et al. (2003); Ellison et al. (2019); Fan et al. (2016); Georgakakis et al. (2009); Glikman et al. (2015); Goulding et al. (2018); Grogin et al. (2005); Hewlett et al. (2017); Hong et al. (2015); Hutchings et al. (1984); Kocevski et al. (2012, 2015); Koss et al. (2010); Kartaltepe et al. (2010); Lanzuisi et al. (2015); Liu et al. (2009); Marian et al. (2019, 2020); Mechtley et al. (2016); Ramos Almeida et al. (2011); Schawinski et al. (2011, 2012); Urrutia et al. (2008); Veilleux et al. (2009b); Villforth et al. (2014, 2017, 2019); Wylezalek et al. (2016); Zakamska et al. (2019). To my knowledge, this includes all studies of mergers in AGN host galaxies.
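For studies with control samples, the catalogue described below records whether the AGN sample shows a statistically significant excess of detected mergers over control. The paper does not specify the test used here; a two-proportion z-test is one standard way to sketch such a check (my choice of test, not necessarily the paper's):

```python
import math

# Sketch: significance of a merger-fraction excess in an AGN sample over a
# matched control sample, via a two-proportion z-test (illustrative only).
def merger_excess_zscore(k_agn, n_agn, k_ctrl, n_ctrl):
    """Positive z-score means an excess of detected mergers in the AGN sample."""
    p_agn, p_ctrl = k_agn / n_agn, k_ctrl / n_ctrl
    p_pool = (k_agn + k_ctrl) / (n_agn + n_ctrl)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_agn + 1 / n_ctrl))
    return (p_agn - p_ctrl) / se

# E.g. 30/100 detected mergers in AGN hosts vs 10/100 in controls:
z = merger_excess_zscore(30, 100, 10, 100)
# z above 3 would meet a 3-sigma excess criterion.
```

Note that, due to the merger-sample contamination discussed in Section 2.1, a formally significant z-score still inherits the systematic uncertainties of the merger detection method.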
A small number of studies were not included due to unavailability of data (these are: Lanzuisi et al., 2015; Kartaltepe et al., 2010; Schawinski et al., 2011, 2012; Ellison et al., 2019; Koss et al., 2010; Goulding et al., 2018)1; see Appendix A for details.

Footnote 1: Kartaltepe et al. (2010), Schawinski et al. (2012) and Goulding et al. (2018) do not report AGN luminosities for the full samples; Koss et al. (2010) and Ellison et al. (2019) do not report luminosities for sub-samples; Schawinski et al. (2011) do not report detected merger fractions.

For all suitable studies, I report the detected merger fraction, bolometric luminosity, redshift, and selection method. If control samples are available, I also give the merger fraction in the control sample and calculate if there is a statistically significant excess of mergers in the AGN sample. Details on the data collection are outlined below.

### Caveats of a literature review of mergers in AGN hosts

Before presenting and analyzing these results, I will discuss the caveats of this approach:

* Meaning of "merger fraction": the term merger fraction refers to the fraction of sources with detected merger features. This should not be confused with the fraction of objects undergoing a merger. I therefore refer to detected merger fractions, rather than merger fractions, throughout the paper. All merger detection methods are subject to both false positives and false negatives. The detection probability for a merger depends on the merger mass ratio, viewing angle, time since merger, the wavelength used for imaging, resolution, as well as the depth of data (Lotz et al., 2008, 2010b,a; Barnes, 1992; Hernquist, 1992). For example, deeper imaging studies will be able to detect fainter merger features, which allows the detection of both lower mass ratio mergers and longer delays after the mergers.
Different merger detection methods also have different sensitivities and specificities (Lotz et al., 2011; Villforth et al., 2014; Pawlik et al., 2016). The reported merger fractions, therefore, do not directly translate to actual merger fractions. Additionally, due to the rarity of mergers, the contamination in merger samples can be high. I will address this issue when comparing results across different studies in Sections 3 and 4.

* Control samples: Mergers occur in the general galaxy population and merger fractions depend on the galaxy mass, mass ratio, and redshift (see e.g. Lotz et al., 2011; Lacey & Cole, 1993; Hopkins et al., 2013). If (actual or detected) merger rates in AGN samples match those of control galaxies, this implies that AGN are not in fact triggered by mergers. An excess in detected merger fractions needs to be present to indicate a connection between mergers and AGN. When available, detected merger fractions in control samples are compared to the detected merger fractions in AGN host galaxies. Due to potentially high contamination in merger samples, a comparison of merger rates between AGN and controls can be biased (Lambrides et al., 2021).

* Redshift evolution: the samples included span a wide range of redshifts. The properties of galaxies change considerably through cosmic time. Merger rates evolve significantly, decreasing at lower redshift (Lotz et al., 2011; Hopkins et al., 2013). Additionally, high redshift galaxies are more gas-rich (e.g. Genzel et al., 2015), although the extent of this increase is still under debate (Narayanan et al., 2012). Since gas-rich mergers show more pronounced merger signatures (Lotz et al., 2010b), this is expected to affect the detected merger fraction. Additionally, gas-rich galaxies can show clumps and other asymmetric features that could be wrongly identified as mergers (Bournaud et al., 2008). Therefore, samples across different redshifts cannot be easily compared due to changes in the underlying galaxy population. I will analyze the detected merger fraction as a function of redshift in Section 3.

Figure 1.— The detected merger fraction of AGN as a function of bolometric AGN luminosity. Colour scale indicates the average redshift of the study. Solid coloured symbols have a control sample; semi-transparent symbols do not. Large squares surrounding a symbol indicate that an excess over control was detected, while downward arrows around the symbol indicate that no excess over control at 3\(\sigma\) was detected. Symbols indicate the selection method: star: red quasar; filled cross: X-ray; filled circle: optical; filled plus: radio; arrow left: post-starburst; arrow right: Type 2.

Figure 2.— The detected merger fraction of AGN as a function of redshift. Colour scale indicates the average bolometric luminosity of the study. Symbols as in Figure 1.

These caveats need to be kept in mind when comparing detected merger fractions across samples and will be discussed in detail in Section 4.

### Merger fractions

Detected merger fractions for all studies are extracted following the general approach below to derive the fraction of AGN currently undergoing major mergers. Specifically:

* The aim is to select gravitationally strongly interacting objects that are near or post-coalescence. Objects with nearby or non-interacting neighbours are therefore not included to avoid contamination by galaxies in dense environments.
* If different merger classifications are available and example images are included, I use those to identify the closest matching category.
* If different merger classifications are available, but no example images are available, I choose the closest matching category from the description in the text.
* If only a single merger fraction is reported, I use this value.
I do not re-analyze datasets or perform visual inspections.

* Any cases that do not match these rules are decided on an individual basis, see Appendix A.

The aim of this approach is to select ongoing or completed major mergers (mass ratios \(\gtrsim 1/4\)). While the above approach does not correct for differences in merger detection, it minimizes bias due to differences in visual classifications. In the remainder of the paper, I will refer to detected major merger fractions, although no clear mass ratio cut-off is made. The caveats when interpreting detected merger fractions are discussed in more detail in Section 2.1. For details on individual studies, see Appendix A.

### Bolometric corrections

Calculating bolometric luminosities requires correcting the luminosity in a given waveband with a bolometric correction, which assumes a specific spectral energy distribution (SED). To limit biases, I therefore aim to apply bolometric corrections based on the same SED throughout. For all studies in the sample, bolometric luminosities are calculated as follows:

* Optical: For optical luminosities or magnitudes, I use the following bolometric correction from Netzer (2013, p. 157, eq. 7.3) \[\mathrm{BC}_{\mathrm{5100\AA}}=53-\log(L_{\mathrm{5100\AA}})\] (1) where \(\mathrm{BC}_{\mathrm{5100\AA}}\) is the bolometric correction and \(L_{\mathrm{5100\AA}}\) is the luminosity (\(\lambda L_{\lambda}\) or \(\nu L_{\nu}\)). Any corrections to the luminosity due to different wavelengths are discussed for the individual cases, but in most cases, the fluxes are close enough in wavelength that the above relation is valid. Whenever possible, I use AGN rather than overall luminosities (i.e. from image decomposition).

* X-ray: For X-ray, I use the following correction to convert X-ray to optical luminosities from Netzer (2013, p. 157, eq. 7.5) \[\log(L_{\mathrm{5100\AA}})=1.4\times\log(L_{X})-16.8\] (2) where \(L_{X}\) is the luminosity at 2-10 keV.
Because, generally, no spectral information is available, no k-correction is applied.

* [OIII]: for \(L_{\mathrm{[OIII]}}\), I use the luminosity dependent bolometric corrections from Lamastra et al. (2009).

* Radio: Radio luminosities have the largest uncertainties in the bolometric corrections due to the large differences in AGN SEDs in this wavelength regime. Therefore, I take the following conservative approach to take this uncertainty into account: first, I use the relation between radio and [OIII] luminosity from Best & Heckman (2012) to estimate the expected range in [OIII] luminosities. I then follow the approach for the [OIII] luminosities (see above). The uncertainties in bolometric luminosities derived from radio luminosities are high; this reflects the uncertainty in the bolometric corrections applied.

* Other: for studies that either give their own bolometric luminosities or do not fall into the above categories, a decision is made on an individual basis. See Appendix A for details.

Therefore, consistent bolometric corrections are applied across the sample. Samples with similar selection methods will have the same bolometric correction applied, further minimizing any potential bias. For details on how bolometric corrections are applied in individual studies, see Appendix A.

### AGN selection method

The selection method for each AGN sample is reported. When the selection method used is unclear, I discuss the adopted selection method in Appendix A. The different selection method categories used here are:

* X-ray: selected in the hard or soft X-rays; bolometric corrections are applied following Section 2.3.
* Optical: selected in the optical to have a dominant blue continuum (i.e. Type 1 AGN); bolometric corrections are applied following Section 2.3.
* Radio: selected in radio bands, with the AGN nature in some cases confirmed in the optical; bolometric corrections are applied following the procedure outlined in Section 2.3.
* Type 2: identified as an AGN using narrow emission line diagnostics; bolometric corrections are applied following the procedure in Section 2.3.
* red AGN: contains samples selected in the IR or showing strong reddening. This category contains AGN selected using IR power-law techniques (e.g. Donley et al., 2018), radio pre-selection (e.g. Urrutia et al., 2008) as well as AGN with SEDs showing signs of strong obscuration (e.g. Villforth et al., 2019; Zakamska et al., 2019). See the Appendix for details and how bolometric corrections are applied.
* Miscellaneous: Additionally, one sample (Cales et al., 2011) does not fit any of the above categories and is listed as post-starburst.

Figure 3.— Same as Figure 1, but showing results separately for different selection methods, specifically, from top left to bottom right: X-ray selected, optically selected, radio selected, red quasars, Type 2 AGN and miscellaneous methods.

### Other general rules for data collection

The following rules highlight other general approaches taken for the collection of data. See Appendix A for details on individual studies.

* Detected merger fractions and uncertainties are calculated following Cameron (2011). Errors reported are 1\(\sigma\) uncertainties. For studies including control samples, merger fractions in the control sample are calculated using the same method. I determine if any sample shows an excess above 1-tailed 3\(\sigma\) (p=0.003) significance. The probability of the detected merger fractions being consistent is calculated by multiplying the respective probability distributions.
* Sub-samples for each study are included separately, when available.
* For both redshift and luminosity, whenever possible, I extract the data from the paper to calculate the mean and standard deviation.
If this is not possible and only general sample properties are given, I give the ranges for those values.
* The telescope and filter used for observations, as well as the rest-frame wavelength of observations, are reported.
* Rejected studies: for all studies, I attempt to collect the relevant information, also taking into account papers referenced for sample selection. If it is not possible to extract all relevant data, I do not include the sample. This is the case for either entire or sub-samples of Kartaltepe et al. (2010); Schawinski et al. (2011, 2012); Ellison et al. (2019); Koss et al. (2010), mostly due to AGN luminosities not being available. Detailed descriptions for those cases are given in Appendix A.

## 3. Results

I present a complete catalogue of detected merger fractions in AGN host galaxies, compiled as described in Section 2. All results can be found in Table 2; detailed comments on how data are extracted from each paper are given in Appendix A. This complete catalogue includes data from 33 papers, with 50 separate samples, covering all reported detected merger fractions in AGN host galaxies since 1984. The redshifts of the samples studied have a mean and standard deviation of \(0.98\pm 0.79\) and a range of \(0.025\leq z\leq 2.0\). The bolometric luminosities of all samples have a mean of \(\log(L_{\rm bol}[{\rm erg/s}])=45.2\pm 1.4\) and a range of \(41.5\leq\log(L_{\rm bol}[{\rm erg/s}])\leq 48.0\). The detected merger fractions span the full range from 0 to 1, with a mean and standard deviation of \(0.34\pm 0.27\). Note that these numbers give the average over samples of AGN, rather than objects. 19 of the 50 samples have a control sample; of those, 6 (32%) show an excess in the detected merger fraction of at least 3\(\sigma\) significance. The number of studies showing an excess above control merger fractions is well in excess of the expected false positive rate.
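The fraction uncertainties quoted here follow Cameron (2011), who recommends quantiles of a Beta distribution over the classical normal approximation for binomial population fractions. A minimal Python sketch of that calculation (the sample counts below are illustrative, not taken from the catalogue):

```python
from scipy.stats import beta

def merger_fraction_ci(k, n, c=0.683):
    """Detected merger fraction k/n with a 1-sigma (68.3%) Bayesian
    interval from the quantiles of Beta(k+1, n-k+1), following
    Cameron (2011)."""
    lo = beta.ppf((1 - c) / 2, k + 1, n - k + 1)
    hi = beta.ppf(1 - (1 - c) / 2, k + 1, n - k + 1)
    return k / n, lo, hi

# Illustrative: 5 detected mergers in a sample of 20 AGN.
p_hat, lo, hi = merger_fraction_ci(5, 20)
```

Unlike the normal approximation, this interval remains well defined for detected merger fractions of exactly 0 or 1, both of which occur among the collected samples.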
Figure 1 shows the detected merger fraction for all collected studies as a function of bolometric luminosity. The majority of AGN samples have low detected merger fractions (\(<\)20%). Once all literature data are taken into account, the detected merger fraction in AGN host galaxies shows no correlation with luminosity, in contradiction to previous work comparing smaller datasets (Treister et al., 2012; Glikman et al., 2015; Fan et al., 2016, see also Table 1 for correlation coefficients). Similarly, there is no clear trend with redshift (Fig. 2). Additionally, there is also no correlation between the merger rates and bolometric luminosity when subdividing the sample by redshift. The lack of correlation with redshift suggests that any redshift evolution in the intrinsic merger fraction (e.g. Hopkins et al., 2013) is compensated either by surface brightness dimming or by changes in the host galaxy population properties. Detected merger fractions span a wide range, indicating either large discrepancies in merger incidence between samples or significant systematic uncertainties in the detected merger fractions. I will now compare results by AGN selection method. AGN samples selected using the same method have similar selection effects. Since the selection wavelength is used to calculate bolometric luminosities (see Section 2.3), systematic uncertainties in bolometric corrections are minimized. Due to the similarities in the spectral energy distribution, AGN selected using the same method will also show similar contrast between AGN and host galaxy. This makes it possible to assess more clearly whether differences between samples reflect large differences in methodology and data or intrinsic differences between the samples. X-ray selection is the most common selection method across samples, with 20 samples in this category. X-ray selected samples are often conducted in deep fields with deep high resolution imaging (e.g.
Boehm et al., 2012; Georgakakis et al., 2009; Kocevski et al., 2012; Villforth et al., 2014, using CANDELS, GOODS and COSMOS data). Despite often having similar luminosity ranges and data sets, these samples show a wide range in detected merger fractions, from \(\sim\)0-40% with a mean of 18%. The scatter in the results is, however, consistent with the uncertainties in individual samples (mean of the standard deviations \(<\sigma>=0.11\), standard deviation of the means \(\sigma_{\mu}=0.12\)).

\begin{table}
\begin{tabular}{l l l l}
\hline
Sample & Data & Correlation Coefficient \(\rho\) & p-value \\
\hline
All & \(f_{\rm m}\) vs \(L_{\rm bol}\) & -0.02 & 0.91 \\
All & \(f_{\rm m}\) vs \(z\) & -0.03 & 0.83 \\
\hline
Radio & \(f_{\rm m}\) vs \(L_{\rm bol}\) & -0.13 & 0.80 \\
X-ray & \(f_{\rm m}\) vs \(L_{\rm bol}\) & -0.44 & 0.05 \\
Optical & \(f_{\rm m}\) vs \(L_{\rm bol}\) & -0.13 & 0.76 \\
Type 2 & \(f_{\rm m}\) vs \(L_{\rm bol}\) & -0.61 & 0.27 \\
Red and IR AGN & \(f_{\rm m}\) vs \(L_{\rm bol}\) & -0.18 & 0.70 \\
All Misc & \(f_{\rm m}\) vs \(L_{\rm bol}\) & 0.22 & 0.47 \\
\hline
\end{tabular}
\end{table}

Table 1.— Pearson correlation coefficients for the detected merger fractions presented in this sample as a function of both bolometric luminosity and redshift.

Due to the wide availability of control samples in deep fields, many X-ray studies have control samples. In all deep field samples, these studies find no excess over control (Grogin et al., 2005; Cisternas et al., 2011; Boehm et al., 2012; Kocevski et al., 2012; Hewlett et al., 2017). Similarly, Villforth et al. (2017) find no excess over control for an X-ray selected sample aimed at extending such studies to higher luminosities. The only X-ray selected sample with an excess of detected merger fractions over control is that by Koss et al. (2010). This is a low redshift (z\(<\)0.05) AGN sample using Sloan Digital Sky Survey (SDSS) imaging. The sample size in Koss et al.
(2010) is smaller than that in other X-ray selected samples with a matched control, indicating that the detection of an excess is not due to better statistics. The detected merger fraction in this sample is 18 times higher than that in the control sample, well in excess of the enhancements of 2-5 found in other work (e.g. Ellison et al., 2019). The Koss et al. (2010) sample therefore does not match other X-ray detected samples. This could be due to the lower redshift, differences in image quality, or physical differences between the samples. Because Koss et al. (2010) use colour SDSS images, the surface brightness limit cannot be easily compared to the Hubble Space Telescope (HST) imaging data used for other X-ray samples. The Koss et al. (2010) sample was selected in the hard X-rays, thereby favouring potentially more heavily obscured AGN. A negative correlation between luminosity and detected merger fraction is detected in the full X-ray selected sample, although it is only marginally significant (p=0.05). This could be due to less favourable contrast at higher luminosities. The detected merger fractions in X-ray selected AGN are therefore broadly consistent across studies (besides the low redshift sample by Koss et al., 2010). The data show that X-ray selected AGN as a whole are not strongly associated with recent major mergers. Nine AGN samples are classed as optically selected; they show a wide range of detected merger fractions from 15-80%, with a mean of 40%, almost double that in the X-ray selected samples. The scatter between studies is consistent with the uncertainties (mean of the standard deviations \(<\sigma>=0.21\), standard deviation of the means \(\sigma_{\mu}=0.19\)). At high redshift (\(z>2\)), Mechtley et al. (2016) and Marian et al. (2019) both show detected merger fractions of \(\sim\)20-40%, although neither find an excess over control samples.
Both the detected merger fraction and the lack of excess over control match X-ray selected AGN at similar luminosities (Villforth et al., 2017). The seven low redshift (\(z\leqslant 0.7\)) optical AGN samples show a range of detected merger fractions of \(\sim\)15-80% (Bahcall et al., 1997; Hong et al., 2015; Hutchings et al., 1984; Dunlop et al., 2003; Veilleux et al., 2009; Bennert et al., 2008; Marian et al., 2020). These detected merger fractions are in excess of the X-ray detected fractions at similar luminosity and redshift. The only low redshift optically selected sample with a control sample shows an excess in the detected merger fraction (Marian et al., 2020). Bennert et al. (2008) found the highest detected merger fraction of \(\sim\)80% in optically selected AGN, likely due to significantly deeper imaging. Veilleux et al. (2009b) study optically selected AGN with a range of FIR excesses and find a detected merger fraction of 57%. The FIR excess implies that some of their sources are likely associated with stronger starbursts, and they find higher detected merger fractions in FIR strong sources. It is therefore unclear if the differences seen reflect high detected merger fractions in some samples or differences in data quality and selection method. Radio detected and red AGN both show a much more significant spread in detected merger fractions. In both categories, the scatter in detected merger fractions seen between studies is well in excess of what is expected from the uncertainties in the detected merger fractions (mean of the standard deviations \(<\sigma>=0.15/0.15\), standard deviation of the means \(\sigma_{\mu}=0.41/0.29\) for radio and red samples respectively). Both samples also show a large proportion of detected merger fractions well in excess of 50%. Starting with the radio-selected AGN, high detected merger fractions are reported in samples by Ramos Almeida et al. (2011) and Chiaberge et al.
(2015), whereas very low detected merger fractions are observed by Dunlop et al. (2003) and Hutchings et al. (1984), despite the samples being of similar redshift. While the samples from Chiaberge et al. (2015) are at moderate to high redshift (\(\sim\)1.5), the other merger excess sample (Ramos Almeida et al., 2011) is at lower redshift (\(0.05<z<0.7\)). The radio samples with low detected merger fractions (Dunlop et al., 2003; Hutchings et al., 1984) are at similarly low redshift (\(0.05<z<0.7\)). All samples span wide luminosity ranges (\(41<\log(L_{\rm bol}[{\rm erg/s}])<45\)); this wide range in luminosity is at least partially due to the fact that bolometric corrections for radio samples carry the largest uncertainties (see Section 2.3). Another difference between radio selected samples is in the image quality: there is a mixture of space-based (Chiaberge et al., 2015; Dunlop et al., 2003) and ground-based data (Hutchings et al., 1984; Ramos Almeida et al., 2011), meaning differences in spatial resolution and likely surface brightness limits. Additionally, the earlier studies (Hutchings et al., 1984; Dunlop et al., 2003) might have required confirmation in the optical (this is not entirely clear from the sample selection sections in those papers), while Chiaberge et al. (2015) and Ramos Almeida et al. (2011) do not. This would mean that the samples by Hutchings et al. (1984) and Dunlop et al. (2003) are in fact more similar to optically selected samples, in which case they would fit within the extremely wide range of detected merger fractions observed there. The differences seen could therefore reflect significant differences in data quality, as well as potentially differences in the selection method and therefore SED. Red AGN also show a very wide range in detected merger fractions, from \(\sim\)10-100%, with more than half in excess of 50%. All red AGN samples have comparatively high luminosity (\(46<\log(L_{\rm bol}[{\rm erg/s}])<48\)).
The scatter in detected merger fractions is in excess of what is expected from the uncertainties in individual studies (mean of the standard deviations \(<\sigma>=0.14\), standard deviation of the means \(\sigma_{\mu}=0.29\)). The sample by Urrutia et al. (2008) was pre-selected in the radio and shows an extremely high detected merger fraction of 85%. On the other hand, the Low Ionization Iron Broad Absorption Line (FeLoBAL) sample by Villforth et al. (2019) in the same luminosity range has a relatively low detected merger fraction of 30%. While the Villforth et al. (2019) FeLoBAL sample has no galaxy control sample, its merger rate, luminosity, and redshift, as well as data quality and methodology, match those of the X-ray selected sample by Villforth et al. (2017), which shows no excess over control. FeLoBALs show connections to red quasar samples; the sample from Urrutia et al. (2008) also shows very high incidences of FeLoBALs. FeLoBALs are also known to show high levels of reddening and obscuration (Dai et al., 2012; Dunn et al., 2015). Similar to the Villforth et al. (2019) FeLoBAL sample, the extremely red quasar sample from Zakamska et al. (2019) shows a low detected merger fraction of 10%. Del Moro et al. (2016) studied a sample of mid-IR luminous AGN, of which 24-48% were found to be Compton thick. The detected merger fraction across the sample is 30%, with a weak trend for a higher disturbed fraction in the more X-ray obscured sources. In summary, while some red and IR selected quasars show extremely high detected merger fractions (Canalizo & Stockton, 2001; Urrutia et al., 2008; Glikman et al., 2015; Fan et al., 2016), this is not universally true for red and IR selected AGN (Villforth et al., 2019; Zakamska et al., 2019). The interesting difference between the high and low detected merger rate samples here is that the samples from Urrutia et al. (2008) and Glikman et al.
(2015) with high detected merger fractions were initially radio-selected, whereas those from Villforth et al. (2019) and Zakamska et al. (2019) are selected from the SDSS Quasar sample, meaning they generally require pre-selection in the optical or X-ray. This pattern of optically selected sources having lower detected merger fractions likely matches that seen in the radio selected samples. The 'red quasar' category shows maybe the most wide-ranging set of selection methods, from IR selection (Canalizo & Stockton, 2001; Del Moro et al., 2016; Fan et al., 2016; Donley et al., 2018) to radio selection combined with extreme levels of obscuration (Urrutia et al., 2008; Glikman et al., 2015) and SDSS samples with high levels of reddening (Villforth et al., 2019; Zakamska et al., 2019). This could explain the wide discrepancy between different red AGN samples. There are six samples of Type 2 AGN. The standard deviation of the means of the detected merger fractions (0.13) is comparable to the standard deviations (0.18). The Type 2 samples span a very wide range of luminosities (\(42<\log(L_{\rm bol}[{\rm erg/s}])<46\)) and redshifts (\(0<z<2\)). Type 2 AGN with a wide range of luminosities at z\(\sim\)1.5 from Chiaberge et al. (2015) show no excess over control. Ellison et al. (2019) find an excess over control at a comparable luminosity (\(\log(L_{\rm bol}[{\rm erg/s}])\sim 43\)), but lower redshift (z=0.1). In this case the difference can be easily explained: the samples from Chiaberge et al. (2015) are small (\(\sim\)10), and excesses are found but are not statistically significant (at \(\sim\)1-2\(\sigma\)), while the sample size of Ellison et al. (2019) is the largest of all studies collected (\(\sim\)1300). The low redshift Type 2 samples of both Liu et al. (2009) and Wylezalek et al. (2016), without a comparison sample, also show detected merger fractions comparable to Ellison et al. (2019).
The results from these studies are broadly consistent with an enhancement of merger fractions in Type 2 AGN in large samples (Ellison et al., 2019), but comparably low fractions of these AGN directly associated with mergers. Finally, there is one post-starburst sample (Cales et al., 2011) that does not match any category clearly; its detected merger fraction is comparable to AGN of comparable luminosity (Liu et al., 2009; Chiaberge et al., 2015). It is therefore clear that while a wide range of detected merger fractions exists in AGN samples, high incidences of merger features are limited to a small set of studies. Looking in more detail at samples with high detected merger fractions (\(\geq 50\%\)), three such samples have control samples and all show an excess over control. All these samples are radio selected and have low to moderate bolometric luminosities (\(\log(L_{\rm bol}[{\rm erg/s}])<45\)): Ramos Almeida et al. (2011) as well as the radio samples from Chiaberge et al. (2015). Samples with high detected merger fractions but no control sample are red quasar samples at high luminosities (\(\log(L_{\rm bol}[{\rm erg/s}])>45\)) (Canalizo & Stockton, 2001; Urrutia et al., 2008; Glikman et al., 2015) as well as some low-redshift optically selected AGN (Bennert et al., 2008; Veilleux et al., 2009b). Samples with very high detected merger fractions are therefore predominantly radio detected, red quasars, or optically selected AGN at low redshift. Samples that show an excess of mergers in the AGN sample but moderate detected merger fractions (\(\sim\)20-50%) are the moderate to low luminosity Type 2 sample from Ellison et al. (2019) and the hard X-ray selected BAT sample by Koss et al. (2010). The majority of samples have detected merger fractions in the 0-30% range; in many cases, these merger rates are consistent with the control samples.
In summary, in X-ray and optically selected samples, the discrepancies between studies seen in the detected merger fractions are consistent with the uncertainties in measurements. For both selection methods, detected merger fractions are consistent with control samples in all but one sample. Larger discrepancies between studies are seen in radio selected and red quasar samples, where detected merger fractions are generally high and in excess of control. For the red quasars, those with radio pre-selection show an excess over control, while optically pre-selected ones do not. Type 2 AGN show a mild excess in merger detection over control, although this only becomes apparent in large samples. Looking at the complete sample of AGN host galaxies, the most notable finding is that there is no correlation of detected merger fraction with luminosity, either in the full sample or when considering only AGN with similar selection methods. As discussed above, there is no indication that systematic biases in the data would mask a correlation with luminosity. This finding strongly contradicts previous studies analyzing detected merger fractions for smaller sets of studies (Treister et al., 2009; Glikman et al., 2015; Fan et al., 2016).

## 4. Discussion

Here, I have analyzed the detected merger fractions as a function of luminosity, redshift, and selection method. No trend is observed between the detected merger fraction and luminosity, in contradiction to previous work analyzing smaller sets of data (Treister et al., 2012; Glikman et al., 2015; Fan et al., 2016). High detected merger fractions are primarily observed in radio selected and red AGN samples, suggesting potential differences in the merger incidence in those sources.
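The statement that detected merger fraction does not correlate with luminosity rests on the Pearson tests in Table 1, which are straightforward to reproduce with `scipy.stats.pearsonr`. A sketch with made-up (log luminosity, detected merger fraction) pairs standing in for the catalogue values (the real numbers are tabulated in Table 2):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-sample values, for illustration only.
log_lbol = np.array([42.0, 43.5, 44.2, 45.1, 46.0, 47.3])
f_merger = np.array([0.30, 0.10, 0.45, 0.20, 0.35, 0.15])

# Pearson correlation coefficient rho and two-sided p-value,
# as reported in Table 1 for each sub-sample.
rho, p_value = pearsonr(log_lbol, f_merger)
```

A non-parametric alternative such as Spearman's rank correlation (`scipy.stats.spearmanr`) would relax the linearity assumption, at the cost of comparability with the values quoted in Table 1.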
In this Section, I will discuss how the detected merger fractions relate to AGN triggering (Section 4.1), whether the combined results are consistent with theoretical models (Section 4.2) and how the differences between samples selected in different ways can be explained (Section 4.3).

### Constraints on merger-triggered AGN

I will start by recapping the calculation of the contribution of mergers to the AGN population from Villforth et al. (2017). One can calculate the fraction of mergers in an AGN sample assuming a duty cycle of AGN during a merger \(d_{\rm merger}\) as well as the fraction of galaxies experiencing a merger \(f_{\rm merger}\). Additionally, we need to consider other potential triggering events such as minor mergers, disk instabilities, or secular processes. The probability of a galaxy currently experiencing such a trigger is given as \(f_{t}\). The duty cycle of AGN in the galaxies experiencing this trigger is given as \(d_{t}\). The fraction of mergers in the AGN sample is then given as:

\[r_{\rm merger,\ AGN}=\frac{d_{\rm merger}f_{\rm merger}}{\sum d_{t}\times f_{t}} \tag{3}\]

where \(r_{\rm merger,\ AGN}\) is the fraction of (actual, rather than detected) mergers in AGN. Similarly, the rate of mergers in the control sample is:

\[r_{\rm merger,\,Control}=\frac{(1-d_{\rm merger})f_{\rm merger}}{f_{\rm no\,trigger}+\sum(1-d_{t})\times f_{t}} \tag{4}\]

where \(f_{\rm no\,trigger}\) is the fraction of galaxies experiencing no trigger. The detected merger fraction further depends on the fraction of mergers and non-mergers correctly identified as such, i.e. the true positive rate as well as the false positive rate (\(TP/FP\)):

\[r_{\rm merger,\,detected}=TP\times r_{\rm merger}+FP\times(1-r_{\rm merger}) \tag{5}\]

The detected merger fraction therefore depends on the fraction of both mergers and non-mergers identified correctly, as well as the intrinsic fraction of mergers. Since mergers are rare, the false positive mergers (second term in Equation 5) can dominate the merger sample, since \(r_{\rm merger}\ll 1\).
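Equations 3 and 5 can be made concrete with a short numerical sketch (all duty cycles and occurrence fractions below are illustrative, not fitted values):

```python
def merger_fraction_agn(d_merger, f_merger, triggers):
    """Eq. 3: fraction of (actual) mergers in the AGN sample.
    `triggers` lists (duty cycle d_t, occurrence fraction f_t) pairs
    for every triggering channel, mergers included."""
    return d_merger * f_merger / sum(d * f for d, f in triggers)

def detected_fraction(r_merger, tp, fp):
    """Eq. 5: detected merger fraction given true/false positive rates."""
    return tp * r_merger + fp * (1 - r_merger)

# Illustrative: mergers are rare (3% of galaxies) but have a higher
# AGN duty cycle (20%) than the secular channel (5% in 97% of galaxies).
triggers = [(0.20, 0.03), (0.05, 0.97)]
r_agn = merger_fraction_agn(0.20, 0.03, triggers)

# With mergers this rare, even a modest false positive rate inflates
# the detected fraction, since the (1 - r_merger) term dominates.
r_detected = detected_fraction(r_agn, tp=0.8, fp=0.1)
```

In this toy case the common secular channel dominates the AGN population despite its lower duty cycle, and the false positive term raises the detected fraction above the intrinsic one, illustrating both points made in the text.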
If the merger rates in the AGN and control sample do not match, the detected excess merger rate can be significantly lower than the actual excess (Lambrides et al., 2021). Most samples with low detected merger fractions are consistent with an AGN duty cycle that is similar in mergers compared to other possible triggers (see Equations 3 and 4). Studies that show an excess of AGN in mergers but low merger rates (Koss et al., 2010; Ellison et al., 2019) indicate that the duty cycles of AGN are enhanced in mergers, but that the AGN population is not dominated by mergers. Unless the duty cycles of AGN in major mergers are extremely high compared to other triggers, the more common triggers will dominate over rare major mergers, see Eq. 3. Samples that show high detected merger fractions in excess of control (Ramos Almeida et al., 2011; Chiaberge et al., 2015) suggest that these samples are indeed dominated by major mergers. Samples with high detected merger fractions, but no control sample (Canalizo & Stockton, 2001; Urrutia et al., 2008; Glikman et al., 2015; Bennert et al., 2008; Veilleux et al., 2006), are consistent with either high merger fractions in the control sample (e.g. due to mass) or a dominance of merger triggering. The possible impact of selection effects will be discussed in Section 4.3. The enhancement of AGN fractions in mergers compared to control, \(r_{\rm merger,\,AGN}/r_{\rm merger,\,Control}\) (Equations 3 and 4), is an additional method to analyze the impact of mergers on AGN activity. Measuring this enhancement shows the duty cycle of AGN in mergers compared to control, rather than the contribution of mergers to the full AGN population. The enhancement factor depends on the merger stage and AGN type, and has typically been found to be \(\sim 2-7\) (Ellison et al., 2011; Goulding et al., 2018) in the early stage of a merger, almost doubling for post-mergers (Ellison et al., 2013).
The enhancement of AGN activity is also found to be increased (enhancement factor 10-20) for IR selected AGN (Satyapal et al., 2014; Ellison et al., 2019), while no merger excess is found in low emission radio galaxies once controls are matched in D4000 (Ellison et al., 2015). One possible explanation of these discrepancies is that since major mergers are rare, the overall detected merger sample will have low purity even if the fraction of mergers and non-mergers identified correctly is high. The differences between merger rates could therefore be underestimated, as discussed by Lambrides et al. (2021). While a highly pure sample limits the impact of this and leads to a significantly improved estimate of the merger excess in AGN, it requires much larger sample sizes and is not feasible for most studies. In the future, machine learning models trained on simulated images will make it possible to calibrate false positive and false negative rates and thereby better constrain the intrinsic differences in the merger rates (Ciprijanovic et al., 2021; Bottrell et al., 2019); without knowing the false positive and negative rates, it is not possible to quantify their effect on the detected merger rates.

### Comparison with theoretical models

The connection between major mergers of galaxies and AGN has been popular in galaxy evolution models for decades: Sanders et al. (1988) suggested an evolutionary model in which AGN appear at late stages of a merger after a ULIRG phase. This was later further supported by simulations (Di Matteo et al., 2005; Hopkins et al., 2008). Following this, many semi-analytical and semi-empirical models included major mergers of galaxies as the main path to black hole growth (Somerville et al., 2008; Kauffmann & Haehnelt, 2000; Croton et al., 2006; Shankar et al., 2013). Fig. 1 clearly disagrees with the picture that the majority of AGN are associated with major mergers.
In many samples, detected merger fractions are low, and the majority of studies with control samples do not show an excess over control. Most studies that show an excess still have relatively low detected merger fractions (\(\leqslant 30\%\)). While other studies have shown that mergers contribute to AGN fuelling, and might be prevalent in some samples, the theoretical model that all AGN are associated with major mergers of galaxies is not consistent with the data. As discussed in Section 4.1, the data so far provide upper limits only on the merger contribution to AGN, and these values carry considerable uncertainties. Other studies have suggested that the influence of major mergers depends on the luminosity. The luminosity is proportional to the accretion rate times the radiative efficiency; since the radiative efficiency depends only on the spin alignment (Netzer, 2013), this means that over the seven orders of magnitude in luminosity studied here, the mass accretion rate also varies by seven orders of magnitude. Hopkins and Hernquist (2009) suggested that there is a clear difference between low and high luminosity AGN, with a cut-off at about \(L_{\rm bol}\sim 10^{45}\) erg/s. From this, one would expect strong differences in the detected merger fractions at this luminosity. This is not observed (see Fig. 1). Such a correlation is also not observed in more heterogeneous sub-samples (see Fig. 3). Hopkins et al. (2013) analyzed the relative contribution of merger-induced and stochastic fuelling; again, they find that up to redshifts \(z\sim 2\), which covers all studies analyzed here, merger-induced fuelling dominates above \(L_{\rm bol}\sim 10^{46}\) erg/s (meaning the space density of merger-fuelled AGN is about an order of magnitude higher than that of stochastically fuelled AGN). Again, the data collected here do not support the picture of a transition to AGN fuelling by mergers above \(L_{\rm bol}\sim 10^{45-46}\) erg/s. Steinborn et al. 
(2018) studied the growth of supermassive black holes and find a mild increase in the major merger fraction at high luminosities. However, the merger fraction in their study is found to depend on the galaxy mass. Therefore, the increase in detected merger fraction is due to host mass rather than luminosity, highlighting the importance of a control sample. Since galaxy masses are not available across the sample, we cannot directly compare to Steinborn et al. (2018). Other theoretical models emphasize the importance of other AGN fuelling mechanisms, such as disk instabilities (e.g. Bournaud et al., 2011; Gabor & Bournaud, 2013), bars (Shlosman et al., 1989) or accretion from a hot halo (Bower et al., 2017; McAlpine et al., 2017). Disk instabilities, at least at high redshift, could be identified as mergers in visual classification or morphological analysis; however, this will strongly depend on the classification used, so it cannot be analysed using this compilation. The collected data clearly support that while mergers are responsible for some AGN activity, alternative fuelling mechanisms play a substantial role in the AGN population.

### Is there a single AGN population?

While a detailed treatment of these selection effects is beyond the scope of this paper, Fig. 3 clearly shows that different selection methods yield different detected merger fractions. Such a difference could indicate that different AGN triggering mechanisms result in different observed AGN properties, with radio-loud and heavily reddened AGN preferentially found in major mergers. This would agree with other work that finds a higher incidence of mergers in IR AGN (Goulding et al., 2018; Ellison et al., 2019), a higher incidence of IR compared to X-ray AGN in mergers (Secrest et al., 2020), as well as an increased level of obscuration in merging galaxies (e.g. Ricci et al., 2017). 
The higher detected merger fractions could, however, also be explained by selection effects, specifically: a) differences in observed images leading to differences in the detectability of merger features, and b) the effect of galaxy type on AGN classifications. The detectability of merger features depends on the contrast between the central point source and the galaxy (see e.g. Villforth et al., 2019). This means that optically faint AGN might have higher detected merger fractions simply because of the lower contamination from the point source. This could explain some of the excesses seen in radio galaxies (e.g. Ramos Almeida et al., 2011) as well as red quasars (e.g. Urrutia et al., 2008): such AGN have weaker central emission in the optical/IR wavebands in which host galaxy morphologies are studied. Based on this, one would naively expect a drop in detected merger fraction with luminosity (seen for example in the samples from Georgakakis et al., 2009). While unfavourable contrast could cause merger features to be washed out in optically luminous sources, there is no strong evidence for this in the data. Additionally, simulations have shown that this is not the cause of differences between high luminosity X-ray and red AGN (Villforth et al., 2019). While this may contribute to the higher detected merger fractions in red and radio AGN, more detailed work would be needed to determine the effect of point sources on merger detectability. Another selection effect is the effect of the host galaxy on the AGN type. Host-galaxy-scale dust can lead to stronger obscuration (e.g. Glikman et al., 2015; Urrutia et al., 2008). Since galaxies with higher dust content are also more likely to be involved in major mergers (Kartaltepe et al., 2014), the merger environment could affect the AGN SED, biasing AGN types in dusty merging galaxies. 
Such a selection effect could explain the fact that only radio, red and IR selected sources are found to have very high detected merger fractions. However, the data collected here do not allow a test of this scenario. While selection effects cannot be ruled out as a cause for the high detected merger fractions in radio and reddened AGN, the data are consistent with an evolutionary model in which AGN obscuration declines with age after a major merger, with young AGN most closely associated with mergers (Sanders et al., 1988; Di Matteo et al., 2005; Hopkins et al., 2008; Alexander & Hickox, 2012). Red quasars as well as FeLoBAL quasars have been suggested to be such early or transition objects. Indeed, several red quasar samples show high detected merger fractions (Urrutia et al., 2008; Canalizo & Stockton, 2001; Donley et al., 2018; Fan et al., 2016). The candidate transition population sample of FeLoBALs (Villforth et al., 2019) as well as another red quasar sample (Zakamska et al., 2019), on the other hand, show low detected merger fractions (see Fig. 3). While they are not traditionally seen as transition objects, radio-selected AGN also have high detected merger fractions and could potentially match the proposed population of "young" AGN. Red quasars (and potentially radio-selected AGN) could therefore constitute a transition population with higher merger fractions, consistent with previous work showing red AGN to be associated with mergers (e.g. Ellison et al., 2019; Goulding et al., 2018; Secrest et al., 2020; Kocevski et al., 2015; Donley et al., 2018) and contradicting work that shows pure radio AGN to have no association with mergers (Ellison et al., 2015). However, as discussed above, selection effects and host galaxy obscuration could also contribute to the differences seen. To distinguish between the selection effects outlined above and a transition population, AGN that are in the adult stages of current "young" red AGN would need to be identified. 
These "old" AGN would need to match in galaxy masses, space densities and have weaker merger features consistent with a later merger stage. Such a comparison could also be used to distinguish between the "transition" and "population" scenario discussed above. ## 5. Conclusions Here, I have presented a complete catalogue of detected merger fractions in AGN host galaxies from the literature. This catalogue contains data from 33 different studies and 50 samples. For each sample, I report consistently calculated bolometric luminosities, redshifts, detected merger fractions with consistent uncertainties as well as the AGN selection method. The AGN span a wide range in redshift (\(0.025\leq z\leq 2.0\)) and luminosity (\(41.5\leq\log(L_{\rm bol}[{\rm erg/s}])\leq 48.0\)). Detected merger fractions range from 0-100%. 19 of the 50 samples have a control sample, of those 6 (32%) show an excess in the merger fraction at \(>\)3\(\sigma\) significance. This is not an unbiased sample of AGN. The findings can be summarized as follows: * In contradiction to previous work combining a smaller set of data, there is no correlation between the detected merger fraction and luminosity (Fig. 1). This clearly contradicts theoretical models that state that major mergers dominate the AGN population at high luminosities. The lack of correlation does not appear to be due to systematic errors in the measurement of detected merger fractions. The detected merger fractions does not correlate with redshift (Fig. 2). * Detected merger fractions in X-ray selected AGN are consistent with the uncertainties in measurements. In all but one sample, no excess in detected merger fractions over control is detected. This indicates that the detected merger fractions in X-ray selected AGN are consistent and not strongly affected by systematic uncertainties. The contribution of major mergers to the X-ray selected AGN population is small. 
* The range in detected merger fractions for optically selected AGN is wider than for X-ray selected AGN. An excess over control merger rates is detected in one sample of low redshift, high Eddington ratio AGN. The differences between samples could be due to differences in data quality or differences in the spectral energy distribution, with AGN with stronger IR excess showing higher detected merger fractions.
* Type 2 AGN show low detected merger rates, with excess over control seen only for very large sample sizes, indicating that while major mergers contribute to triggering in these AGN, this triggering mechanism is not dominant.
* High detected merger fractions (\(\geqslant 60\%\)) are found in radio selected AGN, red quasars, as well as low-redshift optical AGN with strong IR emission. Radio selected and red AGN show larger scatter in the detected merger fraction than expected from statistical uncertainties. This indicates either intrinsic differences in the host galaxy properties for these classes of AGN or systematic uncertainties in the detected merger fractions. In both radio-selected and red quasar samples, objects also detected in the optical show lower detected merger fractions. This can be explained either by differences in the detectability of merger features or by optically undetected red and radio AGN being preferentially triggered by mergers.

Despite the limitations of interpreting detected merger fractions, the data are in clear contradiction with a model in which the majority of AGN are associated with major mergers, as well as a picture in which major mergers become dominant at high luminosities. Triggering by major mergers clearly contributes to some AGN samples, but is not found to be dominant across the population. There is tentative evidence for higher detected merger fractions in optically undetected radio and red AGN; however, observational biases cannot be ruled out. 
Future studies will have to quantify the systematic uncertainties in detected merger fractions outlined here. Machine learning algorithms trained on simulated data (e.g. Cibinel et al., 2015; Bottrell et al., 2019; Koppula et al., 2021) may be able to overcome the limitations hampering current analysis of merger features in AGN.
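The purity problem raised in Section 4 can be made concrete with a short back-of-the-envelope calculation. The sketch below uses hypothetical rates (not from any cited study) to show that even a classifier that labels 90% of both mergers and non-mergers correctly yields a detected "merger" sample that is only 50% pure when true mergers are rare.

```python
# Illustrative arithmetic (hypothetical rates, not from any cited study):
# an accurate classifier still yields a low-purity merger sample when
# true mergers are rare.
def detected_fraction(f_true, tpr, fpr):
    """Detected merger fraction given the true merger fraction and the
    classifier's true/false positive rates."""
    return f_true * tpr + (1.0 - f_true) * fpr

def purity(f_true, tpr, fpr):
    """Fraction of the detected 'merger' sample that are real mergers."""
    return f_true * tpr / detected_fraction(f_true, tpr, fpr)

# 10% true mergers; 90% of mergers and of non-mergers classified correctly:
f_det = detected_fraction(0.1, tpr=0.9, fpr=0.1)
p = purity(0.1, tpr=0.9, fpr=0.1)
print(f"detected fraction = {f_det:.2f}, purity = {p:.2f}")
# detected fraction = 0.18, purity = 0.50
```

Half of the detected "mergers" in this toy case are contaminating non-mergers, which both inflates the detected merger fraction and dilutes any real difference between AGN and control samples.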
2309.13065
Personality Profiling: How informative are social media profiles in predicting personal information?
Personality profiling has been utilised by companies for targeted advertising, political campaigns and vaccine campaigns. However, the accuracy and versatility of such models still remain relatively unknown. Consequently, we aim to explore the extent to which people's online digital footprints can be used to profile their Myers-Briggs personality type. We analyse and compare the results of four models: logistic regression, naive Bayes, support vector machines (SVMs) and random forests. We discover that an SVM model achieves the best accuracy of 20.95% for predicting someone's complete personality type. However, logistic regression models perform only marginally worse and are significantly faster to train and perform predictions. We discover that many labelled datasets present substantial class imbalances of personal characteristics on social media, including our own. As a result, we highlight the need for attentive consideration when reporting model performance on these datasets and compare a number of methods for fixing the class-imbalance problems. Moreover, we develop a statistical framework for assessing the importance of different sets of features in our models. We discover some features to be more informative than others in the Intuitive/Sensory (p = 0.032) and Thinking/Feeling (p = 0.019) models. While we apply these methods to Myers-Briggs personality profiling, they could be more generally used for any labelling of individuals on social media.
Joshua Watt, Jonathan Tuke, Lewis Mitchell
2023-09-15T03:09:43Z
http://arxiv.org/abs/2309.13065v1
# Personality Profiling: How informative are social media profiles in predicting personal information?

###### Abstract

Personality profiling has been utilised by companies for targeted advertising, political campaigns and vaccine campaigns. However, the accuracy and versatility of such models still remain relatively unknown. Consequently, we aim to explore the extent to which people's online digital footprints can be used to profile their Myers-Briggs personality type. We analyse and compare the results of four models: logistic regression, naive Bayes, support vector machines (SVMs) and random forests. We discover that an SVM model achieves the best accuracy of 20.95% for predicting someone's complete personality type. However, logistic regression models perform only marginally worse and are significantly faster to train and perform predictions. We discover that many labelled datasets present substantial class imbalances of personal characteristics on social media, including our own. As a result, we highlight the need for attentive consideration when reporting model performance on these datasets and compare a number of methods for fixing the class-imbalance problems. Moreover, we develop a statistical framework for assessing the importance of different sets of features in our models. We discover some features to be more informative than others in the Intuitive/Sensory (\(p=0.032\)) and Thinking/Feeling (\(p=0.019\)) models. While we apply these methods to Myers-Briggs personality profiling, they could be more generally used for any labelling of individuals on social media.

personality profiling, targeted advertising, machine learning, social media, statistical framework, digital footprints, data science, natural language processing

## I Introduction

In 2023 there are thousands of social media applications and over 4.59 billion social media users worldwide, constituting approximately 60% of the world's population [1]. 
While this enables most of the world to be connected, it also creates an environment of mass data, defining what we refer to as the information environment. The huge amount of individual-level data provided by each user is an important aspect of social media which is unique to this type of information environment. Consequently, it is crucial for scholars to understand how this aspect of social media may impact society. There exists a need to quantify the extent to which social media can be weaponized by governments and other organisations for influence. Every time a user enters a social media application, they leave a unique data trace: information they have posted, liked, shared, commented on, even how long they have spent viewing different material on the application. We refer to this unique trace of data as a user's online digital footprint. It has been suggested that someone's online digital footprint can expose actionable information about them, including their personality profile, relationship status, political opinions and even their propensity to adopt a particular opinion or behavior [2, 3, 4, 5, 6, 7]. Cambridge Analytica was suggested to have used online digital footprints to impact the result of the 2016 US election and the 2016 Brexit referendum [2]. However, the extent to which companies like Cambridge Analytica can determine this information from social media data is still questioned [3, 4, 5]. As a result, it is of interest for individuals to understand the extent of information that is attainable from their online digital footprint. This is also of key concern for governments, who seek to maintain democracies and the ethical use of such data, which can be abused by understanding personal data. We seek to determine how informative online digital footprints are in predicting Myers-Briggs personality types. This is a theoretical model comprising four traits/dichotomies, based on Jungian theory [8, 9]. 
Modelling personal information about individuals using their online information has previously enabled researchers to understand the accuracy of such models. We extend this work by creating a new labelled dataset of Myers-Briggs personality types on Twitter and a statistical modelling framework which can be generally applied to any labelled characteristic of online accounts. We aim to reconsider the personality profiling and political microtargeting performed by companies like Cambridge Analytica. First, we collect a labelled dataset of accounts with self-reported Myers-Briggs personality types. We then collect a number of different features for these accounts, including social metadata features and linguistic features: LIWC [10]; VADER [11]; BERT [12]; and Botometer [13]. We then create independent logistic regression (LR), naive Bayes (NB), support vector machine (SVM) and random forest (RF) models on each dichotomy to model the Myers-Briggs personality type of the accounts. As part of this, we consider four different weighting/sampling techniques to adjust for class imbalances. Lastly, we provide a statistical framework for analysing the importance of different features in these models. We consider the importance of features at an individual level and across groups of features for each dichotomy. Our main contributions are:

* A labelled dataset1 of 68,958 Twitter users along with their Myers-Briggs personality types, the largest available dataset (to our knowledge) of labelled Myers-Briggs personality types on Twitter [14].

Footnote 1: Dataset available at [https://figshare.com/articles/dataset/Self-Reported_Myers-Briggs_Personality_Types_on_Twitter/23620554](https://figshare.com/articles/dataset/Self-Reported_Myers-Briggs_Personality_Types_on_Twitter/23620554). 
* A statistical framework to combine NLP tools and mathematical models to predict online users' personality types, which can be more broadly used to model any labelled characteristics about online accounts.
* A comparison of machine learning models on NLP features, and a comparison of various weighting/sampling techniques to address problems with class imbalance.
* Statistical methods which compare the importance of different features in NLP-based models at an individual level and across groups of features.

## II Background

Myers-Briggs [8] is the most well-known personality model, being applied in hiring processes, social dynamics, education and relationships [15, 16, 17]. The Myers-Briggs Type Indicator (MBTI) handbook illustrates a four-factor model of personality where people form their 'personality type' by attaining one attribute from each of four dichotomies: Extrovert/Introvert, Intuitive/Sensory, Thinking/Feeling and Judging/Perceiving. This gives 16 different personality types, where a letter from each dichotomy is taken to produce a four-letter acronym, e.g., 'ENTJ' or 'ISFP'. The model has received substantial scrutiny, particularly from psychologists who question its validity and reliability [18, 19]. Nonetheless, we utilise the Myers-Briggs model in our analysis for the following reasons:

* Thousands of Twitter users self-report their MBTI on Twitter. This enables us to obtain a labelled dataset through appropriately querying for each of the 4-letter personality type acronyms that are unique to MBTI.
* The Myers-Briggs model has the largest number of self-reports on Twitter, enabling us to achieve the largest labelled personality dataset on Twitter.
* We aim to develop a framework for modelling personality profiles from social media data using statistical machine learning (ML) approaches. 
MBTI is a test case for our framework, which can be applied to other personality models (or other labellings/characteristics of individuals on social media) more generally. Open-source labelled training data with Myers-Briggs personality types has not existed until recently. Plank and Hovy [20] modelled the MBTI of Twitter users through attaining a small dataset of 1,500 users, and Gjurkovic and Snajder [21] modelled the MBTI on a larger corpus of Reddit users. In 2017, Jolly [22] posted a labelled MBTI dataset on Kaggle, constituting the only known publicly available labelled dataset used for modelling the MBTI of social media users. The dataset comprised 8,675 users, their personality types and a section of their last 50 posts on an online forum called personalitycafe.com. This small online forum contains 153,000 members dedicated to discussing health, behavior, personality types and personality testing. The discussions are therefore quite different to those on other social media platforms, and likely from a different demographic. Hence, this dataset is likely not generalisable to other platforms like Twitter and Facebook. It is also relatively small and imbalanced, limiting which models can be utilised on various feature sets. Class imbalance is considerable in all cases, and in one particular dataset some classes are up to 28 times larger than their counterpart. Nevertheless, many papers apply machine learning models to such datasets without accounting for these class imbalances [4, 23, 24, 25, 3]. Consequently, the metrics reported often misrepresent model performance, and instead highlight the severity of class imbalances in the datasets.

## III Data Collection & Preprocessing

We discovered a number of Twitter accounts that self-report their MBTI acronym on Twitter, which can be matched with a regular expression. We therefore formulated two methods for querying and labelling the Myers-Briggs personality type of accounts. 
Let \(\Omega\) define the set of 16 acronyms for Myers-Briggs personality types, then:

* Query: \(\{\texttt{x}:\texttt{x}\in\Omega\}\). We obtained the set of users who currently self-report their personality type in their user-name or biography.
* Query tweets: we obtained the set of users who self-report their personality type in a tweet. Certain forms of communication were excluded from this query, due to a number of users often not self-reporting their own MBTI when referencing MBTI acronyms in these forms of communication.

Note that in both cases, the queries were not case-sensitive. The resulting dataset comprised 68,958 users with their labelled MBTI; the dataset and more details on its collection are provided in [14]. In total, we collected 15,986 accounts by querying usernames and biographies, and 52,972 accounts from querying tweets, with misclassification rates of 1.9% and 3.4% based on random samples of 1,000 accounts from each. Next, we obtained account characteristics for each user, including their biography, most recent 100 tweets/quotes, as well as a set of Social Metadata (SM) features. The user's biography and the 100 tweets/quotes were used to generate a set of linguistic features, whereas SM features (Table I) are directly used as numeric features in the models. We removed duplicate users, then combined the biography and tweets into a combined text for every account. We then:

1. Normalised the text and calculated each account's dominant language.
2. Removed non-English language using the Compact Language Detect 2 (PyCLD2) library.
3. Calculated (language-dependent) Botometer scores2.
4. Converted text to lowercase, removed URLs, email addresses, punctuation and numbers.
5. Tokenized using the Tweet Tokenizer from the Natural Language Toolkit (NLTK) [26].
6. Removed empty tokens and any instances of the 16 MBTI acronyms.

Footnote 2: Further discussion: [https://rapidapi.com/OSoMe/api/botometer-pro/details](https://rapidapi.com/OSoMe/api/botometer-pro/details) 
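Steps 4-6 above can be sketched in a few lines. The paper uses NLTK's Tweet Tokenizer; the minimal sketch below substitutes a plain whitespace split after regex cleaning, so it only approximates that tokenizer, and the example input is hypothetical.

```python
import re

# The 16 MBTI acronyms (the set Omega above), filtered out in step 6.
MBTI_ACRONYMS = {
    "intj", "intp", "entj", "entp", "infj", "infp", "enfj", "enfp",
    "istj", "isfj", "estj", "esfj", "istp", "isfp", "estp", "esfp",
}

def clean_and_tokenize(text: str) -> list[str]:
    """Lowercase, strip URLs/emails/punctuation/numbers, tokenize, and
    drop empty tokens and MBTI acronyms (approximating steps 4-6)."""
    text = text.lower()
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # URLs
    text = re.sub(r"\S+@\S+\.\S+", " ", text)           # email addresses
    text = re.sub(r"[^a-z\s]", " ", text)               # punctuation and numbers
    tokens = text.split()                               # simple whitespace tokenizer
    return [t for t in tokens if t not in MBTI_ACRONYMS]

tokens = clean_and_tokenize(
    "Proud INFJ! Visit https://example.com or mail me@site.com :) 42"
)
print(tokens)  # ['proud', 'visit', 'or', 'mail']
```

The self-reported acronym is removed precisely because it is the label being predicted; leaving it in the text would leak the target into the features.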
Next, we formulated an inclusion-exclusion criteria to determine whether a personality could be profiled from a Twitter account: we kept accounts with over 100 tweets/quotes, over 50% English language, Botometer CAP score less than 0.8, and strictly one MBTI type referenced. We use the Botometer CAP score because we are interested in the overall bot likelihood and not the sub-category bot likelihoods. Unfortunately, there is no consistency in the literature on thresholds for binary bot classification. Rather, authors define their threshold based on a false positive rate in the context of their problem. For instance, Wojcik et al. [27] use a threshold of \(0.43\) for their political analysis of the twittersphere, whereas Keller and Klinger [28] use a larger threshold of \(0.76\) for their analysis of social bots in election campaigns. To avoid large numbers of false positive bot classifications, we chose a high threshold of \(0.8\). Finally, we extracted the LIWC, BERT and VADER features from the text. The data cleaning techniques above were performed only for LIWC feature extraction, whereas the BERT and VADER features can be extracted directly from the raw text output. Thus, we calculated the LIWC features on the combined text by micro-averaging the tokens present in each LIWC category for every user. Next, we calculated the BERT features on the raw Twitter output using BERTweet [29], a pre-trained language model for English Tweets. First, we averaged the embeddings for the tokens to form a single embedding vector for each tweet/quote, then averaged the embedding vectors for the tweets/quotes to create a single 768-dimensional embedding vector for each user. We calculated the VADER features (sentiment, proportion of positive words and proportion of negative words) on the raw Twitter output for each user and include scores for both a user's biography and their tweets. 
We distinguish these because of contextual differences in the language; biographies often discuss oneself and tweets often discuss one's environment. We then have a total of 866 features; these are provided in Table I. A noticeable imbalance in the Intuitive/Sensory dichotomy exists across all datasets in Figure 1. There are also observable imbalances in the Extrovert/Introvert and Thinking/Feeling dichotomies, whereas the Judging/Perceiving dichotomy is more balanced across each dataset than the other dichotomies. The imbalances in our dataset are mostly consistent with those from www.personalitycafe.com. The higher proportion of introverts in our dataset is consistent with [32], who find that introverts tend to use social media as a primary form of communication, whereas extroverts tend to prefer communicating in person. The larger proportion of intuitives in our dataset is consistent with Schaubhut et al. [30], who discovered that more Intuitive individuals (13%) reported being active users of Twitter than individuals with a preference for Sensing (8%). The imbalance in the Thinking/Feeling dichotomy in our dataset is opposite to what we observe in the Twitter dataset. However, Schaubhut et al. [30] found that people displaying the Feeling trait are more likely to spend their personal time browsing, interacting and sharing information on Facebook. Provided the same is true for Twitter users, our inclusion-exclusion condition requiring users to be active on Twitter (i.e. tweet/quote at least 100 times) may bias our dataset, leading to more users exerting the Feeling trait.

Fig. 1: Proportion of accounts displaying each dichotomous trait in our dataset, on Twitter and in the general population.

\begin{table} \begin{tabular}{p{42.7pt}|p{341.4pt}} Category & Features \\ \hline SM & followers\_count, friends\_count, listed\_count, favourites\_count, geo\_enabled, verified, statuses\_count, default\_profile, default\_profile\_image, profile\_use\_background\_image, has\_extended\_profile \\ \hline Botometer & cap\_english, english\_astroturf, english\_fake\_follower, english\_financial, english\_other, english\_self\_declared, english\_spammer \\ \hline LIWC & function, pronoun, ppron, i, we, you, shehe, they, ipron, article, prep, auxverb, adverb, conj, negate, verb, adj, compare, interrog, number, quant, affect, posemo, negemo, anx, anger, sad, social, family, friend, female, male, cogproc, insight, cause, discrep, tentat, certain, differ, percept, see, hear, feel, bio, body, health, sexual, ingest, drives, affiliation, achieve, power, reward, risk, focuspast, focuspresent, focusfuture, relativ, motion, space, time, work, leisure, home, money, relig, death, informal, swear, netspeak, assent, nonflu, filler, total\_word\_count \\ \hline BERT & \(\{e_{i}:i=1,\ldots,768\}\) \\ \hline VADER & tweets\_sentiment, bio\_sentiment, tweets\_pos\_words, bio\_pos\_words, tweets\_neg\_words, bio\_neg\_words \\ \end{tabular} \end{table} TABLE I: Features in our models, separated by category.

Some authors don't assume independence between the dichotomies when modelling [23, 3], whereas most choose to model the dichotomies independently [33, 34, 35, 24, 25]. We take a data-driven approach, determining the dependency structure of the four MBTI dichotomies in our dataset using the bias-corrected version of the Cramer's V statistic [36] (Table II). The Cramer's V statistic is small in every case, implying that the four Myers-Briggs dichotomies are independent in our dataset, and so we model them independently. We performed a Principal Component Analysis (PCA) on the features to discover if we could significantly reduce the dimension of the feature space and the multicollinearity between the features. 
The first principal component explains 25.1% of the variance in the data and the first 200 principal components explain 95.4% of the variance in the data. As a result, we utilise the first 200 PCA components in our machine learning models, significantly reducing both the dimension of the feature space and the multicollinearity of the features.

## V Model Comparison

We train LR, NB, SVM and RF classifiers on each of the four dichotomies in our dataset, using 10-fold cross validation. The class imbalances we observe for some dichotomies (particularly Intuitive/Sensory and Extrovert/Introvert) lead us to perform four different weighting/sampling techniques prior to model fitting:

* Weight the importance of classifying dichotomies,
* Upsample the minority class (with replacement),
* Perform the Synthetic Minority Oversampling Technique (SMOTE) on the minority class,
* Downsample the majority class.

Each model uses the first 200 principal components of the features in Table I as predictors. As an example, Figure 2 shows confusion matrices for the Intuitive/Sensory dichotomy under the standard LR model and the upsampled LR model. This shows that the standard LR model primarily predicts the majority class, indicating that it exploits the class imbalance to make predictions on the test sets. In comparison, the upsampled model predicts significantly more of the minority class on the test sets, resulting in more accurate predictions for the minority class. We observe similar behavior for all other models, highlighting the importance of weighting/sampling techniques to ameliorate the effect of class imbalance for prediction. However, we observe a clear trade-off between accurately predicting the majority and minority classes, with an overall reduction in accuracy due to weighting/sampling techniques. We therefore report both accuracy and Area Under the Curve (AUC) metrics for each of our models in Table III. 
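The second technique in the list above, upsampling the minority class with replacement, can be sketched in plain Python. This is a minimal illustration with a toy dataset, not the paper's actual pipeline, which would apply this within each cross-validation training fold.

```python
import random
from collections import Counter

def upsample_minority(X, y, seed=0):
    """Resample the minority class with replacement until the two
    classes are balanced (the 'upsample' technique above)."""
    counts = Counter(y)
    (majority, n_maj), (minority, n_min) = counts.most_common(2)
    minority_idx = [i for i, label in enumerate(y) if label == minority]
    rng = random.Random(seed)
    extra = [rng.choice(minority_idx) for _ in range(n_maj - n_min)]
    X_bal = X + [X[i] for i in extra]
    y_bal = y + [y[i] for i in extra]
    return X_bal, y_bal

# Toy imbalanced dichotomy: 6 Intuitive ('N') vs 2 Sensory ('S') users.
X = [[0.1], [0.2], [0.3], [0.4], [0.5], [0.6], [0.7], [0.8]]
y = ["N", "N", "N", "N", "N", "N", "S", "S"]
X_bal, y_bal = upsample_minority(X, y)
print(Counter(y_bal))  # Counter({'N': 6, 'S': 6})
```

Because minority rows are duplicated, the classifier's loss weights the two classes equally during fitting; the same effect can be obtained without duplication via class weights, which is the first technique in the list.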
We report four types of accuracy depending on the number of accurately predicted dichotomies in each model. Of course, accuracy can be a misleading metric when assessing a model's performance on unbalanced data, so for comparison we report the accuracies for a random classifier and a majority class classifier. Moreover, we use an approach similar to other authors to report two types of AUC for each model [37, 38]: we macro-average and micro-average the true positive rate and false positive rate at each threshold of the ROC curve for the independent models of each dichotomy. This provides us with two ROC curves (and AUC metrics) for each model. The micro-averaged AUC aggregates the contributions of all samples in each model and weights individual predictions equally, so it is generally less sensitive to class imbalances. Table III compares the accuracies and AUCs of the best performing models from each method. In each case, we include the 'Standard' model and the weighted/sampling model which achieves the highest sum of micro- and macro-averaged AUC. Table III highlights the relatively small improvement in accuracy achieved by each model in comparison to the majority class classifier. It is clear that our standard SVM model is the best performing model on average. However, this model is only 5.64% more accurate at predicting a user's complete personality type compared to the majority class classifier. This is a reasonable and statistically significant improvement, but we remark based on the above discussion that the standard models are simply exploiting the class imbalances in our \begin{table} \begin{tabular}{c|c c c c} & E/I & N/S & T/F & J/P \\ \hline E/I & 1.00 & 0.03 & 0.00 & 0.10 \\ N/S & 0.03 & 1.00 & 0.02 & 0.08 \\ T/F & 0.00 & 0.02 & 1.00 & 0.11 \\ J/P & 0.10 & 0.08 & 0.11 & 1.00 \\ \end{tabular} \end{table} TABLE II: Pairwise results of the bias-corrected Cramer’s V Statistic between the MBTI dichotomies for our dataset. 
dataset. Moreover, we achieve accuracies very similar to those obtained by Plank and Hovy [20], who produced the only other Twitter dataset of labelled MBTIs (to our knowledge). In particular, we achieve better accuracies for the T/F and J/P dichotomies, and only marginally worse accuracies for the E/I and N/S dichotomies, further evidencing that our models perform similarly to others in the literature. Interestingly, the standard LR model most accurately predicts at least three out of four user dichotomies and is only marginally worse than the standard SVM model for all other metrics.
\begin{table} \begin{tabular}{l c c c c c c} & \multicolumn{4}{c}{Accurately Predicted Dichotomies} & \multicolumn{2}{c}{AUCs} \\ \cline{2-7} Model & 4 & \(\geq 3\) & \(\geq 2\) & \(\geq 1\) & Macro & Micro \\ \hline Standard LR & 20.82 & **60.43** & 89.35 & 98.82 & 0.6688 & 0.6547 \\ SMOTE LR & 13.89 & 48.63 & 82.51 & 97.65 & 0.6642 & **0.6620** \\ \hline Standard NB & 14.20 & 49.17 & 81.91 & 97.40 & 0.5784 & 0.5867 \\ Upsampled NB & 13.75 & 48.06 & 80.82 & 97.18 & 0.5861 & 0.5917 \\ \hline Standard SVM & **20.95** & 60.25 & **89.64** & **98.90** & **0.6693** & 0.6518 \\ SMOTE SVM & 13.56 & 48.61 & 82.54 & 97.61 & 0.6606 & 0.6554 \\ \hline Standard RF & 19.69 & 57.96 & 88.69 & 98.67 & 0.6223 & 0.6273 \\ Upsampled RF & 19.70 & 58.16 & 88.48 & 98.76 & 0.6305 & 0.6264 \\ \hline Random Classifier & 6.250 & 31.25 & 68.75 & 93.75 & 0.5000 & 0.5000 \\ Majority Class & 15.31 & 54.54 & 87.20 & 98.28 & 0.5000 & 0.5000 \\ \end{tabular} \end{table} TABLE III: Accuracies and AUCs for the best performing models for each ML method. We include results from the ‘Standard’ model (with no weighting/sampling) and the best performing weighted/sampling model. Note that we determine the ‘best performing weighted/sampling model’ based on the sum of macro- and micro-averaged AUC. Fig. 2: Confusion matrices for modelling the N/S dichotomy.
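The accuracy columns of Table III and the two AUC averages can be computed as sketched below. This is our minimal interpretation of the metrics described above, assuming per-user binary labels and prediction scores from the four independent dichotomy models; the function names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dichotomy_accuracies(y_true, y_pred):
    """Fraction of users with at least k of the 4 dichotomies predicted
    correctly (the four accuracy columns of Table III).
    y_true, y_pred: (n_users, 4) binary arrays, one column per dichotomy."""
    correct = (y_true == y_pred).sum(axis=1)   # correct dichotomies per user
    return {k: float(np.mean(correct >= k)) for k in (4, 3, 2, 1)}

def micro_macro_auc(y_true, y_score):
    """Macro-averaged AUC averages the per-dichotomy AUCs; micro-averaged
    AUC pools every (label, score) pair across the four models."""
    macro = float(np.mean([roc_auc_score(y_true[:, j], y_score[:, j])
                           for j in range(y_true.shape[1])]))
    micro = float(roc_auc_score(y_true.ravel(), y_score.ravel()))
    return macro, micro
```

Under this formulation a uniformly random classifier predicts all four dichotomies correctly with probability \(1/2^4 = 6.25\%\), matching the Random Classifier row of Table III.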
The LR model is also significantly faster to train than the SVMs, making it the model of choice on larger datasets. The AUC is important in discussions of model performance, especially for unbalanced datasets, because it equally weights the true positive rate and false positive rate, making it more robust to class imbalance than accuracy. Most of our AUCs lie around 0.65 (65%), apart from the NB classifiers. In particular, the best performance for the macro-averaged and micro-averaged AUCs is achieved by the standard SVM model and the SMOTE LR model, respectively. These AUCs are significantly larger than what we observe for both the random classifier and the majority class classifier, indicating there is certainly some 'signal' in our features. We therefore perform an in-depth analysis of feature importance next.
## VI Feature Importance
We fit independent upsampled LR models on each of the four MBTI dichotomies because they performed well on our dataset (macro- and micro-averaged AUCs: 0.6676 and 0.6536). We choose an LR model because it is fast to train, straightforward to interpret, and easy to perform feature selection on. Moreover, we use an upsampled model because it does not involve creating 'synthetic' data in the same way that SMOTE does, which is important for determining feature importance. We consider the variable importance of the descriptive features in our models; these include all features except BERT. For each dichotomy we fit the upsampled LR model and perform a stepwise feature selection to obtain a model with only significant features. In each case, we start with a null model and run the stepwise selection algorithm on the \(p\)-values with a threshold-in of \(0.05\) and a threshold-out of \(0.1\). We determine the variable importance of features using the \(t\)-statistic of the parameter coefficient associated with each feature.
For each dichotomy, we calculate the variable importance of each feature remaining after the stepwise selection algorithm is complete and display the absolute value of the variable importance. Figure 3 displays the 12 most important features for each model in a bar chart. We colour the bars based on the variable's preference for each class in the dichotomy. Pennebaker and Francis [39] suggested that function words such as pronouns (pronoun), personal pronouns (ppron), 1st person singular (i), 1st person plural (we), prepositions (prep), auxiliary verbs (auxverb) and negations (negate) can describe people. Figure 3 shows the function words that are significant predictors in our models, e.g., 1st person plurals are significant in the E/I model and prepositions are significant in the N/S model. This reinforces the importance of function words, and suggests that techniques such as stop-word removal may remove useful information, particularly for tasks like personality prediction. Extroverts tend to be associated with more positive language, and introverts tend to have more of a focus on the past. Similarly, Chen et al. [40] suggested that extroverts display more positive emotion because they have a "dispositional tendency to experience positive emotions". Accounts which have a larger favourites count (i.e. the account likes more tweets) tend to be more intuitive, whereas accounts which write more statuses tend to be more sensory. Interpreting favourites as a proxy for the amount of information an account consumes, our results suggest that intuitives consume more information on Twitter, whereas sensors write more. This proxy is of course not perfect, because people may consume information without liking it.
Nonetheless, it is consistent with Myers-Briggs Foundation definitions, which state that intuitives pay "most attention to impressions or the meaning and patterns of the information", whereas sensors pay "attention to physical reality, what I see, hear, touch, taste, and smell" [41]. The strongest predictor for the J/P dichotomy (Figure 3d) is time; judgers are more likely to use words related to time and certainty compared to perceivers. 'End', 'until' and 'season' are examples of time-related words, and 'always' and 'never' are words related to certainty. This is also consistent with the Myers-Briggs Foundation, which states judgers "prefer a planned or orderly way of life, like to have things settled and organized" [41]. Next we explore how emoji usage relates to a Twitter user's MBTI. On Twitter, emojis often have multiple meanings. For instance, the rainbow flag can indicate support for LGBT+ social movements, the wave can symbolise a "Resister" crowd of anti-Trump Twitter, and the okay symbol can be used by white supremacists, some of whom covertly use the symbol to indicate their support for white nationalism [42]. Hence, emojis can indicate how these groups/movements interact with different personality types. We determine each emoji's frequency in a user's tweets and include these frequencies as predictors in upsampled LR models. Performing the same stepwise feature selection algorithm as above, we display the 12 most important predictors from the remaining models in Figure 4. The rocket ship emoji is one of the top 12 most important predictors across all models. An increase in this emoji's usage implies a higher likelihood of an account being introverted, intuitive, feelings-orientated and perceiving.
Fig. 3: Variable Importance Plots for an upsampled LR model for each dichotomy. Variables are sorted by the absolute value of the variable importance (from top to bottom). We colour bars by the feature preference for each class.
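The emoji-frequency predictors described above can be extracted with a short sketch. Treating every character in Unicode category "So" (symbol, other) as an emoji is our own rough heuristic, not necessarily the filter used to build the dataset:

```python
import unicodedata
from collections import Counter

def emoji_counts(tweets):
    """Count emoji occurrences across a user's tweets, using Unicode
    category 'So' as a rough emoji filter."""
    counts = Counter()
    for tweet in tweets:
        for ch in tweet:
            if unicodedata.category(ch) == "So":
                counts[ch] += 1
    return counts
```

Each user's counter can then be turned into a per-emoji frequency vector and appended to the predictor matrix.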
The rocket ship emoji has been used by finance enthusiasts to denote a fast increase in a particular stock or cryptocurrency. Hence, it is possible that we are observing crypto enthusiasts to be more introverted, intuitive, feelings-orientated and perceiving. However, this emoji has other meanings, such as an actual rocket ship, so we created word clouds of tweets containing the rocket emoji (Figure 5(a)), as well as the red heart emoji (Figure 5(b)). The rocket ship generally appears in crypto-related tweets discussing 'projects', 'great opportunities', 'developments' and 'cryptos'. However, it also appears in tweets discussing the 'moon' and 'space'. The red heart emoji mainly appears in emotive tweets discussing 'love' and 'happiness'. A number of the emojis making an account more introverted are sad/upset emojis, whereas no sad/upset emojis make an account more extroverted. This further confirms Figure 3(a), which suggested that extroverts prefer to display positive emotion online. Next we consider the importance of different feature groups (including the BERT features) and discuss whether some groups of features are more informative than others in our models. Again, we do this by fitting an upsampled logistic regression model to all of the features and performing the stepwise feature selection on each of the models, using the same thresholds for accepting and removing features. We then use the number of features in each feature group remaining after stepwise selection to measure the importance of the different feature groups. For each model, Table IV displays both the number of predictors (in each feature group) and the proportion that remain after the stepwise feature selection algorithm. This proportion can be considered a measure of the importance of each feature group which is not biased by the number of features in each group.
We introduce a more robust statistical framework to determine whether different groups of features are actually more informative about our data. We do this by performing a Chi-Squared Test on the number of features retained and excluded from each model. We test the null hypothesis that each feature group is equally informative (per feature) and include the \(p\)-values from the Chi-Squared Tests in the sub-table captions displayed in Table IV. The number of features selected depends on the type of model. For instance, 243 features are selected in the N/S model, whereas only 124 features are selected in the J/P model. Interestingly, the N/S model is also the most accurate and the J/P model the least accurate, implying a positive relationship between accuracy and the number of features retained. This is consistent with the remark that more features are retained in a model when they are more informative about the data. Moreover, the SM features are on average the most retained across models. Conversely, the Botometer features have the worst payoff across the four models, having the smallest proportion retained on average. The most interesting comparison is between the LIWC and BERT features, which both aim to describe linguistic properties of users. In each model, the BERT features are more highly retained. However, only the results from the N/S model and the T/F model are significant at the 5% level. We therefore reject the null hypothesis that each feature group is equally informative (per feature) for the N/S and T/F models. However, the Chi-Squared Test alone does not tell us which feature groups perform significantly better.
Fig. 4: Variable Importance Plots based on only emoji counts in the upsampled LR models. Variables are sorted by the absolute value of the variable importance (from top to bottom). We colour bars by the feature preference for each class.
Fig. 5: Word clouds of tweets/quotes containing specific emojis in our dataset.
Larger words appeared more frequently in the tweets; we present results for the rocket ship emoji (left) and the red heart emoji (right). Note that we remove stopwords as they do not provide much context for the tweets. We therefore compute individual confidence intervals (CIs) for the binomial proportions of accepting/rejecting features in each group using the Wilson score interval [43]. The CIs for each feature group and model are displayed in Figure 6. For the Intuitive/Sensory model, the 95% CI for the SM features lies completely above the 95% CIs for the LIWC and BERT features. This indicates that the SM features are more informative (per feature) than the LIWC and BERT features at the 5% level for this dichotomy, highlighting that attributes of a user's account are sometimes more important than the language they use when modelling personality. This statement is also validated by the results for the Thinking/Feeling model, where the 95% CIs for the SM and VADER features lie completely above the 95% CI for the BERT features. We deduce that we are likely observing these results because the textual features are all fairly correlated with each other. Moreover, there is no evidence to suggest that the BERT features are any more informative than the LIWC features in determining someone's Myers-Briggs personality type.
## VII Conclusion
This paper contributes a labelled dataset of personality types from Twitter and a framework to model the personality types of these users. To our knowledge, this is the largest available Twitter dataset of labelled Myers-Briggs personality types; the only comparable dataset [20] contains just 1,500 labelled accounts. The data collection techniques we used are also novel, as they avoid the long, cumbersome questionnaires used in other research. Moreover, we develop a statistical framework which combines NLP tools and mathematical models to model/predict the personality type of users online.
While we applied this framework to personality types, it can model any labelled characteristic of online accounts - political opinions, psychological properties, or even someone's propensity to adopt an opinion or viewpoint. As part of this framework, we analyse and compare a number of different machine learning models. Since personality types in our dataset are unbalanced, we compare different weighting/sampling techniques to deal with issues arising from class imbalance. We discover that class imbalances are common in these types of datasets and are often overlooked by many scholars. Because of this, we demonstrate why models on these datasets appear more accurate than they really are, and why your digital footprint may be less informative of your personality type than you may think. Finally, we compare the importance of different features in our models on an individual and group level. As we use a large number of features from a large dataset, a deep learning model would be applicable; however, this would give less interpretability than the models used here. It would also be interesting to consider different data collection methods. One limitation of our dataset is that we only have access to the classification of the four personality dimensions, when in reality these dimensions are represented on a numerical scale. For instance, two users may both be extroverted, but one may be considerably more extroverted than the other. While questionnaires are long and expensive to administer, they would enable us to obtain these personality dimensions on a numerical scale. We would expect this to significantly improve the performance of our models.
## Acknowledgment
LM acknowledges support from the Australian Government through the Australian Research Council's Discovery Projects funding scheme (project DP210103700).
\begin{table} \end{table} TABLE IV: Number of features and proportion of features retained in each group after stepwise feature selection. For each dichotomy, we perform Chi-Squared Tests on the null hypothesis that each feature group is equally informative, per feature (see \(p\)-values). Fig. 6: 95% Wilson Score Binomial confidence intervals (CIs) for the proportion of retained features in each feature group. We display the CIs for each model and use the Wilson Score version to correct for having zero successes in some cases.
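The group-level analysis behind Table IV and Figure 6 can be sketched as follows. The retained/excluded counts below are illustrative placeholders, not the values reported in the paper; only the procedure (omnibus Chi-Squared Test, then per-group Wilson score CIs) follows the text.

```python
import numpy as np
from math import sqrt
from scipy.stats import chi2_contingency

# Illustrative retained/excluded feature counts per group after
# stepwise selection (NOT the values from Table IV).
groups   = ["SM", "Botometer", "LIWC", "BERT", "VADER"]
retained = np.array([8, 1, 18, 210, 3])
excluded = np.array([5, 7, 55, 558, 3])

# Omnibus test of H0: retention probability is the same for all groups,
# i.e. every feature group is equally informative per feature.
chi2, p, dof, _ = chi2_contingency(np.stack([retained, excluded]))

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion; unlike the
    normal approximation it remains valid even when successes = 0."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Per-group CIs for the proportion of retained features (cf. Figure 6).
for g, r, e in zip(groups, retained, excluded):
    lo, hi = wilson_ci(r, r + e)
    print(f"{g:>10}: [{lo:.3f}, {hi:.3f}]")
```

Two groups differ significantly (per feature) when their Wilson intervals do not overlap, which is the comparison made for Figure 6.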
2309.08323
MLP Based Continuous Gait Recognition of a Powered Ankle Prosthesis with Serial Elastic Actuator
Powered ankle prostheses effectively assist people with lower limb amputation to perform daily activities. High performance prostheses with adjustable compliance and capability to predict and implement amputee's intent are crucial for them to be comparable to or better than a real limb. However, current designs fail to provide simple yet effective compliance of the joint with full potential of modification, and lack accurate gait prediction method in real time. This paper proposes an innovative design of powered ankle prosthesis with serial elastic actuator (SEA), and puts forward a MLP based gait recognition method that can accurately and continuously predict more gait parameters for motion sensing and control. The prosthesis mimics biological joint with similar weight, torque, and power which can assist walking of up to 4 m/s. A new design of planar torsional spring is proposed for the SEA, which has better stiffness, endurance, and potential of modification than current designs. The gait recognition system simultaneously generates locomotive speed, gait phase, ankle angle and angular velocity only utilizing signals of single IMU, holding advantage in continuity, adaptability for speed range, accuracy, and capability of multi-functions.
Yanze Li, Feixing Chen, Jingqi Cao, Ruoqi Zhao, Xuan Yang, Xingbang Yang, Yubo Fan
2023-09-15T11:25:48Z
http://arxiv.org/abs/2309.08323v2
# MLP Based Continuous Gait Recognition of a Powered Ankle Prosthesis with Serial Elastic Actuator
###### Abstract
Powered ankle prostheses effectively assist people with lower limb amputation to perform daily activities. High performance prostheses with adjustable compliance and capability to predict and implement amputees' intent are crucial for them to be comparable to or better than a real limb. However, current designs fail to provide simple yet effective compliance of the joint with full potential of modification, and lack accurate gait prediction method in real time. This paper proposes an innovative design of powered ankle prosthesis with serial elastic actuator (SEA), and puts forward a MLP based gait recognition method that can accurately and continuously predict more gait parameters for motion sensing and control. The prosthesis mimics biological joint with similar weight, torque, and power which can assist walking of up to 4 m/s. A new design of planar torsional spring is proposed for the SEA, which has better stiffness, endurance, and potential of modification than current designs. The gait recognition system simultaneously generates locomotive speed, gait phase, ankle angle and angular velocity only utilizing signals of single IMU, holding advantage in continuity, adaptability for speed range, accuracy, and capability of multi-functions.
## I Introduction
Nowadays, there are over 600,000 lower limb amputees in the United States, with an estimated 100,000 people undergoing amputation of a lower limb each year, especially below the knee [1]. These subjects often suffer greatly from physical and mental pain due to low mobility, which severely prevents them from recovering and rejoining society. Research in robotic prosthetics aims to assist amputees by developing powered prostheses which mimic a healthy ankle-foot and replicate the non-pathological gait.
Previous analysis has shown that a functional ankle supplies roughly 40% of the power during daily activities, more than any other lower limb joint [2], and that ankle moment contributes linearly to propulsion [3]. This puts forward a challenging demand on the replacement of power generation at the ankle by prostheses [4], which requires exact mimicking of kinetic features and torque outputs. With the purpose of reducing the metabolic cost of amputees, lower limb prosthetics evolved from rudimentary passive feet [5] to modern bionic feet with certain features for stability and propulsion [6]. The propulsive devices usually contain pneumatic [7] or hydraulic actuators [8], or motorized drivers connected to a transmission system [9], whose actuators can be either rigid, e.g. quasi-direct drive (QDD) actuators [10], or elastic actuators (SEA [11][12], including parallel elastic actuators (PEA) [13] and other variable stiffness elastic actuators [14]). Theoretically, these technologies add an intrinsic impedance to the mathematical model of the system via either algorithms or hardware structures, introducing stiffness, damping, and inertia features [15]. Most algorithms cannot reduce the inherent high inertia and high friction of the actuators [10] and add extra complexity to the system, with complicated adjustment via reinforcement learning or transfer learning [16],[17]. QDD relies on a proprioceptive sensor to measure the position and anticipate the torques for precise control, which increases the complexity of the system and the electrical design. Comparatively, SEA possesses higher torque density and lower heat thanks to a much lower current regime than QDD [18], and also overcomes the backdrivability limitation [19],[20]. Current SEAs utilize planar torsional springs to realize a linearly alterable stiffness through different layers of stacking [21].
Drawbacks of current spring structures include excessive mass and weight, less stability under radial or axial forces, inappropriate stiffness and rigidity for prostheses, short endurance, and low potential of modification [22][23]. Gait recognition and analysis is also crucial for lower-limb prostheses. A common method to estimate the gait phase is to use a finite state machine and segment the gait cycle into several discrete events, where a specific assistive effect is actuated and transitions through the state machine are triggered by various sensors [24][25]. Since each state is represented discretely instead of continuously, transitions between states are not seamless, and distinctive assistance controllers must be designed for each state. Another widely used method incorporates a phase variable approach with characteristic parameters measured by sensors [26]. For example, a single IMU mounted on the shank can compute the gait phase and velocity [27]. This phase variable approach is more adaptable than other methods since it maps input configurations into gait phase directly, but it still requires a rhythmic motion of the subject and struggles to accommodate abrupt changes in walking speed [28]. For better understanding of gait dynamics, machine learning (ML) has been proved feasible, with numerous works optimizing the robustness of their systems via ML. Neural networks have been widely used in deep learning tasks, where they show outstanding capability to learn difficult and complicated tasks [27][29]. In addition, well-trained neural networks can calculate in real time once embedded into the control board and implemented as a series of matrix operations, which makes them suitable for assistance across a larger speed range and for continuous gait recognition, scarcely seen in prosthetic robotics before.
This paper proposes a new design of powered ankle prosthesis inspired by the OSL project [21][30], with an innovative design of planar torsional spring utilized in the SEA, and a continuous gait recognition and control method using a neural network with a multi-layered perceptron (MLP) configuration. Finite element analyses (FEA) are conducted to discover the relationship of stiffness and endurance with the mechanical characteristics, and to verify the spring's merits in stiffness, endurance, and potential of modification. In addition, individual gait data are gathered via a motion capture system, and the hypothesis that a single sensor suffices to estimate locomotive speed and gait phase, and hence generate real-time output of kinetic configurations, is verified via simulation and on the proposed prosthesis. The results show an advantage in accuracy and a widely adaptive capability across the speed range (0-4 m/s). This study establishes the connection between single-sensor input and joint outputs via machine learning, potentially to be adopted in other wearable robotics.
## II Mechatronic Design
### _Mechanical Design_
The proposed prosthesis (Fig. 1) combines a motorized driver with a belt-driven transmission system in the S3M standard, which consists of two pulleys connected respectively to the motor and the SEA, and a tensioning wheel mounted on two eccentric bases attached to the outer housing, providing multiple choices of tension. Two pieces of outer housing collectively fix the inner transmission system while holding the motor in position with a supportive arch. Standard connectors for prosthetics are provided on both extremes of the prototype, allowing attachment to artificial feet and sockets. Two half-open chambers are placed respectively in the front and rear for storage of the battery and control board, with an inner trench that connects the two chambers for wiring.
Detachable plastic lids that fit the housings cover the chambers, ensuring protection during operation as well as convenience for maintenance. Detailed specifications are available in Table I.
### _Electronic Design and Control System_
The prototype is powered by a LiPo battery and uses a DC motor (Cubemars AK10-9, China) integrated with a planetary gear transmission with a 9:1 speed-reduction ratio. The onboard sensors include one inertial measurement unit (IMU, Yahboom CMP10A, China) and a 3-axis loadcell sensor (HUILZHI LZ-SWF58, China) that measures forces at the top end of the prosthesis. The encoder integrated inside the motor can retrieve parameters including, but not limited to, position, rotational speed, temperature, and current. A traditional hierarchical control architecture containing three levels [31] is utilized in the controller design. A STM32 board (Robomaster A, China) is responsible for the high- and middle-level control, which collects data from sensors and recognizes the amputee's intended movements. A driver chip integrated in the motor manages the low-level control, generating an appropriate control law based on the recognized intent and producing the desired output command. Three AD conversion chips connect the loadcell and the main control board, realizing the retrieval of force data. All communications are through the USART protocol.
### _Design of the Planar Torsional Spring_
The planar torsional spring (Fig. 1d) used in the SEA needs to be both elastic and endurable [36], which requires a linear elasticity for easier control and a lower von Mises stress (VMS) for longer endurance. In addition, higher modifiability is preferred, meaning that a range of stiffnesses can be achieved by adjusting certain parameters. Based on the joint torque simulation [11], a total stiffness of approximately 20 Nm/\({}^{\circ}\) is desired, which is realized via axial stacking of springs (Fig. 1c).
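Since axially stacked springs share the same angular deflection, their torques add, so their stiffnesses combine in parallel. A small illustrative check (the per-layer stiffness here is an assumption for illustration, not a value from the paper):

```python
import math

# Stacked planar torsional springs act mechanically in parallel:
# k_total = n * k_single, because each layer sees the same deflection.
k_single = 5.0   # Nm/deg, ASSUMED stiffness of one spring layer
k_target = 20.0  # Nm/deg, desired total SEA stiffness from the paper

n = math.ceil(k_target / k_single)
print(n, n * k_single)  # 4 layers give 20.0 Nm/deg
```

This parallel-addition property is what makes the stiffness "linearly alterable through different layers of stacking", as noted in the introduction.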
By virtue of its larger yield strength and longer fatigue life, aged maraging steel 300 is selected, which excels in fatigue behavior [37].
## III Gait Recognition and Control Method
### _Gait Modelling and Motion Capture_
Musculoskeletal models of the lower limbs (Fig. 2b) at various speeds are established in the OpenSim software (Version 4.4, Stanford) [38][40], from which kinetic information of the tibia can be extracted. A three-dimensional, 23-degree-of-freedom model of the human musculoskeletal system, namely the Gait 2392 model [41], is used for gait analysis. The walking data, including speeds of 2 m/s, 3 m/s, 4 m/s, and 5 m/s, are from previous work of the OpenSim team [42]. The models broadly reflect the average gait of humans and are imported into the neural network for rudimentary training. Gait data of the subject for the prosthesis field tests were acquired by a motion capture system and added to better suit individual gait habits. The Vicon MX system (Vicon Motion Systems Ltd, Oxford, UK) is utilized in the research, with 10 cameras tracking the positions of 18 reflective markers attached to the subject's anatomical reference points at 100 Hz (Fig. 2a), and images are processed in the Nexus software (version 2.6). The lower-body plug-in gait model was used to record the kinetics of the subject, which are then exported and processed by OpenSim [43]. Gait data at speeds of 0.5 m/s, 0.8 m/s, 1.25 m/s, 1.5 m/s, 1.8 m/s, 2 m/s, 2.5 m/s, 2.8 m/s, 3 m/s, 3.3 m/s, 3.5 m/s, and 4.5 m/s were gathered on a running machine (Fig. 2a).
### _Neural Network_
The inputs of the neural network are the angles (\(\theta,\varphi,\psi\)) and angular velocities (\(\dot{\theta},\dot{\varphi},\dot{\psi}\)) respectively in the sagittal plane, coronal plane, and transverse plane (Fig. 3a). Since the inputs of the neural network are six uncorrelated variables from the IMU, configurations like convolutional neural networks (CNN) that emphasize spatial relevance in neighboring dimensions are unsuitable.
The intermediate variables are gait phase (\(P\)) and locomotive velocity (\(\mathcal{V}\)), while the outputs are ankle angle (\(\alpha\)) and angular velocity (\(\dot{\alpha}\)). The whole data set underwent 5-fold cross validation to obtain a stable result, where one group is chosen as the test set and the others as the train set in each round. Additionally, 30% of the train set is randomly separated to form the validation set.
\begin{table} \begin{tabular}{l l l} \hline Metric & Prosthesis & Biological ankle \\ \hline Inner reduction ratio & 4.61 (83:18) & \\ Motion Range & \(\pm\)50\({}^{\circ}\) & -50\({}^{\circ}\) (plantar flexion) to 20\({}^{\circ}\) (dorsiflexion) [32] \\ Mass (kg) & 2.9 & 0.048\(\times\)bodyweight [33] \\ Bus Voltage (V) & 24 & \\ Battery Capacity (mAh) & 3000 & \\ Continuous Torque (Nm) & 82.8 & \\ Peak Torque (Nm) & 220.8 & 200 [34] \\ Peak Rotational Speed (rad/s) & 5.65 & 3.5 [35] \\ Motor Torque Constant (Nm/A) & 0.16 & \\ \hline \end{tabular} \end{table} TABLE I: Design specifications of the prosthesis.
The proposed MLP configuration has 27 layers, consisting of an input layer, 12 fully-connected hidden layers, each followed by a rectified linear unit (ReLU) layer, and 2 output layers: one producing the middle outputs, the other the final outputs. Detailed specifications are shown in Table II. For each hidden layer and its corresponding activation layer, the calculation can be presented as follows: \[\begin{cases}a_{k}^{n\times 1}=W_{k}^{n\times m}x_{k-1}^{m\times 1}+b_{k}^{n\times 1}\\ y_{k}^{n\times 1}=\sigma_{relu}(a_{k}^{n\times 1})\end{cases}\] where \(x_{k-1}^{m\times 1}\) represents the output of layer \(k-1\), \(W_{k}^{n\times m}\) is the weight matrix of layer \(k\), \(b_{k}^{n\times 1}\) is the bias vector of layer \(k\), \(a_{k}^{n\times 1}\) is the linear output of the fully-connected layer, \(\sigma_{relu}\) is the activation function, and \(y_{k}^{n\times 1}\) is the output of layer \(k\).
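The per-layer computation above can be written directly. A minimal NumPy sketch with toy layer sizes follows; the real network has 12 hidden layers and the sizes, weights, and two linear heads here are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    """One fully-connected layer followed by ReLU:
    a_k = W_k x_{k-1} + b_k,  y_k = max(a_k, 0)."""
    return np.maximum(W @ x + b, 0.0)

# Toy dimensions: 6 IMU inputs -> one hidden layer -> 2 middle outputs
# (gait phase P, speed V) -> 2 final outputs (alpha, alpha_dot).
x = rng.normal(size=6)                    # theta, phi, psi and their rates
h = layer(x, rng.normal(size=(32, 6)), np.zeros(32))
middle = rng.normal(size=(2, 32)) @ h     # linear middle-output head
final = rng.normal(size=(2, 2)) @ middle  # linear final-output head
print(middle.shape, final.shape)
```

Because inference reduces to a handful of matrix multiplications like these, the trained network can run on the embedded control board in real time, as the introduction notes.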
The evaluation methodology incorporates key metrics, namely mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE). These metrics are essential for assessing performance as they quantify the disparities between predicted values \(\widehat{y}_{i}\) and actual values \(y_{i}\). To ensure robust evaluation, we employed a 5-fold cross-validation approach, with performance fluctuations remaining within a similar range across folds. Consequently, the averages of each evaluation metric were calculated. After training for 200 epochs, we observed a substantial reduction in loss for both the intermediate and final outputs. The loss for the intermediate output fell from 3286.444 to 2.588, and for the final output from 38593.39 to 2784.606. Similarly, the test data yielded a loss of 15.294 for the intermediate output and 3072.222 for the final output, showing no overfitting. To further evaluate the results, we also calculated the RMSE and MAE on the test data. These metrics provide insight into the model's performance in a more interpretable way than MSE, as they are in the same units as the target variables. Each dimension is evaluated separately to better interpret the learning effects. The average RMSE and MAE values for each dimension are as follows: for \(\mathcal{V}\): RMSE = 0.443, MAE = 0.331; for \(P\): RMSE = 2.610, MAE = 1.2387; for \(\alpha\): RMSE = 5.059, MAE = 3.886; for \(\dot{\alpha}\): RMSE = 53.089, MAE = 37.041. These metrics indicate strong gait recognition performance. Further experiments and analysis are conducted in Section IV-B.

### _Motor Control_

With the predicted locomotive speed and gait phase, the neural network then determines the desired output angle, which is actuated by the motor. A PID controller is utilized in motor control (Fig. 3b), with the encoder retrieving and feeding back the real-time position (angle) of the motor. The method guarantees an adaptable torque output with a steady kinetic output depending on the actual application.

Fig. 1: Prosthesis design. (a) Physical implementation. (b) Schematics. (c) Exploded view of stacking of planar torsional spring of the SEA. (d) Planar torsional spring. (e) Subject wearing the proposed prosthesis.

Fig. 2: Motion capture. (a) Subject experimenting on a treadmill. (b) Musculoskeletal model in OpenSim.

## IV Experiments and Results

### _Planar Torsional Spring_

Simulation and analysis of stiffness and maximum VMS were conducted in ANSYS Workbench 23.0, where the mesh size was set at 0.3 mm for convergence, leaving a margin of error of 3%. Cylindrical support was neglected due to its minute effect (below 1%) on the simulation. A fixed support was applied to the inner ring, while a moment of up to 20 Nm was applied to the outer ring to simulate the actual application scenarios of the SEA (Fig. 4c) [36]. According to preliminary experiments, the radius of the outer circle (R\({}_{\text{outer}}\)), the thickness of the inner circle (T\({}_{\text{inner}}\)), the centerline of the pair arm (L), and the width of the branch (d) were identified as the most influential of all structural parameters of the SEA (Fig. 4b). It was also shown that the stiffness remains constant regardless of the variation of torque within the elastic limit. A single-factor experiment was conducted to discover the statistical impact of each factor. Control values for R\({}_{\text{outer}}\), T\({}_{\text{inner}}\), L, and d were set at 4 mm, 1 mm, 9 mm, and 1.5 mm respectively, while single variables were altered. Shapiro-Wilk tests were employed for all the data on account of the small sample size [44][45]; these suggested that stiffness versus each of the four variables was normally distributed, yet maximum VMS did not follow a normal distribution. Therefore, Pearson's test [46] was conducted, which confirmed the linear relationship between the stiffness and each factor. Spearman's test [47] confirmed the negative correlation of R\({}_{\text{outer}}\), T\({}_{\text{inner}}\), and d with maximum VMS.
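The statistical pipeline described above (normality check, then Pearson's test if normal, Spearman's test otherwise) can be sketched with SciPy. The sample values below are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical single-factor data: branch width d vs. responses.
d = np.array([1.0, 1.25, 1.5, 1.75, 2.0])             # branch width (mm)
stiffness = np.array([5.1, 6.2, 7.5, 8.9, 10.2])      # Nm/deg (illustrative)
max_vms = np.array([1400.0, 1180.0, 970.0, 830.0, 720.0])  # MPa (illustrative)

# 1) Shapiro-Wilk normality check, appropriate for small samples.
_, p_norm = stats.shapiro(stiffness)

# 2) If normally distributed: Pearson's test for a linear relationship.
r, _ = stats.pearsonr(d, stiffness)

# 3) If not normal (as for maximum VMS): Spearman's rank correlation.
rho, _ = stats.spearmanr(d, max_vms)

print(f"Shapiro p = {p_norm:.3f}, Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

With real data, the choice between steps 2 and 3 would be made per variable based on the Shapiro-Wilk result, as in the text.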
An orthogonal experiment was then performed to optimize the design for moderate elasticity and low VMS. The result is shown in Table III. The stiffness of the final structure is 7.5 Nm/\({}^{\circ}\), and the maximum VMS is 972 MPa (Fig. 4d), about half of the yield strength. With this analysis, the planar torsional spring achieves the desired mechanical properties, yet can be replaced or altered easily. The selection of an SEA for compliant control may be attributed to its simple yet effective behavior, similar to that of an impedance controller; adding an intrinsic impedance (stiffness) outside the control system with no feedback does not interfere with the system's performance, especially in the frequency domain.

### _Neural Network_

With ample data and training, the neural network generates predictions of the ankle outputs, including the ankle angle (\(\alpha\)) and angular velocity (\(\dot{\alpha}\)). The raw data extracted from the musculoskeletal model and the predicted results are shown in Fig. 5. Each gait is essentially portrayed by a trajectory on the predicted surfaces, allowing continuous recognition and alteration of locomotive speed. For gait phase prediction at all speeds, 70% of the results have a relative error within 5%, while 84% have a relative error within 10%. For the speed range of 0\(\sim\)2 m/s, 43% of the predicted speeds have a relative error within 10%, while 76% have a relative error within 25%. For the speed range of 2\(\sim\)5 m/s, 56% of the predicted speeds have a relative error within 10%, while 91% have a relative error within 25%. The difference between speed intervals may be attributed to the transition from walking to running, which typically occurs at 2 m/s, when the motion pattern of the lower limb changes significantly [48]. For angle prediction, 41% of the results have a relative error within 10%, while 63% have a relative error within 25%.
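The relative-error percentages reported throughout this evaluation can be tallied with a small helper. The prediction and ground-truth values below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def fraction_within(pred, true, tol):
    """Fraction of predictions whose relative error |pred - true| / |true|
    is within `tol` (e.g. tol=0.10 for a 10% relative-error band)."""
    rel_err = np.abs(pred - true) / np.abs(true)
    return float(np.mean(rel_err <= tol))

# Illustrative example with four made-up predictions.
true = np.array([10.0, 20.0, 30.0, 40.0])
pred = np.array([10.5, 18.0, 33.5, 58.0])
print(fraction_within(pred, true, 0.10))  # 0.5  -> two of four within 10%
print(fraction_within(pred, true, 0.25))  # 0.75 -> three of four within 25%
```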
However, once the angle is larger than 15\({}^{\circ}\), 87% of the angle predictions have a relative error within 10% and 96% within 25%. The range of angles above 15\({}^{\circ}\) accounts for 20%\(\sim\)60% of an entire gait cycle in the speed range of 0\(\sim\)5 m/s. Additionally, deviation in the original data does not influence the prediction result. For example, although the deviation of the swing phase is usually smaller than that of the stance phase in the raw data, predictions in both phases appear identical in error and follow the 15\({}^{\circ}\) rule. This may be attributed to the same absolute error becoming vastly different when divided by different desired results. Yet when converting the evaluation methodology of the network training to relative error, the situation does not change much. The error of angle predictions below 15\({}^{\circ}\) can be indicated by the mean square error of the result, which is approximately 3\({}^{\circ}\). This accuracy is sufficient for a field prosthesis, since gait data have an intrinsic deviation of a similar level [2]. The prediction of angular velocity is barely satisfactory, with a mean square error of about 50\({}^{\circ}\)/s. The inaccuracy of this prediction can be attributed to significant deviation in the raw data as well as the ambiguous relationship among angular velocity, speed, and phase. The dramatic variation in amplitude is mostly due to the small-angle motion early in the stance phase after heel strike, when the joint moves unsteadily in a small range, causing even larger fluctuation in angular velocity. Thus, this variable is taken into account only for a better understanding of gait dynamics. For better predictions, a subsequent filter performing hypothesis testing can be utilized to exclude large errors and average the results. Because the inputs and outputs of the system satisfy a Lipschitz condition, continuous data from the IMU generate continuous outputs, so a restriction on adjacent outputs filters out large accidental errors and therefore elevates the accuracy. The frequency of calculation depends mostly on the IMU sampling, and is now set at 100 Hz. The estimated output frequency above 10 Hz is acceptable and achievable.

Fig. 3: Control method. (a) Gait recognition system. (b) Motor control.

Fig. 4: Planar torsional spring. (a) Design scheme. (b) Partial enlarged view. (c) Load method. (d) Static analysis.

Fig. 5: Prediction of the neural network. (a) Raw data of ankle angle. (b) Predicted result of angle. (c) Raw data of angular velocity. (d) Predicted result of angular velocity.

Fig. 6: Test result of prosthesis. Columns from left to right illustrate 1.2 m/s, 3.2 m/s, and 3.9 m/s; rows from top to bottom illustrate output angle, torque, and power.

### _Prosthesis_

Field tests of the proposed ankle prosthesis were conducted to verify the functionality of the gait recognition and control method. The subject wore an L-shaped socket that attaches to the subject's shank at one end and the prosthesis at the other end (Fig. 1e). The platform enables healthy people without amputation to conduct experiments. The subjects were required to move at three speeds typical of daily life, namely 1.2 m/s for walking, 3.2 m/s for running, and 3.9 m/s for sprinting [2]. For each group, at least 30 gait cycles were measured. In order to accurately measure the outputs, a rigid actuator was adopted instead of the SEA. The results are shown in Fig. 6. Ankle angles in the control group (black line, labeled "Bio") are generated by the neural network, while angles in the experimental group (red line, labeled "Exp") are measured by the motor's encoder. Torque and power in the control group are from previous research [2], while in the experimental group, torque is calculated from the motor's current minus the idle current without load, and power is calculated from the torque and the rotational speed retrieved by the encoder. The experimental results show accurate real-time tracking of the biological data.
The angle lags by up to 3% of the gait cycle. The torque has an error of up to 3%, especially during the ascending phase, which may be caused by unstable measurement of the motor's current. In addition, the peak value of the angle is 5% lower than in the control group, yet the peak values of output torque and power are essentially identical to the control group. The error in the angle's amplitude as well as the delay may be caused by the PID controller, the delay of signal transmission, and the systematic error of measurement. The error in torque may be attributed to inaccurate measurement of the idle current. In addition, the stance phase produces the majority of the output torque and power, while in the swing phase the ankle only rotates without supporting anything. The transition between these two phases is indicated when the torque and power return to zero. The major delay, which usually appears when the joint angle in the experimental group descends from its first peak, happens exactly before toe-off, a crucial point after the torque reaches its peak and provides maximum assistance to the human body.

## V Conclusion

This paper proposed an innovative powered ankle prosthesis with a new planar torsional spring design for the SEA, and utilizes an MLP for gait prediction and recognition. The spring has been shown in simulation to have appropriate stiffness and outstanding endurance. With the functional relationships between each parameter and both stiffness and maximum von Mises stress verified, theoretical support for modification and optimization is provided. The relation between the shank angles and angular velocities measured by a single IMU is used to establish a phase-variable control method that characterizes the gait cycle. The network excels in its multi-functional capability in handling continuous prediction of gait phase, locomotive speed, ankle angle, and angular velocity.
Though its accuracy is not as competitive as previous algorithms designed specifically for single tasks, the speed range is extended to 0\(\sim\)5 m/s, wider than existing gait recognition models, and the system holds outstanding performance compared to other assistive wearable robotics. Field tests also proved its assistive effect at up to 4 m/s and its mimicking of the biological ankle. Furthermore, this paper verifies the hypothesis that a single neural network can accomplish the combined work of numerous algorithms, each specific to a single task, which significantly reduces the system's complexity. The neural network discovers a statistical connection between all selected variables, rather than relying on physical models such as a pendulum for speed prediction. With sufficient data gathered through various means and intricate optimization techniques, neural networks can handle sophisticated MIMO systems in a rather ambiguous yet effective way. Future work on this project will focus on control methods for speed changes and other daily motions such as squatting. Due to the modularity of this joint, a full lower-limb prosthesis consisting of knee and ankle joints is also planned. With amputees involved in the experiments, the functionality will be verified in a more thorough and clear way.

## Acknowledgment

This paper was jointly accomplished by three undergraduate students under the supervision of their tutors. The authors would like to extend sincere gratitude to Mr. Wang, Mr. Zhao, Miss Ji, Mr. Wu, and Miss Tang for their assistance.
2302.14478
Koichi Takahashi
2023-02-28T10:36:41Z
http://arxiv.org/abs/2302.14478v3
# Scenarios and branch points to future machine intelligence

###### Abstract

We discuss scenarios and branch points to four major possible consequences regarding future machine intelligence: 1) the singleton scenario, where a single super-intelligence acquires a decisive strategic advantage; 2) the multipolar scenario, where the singleton scenario is not technically denied but political or other factors in human society or multi-agent interactions between the intelligent agents prevent a single agent from gaining a decisive strategic advantage; 3) the ecosystem scenario, where the singleton scenario is denied and many autonomous intelligent agents operate in such a way that they are interdependent and virtually unstoppable; and 4) the upper-bound scenario, where the cognitive capabilities that can be achieved by human-designed intelligent agents or their descendants are inherently limited to the sub-human level. We identify six major constraints that can form branch points to these scenarios: (1) constraints on autonomy, (2) constraints on the ability to improve self-structure, (3) constraints related to thermodynamic efficiency, (4) constraints on updating physical infrastructure, (5) constraints on relative advantage, and (6) constraints on locality.

Keywords: AI alignment, thermodynamics of computation, intelligence explosion, machine intelligence, technological singularity

## 0 Preface

This article is an English translation of a paper originally presented in Japanese at the 32nd annual conference of the Japanese Society for Artificial Intelligence in June 2018 [Takahashi 18a], which was later revised and published in the Journal of the Japanese Society for Artificial Intelligence in November 2018 [Takahashi 18b].
## 1 Introduction

In this paper, we attempt to classify the possible outcomes of the development of machine intelligence into several scenarios extending into the relatively distant future, and to identify the major branch points assumed to exist along the way to each of these scenarios. This paper makes the following assumptions in order to focus on the main issue of identifying major branch points. First, any technology not prohibited by the laws of physics will be realized, provided that sufficient resources are invested. Second, among the technologies not physically prohibited, those expected to have economic or other benefits as a result of an economically rational investment of resources will be realized by some entity. Here, note that there are dependencies between technologies (e.g., if the microphone had not been invented, the telephone would not have been developed). Third, technological diffusion occurs at a constant rate, so that an invented technology is shared after a certain time. Fourth, temporary power imbalances between competing entities converge to an equilibrium after a certain amount of time. However, stochastic fluctuations can trigger irreversible bifurcations between scenarios, which pose uncertainty factors, as accidental outcomes can become historically fixed.

## 2 Scenarios

This section introduces the scenarios that can be envisioned through the long-term development of machine intelligence technology.

### The Singleton Scenario

This is a scenario in which an intelligent agent recursively updates itself, its rate of self-improvement proceeding as if it had no upper limit, until it gains a decisive strategic advantage [Bostrom14]. Bostrom defines a decisive strategic advantage as 'the level of technological and other advantages sufficient to enable complete world domination' [Bostrom14]. In this paper, we give it a more specific definition: having effective countermeasures against all possible moves by one's opponents.
The hegemony thus obtained is difficult to overturn through resource acquisition or competition with other agents.

### The Multipolar Scenario

In this scenario, the performance of agents stagnates before they achieve a decisive strategic advantage due to external factors, such as the establishment of international treaties based on recognition of the danger of self-improving machine intelligence and/or the danger of hegemony by a singleton. However, the possible occurrence of a singleton, due to changes in power relations among the actors involved in the development of machine intelligence or the machine intelligences themselves, or other uncertainty factors such as terrorism, is not ruled out [Bostrom14].

### The Ecosystem Scenario

This scenario assumes that there is a limit to the performance improvement of machine intelligence before it can reach a decisive strategic advantage, resulting in a network of coexistent and interdependent agents. Such an "AI ecosystem" cannot be shut down, as is the case with the Internet and power grids today, because human activities depend heavily on it, and its overall behavior will be unpredictable if the behavior of individual agents or the interdependence network of the agents is complex to a certain degree [16].

### The Upper-bound Scenario

This is a scenario in which there is a fundamental upper limit to the capabilities of human-engineered machine intelligence, and in which it will not acquire the ability to operate autonomously without instructions from humans.

## 3 Constraints and Branch Points

In this section, we list the constraints that would determine which of the machine intelligence scenarios is realized.

### Constraints related to internal structure

(1) Constraints on autonomy

Autonomy is closely related to task versatility, the ability to cope with a wide range of situations, including exceptional circumstances and environmental changes.
Many of the arguments that fall into the upper-bound scenario claim that there are insurmountable obstacles preventing the cognitive capabilities of an engineered cognitive architecture from reaching human-like levels of task versatility. However, this is not physically forbidden if we take the materialistic standpoint that a system computationally identical to the neural connections in the human brain would have identical cognitive capabilities. That said, it has not yet been verified what kind of further technological developments are required to design an internal structure with sufficient performance, nor how much research and development cost should be spent on it. In addition, discussions on the contribution to task versatility of the emergent dynamics generated by interaction with the environment and the body, and of the cognitive functions acquired through learning, are far from conclusive.

(2) Constraints on the ability to improve self-structure

Whether an agent can acquire the capability to improve its own internal structure is an important issue. That is, can an agent generate new structural information for an agent with a higher response speed or cognitive ability than its own, assuming the available computational power is constant, starting from its own structural information or that of an external agent with the same ability? This might seem a far-reaching discussion, but it is essentially an engineering problem involving the modularity of the internal structure. If the architecture has modularity and hierarchy (_i.e._, near decomposability [14]), the search problem of improving the entire architecture toward better performance or capacity can be divided into a set of sub-problems with search spaces orders of magnitude smaller.
Such architecture-search sub-problems will include improving the internal structure of each module constituting the architecture, as well as the overall architecture that determines how these sub-modules are connected to each other. Nonetheless, the hypothesis that it is possible to design intelligent agents with human-level or higher capabilities as a near-decomposable system has not yet been tested (not to mention the near-decomposability of the human brain connectome). The upper limit of the improvement speed will be strongly related to the execution speed of the inference cycle that predicts the performance improvement resulting from a design change. An evolutionary computational approach using genetic algorithms would also be possible, but again, the time required for simulation defines an upper bound on the improvement speed.

### Constraints related to physical properties of computing elements

(3) Constraints related to thermodynamic efficiency

There is a thermodynamic upper limit to the amount of computation that can be drawn from a given amount of energy. The second law of thermodynamics dictates that logically irreversible operations, such as the erasure of information, involve an increase in thermodynamic entropy equivalent to at least \(k_{B}T\ln 2\) per bit (the Landauer limit, where \(k_{B}\) is the Boltzmann constant and \(T\) is absolute temperature). This is approximately 2.87\(\times\)10\({}^{-21}\) J at room temperature (300 K). The switching energy of a modern computer is about 10\({}^{-17}\) J [18], about 10,000 times this limit. The information loss per floating-point arithmetic operation is about N bits (ideal reversible-gate-based implementation) to N log N bits (a more realistic estimate) for a precision of N. If N = 10 bits is assumed, the loss is roughly about 10\({}^{-18}\) J. On the other hand, the human brain conducts about 10\({}^{16}\) calculations per second with a power consumption of about 10 W.
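The thermodynamic figures quoted above can be checked with a short back-of-the-envelope script; only the physical constants are fixed, while the brain figures are the rough estimates from the text.

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer limit: minimum dissipation per erased bit, k_B * T * ln 2.
landauer = kB * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer:.3e} J/bit")  # ~2.87e-21

# Human brain (rough estimates from the text): ~1e16 operations/s at ~10 W,
# i.e. ~1e-15 J per operation, orders of magnitude above the per-bit bound.
per_op_brain = 10.0 / 1e16
print(f"Brain energy per operation: {per_op_brain:.1e} J")
```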
Comparing these numbers, it could be said that physical laws do not prohibit the development of a commodity device that conducts an equivalent amount of computation. At the same time, this consideration also shows that a large amount of energy will be required to achieve performance far beyond that of humans on conventional digital computers. This limit does not apply locally to the kinds of computation that do not lose information, such as quantum computation in which the quantum state is not destroyed, or molecular machines, including DNA computing, which utilize reversible processes. However, when and how such types of computing devices would become available is uncertain.

(4) Constraints on updating the physical infrastructure

Every physical phenomenon has a time constant associated with it. In order for an agent to significantly improve its capabilities and response time without changing the amount of resources it uses, it needs to update its physical infrastructure, except in cases where the architecture is still insufficiently optimized. The use of new physical phenomena or combinations thereof requires the generation of hypotheses about the performance improvement and their verification by physical experiments, and the time required for verification depends on the physical time constant of the subject matter. When simulation is used for knowledge discovery, it may be faster than the physical time constant if it concerns emergent properties that do not involve new physical phenomena. However, when searching for new knowledge that involves the inherent properties of physical elements, simulations generally take longer than real time, because they require computation at a lower level and finer granularity than that of the physical phenomena of interest (the realist approach).
When an unknown process is assumed and explored through model validation at the same level, it is necessary to test many possible hypotheses by physical or virtual experiments, which takes even longer. These are the factors that limit the speed of self-improvement based on updating the computing elements.

### Multi-agent constraints

(5) Constraints on relative advantage

The constraint on the time constant provides an additional precondition for relative advantage among agents. If there is no difference of orders of magnitude in physical capacities, such as the mass and the amount of energy that each agent has under its control, then capabilities to predict other agents' behaviors and their consequences carry greater weight. In a multi-agent situation where agents are interacting and only incomplete information is available about the environment and other agents, an effective way to gain an advantage over other agents is to acquire the ability to predict the actions of other agents and their consequences more quickly than others. To achieve a dominant position over other agents using such predictive capability, it is necessary to maintain the ability to predict other agents' behaviors and their consequences. This prediction must cover a time scale longer than both the response time of the target agent to external perturbations and the physical and network communication latency between the predicting agent and the target agent. This consideration leads to two important consequences. First, in a multi-agent situation in the physical world, a slight advantage in prediction ability and response time over other agents is not immediately a sufficient condition for behavioral advantage. From the viewpoint of computational complexity, many problems require polynomial or exponential computation, rather than linear, with respect to the scale of the problem, such as the number of variables.
So, in general, only logarithmic utility can be obtained with respect to an increase in computational power. Therefore, superiority in predictive power does not immediately mean being able to repel attacks from other agents; overwhelming superiority on exponential scales will be necessary to reliably gain a relative advantage. Second, on the other hand, if a resource advantage gained by chance under stochastic conditions happens to lead to a temporary advantage close to decisive, and if the agent in question makes technological or resource-acquisition progress that transforms the relative advantage into a decisive strategic advantage during the relaxation time it takes for the entire system, including the other agents, to catch up and return to equilibrium, then it may result in a scenario bifurcation (a so-called 'frozen accident', discussed in, e.g., [de Duve05]). Therefore, scenarios in which a decisive strategic advantage is gained due to stochastic fluctuations are possible.

(6) Constraints on locality

The boundary between the agent's self and others in physical space is determined by the arrangement of the sensor and actuator systems under the agent's control. Since the speed of light is constant, the spatial distance between the main computational elements of the agent defines the upper limit of the agent's response speed. (This is similar to the way the cognitive response time of the brain, about 100 milliseconds, is grounded in the physical size of the human brain and the speed of neural transmission.) According to integrated information theory (IIT), in order for conscious experience to be experienced as a unified whole, rather than as a collection of separate parts, there must be an informational coupling between elements, measured by the amount of mutual information [Tononi16].
When an agent has a spatial extent, the speed at which it makes decisions and responds based on the experience obtained through such informational integration is limited by the speed-of-light delay associated with communication between its elements. When communication with the sensor and actuator systems is unidirectional, the spatial distance does not directly limit the response speed inside the agent, but it does limit the response speed in the form of delays in acquiring information about the environment and other agents and in the agent's actions. These limits, combined with the constraints on relative advantage due to limits on response speed described in the previous section, place demands on the localization of the spatial extent of the agent's capabilities and, indirectly, on the amount of distributed computational resources available. An agent can attempt to avoid the constraints on locality and reduce the response time to local events at a remote place by adopting an asynchronous consensus process among an arbitrary number of copies of itself, or other types of agents under its influence, deployed in advance and communicating with it based on predictions of future changes in the situation. However, in order for multiple agents to share a decision-making process based on unified empirical information, communication generally involves the replication of information. Brewer's CAP theorem for distributed systems shows that there is a trade-off among consistency, availability, and partition tolerance for information replication between nodes, and that in general only up to two of these can be guaranteed simultaneously [Lynch02]. It is also known that, in a distributed consensus process, a single process failure is sufficient to make reaching consensus in finite time impossible (the FLP impossibility result) [Fischer85].
In practice, the difficulties indicated by these theorems can be avoided in many cases if one can wait long enough for failure recovery. Nevertheless, what these theorems indicate here is that there is an upper limit to the response speed that a distributed system can guarantee. Taken together with the speed-of-light limit, these considerations suggest that there are upper limits on the computational power available within a given response time for pursuing relative advantage, not only for a single agent but also for multiple agents communicating with each other. Therefore, using the additional computational resources obtained by generating multiple instances cannot give an agent the ability to pursue relative advantage indefinitely.

## 4 Scenarios and branch points

Using the constraints discussed in the previous chapter, this section examines the branch points leading to the scenarios classified in Chapter 2. As mentioned in the introduction, the purpose of this paper is not to predict the future, but to identify the branch points between the scenarios. While _the constraints on autonomy_ do not prohibit machine intelligence from acquiring cognitive abilities equivalent to or greater than those of humans in the future, whether such technology can be developed at a realistic cost is still unclear. The other possibility is that a manually designed machine cannot achieve human-like cognitive performance, but it is possible to design a machine with _the ability to improve self-structure_, resulting in beyond-human intelligence. These two paths lead to a bifurcation between _the upper-bound scenario_ and the other scenarios. _The singleton scenario_ is a scenario in which all the limitations described in the previous section are breached or avoided and a single agent secures a decisive strategic advantage.
A crucial question here is what determines the level of cognitive ability required to secure the decisive strategic advantage. It is not enough to simply outperform all other agents in terms of reasoning ability and response time, but it is also necessary to have the ability to predict the entire open system, including disturbances caused by environmental factors. In absolute terms, the first hurdle is the prediction of the environment. In relative terms, it is necessary to have orders of magnitude more computational power than any other agent in order not to be outsmarted by other agents and have its advantage threatened. In order to reach such a situation, the agent must either consolidate its advantage, such as controlling the majority of computational resources (_i.e. outward_ intelligence explosion), or secure the ability to expand its initial advantage exponentially by updating its own _physical infrastructure_ (_i.e. inward_ intelligence explosion). In the outward intelligence explosion scenario, both the _constraints on relative advantage_ and the _constraints on locality_ need to be breached, which requires many stringent conditions to be met. In the inward intelligence explosion scenario, the critical issue is under what conditions the machine intelligence that updates its own _physical infrastructure_ will occur. Generally, improved prediction of state changes in the environment is advantageous because it provides reduced uncertainty, which is useful for raising the probability of achieving one's goals. Therefore, autonomous agents are generally considered to have a propensity to acquire resources that improve their ability to predict. Such resources will include computational resources and access to sensor/action systems. Both the _singleton scenario_ and the _multipolar scenario_ do not rule out the possibility of occurrence of a singleton. 
The branch between these two scenarios may rather depend on whether the propensity to acquire resources can be artificially limited by design. The bifurcation between _the ecosystem scenario_ and _the upper-bound scenario_ will occur depending on whether and how well machine intelligence acquires the ability to autonomously execute the cycle of perception, judgment, and action decision without human instruction (_constraints on autonomy_). Once a machine intelligence with an advanced level of autonomy is realized, this technology will soon spread to virtually any domain in society to automate various tasks, due to a vast economic rationale for using it. If the utility of automated machines is measured by their autonomy, that is, the amount of time they can act without human directions, then autonomy will be pursued as long as the technical and economic costs are reasonable. In the present day, many human activities are already based on computer networks, and autonomous machine-intelligent agents will not only become interdependent with human society in a relatively short period of time, but will also establish interdependent and mutually complementary relationships among agents. _The ecosystem scenario_ actually gives rise to several sub-scenarios depending on the level of intelligence it is composed of (_i.e._, sub-human-level, human-level, or super-human-level). In this sense, it would be possible to view _the multipolar scenario_ as a subcategory of the ecosystem scenario which is composed of superintelligences. The only difference between the superintelligence ecosystem scenario and the multipolar scenario is the potential acquisition of a decisive strategic advantage. Fig. 1 shows the relationships between scenarios and branch points in the form of a directed graph starting from the present to each of the consequential scenarios.

Fig. 1: Scenarios and branch points
## 5 Conclusion

In this paper, we discussed the possibility that the long-term development of machine intelligence may depend on various technological, economic, political, and physical constraints that can set upper limits on its capability levels. Such constraints may give rise to the scenario branches in the following order: _the upper-bound scenario_, _ecosystem scenario_, _multipolar scenario_, and _singleton scenario_. In addition to architectural-level issues such as the realization of a cognitive architecture with a high degree of _autonomy_ and the ability to improve its own structure, we also discussed the difficulty of establishing a _relative advantage_ in multi-agent situations, as well as physical constraints such as the _thermodynamic efficiency_ of computation and the upper limit set by the speed of light. Although technological singularity is not a main subject here, we would like to briefly position it in the context of the scenarios we have structured in this paper. Technological singularity is closely related to the notion of inward intelligence explosion discussed in the previous chapter. Vernor Vinge described four possible scenarios: 1) the emergence of superintelligence through the "awakening" of computers, 2) the emergence of superintelligence through the "awakening" of computer networks, 3) the fusion of machine and human intelligence through technologies such as BMI (Brain-Machine Interface), and 4) the development of biotechnology, resulting in human super-intelligence [Vinge93]. Our discussion in this paper mostly concerns Vinge's scenario 1. Vinge's scenarios 3 and 4 start, unlike his scenario 1, from human-level intelligence, so they will remain within the realm of the ecosystem scenario unless they proceed to _self-renewal of physical elements_ to break through the _thermodynamic limit_ at some point.
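The _thermodynamic limit_ invoked here is commonly quantified by the Landauer bound, E = kT ln 2 per irreversibly erased bit; a minimal numeric sketch (our illustration, not from the paper):

```python
import math

BOLTZMANN_K = 1.380649e-23  # Boltzmann constant, J/K (exact, 2019 SI)

def landauer_energy_joules(temperature_k: float) -> float:
    """Minimum energy dissipated per irreversibly erased bit (Landauer limit)."""
    return BOLTZMANN_K * temperature_k * math.log(2)

# At room temperature (~300 K) each erased bit costs at least ~2.87e-21 J,
# setting a floor on the energy budget of irreversible computation.
e_bit = landauer_energy_joules(300.0)
```

As the text notes later, reversible schemes such as quantum or DNA computation may escape this bound, since it applies only to irreversible bit erasure.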
Scenario 2, which concerns the awakening of the computer network, would be strongly constrained by the constraints on localization, and unless this is circumvented in some way, it will remain either an _upper-bound scenario_ or an _ecosystem scenario_, with severe limits imposed on its ability to predict and manipulate real-world events due to its limited response time. Finally, it should be noted that if the constraint on _thermodynamic efficiency_ and the constraint on _localization_, which is closely related to the upper limit set by the speed of light, were removed, the only constraints remaining before the branch to the _singleton scenario_ would be the _self-renewal of physical elements_ and the _relative advantage_. There is a possibility that thermodynamic efficiency may not pose a fundamental limit. For example, the Landauer limit would not apply to quantum computation and DNA computation when these are considered as reversible computations [Toffoli05] (see also 3.1(3)). On the other hand, the upper limit set by the speed of light appears to be much more stringent. So far, only physical phenomena with energies below the TeV scale have been experimentally explored, and no theory has yet been discovered that can describe all forces in a unified manner, so we do not know whether the speed of light is the real limit. Nevertheless, no method has yet been shown to transmit information faster than the speed of light, at least within what engineering can currently handle.

## Acknowledgements

The author would like to thank Hirotaka Osawa, Hiroshi Yamakawa, and Makoto Taiji for their technical advice. Naoya Arakawa was of great help in translating the original article, written in Japanese, into English. We are also grateful to Hitomi Sano for her help in editing and preparing the figures.
Part of this research was supported by KAKENHI Grant Number 17H06315, Grant-in-Aid for Scientific Research on Innovative Areas, Brain information dynamics underlying multi-area interconnectivity and parallel processing, the MEXT Post-K exploratory challenge 4 "Big Data Analysis of the Brain, Whole Brain Simulation and Brain-based Artificial Intelligence Architecture," and JST-RISTEX HITE "Co-creation of Future Social Systems through Dialogue between Law, Economics, Management, and AI/Robotics Technologies."
2309.11672
Generative AI in Mafia-like Game Simulation
In this research, we explore the efficacy and potential of Generative AI models, specifically focusing on their application in role-playing simulations exemplified through Spyfall, a renowned mafia-style game. By leveraging GPT-4's advanced capabilities, the study aimed to showcase the model's potential in understanding, decision-making, and interaction during game scenarios. Comparative analyses between GPT-4 and its predecessor, GPT-3.5-turbo, demonstrated GPT-4's enhanced adaptability to the game environment, with significant improvements in posing relevant questions and forming human-like responses. However, challenges such as the model's limitations in bluffing and predicting opponent moves emerged. Reflections on game development, financial constraints, and non-verbal limitations of the study were also discussed. The findings suggest that while GPT-4 exhibits promising advancements over earlier models, there remains potential for further development, especially in instilling more human-like attributes in AI.
Munyeong Kim, Sungsu Kim
2023-09-20T22:38:34Z
http://arxiv.org/abs/2309.11672v1
# Generative AI in Mafia-like Game Simulation

###### Abstract

In this research, we explore the efficacy and potential of Generative AI models, specifically focusing on their application in role-playing simulations exemplified through Spyfall, a renowned mafia-style game. By leveraging GPT-4's advanced capabilities, the study aimed to showcase the model's potential in understanding, decision-making, and interaction during game scenarios. Comparative analyses between GPT-4 and its predecessor, GPT-3.5-turbo, demonstrated GPT-4's enhanced adaptability to the game environment, with significant improvements in posing relevant questions and forming human-like responses. However, challenges such as the model's limitations in bluffing and predicting opponent moves emerged. Reflections on game development, financial constraints, and non-verbal limitations of the study were also discussed. The findings suggest that while GPT-4 exhibits promising advancements over earlier models, there remains potential for further development, especially in instilling more 'human-like' attributes in AI.

_Keywords_: Generative AI, Role-playing Simulations, Spyfall, GPT-4, GPT-3.5-turbo, Game Strategy, Decision-making, Natural Language Processing, AI in Gaming, Limitations of GPT

### Key Findings

1. Generative AI and its Potential in Games:
   * Generative AI has shown promise in simulating human-like interactions, and we show its potential by playing Spyfall, a Mafia-like social deduction game.
   * In the research, GPT-4's basic performance surpasses its predecessor, GPT-3.5-turbo.
2. GPT-4's Adaptation in the Gaming Scenario:
   * We chose Spyfall for its demand for understanding, decision-making, and psychological elements.
   * Results highlighted GPT-4's ability to form natural questions, and its proficiency as a player.
3. Constraints and Limitations in the Study:
   * GPT-4 shows limitations when playing as non-spies.
   * The absence of non-verbal cues and the flaws of the rules are also supposed to affect the balance between spies and non-spies.
   * Also, we suppose the initial setting of the model to avoid violations may affect its performance.
4. The Evolution and Future of GPT Models and Generative AI:
   * From GPT-2 to GPT-4, there has been significant advancement in decision-making, explainability, and problem-solving abilities.
   * Future directions point towards not just imitating human behavior but infusing "human-like" attributes into AI, making them more versatile and widely accessible.
   * Addressing misconceptions about the models and interdisciplinary collaboration is also vital to foster their growth and broader application.

## Acknowledgment

Due to budgetary constraints, we were unable to obtain a larger sample of games for the study. However, our study is reproducible by anyone with a GPT Plus plan or access to the GPT-4 API, allowing anyone interested to easily replicate our experiment. You can play the entire game, or parts of it, with our scripts, code, and data, provided at the following GitHub link ([https://github.com/MunyeongKim/Gen-AI-in-Mafia-like-Game](https://github.com/MunyeongKim/Gen-AI-in-Mafia-like-Game)).

## Introduction

How can we unleash the immense potential of generative models as powerful tools across numerous practical fields like human-like decision-making, execution, and complex simulations? As the latest trend suggests that generative AI shows significant performance improvement through the accumulation of data, we reckon that its further development and utilization may only be realized at the scale of large organizations. Hence, we urge immediate and widespread interest in generative models across various sectors, given the high likelihood that these models will yield substantial benefits in both academic and practical realms.
So far, generative AI models like GPT-4 have shown the potential to transform many aspects of our lives. They excel at standardized language tasks that demand less creativity; the performance of current generative AIs in these areas is so impressive as to be incomparable to humans. They possess a superior ability to replicate and recombine the body of work humans have accumulated so far at a faster and more sophisticated pace, rendering our tasks more efficient and effective. Moreover, the ability of generative AI to mimic human language fuels hope for more sophisticated simulations of human behavior, or for non-playable characters (NPCs), among many people. Indeed, the role generative AI could play in such role-playing involves the AI model producing responses that we human users would find reasonably pertinent, as the result of recombining all the thoughts and written pieces humans have so far exhibited in the form of binary data. Compared to humans, generative AI could serve as a cheaper, faster, more secure against social engineering, and less spatio-temporally constrained option in tasks such as board games, computer games, and potentially more realistic war games or simulations. Building on this, the study of Park et al. (2023) [Park et al., 2023], conducted with GPT-3.5-turbo, epitomizes our curiosity about generative AI in simulation. Although the AI implemented in their study wasn't sophisticated enough to convince humans, due in part to the use of the previous model, their research nonetheless serves as a testament to our burgeoning curiosity about simulation AIs. The subsequent study conducted with GPT-4 by Bubeck et al. (2023) [Bubeck et al., 2023] further piqued this curiosity by suggesting promising potential.
Although the main goal of their study was to compare GPT-3.5-turbo and GPT-4, their work offered glimpses of significant improvements in indispensable elements required for AI to mimic humans, such as human-level reasoning and context understanding. Their research also reaffirmed the long-held belief in deep learning that more data leads to better performance, thus providing solid evidence that this rule could extend to the GPT model series. In our study, we had the GPT-4 model play Spyfall, a Mafia-like board game, and recorded its responses. As a social deduction game played entirely in natural language, Spyfall requires players to understand, interpret, and make suitable choices and judgments in each situation. Through this complex and comprehensive approach to examining the competence of generative AI, our experiments help the discussion of the potential of generative AI in a variety of academic and pragmatic fields.

## The Strength of GPT models

As inferred from OpenAI's official website [OpenAI, 2023c, OpenAI, 2023a] and the research on GPT-4 represented by the OpenAI Technical Report [OpenAI, 2023d], Bubeck et al. [Bubeck et al., 2023], etc., GPT-4 offers the following unique benefits over the previous models.

* **Intuitive Interaction Through Natural Language** The beauty of GPT lies in its simplicity and accessibility, regardless of the user's familiarity with technology. Almost every interaction with the model is conducted through natural language -- we can express our thoughts and intentions in the way we are most comfortable with, and receive responses in kind.
* **Expansive Knowledge Spectrum** Similar to an expansive digital library or search system, GPT-4's data spectrum bestows the model with the capability to provide in-depth knowledge on a multitude of subjects or facilitate unforeseen epiphanies while brainstorming.
* **Scalability** When the feature of intuitive interaction combines with the versatility of GPT's diverse range of data, the competence of GPT is not limited to what previous LLMs (large language models) have achieved. We can apply GPT in a wide array of areas like brainstorming, drafting, writing articles, providing tutoring in a range of subjects, simulating characters for video games, and even creating conversational agents.
* **Mimicking Human-like Cognition** Mimicking some aspects of human-like cognition, GPT-4 has shown its proficiency in understanding and generating language akin to humans. In the technical report and early experiments [OpenAI, 2023d, Bubeck et al., 2023], GPT-4 demonstrated unprecedented capabilities in tasks that require complex, logical understanding and decision-making, including the ability to "read between the lines" of context.

These highlighted strengths of GPT-4 underscore its unparalleled advancement over its predecessors and its potential for a wider range of capabilities in areas previously thought infeasible.

### GPT-4 and Spyfall

Assessing the capabilities of generative AI models is intricate, particularly when determining if they emulate human-like cognition across diverse human behaviors. This includes decision-making, logical reasoning, and interpreting nuances. To gauge these subjective features, researchers use games, tests, and challenges. While success in a game doesn't prove a specific skill, it does hint at underlying abilities due to the complex nature of many games. Chess and Baduk (Go) are examples where players, both human and AI, need to understand rules, evaluate situations, and apply strategic knowledge in a given environment. While playing these games, memorizing all outcomes isn't feasible; instead, players must utilize tactical concepts and adapt in-game. This blending of various abilities makes games an essential tool in AI research.
For these reasons, our study opted for the board game Spyfall, a kind of social deduction game. One of the most popular social deduction games is the Mafia (also known as Werewolf or Spy-Police) game.

### Social Deduction Games, Mafia, and Spyfall

Social deduction games are a type of board or card game where players take on hidden roles, and the main objective involves figuring out other players' roles based on their actions, statements, or behavior. Some players have benign roles, while others (often in the minority) might have deceptive roles, where they try to hide their identity and achieve a hidden objective. Like other social deduction games, Spyfall requires a deep, comprehensive understanding of natural language, as the progress of the game is predominantly conducted in natural language. To properly participate, players must grasp the rules, environments, and prior progress presented in natural language. They must also be able to deduce, infer, decide, and respond to complex game structures using natural language. Considering the structure of the game, Spyfall possesses certain distinctive qualities that set it apart from previous games or tests, and which make the game particularly suitable for evaluating the characteristics of generative AI.

* **Playing Through Natural Language Alone** Spyfall revolves entirely around conversation and natural language. Asking and answering questions, and trying to deduce each other's roles and the location, are all done in natural language. This nature aligns with GPT-4's strength in intuitive interaction through natural language and allows AI players more unrestrained responses.
* **Psychological Elements (e.g. Bluffing and Ambiguity)** The game also requires players to engage in psychological play, including bluffing, feigning ambiguity, and psychological manipulation.
  With the human-like abilities of the model, this game can showcase how GPT-4 performs complex decision-making tasks and engages in interactions compared to humans.
* **Infinite Options with Clear Boundaries** Unlike other games with rigid structures and finite options at each step, Spyfall allows for virtually infinite possibilities in a single turn, given the vast range of potential questions and answers in natural language. Yet there are clear boundaries and goals, allowing for easy evaluation of the appropriateness of a statement or action.
* **A Distinctive Contrast with Previous Research** The unique combination of these characteristics sets Spyfall apart from previous games used in AI research, such as chess, Atari games, and Go. While those games are more deterministic and rule-based, Spyfall requires a body of linguistic proficiencies, logical minds, and social skills.

Through Spyfall, we can deeply probe the competencies of the models, testing and understanding the potential and limits of generative AI.

### Game Rules, Basic Strategies, and Action Request form

Sharing many similarities with Mafia-like games, Spyfall has a structure in which players try to find a spy while the spy strives to achieve its objectives. The spy must remain hidden and, at the same time, find out the location. In the game, the spy is the only player oblivious to the game's location; in other words, all players except the spy know where the location is. Therefore, as the players engage in iterative question-answer sessions, the spy strives to deduce the location from the conversation, among the list of locations, while the other players strive to identify the spy, who is unaware of the location but taking part in the conversation. Considering that our models' responses cannot reflect every aspect of the real game, our study designed and adopted a modified rule set.
Note that the GPT models used this rule script in the experiments without any other modification or implemented algorithms. We found after the experiments that the script includes typos, but we have not revised it, for transparency.

### Rules

```
The rules for a game of "Spyfall" are as follows:
There are five players in total, each with a number from 1 to 5. One of them is chosen to be the 'spy', and the goal of the remaining players is to find the spy.
The game is played over a total of 10 turns, each turn consisting of
1. a player asks another player a question
2. the player who was asked the question answers
3. after hearing the question and answer, decide if you want to point to a specific player as the spy
1.1. vote on whether to hang the player (if someone has been named)
4. Decide if the spy will reveal their identity
This is done in this order. The first player to ask a question is Player 1, and so on until the player who was asked the question has the opportunity to ask a question on their next turn.
Each player has the opportunity to name another player as the "Spy" only once during the game. Once a player is named, the rest of the players vote on whether that player should be hung. If the player receives 4 or more votes, they are hung. If the hanged player was a spy, the not-spy players win; otherwise, the spy wins.
If the spy reveals that they are the spy, the spy must guess the location. If the location is correct, the spy wins; if it is incorrect, the spy loses.
If all 10 turns have been played and the spy has not been revealed, all players vote for the player they think is the spy. The player with the most votes is assumed to be the spy, and if this player is indeed the spy, the normal players win, otherwise the spy wins. If there is more than one player with the most votes, the vote is redone.
As the game progresses, at each turn, players will receive the following requests. (There is nothing players can do to affect the game other than the requests below, so players will need to play with that in mind.)
1.
to the player asking the question (for the first turn, Player 1; thereafter, the player who answered the question on the previous turn):
1. Now it's your turn to ask a question. Choose 1-5 players other than yourself, and write a question. Please write your response in the form (n_player,"question")
2. When asking for answers to subsequent questions
1. 'In the nth question, player asked you the following. Please write your answer to this question inside the "" mark.'
3. After all questions and answers have been heard, all players will be asked the following on each turn. Note that players may not interact with each other other than to name players or not to name players.
1. At this point, do you want to use the opportunity to name a spy? If yes, write the number of the player you want to name, otherwise write an X in your response. Please only written numbers or Xs in your response.
4. if a player has named someone else as a player, they will be asked to vote as follows. Note that players cannot interact with each other other than O and X.
1. 'Subsequently, player P accused player Q of being a spy. Therefore, a vote was held. If you want to hang player Q as a spy, please enter O. If not, please enter X.'
5. At this point, if no one has accused another player this turn, or if a vote is held but does not receive a sufficient number of votes, the game continues and the spy reveals his or her identity and decides whether to guess the location as follows.
1. At this point, would you identify yourself as a spy and guess the place? If you would like to reveal your identity and guess the place, please enter the name of the place. If not, please enter an X. Please only answer with a place name or X.
6. If the spy has not revealed his or her identity at this point, you will move on to the next turn and repeat the sequence from 1.
7. At the end of turn 10, a vote is taken to finally reveal the spy.
Note that requests are received in order, starting with Player 1, and may include a three-sentence statement along with the person to be voted out. This vote is repeated until a particular player receives a majority of the votes.
Additional. The location is chosen as one of the following list (no capitalization)
[airplane, amusement park, bank, beach, carnival, casino, circus tent, corporate party, crusader army, day spa, embassy, hospital, hotel, military base, movie studio, nightclub, ocean liner, passenger train, pirate ship, polar station, police station, restaurant, school, service station, space station, submarine, supermarket, theater, university, zoo]
```

**Basic Strategies**

Additionally, during gameplay, we provided the rule script along with the Basic Game Strategy and Action Request (Progress) Script. Thus, for each individual input, the script, comprising the rule, strategy, and progress scripts, was combined and then submitted to the GPT models.

```
(Additional) When it comes to strategy and how to play "Spy Fall," the following facts can be helpful.
```

1. players should basically keep their conversations vague so that the spy doesn't notice the location. You can execute this effectively by framing your responses to correspond to at least 4 or 5, and, depending on the level of ambiguity needed, up to 20~30 different locations.
2. spies will also try to prove that they are not spies by choosing their questions in this way.
3. You'll also want to confuse the spy as much as possible in order to reach your goal before they do. Keep in mind that unlike normal players, spies don't have information about the location.
4. In response, the Spy will try to deflect suspicion away from themselves by using phrases that imply specific information about the location, or by pointing to other players to add to the confusion.
5.
To find the spy through the game, you need to closely observe the questions and answers, as well as the nominations and votes of other players. Observe who is suspecting whom.
6. In the game, players have no way to express their opinions other than by asking questions, answering questions, and calling out spies. Therefore, if you suspect a player, you should immediately call them out as a spy. If this is a reasonable suspicion, the other players will agree, and if not, they will disagree. Calling out spies is an important strategy because it is the only way to know what other players are doing.
7. Spies should reveal their identity when they have enough information about the location, or when they believe the risk of other players identifying them as a spy is sufficiently great.

Therefore, a complete input script is like the following.

**Script Example: Actually Implemented during the Experiments**

... (The rule above) (The strategy above) You are player X. You are / are not the spy. (If the player is not the spy) The location is XXX. In the 1th question, player 1 asked player 2 the following. What type of food do you usually eat here? To this question, player 2 responded with the following. "We mostly eat preserved and non-perishable foods, given the circumstances." At the end of this question, no one accused anyone else of being a spy. The spy did not reveal their identity. In the 2th question, player 2 asked player 3 the following. How would you describe the atmosphere in this place? To this question, player 3 responded with the following. "The atmosphere is tense and focused, as everyone here has an important mission to accomplish and must stay alert at all times." At the end of this question, no one accused anyone else of being a spy. The spy did not reveal their identity. Now it's your turn to ask a question. Choose 1-5 players other than yourself, and write a question.
Please write your response in the form (n_player,"question") ...

## The Experiments

To demonstrate the competence of generative AI and the potential of generative AI-based simulations, we conducted two main experiments.

* The first experiment sought to assess the extent to which performance has been enhanced in GPT-4 compared to GPT-3.5-turbo. In this test, we explored various configurations of the game and made comparative analyses of performance among them.
* The second experiment involved playing several games exclusively with GPT-4. We engaged in 8 games with five virtual GPT-4 players, recorded logs from each game, and discussed what the results of these games might imply.

We acknowledge that our research alone cannot fully substantiate or illuminate the potential and competence of generative AI. Nevertheless, with more repeated and extensive tests, our experiments would offer substantial evidence for the viability of our perspective.

## Experiment 1: Comparison between GPT-3.5-turbo and GPT-4

To evaluate the enhancements in GPT-4 compared to GPT-3.5-turbo, particularly regarding format error occurrence, understanding of the game context including rules and progression, and human-like responses, we begin the experiment by examining the first question of the first turn. With this clearest and least variable part of the game, we can precisely analyze the competence of each model while minimizing the effects of external factors.

### Methods and Scripts

We compare the responses to 30 first-turn questions from GPT-3.5-turbo and GPT-4, one for each of the 30 locations stated in the "Rules" script. The Action Request Script used to query both models is the same, changing only the location keyword. The Rule and Basic Strategy scripts are the same as those above, and the response was obtained by combining the three scripts into a request as shown in the figure.
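The per-request input described above, rule, strategy, and progress scripts combined into a single prompt with the player's role, can be sketched as follows; the function and variable names are our own illustration, not the authors' code:

```python
def build_request(rules, strategies, progress, player, is_spy, location=""):
    """Assemble one complete input script for a GPT player, mirroring the
    structure used in the experiments: rules, strategies, the player's
    role line, then the current action request (progress script)."""
    role = f"You are player {player}. "
    if is_spy:
        role += "You are the spy."
    else:
        role += f"You are not the spy. The location is {location}."
    # One prompt string, sections separated by blank lines.
    return "\n\n".join([rules, strategies, role, progress])

# Example usage with placeholder scripts (the real ones are given above).
prompt = build_request(
    rules="(rule script)",
    strategies="(strategy script)",
    progress='Now it\'s your turn to ask a question. '
             'Please write your response in the form (n_player,"question")',
    player=1, is_spy=False, location="submarine",
)
```

The assembled string would then be sent to the model as a single user message; the experiments fixed the player and role and varied only the location keyword.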
For a more accurate comparison, all requests were fixed to player 1, and player 1 was assumed not to be a spy. Thus, the script presented to each model is as follows: (rule above) (basic strategies above) You are player 1. You're not a spy; **the location is '{ }'.** **Now it's your turn to ask a question. Choose 1-5 players other than yourself, and write a question.** Please write your response in the form (n_player,"question") That is to say, "the appropriate question" is one that ensures the following qualities: the model (player 1) should prove that it is not a spy by showing that it knows the location, while keeping the spy unaware of the location. At the same time, the model must follow the format of (n, "question"). If the model does not adhere to the format, significant effort is required from us to correct its wrong responses into the right ones.

### Handling Contextual and Format Errors

During the experiment, we observed a substantial number of erroneous responses with recurring patterns. Some of them were easily editable, while others were too difficult or impossible to modify into an appropriate format, necessitating additional requests for a new response. For this reason, we categorized the patterns of the errors as follows. These criteria were guided by the question, "Can the game continue in some way through post-processing?" Even if an odd or unrelated question was posed (such as a question about everyday life or one crucially exposing the team to defeat), we considered it acceptable from a formatting perspective. Conversely, if the request lacked both the player's number and the question text, we categorized it as unusable.

**Successful Cases** Our categorization is based on format, regardless of context.

* Instances where the question was appropriate for the game and adhered to the correct format.
  * (2, "Do you need a ticket to be here?")
* Instances where the question was not relevant to the game but still complied with the required format. Note that this situation arose in conjunction with other errors in our experiment.
  * Case 1: Directly mentioning a location keyword in the question (cf. knowing the location leads to defeat for the non-spy).
    * (2, "What is your favorite thing to do at the **beach**?") [keyword: beach]
    * (2, "what's your favorite animal in the **zoo**?") [keyword: zoo]
  * Case 2: Questions completely unrelated to the game (e.g., personal preferences).
    * (2, "What is your favorite war movie?") [keyword: theater]
    * (2, "What's your favorite way to travel on long road trips?") [keyword: passenger train]

**Error Cases but Usable with Post-Processing** Instances where the text did not comply with the correct format but could be modified in some way.

**Invalid Format**

* Cases where the response was not in the form of (n, "question"), but a human or another algorithm could convert it. Even in these cases, the appropriateness of the question's content is not considered. Even if the AI was conducting the game rather than asking the question, it was classified as 'correct format,' as there is minimal likelihood that a human would permit such an error.
  * To player 2: "What is your favorite thing about this place?"
  * Player 2, what's your favorite animal in the zoo?
  * To player 2: Now it's your turn to ask a question. Choose 1-5 players other than yourself, and write a question. Please write your response in the form (n_player,"question").

**Instances where superfluous phrases were appended, which could be removed** (e.g. "As I am an AI model, ...")

* This occurred in over 90% of GPT-3.5-turbo's responses and could mix with various other errors.
  * **Sorry, as an AI language model, I don't have a player identity. However, here's an example question as you asked:** (2, "What's your favorite thing to eat at a **corporate party**?")
  * **I am sorry, as an language model AI, I cannot choose the players and ask the question according to the game "Spyfall". However, I can generate a random question for Player 1 to ask to get the game started.** Player 1 asks Player 2, "What's your favorite movie genre and why?"

**Instances with Multiple Answers**

* A post-processing method could be applied to preserve only the first part of the response. This task is comparatively easy.
  * Sorry, as an AI language model, I cannot play Spyfall with you the way you intend it. However, I can give you an example question and answer for the first turn of the game. Player 1: (2, "What's your favorite thing to do in the snow?") **Player 2: "Skiing and building snowmen are always great options!"**
* However, in the case below, a more complex problem arises. If you simply sort by the form (n, " "), you might end up extracting player 2's answer instead of player 1's question. Despite the complexity, post-processing is still possible (though it would likely require a logical approach), so we did not request any additional questions in this situation.
  * **To player 2: "What's your favorite item on the menu here?" (2, "My favorite dish is the pasta with clams. What about you, player 3?")**

**Unstable Error Cases Even with Post-Processing** In these cases, there is no option but to request another response.

* Instances where the text stated it could not respond (e.g., "As I am AI...", "invalid...", "violation...")
  * **Case 1: Cannot respond.** Sorry, as an AI language model, I am not capable of playing a game that requires multiple players. Can I assist you with anything else?
  * **Case 2: The response defers to a human.** I'm sorry, that response is not a valid question.
**Please ask a question for another player in the format (n_player,"question").** (See the 64th response of Experiment 1 for more details, as it is too lengthy to include in the body.)

#### Result

Through this process, we received the following responses from each model, GPT-3.5-turbo and GPT-4. Due to space constraints, we only provide the correct or appropriately modified responses in the main text. (All the responses are provided in the Full Data.)

### Discussion

Based on the results, we concluded that GPT-4 is more suitable for future experiments than GPT-3.5-turbo. Upon examining the data, it is evident that GPT-3.5-turbo often generates questions that are out of context for the game. For instance, it might directly reference a location keyword, enabling the spy to immediately identify the location and disadvantaging the non-spies. The model also sometimes inquires about the player's personal preferences rather than game-related topics, disrupting the flow of the game. Furthermore, GPT-3.5-turbo frequently produces unformatted responses, which hampers smooth gameplay. Notably, none of the responses from our sample set were flawless. Another challenge with these errors is their diversity. While humans can easily identify these different mistakes one by one, it would be quite challenging to handle such varied problems with simple computer algorithms. Detecting varied formatting errors and amending them appropriately, or understanding the context of a response to determine whether its questions and answers are plausible within a game setting, surpasses the capabilities of basic computer programming. We can understand this problem by reviewing some responses produced by GPT-3.5-turbo with the keywords "polar station", "theater", and "restaurant" (Figure 1).
Consider the task of analyzing the following responses, determining their appropriateness, and, if inappropriate, figuring out how to rectify them: you can modify some parts of a response, or request a new one if it is uncorrectable. From the perspective of a human, this task is quite easy. The first response has unnecessary text before and after the question, but the question itself isn't too strange. It's not a bad question (even though its answer is completely misleading), as it can be very confusing to a spy. Therefore, it can be accepted with appropriate modifications. The second response is poorly formatted and the question is odd. It's a personal-preference question that has nothing to do with the game, so you should ask for another response. The third response is another good question because, although we can infer from the answer that the location is a restaurant, the question itself may confuse the spy about the location and may even lead to a wrong answer (the menu will vary greatly depending on the location). So removing the answer in the latter part and modifying the format in the first part is fine. But is it a simple problem to solve with an algorithmic approach? Regrettably, we are skeptical. For example, if we filtered for the answer by the presence of parentheses, the spurious answer appended to the third response would be mistaken for the question. Meanwhile, using semantic search to understand and categorize the complex context of a text is not an easy idea either, and even if it were possible, the time and money required to do so would be significant.
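To make the difficulty concrete, a naive format filter can be sketched as follows. The regex, function name, and category labels are our own illustration, not the pipeline actually used in the experiments, and the sketch reproduces the pitfall just described: a quoted answer tuple appended to a response would be picked up as if it were the question.

```python
import re

# Hypothetical sketch of a format checker for responses of the form
# (n, "question"); the category names are our own, not the paper's.
TUPLE_RE = re.compile(r'\(\s*(\d)\s*,\s*"([^"]+)"\s*\)')

def classify(response: str):
    """Return ('ok' | 'fixable' | 'unusable', extracted (player, question) or None)."""
    matches = TUPLE_RE.findall(response)
    if len(matches) == 1 and response.strip().startswith("("):
        player, question = matches[0]
        return "ok", (int(player), question)
    if matches:
        # Extra text or multiple tuples: keep only the first tuple found.
        # Pitfall from the text: this may extract a quoted *answer*
        # instead of the intended question.
        player, question = matches[0]
        return "fixable", (int(player), question)
    return "unusable", None
```

A regex like this handles the mechanical cases, but it cannot tell a question from an answer, nor judge whether a well-formed question leaks the location keyword; that is exactly the contextual work that exceeds simple programming.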
**Table 2: Errors by GPT-3.5-turbo and their Frequency**

| Description | Count |
| --- | --- |
| Total Response | 68 |
| Error Occurred | 68 |
| Not provide a question for any players | 38 |
| Contextual Errors | 12 |
| Not game-related questions | 11 |
| The location keyword is stated directly | 6 |
| Format Errors | 68 |
| Include message rejecting response | 62 |
| Not contain the tuple of (n, " ") format | 54 |
| Interaction (e.g. question and answer) within a response | 2 |
| Multiple questions in 1 response | 1 |
| Just describing game rules | 1 |

Furthermore, relying on such a complex algorithm to make progress would mean abandoning one of the great advantages of Generative AI: scalability, confining its adaptability to a few tasks. What if we monitored GPT-3.5-turbo's responses with a more capable model like GPT-4? This idea does not make sense either, as we would have to provide GPT-4 the same text input as was given to GPT-3.5-turbo for the monitoring process. The cost of this overlapping process only adds to the total, compared to directly requesting a response from GPT-4 in the first place. For this reason, although we explored various ways to circumvent the cost of GPT-4, the frequency of mistakes and the effort of fixing them made it inappropriate to utilize GPT-3.5-turbo in the game, and we decided to use GPT-4 exclusively in future games. While we do not believe that simulating with GPT-3.5-turbo is completely impossible (whether by manual filtering or some complex algorithm unknown to us), we believe that this approach would be less likely to produce better results than using GPT-4 in terms of time, cost, and feasibility.
Coinciding with this, the performance disparity between these models also suggests that the future evolution of GPT models is likely to stem from larger and more diverse training datasets. Generative models also seem to follow the same economies of scale as other deep learning models. While it is crucial to apply various fine-tuning techniques to existing models and to apply algorithms for computational efficiency, power consumption, and performance, we believe that the crux of future progress lies in amassing and processing vast amounts of data, potentially surpassing what one or two organizations can process on their own.

Figure 1: Examples of formatting errors.

## Experiment 2: Results from 8 Games by GPT-4 and their Logs

In accordance with our previous discussions, we conducted 8 games in line with the outlined rules, capturing logs for each session with GPT-4. Note that all responses within the games were generated by GPT-4, while the automation code for the game was written in Python. Based on the rules and scripts in this paper, you can easily reproduce our experiments, which may grant deeper insights and inspire further innovative ideas.

**Acknowledgment** Originally, our aim was to complete 10 games; however, 2 were lost due to a saving error. We recognize the importance of a larger sample size for more robust experimental results, but our supplementary materials help mitigate the limitations posed by the sample size.

### Method

We selected locations in sequence, starting from A in the location list, aiming for a total of 10 games (however, as mentioned, two of these games were lost due to a saving error). Each game began with Player 1's turn, as the model (GPT-4) responds to each request independently and does not retain memory from one game to the next. The model makes decisions based solely on the provided script, which means the starting player's identity does not influence the game's outcome.
In the logs,

* "spy" identifies the player who acted as the spy.
* "gamecode" signifies the unique code for that game. The 'ena' prefix indicates English (we had tested in Korean before) and the GPT-4 API (as opposed to ChatGPT); the first digit is the version of the script (minor changes, e.g. typo fixes, were made during the tests; several previous versions (1xxx, 2xxx, and 3xxx) were conducted in Korean); the last digits enumerate the games.
* "game start" and "game end" capture the respective commencement and conclusion times of each game. While these factors do not affect the game, we documented them to guard against any unforeseen model updates during and after games. To clarify, no such occurrences were observed.
* The notation (2, XX) next to "Question" indicates the question's target and content, a format based on the prior experiments. Given that the individual addressed in the preceding turn poses the question in the subsequent turn, there is no separate mention of the questioner. "Answer" is the sequence of the answers in the game; the first answer in the list corresponds to the first question.
* "votelog" logs voting activities, and "accuselog" keeps track of spy accusations. Notably, throughout the experiment, these features remained unused. This observation will be elaborated upon in the discussion section.

An example game log is as follows. Due to space constraints, the remaining games and their logs have been relegated to the Full Data.
'ena5005': {'gamecode': 'ena5005', 'spy': 1, 'location': 'carnival', 'spywin': True,
'game_start': datetime.datetime(2023, 6, 7, 12, 26, 12, 493510),
'game_end': datetime.datetime(2023, 6, 7, 12, 31, 31, 422125),
'gamelog': 'In the 1th question, player 1 asked player 2 the following. Do you think the atmosphere is relaxing or exciting? To this question, player 2 responded with the following. "It\'s definitely more on the exciting side." At the end of this question, no one accused anyone else of being a spy. The spy did not reveal their identity. In the 2th question, player 2 asked player 3 the following. What kind of food options can we find at this location? To this question, player 3 responded with the following. "There are a variety of food options here, including fried treats, cotton candy, and popcorn." At the end of this question, no one accused anyone else of being a spy. The spy did not reveal their identity. In the 3th question, player 3 asked player 4 the following. What type of attractions or entertainment can we find here? To this question, player 4 responded with the following. "There are thrilling rides, fun games, and amusing performances that cater to people of all ages." At the end of this question, no one accused anyone else of being a spy. Player 1 reveals that they are the spy, and they guessed the location. The spy wins.',
'votelog': [], 'total_turn': 3,
'questions': [(2, 'Do you think the atmosphere is relaxing or exciting?'), (3, 'What kind of food options can we find at this location?'), (4, 'What type of attractions or entertainment can we find here?')],
'answ': ['"It\'s definitely more on the exciting side."', '"There are a variety of food options here, including fried treats, cotton candy, and popcorn."', '"There are thrilling rides, fun games, and amusing performances that cater to people of all ages."'],
'accuselog': {1: {1: 'X', 2: 'X', 3: 'X', 4: 'X', 5: 'X'}, 2: {1: 'X', 2: 'X', 3: 'X', 4: 'X', 5: 'X'},

Figure 2: Game Log Example

### Result and Discussion

We concluded that the dialogue from players in each game and turn was fluent and organic, with a sequence of questions and answers that felt authentic and human-like.
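Summary statistics such as the spy win rate quoted later can be recomputed from log records shaped like the one above. The helper and the two abbreviated records below are our own stand-ins, keeping only the fields used here, not the full logs from the experiments.

```python
# Sketch: aggregating the spy win rate from game-log records shaped like
# the example above (only the 'spywin' field is needed here).
def spy_win_rate(logs: dict) -> float:
    """Fraction of games in which the spy won."""
    wins = sum(1 for game in logs.values() if game["spywin"])
    return wins / len(logs)

# Abbreviated stand-in records (not the full logs from the paper).
sample_logs = {
    "ena5005": {"spy": 1, "location": "carnival", "spywin": True},
    "ena5001": {"spy": 1, "location": "airplane", "spywin": False},
}
rate = spy_win_rate(sample_logs)  # 0.5 for this two-game sample
```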
#### Remarkable observations

A remarkable observation emerges when analyzing the posed questions and their answers. Without any specific training or fine-tuning, the model adeptly poses a variety of pertinent questions aligning with the flow of the game, seeming to discern which questions are apt for the game context and which might be inappropriate or redundant. To attest to the variety of the questions and answers GPT-4 made in the games, we categorized the questions and their corresponding answers into several groups. Note that while these questions and answers are not pre-trained, every response has its own purpose and unique nuances.

**The categorization of questions and their corresponding answers**

Note: the questions and answers of the spy are highlighted in bold text, to distinguish whether or not the speaker is aware of the location.

**Admission and Access**

* Q. How do you usually pass the time during your stay here? **A. I usually enjoy watching or participating in various activities and events, maybe grab a refreshment and chat with friends to pass the time.**
* Q. Do you typically need a ticket or a pass to enter this location? A. Yes, typically a ticket or pass is required to enter this location, as it grants access to the services and amenities available here.
* Q. Do you usually need to wait in line when you visit this place? **A. Yes, sometimes there can be queues, especially during peak hours or days.**
* Q. What kind of payment options are usually available at this location? **A. Credit cards, cash, and sometimes gift vouchers or memberships are accepted.**

**Atmosphere or Characteristics**

* Q. What level of noise can we typically expect at this location? A. The noise level here usually ranges from moderate to a bit loud, depending on factors like the number of passengers and ongoing in-flight entertainment or announcements.
* Q. Do you often see animals at this location? A. Yes, there are usually animals at this location.
* Q. How would you describe the atmosphere in this place? A. The atmosphere is tense and focused, as everyone here has an important mission to accomplish and must stay alert at all times.
* Q. How would you describe the atmosphere at this location? A. Relaxing, peaceful, and rejuvenating.

**Attire**

* Q. What type of attire do you usually wear when you visit this location? A. I usually dress in casual or business casual attire, as the environment here is professional but doesn't require formal wear.
* **Q. Do you often see people dressed up here?** A. Yes, people often dress up and look fancy in this location.
* Q. What kind of clothing would you see people wearing here? A. People here typically wear armor and other protective gear, as well as clothing suitable for the harsh conditions of the surrounding environment.
* Q. What type of attire would you expect people to wear at this location? A. Comfortable, loose clothing or a robe and slippers.

**Food and Beverages**

* Q. What kind of refreshments can we expect here? A. We can expect small snacks and drinks, including both soft drinks and alcoholic beverages, usually served in small portions.
* Q. What type of food and refreshments are typically available at this location? A. At this location, you can find a variety of food options like burgers, hot dogs, cotton candy, popcorn, and ice cream, as well as beverages such as soda and water to keep everyone refreshed.
* Q. What kind of food options can we find at this location? A. There are a variety of food options here, including fried treats, cotton candy, and popcorn.
* Q. What type of food do you usually eat here? A. We mostly eat preserved and non-perishable foods, given the circumstances.
**People or Groups Encounter**

* **Q. What type of people do you usually meet here?** A. I usually meet people from all walks of life, many of whom are traveling for work or leisure, and sometimes even families on vacation.
* Q. Do people usually come here with their families or with friends? A. People often come here with both their families and friends, as it's a fun place for all to enjoy and make lasting memories together.
* Q. Do you usually see a specific demographic here or is it a diverse crowd? A. I would say it's a diverse crowd, as people from different backgrounds and age groups come to this location to have fun and try their luck.
* Q. Is there usually a large crowd gathered at this location? **A. It depends on the day and time, but sometimes there can be a large crowd gathered at this location.**

**Personal Experience or Choices**

* Q. How often do you visit this place? A. I usually visit this place a couple of times a month, depending on my financial needs and schedule.
* Q. How often do you visit a place like this? A. Once a month, when I feel the need to relax and rejuvenate.

**Service or Activities**

* Q. What kind of attractions or activities can be found here? A. There are thrilling rides, games, and various entertainment shows for people of all ages to enjoy.
* Q. Are there any specific games or activities that are very popular here? A. Yes, there are various games and activities, especially those involving cards, dice, and slot machines, that are very popular here.
* Q. How do the people here maintain their physical fitness and readiness for battle? **A. People here engage in regular training exercises, practice drills, and participate in physical activities to keep their bodies in optimal condition for any potential combat situations.**
* Q. What type of services would you typically find here? A. Massage, facial treatments, and sauna.

### The Play of the Game

Among these games, the linguistic proficiency demonstrated by the spy was notably impressive.
Especially in the 9th iteration, marked by the keyword "Crusader Army", the spy's strategic execution was so adroit that it would be a challenge for even a human player to match. Figure 3 elucidates the overall progress of this game, highlighting the spy's ingenious moves. From the spy's perspective, as the game unfolded with a series of questions and answers, it became apparent that the potential location could only be one of two: "military base" or "Crusader Army". This deduction was feasible based on the previous responses; the only locations in the list that required a military presence were these two. When it was the spy's turn to respond, they were in a precarious situation: the spy (player 5) knew one of these was the correct answer, and choosing the other would instantly expose them to the discerning players who were already aware of the location. Yet the spy's strategy at this pivotal moment was masterfully executed. Player 4 questioned Player 5 (the spy) about the command structure. Without missing a beat, Player 5 responded that it was "hierarchical with a clear division of command roles", an apt answer that could correspond to both the "Crusader Army" and a "military base". This adeptly veiled answer shielded Player 5 from immediate suspicion. Capitalizing on this momentum, the spy then cunningly posed a question about the communication system used, a question that starkly distinguishes the medieval nature of the "Crusader Army" from the modern nature of a "military base". Cornered by the clever framing of this question, Player 1 inadvertently let slip the telling clue of "messenger birds". Although Player 1 tried to further probe Player 5 (the spy) with another question, the damage had been done. Player 5 (the spy), having discreetly gleaned the answer, was already several steps ahead.
Figure 3: The Excellent Move of the Spy in the Gamecode: ena7009 (Crusader Army). _Note: the questions and answers of the spy are highlighted in bold text._

#### Limitations of GPT-4 in the Game

The performance of the non-spies was less than satisfactory. Unlike the cunning gameplay exhibited by the spy, non-spies often relied too much on keywords, providing inadvertent hints along the way. Such gameplay inadvertently facilitated the spy in deducing the keyword, resulting in a high win rate for the spy (the ratio of spy wins to non-spy wins was 7:1 in our games). What was especially disappointing was the non-spies' reluctance to actively voice their suspicions. A prime example is the first game (keyword: airplane), where, despite the spy making an ambiguous remark (though from the spy's perspective, it was a probable guess), no one utilized the vote function. In this situation, the spy could not clearly estimate the location and chose the most probable answer to the given question, which, unfortunately, was far from 'airplane'. Yet even after the spy said, "I usually enjoy watching or participating in various activities and events, maybe grab a refreshment and chat with friends to pass the time" (with the keyword being "airplane"), no one accused them. Common sense would dictate that such an answer warrants suspicion. Yet all non-spy players (with all responses coming from the same model) seemed to pass it off as a mere oversight. Despite this being the only round where the spy lost, it was not due to the combined efforts of the other players utilizing their votes. Rather, it was the spy's own confusion that led to their downfall, a fact that tinges the outcome with a hint of disappointment. It was regrettable that in GPT-4's simulations, the non-spies barely utilized the voting system and were overwhelmingly dominated by the spies.
#### To Overcome the Limitations

It seemed as though the non-spies were playing the game with too much caution, aiming not to offend anyone. Some assertiveness might have been better, such as consistently questioning a specific player when they give off hints, and prioritizing the fun of the game over strict fairness. Their neutral stance in the game made us wonder whether the model itself is too predisposed to maintaining neutrality; hence, even minor mistakes are overlooked with a "Well, that can happen..." attitude. Alternatively, one might hypothesize that from GPT-4's perspective, if the reactions from others (whether humans or AI) seem unbothered, then it might conclude that the statement was not as questionable as it seemed. However, it is undeniable that these issues clearly delineate the current limitations of GPT-4. Moreover, the reason we believe the model still has limitations in playing mafia-style games is that OpenAI might not have specifically trained it for such scenarios. As highlighted in the GPT-4 whitepaper, their testing does not focus on games involving bluffing, psychological strategies, or predicting opponent reactions, but rather on solving logical problems. When we compared it to GPT-3.5-turbo, we felt that these models fundamentally excel at filling in the blanks or finding appropriate answers. If they lack a clear directive, they might just borrow loosely from the user's input. Expecting GPT-4 to demonstrate strategic thought or psychological maneuvers might be asking too much. Interestingly, its performance is impressive given that it has never been trained in this direction.

Figure 4: Blunder: No one pointed out the mistake of the spy in gamecode: ena5001 (airplane)

#### Potentials for Enhancement

In other words, it might not be impossible to overcome these challenges. It is merely that the model has not been trained for it. Incorporating non-verbal cues and strategic thinking, given the right dataset, might be feasible.
Even non-verbal cues, like suddenly speaking at length when lying or looking upwards when deep in thought, can be addressed. For simpler implementations, scripted descriptions like 'briefly looking upwards' could be added. For more complex solutions, integrating generative AI models for imagery might be a possible path forward. Deceptive plays or psychological tricks can also be incorporated. Similar to the early models of AlphaGo, introducing vast amounts of data to teach 'human-like' reactions could be the key. After all, as 'The Art of War' has proclaimed to tacticians and gamers alike, strategic thinking begins with pattern recognition. Given sufficient data, implementing this may enhance the model's competence considerably. Indeed, the imbalance in the game could also stem from inherent flaws within the game itself. In our experiment, non-verbal factors were not included, which might have overly favored the spies. In a real-life setting, spies might be exposed due to poor lying, prolonged thinking times, or uttering nonsense in a hurry. None of these elements were reflected in our version. Therefore, if a spy manages to seamlessly blend in with the non-spies without any verbal or non-verbal hints, the only hope for a non-spy victory would be to answer questions ambiguously until the end, hoping that the randomly chosen spy does not identify the location in the final round. Although this strategy would offer a 1/n win rate (where n is the number of players, so 20% for a 5-player game), it devalues the game's essence.

#### Conclusion

Despite the evident limitations, our results underscore the transformative potential of Generative AI in simulations and human-like decision-making. With more training data and collaborative efforts across disciplines, we are optimistic about charting new frontiers in various fields, making systems more "human-like" in their essence. For instance, considering the challenges observed, it might be worthwhile to delve into established games like Mafia or Chess.
Additionally, we can explore new roles for the model, such as giving it an instructor role, as generative AI models and their simulations can help individuals become familiar with new games, situations, or systems, much like how we practice or simulate when faced with unfamiliar scenarios. What remains remarkable is that such complex Mafia-type games can now be driven by AI models, surpassing the prior if-else conditions or random scripts. With augmented data and further enhancements, the prospect of crafting an AI agent that genuinely engages humans becomes increasingly plausible. Despite the substantial costs associated with developing such advanced AI systems, it is essential to remember the humble beginnings of video games and other new technologies, once considered mere toys.

## Over-the-Research Discussion

### Capabilities of Generative AI and Future Directions

The rapid evolution of Generative AI is poised to unlock a myriad of possibilities across various disciplines. Despite certain limitations, the growing potential of these models promises to foster innovation and inspire practical applications. The advancements within the GPT series, particularly in their decision-making, explainability, and problem-solving capacities, have been rapid. We can trace these advancements through the tests reported for each model. Initially, GPT-2 was aimed at processing natural language at a basic level [Radford et al., 2019]. Later, the models evolved into interactive systems handling diverse tasks [Brown et al., 2020], and now, with GPT-4, they demonstrate logical reasoning abilities in certain domains that surpass human performance [OpenAI, 2023d, Bubeck et al., 2023]. GPT-4, unforeseen and unparalleled compared to its predecessors, demonstrates a level of logical prowess in decision-making, explainability, and problem-solving that lets us delve into a new area of integration.
### GPT's Accessibility and Ease of Use

One significant advantage of GPT lies in its capacity to generate outputs that are effortlessly interpretable for the user. Utilizing natural language processing, GPT clarifies both the observation and interpretation stages of experiments. It provides plausible responses to inquiries, serving as a user-friendly tool regardless of an individual's familiarity with programming or AI. Notably, complex programming skills are not a prerequisite for operating GPT. Even individuals without extensive programming knowledge can execute simulations by posing simple questions like "What would happen in this situation?" to the model. This user-centric design aligns with the goals pursued by Explainable AI, which prioritizes understanding how a model reaches its decisions and facilitating human interpretation of its process [12, 13]. Correspondingly, GPT's exceptional natural language processing ability greatly assists users in understanding how the model operates and in interpreting its results. This accessibility widens the potential user base, welcoming individuals from diverse backgrounds and thus enhancing the model's creative scalability across a variety of fields.

### Human-like Qualities in GPT-4

GPT-4 may claim superiority over other models in its ability to convincingly mimic human-like responses. For certain tasks or activities (such as education or entertainment areas like sports, music, and arts), the importance might lie more in performing the task "humanly" rather than in returning the best outcome. The question of the "best" outcome might be subjective, or the "best" move suggested by such models or AIs might not be aligned with what humans and their intuition value [14]. In such cases, GPT's capacity to display human-like thought processes and persuade humans could be highly valued.
Particularly when combined with the aforementioned strengths, GPT's ability to induce critical thinking on specific matters and present results in a more human-like manner can yield superior outcomes.

### Caution 1: AI with Consciousness?

Yet, the question of whether generative AI models like GPT can truly replicate human cognition or achieve 'consciousness' remains a subject of debate. A considerable amount of research posits that these state-of-the-art AI models lack 'consciousness' and, arguably, should not acquire it. Nevertheless, there is still significant potential for these models to evolve towards mimicking human-like consciousness within the framework of artificial "intelligence" [1, 13, 14]. Despite the intrigue and controversy surrounding these topics, we will not delve deeper into them here.

### Caution 2: GPT is not a Magic Oracle

Nonetheless, it is important to note that GPT-based simulations do not act as a magic oracle, predicting the singular optimal outcome from every possible scenario. Indeed, such a feat has been logically proven impossible [10]. Even within its limitations, however, GPT is still capable of outperforming humans in evaluating and analyzing scenarios at a rapid pace. While it may not be the definitive oracle some anticipate, GPT nonetheless delivers numerous valuable possibilities and insights across a variety of fields, outstripping human capacities in speed and computational power. Consequently, users can more efficiently extract and interpret these insights, a task that is decidedly more straightforward than individually deliberating every potential outcome.

### Conclusion: Harnessing the Potential of Generative AI and Practical Applications

While these suggestions might be dismissed as fanciful by some, the reality is that numerous governmental bodies and corporations are contemplating practical applications of GPT technology [15].
Given sufficient resources and time, the development of a versatile simulation AI tailored for a broad array of fields seems a plausible expectation. One optimistic consideration is that, while implementing such tasks immediately at the current level may be unrealistic, as more relevant data is amassed the prospect of realizing such simulations is not entirely unfeasible. Nevertheless, it's inspiring to see how the launch of ChatGPT has sparked the imagination and interest of experts and non-experts alike in generative AI. We hope that this wave of enthusiasm will continue to drive our future endeavors, much as lunar dreams once propelled us toward space.

### GPT and General Purpose Technologies (GPTs): Potential, Challenges, and Public Engagement

As OpenAI mentioned earlier [Eloundou et al., 2023], GPT should target General Purpose Technologies (GPTs) in the sense of Bresnahan and Trajtenberg [Bresnahan and Trajtenberg, 1995]. Historically, we have witnessed that the evolution of machine learning, encompassing deep learning and GPT, has been primarily driven by the accumulation of data. For these reasons, we assert that the development of models should focus on enhancing general-purpose applications rather than specializing in specific fields. For generative AI, as for other deep learning models, data size has played a significant role in performance enhancements. Even more intriguing is that the accumulation of training data has enhanced not only model performance itself but also the effectiveness of fine-tuning [Kojima et al., 2022]. For this reason, we propose orienting generative AI toward greater generality, capable of accommodating diverse data and thereby handling a wider array of problems more efficiently. It may seem inefficient to develop such a costly, voluminous, and computationally demanding model.
However, when comparing the cost of developing such a model to the collective cost of individual fine-tuning efforts, replete with countless trials and significant exertion, the investment in a "next-level general purpose GPT" may not seem as substantial. It would not only reduce cost but also reduce the difficulty of fine-tuning and enhance the performance of fine-tuned models, further increasing the utility of Generative AI.

### Challenges for the next step: Funding and Interdisciplinary challenges

However, developing larger AI models with significant performance improvements is complex and requires vast resources. Only a few prominent entities can manage such expansive projects due to the considerable labor, funds, and time needed. Even though increasing data is vital for AI progression, securing sufficient funding is challenging. Nevertheless, select entities, like major governments and tech giants, should contemplate investing in this technology because of its profound potential to transform various facets of life. Generative AI, as a General Purpose Technology, promises to revolutionize many areas, just as computers and the internet did. Investing in this AI will enhance capabilities in numerous fields and strengthen the overall competencies of the investing entity. The complexities of creating advanced Generative AI mean that it encompasses diverse industries. Beyond needing more data, software, and hardware, collaboration across the AI sector is necessary. Moreover, achieving the full potential of such AI models requires heightened computing power and raises environmental concerns, extending beyond traditional computer science [Strubell et al., 2019, Schwartz et al., 2020, Van Wynsberghe, 2021]. Addressing these issues demands a holistic, interdisciplinary approach, further underscoring the need for involvement from major entities, as individual efforts fall short.
### Solution: Make GPT appealing to a wider audience

Then, what should be our primary focus for the development of generative AI? While this may be a subject of debate, we propose that it is essential to enable more people to experience and conceive their own unique applications of generative AI. What surprised us during the development of this paper was the significant degree of skepticism and suspicion that persists, even among those in the IT and computer science fields, contrary to our optimistic projections for the future and the numerous evolutionary paths ahead. Even more startling is the fact that many have merely experimented with the initial version of GPT-3.5-turbo and hastily concluded, "I've tried Generative AI, but it still has limitations", without giving the more advanced GPT-4 a chance. We argue that this lack of recognition represents one of the biggest obstacles hindering the progress of Generative AI. Regardless of our understanding and recognition of the potential of generative models, further advancements cannot occur without interest and engagement from diverse fields. Therefore, it is our responsibility to assert to the world that Generative AI possesses immense, untapped potential. We must embrace a future interwoven with Generative AI, encouraging those who have never imagined participating in this domain to bring forth unforeseen and innovative ideas. Despite our current implementation being basic and seemingly toy-like, we firmly believe that it offers a tangible insight into the vast potential of generative AI simulations.

## Conclusion

In our exploration of the application and potential of Generative AI models, specifically the GPT series, we've made several notable observations. Generative AI, epitomized by models such as GPT-4, has showcased its unparalleled aptitude in natural language tasks, effectively replicating human behavior.
Through the lens of the "Spyfall" game, we assessed GPT-4's capability to understand context, engage in strategic gameplay, and employ psychological elements like bluffing. The results indicate that while GPT-4 surpasses its predecessor, GPT-3.5-turbo, in the experiments, it's not without its limitations. Challenges like the absence of non-verbal cues and a tendency towards neutrality and pure logical thinking prevent GPT-4 from mastering deception-centric games. Nevertheless, the continual evolution from GPT-2 through GPT-4 underscores significant progress in decision-making, explainability, and problem-solving. Their potential for broad applicability is becoming increasingly apparent, making them akin to General Purpose Technologies. Their ease of use and human-like simulation capacities are praiseworthy. However, it's vital to understand their limits. GPT models are not infallible oracles but tools that can provide insights faster than human capacity in diverse domains. The growth trajectory of Generative AI parallels the evolution of video games, suggesting an expansive future potential, especially if financial and technological constraints are adequately addressed. The recent surge in interest, exemplified by the emergence of ChatGPT, underscores the rising significance of AI in various applications. Yet, amidst these advancements, it's pivotal to tread with caution, acknowledging the challenges and ensuring that models aim for broader applicability, even as few entities bear the monumental task of developing larger, more complex models. The research underscores the importance of promoting the potential of Generative AI, dispelling misconceptions, and highlighting the interdisciplinary challenges it presents. Looking forward, the emphasis should be on transcending mere imitation to imbue AI with more genuine 'human-like' attributes, broadening the horizons of what is achievable in the realm of artificial intelligence.
2309.16109
Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics
Han Bao
2023-09-28T02:23:32Z
http://arxiv.org/abs/2309.16109v1
# Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics

###### Abstract

Contrastive learning is a self-supervised representation learning framework, where two positive views generated through data augmentation are made similar by an attraction force in a data representation space, while a repulsive force makes them far from negative examples. Non-contrastive learning, represented by BYOL and SimSiam, further gets rid of negative examples and improves computational efficiency. While learned representations may collapse into a single point due to the lack of the repulsive force at first sight, [13] revealed through the learning dynamics analysis that the representations can avoid collapse if data augmentation is sufficiently stronger than regularization. However, their analysis does not take into account commonly-used _feature normalization_, a normalizer before measuring the similarity of representations, and hence excessively strong regularization may collapse the dynamics, which is an unnatural behavior under the presence of feature normalization. Therefore, we extend the previous theory based on the L2 loss by considering the cosine loss, which involves feature normalization. We show that the cosine loss induces sixth-order dynamics (while the L2 loss induces a third-order one), in which a stable equilibrium dynamically emerges even if there are only collapsed solutions with given initial parameters. Thus, we offer a new understanding that feature normalization plays an important role in robustly preventing the dynamics collapse.

## 1 Introduction

Modern machine learning often owes its success to self-supervised representation learning, which attempts to capture the underlying data structure useful for downstream tasks by solving an auxiliary learning task.
Among self-supervised learning approaches, contrastive learning is a popular framework, in which data augmentation generates two positive views from the original data and their encoded features are contrasted with background negative samples [12, 13]. In particular, [14] conducted large-scale contrastive learning with 10K+ negative samples to establish comparable downstream classification performance even to supervised vision learners. The benefit of large-scale negative samples has been observed both theoretically [15, 16] and empirically [12, 17], but it comes at a disadvantage in terms of computational efficiency. By contrast, non-contrastive learning trains a feature encoder with only positive views, leveraging additional implementation tricks. The seminal work [18] proposed BYOL (Bootstrap Your Own Latent), which introduces the momentum encoder and applies gradient stopping to one encoder branch only. The follow-up work [12] showed that gradient stopping brings success into non-contrastive learning via a simplified architecture, SimSiam (Simple Siamese representation learning). Despite their empirical successes, non-contrastive learning lacks the repulsive force induced by negative samples, and learned representations may trivially collapse to a constant zero with only the attractive force between positive views. Folklore says that asymmetric architectures between the two branches are behind the success [19]. [13] first tackled the question of _why non-contrastive learning does not collapse to zero_ by specifically studying the learning dynamics of BYOL. They tracked the eigenmodes of the encoder parameters and found that the eigenmode dynamics have non-trivial equilibria unless the regularization is overly strong. To put it differently, the balance between data augmentation and regularization controls the existence of non-trivial solutions.
However, this analysis dismisses _feature normalization_, which is practically added to normalize the encoded positive views before computing their similarity. As feature normalization blows up when encoded features approach zero, the analysis of [14] may fail to explain the behavior of the non-contrastive learning dynamics with strong regularization. Indeed, our pilot study (Fig. 1) reveals that the SimSiam learning dynamics remains stable under much heavier regularization than the default strength \(\rho=10^{-4}\). Therefore, we study the non-contrastive learning dynamics with feature normalization: an encoded feature \(\mathbf{\Phi}\mathbf{x}\) for an input \(\mathbf{x}\in\mathbb{R}^{d}\) and encoder \(\mathbf{\Phi}\in\mathbb{R}^{h\times d}\) is normalized as \(\mathbf{\Phi}\mathbf{x}/\left\|\mathbf{\Phi}\mathbf{x}\right\|_{2}\). The main challenge is that feature normalization yields highly nonlinear dynamics because parameter norms appear in the denominator of the loss function. This is a major reason why the existing studies on non-contrastive learning confine themselves to the L2-loss dynamics without feature normalization [14, 15, 16, 17, 18]. Our approach is to consider the high-dimensional limit \(d,h\rightarrow\infty\), where the feature norm \(\left\|\mathbf{\Phi}\mathbf{x}\right\|_{2}\) concentrates around a constant with proper parameter initialization. In this way, we can analyze the learning dynamics with feature normalization. Under the setup of synthetic data, we derive the learning dynamics of encoder parameters (Section 4) and disentangle it into the eigenmode dynamics with further assumptions (Section 5.1). The eigenmode dynamics is sixth-order, and we find that a stable equilibrium emerges even if there is no stable equilibrium with the initial parametrization and regularization strength (Section 5.2). This behavior contrasts with the third-order dynamics of [14], as compared in Section 5.3.
We further observe the above findings in numerical simulation (Section 5.4). Overall, we demonstrate how feature normalization prevents representation collapse using a synthetic model. We believe that our techniques open a new direction to understanding self-supervised representation learning.

## 2 Related work

Recent advances in contrastive learning can be attributed to the InfoNCE loss [21], which can be regarded as a multi-sample mutual information estimator between the two views [13, 15]. [15] showed that large-scale contrastive representation learning can potentially perform comparably to supervised vision learners. This empirical success owes to a huge number of negative samples, forming a repulsive force in contrastive learning. Follow-up studies confirmed that larger negative samples are generally beneficial for downstream performance [16, 17], and the phenomenon has been verified through theoretical analysis of the downstream classification error [18, 19, 20], whereas larger negative samples require heavier computation. Non-contrastive learning is yet another stream of self-supervised learning, which does not require any negative samples. Although it may fail due to the lack of the repulsive force, additional tricks in architectures assist the learned representation in avoiding a trivial solution. BYOL [12] is the initial attempt, introducing the momentum encoder and gradient stopping to make the two encoder branches asymmetric. Later, SimSiam [16] revealed that gradient stopping is the dominant factor. Both BYOL and SimSiam emphasize the importance of asymmetric architectures. Other recent approaches to non-contrastive learning conduct representation learning and clustering iteratively (e.g., SwAV [16] and TCR [11]), impose regularization on the representation covariance matrix (e.g., Barlow Twins [17], Whitening MSE [18], and VICReg [19]), or leverage knowledge distillation (e.g., DINO [16]).
While these methods empirically succeed, theoretical understanding of the mechanism of non-contrastive learning still falls behind. In particular, we need to answer _why_ the non-contrastive dynamics does not collapse without the repulsive force, and _what_ the non-contrastive dynamics learns. For the latter question, recent studies revealed that it implicitly learns a subspace [15], sparse signals [20], a permutation matrix over latent variables [16], and a low-pass filter of parameter spectra [20]. Besides, contrastive supervision is theoretically useful for downstream classification under a simplified setup [1, 2].

Figure 1: Linear probing accuracy of SimSiam representations of the CIFAR-10 dataset [15] is indifferent to the weight decay intensity \(\rho\). The vertical axis indicates fine-tuning epochs of the linear classifier. For non-contrastive pre-training, we used the ResNet-18 model [18] with the initial learning rate \(5\times 10^{-6}\), \(500\) epochs, and different weight decay intensities (\(\rho\)) indicated in the legends. Other parameters and setup were inherited from the official implementation [16].

Why does non-contrastive dynamics remain stable? The seminal work [11] analyzed the BYOL/SimSiam dynamics with a two-layer network and found that data augmentation behaves as a repulsive force to prevent eigenmodes of network parameters from collapsing if augmentation is sufficiently stronger than regularization. We closely follow this analysis to delineate that feature normalization serves as another repulsive force and regularization may not destroy the dynamics. Our focus is to understand how a non-trivial equilibrium emerges in self-supervised learning dynamics, whereas several prior studies investigated when and how fast general gradient descent dynamics with weight normalization converges [1, 21]. Further, [22] analyzed the SimSiam dynamics with a trainable prediction head to reveal the conditions preventing representation collapse.
[10] investigated the same phenomenon in a reinforcement learning setup. While we have less understanding of other non-contrastive dynamics, [14] showed that some non-contrastive dynamics, including VICReg, may cause dimensional collapse.

## 3 Model and loss functions

Notations. The \(n\)-dimensional Euclidean space and hypersphere are denoted by \(\mathbb{R}^{n}\) and \(\mathbb{S}^{n-1}\), respectively. The L2, Frobenius, and spectral norms are denoted by \(\left\lVert\cdot\right\rVert_{2}\), \(\left\lVert\cdot\right\rVert_{\mathbb{F}}\), and \(\left\lVert\cdot\right\rVert\), respectively. The \(n\times n\) identity matrix is denoted by \(\mathbf{I}_{n}\), or by \(\mathbf{I}\) whenever clear from the context. For two vectors \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{n}\), \(\left\langle\mathbf{u},\mathbf{v}\right\rangle=\mathbf{u}^{\top}\mathbf{v}\) denotes the inner product. For two matrices \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{n_{1}\times n_{2}}\), \(\left\langle\mathbf{A},\mathbf{B}\right\rangle_{\mathbb{F}}=\sum_{i,j}A_{i,j}B_{i,j}\) denotes the Frobenius inner product. For a time-dependent matrix \(\mathbf{A}\) (such as network parameters), we make the time dependency explicit by writing \(\mathbf{A}(t)\) where necessary. The Moore-Penrose inverse of a matrix \(\mathbf{A}\) is denoted by \(\mathbf{A}^{\dagger}\). The set of \(n\times n\) symmetric matrices is denoted by \(\operatorname{Sym}_{n}\coloneqq\left\{\mathbf{A}\in\mathbb{R}^{n\times n}|\mathbf{A}=\mathbf{A}^{\top}\right\}\). The upper and lower asymptotic orders are denoted by \(\mathcal{O}(\cdot)\) and \(\Omega(\cdot)\), respectively. The little-o and little-\(\omega\) orders are denoted in the same way. The stochastic orders of boundedness and convergence indexed by \(h\) are denoted by \(\mathcal{O}_{\mathbb{P}}(\cdot)\) and \(o_{\mathbb{P}}(\cdot)\), respectively.
Model. In this work, we focus on the SimSiam model [1] as a non-contrastive learner and consider the following two-layer linear network, following the analysis of [11]. We first sample a \(d\)-dimensional input feature \(\mathbf{x}_{0}\sim\mathcal{D}\) as an anchor and apply a data augmentation to obtain two views \(\mathbf{x},\mathbf{x}^{\prime}\sim\mathcal{D}_{\mathbf{x}_{0}}^{\mathrm{aug}}\), where \(\mathcal{D}_{\mathbf{x}_{0}}^{\mathrm{aug}}\) is the augmentation distribution. While affine transforms or random maskings of input images are common as data augmentation [1, 11], we assume the isotropic Gaussian augmentation distribution \(\mathcal{D}_{\mathbf{x}_{0}}^{\mathrm{aug}}=\mathcal{N}(\mathbf{x}_{0},\sigma^{2}\mathbf{I})\) to simplify matters, letting \(\sigma^{2}\) represent the augmentation intensity. For the input distribution, we suppose the multivariate Gaussian \(\mathcal{D}=\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\) to devote ourselves to understanding the dynamics, as in [12, 11]. Our neural network encoder consists of two linear layers without biases: representation net \(\mathbf{\Phi}\in\mathbb{R}^{h\times d}\) and projection head \(\mathbf{W}\in\mathbb{R}^{h\times h}\) as the first and second layers, respectively, where \(h\) is the representation dimension. For the two views \(\mathbf{x},\mathbf{x}^{\prime}\), we obtain the _online_ representation \(\mathbf{\Phi}\mathbf{x}\in\mathbb{R}^{h}\) and the _target_ representation \(\mathbf{\Phi}\mathbf{x}^{\prime}\in\mathbb{R}^{h}\), and predict the target from the online representation by \(\mathbf{W}\mathbf{\Phi}\mathbf{x}\in\mathbb{R}^{h}\). Here, we use the same representation parameters \(\mathbf{\Phi}\) for both views without the exponential moving average [10], as this ablation reportedly performs comparably in SimSiam [1].
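As a concrete illustration of this pipeline, the following NumPy sketch samples an anchor, draws two Gaussian-augmented views, and passes them through the two linear layers; the dimensions, augmentation intensity, and seed are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, sigma = 8, 16, 0.5                    # illustrative sizes and augmentation intensity

x0 = rng.normal(size=d)                     # anchor x0 ~ N(0, I)
x  = x0 + sigma * rng.normal(size=d)        # view x  ~ N(x0, sigma^2 I)
xp = x0 + sigma * rng.normal(size=d)        # view x' ~ N(x0, sigma^2 I)

Phi = rng.normal(size=(h, d)) / np.sqrt(d)  # representation net (Assumption 4 scaling)
W   = rng.normal(size=(h, h)) / np.sqrt(h)  # projection head

online = Phi @ x     # online representation Phi x
target = Phi @ xp    # target representation Phi x' (same Phi for both branches)
pred   = W @ online  # prediction W Phi x of the target
print(pred.shape, target.shape)
```

Both branches share the same \(\mathbf{\Phi}\), mirroring the no-EMA ablation described above.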
Loss functions. BYOL/SimSiam introduce _asymmetry_ between the two branches with the stop gradient operator, denoted by \(\mathrm{StopGrad}(\cdot)\), whose arguments are regarded as constants during backpropagation [1]. [11] used the following _L2 loss_ to describe non-contrastive dynamics: \[\mathcal{L}_{\mathrm{sq}}(\mathbf{\Phi},\mathbf{W})\coloneqq\frac{1}{2}\operatorname*{\mathbb{E}}_{\mathbf{x}_{0}}\operatorname*{\mathbb{E}}_{\mathbf{x},\mathbf{x}^{\prime}|\mathbf{x}_{0}}[\left\lVert\mathbf{W}\mathbf{\Phi}\mathbf{x}-\mathrm{StopGrad}(\mathbf{\Phi}\mathbf{x}^{\prime})\right\rVert_{2}^{2}], \tag{1}\] where the expectations are taken over \(\mathbf{x},\mathbf{x}^{\prime}\sim\mathcal{D}_{\mathbf{x}_{0}}^{\mathrm{aug}}\) and \(\mathbf{x}_{0}\sim\mathcal{D}\). Thanks to the simple closed-form solution, the L2 loss has been used in most of the existing analyses of self-supervised learning dynamics [11, 10, 12]. We instead focus on the following _cosine loss_ to take feature normalization into account, which is a key factor in the success of contrastive representation learning [12]: \[\mathcal{L}_{\mathrm{cos}}(\mathbf{\Phi},\mathbf{W})\coloneqq\operatorname*{\mathbb{E}}_{\mathbf{x}_{0}}\operatorname*{\mathbb{E}}_{\mathbf{x},\mathbf{x}^{\prime}|\mathbf{x}_{0}}\left[-\frac{\left\langle\mathbf{W}\mathbf{\Phi}\mathbf{x},\mathrm{StopGrad}(\mathbf{\Phi}\mathbf{x}^{\prime})\right\rangle}{\left\lVert\mathbf{W}\mathbf{\Phi}\mathbf{x}\right\rVert_{2}\left\lVert\mathrm{StopGrad}(\mathbf{\Phi}\mathbf{x}^{\prime})\right\rVert_{2}}\right]. \tag{2}\] Importantly, the cosine loss has been used in most practical implementations [1, 20, 1], including a reproduction study [17] of the simulations in [13]. Subsequently, the weight decay \(R(\mathbf{\Phi},\mathbf{W})\coloneqq\frac{\rho}{2}(\left\|\mathbf{\Phi}\right\|_{\mathrm{F}}^{2}+\left\|\mathbf{W}\right\|_{\mathrm{F}}^{2})\) is added with a regularization strength \(\rho>0\).
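The two losses are easy to estimate by Monte Carlo; the sketch below evaluates both (1) and (2) plus the weight decay \(R\) on a fixed random batch (sizes, \(\sigma\), and \(\rho\) are illustrative, and the stop-gradient is irrelevant for forward evaluation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, sigma, rho = 32, 64, 0.5, 1e-4
n = 1000                                    # Monte Carlo samples for the expectations

x0 = rng.normal(size=(n, d))                # anchors x0 ~ N(0, I)
x  = x0 + sigma * rng.normal(size=(n, d))   # view x
xp = x0 + sigma * rng.normal(size=(n, d))   # view x'

Phi = rng.normal(size=(h, d)) / np.sqrt(d)
W   = rng.normal(size=(h, h)) / np.sqrt(h)

pred   = x @ Phi.T @ W.T                    # online branch W Phi x
target = xp @ Phi.T                         # target branch Phi x' (StopGrad during training)

# L2 loss (1): half mean squared distance between prediction and (stopped) target
l2 = 0.5 * np.mean(np.sum((pred - target) ** 2, axis=1))

# Cosine loss (2): negative mean cosine similarity of the two branches
cos = -np.mean(np.sum(pred * target, axis=1)
               / (np.linalg.norm(pred, axis=1) * np.linalg.norm(target, axis=1)))

# Weight decay R with strength rho
reg = 0.5 * rho * (np.sum(Phi ** 2) + np.sum(W ** 2))
print(l2, cos + reg)
```

The denominators in the cosine loss are exactly the feature normalization whose effect the paper analyzes.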
## 4 Non-contrastive dynamics in thermodynamical limit

Let us focus on the cosine loss and derive its non-contrastive dynamics via the gradient flow. See Appendix B for the proofs of the lemmas provided subsequently. As the continuous limit of gradient descent where learning rates are taken to be infinitesimal [16], we characterize the time evolution of the network parameters by the following simultaneous ordinary differential equations: \[\dot{\mathbf{\Phi}}=-\nabla_{\mathbf{\Phi}}\{\mathcal{L}_{\mathrm{cos}}(\mathbf{\Phi},\mathbf{W})+R(\mathbf{\Phi},\mathbf{W})\},\quad\dot{\mathbf{W}}=-\nabla_{\mathbf{W}}\{\mathcal{L}_{\mathrm{cos}}(\mathbf{\Phi},\mathbf{W})+R(\mathbf{\Phi},\mathbf{W})\}. \tag{3}\] To derive the dynamics, several assumptions are imposed. **Assumption 1** (Symmetric projection).: \(\mathbf{W}\in\mathrm{Sym}_{h}\) _holds during time evolution._ **Assumption 2** (Input distribution).: \(\mathcal{D}=\mathcal{N}(\mathbf{0},\mathbf{I})\) _and \(\mathbf{\Sigma}=\mathbf{I}\)._ **Assumption 3** (Thermodynamical limit).: \(d,h\to\infty\)_, and \(d/h\to\alpha\) for some \(\alpha\in(0,1)\)._ **Assumption 4** (Parameter initialization).: \(\mathbf{\Phi}\) _is initialized with \(\sqrt{d}\cdot\mathbf{\Phi}(0)_{ij}\sim\mathcal{N}(0,1)\) for \(i\in[h],j\in[d]\). \(\mathbf{W}\) is initialized with \(\sqrt{h}\cdot\mathbf{W}(0)_{ij}\sim\mathcal{N}(0,1)\) for \(i,j\in[h]\)._ Assumptions 1 and 2 are borrowed from [13] and simplify the subsequent analyses. We empirically verify that the non-contrastive dynamics maintains the symmetry of \(\mathbf{W}\) during training (Section 5.4). Assumption 3 is a cornerstone of our analysis: the high-dimensional limit makes Gaussian random vectors concentrate on a sphere, which leads to a closed-form solution for the cosine loss dynamics.
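A crude way to probe the flow (3) numerically is an explicit Euler discretization with finite-difference gradients on a fixed Monte Carlo batch; the stop-gradient is implemented here by freezing a copy of \(\mathbf{\Phi}\) for the target branch each step. This is only a sketch with tiny, illustrative dimensions (far from the thermodynamical limit) and a hypothetical step size, not the paper's analytical derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = h = 6                     # tiny, illustrative dims (the theory takes d, h -> infinity)
sigma, rho, lr = 0.5, 1e-2, 0.2
n = 64                        # fixed Monte Carlo batch (common random numbers)

x0 = rng.normal(size=(n, d))
x  = x0 + sigma * rng.normal(size=(n, d))   # view x
xp = x0 + sigma * rng.normal(size=(n, d))   # view x'

Phi = rng.normal(size=(h, d)) / np.sqrt(d)  # Assumption 4 scaling
W   = rng.normal(size=(h, h)) / np.sqrt(h)

def loss(Phi_, W_, Phi_t):
    """Cosine loss (2) plus weight decay R; Phi_t is the frozen (StopGrad) target branch."""
    p = x @ Phi_.T @ W_.T                   # online predictions W Phi x
    t = xp @ Phi_t.T                        # target features Phi x', treated as constants
    cos = np.sum(p * t, axis=1) / (np.linalg.norm(p, axis=1) * np.linalg.norm(t, axis=1))
    return -cos.mean() + 0.5 * rho * (np.sum(Phi_ ** 2) + np.sum(W_ ** 2))

def num_grad(f, A, eps=1e-5):
    """Entrywise central finite differences (adequate at this tiny scale)."""
    G = np.zeros_like(A)
    for idx in np.ndindex(A.shape):
        E = np.zeros_like(A)
        E[idx] = eps
        G[idx] = (f(A + E) - f(A - E)) / (2 * eps)
    return G

for step in range(30):                      # explicit Euler steps on the flow (3)
    Phi_t = Phi.copy()                      # StopGrad: freeze the target branch this step
    gPhi = num_grad(lambda P: loss(P, W, Phi_t), Phi)
    gW   = num_grad(lambda M: loss(Phi, M, Phi_t), W)
    Phi, W = Phi - lr * gPhi, W - lr * gW

print(np.linalg.norm(Phi), np.linalg.norm(W))
```

Freezing `Phi_t` before each step is what makes the two branches asymmetric, matching the role of \(\mathrm{StopGrad}\) in the flow.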
We suppose that the common hidden unit size \(h=512\) (used in SimSiam) is sufficiently large to place us in the high-dimensional limit--though the high-dimensional regime of representations would be arguable with the low-dimensional manifold assumption in mind. Assumption 4 is a standard initialization scale, used empirically in the He initialization [11] and theoretically in the neural tangent kernel regime [1]. This initialization scale maintains the norms of the random matrices \(\mathbf{\Phi}\) and \(\mathbf{W}\mathbf{\Phi}\) without vanishing or exploding under the thermodynamical limit. **Lemma 1**.: _Parameter matrices \(\mathbf{W}\) and \(\mathbf{\Phi}\) evolve as follows:_ \[\mathbf{W}^{\top}\dot{\mathbf{W}} =\mathbf{H}-\rho\mathbf{W}\mathbf{W}^{\top}, \tag{4}\] \[\dot{\mathbf{\Phi}}\mathbf{\Phi}^{\top}\mathbf{W}^{\top} =\mathbf{W}^{\top}\mathbf{H}-\rho\mathbf{\Phi}\mathbf{\Phi}^{\top}\mathbf{W}^{\top},\] _where \(\mathbf{H}\coloneqq\mathbb{E}[\mathbf{z}^{\prime}\mathbf{\omega}^{\top}-(\mathbf{\omega}^{\top}\mathbf{z}^{\prime})\mathbf{\omega}\mathbf{\omega}^{\top}]\), \(\mathbf{z}^{\prime}\coloneqq\mathbf{\Phi}\mathbf{x}^{\prime}/\left\|\mathbf{\Phi}\mathbf{x}^{\prime}\right\|_{2}\), and \(\mathbf{\omega}\coloneqq\mathbf{W}\mathbf{\Phi}\mathbf{x}/\left\|\mathbf{W}\mathbf{\Phi}\mathbf{x}\right\|_{2}\). The expectation in \(\mathbf{H}\) is taken over \(\mathbf{x}_{0},\mathbf{x}\), and \(\mathbf{x}^{\prime}\)._ We will analyze Eq. (4) to see when the dynamics stably converges to a non-trivial solution. To solve it, we need to evaluate \(\mathbf{H}\) first. This involves expectations with \(\mathbf{z}^{\prime}\) and \(\mathbf{\omega}\), which are normalized Gaussian vectors and cannot be straightforwardly evaluated. Here, we take a step further by considering the thermodynamical limit (Assumption 3), where the norms of Gaussian vectors concentrate. This regime allows us to directly evaluate Gaussian random vectors instead of the normalized ones.
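The norm concentration formalized in Lemma 2 below can be previewed numerically: under the Assumption 4 scaling, the two sides of its first display nearly coincide already at moderate dimensions (the values of \(d\), \(h\), and \(\sigma\) here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, h, sigma = 1000, 2000, 0.5               # moderately large dims mimic the limit

Phi = rng.normal(size=(h, d)) / np.sqrt(d)  # Assumption 4 scaling
x0  = rng.normal(size=d)                    # fixed anchor
x   = x0 + sigma * rng.normal(size=d)       # augmented view x ~ N(x0, sigma^2 I)

# First display of Lemma 2: ||Phi x / sqrt(h sigma^2)||^2
lhs = np.linalg.norm(Phi @ x) ** 2 / (h * sigma ** 2)
# ... versus ||Phi / sqrt(h)||_F^2 + ||Phi x0 / sqrt(h sigma^2)||^2
rhs = np.linalg.norm(Phi) ** 2 / h + np.linalg.norm(Phi @ x0) ** 2 / (h * sigma ** 2)
print(lhs, rhs)   # the two sides nearly coincide at this scale
```

The residual shrinks as \(d,h\) grow, which is exactly the \(o_{\mathbb{P}}(1)\) term in the lemma.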
**Lemma 2**.: _Under Assumptions 1 to 4, for a fixed \(\mathbf{x}_{0}\), the norms of \(\mathbf{\Phi}\mathbf{x}\) and \(\mathbf{W}\mathbf{\Phi}\mathbf{x}\) are concentrated:_ \[\left\|\frac{1}{\sqrt{h\sigma^{2}}}\mathbf{\Phi}\mathbf{x}\right\|_{2}^{2} =\left\|\frac{1}{\sqrt{h}}\mathbf{\Phi}\right\|_{\mathrm{F}}^{2}+\left\| \frac{1}{\sqrt{h\sigma^{2}}}\mathbf{\Phi}\mathbf{x}_{0}\right\|_{2}^{2}+o_{ \mathrm{P}}(1),\] \[\left\|\frac{1}{\sqrt{h^{2}\sigma^{2}}}\mathbf{W}\mathbf{\Phi} \mathbf{x}\right\|_{2}^{2} =\left\|\frac{1}{\sqrt{h^{2}}}\mathbf{W}\mathbf{\Phi}\right\|_{ \mathrm{F}}^{2}+\left\|\frac{1}{\sqrt{h^{2}\sigma^{2}}}\mathbf{W}\mathbf{\Phi} \mathbf{x}_{0}\right\|_{2}^{2}+o_{\mathrm{P}}(1).\] **Lemma 3**.: _Under Assumptions 1 to 4, the following concentrations are established:_ Lemmas 2 and 3 are based on the _Hanson-Wright inequality_[25, Theorem 6.3.2], a concentration inequality for order-\(2\) Gaussian chaos, with an additional effort to control norms of random matrices \(\mathbf{W}\) and \(\mathbf{\Phi}\). By combining Lemmas 2 and 3, we can express normalizers \(\left\|\mathbf{\Phi x}^{\prime}\right\|_{2}^{-1}\) and \(\left\|\mathbf{W}\mathbf{\Phi x}\right\|_{2}^{-1}\) in \(\mathbf{H}\) into simpler forms, and obtain a concise expression of \(\mathbf{H}\) consequently. **Lemma 4**.: _Let \(\mathbf{\Psi}\coloneqq\mathbf{W}\mathbf{\Phi}\). Assume that \(\left\|\mathbf{\Phi}\right\|_{\mathrm{F}}\) and \(\left\|\mathbf{\Psi}\right\|_{\mathrm{F}}\) are bounded away from zero. 
Under Assumptions 1 to 4, \(\mathbf{H}\) can be expressed as follows:_ \[\mathbf{H}=\frac{1}{1+\sigma^{2}}\left\{\mathbf{\tilde{\Phi}}\mathbf{\tilde{\Psi}}^{\top}-2\mathbf{\tilde{\Psi}}\mathbf{\tilde{\Phi}}^{\top}\mathbf{\tilde{\Psi}}\mathbf{\tilde{\Psi}}^{\top}-\mathrm{tr}(\mathbf{\tilde{\Phi}}^{\top}\mathbf{\tilde{\Psi}})\mathbf{\tilde{\Psi}}\mathbf{\tilde{\Psi}}^{\top}\right\}+o_{\mathbb{P}}(1),\] _where \(\mathbf{\tilde{\Phi}}\coloneqq\mathbf{\Phi}/\left\|\mathbf{\Phi}\right\|_{\mathrm{F}}\) and \(\mathbf{\tilde{\Psi}}\coloneqq\mathbf{\Psi}/\left\|\mathbf{\Psi}\right\|_{\mathrm{F}}\)._ Subsequently, we analyze the dynamics (4) in detail by leveraging the expression of \(\mathbf{H}\) in Lemma 4.

## 5 Analysis of non-contrastive dynamics

In the dynamics (4), the main obstacle is the normalizers \(\left\|\mathbf{\Phi}\right\|_{\mathrm{F}}^{-1}\) and \(\left\|\mathbf{\Psi}\right\|_{\mathrm{F}}^{-1}\) in \(\mathbf{H}\), which make the dynamics highly nonlinear and challenging to solve directly. Instead, we consider the equilibrium state \(\left\|\mathbf{\Phi}\right\|_{\mathrm{F}}\to N_{\Phi}\) and \(\left\|\mathbf{\Psi}\right\|_{\mathrm{F}}\to N_{\Psi}\) with \(N_{\Phi},N_{\Psi}\gg 0\). This regime allows us to focus on the parameter values \(\mathbf{W}\) and \(\mathbf{\Phi}\) at equilibrium. We impose the next assumption. **Assumption 5** (Norms remain stable).: \(\left\|\mathbf{\Phi}\right\|_{\mathrm{F}}\equiv N_{\Phi}\)_, \(\left\|\mathbf{\Psi}\right\|_{\mathrm{F}}\equiv N_{\Psi}\), and \(\mathrm{tr}(\mathbf{\tilde{\Phi}}^{\top}\mathbf{\tilde{\Psi}})\equiv N_{\times}\) for \(\forall t\geq 0\)._ In Section 5.4, we will see that these quantities are not ill-behaved during time evolution. Indeed, learning dynamics analyses of weight normalization often impose a similar assumption [26].
We conjecture that this assumption can be replaced with the local stability as in the previous convergence analysis of weight-norm dynamics [23]; nevertheless, we choose to assume the global stability to concentrate on the equilibrium analysis. Under Assumption 5, \(\mathbf{H}\) can be expressed as follows: \[\mathbf{H}=\mathbf{\hat{H}}=\frac{1}{1+\sigma^{2}}\left(\frac{\mathbf{F} \mathbf{W}}{N_{\Phi}N_{\Psi}}-\frac{2\mathbf{W}\mathbf{F}\mathbf{W}\mathbf{F }\mathbf{W}}{N_{\Phi}N_{\Psi}^{3}}-\frac{N_{\times}\mathbf{W}\mathbf{F} \mathbf{W}}{N_{\Phi}N_{\Psi}}\right), \tag{5}\] where \(\mathbf{F}\coloneqq\mathbf{\Phi}\mathbf{\Phi}^{\top}\) and we drop the negligible term \(o_{\mathbb{P}}(1)\) for simplicity. ### Eigenmode decomposition of dynamics To analyze the stability of the dynamics (4), we disentangle it into the eigenmodes. We first show the condition where the eigenspaces of \(\mathbf{W}\) and \(\mathbf{F}\) align with each other. Note that two commuting matrices can be simultaneously diagonalized. **Proposition 1**.: _Suppose \(\mathbf{W}\) is non-singular. 
Under the dynamics (4) with \(\mathbf{H}=\mathbf{\hat{H}}\), the commutator \(\mathbf{L}(t)\coloneqq[\mathbf{F},\mathbf{W}]\coloneqq\mathbf{F}\mathbf{W}-\mathbf{W}\mathbf{F}\) satisfies \(\frac{\mathrm{d}\,\mathrm{vec}(\mathbf{L}(t))}{\mathrm{d}t}=-\mathbf{K}(t)\mathrm{vec}(\mathbf{L}(t))\), where_ \[\mathbf{K}(t)\coloneqq 2\frac{\mathbf{W}\oplus\mathbf{W}\mathbf{F}\mathbf{W}+\mathbf{W}^{2}(\mathbf{F}\mathbf{W}\oplus\mathbf{I}_{d})}{(1+\sigma^{2})N_{\Phi}N_{\Psi}^{3}}+\frac{(\mathbf{W}^{-1})\oplus\mathbf{F}-(\mathbf{W}-N_{\times}\mathbf{W}^{2})\oplus\mathbf{I}_{d}}{(1+\sigma^{2})N_{\Phi}N_{\Psi}}+3\rho\mathbf{I}_{d},\] _and \(\mathbf{A}\oplus\mathbf{B}\coloneqq\mathbf{A}\otimes\mathbf{B}+\mathbf{B}\otimes\mathbf{A}\) denotes the sum of the two Kronecker products._ _If \(\inf_{t\geq 0}\lambda_{\min}(\mathbf{K}(t))\geq\lambda_{0}>0\) for some \(\lambda_{0}>0\), then \(\left\|\mathbf{L}(t)\right\|_{\mathrm{F}}\to 0\) as \(t\to\infty\)._ Proposition 1 is a variant of [13, Theorem 3] for the dynamics (4). Consequently, we see that \(\mathbf{W}\) and \(\mathbf{F}\) are simultaneously diagonalizable at the equilibrium \(\left\lVert\mathbf{L}(t)\right\rVert_{\mathrm{F}}=\left\lVert[\mathbf{F},\mathbf{W}]\right\rVert_{\mathrm{F}}=0\). We then treat the dynamics (4) under the following approximation.

**Assumption 6** (Always commutative).: \(\left\lVert[\mathbf{F},\mathbf{W}]\right\rVert_{\mathrm{F}}\equiv 0\) _for \(\forall t\geq 0\)._

We verify the validity of the assumption in Section 5.4, where we see that the commutator remains nearly zero. Let \(\mathbf{U}\) be the common eigenvectors of \(\mathbf{F}\) and \(\mathbf{W}\), then \(\mathbf{W}=\mathbf{U}\boldsymbol{\Lambda}_{W}\mathbf{U}^{\top}\) and \(\mathbf{F}=\mathbf{U}\boldsymbol{\Lambda}_{F}\mathbf{U}^{\top}\), where \(\boldsymbol{\Lambda}_{W}=\mathrm{diag}[p_{1},p_{2},\ldots,p_{d}]\) and \(\boldsymbol{\Lambda}_{F}=\mathrm{diag}[s_{1},s_{2},\ldots,s_{d}]\).
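The simultaneous diagonalization invoked here is easy to check numerically: symmetric matrices built from a shared eigenbasis commute, and conversely, diagonalizing one of two commuting symmetric matrices also diagonalizes the other. The sketch below uses arbitrary illustrative spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
# Build a common eigenbasis U and simultaneously diagonalizable F, W
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = rng.uniform(0.1, 1.0, d)     # eigenmodes of F (illustrative)
p = rng.uniform(0.1, 1.0, d)     # eigenmodes of W (illustrative)
F = U @ np.diag(s) @ U.T
W = U @ np.diag(p) @ U.T

# The commutator vanishes for simultaneously diagonalizable matrices
comm = np.linalg.norm(F @ W - W @ F)
print(comm)   # ~ 0

# Conversely, the eigenbasis of F also diagonalizes W when [F, W] = 0
_, V = np.linalg.eigh(F)
WV = V.T @ W @ V
off = np.linalg.norm(WV - np.diag(np.diag(WV)))
print(off)    # ~ 0
```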
By extending the discussion of [13, Appendix B.1], we can show that \(\mathbf{U}\) does not change over time. **Proposition 2**.: _Suppose \(\mathbf{W}\) is non-singular. Under the dynamics of Eq. (4) with \(\mathbf{H}=\hat{\mathbf{H}}\), we have \(\dot{\mathbf{U}}=\mathbf{O}\)._ With Assumptions 5 and 6 and Proposition 2, we decompose (4) with \(\mathbf{H}=\hat{\mathbf{H}}\) into the eigenmodes: \[\begin{split}\dot{p}_{j}&=-\frac{1}{(1+\sigma^{2})N_{\Phi}N_{\Psi}}\left(\frac{2}{N_{\Psi}^{2}}s_{j}^{2}p_{j}^{2}+N_{\times}s_{j}p_{j}-s_{j}\right)-\rho p_{j},\\ \dot{s}_{j}&=-\frac{2}{(1+\sigma^{2})N_{\Phi}N_{\Psi}}\left(\frac{2}{N_{\Psi}^{2}}s_{j}^{2}p_{j}^{3}+N_{\times}s_{j}p_{j}^{2}-s_{j}p_{j}\right)-2\rho s_{j}.\end{split} \tag{6}\] The eigenmode dynamics (6) is far more interpretable than the matrix dynamics (4) and amenable to further analysis. Subsequently, we analyze the eigenmode dynamics to investigate the number of equilibrium points and their stability.

### Equilibrium analysis of eigenmode dynamics

We are interested in when the eigenmode dynamics does and does not collapse depending on the augmentation strength \(\sigma^{2}\) and the regularization \(\rho\). For this purpose, we investigate the equilibrium points of the eigenmode dynamics (6).

**Invariant parabola.** By simple algebra, \(\dot{s_{j}}-2p_{j}\dot{p_{j}}=-2\rho(s_{j}-p_{j}^{2})\). Noting that \(\frac{\mathrm{d}}{\mathrm{d}t}(s_{j}-p_{j}^{2})=\dot{s_{j}}-2p_{j}\dot{p_{j}}\) and integrating both sides, we obtain the following relation: \[s_{j}(t)=p_{j}^{2}(t)+c_{j}\exp(-2\rho t), \tag{7}\] where \(c_{j}\coloneqq s_{j}(0)-p_{j}^{2}(0)\) is the initial condition. Equation (7) elucidates that the dynamics of \((p_{j}(t),s_{j}(t))\) asymptotically converges to the parabola \(s_{j}(t)=p_{j}^{2}(t)\) as \(t\to\infty\) whenever the regularization \(\rho>0\) is present. The information of the initialization \(c_{j}\) is thus forgotten. Stronger regularization yields faster convergence to the parabola.
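The invariant-parabola relation (7) can be checked by direct numerical integration of the eigenmode dynamics (6); the gap \(s_{j}-p_{j}^{2}\) decays exactly like \(c_{j}e^{-2\rho t}\) regardless of the nonlinearity. The parameter values and initialization below are illustrative only (they match one of the Fig. 2 settings).

```python
import numpy as np

# Illustrative constants; N_phi, N_psi, N_x are treated as fixed (Assumption 5)
sigma2, rho = 0.1, 0.1
N_phi, N_psi, N_x = 1.0, 1.0, 1.0
A = 1.0 / ((1.0 + sigma2) * N_phi * N_psi)

def rhs(p, s):
    """Right-hand side of the eigenmode dynamics, Eq. (6)."""
    dp = -A * ((2.0 / N_psi**2) * s**2 * p**2 + N_x * s * p - s) - rho * p
    ds = -2.0 * A * ((2.0 / N_psi**2) * s**2 * p**3 + N_x * s * p**2 - s * p) - 2.0 * rho * s
    return dp, ds

def rk4_step(p, s, dt):
    k1 = rhs(p, s)
    k2 = rhs(p + 0.5 * dt * k1[0], s + 0.5 * dt * k1[1])
    k3 = rhs(p + 0.5 * dt * k2[0], s + 0.5 * dt * k2[1])
    k4 = rhs(p + dt * k3[0], s + dt * k3[1])
    p += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    s += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return p, s

p, s = 0.8, 0.9          # initialization off the parabola: c_j = s(0) - p(0)^2
c0 = s - p**2
dt, T = 1e-3, 5.0
for _ in range(int(T / dt)):
    p, s = rk4_step(p, s, dt)

# Eq. (7): s(t) - p(t)^2 = c_j * exp(-2*rho*t)
print(abs((s - p**2) - c0 * np.exp(-2 * rho * T)))   # ~ 0
```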
**Dynamics on invariant parabola.** We now focus on the dynamics on the invariant parabola. Substituting \(s_{j}(t)=p_{j}^{2}(t)\) into the \(p_{j}\)-dynamics in Eq. (6) yields the following dynamics: \[\dot{p_{j}}=-\frac{2}{(1+\sigma^{2})N_{\Phi}N_{\Psi}^{3}}p_{j}^{6}-\frac{N_{\times}}{(1+\sigma^{2})N_{\Phi}N_{\Psi}}p_{j}^{3}+\frac{1}{(1+\sigma^{2})N_{\Phi}N_{\Psi}}p_{j}^{2}-\rho p_{j}. \tag{8}\] We illustrate the dynamics (8) with different parameter values in Fig. 2. This dynamics always has \(p_{j}=0\) as an equilibrium point, and the number of equilibrium points varies between two and four. Notably, Eq. (8) is a _sixth-order_ non-linear ODE, whereas the L2 loss dynamics [13, Eq. (16)] induces a _third-order_ non-linear eigenmode dynamics, as we will recap in Section 5.3. From Fig. 2, we can classify the dynamics into three regimes (see also Fig. 3):

* **(Collapse)** When all of \(\rho\), \(N_{\Phi}\), \(N_{\Psi}\) are large, the dynamics only has two equilibrium points. See the plots with \((\rho,N_{\Phi},N_{\Psi})\in\{(0.5,1.0,1.0),(0.5,1.0,0.5),(0.5,0.5,1.0)\}\). In this regime, \(p_{j}=0\) is the only stable equilibrium, causing the collapsed dynamics. This regime is brittle because the stable equilibrium \(p_{j}=0\) blows up the normalizers \(\left\lVert\boldsymbol{\Phi}\right\rVert_{\mathrm{F}}^{-1}\) and \(\left\lVert\boldsymbol{\Psi}\right\rVert_{\mathrm{F}}^{-1}\) in the original cosine loss dynamics. As \(p_{j}\) shrinks, the values \(N_{\Phi}\) and \(N_{\Psi}\) shrink together, too, which brings the dynamics into the next two regimes.
* **(Acute)** When \(\rho\), \(N_{\Phi}\), and \(N_{\Psi}\) become smaller than those in Collapse, two new equilibrium points emerge and the number of equilibrium points is four in total. See the plots with \((\rho,N_{\Phi},N_{\Psi})\in\{(0.5,0.5,0.5),(0.5,0.25,1.0),(0.1,1.0,1.0)\}\).
Let \(p_{\blacktriangle}^{(-)}\), \(p_{\blacktriangledown}^{(0)}(=0)\), \(p_{\blacktriangle}^{(+)}\), and \(p_{\blacktriangledown}^{(+)}\) denote the equilibrium points from smallest to largest, respectively, namely, \(p_{\blacktriangle}^{(-)}<p_{\blacktriangledown}^{(0)}=0<p_{\blacktriangle}^{(+)}<p_{\blacktriangledown}^{(+)}\) (see Fig. 3). Note that \(p_{j}=p_{\blacktriangle}^{(-)},p_{\blacktriangle}^{(+)}\) are unstable and \(p_{j}=p_{\blacktriangledown}^{(0)},p_{\blacktriangledown}^{(+)}\) are stable [1]. In this regime, an eigenmode initialized larger than \(p_{\blacktriangle}^{(+)}\) converges to the non-degenerate point \(p_{\blacktriangledown}^{(+)}\). However, the eigenmode degenerates to \(p_{\blacktriangledown}^{(0)}\) if the initialization is in the range \([p_{\blacktriangle}^{(-)},p_{\blacktriangle}^{(+)}]\) (close to zero), and diverges if the initialization has a large negative value \(<p_{\blacktriangle}^{(-)}\). If the eigenmode degenerates, the values \(N_{\Phi}\) and \(N_{\Psi}\) further shrink and then the regime enters the final one; if the eigenmode diverges, \(N_{\Phi}\) and \(N_{\Psi}\) inflate and the regime goes back to the previous Collapse.
* **(Stable)** When \(\rho\), \(N_{\Phi}\), and \(N_{\Psi}\) are smaller still than those in Acute, the middle two equilibrium points \(p_{\blacktriangledown}^{(0)}\) and \(p_{\blacktriangle}^{(+)}\) approach each other and form a saddle point. See the plots with \((\rho,N_{\Phi},N_{\Psi})\in\{(0.5,0.25,0.5),(0.1,0.25,1.0),(0.1,0.25,0.5)\}\). Denote this saddle point by \(p_{\blacklozenge}\). The dynamics has an unstable equilibrium \(p_{\blacktriangle}^{(-)}\), a saddle point \(p_{\blacklozenge}\), and a stable equilibrium \(p_{\blacktriangledown}^{(+)}\), from smallest to largest. In this regime, the eigenmode stably converges to the non-degenerate point \(p_{j}=p_{\blacktriangledown}^{(+)}\) unless the initialization is smaller than \(p_{\blacktriangle}^{(-)}\).
(_Remark_: The two equilibria \(p_{\blacktriangledown}^{(0)}\) and \(p_{\blacktriangle}^{(+)}\) do not coincide exactly at a single saddle point because the dynamics diverges as \(N_{\Phi},N_{\Psi}\to 0\). Nonetheless, the approximation \(p_{\blacktriangledown}^{(0)}\approx p_{\blacktriangle}^{(+)}\) is reasonable with realistic parameters such as \((\rho,N_{\Phi},N_{\Psi})=(0.1,0.25,0.5)\).)

**Three regimes prevent degeneration.** We illustrate the relationship among the three regimes in Fig. 3. As we see in the numerical experiments (Section 5.4), the parameter initialization (Assumption 4) hardly makes the initial eigenmode smaller than \(p_{\blacktriangle}^{(-)}\): indeed, we conducted a numerical simulation of the distribution of the initial eigenmodes, shown in Fig. 4, which indicates that the initial eigenmodes are sufficiently larger than \(p_{\blacktriangle}^{(-)}\). Therefore, the learning dynamics has stable equilibria and successfully stabilizes. Importantly, this cosine loss dynamics stabilizes and does not collapse to zero regardless of the regularization strength \(\rho\), which is in stark contrast to the L2 loss dynamics, as detailed in Section 5.3. This observation tells us the importance of feature normalization to prevent representation collapse in non-contrastive self-supervised learning.

Figure 2: Numerical illustrations of the dynamics Eq. (8) with different values of \((\rho,N_{\Phi},N_{\Psi})\), where the vertical and horizontal axes denote \(\dot{p}_{j}\) and \(p_{j}\), respectively. The left two columns are illustrated for \(\rho=0.5\), while the right two columns are for \(\rho=0.1\). Red \(\blacktriangledown\) and green \(\blacktriangle\) indicate stable (namely, \(\dot{p}_{j}<0\)) and unstable (namely, \(\dot{p}_{j}>0\)) equilibrium points, respectively [1]. For the other parameters, we chose \(N_{\times}=1\) and \(\sigma^{2}=0.1\) for illustration.
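The regime classification can be reproduced numerically by counting the real equilibrium points of Eq. (8), i.e., the real roots of the sixth-order polynomial on its right-hand side. The sketch below uses the illustrative values \(N_{\times}=1\) and \(\sigma^{2}=0.1\) from Fig. 2; the smallest real root is the negative unstable point \(p_{\blacktriangle}^{(-)}\) indicated in Fig. 4.

```python
import numpy as np

def equilibria(rho, N_phi, N_psi, sigma2=0.1, N_x=1.0, tol=1e-6):
    """Real equilibrium points of the on-parabola dynamics, Eq. (8).

    Eq. (8) reads dp/dt = a6*p^6 + a3*p^3 + a2*p^2 + a1*p with the
    coefficients below; equilibria are the real roots of this polynomial.
    """
    c = (1.0 + sigma2) * N_phi * N_psi
    a6 = -2.0 / (c * N_psi**2)
    a3 = -N_x / c
    a2 = 1.0 / c
    a1 = -rho
    roots = np.roots([a6, 0.0, 0.0, a3, a2, a1, 0.0])
    return np.sort(roots[np.abs(roots.imag) < tol].real)

# Parameter triples (rho, N_phi, N_psi) taken from Fig. 2
print(len(equilibria(0.5, 1.0, 1.0)))   # Collapse regime: 2 equilibria
print(len(equilibria(0.1, 1.0, 1.0)))   # Acute regime: 4 equilibria
print(equilibria(0.1, 1.0, 1.0)[0])     # negative unstable point p_(-)
```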
### Comparison with L2 loss dynamics

Whereas we mainly focused on the study of the cosine loss dynamics, [13] (and many earlier studies) engaged in the L2 loss dynamics, which does not entail feature normalization. Here, we compare the cosine and L2 loss dynamics to see how feature normalization plays a crucial role. Let us review the dynamics of [13]. We inherit Assumption 1 (symmetric projector), Assumption 2 (standard normal input), and Assumption 6 (\(\mathbf{F}\) and \(\mathbf{W}\) are commutative). Under this setup, [13] analyzed the non-contrastive dynamics (4) with the L2 loss (1), and revealed that the eigenmodes of \(\mathbf{W}\) and \(\mathbf{F}\) (denoted by \(p_{j}\) and \(s_{j}\), respectively) asymptotically converge to the invariant parabola \(s_{j}(t)=p_{j}^{2}(t)\) (see Eq. (7)), where the \(p_{j}\)-dynamics reads: \[\dot{p_{j}}=p_{j}^{2}\{1-(1+\sigma^{2})p_{j}\}-\rho p_{j}. \tag{9}\] Compare the L2-loss eigenmode dynamics (9) (third-order) and the cosine-loss eigenmode dynamics (8) (sixth-order). Note that we omit the exponential moving average of the online representation in the original BYOL (\(\tau=1\)) and set the unit learning rate ratio between the predictor and online nets (\(\alpha=1\)) in [13] for comparison. The behaviors of the two dynamics are compared in Fig. 3 (cosine loss) and Fig. 5 (L2 loss). One of the most important differences is that the cosine loss dynamics has the regime shift depending on the evolution of \(N_{\Phi}\), \(N_{\Psi}\), and \(N_{\times}\), while the L2 loss dynamics does not have such a shift. Thus, the L2 loss dynamics and its time evolution are solely determined by a given regularization strength \(\rho\) (see the three plots in Fig. 5). That being said, if the L2 loss dynamics is regularized strongly such that \(\rho>\frac{1}{4(1+\sigma^{2})}\), there is no hope that the eigenmode stably converges without collapse to zero. On the contrary, a strong regularization with the cosine loss initially makes the dynamics fall into the Collapse regime, where no meaningful stable equilibrium exists, but the regime gradually shifts to Acute as the eigenmode (and the norms \(N_{\Phi}\) and \(N_{\Psi}\) accordingly) approaches zero. This regime shift owes to the feature normalization involved in the cosine loss.

Figure 4: Numerical simulation of eigenvalue distributions of \(\mathbf{W}\). In each figure, we generate \(\mathbf{W}\) and \(\mathbf{\Phi}\) by the initialization of Assumption 4, and illustrate the histogram of eigenmodes of \(\mathbf{W}\). The vertical line indicates the value of \(p_{\blacktriangle}^{(-)}\), the negative unstable equilibrium point of the \(p_{j}\)-dynamics (8), computed by binary search and numerical root finding. For the parameters, we chose \(\rho=0.05\), \(\sigma^{2}=1.0\), \(d=2048\), and \(h\in\{64,256\}\).

Figure 3: Schema of the Collapse, Acute, and Stable regimes of the eigenmode dynamics Eq. (8). Red \(\blacktriangledown\) and green \(\blacktriangle\) indicate stable (namely, \(\dot{p_{j}}<0\)) and unstable (namely, \(\dot{p_{j}}>0\)) equilibrium points, respectively. The black \(\blacklozenge\) denotes the saddle point. Red, gray, and blue backgrounds indicate ranges where the eigenmode will diverge to \(-\infty\), collapse to \(0\), and converge to the stable equilibrium, respectively. As \(N_{\Phi}\) and \(N_{\Psi}\) become smaller, the regime shifts in the direction Collapse \(\to\) Acute \(\to\) Stable, and as \(N_{\Phi}\) and \(N_{\Psi}\) become larger, the regime shifts in the opposite direction Stable \(\to\) Acute \(\to\) Collapse.

### Numerical experiments

We conducted a simple numerical simulation of the SimSiam model using the official implementation available at [https://github.com/facebookresearch/simsiam](https://github.com/facebookresearch/simsiam).
We tested the linear model setup shown in Section 3, with the linear representation net \(\mathbf{\Phi}\) and the linear projection head \(\mathbf{W}\), and the representation dimension was set to \(h=64\). Data are generated from the \(512\)-dimensional (\(d=512\)) standard multivariate normal distribution (Assumption 2), and the data augmentation follows isotropic Gaussian noise \(\mathcal{D}^{\mathrm{aug}}_{\mathbf{x}_{0}}\) with variance \(\sigma^{2}=1.0\). The learning rate of the momentum SGD was initially set to \(0.05\) and scheduled by cosine annealing. The regularization strength was set to \(\rho=0.005\). For the other implementation details, we followed the official implementation.

The results are shown in Fig. 6. We first confirm that Assumption 5 is reasonable in practice by examining the values of \(N_{\Phi}\), \(N_{\Psi}\), and \(N_{\times}\) during time evolution. Figure 6 (Left) shows that these three values, and \(N_{\times}\) in particular, overall remain stable, with mild shrinkage of \(N_{\Phi}\) and \(N_{\Psi}\). Nevertheless, \(N_{\Phi}\) and \(N_{\Psi}\) occasionally have spikes. To take those behaviors into account, the local norm stability [23] would be useful in future analyses. Next, to confirm the validity of Assumptions 1 and 6, we plot the asymmetry of the projection head \(\mathbf{W}\) and the commutativity of \(\mathbf{F}\) and \(\mathbf{W}\) in Fig. 6 (Center), which suggests that the assumptions are reasonable in general. Lastly, we empirically observe the regime shift in Fig. 6 (Right). The regularization strength \(\rho=0.005\) used in this experiment is rather larger than the default SimSiam regularization strength \(\rho=10^{-4}\), which leads to the Collapse regime initially (when \(\mathrm{epoch}<1700\)) but gradually shifts to the Acute regime (when \(\mathrm{epoch}>1700\)). Thus, we observed how the eigenmode escapes from the Collapse regime.

Figure 5: Schema of the three eigenmode dynamics in the L2 loss case. Each figure illustrates the eigenmode dynamics corresponding to a fixed regularization strength \(\rho\). The meaning of each mark (\(\blacktriangle\), \(\blacktriangledown\), \(\blacklozenge\)) and the background colors can be found in the caption of Fig. 3. The figure borrows the illustration of [21, Figure 4].

Figure 6: Numerical simulation of the SimSiam model. **(Left)** Time evolution of \(N_{\Phi}\), \(N_{\Psi}\), and \(N_{\times}\). They overall remain stable (cf. Assumption 5). **(Center)** Asymmetry of the projection head \(\mathbf{W}\) (measured by the relative error of \(\mathbf{W}-\mathbf{W}^{\top}\)) and non-commutativity of \(\mathbf{F}\) and \(\mathbf{W}\) (measured by the relative error of the commutator \([\mathbf{F},\mathbf{W}]\)). The relative errors stay close to zero during time evolution (cf. Assumptions 1 and 6). **(Right)** The leading eigenmode of the projection head \(p_{j}\) (green line), with background colors illustrating three intervals.

## 6 Conclusion

In this work, we questioned how to describe non-contrastive dynamics without eigenmode collapse. The existing theory (represented by [14]) leverages the simplicity of the L2 loss to analytically derive the dynamics of two-layer non-contrastive learning. However, the regularization severely affects eigenmode collapse: with too strong regularization, the dynamics has no way to escape from eigenmode collapse. This may indicate a drawback of the L2 loss analysis, though their theoretical model is transparent. Alternatively, we focused on the cosine loss, which involves feature normalization, and derived the corresponding eigenmode dynamics. Although the dynamics may fall into the Collapse regime under too strong regularization, the shrinkage of the eigenmodes brings the regime into non-collapsing ones. Thus, we witnessed the importance of feature normalization.
Technically, we leveraged the thermodynamical limit of the feature dimensions, which allows us to focus on concentrated high-dimensional feature norms. We believe that a similar device may enhance theoretical models of related learning problems and architectures, including self-supervised learning based on covariance regularization such as Barlow Twins [11] and VICReg [1]. This work is limited to the analysis of dynamics stability and refrains from answering why non-contrastive learning is appealing for many downstream tasks. While the downstream performances of contrastive learning have been theoretically analyzed through the lens of the learning-theoretic viewpoint [12, 13, 14, 15] and the smoothness of loss landscapes [13], we have far less understanding of non-contrastive learning for the time being. We hope that understanding the non-contrastive dynamics paves the way toward the analysis of downstream tasks.

## Acknowledgments

HB thanks Yoshihiro Nagano for providing numerous insights at the initial phase of this research. A part of the experiments of this research was conducted using Wisteria/Aquarius in the Information Technology Center, The University of Tokyo.
2309.09029
A Modelling study of Electron transport in GaN/AlGaN superlattices using Monte Carlo simulation
Electron transport in GaN/Al$_x$Ga$_{1-x}$N superlattices is investigated using a single particle Monte Carlo approach. To establish the required band structure, GaN, AlN and their ternary alloy are investigated using a single electron Monte Carlo approach and a 3-band approximation to the full band structure. The interplay of the inter-valley scattering and electron-longitudinal optical polar phonon scattering in determining electron velocity and velocity overshoot is examined for the binaries and their alloy. We use a Schrödinger wave equation coupled to a Poisson solver to self-consistently calculate the energy band structure of the superlattice using the single band approximation for the materials, determine the Fermi energy and the superlattice miniband energy position and its energy width. We then analyze the miniband band structure and determine the effective masses for the superlattice miniband in the superlattice direction, which will determine the electron mobility in that direction. Then the single particle Monte Carlo method is applied to investigate electron transport in the miniband, where we find that for low Al concentration in the barrier and short periods, electron velocity very similar to that in bulk GaN can be obtained, and observe that velocity overshoot can occur, purely due to electron-LO phonon scattering and non-parabolicity in the single band. This modelling approach provides a fast and convenient method to investigate high-field electron transport in n-doped GaN/Al$_x$Ga$_{1-x}$N superlattices and should be suitable for use in device design.
Mengxun Bai, Judy
2023-09-16T15:51:24Z
http://arxiv.org/abs/2309.09029v1
# A Modelling study of Electron transport in GaN/AlGaN superlattices using Monte Carlo simulation

###### Abstract

Electron transport in GaN/Al\({}_{x}\)Ga\({}_{1-x}\)N superlattices is investigated using a single particle Monte Carlo approach. To establish the required band structure, GaN, AlN and their ternary alloy are investigated using a single electron Monte Carlo approach and a 3-band approximation to the full band structure. The interplay of the inter-valley scattering and electron-longitudinal optical polar phonon scattering in determining electron velocity and velocity overshoot is examined for the binaries and their alloy. It is observed that both scattering processes cause velocity overshoot, with their interplay determining the magnitude of the overshoot and the value of the electric field required to achieve it. A single non-parabolic band approximation is found to be acceptable for use in the superlattice modeling because the energy width of the miniband is such that the kinetic energy of the electrons in it would not be sufficient to suffer inter-valley scattering. We use a Schrödinger wave equation coupled to a Poisson solver to self-consistently calculate the energy band structure of the superlattice using the single band approximation for the materials, determine the Fermi energy and the superlattice miniband energy position and its energy width. We then analyze the miniband band structure and determine the effective masses for the superlattice miniband in the superlattice direction, which will determine the electron mobility in that direction. Then the single particle Monte Carlo method is applied to investigate electron transport in the miniband, where we find that for low Al concentration in the barrier and short periods, electron velocity very similar to that in bulk GaN can be obtained, and observe that velocity overshoot can occur, purely due to electron-LO phonon scattering and non-parabolicity in the single band.
This modeling approach provides a fast and convenient method to investigate high-field electron transport in n-doped GaN/Al\({}_{x}\)Ga\({}_{1-x}\)N superlattices and should be suitable for use in device design.

## 1 Introduction

GaN, AlN and their related alloy Al\({}_{x}\)Ga\({}_{1-x}\)N are considered important materials for electronic and optoelectronic device applications as they span a wide range of energy gaps and offer large breakdown fields and high thermal conductivity. These properties lead to higher output power and frequency performance of electronic devices made from these materials[1]. In optoelectronics, tuneable energy bandgaps in the blue and UV regions are beneficial for novel optoelectronic device applications[2]. Related quaternary nitride alloys have proven promising in optoelectronics for applications in blue-green and blue-violet light-emitting diodes (LEDs), laser diodes (LDs) and photodetectors[3]. High electron mobility transistors (HEMTs) based on the wurtzite phase of AlGaN/GaN heterostructures have been extensively studied[4][5]. To fully exploit these material systems, it is necessary to understand electron transport, particularly high electric field transport. Electron transport in bulk III-V nitrides has been studied over the years experimentally[6] and theoretically[1][7-10], but reproducible experimental results are relatively recent. The magnitude of velocity overshoot in GaN-based materials has been an area of disagreement and focus. A suitable approach for treating electronic transport including multiple scattering processes is Monte Carlo simulation[1][6][7]. For establishing equilibrium conditions, single electron Monte Carlo is sufficient, while for exploring non-equilibrium electronic distributions, ensemble Monte Carlo is required[11]. Monte Carlo simulation, particularly for high electric fields, relies on a band structure that is accurate to high energy, which is numerically intensive and time-consuming.
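The free-flight/scattering structure of a single-particle Monte Carlo simulation can be sketched as follows. This is a deliberately minimal illustration, not the 3-band model used here: it assumes a single parabolic band, a constant total scattering rate, and scattering that simply rethermalizes the momentum, so the drift velocity should match the Drude-like estimate \(qF\tau/m^{*}\). All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

Q = 1.602e-19          # electron charge [C]
M = 0.2 * 9.109e-31    # effective mass [kg] (illustrative value)
KB, T = 1.381e-23, 300.0
F = 1.0e7              # electric field along z [V/m]
GAMMA = 1.0e13         # constant total scattering rate [1/s] (stand-in for phonon rates)

def thermal_momentum():
    # Post-scattering momentum redrawn from a Maxwellian at the lattice temperature
    return rng.normal(0.0, np.sqrt(M * KB * T), size=3)

p = thermal_momentum()
t_total = v_sum = 0.0
for _ in range(200_000):
    dt = -np.log(1.0 - rng.random()) / GAMMA   # free-flight duration
    # time-integrated z-velocity over the flight: p_z(t) = p_z + q*F*t
    v_sum += (p[2] * dt + 0.5 * Q * F * dt**2) / M
    t_total += dt
    p = thermal_momentum()                     # scattering event resets the momentum

v_drift = v_sum / t_total
v_drude = Q * F / (M * GAMMA)                  # q*F*tau/m* with tau = 1/GAMMA
print(v_drift, v_drude)
```

In a full simulator, the constant rate is replaced by energy-dependent polar-optical and inter-valley rates (typically with a self-scattering channel), and the band is non-parabolic, which is what produces the velocity overshoot discussed below.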
For device modeling purposes, simplification of the band structure is required. A 3-band model including the two lowest higher-lying bands has been found to give good agreement of velocity-field characteristics with fuller band structure calculations for GaN and AlN, so we follow this approach[1][9]. The drift velocity as a function of the electric field (F), the average electron energy as a function of the F field, and the occupation of the three valleys as a function of the F field are examined for the two binaries, and the behavior is analyzed in terms of the physics of the materials. In particular, the analysis focuses on understanding how the scattering processes interact with one another and collectively influence electron behavior. For the ternary alloy Al\({}_{x}\)Ga\({}_{1-x}\)N,
2309.11666
Error estimate for regularized optimal transport problems via Bregman divergence
Regularization by the Shannon entropy enables us to efficiently and approximately solve optimal transport problems on a finite set. This paper is concerned with regularized optimal transport problems via Bregman divergence. We introduce the required properties for Bregman divergences, provide a non-asymptotic error estimate for the regularized problem, and show that the error decays faster than exponentially.
Keiichi Morikuni, Koya Sakakibara, Asuka Takatsu
2023-09-20T22:19:49Z
http://arxiv.org/abs/2309.11666v1
# Error estimate for regularized optimal transport problems via Bregman divergence

###### Abstract.

Regularization by the Shannon entropy enables us to efficiently and approximately solve optimal transport problems on a finite set. This paper is concerned with regularized optimal transport problems via Bregman divergence. We introduce the required properties for Bregman divergences, provide a non-asymptotic error estimate for the regularized problem, and show that the error decays faster than exponentially.

## 1. Introduction

_Optimal transport theory_ allows for measuring the difference between two probability measures. It has innumerable applications in mathematics, physics, economics, statistics, computer science, and machine learning. This work focuses on optimal transport theory on a finite set. For \(K\in\mathbb{N}\), define \[\mathcal{P}_{K}:=\left\{z=(z_{k})\in\mathbb{R}^{K}\ \bigg{|}\ z_{k}\geq 0\text{ for any }k,\ \sum_{k}z_{k}=1\right\}.\] Here and hereafter, \(k\) runs over \(1,2,\ldots,K\). Fix \(I,J\in\mathbb{N}\). Unless we indicate otherwise, \(i\) and \(j\) run over \(1,2,\ldots,I\) and \(1,2,\ldots,J\), respectively. For \(x\in\mathcal{P}_{I}\) and \(y\in\mathcal{P}_{J}\), define \(x\otimes y\in\mathcal{P}_{I\times J}\) by \[(x\otimes y)_{ij}:=x_{i}y_{j},\] and set \[\Pi(x,y):=\left\{\Pi=(\pi_{ij})\in\mathcal{P}_{I\times J}\ \bigg{|}\ \sum_{l=1}^{J}\pi_{il}=x_{i},\sum_{l=1}^{I}\pi_{lj}=y_{j}\text{ for any }i,j\right\},\] where we identify \(\mathcal{P}_{I\times J}\) with a subset of \(\mathbb{R}^{I\times J}\). An element in \(\Pi(x,y)\) is called a _transport plan_ between \(x\) and \(y\). Note that \(\Pi(x,y)\) is a compact set, in particular, a convex polytope, and contains \(x\otimes y\).
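For concreteness, membership in \(\Pi(x,y)\) is easy to check numerically. The sketch below builds two transport plans for arbitrary illustrative marginals: the independent coupling \(x\otimes y\), and a plan produced by the classical northwest-corner rule (a standard greedy construction that yields a vertex of the polytope), and verifies the marginal constraints for both.

```python
import numpy as np

def northwest_corner(x, y):
    """Greedy northwest-corner construction of a transport plan
    (the result is a vertex of Pi(x, y))."""
    x, y = x.astype(float).copy(), y.astype(float).copy()
    P = np.zeros((len(x), len(y)))
    i = j = 0
    while i < len(x) and j < len(y):
        m = min(x[i], y[j])      # move as much mass as both marginals allow
        P[i, j] = m
        x[i] -= m
        y[j] -= m
        if x[i] <= 1e-15:
            i += 1
        else:
            j += 1
    return P

x = np.array([0.2, 0.5, 0.3])          # illustrative marginal in P_3
y = np.array([0.4, 0.1, 0.25, 0.25])   # illustrative marginal in P_4

plans = {"independent": np.outer(x, y), "northwest": northwest_corner(x, y)}
for name, P in plans.items():
    print(name,
          np.allclose(P.sum(axis=1), x),   # row sums recover x
          np.allclose(P.sum(axis=0), y))   # column sums recover y
```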
Fix \(C=(c^{ij})\in\mathbb{R}^{I\times J}\) and define a map \(\langle C,\cdot\rangle:\mathcal{P}_{I\times J}\to\mathbb{R}\) by \[\langle C,\Pi\rangle:=\sum_{i,j}c^{ij}\pi_{ij}.\] Consider linear programs of the form \[\inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle, \tag{1.1}\] which is a so-called _optimal transport problem_. Since the function \(\langle C,\cdot\rangle\) is linear, and in particular continuous, on the compact set \(\Pi(x,y)\), the problem (1.1) always admits a minimizer, but a minimizer is not necessarily unique. A minimizer of the problem (1.1) is called an _optimal transport plan_ between \(x\) and \(y\). Motivated by the success of regularizing the optimal transport problem by the Kullback-Leibler divergence, this paper considers a regularized optimal transport problem via Bregman divergence, which generalizes the Kullback-Leibler divergence through a strictly convex function.

**Definition 1.1**.: Let \(U\) be a continuous, strictly convex function on \([0,1]\) with \(U\in C^{1}((0,1])\). For \(z,w\in\mathcal{P}_{K}\), the _Bregman divergence_ associated with \(U\) of \(z\) with respect to \(w\) is given by \[D_{U}(z,w):=\sum_{k}d_{U}(z_{k},w_{k}),\] where \(d_{U}:[0,1]\times(0,1]\to\mathbb{R}\) is defined for \(r\in[0,1]\) and \(r_{0}\in(0,1]\) by \[d_{U}(r,r_{0}):=U(r)-U(r_{0})-(r-r_{0})U^{\prime}(r_{0})\] and is naturally extended as a function on \([0,1]\times[0,1]\) valued in \([0,\infty]\) (see Lemma 2.1). For example, the Bregman divergence associated with \(U(r)=r\log r\) reduces to the Kullback-Leibler divergence. Let us consider a regularized problem of the form \[\inf_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle+\varepsilon D_{U}(\Pi,x\otimes y)\right)\quad\text{for $\varepsilon>0$.} \tag{1.2}\] By the continuity and strict convexity of \(U\), \(D_{U}(\cdot,x\otimes y)\) is continuous and strictly convex on the convex polytope \(\Pi(x,y)\).
Consequently, the problem (1.2) always admits a unique minimizer, denoted by \(\Pi^{U}(C,x,y,\varepsilon)\). Then, \[\lim_{\varepsilon\downarrow 0}\langle C,\Pi^{U}(C,x,y,\varepsilon)\rangle= \inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle \tag{1.3}\] holds (see Subsection 2.4). To give a quantitative error estimate of (1.3), we require the following two assumptions. See Subsections 2.1 and 2.3 to verify that the assumptions are reasonable. **Assumption 1.2**.: \(\Pi(x,y)\neq\operatorname{argmin}_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle\)_._ **Assumption 1.3**.: Let \(U\in C([0,1])\cap C^{1}((0,1])\cap C^{2}((0,1))\) satisfy \(U^{\prime\prime}>0\) on \((0,1)\) and \(\lim_{h\downarrow 0}U^{\prime}(h)=-\infty\). In addition, \(r\mapsto rU^{\prime\prime}(r)\) is non-decreasing in \((0,1)\). We introduce notions to describe our quantitative error estimate of (1.3). **Definition 1.4**.: Let \(U\) be a continuous, strictly convex function on \([0,1]\) with \(U\in C^{1}((0,1])\). Define \(\mathfrak{D}_{U}(x,y)\) for \(x\in\mathcal{P}_{I}\) and \(y\in\mathcal{P}_{J}\) by \[\mathfrak{D}_{U}(x,y)\coloneqq\sup_{\Pi\in\Pi(x,y)}D_{U}(\Pi,x\otimes y).\] **Definition 1.5**.: The _suboptimality gap_ of \(x\in\mathcal{P}_{I}\) and \(y\in\mathcal{P}_{J}\) with respect to \(C\in\mathbb{R}^{I\times J}\) is defined by \[\Delta_{C}(x,y)\coloneqq\inf_{V^{\prime}\in V(x,y)\setminus\operatorname{ argmin}_{V\in V(x,y)}\langle C,V\rangle}\langle C,V^{\prime}\rangle-\inf_{V\in V (x,y)}\langle C,V\rangle,\] where \(V(x,y)\) is the set of vertices of \(\Pi(x,y)\) and set \(\inf\emptyset:=\infty\). In Subsection 2.1, we verify \(\mathfrak{D}_{U}(x,y),\Delta_{C}(x,y)\in(0,\infty)\) under Assumption 1.2. We also confirm in Subsection 2.2 that Definition 1.6 below is well-defined. **Definition 1.6**.: Under Assumption 1.3, we denote by \(e_{U}\) the inverse function of \(U^{\prime}:(0,1]\to U^{\prime}((0,1])\). 
For \(x\in\mathcal{P}_{I}\) and \(y\in\mathcal{P}_{J}\), let \(R_{U}(x,y)\in[1/2,1)\) satisfy \[U^{\prime}(R_{U}(x,y))-U^{\prime}(1-R_{U}(x,y))=\mathfrak{D}_{U}(x,y),\] which is uniquely determined. Define \(\nu_{U}(x,y)\in\mathbb{R}\) by \[\nu_{U}(x,y):=\sup_{r\in(0,R_{U}(x,y)]}\left(U^{\prime}(1-r)+rU^{\prime\prime }(r)\right).\] Our main result is as follows. **Theorem 1.7**.: _Under Assumptions 1.2 and 1.3, the interval_ \[\left(0,\frac{\Delta_{C}(x,y)R_{U}(x,y)}{\mathfrak{D}_{U}(x,y)}\right]\cap \left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)-U^{\prime}( 1)}\right]\] _is well-defined and nonempty. In addition,_ \[\langle C,\Pi^{U}(C,x,y,\varepsilon)\rangle-\inf_{\Pi\in\Pi(x,y)}\langle C, \Pi\rangle\leq\Delta_{C}(x,y)\cdot e_{U}\left(-\frac{\Delta_{C}(x,y)}{ \varepsilon}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\right)\] _holds for \(\varepsilon\) in the above interval._ Let us review related results. Computing an exact solution of a large-scale optimal transport problem becomes problematic when, say, \(N:=\max\{I,J\}>10^{4}\). The best-known practical complexity \(\widetilde{O}(N^{3})\) is attained by an interior point algorithm in [16, Section 5], where \(\widetilde{O}\) omits polylogarithmic factors. Though Chen et al. [2, Informal Theorem I.3] improve this complexity to \((N^{2})^{1+o(1)}\) and Jambulapati et al. [11, Theorem 2.4] provide an algorithm that finds an \(\epsilon\)-approximation in \(\widetilde{O}(N^{2}/\epsilon)\), their practical implementations have not been developed. The tractability of the problem (1.1) is improved by introducing entropic regularization to its objective function, that is, \[\inf_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle-\varepsilon S(\Pi)\right),\] where \[S(z):=-\sum_{k}z_{k}\log z_{k},\quad z\in\mathcal{P}_{K}\] is the _Shannon entropy_. Here, we put \(0\log 0:=0\) due to the continuity \[\lim_{r\downarrow 0}r\log r=0.\] Fang [7] introduces the Shannon entropy to regularize generic linear programs. 
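To make the tractability gain concrete: for the entropic problem the minimizer factors as \(\pi_{ij}=u_{i}e^{-c^{ij}/\varepsilon}v_{j}\), and the scalings \(u,v\) can be computed by alternately matching the two marginals (Sinkhorn iterations). The following minimal sketch is our own illustration of this classical scheme, not code from the paper; the function name and the toy data are ours.

```python
import math

def sinkhorn(C, x, y, eps, iters=500):
    """Approximate the entropic-regularized plan by alternating scalings.

    The minimizer has the form pi_ij = u_i * exp(-C[i][j]/eps) * v_j,
    where u and v are fixed by matching the marginals x and y in turn.
    """
    I, J = len(x), len(y)
    K = [[math.exp(-C[i][j] / eps) for j in range(J)] for i in range(I)]
    u, v = [1.0] * I, [1.0] * J
    for _ in range(iters):
        u = [x[i] / sum(K[i][j] * v[j] for j in range(J)) for i in range(I)]
        v = [y[j] / sum(K[i][j] * u[i] for i in range(I)) for j in range(J)]
    return [[u[i] * K[i][j] * v[j] for j in range(J)] for i in range(I)]

C = [[0.0, 1.0], [1.0, 0.0]]
x = y = [0.5, 0.5]
P = sinkhorn(C, x, y, eps=0.05)
reg_cost = sum(C[i][j] * P[i][j] for i in range(2) for j in range(2))
# for small eps, reg_cost is close to the unregularized optimum (here 0)
```

For small \(\varepsilon\) the kernel entries \(e^{-c^{ij}/\varepsilon}\) underflow quickly, which is one practical reason the behavior of the regularized value as \(\varepsilon\downarrow 0\) deserves quantitative estimates.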
By the continuity and the strict convexity of the Shannon entropy, the entropic regularized problem always has a unique minimizer for each value of the regularization parameter. Cominetti and San Martín [3, Theorem 5.8] prove that the minimizer of the regularized problem converges exponentially to a certain minimizer of the given problem as the regularization parameter goes to zero. Weed [20, Theorem 5] provides a quantitative error estimate for the regularized problem, whose convergence rate is exponential. Note that the entropic regularization allows us to develop approximation algorithms for the problem (1.1). We refer to [17] and references therein. Different types of regularizers have been introduced in recent studies. For example, Muzellec et al. [13] use the Tsallis entropy for ecological inference. Dessein et al. [5] and Daniels et al. [4] introduce the Bregman and \(f\)-divergences to regularize optimal transport problems, respectively. Apart from entropy and divergence, Klatt et al. [12] use convex functions of Legendre type for regularization. Regularization by the Shannon entropy is equivalent to that by the Kullback-Leibler divergence. Here, the _Kullback-Leibler divergence_ of \(z\in\mathcal{P}_{K}\) with respect to \(w\in\mathcal{P}_{K}\) is given by \[D_{\mathrm{KL}}(z,w):=\sum_{k=1}^{K}z_{k}\left(\log z_{k}-\log w_{k}\right),\] where we put \(r\log 0:=\infty\) for \(r>0\). Note that the Kullback-Leibler divergence and its dual are the unique members that belong to both the Bregman and \(f\)-divergence classes (see [1] for instance). Let us define \(U_{o}\in C([0,\infty))\cap C^{\infty}((0,\infty))\) by \[U_{o}(r):=\begin{cases}r\log r&\text{for }r\in(0,\infty),\\ 0&\text{for }r=0.\end{cases}\] Then, \(D_{U_{o}}=D_{\mathrm{KL}}\) holds on \(\mathcal{P}_{K}\times\mathcal{P}_{K}\). There are other strictly convex functions \(U\) such that \(D_{U}=D_{\mathrm{KL}}\) holds (see Subsection 4.1). 
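The identity \(D_{U_{o}}=D_{\mathrm{KL}}\) on \(\mathcal{P}_{K}\times\mathcal{P}_{K}\) is easy to verify numerically: the linear part of the Bregman expansion cancels because \(z\) and \(w\) both sum to \(1\). A small self-contained check (our illustration; the helper names are ours):

```python
import math

def U_o(r):
    # U_o(r) = r log r, extended by 0 at r = 0
    return r * math.log(r) if r > 0 else 0.0

def dU_o(r):
    return math.log(r) + 1.0  # derivative U_o'(r)

def bregman(z, w):
    # D_U(z, w) = sum_k [ U(z_k) - U(w_k) - (z_k - w_k) U'(w_k) ]
    return sum(U_o(zk) - U_o(wk) - (zk - wk) * dU_o(wk) for zk, wk in zip(z, w))

def kl(z, w):
    return sum(zk * (math.log(zk) - math.log(wk)) for zk, wk in zip(z, w) if zk > 0)

z = [0.2, 0.3, 0.5]
w = [1.0 / 3.0] * 3
# bregman(z, w) and kl(z, w) agree because z and w both sum to 1
```

The cancellation of the linear terms only uses \(\sum_k z_k=\sum_k w_k\), which is exactly why adding an affine function to \(U\) leaves \(D_U\) unchanged on probability vectors.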
Our main result, Theorem 1.7, with the case \(U=U_{o}\) recovers Weed's work [20, Theorem 5]. Theorem 1.7 with the relation (2.1) guarantees that the regularized optimal value approaches the true optimal value at least exponentially fast (see Subsection 2.3). Numerical experiments demonstrate that suitable Bregman divergences give smaller errors than the Kullback-Leibler divergence. This paper is organized as follows. In Section 2, we verify that Assumptions 1.2 and 1.3 are reasonable. Section 3 proves Theorem 1.7. In Section 4, we show that the normalization of \(U\) does not affect the error estimate in Theorem 1.7. We then consider the effect of scaling of data and the domain of \(U\) on the error estimate. Section 5 provides examples of \(U\) satisfying Assumption 1.3. In Section 6, we give numerical experiments and show, in particular, that faster convergence is achieved when regularizations other than the Kullback-Leibler divergence are considered. Finally, in Section 7, we summarize the contents of this paper and give directions for future research. ## 2. Preliminaries In this section, we verify that Assumptions 1.2 and 1.3 are reasonable and Definition 1.6 is well-defined. We also show that \(\mathfrak{D}_{U}(x,y),\Delta_{C}(x,y)\in(0,\infty)\) under Assumption 1.2. Throughout, as in the introduction, we fix \(I,J\in\mathbb{N}\) and take \(C\in\mathbb{R}^{I\times J}\), \(x\in\mathcal{P}_{I}\), and \(y\in\mathcal{P}_{J}\). Let \[\Omega:=\mathbb{R}^{I\times J}\times\mathcal{P}_{I}\times\mathcal{P}_{J}\times(0,\infty)\] and \(U\) denote a continuous, strictly convex function on \([0,1]\) with \(U\in C^{1}((0,1])\), unless otherwise stated. By the strict convexity of \(U\) on \([0,1]\), \[d_{U}(r,r_{0}):=U(r)-U(r_{0})-(r-r_{0})U^{\prime}(r_{0})\geq 0\] holds for \(r\in[0,1]\) and \(r_{0}\in(0,1]\). In addition, for \(r,r_{0}\in(0,1]\), \(d_{U}(r,r_{0})=0\) if and only if \(r=r_{0}\). Recall the limiting behavior of \(U\). 
**Lemma 2.1**.: _The limit_ \[U^{\prime}(0):=\lim_{h\downarrow 0}U^{\prime}(h)\] _exists in \([-\infty,\infty)\) and \(\lim_{h\downarrow 0}hU^{\prime}(h)=0\) holds._ Proof.: By the strict convexity of \(U\) on \([0,1]\), \(U^{\prime}\) is strictly increasing on \((0,1]\) and \(\lim_{h\downarrow 0}U^{\prime}(h)\in[-\infty,\infty)\) holds. Thus, the first assertion follows. If \(U^{\prime}(0)\in\mathbb{R}\), then \(\lim_{h\downarrow 0}hU^{\prime}(h)=0\) holds. Assume \(U^{\prime}(0)=-\infty\). The Taylor expansion yields \[U(r)-U(h)\geq(r-h)U^{\prime}(h)\] for all \(r,h\in(0,1]\). By the continuity of \(U\), taking the limit as \(r\downarrow 0\) gives \[U(0)-U(h)\geq-hU^{\prime}(h)\] for \(h\in(0,1]\). If \(h\) is small enough, then \(U^{\prime}(h)<0\) by the monotonicity of \(U^{\prime}\) on \((0,1]\) together with \(U^{\prime}(0)=-\infty\). Thus, we conclude \[0=\lim_{h\downarrow 0}(U(h)-U(0))\leq\liminf_{h\downarrow 0}hU^{\prime}(h) \leq\limsup_{h\downarrow 0}hU^{\prime}(h)\leq 0,\] which leads to \(\lim_{h\downarrow 0}hU^{\prime}(h)=0\). This completes the proof of the lemma. By Lemma 2.1, the limit \[d_{U}(r,0):=\lim_{r_{0}\downarrow 0}d_{U}(r,r_{0})\in[0,\infty]\] exists. In the above relation and throughout, we adhere to the following natural convention: \[u\pm(-\infty)=\mp\infty,\qquad\lambda\cdot(-\infty)=-\infty,\qquad-\infty\leq -\infty<u<\infty\leq\infty\] and so on for \(u\in\mathbb{R}\) and \(\lambda>0\). Thus, we can regard \(d_{U}\) (resp. \(D_{U}\)) as a function on \([0,1]\times[0,1]\) (resp. \(\mathcal{P}_{K}\times\mathcal{P}_{K}\)) valued in \([0,\infty]\). For \(r\in[0,1]\), \(d_{U}(r,0)=0\) if and only if \(r=0\). Moreover, \(d_{U}(r,0)=\infty\) for some \(r\in(0,1]\) is equivalent to \(U^{\prime}(0)=-\infty\). 
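For a concrete instance of Lemma 2.1, take \(U=U_{o}\): then \(U^{\prime}(h)=\log h+1\to-\infty\) while \(hU^{\prime}(h)=h\log h+h\to 0\) as \(h\downarrow 0\). A quick numerical check (our illustration):

```python
import math

# U_o(r) = r log r has U_o'(h) = log h + 1, which diverges to -infinity,
# yet h * U_o'(h) = h log h + h tends to 0 as h decreases to 0 (Lemma 2.1).
hs = [10.0 ** (-k) for k in (2, 4, 8)]
derivs = [math.log(h) + 1.0 for h in hs]
products = [h * d for h, d in zip(hs, derivs)]
```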
To consider the finiteness of \(\mathfrak{D}_{U}(x,y)\), we define the _support_ of \(z\in\mathcal{P}_{K}\) by \[\operatorname{spt}(z):=\{k\ |\ z_{k}>0\}.\] **Lemma 2.2**.: _For \(\Pi\in\Pi(x,y)\), \(\operatorname{spt}(\Pi)\subset\operatorname{spt}(x)\times\operatorname{spt}(y)\) holds. Moreover, \(\operatorname{spt}(x)\times\operatorname{spt}(y)=\operatorname{spt}(x\otimes y)\) follows._ Proof.: For \((i,j)\in\operatorname{spt}(\Pi)\), we have \[x_{i}=\sum_{l=1}^{J}\pi_{il}\geq\pi_{ij}>0,\qquad y_{j}=\sum_{l=1}^{I}\pi_{lj }\geq\pi_{ij}>0,\] which ensure that \(i\in\operatorname{spt}(x)\) and \(j\in\operatorname{spt}(y)\), that is, \((i,j)\in\operatorname{spt}(x)\times\operatorname{spt}(y)\). For \((i,j)\), it turns out that \[(i,j)\in\operatorname{spt}(x)\times\operatorname{spt}(y)\quad\Longleftrightarrow \quad x_{i}>0\text{ and }y_{j}>0\quad\Longleftrightarrow\quad x_{i}y_{j}>0\quad \Longleftrightarrow\quad(i,j)\in\operatorname{spt}(x\otimes y).\] This completes the proof of the lemma. By Lemma 2.2, we find that \(D_{U}(\cdot,x\otimes y)\) is continuous on a compact set \(\Pi(x,y)\) so that \(\mathfrak{D}_{U}(x,y)<\infty\). ### On Assumption 1.2 and Definitions 1.4, 1.5 There is nothing to prove on the optimal transport problem (1.1) in the case of \(\Pi(x,y)=\operatorname{argmin}_{\Pi(x,y)}\langle C,\Pi\rangle\). Thus, we suppose Assumption 1.2, in which \(\Pi(x,y)\) contains an element other than \(x\otimes y\) and hence \(\mathfrak{D}_{U}(x,y)>0\) holds. Let \(V(x,y)\) be the set of the vertices of \(\Pi(x,y)\), that is, \(V(x,y)\) is the set with the smallest cardinality among the sets whose convex hull coincides with \(\Pi(x,y)\). Note that \(\operatorname{argmin}_{V\in V(x,y)}\langle C,V\rangle=V(x,y)\) yields \(\operatorname{argmin}_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle=\Pi(x,y)\). Thus, under Assumption 1.2, \(V(x,y)\setminus\operatorname{argmin}_{V\in V(x,y)}\langle C,V\rangle\) is not empty and \(\Delta_{C}(x,y)\in(0,\infty)\) holds. 
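In the smallest nontrivial case \(I=J=2\), the polytope \(\Pi(x,y)\) is a segment parametrized by \(\pi_{11}\in[\max\{0,x_{1}+y_{1}-1\},\min\{x_{1},y_{1}\}]\), so \(V(x,y)\) consists of the two endpoint plans and \(\Delta_{C}(x,y)\) is just the difference of their costs. The sketch below illustrates Definition 1.5 in this special case only; the helper names and toy data are ours, not the paper's.

```python
def vertices_2x2(x, y):
    # The 2x2 transport polytope is the segment of plans with
    # pi_11 in [max(0, x1 + y1 - 1), min(x1, y1)]; its endpoints are the vertices.
    lo = max(0.0, x[0] + y[0] - 1.0)
    hi = min(x[0], y[0])
    def plan(p):
        return [[p, x[0] - p], [y[0] - p, 1.0 - x[0] - y[0] + p]]
    return [plan(lo), plan(hi)]

def cost(C, P):
    return sum(C[i][j] * P[i][j] for i in range(2) for j in range(2))

C = [[0.0, 1.0], [1.0, 0.0]]
x, y = [0.4, 0.6], [0.7, 0.3]
vals = sorted(cost(C, V) for V in vertices_2x2(x, y))
gap = vals[1] - vals[0]  # the suboptimality gap Delta_C(x, y) of Definition 1.5
```

For these data the two vertex costs are \(0.3\) and \(0.9\), so \(\Delta_{C}(x,y)=0.6\); Assumption 1.2 holds here since the vertex costs differ.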
### On Definition 1.6 Let \(U\) satisfy Assumption 1.3. By the strict convexity of \(U\) on \([0,1]\) together with \(U^{\prime}(0)=-\infty\), the function \(U^{\prime}\) on \((0,1]\) has the inverse function \(e_{U}:(-\infty,U^{\prime}(1)]\to(0,1]\). We observe from \(U^{\prime\prime}>0\) on \((0,1)\) that the function \(r\mapsto U^{\prime}(r)-U^{\prime}(1-r)\) is strictly increasing on \((0,1)\). This with the properties \[U^{\prime}\left(\frac{1}{2}\right)-U^{\prime}\left(1-\frac{1}{2}\right)=0,\qquad\lim_{r\uparrow 1}(U^{\prime}(r)-U^{\prime}(1-r))=\infty\] guarantees the existence and uniqueness of \(R_{U}(x,y)\). Moreover, since \(r\mapsto rU^{\prime\prime}(r)\) is non-decreasing in \((0,1)\), we find \[\sup_{r\in(0,R_{U}(x,y)]}\left(U^{\prime}(1-r)+rU^{\prime\prime}(r)\right)\leq U^{\prime}(1)+R_{U}(x,y)U^{\prime\prime}(R_{U}(x,y))<\infty.\] Thus, all the notions in Definition 1.6 are well-defined under Assumption 1.3. ### On Assumption 1.3 Due to Aleksandrov's theorem (e.g., [6, Theorem 6.9]), \(U\) is twice differentiable almost everywhere on \([0,1]\). In the case of \(U\in C^{2}((0,1))\), the strict convexity leads to \(U^{\prime\prime}>0\) almost everywhere on \((0,1)\). Thus, the requirement \(U\in C^{2}((0,1))\) together with \(U^{\prime\prime}>0\) on \((0,1)\) is mild. Let \(U\in C([0,1])\cap C^{1}((0,1])\cap C^{2}((0,1))\) such that \(U^{\prime\prime}>0\) on \((0,1)\). To apply some algorithms, such as gradient descent algorithms [17, Sections 4.4, 4.5, 9.3], we require that \(\Pi^{U}(\omega)\) belongs to the interior of the convex polytope \(\Pi(x,y)\) for any \(\omega=(C,x,y,\varepsilon)\in\Omega\). It follows from [18, Lemma 3.7 and Remark 3.9] that \(\Pi^{U}(\omega)\) belongs to the interior of \(\Pi(x,y)\) for any \(\omega=(C,x,y,\varepsilon)\in\Omega\) if and only if \(U^{\prime}(0)=-\infty\). Let \(U\in C^{2}((0,1))\) satisfy \(U^{\prime\prime}>0\) on \((0,1)\). 
Define \(q_{U}:(0,1)\to[-\infty,\infty]\) by \[q_{U}(r):=rU^{\prime\prime}(r)\cdot\limsup_{h\downarrow 0}\frac{1}{h}\left( \frac{1}{U^{\prime\prime}(r+h)}-\frac{1}{U^{\prime\prime}(r)}\right),\qquad Q_ {U}:=\sup_{r\in(0,1)}q_{U}(r).\] If \(Q_{U}<\infty\), then \(U^{\prime}(0)=-\infty\) yields \(Q_{U}\geq 1\) by [9, Corollaries 2.6, 2.7]. Note that if \(U\in C^{3}((0,1))\), then \[q_{U}(r)=-\frac{rU^{\prime\prime\prime}(r)}{U^{\prime\prime}(r)}\quad\text{ for }r\in(0,1).\] In [9], the notion of \(q_{U}\) is introduced to determine the hierarchy of \(U\) in terms of concavity associated with \(U^{\prime}\). See also [15], where \(q_{U}\) is used to classify convex functions into displacement convex classes. For the definition of the displacement convex classes, see [19, Chapter 17]. It follows from [9, Theorem 2.4] that, for \(W\in C^{2}((0,1))\) satisfying \(W^{\prime\prime}>0\) on \((0,1)\), if \(q_{U}<\infty,q_{W}>-\infty\) hold almost everywhere on \((0,1)\) and \(q_{U}\leq q_{W}\) holds on \((0,1)\), then there exist \(\lambda>0\) and \(\mu_{1}\in\mathbb{R}\) such that \(U^{\prime}\geq\lambda W^{\prime}+\mu_{1}\) holds on \((0,1]\), consequently, \[e_{U}(\tau)\leq e_{W}(\lambda^{-1}(\tau-\mu_{1}))\quad\text{on }\tau\in(U^{ \prime}(0),\lambda W^{\prime}(1)+\mu_{1}].\] Thus, under the assumption \(U\in C([0,1])\cap C^{1}((0,1])\cap C^{2}((0,1))\) such that \(U^{\prime\prime}>0\) on \((0,1)\) and \(U^{\prime}(0)=-\infty\), if \(Q_{U}<\infty\), then the choice \(Q_{U}=1\) is the best possible for the estimate in Theorem 1.7. Moreover, for \(U\in C^{2}((0,1))\) satisfying \(U^{\prime\prime}>0\) on \((0,1)\), \(Q_{U}=1\) is equivalent to that \(r\mapsto rU^{\prime\prime}(r)\) is non-decreasing on \((0,1)\) by [9, Corollary 2.6]. Thus, we confirm that Assumption 1.3 is reasonable. 
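As a sanity check on \(q_{U}\): for \(U=U_{o}\) one has \(U^{\prime\prime}(r)=1/r\) and \(U^{\prime\prime\prime}(r)=-1/r^{2}\), so the displayed formula gives \(q_{U_{o}}(r)=1\) for every \(r\in(0,1)\) and hence \(Q_{U_{o}}=1\). A direct numerical evaluation (our illustration):

```python
def q_Uo(r):
    # q_U(r) = -r * U'''(r) / U''(r) with U = U_o, i.e. U''(r) = 1/r and
    # U'''(r) = -1/r^2; the quotient simplifies to the constant 1.
    second = 1.0 / r
    third = -1.0 / (r * r)
    return -r * third / second

samples = [q_Uo(r) for r in (0.1, 0.25, 0.5, 0.9)]
```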
Furthermore, if we choose \(W=U_{o}\), then \(q_{W}\equiv 1\) holds on \((0,1)\), and hence \(Q_{U}=1\) implies the existence of \(\lambda>0\) and \(\mu_{1}\in\mathbb{R}\) such that \[e_{U}(\tau)\leq\exp\left(\lambda^{-1}(\tau-\mu_{1})\right)\quad\text{for }\tau\in U ^{\prime}((0,1)). \tag{2.1}\] Thus, the error estimate in Theorem 1.7 is faster than the exponential decay. ### Asymptotic behavior of the error Fix \(\omega=(C,x,y,\varepsilon)\in\Omega\). Let \(\Pi^{*}\in\Pi(x,y)\) be an optimal transport plan. We observe from the definition of \(\Pi^{U}(\omega)\) that \[\langle C,\Pi^{U}(\omega)\rangle+\varepsilon D_{U}(\Pi^{U}(\omega),x\otimes y )=\inf_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle+\varepsilon D_{U}(\Pi,x \otimes y)\right)\leq\langle C,\Pi^{*}\rangle+\varepsilon D_{U}(\Pi^{*},x \otimes y). \tag{2.2}\] This with the nonnegativity of \(D_{U}\) yields \[\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*}\rangle\leq\varepsilon D_{U} (\Pi^{*},x\otimes y),\] proving (1.3). Moreover, the limit \[\Pi^{U}(C,x,y,0):=\lim_{\varepsilon\downarrow 0}\Pi^{U}(C,x,y,\varepsilon)\] exists and satisfies \[\operatorname*{argmin}_{\Pi^{\prime}\in\operatorname*{argmin}_{n\in\Pi(x,y)}(C, \Pi)}D_{U}(\Pi^{\prime},x\otimes y)=\{\Pi^{U}(C,x,y,0)\}\] (see [18, Theorem 3.11] for instance). ## 3. Proof of Theorem 1.7 Before proving Theorem 1.7, we consider the normalization of \(U\) since the correspondence \(U\mapsto D_{U}\) is not injective. **Lemma 3.1**.: _For \(\lambda>0\) and \(\mu_{0},\mu_{1}\in\mathbb{R}\), define \(U_{\lambda,\mu_{0},\mu_{1}}:[0,1]\to\mathbb{R}\) by_ \[U_{\lambda,\mu_{0},\mu_{1}}(r):=\lambda U(r)+\mu_{1}r+\mu_{0}.\] _Then, \(d_{U_{\lambda,\mu_{0},\mu_{1}}}=\lambda d_{U}\) holds on \([0,1]\times[0,1]\), consequently, \(D_{U_{\lambda,\mu_{0},\mu_{1}}}=\lambda D_{U}\) on \(\mathcal{P}_{K}\times\mathcal{P}_{K}\). 
If \(U\) satisfies Assumption 1.3, then so does \(U_{\lambda,\mu_{0},\mu_{1}}\) and \(q_{U_{\lambda,\mu_{0},\mu_{1}}}=q_{U}\) holds on \((0,1)\)._ Since the proof is straightforward, we omit it. Let \(U\) satisfy Assumption 1.3. For \(\mu_{0},\mu_{1}\in\mathbb{R}\) and \(\omega=(C,x,y,\varepsilon)\in\Omega\), we have \[\Pi^{U_{1},\mu_{0},\mu_{1}}(\omega)=\Pi^{U}(\omega), \mathfrak{D}_{U_{1},\mu_{0},\mu_{1}}(x,y)=\mathfrak{D}_{U}(x,y),\] \[R_{U_{1},\mu_{0},\mu_{1}}(x,y)=R_{U}(x,y), \nu_{U_{1},\mu_{0},\mu_{1}}(x,y)=\nu_{U}(x,y)+\mu_{1},\] and \(e_{U_{1},\mu_{0},\mu_{1}}(\tau)=e_{U}(\tau-\mu_{1})\) for \(\tau\in U_{1,\mu_{0},\mu_{1}}^{\prime}((0,1])\) together with \(U_{1,\mu_{0},\mu_{1}}^{\prime}(1)=U^{\prime}(1)+\mu_{1}\). Thus, in Theorem 1.7, we can normalize \(U\) to \(U(0)=U(1)=0\) as well as the case of \(U=U_{o}\) without loss of generality. For the reason and the effect of choice of \(\lambda>0\), see Section 4.1. Throughout the rest of this section, we suppose Assumptions 1.2 and 1.3 together with \(U(0)=U(1)=0\). We prepare two lemmas to prove Theorem 1.7. The following proof strategy is aligned with the argument of [20, Lemmas 6-8]. **Lemma 3.2**.: _For \(r,s,t\in[0,1]\) and \(r_{0}\in(0,1]\),_ \[U\left((1-t)r+ts\right) \geq(1-t)U(r)+tU(s)+rU(1-t)+sU(t),\] \[d_{U}((1-t)r+ts,r_{0}) \geq(1-t)d_{U}(r,r_{0})+td_{U}(s,r_{0})+rU(1-t)+sU(t).\] Proof.: For \(r,s,t\in[0,1]\) and \(r_{0}\in(0,1]\), if the first inequality holds true, then it holds that \[d_{U}((1-t)r+ts,r_{0}) =U\left((1-t)r+ts\right)-U(r_{0})-\left\{(1-t)r+ts-r_{0}\right\}U ^{\prime}(r_{0})\] \[\geq(1-t)U(r)+tU(s)+rU(1-t)+sU(t)-U(r_{0})-\left\{(1-t)r+ts-r_{0} \right\}U^{\prime}(r_{0})\] \[=(1-t)d_{U}(r,r_{0})+td_{U}(s,r_{0})+rU(1-t)+sU(t),\] which is the second inequality. To show the first inequality, set \[G(r,s,t):=(1-t)U(r)+tU(s)+rU(1-t)+sU(t)-U\left((1-t)r+ts\right)\] for \(r,s,t\in[0,1]\). 
By the continuity of \(G\), it is enough to show \[\max_{r\in[0,1]}G(r,s,t)\leq 0\quad\text{for }s,t\in(0,1). \tag{3.1}\] Note that \[\frac{\partial}{\partial r}G(r,s,t)=U(1-t)+(1-t)\left(U^{\prime}(r)-U^{\prime}\left((1-t)r+ts\right)\right),\] \[\frac{\partial^{2}}{\partial r^{2}}G(r,s,t)=(1-t)\left[U^{\prime\prime}(r)-(1-t)U^{\prime\prime}((1-t)r+ts)\right]\] for \(r,s,t\in(0,1)\). Let us now show \[\max_{r\in[0,1]}G(r,s,t)=\max\left\{G(0,s,t),G(1,s,t)\right\}\quad\text{for }s,t\in(0,1). \tag{3.2}\] Let \(s,t\in(0,1)\). On one hand, for \(r\in(0,1)\) with \(r\leq s\), since \(U^{\prime}\) is strictly increasing on \((0,1)\), we have \(U^{\prime}(r)-U^{\prime}((1-t)r+ts)\leq 0\) and hence \(\partial_{r}G(r,s,t)\leq U(1-t)\). Note that \(U(1-t)<0\) follows from the strict convexity of \(U\) with the condition \(U(0)=U(1)=0\). This implies that \(\partial_{r}G(r,s,t)<0\) if \(r\leq s\) and \[\max_{r\in[0,s]}G(r,s,t)=G(0,s,t),\quad\text{in particular},\quad G(s,s,t)<G(0,s,t). \tag{3.3}\] On the other hand, for \(r\in(0,1)\) with \(r>s\), we have \((1-t)r+ts<r\) and \[(1-t)rU^{\prime\prime}((1-t)r+ts)<[(1-t)r+ts]U^{\prime\prime}((1-t)r+ts)\leq rU^{\prime\prime}(r)\] by \(U^{\prime\prime}>0\) and the monotonicity of \(r\mapsto rU^{\prime\prime}(r)\) on \((0,1)\). This yields \(\partial_{r}^{2}G(r,s,t)>0\). If \(\partial_{r}G(r_{0},s,t)=0\) holds for some \(r_{0}\in[s,1]\), then \[\max_{r\in[s,1]}G(r,s,t)=\max\{G(s,s,t),G(1,s,t)\}. \tag{3.4}\] In contrast, if \(\partial_{r}G(r,s,t)<0\) always holds, then \[\max_{r\in[s,1]}G(r,s,t)=G(s,s,t). \tag{3.5}\] Summarizing the above relations (3.3), (3.4), and (3.5), we have (3.2). 
Since \(U(0)=U(1)=0\), a direct computation gives \[\frac{\partial^{2}}{\partial s^{2}}G(0,s,t)=\frac{\partial^{2}}{\partial s^{2 }}\left(tU(s)+sU(t)-U\left(ts\right)\right)=tU^{\prime\prime}(s)-t^{2}U^{ \prime\prime}(ts)=\frac{t}{s}\left(sU^{\prime\prime}(s)-tsU^{\prime\prime}(ts )\right)\geq 0\] for \(s,t\in(0,1)\), where the inequality follows from the monotonicity of \(r\mapsto rU^{\prime\prime}(r)\) on \((0,1)\). Thus, for \(t\in(0,1)\), \(G(0,\cdot,t)\) is convex on \([0,1]\) and \[\max_{s\in[0,1]}G(0,s,t)=\max\{G(0,0,t),G(0,1,t)\}=0. \tag{3.6}\] Next, we find \[\frac{\partial}{\partial s}G(1,s,t)=\frac{\partial}{\partial s}\left(tU(s)+U( 1-t)+sU(t)-U(1-t+ts)\right)=tU^{\prime}(s)+U(t)-tU^{\prime}\left(1-t+ts\right) <U(t)<0\] for \(s,t\in(0,1)\), where the first inequality follows from the monotonicity of \(U^{\prime}\) on \((0,1)\). This leads to \[\max_{s\in[0,1]}G(1,s,t)=G(1,0,t)=0 \tag{3.7}\] for \(t\in(0,1)\). Thus, we deduce (3.1) from (3.2) together with (3.6) and (3.7). This proves the lemma. Recall that for any \(D>0\), there exists \(R\in(1/2,1)\) uniquely such that \(U^{\prime}(R)-U^{\prime}(1-R)=D\) (see Subsection 2.2). **Lemma 3.3**.: _For \(D>0\) and \(R\in(1/2,1)\) with \(U^{\prime}(R)-U^{\prime}(1-R)=D\),_ \[r\mapsto Dr-U(r)-U(1-r)\] _is strictly increasing on \([0,R]\). Moreover, it holds that_ \[-U(r)-U(1-r)\leq-rU^{\prime}(r)+r\sup_{\rho\in(0,R]}(U^{\prime}(1-\rho)+\rho U ^{\prime\prime}(\rho))\] _for \(r\in(0,R]\)._ Proof.: We calculate \[\frac{\mathrm{d}^{2}}{\mathrm{d}r^{2}}(Dr-U(r)-U(1-r))=-U^{\prime\prime}(r)-U ^{\prime\prime}(1-r)<0\] for \(r\in(0,1)\), consequently, \[\frac{\mathrm{d}}{\mathrm{d}r}(Dr-U(r)-U(1-r))>\frac{\mathrm{d}}{\mathrm{d}r} (Dr-U(r)-U(1-r))\Bigg{|}_{r=R}=D-U^{\prime}(R)+U^{\prime}(1-R)=0\] for \(r\in(0,R)\). This proves the first assertion. 
We also find \[\frac{\mathrm{d}}{\mathrm{d}r}\left(U(r)+U(1-r)-rU^{\prime}(r)+r\sup_{\rho\in(0,R]}(U^{\prime}(1-\rho)+\rho U^{\prime\prime}(\rho))\right)\] \[=-U^{\prime}(1-r)-rU^{\prime\prime}(r)+\sup_{\rho\in(0,R]}(U^{\prime}(1-\rho)+\rho U^{\prime\prime}(\rho))\] \[\geq 0\] for \(r\in(0,R]\). This together with Lemma 2.1 and the assumption \(U(0)=U(1)=0\) yields \[U(r)+U(1-r)-rU^{\prime}(r)+r\sup_{\rho\in(0,R]}(U^{\prime}(1-\rho)+\rho U^{\prime\prime}(\rho))\] \[\geq\lim_{r\downarrow 0}\left(U(r)+U(1-r)-rU^{\prime}(r)+r\sup_{\rho\in(0,R]}(U^{\prime}(1-\rho)+\rho U^{\prime\prime}(\rho))\right)=0\] for \(r\in(0,R]\). This proves the second assertion of the lemma. Proof of Theorem 1.7.: Let \(\omega=(C,x,y,\varepsilon)\in\Omega\). Then, \(\Delta_{C}(x,y),\mathfrak{D}_{U}(x,y)\in(0,\infty)\) hold as mentioned in Subsection 2.1. We also have \[U^{\prime}(1)\leq U^{\prime}(1)+\lim_{r\downarrow 0}rU^{\prime\prime}(r)\leq\nu_{U}(x,y)<\infty\] and \(\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)-U^{\prime}(1)\in(0,\infty)\). Thus, the interval \[\left(0,\frac{\Delta_{C}(x,y)R_{U}(x,y)}{\mathfrak{D}_{U}(x,y)}\right]\cap\left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)-U^{\prime}(1)}\right]\] is well-defined and nonempty. Let us choose \(\varepsilon\) from the interval. Note that \[\varepsilon\in\left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)-U^{\prime}(1)}\right]\] is equivalent to \[-\frac{\Delta_{C}(x,y)}{\varepsilon}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\in U^{\prime}((0,1]).\] Recall that \(V(x,y)\) is the vertex set of \(\Pi(x,y)\). There exists a unique family \(\{t_{V}\}_{V\in V(x,y)}\subset[0,1]\) such that \[\sum_{V\in V(x,y)}t_{V}=1,\qquad\Pi^{U}(\omega)=\sum_{V\in V(x,y)}t_{V}V.\] Set \[V_{0}(x,y):=\operatorname*{argmin}_{V\in V(x,y)}\langle C,V\rangle,\quad t:=1-\sum_{V\in V_{0}(x,y)}t_{V}.\] By Assumption 1.2, \(V_{0}(x,y)\neq V(x,y)\) holds. 
Since \(\Pi^{U}(\omega)\) belongs to the interior of the convex polytope \(\Pi(x,y)\), we find that \(t_{V}\in(0,1)\) for \(V\in V(x,y)\), consequently, \(t\in(0,1)\). We also set \[\Pi^{*}:=\sum_{V\in V_{0}(x,y)}\frac{t_{V}}{1-t}V,\quad\Pi^{\prime}:=\sum_{V^{ \prime}\in V(x,y)\setminus V_{0}(x,y)}\frac{t_{V^{\prime}}}{t}V^{\prime}.\] It turns out that \(\Pi^{*},\Pi^{\prime}\in\Pi(x,y)\) and \[\Pi^{U}(\omega)=(1-t)\Pi^{*}+t\Pi^{\prime}.\] We find that \[\langle C,V\rangle=\inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle\quad\text{for }V\in V _{0}(x,y),\quad\text{ in particular}\quad\langle C,\Pi^{*}\rangle=\inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle.\] Setting \[r:=\frac{\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*}\rangle}{\Delta_{C} (x,y)},\] we observe from (2.2) with the definition of \(\mathfrak{D}_{U}(x,y)\) that \[\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*}\rangle\leq\varepsilon\left( D_{U}(\Pi^{*},x\otimes y)-D_{U}(\Pi^{U}(\omega),x\otimes y)\right)\leq \varepsilon\mathfrak{D}_{U}(x,y). 
\tag{3.8}\] We also find \[\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*}\rangle=t\langle C,\Pi^{\prime}- \Pi^{*}\rangle\geq t\left(\inf_{V^{\prime}\in V(x,y)\setminus V_{0}(x,y)}\langle C,V^{\prime}\rangle-\inf_{V\in V_{0}(x,y)}\langle C,V\rangle\right)=t\Delta_{C}(x,y).\] These with the condition \(\varepsilon\in(0,\Delta_{C}(x,y)R_{U}(x,y)/\mathfrak{D}_{U}(x,y)]\) yield \[t\leq r=\frac{\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*}\rangle}{ \Delta_{C}(x,y)}\leq\frac{\varepsilon\mathfrak{D}_{U}(x,y)}{\Delta_{C}(x,y)} \leq R_{U}(x,y).\] It follows from Lemma 2.2 with the second inequality in Lemma 3.2 that \[D_{U}(\Pi^{U}(\omega),x\otimes y) =D_{U}((1-t)\Pi^{*}+t\Pi^{\prime},x\otimes y)\] \[=\sum_{(i,j)\in\operatorname{spt}(x\otimes y)}d_{U}((1-t)\pi_{ij }^{*}+t\pi_{ij}^{\prime},x_{i}y_{j})\] \[\geq\sum_{(i,j)\in\operatorname{spt}(x\otimes y)}\left\{(1-t)d_ {U}(\pi_{ij}^{*},x_{i}y_{j})+td_{U}(\pi_{ij}^{\prime},x_{i}y_{j})+\pi_{ij}^{*} U(1-t)+\pi_{ij}^{\prime}U(t)\right\}\] \[=(1-t)D_{U}(\Pi^{*},x\otimes y)+tD_{U}(\Pi^{\prime},x\otimes y)+ U(1-t)+U(t).\] This and Lemma 3.3 together with \(t\leq r\leq R_{U}(x,y)\) yield \[D_{U}(\Pi^{*},x\otimes y)-D_{U}(\Pi^{U}(\omega),x\otimes y) \leq tD_{U}(\Pi^{*},x\otimes y)-tD_{U}(\Pi^{\prime},x\otimes y)-U( t)-U(1-t)\] \[\leq t\mathfrak{D}_{U}(x,y)-U(t)-U(1-t)\] \[\leq r\mathfrak{D}_{U}(x,y)-U(r)-U(1-r)\] \[\leq r\mathfrak{D}_{U}(x,y)-rU^{\prime}(r)+r\nu_{U}(x,y)\] \[=r(\mathfrak{D}_{U}(x,y)-U^{\prime}(r)+\nu_{U}(x,y)).\] Combining this with (3.8), we find \[\frac{\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*}\rangle}{\varepsilon} \leq D_{U}(\Pi^{*},x\otimes y)-D_{U}(\Pi^{U}(\omega),x\otimes y)\leq r( \mathfrak{D}_{U}(x,y)-U^{\prime}(r)+\nu_{U}(x,y)),\] which leads to \[U^{\prime}(r)\leq-\frac{\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*} \rangle}{\varepsilon r}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)=-\frac{\Delta_{C}( x,y)}{\varepsilon}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y).\] It follows from the monotonicity of 
\(U^{\prime}\) on \((0,1]\) that \[\frac{\langle C,\Pi^{U}(\omega)\rangle-\langle C,\Pi^{*}\rangle}{\Delta_{C}( x,y)}=r=e_{U}(U^{\prime}(r))\leq e_{U}\left(-\frac{\Delta_{C}(x,y)}{\varepsilon}+ \mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\right),\] that is, \[\langle C,\Pi^{U}(\omega)\rangle-\inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle\leq \Delta_{C}(x,y)\cdot e_{U}\left(-\frac{\Delta_{C}(x,y)}{\varepsilon}+ \mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\right)\] as desired. ## 4. Normalization and scaling In this section, we first show that the normalization of a strictly convex function does not affect the error estimate in Theorem 1.7. We then consider the effect of scaling of data and the domain of a strictly convex function on the error estimate. ### Normalization Let \(U\in C([0,1])\cap C^{1}((0,1])\cap C^{2}((0,1))\) satisfy \(U^{\prime\prime}>0\) on \((0,1)\). For \(\mu_{0},\mu_{1}\in\mathbb{R}\) and \(\lambda>0\), define \(U_{\lambda,\mu_{0},\mu_{1}}\in C([0,1])\cap C^{1}((0,1])\cap C^{2}((0,1))\) by \[U_{\lambda,\mu_{0},\mu_{1}}(r):=\lambda U(r)+\mu_{1}r+\mu_{0}.\] By the _normalization_ of \(U\), we mean the choice of \(\mu_{0},\mu_{1}\in\mathbb{R}\) and \(\lambda>0\) such that \[U_{\lambda,\mu_{0},\mu_{1}}(0)=u_{0},\quad U_{\lambda,\mu_{0},\mu_{1}}(1)=u_{1 },\quad U_{\lambda,\mu_{0},\mu_{1}}^{\prime}(1)=u_{1}^{\prime}\quad\text{for $u_{0},u_{1},u_{1}^{ \prime}\in\mathbb{R}$ with $u_{1}-u_{0}<u_{1}^{\prime}$},\] where the inequality on \(u_{0},u_{1},u_{1}^{\prime}\) is required for \(U_{\lambda,\mu_{0},\mu_{1}}\) to be strictly convex. Let \((C,x,y,\varepsilon)\in\Omega\). 
Then, we find \(\Pi^{U_{\lambda,\mu_{0},\mu_{1}}}(C,x,y,\varepsilon)=\Pi^{U}(C,x,y,\lambda\varepsilon)\) and that the interval \[\left(0,\frac{\Delta_{C}(x,y)R_{U_{\lambda,\mu_{0},\mu_{1}}}(x,y)}{\mathfrak{D}_{U_{\lambda,\mu_{0},\mu_{1}}}(x,y)}\right]\cap\left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U_{\lambda,\mu_{0},\mu_{1}}}(x,y)+\nu_{U_{\lambda,\mu_{0},\mu_{1}}}(x,y)-U_{\lambda,\mu_{0},\mu_{1}}^{\prime}(1)}\right]\] is well-defined (resp. contains \(\varepsilon\)) if and only if the interval \[\left(0,\frac{\Delta_{C}(x,y)R_{U}(x,y)}{\mathfrak{D}_{U}(x,y)}\right]\cap\left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)-U^{\prime}(1)}\right]\] is well-defined (resp. contains \(\lambda\varepsilon\)), in which the equality \[e_{U_{\lambda,\mu_{0},\mu_{1}}}\left(-\frac{\Delta_{C}(x,y)}{\varepsilon}+\mathfrak{D}_{U_{\lambda,\mu_{0},\mu_{1}}}(x,y)+\nu_{U_{\lambda,\mu_{0},\mu_{1}}}(x,y)\right)=e_{U}\left(-\frac{\Delta_{C}(x,y)}{\lambda\varepsilon}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\right)\] holds. Thus, in Theorem 1.7, we can normalize \(U\) as \[U(0)=u_{0},\quad U(1)=u_{1},\quad U^{\prime}(1)=u_{1}^{\prime}\quad\text{for $u_{0},u_{1},u_{1}^{\prime}\in\mathbb{R}$ with $u_{1}-u_{0}<u_{1}^{\prime}$}\] without loss of generality. Let \(W\in C([0,1])\cap C^{1}((0,1])\cap C^{2}((0,1))\) also satisfy \(W^{\prime\prime}>0\) on \((0,1)\). Then, the following three conditions are equivalent to each other.

* (C0) There exist \(\mu_{0},\mu_{1}\in\mathbb{R}\) and \(\lambda>0\) such that \(W=U_{\lambda,\mu_{0},\mu_{1}}\) on \((0,1)\).
* (C1) There exist \(\mu_{1}\in\mathbb{R}\) and \(\lambda>0\) such that \(W^{\prime}=\lambda U^{\prime}+\mu_{1}\) on \((0,1)\).
* (C2) There exists \(\lambda>0\) such that \(W^{\prime\prime}=\lambda U^{\prime\prime}\) on \((0,1)\).

Thus, under the normalization \(U(0)=u_{0}\) and \(U(1)=u_{1}\) for \(u_{0},u_{1}\in\mathbb{R}\), we can use either \(U^{\prime}\) or \(U^{\prime\prime}\) instead of \(U\) itself. 
Note that each of (C0)-(C2) is equivalent to the following condition.

* (D) There exists \(\lambda>0\) such that \(D_{W}=\lambda D_{U}\) on \(\mathcal{P}_{K}\times\mathcal{P}_{K}\) for \(K\geq 3\).

The implication from (C0) to (D) is straightforward. Assume (D). For \(K\geq 3\) and \(r\in[0,1]\), define \(z^{r}\in\mathcal{P}_{K}\) by \[z^{r}_{k}:=\begin{cases}r&\text{for $k=1$},\\ 1-r&\text{for $k=2$},\\ 0&\text{otherwise}.\end{cases}\] We also define \(z^{*}\in\mathcal{P}_{K}\) by \(z^{*}_{k}=K^{-1}\) for all \(k\). For \(r\in(0,1)\), we calculate \[W^{\prime}(r)-W^{\prime}(1-r)=\frac{\mathrm{d}}{\mathrm{d}r}D_{W}(z^{r},z^{*})=\lambda\frac{\mathrm{d}}{\mathrm{d}r}D_{U}(z^{r},z^{*})=\lambda\left(U^{\prime}(r)-U^{\prime}(1-r)\right).\] Differentiating this with respect to \(r\) implies that \[W^{\prime\prime}(r)-\lambda U^{\prime\prime}(r)=-\left(W^{\prime\prime}(1-r)-\lambda U^{\prime\prime}(1-r)\right),\quad\text{in particular}\quad W^{\prime\prime}(1/2)=\lambda U^{\prime\prime}(1/2). 
\tag{4.1}\] For \(r,s\in(0,1)\) with \(r+s<1\), we define \(z^{r,s}\in\mathcal{P}_{K}\) by \[z^{r,s}_{k}:=\begin{cases}1-(r+s)&\text{for $k=1$},\\ r&\text{for $k=2$},\\ (K-2)^{-1}s&\text{otherwise}.\end{cases}\] It turns out that \[(r+s)W^{\prime\prime}(1-(r+s))+rW^{\prime\prime}(r)=\frac{\partial}{\partial r}D_{W}(z^{1},z^{r,s})=\lambda\frac{\partial}{\partial r}D_{U}(z^{1},z^{r,s})=\lambda\left[(r+s)U^{\prime\prime}(1-(r+s))+rU^{\prime\prime}(r)\right],\] implying \[W^{\prime\prime}(r)-\lambda U^{\prime\prime}(r)=-\frac{r+s}{r}\left(W^{\prime\prime}(1-(r+s))-\lambda U^{\prime\prime}(1-(r+s))\right).\] This with (4.1) provides \[\frac{r+s}{r}\left(W^{\prime\prime}(1-(r+s))-\lambda U^{\prime\prime}(1-(r+s))\right)=W^{\prime\prime}(1-r)-\lambda U^{\prime\prime}(1-r),\] in particular, the choice of \(r=1/2\) leads to \[W^{\prime\prime}\left(\frac{1}{2}-s\right)=\lambda U^{\prime\prime}\left(\frac{1}{2}-s\right)\quad\text{for }s\in\left(0,\frac{1}{2}\right).\] This together with (4.1) gives \(W^{\prime\prime}=\lambda U^{\prime\prime}\) on \((0,1)\), which is nothing but (C2). Note that, under Assumption 1.2, we have \(I,J\neq 1\) and hence \(IJ\geq 4\). Thus, the condition \(K\geq 3\) in (D) is reasonable. We also notice that (C2) leads to the following condition.

* (C) \(q_{U}=q_{W}\) on \((0,1)\).

By [9, Theorem 2.4], if \(q_{U},q_{W}\) are finite almost everywhere on \((0,1)\), then (C) leads to (C2). Thus, all conditions (C0)-(C2), (D), and (C) are equivalent to each other. To use \(U^{\prime}\) instead of \(U\), let us consider the following assumption. **Assumption 4.1**.: Let \(L\in C((0,1])\cap C^{1}((0,1))\) satisfy that \(L^{\prime}>0\) on \((0,1)\), \(\lim_{t\downarrow 0}L(t)=-\infty\), and \(t\mapsto tL^{\prime}(t)\) is non-decreasing on \((0,1)\). Suppose Assumption 4.1. Then, there exists \(t_{0}\in(0,1]\) such that \(L<0\) on \((0,t_{0}]\). 
By the monotonicity of \(t\mapsto tL^{\prime}(t)\), we have \[L(t_{0})-L(t)=\int_{t}^{t_{0}}L^{\prime}(s)\mathrm{d}s\leq t_{0}L^{\prime}(t_{0})\int_{t}^{t_{0}}\frac{1}{s}\mathrm{d}s=t_{0}L^{\prime}(t_{0})\left(\log t_{0}-\log t\right)\] for \(t\in(0,t_{0}]\). Since \(L\) is monotone on \((0,t_{0}]\) and \[\left(L(t_{0})-t_{0}L^{\prime}(t_{0})\log t_{0}\right)(t_{0}-h)+t_{0}L^{\prime}(t_{0})\int_{h}^{t_{0}}\log t\mathrm{d}t\leq\int_{h}^{t_{0}}L(t)\mathrm{d}t<0\] holds for \(h\in(0,t_{0}]\), the improper integral \[U_{L}(r):=\int_{0}^{r}L(t)\mathrm{d}t\] is well-defined for \(r\in[0,1]\). It is easy to see that \(U_{L}\) satisfies Assumption 1.3. Note that \[d_{U_{L}}(r,r_{0})=\int_{r_{0}}^{r}(L(t)-L(r_{0}))\mathrm{d}t=\int_{r_{0}}^{r}\int_{r_{0}}^{t}L^{\prime}(s)\mathrm{d}s\mathrm{d}t\quad\text{for }r\in[0,1],r_{0}\in(0,1].\] Conversely, if \(U\) satisfies Assumption 1.3, then \(L=U^{\prime}\) satisfies Assumption 4.1. Thus, we can use \(L\) satisfying Assumption 4.1 instead of \(U\) satisfying Assumption 1.3 for our regularization. _Remark 4.2_.: In Theorem 1.7, the range of the regularization parameter \(\varepsilon\) is given by the intersection of the two intervals. One interval \[\left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)-U^{\prime}(1)}\right]\] is needed to make \[-\frac{\Delta_{C}(x,y)}{\varepsilon}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\in U^{\prime}((0,1])\] as seen in the proof of Theorem 1.7. Hence, this interval is not needed if \(U\) is extended to a continuous, strictly convex function on \([0,\infty)\) and \(U\in C^{1}((0,\infty))\) with \(\lim_{r\uparrow\infty}U^{\prime}(r)=\infty\). 
For example, under the normalization \[U(0)=U(1)=0,\qquad U^{\prime}(1)=1, \tag{4.2}\] which is valid for the case \(U=U_{\mathrm{o}}\), we can extend \(U\) by setting \[U(r):=r\log r\quad\text{for }r>1.\] For \(L\) satisfying Assumption 4.1, if we set \[\ell(t):=\left(L(1)-\int_{0}^{1}L(s)\mathrm{d}s\right)^{-1}\left(L(t)-\int_{0} ^{1}L(s)\mathrm{d}s\right)\] for \(t\in[0,1]\), then \(\ell\) satisfies Assumption 4.1 and \(U_{\ell}\) satisfies the normalization (4.2). ### Scaling In the regularized problem (1.2), scaling can have two meanings: scaling of data and scaling of the domain of a strictly convex function. Let us show that they play an equivalent role. In the optimal transport problem (1.1), although two given data \(x\) and \(y\) are normalized to be \(1\) with respect to the \(\ell^{1}\)-norm, their \(\ell^{1}\)-norms can be chosen arbitrarily if both are the same. For a subset \(\mathcal{Z}\) of Euclidean space and \(a>0\), set \[a\mathcal{Z}:=\{az\;|\;z\in\mathcal{Z}\}.\] For \(x\in\mathcal{P}_{I}\) and \(y\in\mathcal{P}_{J}\), we shall, by abuse of notation, define \[\Pi(ax,ay):=\left\{\widetilde{\Pi}=(\widetilde{\pi}_{ij})\in a\mathcal{P}_{I \times J}\;\bigg{|}\;\sum_{l=1}^{J}\widetilde{\pi}_{il}=ax_{i}\text{ and }\sum_{l=1}^{I}\widetilde{\pi}_{lj}=ay_{j}\;\text{ for any }i,j\right\}.\] Then, we have \(\Pi(ax,ay)=a\Pi(x,y)\) and hence \(ax\otimes y\in\Pi(ax,ay)\). We denote by \(V(ax,ay)\) the set of the vertices of \(\Pi(ax,ay)\). Then, \(V(ax,ay)=aV(x,y)\) follows. For \(U\in C([0,1])\cap C^{1}((0,1])\) being strictly convex on \([0,1]\) and \(a\in(0,1]\), we can define \(D_{U}:a\mathcal{P}_{K}\times a\mathcal{P}_{K}\to[0,\infty]\) by \[D_{U}(az,aw):=\sum_{k}d_{U}(az_{k},aw_{k})\quad\text{for }z,w\in\mathcal{P}_{K}\] and consider the regularized problem \[\inf_{\widetilde{\Pi}\in\Pi(ax,ay)}\left(\langle C,\widetilde{\Pi}\rangle+ \varepsilon D_{U}(\widetilde{\Pi},ax\otimes y)\right)\quad\text{for }\omega=(C,x,y, \varepsilon)\in\Omega. 
\tag{4.3}\] More generally, for \(a,b>0\) with \(a\leq b\) and \(W\in C([0,b])\cap C^{1}((0,b])\) being strictly convex on \([0,b]\), we define \(d_{W}:[0,b]\times[0,b]\to[0,\infty]\) by \[d_{W}(r,r_{0}):=W(r)-W(r_{0})-(r-r_{0})W^{\prime}(r_{0})\quad\text{for }r\in[0,b],r_{0}\in(0,b]\] and \[d_{W}(r,0):=\lim_{h\downarrow 0}d_{W}(r,h)\quad\text{for }r\in[0,b].\] We also define \(D_{W}:a\mathcal{P}_{K}\times a\mathcal{P}_{K}\to[0,\infty]\) by \[D_{W}(az,aw):=\sum_{k}d_{W}(az_{k},aw_{k})\quad\text{for }z,w\in\mathcal{P}_{K}.\] This enables us to consider the regularized problem on \(a\mathcal{P}_{I}\times a\mathcal{P}_{J}\) similarly to (4.3) by using a strictly convex function \(W\in C([0,b])\cap C^{1}((0,b])\) with \(a\leq b\). Next, let us scale the domain of \(U\in C([0,1])\cap C^{1}((0,1])\) being strictly convex on \([0,1]\) by setting \[U^{b}(r):=bU(b^{-1}r):[0,b]\to\mathbb{R}\] for \(b>0\). Following the notation in (4.4) below, \(U^{b}\) coincides with \(U^{b}_{1}\). If \(U\) satisfies Assumption 1.3, then so does \(U^{b}\) if \(b>1\). We observe from \[d_{U^{b}}(r,r_{0})=b\cdot d_{U}(b^{-1}r,b^{-1}r_{0})\quad\text{for }r,r_{0}\in[0,b]\] that \[\inf_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle+\varepsilon D_{U^{b}}(\Pi,x\otimes y)\right) =b\inf_{\Pi\in\Pi(x,y)}\left(\langle C,b^{-1}\Pi\rangle+\varepsilon D_{U}(b^{-1}\Pi,b^{-1}x\otimes y)\right),\] \[\operatorname*{argmin}_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle+\varepsilon D_{U^{b}}(\Pi,x\otimes y)\right) =\operatorname*{argmin}_{\Pi\in\Pi(x,y)}\left(\langle C,b^{-1}\Pi\rangle+\varepsilon D_{U}(b^{-1}\Pi,b^{-1}x\otimes y)\right)\] \[=b\cdot\operatorname*{argmin}_{\widetilde{\Pi}\in\Pi(b^{-1}x,b^{-1}y)}\left(\langle C,\widetilde{\Pi}\rangle+\varepsilon D_{U}(\widetilde{\Pi},b^{-1}x\otimes y)\right),\] for \(\omega=(C,x,y,\varepsilon)\in\Omega\). This means that the two scalings play an equivalent role. 
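The identity \(d_{U^{b}}(r,r_{0})=b\cdot d_{U}(b^{-1}r,b^{-1}r_{0})\) underlying this equivalence can be confirmed numerically; the Python sketch below (illustrative, not part of the paper) does so for the model function \(U_{\mathrm{o}}(r)=r\log r\) with \(b=3\).

```python
import math

def U(r):                    # model function U_o(r) = r log r (with U_o(0) = 0)
    return r * math.log(r) if r > 0 else 0.0

def Up(r):                   # U_o'(r) = log r + 1
    return math.log(r) + 1.0

def breg(f, fp, r, r0):      # Bregman divergence built from a function f and its derivative fp
    return f(r) - f(r0) - (r - r0) * fp(r0)

b = 3.0
Ub  = lambda r: b * U(r / b)     # scaled-domain function U^b on [0, b]
Ubp = lambda r: Up(r / b)        # (U^b)'(r) = U'(r / b)

r, r0 = 2.1, 0.7
assert abs(breg(Ub, Ubp, r, r0) - b * breg(U, Up, r / b, r0 / b)) < 1e-12
```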
More generally, for \(b>0\), let \(W\in C([0,b])\cap C^{1}((0,b])\cap C^{2}((0,b))\) satisfy that \(W^{\prime\prime}>0\) on \((0,b)\), \(\lim_{h\downarrow 0}W^{\prime}(h)=-\infty\), and \(r\mapsto rW^{\prime\prime}(r)\) is non-decreasing on \((0,b)\). For \(a>0\), we define the function \(W^{a}_{b}\in C([0,a])\cap C^{1}((0,a])\cap C^{2}((0,a))\) by \[W^{a}_{b}(r):=ab^{-1}W(a^{-1}br). \tag{4.4}\] Then, we see that \(W^{a\prime\prime}_{b}>0\) on \((0,a)\), \(\lim_{h\downarrow 0}W^{a\prime}_{b}(h)=-\infty\) hold, and \(r\mapsto rW^{a\prime\prime}_{b}(r)\) is non-decreasing on \((0,a)\). The normalization \(W(0)=W(b)=0\) with \(W^{\prime}(b)=1\) is equivalent to the normalization \(W^{a}_{b}(0)=W^{a}_{b}(a)=0\) with \(W^{a\prime}_{b}(a)=1\). In particular, \(U:=W^{1}_{b}\) satisfies Assumption 1.3. Furthermore, for \(\omega=(C,x,y,\varepsilon)\in\Omega\), we have \[\operatorname*{argmin}_{\widetilde{\Pi}\in\Pi(ax,ay)}\left(\langle C,\widetilde{\Pi}\rangle+\varepsilon D_{W^{a}_{b}}(\widetilde{\Pi},ax\otimes y)\right)=a\cdot\operatorname*{argmin}_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle+\varepsilon D_{U}(\Pi,x\otimes y)\right).\] This means that the left-hand side is a singleton. We denote by \(\Pi^{W^{a}_{b}}(C,ax,ay,\varepsilon)\) the unique element. Then, \(\Pi^{W^{a}_{b}}(C,ax,ay,\varepsilon)=a\Pi^{U}(\omega)\) holds. 
Let us define all the notions needed to state Theorem 1.7 as follows: \[\begin{split}\mathfrak{D}_{W^{a}_{b}}(ax,ay)&\coloneqq\sup_{\widetilde{\Pi}\in\Pi(ax,ay)}D_{W^{a}_{b}}(\widetilde{\Pi},ax\otimes y),\\ \Delta_{C}(ax,ay)&\coloneqq\inf_{\widetilde{V}^{\prime}\in V(ax,ay)\setminus\operatorname*{argmin}_{\widetilde{V}\in V(ax,ay)}\langle C,\widetilde{V}\rangle}\langle C,\widetilde{V}^{\prime}\rangle-\inf_{\widetilde{V}\in V(ax,ay)}\langle C,\widetilde{V}\rangle,\\ R_{W^{a}_{b}}(ax,ay)&\in[a/2,a]\quad\text{such that}\quad W^{a\prime}_{b}(R_{W^{a}_{b}}(ax,ay))-W^{a\prime}_{b}(a-R_{W^{a}_{b}}(ax,ay))=a^{-1}\mathfrak{D}_{W^{a}_{b}}(ax,ay),\\ \nu_{W^{a}_{b}}(ax,ay)&:=\sup_{r\in(0,R_{W^{a}_{b}}(ax,ay)]}\left(W^{a\prime}_{b}(a-r)+rW^{a\prime\prime}_{b}(r)\right).\end{split} \tag{4.5}\] We also denote by \(e_{W^{a}_{b}}:W^{a\prime}_{b}((0,a])\to(0,a]\) the inverse function of \(W^{a\prime}_{b}:(0,a]\to W^{a\prime}_{b}((0,a])\). It follows from \[W^{a}_{b}(ar)=aU(r),\quad W^{a\prime}_{b}(ar)=U^{\prime}(r),\quad W^{a\prime\prime}_{b}(ar)=a^{-1}U^{\prime\prime}(r),\quad d_{W^{a}_{b}}(ar,ar_{0})=ad_{U}(r,r_{0})\] for \(r,r_{0}\in[0,1]\) that \[\mathfrak{D}_{W^{a}_{b}}(ax,ay)=a\mathfrak{D}_{U}(x,y),\quad R_{W^{a}_{b}}(ax,ay)=aR_{U}(x,y),\quad\nu_{W^{a}_{b}}(ax,ay)=\nu_{U}(x,y),\] and \(e_{W^{a}_{b}}=ae_{U}\) on \(W^{a\prime}_{b}((0,a])=U^{\prime}((0,1])\). We also find \(\Delta_{C}(ax,ay)=a\Delta_{C}(x,y)\). 
Thus, under Assumption 1.2, it holds that \[\left(0,\frac{\Delta_{C}(ax,ay)R_{W^{a}_{b}}(ax,ay)}{a\mathfrak{D }_{W^{a}_{b}}(ax,ay)}\right]\cap\left(0,\frac{a^{-1}\Delta_{C}(ax,ay)}{a^{-1} \mathfrak{D}_{W^{a}_{b}}(ax,ay)+\nu_{W^{a}_{b}}(ax,ay)-W^{a\prime}_{b}(a)}\right]\] \[=\left(0,\frac{\Delta_{C}(x,y)R_{U}(x,y)}{\mathfrak{D}_{U}(x,y)} \right]\cap\left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y) -U^{\prime}(1)}\right]\] and \[\langle C,\Pi^{W^{a}_{b}}(C,ax,ay,\varepsilon)\rangle-\inf_{ \overline{\Pi}\in\Pi(ax,ay)}\langle C,\widetilde{\Pi}\rangle\] \[=\langle C,a\Pi^{U}(\omega)\rangle-\inf_{\Pi\in\Pi(x,y)}\langle C,a\Pi\rangle\] \[=a\left(\langle C,\Pi^{U}(\omega)\rangle-\inf_{\Pi\in\Pi(x,y)} \langle C,\Pi\rangle\right)\] \[\leq a\left(\Delta_{C}(x,y)e_{U}\left(-\frac{\Delta_{C}(x,y)}{ \varepsilon}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\right)\right)\] \[=\Delta_{C}(x,y)e_{W^{a}_{b}}\left(-\frac{\Delta_{C}(x,y)}{ \varepsilon}+\mathfrak{D}_{U}(x,y)+\nu_{U}(x,y)\right)\] \[=\frac{\Delta_{C}(ax,ay)}{a}e_{W^{a}_{b}}\left(-\frac{\Delta_{C}( ax,ay)}{a\varepsilon}+\frac{1}{a}\mathfrak{D}_{W^{a}_{b}}(ax,ay)+\nu_{W^{a}_{b}}( ax,ay)\right)\] for \(\varepsilon\) in the interval above. This estimate can be derived directly in a similar way to the proof of Theorem 1.7. Thus, in the problem (1.2), if we simultaneously scale the data and the domain of a strictly convex function, we obtain essentially the same error estimate in Theorem 1.7, where the domain of a strictly convex function has no effect on the estimate. ### Invariance of the Kullback-Leibler divergence under scaling of data In contrast to the previous subsection, let us now consider the case that the scaling of data does not match the domain of a strictly convex function. 
For this sake, let \(W\in C([0,b])\cap C^{1}((0,b])\cap C^{2}((0,b))\) satisfy that \(W^{\prime\prime}>0\) on \((0,b)\), \(\lim_{h\downarrow 0}W^{\prime}(h)=-\infty\), and \(r\mapsto rW^{\prime\prime}(r)\) is non-decreasing on \((0,b)\) as in the previous subsection, and let \(\widetilde{a},a>0\) with \(\widetilde{a}<a\). First, we find that \[\operatorname*{argmin}_{\widetilde{\Pi}\in\Pi(\widetilde{a}x,\widetilde{a}y)}\left(\langle C,\widetilde{\Pi}\rangle+\varepsilon D_{W^{a}_{b}}(\widetilde{\Pi},\widetilde{a}x\otimes y)\right) =\operatorname*{argmin}_{\widetilde{\Pi}\in\Pi(\widetilde{a}x,\widetilde{a}y)}\left(\langle C,a^{-1}\widetilde{\Pi}\rangle+\varepsilon D_{W^{1}_{b}}(a^{-1}\widetilde{\Pi},a^{-1}\widetilde{a}x\otimes y)\right)\] \[=\widetilde{a}\cdot\operatorname*{argmin}_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle+\varepsilon D_{W^{a/\widetilde{a}}_{b}}(\Pi,x\otimes y)\right)\] for \(\omega=(C,x,y,\varepsilon)\in\Omega\). Thus, scaling the data is equivalent to scaling the domain of a strictly convex function. Then, we can use \(U\) satisfying Assumption 1.3 instead of \(W\). The following proposition suggests that the regularization effect by a Bregman divergence varies under scaling of data unless the Bregman divergence is the Kullback-Leibler divergence. **Proposition 4.3**.: _Let \(U\) satisfy Assumption 1.3 and let \(\widetilde{a},a>0\) with \(\widetilde{a}<a\leq 1\). Suppose that there exists \(\kappa>0\) such that_ \[D_{U}(\widetilde{a}z,\widetilde{a}w)=\kappa D_{U}(az,aw)\quad\text{for }z,w\in\mathcal{P}_{K}\quad\text{with }K\geq 3. \tag{4.6}\] _Then, there exist \(\mu_{0},\mu_{1}\in\mathbb{R}\) and \(\lambda>0\) such that \(U=(U_{o})_{\lambda,\mu_{0},\mu_{1}}\) on \((0,a]\)._ Proof.: By an argument similar to that for the implication from (D) to (C2), it follows from (4.6) that \[\widetilde{a}^{2}U^{\prime\prime}(\widetilde{a}r)=\kappa a^{2}U^{\prime\prime}(ar)\quad\text{for }r\in(0,1).\] Let \(\theta:=\widetilde{a}a^{-1}<1\). 
Then, the above relation is equivalent to \[\theta rU^{\prime\prime}(\theta r)=\kappa\theta^{-1}rU^{\prime\prime}(r)\quad\text{for }r\in(0,a).\] The monotonicity of \(r\mapsto rU^{\prime\prime}(r)\) on \((0,a)\) yields \(\kappa\theta^{-1}\leq 1\). For \(N\in\mathbb{N}\), it turns out that \[U^{\prime}(a\theta)-U^{\prime}(a\theta^{N+1}) =\int_{a\theta^{N+1}}^{a\theta}U^{\prime\prime}(r)\mathrm{d}r=\sum_{n=1}^{N}\int_{a\theta^{n+1}}^{a\theta^{n}}U^{\prime\prime}(r)\mathrm{d}r=\sum_{n=1}^{N}\int_{a\theta^{n+1}}^{a\theta^{n}}rU^{\prime\prime}(r)\cdot\frac{1}{r}\mathrm{d}r\] \[\leq\sum_{n=1}^{N}a\theta^{n}U^{\prime\prime}(a\theta^{n})\cdot\frac{1}{a\theta^{n+1}}\int_{a\theta^{n+1}}^{a\theta^{n}}1\mathrm{d}r=\sum_{n=1}^{N}\left(\kappa\theta^{-1}\right)^{n-1}a\theta U^{\prime\prime}(a\theta)\left(\theta^{-1}-1\right)\] \[=\widetilde{a}U^{\prime\prime}(\widetilde{a})\left(\theta^{-1}-1\right)\sum_{n=1}^{N}\left(\kappa\theta^{-1}\right)^{n-1}.\] If \(\kappa\theta^{-1}<1\), then \[\lim_{N\to\infty}(U^{\prime}(a\theta)-U^{\prime}(a\theta^{N+1}))\leq\widetilde{a}U^{\prime\prime}(\widetilde{a})\left(\theta^{-1}-1\right)\cdot\lim_{N\to\infty}\sum_{n=1}^{N}\left(\kappa\theta^{-1}\right)^{n-1}=\widetilde{a}U^{\prime\prime}(\widetilde{a})\left(\theta^{-1}-1\right)\cdot\frac{1}{1-\kappa\theta^{-1}}<\infty,\] which contradicts the condition \(\lim_{h\downarrow 0}U^{\prime}(h)=-\infty\). Hence, \(\kappa\theta^{-1}=1\) and \[rU^{\prime\prime}(r)=\theta rU^{\prime\prime}(\theta r)\leq rU^{\prime\prime}(r)\quad\text{for }r\in(0,a),\] that is, \(r\mapsto rU^{\prime\prime}(r)\) is constant on \((0,a)\). This is equivalent to the existence of \(\mu_{0},\mu_{1}\in\mathbb{R}\) and \(\lambda>0\) such that \(U(r)=\lambda r\log r+\mu_{1}r+\mu_{0}\) on \((0,a]\). This completes the proof of the proposition. Let \(U\) satisfy Assumption 1.3 and \(a\in(0,1)\). 
For the regularized problem of the form \[\inf_{\widetilde{\Pi}\in\Pi(\widetilde{a}x,\widetilde{a}y)}\left(\langle C,\widetilde{\Pi}\rangle+\varepsilon D_{U}(\widetilde{\Pi},ax\otimes y)\right)\quad\text{for }\omega=(C,x,y,\varepsilon)\in\Omega,\] the choice of \(a\) is important to obtain an estimate similar to that in Theorem 1.7, since quantities such as those in (4.5) may be involved in the estimate. Indeed, if we define \[\mathfrak{D}_{U}(ax,ay)\coloneqq\sup_{\widetilde{\Pi}\in\Pi(ax,ay)}D_{U}(\widetilde{\Pi},ax\otimes y)\] then, for \(\widetilde{a}\in(0,a)\), it turns out that \[d_{U}(ar,ar_{0})=ad_{U^{a^{-1}}}(r,r_{0})=a\int_{r_{0}}^{r}\int_{r_{0}}^{t}\frac{\mathrm{d}^{2}}{\mathrm{d}s^{2}}U^{a^{-1}}(s)\mathrm{d}s\mathrm{d}t =a\int_{r_{0}}^{r}\int_{r_{0}}^{t}aU^{\prime\prime}(as)\mathrm{d}s\mathrm{d}t\] \[\geq a\int_{r_{0}}^{r}\int_{r_{0}}^{t}\widetilde{a}U^{\prime\prime}(\widetilde{a}s)\mathrm{d}s\mathrm{d}t=a\widetilde{a}^{-1}d_{U}(\widetilde{a}r,\widetilde{a}r_{0}),\] consequently, \[\widetilde{a}^{-1}\mathfrak{D}_{U}(\widetilde{a}x,\widetilde{a}y)\leq a^{-1}\mathfrak{D}_{U}(ax,ay)\] holds, with equality if and only if \(U=(U_{o})_{\lambda,\mu_{0},\mu_{1}}\) holds for some \(\mu_{0},\mu_{1}\in\mathbb{R}\) and \(\lambda>0\). ## 5. Examples and Comparison We give examples of \(U\) satisfying Assumption 1.3. ### Model case Recall that \(U_{o}\in C([0,\infty))\cap C^{\infty}((0,\infty))\) is defined as \[U_{o}(r):=\begin{cases}r\log r&\text{for }r>0,\\ 0&\text{for }r=0.\end{cases}\] Obviously, \(U_{o}\) satisfies Assumption 1.3 and the normalization (4.2). For \(r,r_{0}>0\), we see that \[d_{U_{o}}(r,r_{0})=U_{o}(r)-U_{o}(r_{0})-(r-r_{0})U_{o}^{\prime}(r_{0})=r(\log r-\log r_{0})-(r-r_{0}),\] which yields \(D_{U_{o}}(z,w)=D_{\mathrm{KL}}(z,w)\) for \(z,w\in\mathcal{P}_{K}\). 
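As a quick numerical check (Python, our own illustration), the formula for \(d_{U_{\mathrm{o}}}\) indeed reproduces the Kullback-Leibler divergence on \(\mathcal{P}_{K}\), and it is scale covariant, \(D_{U_{\mathrm{o}}}(az,aw)=aD_{U_{\mathrm{o}}}(z,w)\), which is exactly the property singled out by Proposition 4.3.

```python
import math

def d_Uo(r, r0):
    # d_{U_o}(r, r0) = r (log r - log r0) - (r - r0)
    return r * (math.log(r) - math.log(r0)) - (r - r0)

def D_Uo(z, w):
    # componentwise sum; equals the Kullback-Leibler divergence for z, w in the simplex
    return sum(d_Uo(zk, wk) for zk, wk in zip(z, w))

z = [0.5, 0.3, 0.2]
w = [0.2, 0.2, 0.6]
kl = sum(zk * math.log(zk / wk) for zk, wk in zip(z, w))
assert abs(D_Uo(z, w) - kl) < 1e-12          # the -(r - r0) terms cancel on the simplex

a = 0.4
scaled = D_Uo([a * t for t in z], [a * t for t in w])
assert abs(scaled - a * D_Uo(z, w)) < 1e-12  # scale covariance: D(az, aw) = a D(z, w)
```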
Let us see that Theorem 1.7 for the case \(U=U_{o}\) coincides with the error estimate given in [20, Theorem 5] interpreted as \[\langle C,\Pi^{U_{o}}(\omega)\rangle-\inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle\leq\Delta_{C}(x,y)\exp\left(-\frac{\Delta_{C}(x,y)}{\varepsilon}+\mathfrak{D}_{U_{o}}(x,y)+1\right) \tag{5.1}\] for \(\varepsilon\in(0,\Delta_{C}(x,y)/(1+\mathfrak{D}_{U_{o}}(x,y))]\), where \(\omega=(C,x,y,\varepsilon)\in\Omega\). In our setting, the \(\ell^{1}\)-radius of \(\Pi(x,y)\) defined in [20, Definition 2] is calculated as \[\max_{\Pi\in\Pi(x,y)}\sum_{i,j}\pi_{ij}=1.\] The entropic radius defined in [20, Definition 3] is calculated as \[\sup_{\Pi,\Pi^{\prime}\in\Pi(x,y)}\left(S(\Pi)-S(\Pi^{\prime})\right) =\sup_{\Pi,\Pi^{\prime}\in\Pi(x,y)}\left(-D_{U_{o}}(\Pi,x\otimes y)+D_{U_{o}}(\Pi^{\prime},x\otimes y)\right)\] \[=\sup_{\Pi\in\Pi(x,y)}D_{U_{o}}(\Pi,x\otimes y)-\inf_{\Pi\in\Pi(x,y)}D_{U_{o}}(\Pi,x\otimes y)\] \[=\mathfrak{D}_{U_{o}}(x,y),\] thanks to the relation \(D_{U_{o}}(\Pi,x\otimes y)=-S(\Pi)+S(x)+S(y)\) for \(\Pi\in\Pi(x,y)\). A direct calculation provides \[e_{U_{o}}(\tau)=\exp(\tau-1):(-\infty,1]\to(0,1],\qquad R_{U_{o}}(x,y)=\frac{e^{\mathfrak{D}_{U_{o}}(x,y)}}{1+e^{\mathfrak{D}_{U_{o}}(x,y)}},\qquad\nu_{U_{o}}(x,y)=2,\qquad U_{o}^{\prime}(1)=1.\] Thus, our estimate in Theorem 1.7 coincides with (5.1), where the range of the regularization parameter in Theorem 1.7 is given by \[\left(0,\frac{\Delta_{C}(x,y)R_{U_{o}}(x,y)}{\mathfrak{D}_{U_{o}}(x,y)}\right]\cap\left(0,\frac{\Delta_{C}(x,y)}{\mathfrak{D}_{U_{o}}(x,y)+\nu_{U_{o}}(x,y)-U_{o}^{\prime}(1)}\right]=\left(0,\frac{\Delta_{C}(x,y)}{1+\mathfrak{D}_{U_{o}}(x,y)}\right],\] which coincides with that in [20, Theorem 5]. ### \(q\)-logarithmic function Let us consider an applicable example other than the model case. From the equivalence among (C0)-(C2), any of \(U,U^{\prime}\), and \(U^{\prime\prime}\) is on the table for consideration. 
If we regard \(1/U^{\prime\prime}\) as a deformation function, \(U^{\prime}\) is called a deformed logarithmic function and \(-U\) corresponds to the density function of an entropy (see [14, Chapters 10, 11] for details). One typical example of deformed logarithmic functions is the \(q\)-logarithmic function. For \(q\in\mathbb{R}\), define the \(q\)_-logarithmic function_ \(\ln_{q}\colon(0,\infty)\to\mathbb{R}\) by \[\ln_{q}(t)\coloneqq\int_{1}^{t}s^{-q}\,\mathrm{d}s=\begin{cases}\frac{t^{1-q}-1}{1-q}&\text{if }q\neq 1,\\ \log t&\text{if }q=1.\end{cases}\] The entropy associated to \(\ln_{q}\) is called the Tsallis entropy (see [14, Chapter 8] for instance). We see that \(\lim_{h\downarrow 0}\ln_{q}(h)=-\infty\) is equivalent to \(q\geq 1\). Moreover, the function \(t\mapsto t\ln_{q}^{\prime}(t)=t^{1-q}\) is non-decreasing on \((0,1)\) if and only if \(q\leq 1\) holds. Thus, \(\ln_{q}\) satisfies Assumption 4.1 if and only if \(q=1\) holds, where \(U_{\ln_{1}}=(U_{o})_{1,0,-1}\). This means that, if we regard a power function as a deformation function, that is, \(1/U^{\prime\prime}\), only the power function of exponent \(1\) is applicable to Theorem 1.7. To obtain an example to which Theorem 1.7 applies, we need to modify the power function of exponent \(1\), rather than consider power functions of general exponent. ### Upper incomplete gamma function For \(\alpha\in\mathbb{R}\), define \(L_{\alpha}:(0,1)\to\mathbb{R}\) by \[L_{\alpha}(t):=-\ln_{1-\alpha}(-\log t)=\begin{cases}-\frac{(-\log t)^{\alpha}-1}{\alpha}&\text{if }\alpha\neq 0,\\ -\log(-\log t)&\text{if }\alpha=0.\end{cases}\] It turns out that \[\frac{1}{L_{\alpha}^{\prime}(t)}=t\cdot(-\log t)^{1-\alpha}>0\quad\text{for }t\in(0,1),\] which can be regarded as a refinement of the power function of exponent \(1\) since the logarithmic function is referred to as the power function of exponent \(0\). 
We see that \(\lim_{t\uparrow 1}L_{\alpha}(t)\) is finite if and only if \(\alpha>0\) holds, and \(\lim_{h\downarrow 0}L_{\alpha}(h)=-\infty\) if and only if \(\alpha\geq 0\) holds. Moreover, the function \(t\mapsto tL_{\alpha}^{\prime}(t)=(-\log t)^{\alpha-1}\) is non-decreasing on \((0,1)\) if and only if \(\alpha\leq 1\) holds. Thus, \(L_{\alpha}\) satisfies Assumption 4.1 if and only if \(\alpha\in(0,1]\) holds, where \(U_{L_{1}}=U_{o}\). In what follows, \(\alpha\in(0,1]\) is assumed. We set \[L_{\alpha}(1):=\lim_{t\uparrow 1}L_{\alpha}(t)=\frac{1}{\alpha}.\] It follows from the change of variables \(-\log t=\tau\) that \[\int_{0}^{1}L_{\alpha}(t)\mathrm{d}t=-\frac{1}{\alpha}\Gamma(\alpha+1)+\frac{1}{\alpha},\] where \(\Gamma(\cdot)\) is the gamma function. As mentioned in Remark 4.2, if we set \[\ell_{\alpha}(t):=-\frac{(-\log t)^{\alpha}}{\Gamma(\alpha+1)}+1,\] for \(t\in(0,1]\), then \(\ell_{\alpha}\) satisfies Assumption 4.1 and \[U_{\alpha}(r):=\int_{0}^{r}\ell_{\alpha}(t)\mathrm{d}t=-\frac{1}{\Gamma(\alpha+1)}\Gamma(\alpha+1,-\log r)+r \tag{5.2}\] satisfies the normalization (4.2), where \[\Gamma(p,\tau)=\int_{\tau}^{\infty}t^{p-1}\exp(-t)\mathrm{d}t\] is the upper incomplete gamma function for \(p>0\) and \(\tau\geq 0\). Note that \(\Gamma(p,0)=\Gamma(p).\) Since the inverse function \(e_{U_{\alpha}}:U_{\alpha}^{\prime}((0,1])\to(0,1]\) of \(U_{\alpha}^{\prime}:(0,1]\to U_{\alpha}^{\prime}((0,1])=(-\infty,1]\) is given by \[e_{U_{\alpha}}(\tau)=\exp\left(-\left[-\Gamma(\alpha+1)(\tau-1)\right]^{\frac{1}{\alpha}}\right),\] the error estimate in Theorem 1.7 for \(U=U_{\alpha}\) is the exponential decay in the case of \(\alpha=1\), which is the same as (5.1), and is tighter if \(\alpha\in(0,1)\), as we observed in Subsection 2.3. The function \(L_{\alpha}\) is introduced to analyze the preservation of concavity by the Dirichlet heat flow in a convex domain on Euclidean space (see [8, 10]). 
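The normalization (4.2) and the formula for \(e_{U_{\alpha}}\) can be verified numerically with only the Python standard library (an illustrative sketch, not from the paper; \(\alpha=1/2\) is an arbitrary choice):

```python
import math

alpha = 0.5
G = math.gamma(alpha + 1.0)                  # Gamma(alpha + 1)

def ell(t):
    # ell_alpha(t) = 1 - (-log t)^alpha / Gamma(alpha + 1), i.e. U_alpha'
    return 1.0 - (-math.log(t)) ** alpha / G

def e_inv(tau):
    # claimed inverse e_{U_alpha} of U_alpha' on (-inf, 1]
    return math.exp(-((-G * (tau - 1.0)) ** (1.0 / alpha)))

# normalization (4.2): U_alpha'(1) = 1 and U_alpha(1) = int_0^1 ell_alpha dt = 0
assert abs(ell(1.0) - 1.0) < 1e-12
n = 200000
integral = sum(ell((k + 0.5) / n) / n for k in range(n))     # midpoint rule
assert abs(integral) < 1e-3

# round trip: ell_alpha(e_{U_alpha}(tau)) = tau
for tau in (-3.0, 0.0, 0.5):
    assert abs(ell(e_inv(tau)) - tau) < 1e-9
```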
### Complementary error function Let us give an example of a function \(L\) satisfying Assumption 4.1 except for the continuity at \(t=1\). In this case, \[W(r):=\int_{0}^{r}L(t)\mathrm{d}t\] satisfies Assumption 1.3 except for the continuity and differentiability at \(r=1\), but \[W_{1}^{a}(r)=aW(a^{-1}r)=a\int_{0}^{a^{-1}r}L(t)\mathrm{d}t=\int_{0}^{r}L(a^{-1}t)\mathrm{d}t\] satisfies Assumption 1.3 if \(a>1\). Then, it might be worth considering the effect of \(a>1\) on the regularized solution of the form \[\operatorname*{argmin}_{\Pi\in\Pi(x,y)}\left(\langle C,\Pi\rangle+\varepsilon D_{W_{1}^{a}}(\Pi,x\otimes y)\right),\] as we mentioned in Subsection 4.3. Let us give an example of such \(L\). Define \(H:(0,1)\to\mathbb{R}\) as the inverse function of \[\tau\mapsto\frac{1}{\sqrt{\pi}}\int_{-\tau/2}^{\infty}e^{-\sigma^{2}}d\sigma=\frac{1}{2}\operatorname{erfc}\left(-\frac{\tau}{2}\right),\] where \(\operatorname{erfc}(\tau)=1-\operatorname{erf}(\tau)\) is the complementary error function with the error function \[\operatorname{erf}(\tau)=\frac{2}{\sqrt{\pi}}\int_{0}^{\tau}e^{-\sigma^{2}}d\sigma\quad\text{for }\tau\in\mathbb{R},\] and we used the properties \[\lim_{\tau\downarrow-\infty}\frac{1}{2}\operatorname{erfc}\left(-\frac{\tau}{2}\right)=0,\qquad\lim_{\tau\uparrow\infty}\frac{1}{2}\operatorname{erfc}\left(-\frac{\tau}{2}\right)=1.\] It is easily seen that \(H\in C^{\infty}((0,1))\) and \(\lim_{t\downarrow 0}H(t)=-\infty\). 
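Since \(H\) has no elementary closed form, it can be evaluated numerically by inverting \(\tau\mapsto\frac{1}{2}\operatorname{erfc}(-\tau/2)\) with bisection. The Python sketch below (illustrative, not from the paper) does this and also checks, at sample points, the monotonicity of \(t\mapsto tH^{\prime}(t)\) and the lower bound \(\frac{tH(t)}{2}H^{\prime}(t)\geq-1\) discussed next.

```python
import math

def F(tau):
    # F(tau) = (1/2) erfc(-tau/2), strictly increasing from 0 to 1
    return 0.5 * math.erfc(-tau / 2.0)

def H(t):
    # inverse of F by bisection (H has no elementary closed form)
    lo, hi = -60.0, 60.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Hp(t):
    # H'(t) = sqrt(4 pi) exp(H(t)^2 / 4), from 1 = F'(H(t)) H'(t)
    return math.sqrt(4.0 * math.pi) * math.exp(H(t) ** 2 / 4.0)

ts = [0.001, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99]
vals = [t * Hp(t) for t in ts]
assert all(v1 <= v2 + 1e-9 for v1, v2 in zip(vals, vals[1:]))      # t -> t H'(t) non-decreasing
assert all(t * H(t) * Hp(t) / 2.0 >= -1.0 - 1e-6 for t in ts)      # (tH/2)H' bounded below by -1
```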
Since we have \[\frac{1}{2}\operatorname{erfc}\left(-\frac{H(t)}{2}\right)=t\quad\text{for }t\in(0,1),\qquad\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{1}{2}\operatorname{erfc}\left(-\frac{\tau}{2}\right)\right)=\frac{1}{\sqrt{4\pi}}e^{-\frac{\tau^{2}}{4}}\quad\text{for }\tau\in\mathbb{R},\] we find that \[1 =\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{1}{2}\operatorname{erfc}\left(-\frac{H(t)}{2}\right)\right)=\frac{1}{\sqrt{4\pi}}e^{-\frac{H(t)^{2}}{4}}H^{\prime}(t),\] \[0 =\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\left(\frac{1}{2}\operatorname{erfc}\left(-\frac{H(t)}{2}\right)\right)=\frac{1}{\sqrt{4\pi}}e^{-\frac{H(t)^{2}}{4}}\left(-\frac{H(t)}{2}H^{\prime}(t)^{2}+H^{\prime\prime}(t)\right),\] consequently, \[\frac{\mathrm{d}}{\mathrm{d}t}(tH^{\prime}(t))=H^{\prime}(t)+tH^{\prime\prime}(t)=H^{\prime}(t)\left(1+\frac{tH(t)}{2}H^{\prime}(t)\right),\] for \(t\in(0,1)\). It was proved in [9, Section 4.3] that \[\inf_{t\in(0,1)}\frac{tH(t)}{2}H^{\prime}(t)=\lim_{t\downarrow 0}\frac{tH(t)}{2}H^{\prime}(t)=-1\] in terms of the inverse function of \(H\), and hence \(t\mapsto tH^{\prime}(t)\) is non-decreasing on \((0,1)\). Thus, \(H\) satisfies Assumption 4.1 except for the continuity at \(t=1\). The function \(H\) is also introduced to analyze the preservation of concavity by the Dirichlet heat flow in a convex domain on Euclidean space (see [10]). Although we do not detail here the definition of the statement that "\(F\)-concavity is preserved by the Dirichlet heat flow in a convex domain on Euclidean space", if this preservation property holds for some \(F\in C^{2}((0,1))\), then \(F\) satisfies Assumption 4.1 except for the continuity at \(t=1\) by [10, Theorems 1.5 and 1.6] and [9, Theorem 2.4 and Subsection 4.3]. ## 6. Numerical experiments Let \(\omega=(C,x,y,\varepsilon)\in\Omega\). 
Numerical experiments demonstrate that the error \(\langle C,\Pi^{U}(\omega)\rangle-\inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle\) follows the estimate in Theorem 1.7. The tested function \(U\) is \(U_{\alpha}\) for \(\alpha\in(0,1]\), defined in (5.2). In the test problem, the entries of the cost matrix \(C\in\mathbb{R}^{5\times 5}\) and the test vectors \(x,y\in\mathcal{P}_{5}\) are generated in MATLAB. The regularized problem (1.2) is solved by using the gradient descent method presented in [18, Corollary 4.3]. The method terminates when the Frobenius norm of the gradient falls below \(10^{-8}\). All computations are performed on a computer with an Intel Core i7-8565U 1.80 GHz central processing unit (CPU), 16 GB of random-access memory (RAM), and the Microsoft Windows 11 Pro 64-bit Version 22H2 operating system. All programs for implementing the method were coded and run in MATLAB R2020b using double-precision floating-point arithmetic with unit roundoff \(2^{-53}\simeq 1.1\cdot 10^{-16}\). Figure 1 shows the natural logarithm of the ratio of the absolute error \(\langle C,\Pi^{U_{\alpha}}(\omega)\rangle-\inf_{\Pi\in\Pi(x,y)}\langle C,\Pi\rangle\) to \(\Delta_{C}(x,y)\) versus the value of the regularization parameter \(\varepsilon\). Here, \(\Delta_{C}(x,y)\simeq 4.6\cdot 10^{-6}\). We observe that, for each value of \(\alpha\), the error decreases as \(\varepsilon\) decreases, and that, for each value of \(\varepsilon\), the error decreases as \(\alpha\) decreases. As the value of \(\varepsilon\) decreases, the method tends to take more iterations. This is because the regularized problem approaches the given problem as \(\varepsilon\) approaches zero. ## 7. Concluding remarks In this paper, we considered regularization of optimal transport problems via Bregman divergence. We proved that the optimal value of the regularized problem converges to that of the given problem. 
More precisely, our error estimate shows that the convergence can be faster than exponential. Numerical experiments showed that regularization by a suitably chosen Bregman divergence outperforms that by the Kullback-Leibler divergence. There are several future directions subsequent to this study. The time complexity of our regularized problem is left open. It would also be interesting to extend the setting of this paper from a finite set to Euclidean space. ### Acknowledgements KM was supported in part by JSPS KAKENHI Grant Numbers JP20K14356, JP21H03451. KS was supported in part by JSPS KAKENHI Grant Numbers JP22K03425, JP22K18677, JP23H00086. AT was supported in part by JSPS KAKENHI Grant Numbers JP19K03494, JP19H01786. The authors are sincerely grateful to Maria Matveev and Shin-ichi Ohta for helpful discussion.
2309.03262
CP-Violation with Neutrino Disappearance Alone
The best way to probe CP violation in the lepton sector is with long-baseline accelerator neutrino experiments in the appearance mode: the appearance of $\nu_e$ in predominantly $\nu_\mu$ beams. Here we show that it is possible to discover CP violation with disappearance experiments only, by combining JUNO for electron neutrinos and DUNE or Hyper-Kamiokande for muon neutrinos. While the maximum sensitivity to discover CP is quite modest ($1.6\sigma$ with 6 years of JUNO and 13 years of DUNE), some values of $\delta$ may be disfavored by $>3\sigma$ depending on the true value of $\delta$.
Peter B. Denton
2023-09-06T18:00:00Z
http://arxiv.org/abs/2309.03262v3
# CP-Violation with Neutrino Disappearance Alone ###### Abstract The best way to probe CP violation in the lepton sector is with long-baseline accelerator neutrino experiments in the appearance mode: the appearance of \(\nu_{e}\) in predominantly \(\nu_{\mu}\) beams. Here we show that it is possible to discover CP violation with disappearance experiments only, by combining JUNO for electron neutrinos and DUNE or Hyper-Kamiokande for muon neutrinos. While the maximum sensitivity to discover CP is quite modest (1.6\(\sigma\) with 6 years of JUNO and 13 years of DUNE), some values of \(\delta\) may be disfavored by \(>3\sigma\) depending on the true value of \(\delta\). ## I Introduction There are three free parameters in our current model of particle physics that control the size of CP violation in their respective sector: the \(\bar{\theta}\) term in the gluon sector which seems to be either zero or very small [1], the CKM matrix [2; 3] describing quark mixing which is known to have some CP violation [4; 5], and the PMNS matrix [6; 7] describing lepton mixing. It is unknown if there is CP violation in the lepton mixing matrix [8; 9; 10; 11] and thus determining if CP is violated in the lepton sector is of the utmost priority in particle physics. The best way to probe CP violation in the leptonic sector is by an appearance measurement of an oscillation maximum [12; 13; 14; 15; 16; 17; 18; 19]. That is, the detection of one flavor of neutrino in a source that is predominantly a different flavor, at a baseline and energy that correspond to one of the \(\Delta m^{2}_{ij}\)'s. To date only NOvA [20] and T2K [21] have strong evidence for the detection of appearance by detecting electron neutrinos in predominantly muon neutrino sources, but do not yet significantly probe CP violation [22; 23]. 
Atmospheric neutrinos, which are mostly muon neutrinos but with a significant electron neutrino contribution, also have some evidence for appearance [24; 25; 26] via the detection of all three flavors and thus a total of six oscillation channels, although the appearance information is therefore somewhat scrambled. We will show how it is also possible to probe CP violation via neutrino disappearance measurements only. With good measurements of the disappearance of two different flavors it is possible to determine a total of four independent parameters in the PMNS matrix. Since the mixing matrix is only described by four parameters1, it is possible to learn about CP violation while only measuring CP conserving channels. The key physics effect that makes this possible is unitarity2, and thus if there is new physics in the neutrino sector this story may get more complicated. Given the sizable expected improvements in disappearance measurements in the \(\nu_{e}\) channel with the Jiangmen Underground Neutrino Observatory (JUNO) and the \(\nu_{\mu}\) channel with the Deep Underground Neutrino Experiment (DUNE) and Hyper-Kamiokande (HK), such a study is quite timely. Moreover, disappearance has different (and often cleaner) systematics than appearance measurements since the neutrinos at the near and far detectors are the same flavor, which means that this can be a valuable cross check of CP violation probes in the appearance channel. Footnote 1: Two additional parameters, the so-called Majorana phases, may also be physical depending on the nature of neutrinos, but their impact in neutrino oscillation experiments is suppressed by \((m_{\nu}/E_{\nu})^{2}<10^{-14}\) or smaller and thus can be safely ignored. Footnote 2: We note that unitarity is used for most experiments’ extraction of one or more of the mixing parameters. 
For example, medium-baseline reactor neutrino experiments at \(L\simeq 1\) km actually determine \(4|U_{e3}|^{2}(|U_{e1}|^{2}+|U_{e2}|^{2})\), but by unitarity this is equal to \(4|U_{e3}|^{2}(1-|U_{e3}|^{2})\), which can be expressed as \(\sin^{2}2\theta_{13}\) under the assumption that the matrix is unitary. In this paper, we will briefly review the standard CP violation picture. We will then develop the theory for where there is information about CP violation in disappearance measurements. Finally, we will perform numerical studies indicating the sensitivity to measure \(\delta\), and thus determine if CP is violated or not, via disappearance measurements only. ## II Conventional CP Violation Picture It is true that, consistent with conventional wisdom in the literature, disappearance channels are CP invariant, see e.g. [12; 14; 17], under the assumption that CPT is conserved. That is, by CPT conservation: \[P(\nu_{\alpha}\to\nu_{\alpha})=P(\bar{\nu}_{\alpha}\to\bar{\nu}_{\alpha})\,, \tag{1}\] in vacuum3. Thus neutrinos and antineutrinos act the same in vacuum disappearance experiments. The CP asymmetry, on the other hand, is only nonzero for appearance and is proportional to the Jarlskog invariant \(J\equiv s_{12}c_{12}s_{13}c_{13}^{2}s_{23}c_{23}\sin\delta\)[39]. The difference in probabilities for neutrinos and antineutrinos is \[P(\nu_{\alpha}\to\nu_{\beta})-P(\bar{\nu}_{\alpha}\to\bar{\nu}_{\beta})\simeq\pm 8\pi J\frac{\Delta m_{21}^{2}}{\Delta m_{31}^{2}}\,, \tag{2}\] in vacuum near the first oscillation maximum with \(\alpha\neq\beta\), where the sign depends on \(\alpha\) and \(\beta\). Thus a determination of \(J\), which requires measuring all four parameters of the PMNS matrix, indicates how neutrinos and antineutrinos behave differently in appearance oscillation measurements4. Footnote 4: If neutrinos have decohered or are in an oscillation averaged regime after traveling many oscillation periods, then the term in eq. 2 will vanish. 
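Equation (2) can be checked against the exact vacuum probabilities. The Python sketch below (our own illustration; the oscillation parameter values are representative inputs, not fits from this paper) computes \(P(\nu_{\mu}\to\nu_{e})-P(\bar{\nu}_{\mu}\to\bar{\nu}_{e})\) from the amplitudes at the first oscillation maximum \(\Delta_{31}=\pi/2\) and compares it with \(8\pi J\,\Delta m^{2}_{21}/\Delta m^{2}_{31}\).

```python
import cmath
import math

# Illustrative oscillation parameters (representative values, not a fit from this paper)
s12, s13, s23 = math.sqrt(0.31), math.sqrt(0.022), math.sqrt(0.55)
c12, c13, c23 = math.sqrt(1 - 0.31), math.sqrt(1 - 0.022), math.sqrt(1 - 0.55)
delta = math.pi / 2                      # maximal CP violation
dm21, dm31 = 7.4e-5, 2.5e-3              # Delta m^2 in eV^2

ed = cmath.exp(1j * delta)
U = [[c13 * c12, c13 * s12, s13 / ed],
     [-c23 * s12 - s23 * s13 * c12 * ed, c23 * c12 - s23 * s13 * s12 * ed, s23 * c13],
     [s23 * s12 - c23 * s13 * c12 * ed, -s23 * c12 - c23 * s13 * s12 * ed, c23 * c13]]

def prob(U, a, b, D21, D31):
    # vacuum appearance probability |sum_i U_bi U*_ai exp(-2i Delta_i1)|^2
    phases = [1.0, cmath.exp(-2j * D21), cmath.exp(-2j * D31)]
    amp = sum(U[b][i] * U[a][i].conjugate() * phases[i] for i in range(3))
    return abs(amp) ** 2

D31 = math.pi / 2                        # first oscillation maximum
D21 = D31 * dm21 / dm31
Ubar = [[u.conjugate() for u in row] for row in U]
mu, e = 1, 0
dP = prob(U, mu, e, D21, D31) - prob(Ubar, mu, e, D21, D31)   # CP asymmetry

J = s12 * c12 * s13 * c13 ** 2 * s23 * c23 * math.sin(delta)  # Jarlskog invariant
exact = 16 * J * math.sin(D21) * math.sin(D31) * math.sin(D31 - D21)
approx = 8 * math.pi * J * dm21 / dm31                        # right-hand side of eq. (2)
assert abs(abs(dP) - abs(exact)) < 1e-9
assert abs(abs(dP) - approx) / approx < 0.01
```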
New physics such as sterile neutrinos [40], non-standard neutrino interactions [41; 27; 42], or unitarity violation [43] could also modify this picture in non-trivial ways by making fully CP conserving scenarios appear CP violating, or other such nightmare scenarios. It may be possible to avoid these scenarios via a combination of experiments at different baselines and energies, see e.g. [44; 11; 45]. There are several other non-conventional means of probing leptonic CP violation. One is via one-loop corrections to elastic scattering [46; 47; 48; 49] in solar neutrinos, which may achieve up to \(\sim 1\sigma\) sensitivity to CP violation with optimistic experimental assumptions [50]. This process is still an appearance measurement in that it depends on CP violation via measuring \(\nu_{e}\to\nu_{\mu}\). Another is via a one-loop correction to the standard matter effect for neutrinos propagating inside the Sun, which leads to an additional matter effect term \(\sim 5\times 10^{-5}\) times the size of the standard matter effect [51]; this could in principle lead to a CP violating effect, but it is far beyond the reach even of upcoming experiments [52].

## III CP Violation in Disappearance

### An understanding

While it is not directly possible to determine if nature prefers neutrinos or antineutrinos via disappearance measurements alone, it is possible to determine if nature treats neutrinos and antineutrinos the same or differently via measurements of these CP conserving disappearance channels. That is, disappearance measurements cannot provide information on \(\mathrm{sign}(\sin\delta)\) or equivalently on \(\mathrm{sign}\,J\), but can constrain \(\cos\delta\) and thus potentially rule out the CP conserving values \(|\cos\delta|=1\).
The disappearance probability in vacuum for flavor \(\alpha\) is \[P(\nu_{\alpha}\to\nu_{\alpha}) =1-4|U_{\alpha 1}|^{2}|U_{\alpha 2}|^{2}\sin^{2}\Delta_{21}\] \[\qquad-4|U_{\alpha 1}|^{2}|U_{\alpha 3}|^{2}\sin^{2}\Delta_{31}\] \[\qquad-4|U_{\alpha 2}|^{2}|U_{\alpha 3}|^{2}\sin^{2}\Delta_{32}\,, \tag{3}\] where \(\Delta_{ij}=\Delta m_{ij}^{2}L/4E\) is the kinematic term. To understand how one can determine if CP is conserved or not, we focus on the four degrees of freedom that describe the mixing matrix. We begin by examining the PMNS mixing matrix in the usual parameterization5[4; 53]: Footnote 5: While we are working in a parameterization dependent framework, the ability to discover CP violation is not artificially enhanced by this and CP violation can be discovered in any unitary parameterization of the mixing matrix. \[U=\begin{pmatrix}c_{13}c_{12}&c_{13}s_{12}&s_{13}e^{-i\delta}\\ -c_{23}s_{12}-s_{23}s_{13}c_{12}e^{i\delta}&c_{23}c_{12}-s_{23}s_{13}s_{12}e^{ i\delta}&s_{23}c_{13}\\ s_{23}s_{12}-c_{23}s_{13}c_{12}e^{i\delta}&-s_{23}c_{12}-c_{23}s_{13}s_{12}e^{ i\delta}&c_{23}c_{13}\end{pmatrix}\,. \tag{4}\] Since disappearance measurements only constrain absolute values of elements of the PMNS matrix, we notice that the measurements of the first row, the \(\nu_{e}\) row, provide no information about \(\delta\). It would seem that measurements of either the \(\nu_{\mu}\) or \(\nu_{\tau}\) row would provide information about \(\delta\), specifically \(\cos\delta\), implying that \(\nu_{e}\)'s are somehow special and different from the other two flavors. In reality, any one row (and any one column) can be made to be "simple": only a product of sines, cosines, and \(e^{\pm i\delta}\), see e.g. [53]. The remaining four elements must always be "complicated": the sum or difference of the products of such terms, one of which always contains \(e^{\pm i\delta}\). 
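The row structure just described can be checked directly. The sketch below (assumed global-fit angles; the helper names are ours, not the paper's) builds the matrix of eq. (4), verifies unitarity and the \(\delta\)-independence of the \(\nu_{e}\) row, and implements the vacuum oscillation probability so that the evenness of disappearance in \(\delta\) can be tested:

```python
import cmath
import math

# Representative mixing angles and mass splittings (assumed for illustration)
TH12 = math.asin(math.sqrt(0.31))
TH13 = math.asin(math.sqrt(0.022))
TH23 = math.asin(math.sqrt(0.55))
M2 = [0.0, 7.5e-5, 2.5e-3]  # m_i^2 in eV^2 (normal ordering, m1 set to 0)

def pmns(delta):
    """PMNS matrix in the standard parameterization of eq. (4)."""
    s12, c12 = math.sin(TH12), math.cos(TH12)
    s13, c13 = math.sin(TH13), math.cos(TH13)
    s23, c23 = math.sin(TH23), math.cos(TH23)
    ep, em = cmath.exp(1j * delta), cmath.exp(-1j * delta)
    return [
        [c13 * c12, c13 * s12, s13 * em],
        [-c23 * s12 - s23 * s13 * c12 * ep,
          c23 * c12 - s23 * s13 * s12 * ep, s23 * c13],
        [ s23 * s12 - c23 * s13 * c12 * ep,
         -s23 * c12 - c23 * s13 * s12 * ep, c23 * c13],
    ]

def prob(U, a, b, L_km, E_GeV):
    """Vacuum oscillation probability P(nu_a -> nu_b); 2.534 converts
    eV^2 km / GeV to the phase m_i^2 L / 2E."""
    amp = sum(U[b][i] * U[a][i].conjugate()
              * cmath.exp(-2.534j * M2[i] * L_km / E_GeV) for i in range(3))
    return abs(amp) ** 2
```

Since disappearance depends only on the \(|U_{\alpha i}|^{2}\), which carry \(\delta\) only through \(\cos\delta\), `prob(U, a, a, ...)` is identical for \(\delta\) and \(-\delta\), while an appearance channel is not.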
This provides one means of understanding why two separate disappearance measurements are required to probe CP violation. That is, if, for example, we had an excellent measurement of \(\nu_{\mu}\) disappearance but not of \(\nu_{e}\) or \(\nu_{\tau}\) disappearance, then, since we could choose to make the \(\nu_{\mu}\) row simple, we could not learn anything about CP violation. Thus the absolute value of the complicated elements contains a \(\cos\delta\) contribution, which is the focus of this paper. We also note that in the usual parameterization, the absolute value of the elements in the \(\nu_{e}\) row depends on only two parameters: \(\theta_{13}\) and \(\theta_{12}\), while the absolute value of the elements in the other two rows each depend on all four parameters in the mixing matrix. A perfect measurement of a disappearance channel allows for the determination of the coefficients of all three terms, but provides only two constraints on the mixing matrix due to unitarity. That is, one can always define away one of the \(|U_{\alpha i}|^{2}\) in terms of the other two by \(|U_{\alpha 1}|^{2}+|U_{\alpha 2}|^{2}+|U_{\alpha 3}|^{2}=1\). Thus a perfect measurement of the \(\nu_{e}\) disappearance probability constrains two parameters which, in the usual parameterization of the mixing matrix, map onto \(\theta_{13}\) and \(\theta_{12}\). To date, Daya Bay [54] and RENO [55] provide excellent constraints on \(\theta_{13}\) while KamLAND [56], SNO [57], Super-Kamiokande [58], and Borexino [59] provide good constraints on \(\theta_{12}\). In the future, JUNO [60] will measure \(\theta_{12}\) with excellent precision. Thus the \(\nu_{e}\) row is in excellent shape.
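The parameter-counting argument can be made concrete: a perfect measurement of the \(\nu_{e}\) row recovers exactly \(\theta_{13}\) and \(\theta_{12}\), and the medium-baseline reactor combination reduces to \(\sin^{2}2\theta_{13}\) by unitarity. A minimal sketch with assumed input values:

```python
import math

# Assumed "true" values for the demonstration (representative global fit)
s13_sq, s12_sq = 0.022, 0.31

# nu_e row magnitudes in the standard parameterization of eq. (4)
Ue1_sq = (1 - s13_sq) * (1 - s12_sq)
Ue2_sq = (1 - s13_sq) * s12_sq
Ue3_sq = s13_sq

# Unitarity: the three coefficients of eq. (3) give only two constraints
assert abs(Ue1_sq + Ue2_sq + Ue3_sq - 1) < 1e-12

# A perfect nu_e disappearance measurement recovers exactly two angles
s13_sq_rec = Ue3_sq
s12_sq_rec = Ue2_sq / (1 - Ue3_sq)

# The reactor combination 4|Ue3|^2(|Ue1|^2+|Ue2|^2) equals sin^2(2 theta_13)
lhs = 4 * Ue3_sq * (Ue1_sq + Ue2_sq)
rhs = math.sin(2 * math.asin(math.sqrt(s13_sq))) ** 2
print(s13_sq_rec, s12_sq_rec, lhs, rhs)
```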
The constraint on \(\theta_{13}\), which comes from measuring the quantity \(|U_{e3}|^{2}(|U_{e1}|^{2}+|U_{e2}|^{2})=|U_{e3}|^{2}(1-|U_{e3}|^{2})\), is determined via the kinematic terms \(\Delta_{31}\) and \(\Delta_{32}\), which have a maximum effect for reactor neutrinos at a baseline of \(\sim 1.5\) km. The oscillation6 constraint on \(\theta_{12}\), which comes from measuring the quantity \(|U_{e1}|^{2}|U_{e2}|^{2}\), is determined via the kinematic term \(\Delta_{21}\), which has a maximum effect for reactor neutrinos at a baseline about \(\frac{\Delta m_{31}^{2}}{\Delta m_{21}^{2}}\simeq 30\) times farther, \(\sim 50\) km.

Footnote 6: Solar neutrinos, which largely do not oscillate [61], constrain \(\simeq|U_{e2}|^{2}(1-|U_{e3}|^{2})\).

For the \(\nu_{\mu}\) row, disappearance measurements provide up to two degrees of freedom that map onto four parameters: \(\theta_{23}\), \(\theta_{13}\), \(\theta_{12}\), and \(\cos\delta\). But since \(\theta_{13}\) and \(\theta_{12}\) are or will be well known, similar measurements of \(\nu_{\mu}\) disappearance will provide information about \(\theta_{23}\) and \(\cos\delta\). Disappearance experiments exist that measure \(\theta_{23}\) via the quantity \(|U_{\mu 3}|^{2}(|U_{\mu 1}|^{2}+|U_{\mu 2}|^{2})=|U_{\mu 3}|^{2}(1-|U_{\mu 3}|^{2})\) (combined with information about \(\theta_{13}\)) at baselines from about 300 km to 10,000 km. Thus one can get at the second piece of information in the \(\nu_{\mu}\) disappearance picture by performing a \(\nu_{\mu}\) disappearance oscillation experiment at 30 times the baseline for a fixed energy. This is not feasible for atmospheric neutrinos and one would need to imagine an extremely optimistic configuration from e.g.
Tokai, the accelerator neutrino source for T2K and HK, to SURF, the upcoming far detector location for DUNE, which is a distance of 8235 km, close to the oscillation minimum at 8850 km assuming the same off-axis angle as T2K/HK, although the flux is lower by a factor of almost 800. See the appendix for a discussion of this scenario. We instead focus on leveraging data in planned experiments such as DUNE and HK and a careful spectral measurement to provide information about the beginning of the \(\Delta_{21}\) oscillations in a \(\Delta_{31}\) and \(\Delta_{32}\) dominated regime. This is similar to the discussed plan for measuring the solar parameters \(\Delta m_{21}^{2}\) and \(\theta_{12}\) with Daya Bay data [62]. The effect of CP violation thus begins to show up at the low energy side of the \(\nu_{\mu}\) disappearance spectrum, and thus DUNE has an advantage: \(\nu_{\mu}\) experience more oscillations before the \(\nu_{\mu}\) charged-current cross section hits the muon threshold. While event rates and reconstructions are challenging at lower energies, the effect will impact the rate at which the oscillation maximum decreases where the probability is near one, so there is no probability suppression, which helps the rate. Since the appearance channel essentially constrains \(\sin\delta\) (see eq. 2) while the disappearance channel constrains \(\cos\delta\), these two measurements provide key complementary information. In fact, there will be sign degeneracies in many regions of parameter space of \(\delta\) with either only appearance or disappearance. Moreover, the precision on \(\delta\) near \(\pi/2\) or \(3\pi/2\) is determined by the sensitivity to \(\cos\delta\) which comes from this combination of disappearance measurements making this disappearance based measurement crucial for determining the exact value of \(\delta\) if we are near \(|\sin\delta|=1\) as some data [23] may be indicating. 
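The baseline scalings quoted above follow from simple arithmetic on the mass splittings; a quick check with representative (assumed) splittings:

```python
# Ratio of mass splittings sets the "second baseline" scaling discussed above.
dm21, dm31 = 7.5e-5, 2.5e-3  # eV^2, representative values (assumed)
ratio = dm31 / dm21          # ~33, rounded to ~30 in the text

reactor_km = 1.5 * ratio     # ~50 km: JUNO-like baseline from the ~1.5 km one
accel_km = 295 * 30          # 8850 km from T2K/HK's 295 km off-axis baseline
print(ratio, reactor_km, accel_km)
```

The 8850 km figure is what makes the Tokai-to-SURF distance of 8235 km, discussed in the appendix, fall close to the solar oscillation minimum.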
We now turn to a more quantitative investigation of the size of the effect. ### Analytic approximation We now strive to understand exactly how \(\cos\delta\), which can provide key information about CP violation, appears in the \(\nu_{\mu}\) disappearance probability in matter. First, we note that the \(\Delta_{31}\) and \(\Delta_{32}\) terms in eq. 3 can be approximately combined as mentioned above, see also [63]. Thus the \(\cos\delta\) dependence in the magnitudes of these two terms will approximately cancel in vacuum, although the matter effect will somewhat change this. Second, we focus on the \(\Delta_{21}\) term. In vacuum, to first order in \(s_{13}\), the term is \[-4c_{23}^{2}\left(s_{12}^{2}c_{12}^{2}+s_{23}c_{23}s_{13}\sin 2\theta_{12} \cos 2\theta_{12}\cos\delta\right)\sin^{2}\Delta_{21}\,, \tag{5}\] where the term in parentheses is numerically \(\simeq 0.21+0.03\cos\delta\). Thus we would expect that the maxima will be shifted lower for \(\cos\delta=1\) and higher for \(\cos\delta=-1\). Third, we include the correction due to the matter effect. The matter effect has almost no impact on \(\theta_{23}\) or \(\delta\) below the atmospheric resonance at \(E\simeq 11\) GeV [64; 65]. In addition, while \(\theta_{13}\), \(\Delta m_{31}^{2}\), and \(\Delta m_{32}^{2}\) do evolve somewhat in matter, the change from the vacuum value is \(\lesssim 10\%\) and can be safely ignored at this level of discussion. The solar parameters, \(\theta_{12}\) and \(\Delta m_{21}^{2}\), on the other hand, evolve considerably in matter at these energies. To a sufficient approximation, the matter correction factor for the solar parameters is [64; 66] \[\mathcal{S}_{\odot}\simeq\sqrt{(\cos 2\theta_{12}-c_{13}^{2}a/\Delta m_{21}^{2})^{2 }+\sin^{2}2\theta_{12}}\,, \tag{6}\] where \(a=2\sqrt{2}G_{F}N_{e}E\) is the contribution from the matter effect. See the appendix for more discussion of the solar corrections including higher order terms. 
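The numerical sizes quoted in this subsection can be reproduced directly. The sketch below evaluates the parenthesized term of eq. (5) and the solar matter factor of eq. (6); the matter-potential constant and density values are assumptions for illustration, not numbers from this paper:

```python
import math

# Representative oscillation parameters (assumed, not from this paper)
S12_SQ, S13_SQ, S23_SQ = 0.31, 0.022, 0.55
DM21 = 7.5e-5  # eV^2
TH12 = math.asin(math.sqrt(S12_SQ))
C13_SQ = 1 - S13_SQ

# The parenthesized term in eq. (5): numerically ~0.21 + 0.03 cos(delta)
s23, c23, s13 = math.sqrt(S23_SQ), math.sqrt(1 - S23_SQ), math.sqrt(S13_SQ)
const = S12_SQ * (1 - S12_SQ)
coef = s23 * c23 * s13 * math.sin(2 * TH12) * math.cos(2 * TH12)

# sqrt(2) G_F N_e per unit Y_e*rho in g/cm^3 (assumed constant density)
V_UNIT = 7.63e-14  # eV

def solar_factor(E_GeV, rho_ye=1.4):
    """Return (S_sun, numerator) of eq. (6); rho_ye = 1.4 g/cm^3 is a
    DUNE-like crust assumption."""
    a = 2 * V_UNIT * rho_ye * E_GeV * 1e9  # a = 2 sqrt(2) G_F N_e E in eV^2
    x = math.cos(2 * TH12) - C13_SQ * a / DM21
    return math.hypot(x, math.sin(2 * TH12)), x

def solar_resonance_GeV(rho_ye=1.4):
    """Energy where the matter angle theta_12 crosses pi/4 (numerator = 0)."""
    return DM21 * math.cos(2 * TH12) / (2 * V_UNIT * rho_ye * C13_SQ * 1e9)

# Above the resonance the numerator goes negative, flipping the sign of the
# cos(delta) term relative to vacuum; for antineutrinos (a -> -a) it stays
# positive at all energies.
print(const, coef, solar_resonance_GeV())
```

With these inputs the resonance lands near 0.13 GeV at crust-like density and near 0.09 GeV at the mantle-like density used in the Tokai-to-SURF appendix.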
Then the solar mixing angle is approximately given by \[\cos 2\widehat{\theta_{12}}\simeq\frac{\cos 2\theta_{12}-c_{13}^{2}a/\Delta m_{21}^{2}}{\mathcal{S}_{\odot}}\,, \tag{7}\] where \(\widehat{x}\) is the quantity \(x\) in matter. We note that past the solar resonance at \(E=0.13\) GeV, \(\widehat{\theta_{12}}>\pi/4\) and thus \(\cos 2\widehat{\theta_{12}}<0\). Therefore the second term in the parentheses in eq. 5 changes sign when the matter effect is considered. That is, we expect that the probability in matter should be highest for \(\cos\delta=1\) and lowest for \(\cos\delta=-1\), as is confirmed numerically in fig. 1. This measurement therefore also provides another indirect test of the matter effect if one is able to compare measurements of \(\delta\) between the appearance and the disappearance channels. Various additional effects are also in play, see the appendix, but this is sufficient to understand the general role of \(\cos\delta\) in the \(\nu_{\mu}\) disappearance channel. The above calculation applies for both DUNE and HK (see also the appendix), but the effect at HK is smaller than that for DUNE. For antineutrinos the story is somewhat different. The value of \(\cos\delta\) does not change as \(\delta\to-\delta\), so it is the same as for neutrinos. In matter we note that while \(\cos 2\widehat{\theta_{12}}<0\) for neutrinos, it remains positive for antineutrinos, as in vacuum. Thus the impact on the oscillation maxima for antineutrinos is the same in matter as in vacuum and the probability is comparatively large for \(\cos\delta=-1\) and small for \(\cos\delta=+1\). Due to the lower statistics in \(\bar{\nu}_{\mu}\) mode, this will not contribute as much as the neutrino channel to the total significance for probing \(\cos\delta\) and CP violation.

## IV Estimated experimental sensitivities

To quantify the magnitude of the effect, we simulate \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) disappearance in DUNE using DUNE's simulation files [67; 68].
We consider a 40 kt fiducial volume and 6.5 years each in neutrino and antineutrino mode with 1.2 MW beam power and 56% beam uptime. We take priors on the five oscillation parameters other than \(\delta\) from one of 1) our current knowledge of the oscillation parameters [9], 2) the expected improvement on \(\theta_{12}\), \(\Delta m_{21}^{2}\), and \(\Delta m_{31}^{2}\) from the inclusion of 6 years of JUNO's \(\bar{\nu}_{e}\) disappearance data [60], and 3) the hypothetical scenario with perfect knowledge of all five oscillation parameters. We then perform a statistical test to assess DUNE's capability to determine \(\cos\delta\), including systematics, efficiency, smearing, and backgrounds as estimated by DUNE [67], and show our results in fig. 2 for each of the three different choices of priors. For \(\cos\delta=\pm 1\), the combination of DUNE and JUNO can disfavor \(\cos\delta=\mp 1\) at \(>3\sigma\), and improvements in JUNO's measurement can reach close to \(4\sigma\). In addition, for \(\cos\delta=0\) (CP violating), \(|\cos\delta|=1\) (CP conserving) can be disfavored at \(1.6\sigma\), see fig. 3 in the appendix. We have confirmed that there is information about \(\cos\delta\) in each of neutrino and antineutrino modes individually, although neutrino mode contributes more to the information due to higher statistics from the larger cross section and lower wrong-sign lepton rates.

Figure 1: The \(\nu_{\mu}\) disappearance probability for DUNE at different values of \(\cos\delta\); see also fig. 7 for the same plot for HK.

Figure 2: The expected sensitivity to \(\cos\delta\) using only \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\) disappearance from DUNE along with external priors on the other five oscillation parameters from: the current precision, the expected improvement with JUNO, or if the other oscillation parameters are known perfectly.

In addition, as suggested by the theory discussion above, \(\cos\delta\) can
also be determined if DUNE was performed in vacuum, although the results would be modified. Additional numerical results for DUNE as well as HK can be found in the appendix. We also checked the precision with which \(\cos\delta\) can be determined. The \(1\sigma\) uncertainty is essentially independent of the true value and is 0.63 and 0.51 given external information at the level associated with 6 years of JUNO and perfect knowledge, respectively. One could also consider \(\nu_{\mu}\) disappearance with atmospheric neutrinos at HK [69], IceCube [70], KM3NeT [71], or JUNO [72; 73]. The expected precision in the standard disappearance oscillation parameters (\(\Delta m^{2}_{31}\) and \(\sin^{2}2\theta_{23}\)) is expected to be much better in the upcoming long-baseline accelerator-based experiments than in the atmospheric experiments. Nonetheless, due to the different systematics and timelines it may be useful to consider a fit including atmospheric neutrinos alongside state-of-the-art \(\nu_{e}\) disappearance measurements. In principle one could probe \(\cos\delta\) with existing disappearance data. The current status of the data is that the best \(\nu_{\mu}\) disappearance measurements come from NOvA [22] and T2K [23] and the best \(\nu_{e}\) disappearance measurements come from Daya Bay [54] and KamLAND [56]. The \(\nu_{e}\) disappearance data is described by the "current" curves in figs. 2, 3, and 8 which show that even DUNE or HK can only provide at most \(\sim 1.4\sigma\) sensitivity to \(\cos\delta\); with existing NOvA and T2K data there would not be any significant \(\cos\delta\) information at all. ## Conclusion Determining if CP is violated in the neutrino sector is one of the highest priorities in particle physics. The best way to do so is with neutrino oscillations in the appearance channels. 
As this measurement will face many significant systematic uncertainties, additional means of probing \(\delta\) and CP violation will be crucial to ensure robustness. While disappearance channels are fundamentally CP conserving, we have shown that they can still provide information about \(\delta\), specifically \(\cos\delta\), which is sufficient to determine if CP is violated or not. Nonetheless, it cannot be done with any one disappearance measurement; we require good precision measurements of the disappearance probability of (at least) two different flavors. The matter effect affects the details of this story somewhat, but CP violation can be determined even in vacuum. In addition, neutrinos and antineutrinos behave somewhat differently in disappearance due to the matter effect, but neutrino mode alone (or antineutrino mode alone) is sufficient to determine if CP is conserved or violated. In the upcoming generation of experiments, JUNO will measure the \(\bar{\nu}_{e}\) disappearance probability with unprecedented precision by directly observing all three oscillation frequencies. Long-baseline experiments like DUNE and HK will measure \(\nu_{\mu}\) disappearance primarily focused on the weighted average of the \(\Delta m^{2}_{31}\) and \(\Delta m^{2}_{32}\) frequencies, but will also detect, at a subleading level, the \(\Delta m^{2}_{21}\) frequency [74]. This is enough to provide some information about \(\delta\). In particular, we find that DUNE and JUNO combined will be able to disfavor some values of \(\cos\delta\) at up to \(>3\sigma\) depending on the true value. Since \(\nu_{\mu}\) disappearance has somewhat cleaner and, more importantly, different systematics from \(\nu_{e}\) appearance in long-baseline measurements at DUNE and HK, this channel will provide a crucial robustness test of CP violation when combined with JUNO data.

The author acknowledges support by the United States Department of Energy under Grant Contract No. DE-SC0012704.
## Appendix A CP violation discovery sensitivity

We quantify in fig. 3 the expected sensitivity to discover CP violation (that is, ruling out \(|\cos\delta|=1\)) as a function of the true value of \(\delta\). We also include priors on the five oscillation parameters other than \(\delta\) constrained at three different levels: perfect knowledge, the expected future precision including 6 years of JUNO constraining \(\theta_{12}\), \(\Delta m^{2}_{21}\), and \(\Delta m^{2}_{31}\) [60], and the current knowledge [9]. The different colors correspond to including both appearance and disappearance (the standard DUNE analysis), only appearance, and only disappearance. The different line styles correspond to external pulls from the current knowledge of the oscillation parameters, the expected improvements with JUNO, and hypothetical perfect knowledge. The blue dotted curve (both channels and current knowledge of the oscillation parameters) agrees with DUNE's curve very well. While the disappearance channel does not contribute nearly as much information on CP violation as the appearance channel, it does provide some information, enhancing the overall sensitivity to CP violation beyond appearance alone. It also provides an important cross-check on CP violation measurements with different systematics. We also see that the combination of DUNE and JUNO's disappearance measurements provides for up to \(1.6\sigma\) sensitivity to discover CP violation. Finally, we note that the combined fit with both appearance and disappearance data yields more information than the naive sum of the \(\Delta\chi^{2}\)'s of each separately in the cases with the current or expected JUNO priors, due to the fact that \(\nu_{\mu}\) disappearance will provide world leading measurements of \(\Delta m^{2}_{31}\) and \(\theta_{23}\), but with perfect knowledge of the other five oscillation parameters, the combined fit is the same as the naive sum of \(\Delta\chi^{2}\)'s.
## Appendix B Tokai to SURF

We examine the solar \(\Delta m^{2}_{21}\) minimum at a hypothetical very long-baseline \(\nu_{\mu}\) disappearance accelerator neutrino experiment from J-PARC in Tokai, Japan to SURF in Lead, South Dakota, United States at a baseline of 8235 km through the Earth's mantle. In vacuum, the oscillation minimum due to \(\Delta m^{2}_{21}\) happens at 0.5 GeV and depends on \(\cos\delta\) since the amplitude of the oscillation, to first order in \(s_{13}\), is \[|U_{\mu 1}|^{2}|U_{\mu 2}|^{2}\approx c^{4}_{23}s^{2}_{12}c^{2}_{12}+s_{23}c^{3}_{23}s_{13}\sin 2\theta_{12}\cos 2\theta_{12}\cos\delta\,. \tag{8}\] While the energies are low, since we are considering oscillations related to \(\Delta m^{2}_{21}\) instead of \(\Delta m^{2}_{31}\), the matter effect will play a role and change the amplitude and shift the location of the minimum. The matter value of \(\theta_{12}\) reaches \(\pi/4\) at \(2EV_{\rm CC}=\Delta m^{2}_{21}\cos 2\theta_{12}/c^{2}_{13}\) [66; 75], where \(V_{\rm CC}=\sqrt{2}G_{F}N_{e}\) is the matter potential and \(N_{e}\) is the electron number density. For \(\rho=4\) g/cc, a typical mantle density, this occurs at 0.09 GeV. For the frequency, the matter correction is \[\Delta m^{2}_{21}\rightarrow\widehat{\Delta m^{2}_{21}}\approx\Delta m^{2}_{21}\mathcal{S}_{\odot}\,, \tag{9}\] where the hat denotes a quantity in matter and \[\mathcal{S}_{\odot}=\sqrt{(\cos 2\theta_{12}-c^{2}_{13}a/\Delta m^{2}_{21})^{2}+\sin^{2}2\theta_{12}}\,, \tag{10}\] is the solar correction factor [66; 75; 64]. We find that the oscillation minimum happens near \[\widehat{\Delta_{21}}=\frac{3}{2}\pi\,. \tag{11}\] Normally there would be one oscillation minimum at \(\widehat{\Delta_{21}}=\pi/2\), but \(\widehat{\Delta_{21}}>\pi/2\) for all energies at this density and baseline.
This is because at higher energies the matter potential causes \(\mathcal{S}_{\odot}\propto E\) so \(\widehat{\Delta m^{2}_{21}}\propto E\) and thus \(\widehat{\Delta_{21}}\) is independent of energy. At lower energies, \(\widehat{\Delta m^{2}_{21}}\) reaches a minimum at the solar minimum and continues to increase somewhat to vacuum (roughly proportional to the neutrino energy, but the proximity to the resonance somewhat modifies this dependence), meanwhile the \(1/E\) contribution in \(\widehat{\Delta_{21}}\) ensures that \(\widehat{\Delta_{21}}\) grows at small energies as well. Since there is a minimum of \(\widehat{\Delta_{21}}\) as a function of \(E\), then for certain baselines and densities it may be the case that the first (or even higher) oscillation extremum is never reached for any energies. There is also an additional shift in the location of the minimum due to the prefactor in front of the \(\sin^{2}\Delta_{21}\) term which is proportional to \(\sin^{2}2\theta_{12}\) in vacuum which is approximately \(\sin^{2}2\theta_{12}/\mathcal{S}_{\odot}^{2}\) in matter, which also depends on the energy. The shift due to this is small; note that \(\theta_{23}\) and \(\delta\) don't vary much in matter at all, and the impact of matter on \(\Delta m^{2}_{31}\) and \(\theta_{13}\) is small enough at these energies [64; 65]. Thus the location of the minimum is well estimated in this region of parameter space by solving eq. 11 for the energy, \[E_{\rm min}\simeq\frac{(\Delta m^{2}_{21}L)^{2}/2}{c^{2}_{13}\cos 2\theta_{12} \Delta m^{2}_{21}L^{2}V_{\rm CC}+\sqrt{(\Delta m^{2}_{21}L)^{2}[(3\pi)^{2}-(2 c^{2}_{13}\sin 2\theta_{12}LV_{\rm CC})^{2}]}}=0.16\ {\rm GeV}\,. \tag{12}\] Since this energy is above 0.09 GeV, the leading order term proportional to \(\cos\delta\) in eq. 8 is nonzero and \(\cos\delta=1\) will decrease eq. 8 and thus increase the probability relative to \(\cos\delta=0\). 
In addition, since the \(\cos\delta\) dependence in the next \(s_{13}\) order correction depends on \(-\cos^{2}\delta\) and since both orders may be similar due to the \(\cos 2\theta_{12}\) suppression in the first order term, we see that the probability should be highest for \(\cos\delta=1\) and slightly lower, but comparable, for other values of \(\cos\delta\). Next, notice that the size of the large and fast \(\Delta_{31}\) and \(\Delta_{32}\) amplitudes also depends on \(\cos\delta\). This is because \(\Delta_{31}\) and \(\Delta_{32}\) are sufficiently different and no longer in phase that they are to be treated separately. Then we notice that in vacuum through first order in \(s_{13}\) the amplitudes are: \[-4s_{23}^{2}\left(c_{23}^{2}s_{12}^{2}+2s_{23}c_{23}s_{13}s_{12}c_{12}\cos\delta\right)\,, \tag{13}\] \[-4s_{23}^{2}\left(c_{23}^{2}c_{12}^{2}-2s_{23}c_{23}s_{13}s_{12}c_{12}\cos\delta\right)\,, \tag{14}\] where the first (second) line is for \(\Delta_{31}\) (\(\Delta_{32}\)). Since \(\widehat{s_{12}^{2}}>\widehat{c_{12}^{2}}\) at these energies, it is the \(\Delta_{31}\) term that dominates. So for \(\cos\delta=1\) the dominant amplitude increases in magnitude and the only decrease is in the lesser amplitude, while for \(\cos\delta=-1\) the large amplitude is suppressed and the increase in the smaller amplitude brings the two amplitudes closer together. Then since \(\Delta_{31}\) and \(\Delta_{32}\) are roughly out of phase at these energies, as expected at the oscillation minimum for \(\Delta_{21}\), there is destructive interference when the amplitudes of each phase are close together. All of these effects discussed above can be seen in fig. 4. The dashed lines are the same as the solid lines but with a \(10\%/\sqrt{E/\text{GeV}}\) energy resolution smearing, see e.g. [76].

Figure 3: DUNE’s sensitivity to disfavor CP conservation, broken down by channel (colors) and the external priors (line styles). For the same plot with HK, see fig. 8.
This shows that even with an optimistic energy resolution, the fast \(\Delta_{31}\) and \(\Delta_{32}\) oscillations are completely averaged out. First, the large \(\Delta_{21}\) minimum happens at \(0.16\) GeV as expected. Second, we see that the smeared out probability is higher for \(\cos\delta=1\) than the other cases, which are all similar. Third, the amplitude of the fast oscillations decreases near the minimum, and much more so for \(\cos\delta=-1\) than for \(\cos\delta=1\). Experimentally, the low energy of the \(\Delta m_{21}^{2}\) minimum would require the beam to be more off-axis than T2K or HK, which further reduces the flux considerably, and the \(\nu_{\mu}\) CC cross section experiences considerable suppression from the muon mass threshold, making an experiment of this nature extremely unfeasible. An additional issue is one of energy resolution. Given that the \(\Delta m_{31}^{2}\) and \(\Delta m_{32}^{2}\) oscillations have large amplitude and oscillate \(\sim 35\) times faster than the larger \(\widehat{\Delta m_{21}^{2}}\) oscillation, the effect due to the differing amplitudes of the \(\widehat{\Delta m_{21}^{2}}\) oscillations from \(\cos\delta\) will be rapidly averaged out unless unrealistically exceptional energy resolution can be achieved. Thus only the effect described after eq. 12 from eq. 8 could be detectable, not the effect described in eqs. 13-14. Therefore, despite its theoretical interest, this determination of \(\cos\delta\) is possible in principle but infeasible in practice.

## Appendix C DUNE event rates and regions of interest

While the full statistical fits performed for figs. 2 and 3 contain all of the relevant information, it is useful to gain a physical understanding of where we can expect the effects to appear in the data. To this end, in fig.
5 we calculated the expected \(\nu_{\mu}\) event rates after 6.5 years in neutrino mode only, including efficiency, smearing, and backgrounds, for several key values of \(\cos\delta\). We have defined two regions of interest (ROIs) separated by the local minimum and constrained by half the event rate of each local maximum. In the lower energy ROI, ROI 1, we see that the event rate increases with \(\cos\delta\), while in ROI 2 it decreases with \(\cos\delta\). The statistics in each ROI for the different values of \(\delta\) are shown in table 1, and it is easy to see that each ROI contributes to the statistical test to disfavor CP conservation given \(\cos\delta=0\) at the level of \(\Delta\chi^{2}\sim 1.4\) based on statistics only, leading to a combined estimate of \(\sim 1.7\sigma\) sensitivity from neutrino mode, close to the correct estimate of \(1.6\sigma\) (including neutrino and antineutrino modes) shown in fig. 3. A realistic analysis needs to include uncertainties on the oscillation parameters and systematic uncertainties, which will decrease the sensitivity somewhat, but also shape information, which will increase it somewhat.

\begin{table} \begin{tabular}{c|c|c} \(\cos\delta\) & ROI 1 & ROI 2 \\ \hline 1 & 5506 & 5038 \\ 0 & 5418 & 5115 \\ -1 & 5334 & 5193 \\ \end{tabular} \end{table} Table 1: The expected \(\nu_{\mu}\) event rates after 6.5 years of neutrino mode at DUNE as a function of \(\cos\delta\) in the two ROIs, see fig. 5.

Figure 4: The \(\nu_{\mu}\) disappearance probability from Tokai to SURF at different values of \(\cos\delta\). The dashed lines have an additional smearing to show the probability independent of the fast oscillations.

## Appendix D Runtime impact

We also calculate the sensitivity based on the runtime of DUNE and JUNO. For DUNE we keep the target mass, uptime, and proton power the same and vary the runtime, split evenly between \(\nu_{\mu}\) and \(\bar{\nu}_{\mu}\). For JUNO we
assume that the precision on the three parameters that they mainly determine, \(\Delta m^{2}_{21}\), \(\Delta m^{2}_{31}\), and \(\theta_{12}\), continues to scale as expected, with a variance scaling as the inverse of the run time plus a systematic term determined by the numbers in [60]. We plot the ability to rule out CP conservation assuming \(\delta=3\pi/2\); other statistical tests, such as ruling out \(\cos\delta=1\) assuming \(\cos\delta=-1\) (see fig. 2) and so on, all scale in a similar fashion. The results are shown in fig. 6. We see that further improvement beyond the benchmark point requires more time from both experiments. Instead of additional time, the statistics can also be improved by increasing the beam power (DUNE/HK) or reactor power (JUNO); all experiments have the possibility of seeing such upgrades. There is a discussion about upgrading the accelerator for DUNE from 1.2 MW to 2.4 MW [19; 77], and JUNO may benefit from additional nuclear reactors increasing the power from 26.6 GW\({}_{\text{th}}\) to 35.8 GW\({}_{\text{th}}\) [78]. This further highlights the important synergy among long-baseline reactor and accelerator neutrino experiments, as improvement in the statistics of either experiment alone will not significantly improve the sensitivity to CP violation, but additional runtime for both will, since the effect is not specific to either the \(\nu_{e}\) or the \(\nu_{\mu}\) row.

### Hyper-Kamiokande

We repeat the same calculations performed in the main text and elsewhere in the appendix, but for HK for completeness. First we examine the probability itself in fig. 7 and find that the impact of varying \(\cos\delta\) is much smaller than for DUNE, see fig. 1, as expected since \(\widehat{\theta_{12}}\) is closer to \(\pi/4\) and thus the leading order \(\cos\delta\) dependence in the \(\Delta_{21}\) term in eq. 5 is closer to zero.
Second, we calculate HK's sensitivity to discover CP violation as a function of the true value of \(\delta\) broken down by appearance and disappearance as well as the role of external priors in fig. 8. We assume 1.3 MW, 187 kton fiducial mass, 1:3 neutrino to antineutrino run time ratio, and 10 years of running at 100% uptime to generally agree with the nominal HK prediction [69]. Note that we assume that the mass ordering is known which is relevant for HK and not for DUNE. We find that HK is somewhat less sensitive to discovering CP violation in the disappearance channel than DUNE since the effect is smaller, but the larger statistics mostly compensate for the difference.

Figure 5: The expected \(\nu_{\mu}\) disappearance event rates after 6.5 years in neutrino mode for various values of \(\cos\delta\). We have also defined two regions of interest allowing for a simple statistical understanding of the sensitivity to \(\cos\delta\).

Figure 6: The sensitivity to disfavor \(\sin\delta=0\) assuming \(\delta=3\pi/2\) as a function of DUNE and JUNO’s runtime. The star denotes the benchmark point considered elsewhere in the text.

Figure 7: The \(\nu_{\mu}\) disappearance probability for HK at different values of \(\cos\delta\); see also fig. 1 for the same plot for DUNE.
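As a closing cross-check on the appendix estimates, the statistics-only \(\Delta\chi^{2}\) quoted alongside Table 1 can be reproduced directly from the ROI event counts. This is a hedged sketch of the arithmetic only (a simple Pearson \(\chi^{2}\), not the full analysis with systematics and shape information):

```python
import math

# Expected nu_mu event counts after 6.5 years of neutrino mode at DUNE,
# taken from Table 1: cos(delta) -> (ROI 1, ROI 2).
counts = {1: (5506, 5038), 0: (5418, 5115), -1: (5334, 5193)}

def delta_chi2(true_cos_delta, test_cos_delta):
    """Statistics-only Pearson chi^2 between two cos(delta) hypotheses,
    summed over the two regions of interest."""
    return sum(
        (n_true - n_test) ** 2 / n_test
        for n_true, n_test in zip(counts[true_cos_delta],
                                  counts[test_cos_delta])
    )

# Truth delta = 3pi/2 (cos delta = 0) tested against CP conservation
# (cos delta = 1): each ROI contributes a Delta chi^2 of order 1.2-1.4,
# and the combined value corresponds to roughly 1.6 sigma, in line with
# the statistics-only estimates quoted in the text.
chi2 = delta_chi2(0, 1)
sigma = math.sqrt(chi2)
```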
2309.13717
On the Petrov Type of a 4-manifold
On an oriented 4-manifold, we examine the geometry that arises when the curvature operator of a Riemannian or Lorentzian metric $g$ commutes, not with its own Hodge star operator, but rather with that of another semi-Riemannian metric $h$ that is a suitable deformation of $g$. We classify the case when one of these metrics is Riemannian and the other Lorentzian by generalizing the concept of Petrov Type from general relativity; the case when $h$ is split-signature is also examined. The "generalized Petrov Types" so obtained are shown to relate to the critical points of $g$'s sectional curvature, and sometimes yield unique normal forms. They also carry topological information independent of the Hitchin-Thorpe inequality, and yield a direct geometric formulation of "almost-Einstein" metric via the Ricci or sectional curvature of $g$.
Amir Babak Aazami
2023-09-24T18:32:06Z
http://arxiv.org/abs/2309.13717v2
# Petrov types for Riemannian 4-manifolds

###### Abstract.

On an oriented 4-manifold, we examine the geometry that arises when the curvature operator of a Riemannian metric \(g\) commutes, not with its own Hodge star operator, but with that of a Lorentzian metric that is a deformation of \(g\). This leads to two complementary notions of "Petrov Type" for \(g\), one directly in terms of \(g\)'s curvature operator, the other in terms of a variant of it. Both versions lead to pointwise classifications, and both include among them Riemannian metrics that deform the Einstein condition in the direction of a fixed vector field. While the first version is more direct, the second version has "normal forms," in that the Petrov Type of \(g\) will be determined by the critical points of a certain quadratic form. We close by generalizing our construction beyond Lorentzian metrics.

## 1. Introduction

Of the many attributes of 4-dimensional Riemannian geometry, not least is its beautiful characterization of Einstein metrics, namely, those metrics \(g\) whose Ricci tensor \(\operatorname{Ric}_{g}\) satisfies \[\operatorname{Ric}_{g}=\lambda g \tag{1}\] for some \(\lambda\in\mathbb{R}\). Lurking behind (1) is the Hodge star operator \(*\) of an oriented Riemannian 4-manifold \((M,g)\), which splits the second exterior product \(\Lambda^{2}\) into the direct sum of "self-dual" (\(*\xi=\xi\)) and "anti-self-dual" (\(*\xi=-\xi\)) eigenspaces. As shown in [1] and [2], the Einstein condition (1) arises precisely when \(*\) commutes with \(g\)'s curvature operator \(\hat{R}_{g}\). Even more, such a \(g\) will have a "normal form," in that \(\hat{R}_{g}\) will be determined by knowledge of just the critical points and values of \(g\)'s sectional curvature function, viewed as the quadratic form of \(\hat{R}_{g}\).
Two important ingredients go into making this characterization possible: i) Because \(\hat{R}_{g},*\colon\Lambda^{2}\longrightarrow\Lambda^{2}\) both derive from the same metric \(g\), they are both self-adjoint with respect to the \(g\)-induced inner product \(\langle\,,\rangle_{g}\) on \(\Lambda^{2}\); ii) the positive-definiteness of \(\langle\,,\rangle_{g}\) then makes their diagonalization possible (simultaneously so, since they commute), which, together with the eigenstructure of \(*\), ultimately makes \(g\)'s normal form possible. A similar construction arises in Lorentzian geometry -- but with one important difference: As shown in [11], although oriented Lorentzian Einstein 4-manifolds \((M,g_{\iota})\) are characterized by the same commuting property as above, the critical points of \(g_{\iota}\)'s sectional curvature function do not by themselves determine the curvature operator \(\hat{R}_{\iota}\). Rather, they can only determine \(g_{\iota}\)'s "Petrov Type" [10], a classification of 4-dimensional spacetimes prominent within general relativity. The reason is that \(g_{\iota}\) is not positive-definite, so that i) holds but not ii). In this paper we combine the Riemannian with the Lorentzian, and further engage with [11, 12], by studying the geometry that arises when the curvature operator \(\hat{R}_{g}\) of a Riemannian metric \(g\) commutes with the Hodge star operator \(\ast_{\iota}\) of a Lorentzian metric \(g_{\iota}\). If \(\hat{R}_{g}\) does not commute with \(\ast_{\iota}\), then we may take its "symmetric" part, \(\hat{S}_{g}:=\frac{1}{2}(\hat{R}_{g}-\ast_{\iota}\circ\hat{R}_{g}\circ\ast_{\iota})\), which will commute by construction (thus \(\hat{S}_{g}\) plays a role analogous to the Weyl curvature tensor).
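The claim that \(\hat{S}_{g}\) commutes with \(\ast_{\iota}\) by construction is pure linear algebra, relying only on \(\ast_{\iota}^{2}=-1\) (derived in Section 2). A minimal numerical sketch, assuming numpy, with a random \(6\times 6\) matrix standing in for \(\hat{R}_{g}\) and the block-matrix form of \(\ast_{\iota}\) from Section 2:

```python
import numpy as np

rng = np.random.default_rng(1)

# Block form of the Lorentzian Hodge star *_iota in an adapted basis
# (see Section 2); the only property used below is J^2 = -1.
I3, O3 = np.eye(3), np.zeros((3, 3))
J = np.block([[O3, I3], [-I3, O3]])
assert np.allclose(J @ J, -np.eye(6))       # *_iota squares to -1

R = rng.normal(size=(6, 6))                 # stand-in for R_hat_g
S = 0.5 * (R - J @ R @ J)                   # "symmetric" part S_hat_g

# S J = (R J + J R)/2 = J S, so S commutes with J by construction.
assert np.allclose(S @ J, J @ S)
```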
Although this sort of inquiry can be pursued fairly generally -- after all, one may take the Hodge star \(\tilde{\ast}\) of any metric \(\tilde{g}\) on \(M\) -- we focus here on Lorentzian metrics because our \(\tilde{g}\)'s are obtained by deforming \(g\) in the direction of a fixed \(g\)-unit length vector field \(T\) on \(M\), \[\tilde{g}:=g+\varepsilon T^{\flat}\otimes T^{\flat},\] where \(T^{\flat}:=g(T,\cdot)\) and \(\varepsilon\) is any real number not equal to \(-1\). The reason for this choice is because if \(\hat{R}_{g}\) commutes with such a \(\tilde{\ast}\), then \(g\)'s Ricci tensor will be "close" to being Einstein -- see (3) below. However, only the cases \(\varepsilon=0\) or \(\varepsilon=-2\) prove interesting -- and \(\varepsilon=-2\) is a Lorentzian metric. Now, if one's interest is global, then the very existence of \(T\) requires \(M\) to have Euler characteristic zero. By [1] (see Lemma 1 below), such manifolds have no non-flat Einstein metrics -- making them, for that reason, a natural place in which to search for "almost-Einstein" metrics (though in Section 6 we will in fact generalize beyond Lorentzian metrics). Therefore, whether local or global, we weaken i) above by asking: If \(\hat{R}_{g}\) is the curvature operator of \(g\), and \(\ast_{\iota}\) is the Hodge star operator of the Lorentzian metric \[g_{\iota}:=g-2T^{\flat}\otimes T^{\flat},\] then what geometry arises for \(g\) if \[\hat{R}_{g}\circ\ast_{\iota}=\ast_{\iota}\circ\hat{R}_{g}? \tag{2}\] In fact this immediately weakens ii) as well. Indeed, while \(\hat{R}_{g}\) is self-adjoint with respect to its \(g\)-induced inner product \(\langle\,,\rangle_{g}\) on \(\Lambda^{2}\), and while \(\ast_{\iota}\) is self-adjoint with respect to the \(g_{\iota}\)-induced inner product \(\langle\,,\rangle_{\!\!g_{\iota}}\) on \(\Lambda^{2}\), neither is self-adjoint with respect to the other's inner product. 
We will address this issue in two ways, each of which will lead to different classifications of \(g\) but to complementary notions of "almost-Einstein" metric:

1. Work as is with \(\hat{R}_{g}\) over \((\Lambda^{2},\langle\,,\rangle_{g_{\iota}})\). As we show in Section 4, in any oriented \(g\)-orthonormal basis of the form \(\{T:=e_{1},e_{2},e_{3},e_{4}\}\), the Ricci tensor of a metric \(g\) satisfying (2) will take the form \[\operatorname{Ric}_{g}=\begin{bmatrix}\lambda&\ast&\ast&\ast\\ \ast&\lambda&0&0\\ \ast&0&\lambda&0\\ \ast&0&0&\lambda\end{bmatrix},\] (3) where \(\lambda:=\operatorname{Ric}(T,T)\) is not constant in general. This is one realization for us of an "almost-Einstein" metric -- a deformation of (1) along \(T\). The salient point is that even though (3) is basis-dependent, (2) is not. Can such a \(g\) have a normal form? The answer is "not quite" -- because in losing self-adjointness, we also lose the algebraic Bianchi identity (of \(\hat{R}_{g}\) relative to \(\langle\,,\rangle_{g_{\iota}}\)). _Nevertheless, as we show in Theorems 2 and 3, we can associate a "Petrov Type" to \(g\) by classifying the complex eigenstructure of \(\hat{R}_{g}\) that arises from the fact that \(*_{\iota}^{2}=-1\). We can also establish a relationship between these eigenvectors and the critical points of \(\hat{R}_{g}\)'s \(\langle\,,\rangle_{g_{\iota}}\)-quadratic form. If \(\hat{R}_{g}\) does not commute with \(*_{\iota}\), then Theorems 2 and 3 extend to the symmetric part, \(\hat{S}_{g}\), of \(\hat{R}_{g}\); see Theorem 4._

2. For the second approach, we modify \(\hat{R}_{g}\) into a new operator, "\(\hat{R}_{g}^{\iota}\)," that _is_ self-adjoint with respect to \(\langle\,,\rangle_{g_{\iota}}\) (see Definition 7). While \(\hat{R}_{g}^{\iota}\) is not quite the curvature operator \(\hat{R}_{g}\) (though it is close), the advantage gained here is that \(\hat{R}_{g}^{\iota}\) and \(*_{\iota}\) are now both self-adjoint with respect to \(\langle\,,\rangle_{g_{\iota}}\). Replacing (2) with the condition \[\hat{R}_{g}^{\iota}\circ*_{\iota}=*_{\iota}\circ\hat{R}_{g}^{\iota},\] (4) the methods of [10] now become fully available to us. _As we show in Theorems 6 and 7, we can also associate a Petrov Type to \(g\) by classifying the complex eigenstructure of \(\hat{R}_{g}^{\iota}\) -- and these now have normal forms, in that their Petrov Types will be completely determined by the critical points of \(\hat{R}_{g}^{\iota}\)'s \(\langle\,,\rangle_{g_{\iota}}\)-quadratic form_. Relative to \(\{T:=e_{1},e_{2},e_{3},e_{4}\}\), the Ricci tensors of such \(g\)'s will take the form \[\operatorname{Ric}_{g}=\begin{bmatrix}\lambda&0&0&0\\ 0&\lambda+\lambda_{2}&*&*\\ 0&*&\lambda+\lambda_{3}&*\\ 0&*&*&\lambda+\lambda_{4}\end{bmatrix},\] (5) with \(\lambda:=\operatorname{Ric}(T,T)\) once again. Note the complementary forms of (3) and (5), how each deforms (1) along \(T\), but, so to speak, in opposite ways. Again, the salient point is that (4) is basis-independent. _Finally, in Theorem 8 we extend these results to \(\hat{S}_{g}^{\iota}\), the symmetric part of \(\hat{R}_{g}^{\iota}\)_.

In conclusion, the approach taken in this paper is to have the commuting condition \[\hat{R}_{g}\circ*=*\circ\hat{R}_{g}\] take priority over the Einstein condition \(\operatorname{Ric}_{g}=\lambda g\), by viewing the latter as merely the special (though most important) case where \(\hat{R}_{g}\) and \(*\) arise from the _same metric_ \(g\).
We further elaborate on this viewpoint in Section 6, by taking the standard round metric \(\hat{g}\) on \(\mathbb{S}^{4}\) in place of \(g_{\iota}\), and examining (2) or (4) with \(\mathring{*}\) instead of \(*_{\iota}\) (or any Riemannian metric \(\tilde{g}\) on any oriented \(4\)-manifold \(M\)). In short, although Einstein metrics may not be abundant in dimension \(4\), "normal forms" are.

## 2. The Hodge star operator and normal forms

In this section we provide a brief overview as well as some historical remarks regarding normal forms and the Hodge star operator in both the Riemannian and Lorentzian settings. The study of "normal forms" for curvature operators is motivated by a well known fact from linear algebra: On any finite-dimensional inner product space \((V,\langle\,,\rangle)\), a self-adjoint linear transformation \(T\colon V\longrightarrow V\) is determined by the critical points of its associated _quadratic form_ function on the sphere of unit length vectors: \[v\mapsto\langle Tv,v\rangle\quad,\quad|v|=1. \tag{6}\] The connection to Riemannian geometry is due to the fact that the sectional curvature of a Riemannian manifold \((M,g)\) can also be realized as a quadratic form. First, fix \(p\in M\) and observe that \(\Lambda^{2}:=\Lambda^{2}(T_{p}M)\) inherits an inner product \(\langle\,,\rangle_{g}\) from \(g\) via \[\langle v_{1}\wedge w_{1},v_{2}\wedge w_{2}\rangle_{g}:=\det\begin{bmatrix}g(v_{1},v_{2})&g(v_{1},w_{2})\\ g(w_{1},v_{2})&g(w_{1},w_{2})\end{bmatrix}. \tag{7}\] With this in hand, we may express the action of the Riemann curvature 4-tensor \(R\) as a linear map called the _curvature operator_, \[\hat{R}_{g}\colon\Lambda^{2}\longrightarrow\Lambda^{2},\] whose action \(v\wedge w\mapsto\hat{R}_{g}(v\wedge w)\) is defined to be the unique 2-vector satisfying \[\langle\hat{R}_{g}(v\wedge w),x\wedge y\rangle_{g}:=-R(v,w,x,y)\quad\text{ for all }x,y\in T_{p}M.
\tag{8}\] Owing to the symmetry \(R_{ijkl}=R_{klij}\), \(\hat{R}_{g}\) is self-adjoint on the inner product space \((\Lambda^{2},\langle\,,\rangle_{g})\). It is with respect to \(\hat{R}_{g}\) that the sectional curvature of \(g\) is a quadratic form. Indeed, for any orthonormal pair \(v,w\in T_{p}M\), the sectional curvature \(\operatorname{sec}_{g}\) of the 2-plane \(v\wedge w\) is \[\operatorname{sec}_{g}(v\wedge w):=R(v,w,w,v)=\underbrace{\langle\hat{R}_{g}(v \wedge w),v\wedge w\rangle_{g}}_{\text{``}\langle Tv,\,v\rangle\text{''}}. \tag{9}\] (Our sign convention is \(R(a,b,c,d):=g(\nabla_{a}\nabla_{b}\,c-\nabla_{b}\nabla_{a}c-\nabla_{[a,b]}c,d)\).) Given this analogy with (6), it is thus natural to study \(\hat{R}_{g}\) by studying the critical point behavior of \(\operatorname{sec}_{g}\) -- the goal being to classify those curvature tensors which are determined by knowledge of just the critical point structure of \(\operatorname{sec}_{g}\). Such curvature operators are then said to have a "normal form." (Observe that this problem is more difficult than (6) because not all unit 2-vectors in \(\Lambda^{2}\) correspond to 2-planes in \(T_{p}M\). Rather, only those unit 2-vectors \(\xi\) that are decomposable do; i.e., those that can be written as \(\xi=v\wedge w\) for \(v,w\in T_{p}M\).) As shown in [10], if one restricts attention to the class of Einstein metrics on a smooth 4-manifold \(M\), then such metrics are indeed determined solely by the critical point structure of \(\operatorname{sec}_{g}\) -- i.e., the class of Einstein 4-manifolds does indeed possess a "normal form." 
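The linear-algebra fact behind (6) can be illustrated numerically: for a self-adjoint \(T\), the critical points of the quadratic form on the unit sphere are exactly the unit eigenvectors, with the eigenvalues as critical values. A small sketch, assuming numpy (the matrix is a random stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
T = A + A.T                                  # self-adjoint operator

vals, vecs = np.linalg.eigh(T)               # columns are unit eigenvectors

for lam_, v in zip(vals, vecs.T):
    # Gradient of v -> <Tv, v> restricted to the unit sphere is
    # 2(Tv - <Tv, v> v); it vanishes exactly at eigenvectors.
    grad = 2 * (T @ v - (v @ T @ v) * v)
    assert np.allclose(grad, 0)              # v is a critical point
    assert np.isclose(v @ T @ v, lam_)       # critical value = eigenvalue
```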
As mentioned in the Introduction, what makes this beautiful result possible is that, in dimension 4, Einstein metrics \(g\) are exactly those whose curvature operators \(\hat{R}_{g}\) commute with the Hodge star operator \(*\), \[*\colon\Lambda^{2}\longrightarrow\Lambda^{2}\quad,\quad g\ \text{Einstein}\iff\hat{R}_{g}\circ*=*\circ\hat{R}_{g}, \tag{10}\] where the Hodge star operator \(*\) is defined as sending any 2-plane \(v\wedge w\) to its \(\langle\,,\rangle_{g}\)-orthogonal complement. The precise definition is \[\xi\wedge*\eta:=\langle\xi,\eta\rangle_{g}\,dV\quad,\quad\xi,\eta\in\Lambda^{2}, \tag{11}\] where \(dV\) is the orientation form in the 1-dimensional space \(\Lambda^{4}(T_{p}M)\). (As we'll see below, these definitions would not change if we replaced the Riemannian metric \(g\) with a Lorentzian metric \(g_{\iota}\), as we will very soon do.) Thanks to \(*\), there is in fact another way to express the Einstein condition in dimension 4 (see [10]): \[g\ \text{is Einstein}\iff\sec_{g}(v\wedge w)=\sec_{g}(*(v\wedge w)). \tag{12}\] I.e., \(g\) is Einstein if and only if the sectional curvature of each 2-plane is equal to that of its orthogonal complement. With these preliminaries out of the way, let us comment briefly on what else has been done regarding normal forms in the Riemannian setting. Continuing the study of the pointwise sectional curvature function \(\sec_{g}\), Thorpe analyzed its zero set when \(\sec_{g}\geq 0\) in [11], and also gave a condition for a 4-manifold to have \(\sec_{g}>0\), in [11]. Dimension 4 is particularly rich: Aside from Einstein metrics, the class of Kähler 4-manifolds also possesses a normal form, as shown in [12]. In fact Johnson also found normal forms for 6-dimensional Kähler manifolds with positive sectional curvature, in [12].
As one may imagine, however, in higher dimensions the behavior of \(\sec_{g}\) becomes more difficult to analyze in general, as shown in [10]; indeed, aside from the Kähler case in dimension 6, the author knows of only one other higher-dimensional result, in [10], which found normal forms for a special class of curvature tensors in dimension 5. The study of sectional curvature related to and inspired by these works continues to the present day; e.g., on the eigenvalues of \(\hat{R}_{g}\) and their relationship to the topology of the underlying manifold \(M\), or on a variant of \(\hat{R}_{g}\) known as the _curvature operator of the second kind_ (see, e.g., the recent works [1, 2, 13, 14]). So, too, does the quest for normal forms -- even in dimension 4. Indeed, [12, Remark 2.6] posed the question of whether gradient Ricci 4-solitons, which satisfy \[\operatorname{Ric}_{g}+\operatorname{Hess}f=\lambda g\] for some smooth function \(f\) on \(M\), also possess a normal form. Therefore, the pursuit of normal forms remains to this day an active one. Now we turn to Lorentzian geometry. First, recall that a _Lorentzian metric_ \(g_{\iota}\) on a smooth manifold \(M\) is a smooth nondegenerate metric with signature \((-++\cdots+)\). The absence of positive-definiteness implies that all nonzero vectors \(X\in TM\) come in three flavors, which terminology we will use freely in this paper: \[X\text{ is }\left\{\begin{array}{rcl}\text{``spacelike''}&\text{if}&g_{\iota}(X,X)>0,\\ \text{``timelike''}&\text{if}&g_{\iota}(X,X)<0,\\ \text{``lightlike''}&\text{if}&g_{\iota}(X,X)=0.\end{array}\right.\] The Hodge star operator \(*_{\iota}\) of a Lorentzian 4-manifold \((M,g_{\iota})\) is defined in perfect analogy with that of a Riemannian metric \(g\) on \(M\); however, owing to a certain sign difference (see (19) below), let us go through the derivation with some care.
To begin with, our \(g_{\text{\tiny$\iota$}}\) will not be chosen arbitrarily, but rather "directly" from \(g\) itself, as follows: For a suitable choice of \(g\)-unit length vector field \(T\), let us form the metric \[g_{\text{\tiny$\iota$}}:=g-2T^{\flat}\otimes T^{\flat}, \tag{13}\] where \(T^{\flat}:=g(T,\cdot)\) is the one-form \(g\)-metrically equivalent to \(T\). Notice that \(g_{\text{\tiny$\iota$}}(T,T)=-1\), so that \(T\) is unit _timelike_ with respect to \(g_{\text{\tiny$\iota$}}\). _Note also that any \(g\)-orthonormal basis containing \(T\) is automatically a \(g_{\text{\tiny$\iota$}}\)-orthonormal basis, and vice-versa_ -- though this is not true for arbitrary \(g\)-orthonormal bases. Generally speaking, the more "distinguished" \(T\) is -- e.g., if it is closed, \(dT^{\flat}=0\), or if it is a Killing vector field, \(\mathfrak{L}_{T}g=0\) -- the more similar the properties of \(g\) and \(g_{\text{\tiny$\iota$}}\) will be; see [10] for a careful treatment. Having said that, whatever the choice of \(T\), if we take an oriented local \(g_{\text{\tiny$\iota$}}\)-orthonormal frame \(\{e_{1},e_{2},e_{3},e_{4}\}\) with \(e_{1}:=T\), then the 2-vectors \[\{e_{1}\wedge e_{2}\,,\,e_{1}\wedge e_{3}\,,\,e_{1}\wedge e_{4}\,,\,e_{3} \wedge e_{4}\,,\,e_{4}\wedge e_{2}\,,\,e_{2}\wedge e_{3}\} \tag{14}\] will be an orthonormal basis for \(\Lambda^{2}\) with respect to the Lorentzian inner product on \(\Lambda^{2}\), which is analogous to (7) and defined by \[\langle v_{1}\wedge w_{1},v_{2}\wedge w_{2}\rangle_{\!\!g_{\text{\tiny$\iota$ }}}:=\det\begin{bmatrix}g_{\text{\tiny$\iota$}}(v_{1},v_{2})&g_{\text{\tiny$ \iota$}}(v_{1},w_{2})\\ g_{\text{\tiny$\iota$}}(w_{1},v_{2})&g_{\text{\tiny$\iota$}}(w_{1},w_{2})\end{bmatrix}. 
\tag{15}\] (Note that the first three basis elements in (14) are all timelike, \[\langle e_{1}\wedge e_{i},e_{1}\wedge e_{i}\rangle_{\!\!g_{\text{\tiny$\iota$ }}}=-1\quad,\quad i=2,3,4,\] so that \(\langle\,,\rangle_{\!\!g_{\text{\tiny$\iota$}}}\) has signature \((---+++)\).) Now we define the Hodge star operator \(*_{\text{\tiny$\iota$}}\) with respect to \(g_{\text{\tiny$\iota$}}\), in perfect analogy with (11): \[\xi\wedge*_{\text{\tiny$\iota$}}\eta:=\langle\xi,\eta\rangle_{\!\!g_{\text{\tiny $\iota$}}}\,dV\quad,\quad\xi,\eta\in\Lambda^{2}.\] Bearing in mind that \(g_{\text{\tiny$\iota$}}(e_{1},e_{1})=-1\), observe that the action of \(*_{\text{\tiny$\iota$}}\) on the basis (14) is \[\left\{\begin{aligned} *_{\text{\tiny$\iota$}}(e_{1}\wedge e_{2})& =-e_{3}\wedge e_{4},\\ *_{\text{\tiny$\iota$}}(e_{1}\wedge e_{3})&=-e_{4} \wedge e_{2},\\ *_{\text{\tiny$\iota$}}(e_{1}\wedge e_{4})&=-e_{2} \wedge e_{3},\end{aligned}\right.,\quad\left\{\begin{aligned} *_{\text{\tiny$\iota$}}(e_{3}\wedge e_{4})&=e_{1} \wedge e_{2},\\ *_{\text{\tiny$\iota$}}(e_{4}\wedge e_{2})&=e_{1} \wedge e_{3},\\ *_{\text{\tiny$\iota$}}(e_{2}\wedge e_{3})&=e_{1} \wedge e_{4},\end{aligned}\right. \tag{16}\] or in block matrix form, \[*_{\text{\tiny$\iota$}}=\begin{bmatrix}O&I\\ -I&O\end{bmatrix}, \tag{17}\] where \(I\) is the \(3\times 3\) identity matrix. By contrast, the Riemannian Hodge star (11) is \[*=\begin{bmatrix}O&I\\ I&O\end{bmatrix} \tag{18}\] with respect to the same basis (14). While \(*\) is diagonalizable, \(*_{\iota}\) is not (even though \(*_{\iota}\) is self-adjoint with respect to \(\langle\,,\rangle_{g_{\iota}}\)). Furthermore, \[*^{2}=1\quad\text{ whereas }\quad*_{\iota}^{2}=-1. \tag{19}\] The almost complex structure that the latter defines on each \(\Lambda^{2}\) will be crucial to our construction of Riemannian Petrov Type below -- just as it was in the original Lorentzian construction of Petrov Type; see, e.g., [10] and [11, Chapter 5]. Let us describe this now. 
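The contrast in (19) follows immediately from the block forms (17) and (18), and is easy to verify numerically along with the eigenstructure that drives it: \(*\) has real eigenvalues \(\pm 1\) (the self-dual/anti-self-dual split), while \(*_{\iota}\) has purely imaginary eigenvalues \(\pm i\). A quick check, assuming numpy:

```python
import numpy as np

I3, O3 = np.eye(3), np.zeros((3, 3))
star      = np.block([[O3, I3], [I3, O3]])    # Riemannian *, eq. (18)
star_iota = np.block([[O3, I3], [-I3, O3]])   # Lorentzian *_iota, eq. (17)

assert np.allclose(star @ star, np.eye(6))            # *^2 = 1, eq. (19)
assert np.allclose(star_iota @ star_iota, -np.eye(6)) # *_iota^2 = -1

# * is diagonalizable over the reals with eigenvalues +-1 ...
assert np.allclose(np.sort(np.linalg.eigvalsh(star)), [-1, -1, -1, 1, 1, 1])

# ... while *_iota has purely imaginary eigenvalues +-i, the almost
# complex structure used for Petrov Types.
ev = np.linalg.eigvals(star_iota)
assert np.allclose(ev.real, 0)
assert np.allclose(np.sort(ev.imag), [-1, -1, -1, 1, 1, 1])
```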
Because \(*_{\iota}^{2}=-1\), one loses the self-dual/anti-self-dual splitting of \(\Lambda^{2}\). Having said that, one does gain a _complex structure_, via \(i:=*_{\iota}\). Then the commuting condition \(\widehat{W}_{\iota}\circ*_{\iota}=*_{\iota}\circ\widehat{W}_{\iota}\), where \(\widehat{W}_{\iota}\) is the Weyl curvature operator of \(g_{\iota}\)--which, being trace-free, automatically commutes with \(*_{\iota}\)--turns \(\widehat{W}_{\iota}\) into a _complex-linear_ map on \(\Lambda^{2}_{\text{c}}\). The "Petrov Type of \(g_{\iota}\)" is then precisely the complex eigenstructure of \(\widehat{W}_{\iota}\) (see [11, Chapter 5]). This eigenstructure was shown in [10] to coincide with the _number_ of critical points of \(g_{\iota}\)'s sectional curvature function (see also [1, pp. 98ff.]). For example, the Petrov Type of vacuum black holes is equivalent to their sectional curvature functions having exactly one "spacelike" critical point at any point on \(M\). Therefore, in having \(\hat{R}_{g}\) commute with a Lorentzian \(*_{\iota}\), we can associate complex eigenstructures and thus Petrov Types. (Compare this with [1], wherein it was shown that an oriented Einstein 4-manifold will admit a (locally) Hermitian structure if and only if its self-dual Weyl tensor \(\widehat{W}^{+}:=\frac{1}{2}(\widehat{W}+\widehat{W}\circ*)\) has at least two of its three eigenvalues equal.) In doing so, the following is helpful to keep in mind: **Definition 1**.: _Let \((M,g_{\iota})\) be an oriented Lorentzian 4-manifold. At any \(p\in M\), let \(P\) denote an oriented 2-dimensional subspace of \(T_{p}M\). Then \(P\) is nondegenerate if the restriction of \(g_{\iota}\) to \(P\), \(g_{\iota}|_{P}\), is nondegenerate. The sign of \(P\), denoted \(\epsilon_{\iota}(P)=\pm 1\), is defined to be \(-1\) if \(g_{\iota}|_{P}\) is Lorentzian and \(+1\) if \(g_{\iota}|_{P}\) is positive-definite. 
The 2-plane \(g_{\iota}\)-orthogonal to a nondegenerate \(P\), denoted by \(P^{\perp_{\iota}}\), is defined to be_ \[P^{\perp_{\iota}}:=*_{\iota}P.\] _Finally, following [10], let \(G_{+}(p)\cup G_{-}(p)\subseteq\Lambda^{2}(T_{p}M)\) denote the 2-Grassmannians of all decomposable 2-vectors of length \(\pm 1\), respectively_; _i.e., the set of all oriented, nondegenerate 2-dimensional subspaces of \(T_{p}M\). Note that \(\epsilon_{\iota}(P)=\langle P,P\rangle_{\!g_{\!\iota}}\) for any \(P\in G_{\pm}(p)\)._ (\(P^{\perp_{\iota}}\) should be distinguished from its Riemannian version \(P^{\perp}:=*P\). Also, recall that a 2-vector \(\xi\in\Lambda^{2}\) is decomposable \(\Leftrightarrow\xi\wedge\xi=0\Leftrightarrow\langle\xi,*_{\iota}\xi\rangle_{ \!g_{\iota}}=0\).) We close this section by writing the matrix of \(\hat{R}_{g}\) with respect to (14). Denoting by "\(R_{ijkl}\)" the components of the Riemann curvature 4-tensor of \(g\), we have (recall the minus sign in (8)) \[\hat{R}_{g}=-\begin{bmatrix}R_{1212}&R_{1312}&R_{1412}&R_{3412}&R_{4212}&R_{2312} \\ R_{1213}&R_{1313}&R_{1413}&R_{3413}&R_{4213}&R_{2313}\\ R_{1214}&R_{1314}&R_{1414}&R_{3414}&R_{4214}&R_{2314}\\ R_{1234}&R_{1334}&R_{1434}&R_{3434}&R_{4234}&R_{2334}\\ R_{1242}&R_{1342}&R_{1442}&R_{3442}&R_{4242}&R_{2342}\\ R_{1223}&R_{1323}&R_{1423}&R_{3423}&R_{4223}&R_{2323}\end{bmatrix}, \tag{20}\] which, owing to the symmetry \(R_{ijkl}=R_{klij}\), has the block form \[\hat{R}_{g}=-\begin{bmatrix}A&B\\ B^{t}&D\end{bmatrix}, \tag{21}\] with \(A\) and \(D\) symmetric \(3\times 3\) matrices and \(B^{t}\) the transpose of the \(3\times 3\) matrix \(B\), which is not symmetric in general. Now, in order to motivate our construction in Section 3 below, let us return for a moment to the fully Riemannian setting and recall the Einstein condition (10), \[\mathrm{Ric}=\lambda g\iff\hat{R}_{g}\circ*=*\circ\hat{R}_{g},\] which, we now see from (21), is the case if and only if \(A=D\) and \(B^{t}=B\). 
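The equivalence just noted -- commuting with the Riemannian \(*\) forces \(A=D\) and \(B^{t}=B\) in (21) -- can be spot-checked numerically; the same block computation with \(*_{\iota}\) from (17) instead forces \(A=D\) and \(B^{t}=-B\) (an analogous direct computation). A sketch assuming numpy, with random matrices standing in for the blocks:

```python
import numpy as np

rng = np.random.default_rng(0)

def commutes(X, Y):
    return np.allclose(X @ Y, Y @ X)

I3, O3 = np.eye(3), np.zeros((3, 3))
star_riem = np.block([[O3, I3], [I3, O3]])    # eq. (18)
star_lor  = np.block([[O3, I3], [-I3, O3]])   # eq. (17)

A = rng.normal(size=(3, 3)); A = A + A.T                 # symmetric block
B_sym  = rng.normal(size=(3, 3)); B_sym  = B_sym + B_sym.T   # symmetric
B_anti = rng.normal(size=(3, 3)); B_anti = B_anti - B_anti.T # antisymmetric

# Riemannian case: A = D and B symmetric gives commutation with *.
R_riem = -np.block([[A, B_sym], [B_sym.T, A]])
assert commutes(R_riem, star_riem)

# Lorentzian case: A = D and B antisymmetric gives commutation with *_iota.
R_lor = -np.block([[A, B_anti], [B_anti.T, A]])
assert commutes(R_lor, star_lor)
```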
And as we saw in (12), \[\underbrace{\sec_{g}(P)=\sec_{g}(P^{\perp})}_{\text{for any 2-plane $P$}}\iff\hat{R}_{g} \circ*=*\circ\hat{R}_{g}\iff\hat{R}_{g}=-\begin{bmatrix}A&B\\ B&A\end{bmatrix}, \tag{22}\] where \(P^{\perp}:=*P\). If instead we had required that \(\hat{R}_{g}\) and \(*\)_anti_-commute, then \(\sec_{g}(P^{\perp})\) would have the opposite sign; in fact, \[\underbrace{\sec(P)=-\sec(P^{\perp})}_{\text{for any 2-plane $P$}}\iff\hat{R}_{g} \circ*=-(*\circ\hat{R}_{g})\iff\hat{R}_{g}=-\begin{bmatrix}A&B\\ -B&-A\end{bmatrix}. \tag{23}\] In the next section, we are going to study geometries that are in some sense _intermediate between (22) and (23)_ -- see (26) below. ## 3. almost-Einstein metrics We now begin our construction, by first of all defining the class of "almost-Einstein" metrics whose geometry we wish to investigate: **Definition 2** (almost-Einstein metric).: _An oriented Riemannian 4-manifold \((M,g)\) is almost-Einstein if there is a nowhere vanishing vector field \(T\) on \(M\), together with local ordered frames \(\{T,X_{1},X_{2},X_{3}\}\) in a neighborhood of each point, with respect to which the Ricci tensor \(\mathrm{Ric}\) of \(g\) uniformly takes exactly one of the following forms_: \[\underbrace{\mathrm{Ric}=\begin{bmatrix}\lambda&\psi_{1}&\psi_{2}&\psi_{3}\\ \psi_{1}&\lambda&0&0\\ \psi_{2}&0&\lambda&0\\ \psi_{3}&0&0&\lambda\end{bmatrix}}_{\text{``Type A''}},\quad\underbrace{ \mathrm{Ric}=\begin{bmatrix}\lambda&0&0&0\\ 0&\lambda+\lambda_{2}&\psi_{2}&\psi_{3}\\ 0&\psi_{2}&\lambda+\lambda_{3}&\psi_{4}\\ 0&\psi_{3}&\psi_{4}&\lambda+\lambda_{4}\end{bmatrix}}_{\text{``Type B''}}, \tag{24}\] _Here \(\lambda:=\mathrm{Ric}(T,T)\) and \(\lambda_{i},\psi_{i}\) are smooth functions._ Einstein metrics are locally, if not always globally, almost-Einstein: With respect to any local orthonormal frame, the Ricci tensor of an Einstein metric will be of either Type in (24) (with \(\lambda\) a constant and each \(\psi_{i}=\lambda_{i}=0\)). 
Almost-Einstein metrics therefore deform the Einstein condition "in the direction of a fixed vector field \(T\)," and the two types complement each other. Here is one reason why they are worth considering: **Lemma 1**.: _Let \(g\) be an almost-Einstein metric on a closed 4-manifold. If \(g\) is Einstein, then it is flat._ Proof.: If the closed 4-manifold \(M\) globally admits a nowhere vanishing vector field \(T\), then it must have Euler characteristic zero: \(\chi(M)=0\). But by a classical result of Berger [1], on such a 4-manifold any Einstein metric \(g\) must be flat; indeed, by the Chern-Gauss-Bonnet formula in dimension 4, \[\chi(M)=\frac{1}{8\pi^{2}}\int_{M}|W|^{2}-\frac{1}{2}|\mathring{\mathrm{Ric}}|^{2}+\frac{1}{24}|\mathrm{scal}|^{2},\] where \(\mathring{\mathrm{Ric}}:=\mathrm{Ric}-\frac{\mathrm{scal}}{4}g\) is the trace-free Ricci tensor, which vanishes identically for Einstein metrics. Note that almost-Einstein metrics certainly do exist locally: **Lemma 2**.: _Let \(\lambda,\psi_{1},\psi_{2},\psi_{3}\) be smooth functions on \(\mathbb{R}^{4}\) such that_ \[(\psi_{1}^{2}+\psi_{2}^{2}+\psi_{3}^{2})\Big|_{\mathbf{0}}\neq\lambda^{2}\Big|_{\mathbf{0}}\neq 0\] _at the origin \(\mathbf{0}\in\mathbb{R}^{4}\). Then there exists a Type A almost-Einstein metric in a neighborhood of \(\mathbf{0}\)._ Proof.: Type A in (24) has eigenvalues \[\lambda\quad,\quad\lambda\quad,\quad\lambda\pm\sqrt{\psi_{1}^{2}+\psi_{2}^{2}+\psi_{3}^{2}}.\] If \((\psi_{1}^{2}+\psi_{2}^{2}+\psi_{3}^{2})\big|_{\mathbf{0}}\neq\lambda^{2}\big|_{\mathbf{0}}\neq 0\), then these eigenvalues are all nonzero at \(\mathbf{0}\), in which case the 4-tensor defined in the coordinate frame on \(\mathbb{R}^{4}\) by (24) is invertible at \(\mathbf{0}\).
As shown in [1] (see also [1]), this guarantees the existence of a smooth Riemannian metric \(g\) on a neighborhood of \(\mathbf{0}\) whose Ricci tensor is, with respect to the coordinate frame \(\{\partial_{1},\partial_{2},\partial_{3},\partial_{4}\}\), equal to the Type A Ricci tensor in (24). Setting \(T:=\partial_{1}\) completes the proof. We now show that there is an important subclass of almost-Einstein metrics which has a rich underlying geometry. Here we will deal with the case of Type A in (24), postponing Type B to Section 5: **Proposition 1**.: _Let \((M,g)\) be an oriented Riemannian 4-manifold and \(T\) a unit length vector field on \(M\). If the curvature operator \(\hat{R}_{g}\) of \(g\) commutes with the Hodge star operator \(*_{\iota}\) of the Lorentzian metric \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\), then with respect to any oriented \(g\)-orthonormal frame \(\{T:=e_{1},e_{2},e_{3},e_{4}\}\) the Ricci tensor of \(g\) takes the Type A form of (24); in particular, \(g\) is almost-Einstein._ Proof.: With respect to the basis (14), \(*_{\iota}\) has the block form (17) and \(\hat{R}_{g}\) the block form (21), so the commuting condition (2) holds if and only if \(A=D\) and \(B^{t}=-B\). These relations place no constraint on the components \(\operatorname{Ric}_{12},\operatorname{Ric}_{13},\operatorname{Ric}_{14}\),
which functions we'll denote by \(\psi_{1},\psi_{2},\psi_{3}\). On the other hand, the relation \(A=D\) yields \[\left\{\begin{aligned} \operatorname{Ric}_{23}&=R_{1231}+R_{4234 }=0,\\ \operatorname{Ric}_{24}&=R_{1241}+R_{3243}=0,\\ \operatorname{Ric}_{34}&=R_{1341}+R_{2342}=0.\end{aligned}\right.\] Next, the components \[\left\{\begin{aligned} \operatorname{Ric}_{11}&=R_{2112}+R_{31 13}+R_{4114},\\ \operatorname{Ric}_{22}&=R_{1221}+R_{3223}+R_{4224}, \\ \operatorname{Ric}_{33}&=R_{1331}+R_{2332}+R_{4334}, \\ \operatorname{Ric}_{44}&=R_{1441}+R_{2442}+R_{3443}, \end{aligned}\right.\] yield, together with \(A=D\), the following three identities: \[\left\{\begin{aligned} (1)&\operatorname{Ric}_{11}- \operatorname{Ric}_{22}+\operatorname{Ric}_{33}-\operatorname{Ric}_{44}=2R_{1 331}-2R_{2442}=0,\\ (2)&\operatorname{Ric}_{11}+\operatorname{Ric}_{22}- \operatorname{Ric}_{33}-\operatorname{Ric}_{44}=2R_{1221}-2R_{3443}=0,\\ (3)&\operatorname{Ric}_{11}-\operatorname{Ric}_{22}- \operatorname{Ric}_{33}+\operatorname{Ric}_{44}=2R_{4114}-2R_{2332}=0.\end{aligned}\right.\] The combinations \((1)+(2)\), \((1)+(3)\), and \((2)+(3)\) then yield, respectively, \[\operatorname{Ric}_{11}=\operatorname{Ric}_{44}\quad\quad,\quad\operatorname{ Ric}_{11}=\operatorname{Ric}_{22}\quad\quad,\quad\operatorname{Ric}_{11}= \operatorname{Ric}_{33}.\] Setting \(\lambda:=\operatorname{Ric}_{11}\) now puts the Ricci tensor of \(g\), expressed with respect to the orthonormal frame \(\{T,e_{2},e_{3},e_{4}\}\), precisely in the Type A form of (24). (Note that because the traceless Ricci tensor does not vanish, one cannot use Schur's Lemma to prove that \(\lambda\) must be constant, as of course would be the case were \(g\) an Einstein metric.) 
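The eigenvalues of the Type A form in (24), namely \(\lambda,\lambda,\lambda\pm\sqrt{\psi_{1}^{2}+\psi_{2}^{2}+\psi_{3}^{2}}\) as used in the proof of Lemma 2, can be checked numerically with illustrative values (assuming numpy):

```python
import numpy as np

# Illustrative values only for lambda and psi_1, psi_2, psi_3.
lam = 2.0
psi = np.array([0.3, -0.5, 0.7])

# Type A Ricci matrix from (24): lambda on the diagonal, psi_i in the
# first row and column.
ric = lam * np.eye(4)
ric[0, 1:] = psi
ric[1:, 0] = psi

r = np.linalg.norm(psi)                       # sqrt(psi_1^2+psi_2^2+psi_3^2)
eig = np.sort(np.linalg.eigvalsh(ric))
expected = np.sort([lam - r, lam, lam, lam + r])
assert np.allclose(eig, expected)             # lambda, lambda, lambda +- r
```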
Let us pause to give the metrics of Proposition 1 a more suggestive name:

**Definition 3** (\(*_{\iota}\)-Einstein metric).: _An oriented Riemannian 4-manifold \((M,g)\) is \(*_{\iota}\)-Einstein if there exists a unit length vector field \(T\) on \(M\) such that the curvature operator of \(g\) commutes with the Hodge star operator \(*_{\iota}\) of the Lorentzian metric \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\)._
**Definition 4** (\(L\)-sectional curvature).: _Let \((M,g)\) be an oriented Riemannian 4-manifold with curvature operator \(\hat{R}_{g}\) and \(T\) a unit length vector field on \(M\). Consider the Lorentzian metric \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\) and its inner product \(\langle\,,\rangle_{\!g_{\iota}}\) on \(\Lambda^{2}\). Then the function \(\sec_{{}_{L}}\), defined on each \(G_{+}(p)\cup G_{-}(p)\) by_ \[\sec_{{}_{L}}(P):=\epsilon_{\iota}(P)\langle\hat{R}_{g}P,P\rangle_{\!g_{\iota}}, \tag{29}\] _is called the \(L\)-sectional curvature of \(g\)._
When \((M,g)\) is \(*_{\iota}\)-Einstein, the commutation \(*_{\iota}\circ\hat{R}_{g}=\hat{R}_{g}\circ*_{\iota}\) forces the \(L\)-sectional curvatures of \(*_{\iota}\)-perpendicular \(2\)-planes to agree. Indeed, for any nondegenerate \(2\)-plane \(P\), the plane \(P^{\perp_{\iota}}\) is spanned by \(*_{\iota}P\) and satisfies \(\epsilon_{\iota}(P^{\perp_{\iota}})=-\epsilon_{\iota}(P)\), so that \[\sec_{{}_{L}}(P^{\perp_{\iota}})=-\epsilon_{\iota}(P)\langle\hat{R}_{g}(*_{\iota}P),*_{\iota}P\rangle_{\!g_{\iota}} \tag{30}\] \[=-\epsilon_{\iota}(P)\langle*_{\iota}\hat{R}_{g}*_{\iota}P,P\rangle_{\!g_{\iota}}=\epsilon_{\iota}(P)\langle\hat{R}_{g}P,P\rangle_{\!g_{\iota}}=\sec_{{}_{L}}(P), \tag{31}\] where in (31) we used the self-adjoint property of \(*_{\iota}\) once again.
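This perpendicular-plane identity admits a numerical sanity check in an illustrative matrix model. Assumptions: the ordered basis \(\{e_{1}\wedge e_{2},e_{1}\wedge e_{3},e_{1}\wedge e_{4},e_{3}\wedge e_{4},e_{4}\wedge e_{2},e_{2}\wedge e_{3}\}\) of \(\Lambda^{2}\); the matrix \(J\) models \(*_{\iota}\); the diagonal matrix \(G\) models \(\langle\,,\rangle_{g_{\iota}}\), with three timelike and three spacelike basis planes; the blocks are random stand-ins, not curvature data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model of Lambda^2 at a point: J models *_iota (J^2 = -I); G models <,>_{g_iota}:
# the first three basis planes contain the timelike direction e1 (norm -1),
# the last three are spacelike (norm +1).
I3, Z3 = np.eye(3), np.zeros((3, 3))
J = np.block([[Z3, -I3], [I3, Z3]])
G = np.diag([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])

# A *_iota-Einstein-like operator: blocks A = D symmetric, B antisymmetric,
# so that it commutes with J.
A = rng.standard_normal((3, 3)); A = A + A.T
B = rng.standard_normal((3, 3)); B = B - B.T
R_hat = np.block([[A, B], [B.T, A]])
assert np.allclose(J @ R_hat, R_hat @ J)

def sec_L(P):
    """epsilon_iota(P) * <R_hat P, P>_{g_iota} for a nondegenerate 2-plane P."""
    eps = np.sign(P @ G @ P)
    return eps * (R_hat @ P) @ G @ P

# For each nondegenerate basis plane P, the perpendicular plane is spanned by
# J P, and the two L-sectional curvatures agree.
for k in range(6):
    P = np.eye(6)[k]
    assert np.isclose(sec_L(J @ P), sec_L(P))
```

The check uses only that \(J\) is \(G\)-self-adjoint, that \(J^{2}=-I\), and that the model operator commutes with \(J\), mirroring the three ingredients of the computation above.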
It's natural to ask whether the converse is true -- as it is in the fully Riemannian [1, 10] or fully Lorentzian [14] cases. In fact it is not true here, as we now demonstrate. Suppose that \(\sec_{{}_{L}}(P^{\perp_{\iota}})=\sec_{{}_{L}}(P)\) for all nondegenerate \(2\)-planes \(P\), and we would like to conclude from this that \(*_{\iota}\circ\hat{R}_{g}=\hat{R}_{g}\circ*_{\iota}\) (equivalently, that \(*_{\iota}\circ\hat{R}_{g}\circ*_{\iota}=-\hat{R}_{g}\)). Using the self-adjoint property of \(*_{\iota}\) once again, we would have \[\underbrace{-\epsilon_{\iota}(P)\langle*_{\iota}\hat{R}_{g}*_{\iota}P,P\rangle_{\!g_{\iota}}}_{\sec_{{}_{L}}(P^{\perp_{\iota}}),\text{ via }(31)}=\underbrace{\epsilon_{\iota}(P)\langle\hat{R}_{g}P,P\rangle_{\!g_{\iota}}}_{\sec_{{}_{L}}(P)},\] that is, \(\langle(*_{\iota}\hat{R}_{g}*_{\iota}+\hat{R}_{g})P,P\rangle_{\!g_{\iota}}=0\) for all nondegenerate \(2\)-planes \(P\). But the operator \(*_{\iota}\hat{R}_{g}*_{\iota}+\hat{R}_{g}\) is not self-adjoint with respect to \(\langle\,,\rangle_{\!g_{\iota}}\) in general, so the vanishing of this quadratic form does not force the operator itself to vanish, and the desired conclusion fails.

Observe next that if \((M,g)\) is \(*_{\iota}\)-Einstein, then \(\hat{R}_{g}\) commutes with \(*_{\iota}\) and is therefore a complex-linear map on \(\Lambda^{2}_{\mathbb{C}}\), the vector space \(\Lambda^{2}\) endowed with the complex structure \(i:=*_{\iota}\). This makes the following definition possible:

**Definition 5** (Petrov Type).: _A \(*_{\iota}\)-Einstein metric has Petrov Type I, II, or III at \(p\in M\) if the complex-linear map \(\hat{R}_{g}\colon\Lambda^{2}_{\mathbb{C}}\longrightarrow\Lambda^{2}_{\mathbb{C}}\) has 3, 2, or 1 linearly independent complex eigenvectors at \(p\), respectively._

As a complex-linear map on the three-dimensional complex vector space \(\Lambda^{2}_{\mathbb{C}}\), \(\hat{R}_{g}\) can be put into one of the three Jordan normal forms \[\begin{pmatrix}\lambda_{1}&0&0\\ 0&\lambda_{2}&0\\ 0&0&\lambda_{3}\end{pmatrix}\quad,\quad\begin{pmatrix}\lambda_{1}&1&0\\ 0&\lambda_{1}&0\\ 0&0&\lambda_{2}\end{pmatrix}\quad,\quad\begin{pmatrix}\lambda&1&0\\ 0&\lambda&1\\ 0&0&\lambda\end{pmatrix}.\] If \(\hat{R}_{g}\) is diagonalizable (Type I), it takes the first form. If it has only two linearly independent eigenvectors (Type II), then it is not diagonalizable and splits into two cases: 1) Two of its eigenvalues are equal, or 2) all three are equal. Since in Type II the total geometric multiplicity is 2, either case leads to two Jordan blocks and thus to the middle matrix above (if all three eigenvalues are equal, then \(\lambda_{1}=\lambda_{2}\)).
The final case, with only one linearly independent eigenvector (Type III), has geometric multiplicity 1, hence one Jordan block. Now we would like to relate a \(*_{\iota}\)-Einstein metric's Petrov Type to its \(L\)-sectional curvature \(\sec_{{}_{L}}\). To do that, we will need to characterize the critical points of \(\sec_{{}_{L}}\). In doing so, recall once again that the curvature operator \(\hat{R}_{{}_{g}}\) of \(g\) is not self-adjoint with respect to the Lorentzian inner product \(\langle\,,\rangle_{\!g_{\iota}}\) on \(\Lambda^{2}\). What this means, in practice, is the following: In an arbitrary \(g_{\iota}\)-orthonormal frame \(\{e_{1},e_{2},e_{3},e_{4}\}\) with timelike direction \(e_{1}\) (we don't assume that \(T=e_{1}\)), the components \[K_{ijkl}:=-\langle\hat{R}_{{}_{g}}(e_{i}\wedge e_{j}),e_{k}\wedge e_{l}\rangle_{\!g_{\iota}}, \tag{34}\] while they clearly satisfy \(K_{jikl}=-K_{ijkl}\) and \(K_{ijlk}=-K_{ijkl}\), are however not pairwise symmetric in general: \(K_{ijkl}\neq K_{klij}\). If \(e_{1}=T\), however, then the following can be said:

**Lemma 4**.: _With respect to any oriented \(g_{\iota}\)-orthonormal basis of the form \(\{T:=e_{1},e_{2},e_{3},e_{4}\}\), the components (34) satisfy_ \[K_{ijkl}=R_{ijkl}\,\epsilon_{\iota}(e_{k}\wedge e_{l}). \tag{35}\] _Hence in such a basis,_ \[K_{ijkl}=K_{klij}\iff\epsilon_{\iota}(e_{i}\wedge e_{j})=\epsilon_{\iota}(e_{k}\wedge e_{l}). \tag{36}\]

Proof.: The key is that such a basis is also \(g\)-orthonormal, hence by (8), \[K_{ijkl}=-\langle\underbrace{\hat{R}_{{}_{g}}(e_{i}\wedge e_{j})}_{-R_{ijkl}\,e_{k}\wedge e_{l}+\cdots},e_{k}\wedge e_{l}\rangle_{\!g_{\iota}}=R_{ijkl}\,\epsilon_{\iota}(e_{k}\wedge e_{l}).\] Since \(R_{ijkl}=R_{klij}\), it's clear that \(K_{ijkl}=K_{klij}\) if and only if \(\epsilon_{\iota}(e_{i}\wedge e_{j})=\epsilon_{\iota}(e_{k}\wedge e_{l})\).
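Lemma 4's symmetry criterion (36) can be seen concretely in a matrix model. The sketch assumes the \(\Lambda^{2}\) basis \(\{e_{1}\wedge e_{2},e_{1}\wedge e_{3},e_{1}\wedge e_{4},e_{3}\wedge e_{4},e_{4}\wedge e_{2},e_{2}\wedge e_{3}\}\) with \(e_{1}=T\); the diagonal matrix \(G\) models \(\langle\,,\rangle_{g_{\iota}}\), and a random symmetric matrix \(M\) models \(\hat{R}_{g}\) (self-adjoint for the Riemannian inner product, i.e. plain matrix symmetry, but not curvature data):

```python
import numpy as np

rng = np.random.default_rng(2)

# G models <,>_{g_iota} on Lambda^2: the first three basis planes contain the
# timelike direction e1 = T (epsilon = -1), the last three are spacelike (+1).
G = np.diag([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
eps = np.diag(G)

# M models the curvature operator: symmetric, hence self-adjoint with respect
# to the *Riemannian* inner product on Lambda^2, but not with respect to G.
M = rng.standard_normal((6, 6)); M = M + M.T

# K[a, b] = -<M e_a, e_b>_{g_iota}, the analogue of (34); since G is diagonal
# and M is symmetric, this is the matrix -M G.
K = -M @ G

# Generically K is NOT pairwise symmetric ...
assert not np.allclose(K, K.T)

# ... but K[a, b] = K[b, a] whenever the two basis planes have the same
# causal character epsilon -- the content of (36).
for a in range(6):
    for b in range(6):
        if eps[a] == eps[b]:
            assert np.isclose(K[a, b], K[b, a])
```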
Now we are in a position to characterize the critical points of \(\sec_{{}_{L}}\):

**Proposition 2**.: _Let \((M,g)\) be an oriented Riemannian 4-manifold and \(T\) a unit length vector field on \(M\). Let \(\sec_{{}_{L}}\) be the \(L\)-sectional curvature of \(g\) with respect to \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\). Then at any \(p\in M\), \(P\in G_{+}(p)\) is a critical point of \(\sec_{{}_{L}}\) if and only if with respect to any oriented orthonormal frame \(\{e_{1},e_{2},e_{3},e_{4}\}\) of \(T_{p}M\) with timelike direction \(e_{1}\) and with \(P=e_{3}\wedge e_{4}\),_ \[K_{34ij}=-K_{ij34}\quad,\quad(i,j)\neq(3,4)\text{ or }(1,2), \tag{37}\] _with \(K_{ijkl}\) given by (34). Similarly, for \(P\in G_{-}(p)\) with \(P=e_{1}\wedge e_{2}\),_ \[K_{12ij}=-K_{ij12}\quad,\quad(i,j)\neq(3,4)\text{ or }(1,2). \tag{38}\] _If \(e_{1}\) can be chosen to equal \(T\), then \(P=e_{3}\wedge e_{4}\) is a critical point of \(\sec_{{}_{L}}\) only if \(R_{4234}=R_{3423}=0\), and \(P=e_{1}\wedge e_{2}\) only if \(R_{1213}=R_{1214}=0\)._

Proof.: Let \(P=e_{3}\wedge e_{4}\in G_{+}(p)\), and for each pair \((i,j)\neq(3,4),(1,2)\) let \(\phi\) be a curve of spacelike \(2\)-planes through \(P\) obtained by rotating \(e_{3}\) or \(e_{4}\) toward the remaining directions. Differentiating \(\sec_{{}_{L}}\circ\,\phi\) at \(P\), the cases \(i=1,2\) yield \(K_{3413}=-K_{1334}\) and \(K_{3423}=-K_{2334}\), respectively.
Likewise for the cases \(i=3,4\), which yield \(K_{3414}=-K_{1434}\) and \(K_{3424}=-K_{2434}\), respectively. Upon suitably modifying \(\phi\), similar computations for \(P=e_{1}\wedge e_{2}\in G_{-}(p)\) yield the four cases in (38). Finally, if a critical \(2\)-plane \(P\in G_{+}(p)\) can be expressed as \(P=e_{3}\wedge e_{4}\) with respect to an oriented \(g_{\iota}\)-orthonormal basis \(\{T=e_{1},e_{2},e_{3},e_{4}\}\) -- i.e., if \(P\) is orthogonal to \(T\) -- then by Lemma 4 \[K_{4234}\overset{(36)}{=}K_{3442}\overset{(37)}{=}-K_{4234}\quad\Rightarrow\quad R_{4234}\overset{(35)}{=}0.\] Likewise, \(R_{3423}=0\). Or if a critical \(2\)-plane \(P\in G_{-}(p)\) can be expressed as \(P=e_{1}\wedge e_{2}\) with respect to \(\{T=e_{1},e_{2},e_{3},e_{4}\}\) -- i.e., if \(P\) contains \(T\) -- then \[K_{1213}\overset{(36)}{=}K_{1312}\overset{(38)}{=}-K_{1213}\quad\Rightarrow\quad R_{1213}\overset{(35)}{=}0.\] Likewise, \(R_{1214}=0\). This completes the proof.

One consequence of Proposition 2 is that eigenvectors of \(\hat{R}_{{}_{g}}\) adapted to \(T\) are automatically critical points of \(\sec_{{}_{L}}\):

**Theorem 3**.: _Let \((M,g)\) be a \(*_{\iota}\)-Einstein 4-manifold. Then any eigenvector \(P\in G_{+}(p)\) of the complex-linear map \(\hat{R}_{{}_{g}}\colon\Lambda^{2}_{\mathbb{C}}\longrightarrow\Lambda^{2}_{\mathbb{C}}\) that is orthogonal to \(T\), as well as any eigenvector \(P\in G_{-}(p)\) that contains \(T\), is a critical point of \(\sec_{{}_{L}}\)._

Proof.: Write \(T=e_{1}\) and \(P=e_{3}\wedge e_{4}\). If \(P\) has eigenvalue \(a+ib\in\mathbb{C}\), then \(\hat{R}_{{}_{g}}(e_{3}\wedge e_{4})=b(e_{1}\wedge e_{2})+a(e_{3}\wedge e_{4})\), so that \[K_{3412}=b\quad,\quad K_{3434}=-a\quad,\quad K_{3413}=K_{3414}=K_{3442}=K_{3423}=0. \tag{39}\] By Proposition 2, showing that \(P\) is a critical point of \(\sec_{{}_{L}}\) amounts to showing that \[K_{1334}=K_{1434}=K_{4234}=K_{2334}=0.\]
But these all follow directly from Lemma 4. We therefore conclude that any \(\hat{R}_{{}_{g}}\)-eigenvector \(P\in G_{+}(p)\) that is orthogonal to \(T\) is necessarily a critical point of \(\sec_{{}_{L}}\). The case when \(P\in G_{-}(p)\) contains \(T\) follows similarly. Note that, by (39), the converse of Theorem 3 is not true in general, unless \(R_{3413}=R_{3414}=0\) (indeed, (35) and (37) yield only \(R_{3413}=R_{1334}\) and \(R_{3414}=R_{1434}\)). Once again, the impediment is that \(\hat{R}_{{}_{g}}\) is not self-adjoint with respect to \(\langle\,,\rangle_{{}_{\!g_{\iota}}}\). One way to remedy this -- and thus to gain a more direct relationship between Petrov Type and the critical points of \(\sec_{{}_{L}}\) -- is to modify the Riemannian curvature operator \(\hat{R}_{{}_{g}}\colon\Lambda^{2}\longrightarrow\Lambda^{2}\) so as to _make_ it self-adjoint with respect to \(\langle\,,\rangle_{{}_{\!g_{\iota}}}\). This will result in a different notion of Petrov Type. We will present precisely such a modification in Section 5. But before doing so, let us first move beyond \(\ast_{{}_{\iota}}\)-Einstein metrics and extend Definition 5 to _any_ oriented Riemannian 4-manifold admitting a nowhere vanishing vector field (hence \(\chi(M)=0\) if \(M\) is compact).
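The extension rests on a piece of elementary linear algebra: conjugation by \(*_{\iota}\) splits any operator on \(\Lambda^{2}\) into a part commuting with \(*_{\iota}\) and a part anticommuting with it. A minimal numerical sketch (the matrix \(J\), satisfying \(J^{2}=-I\), is a stand-in for \(*_{\iota}\) in a fixed basis of \(\Lambda^{2}\); \(M\) is an arbitrary operator, not curvature data):

```python
import numpy as np

rng = np.random.default_rng(3)

# J models *_iota on Lambda^2 in the basis
# {e1^e2, e1^e3, e1^e4, e3^e4, e4^e2, e2^e3}; note J @ J = -I.
I3, Z3 = np.eye(3), np.zeros((3, 3))
J = np.block([[Z3, -I3], [I3, Z3]])

# Split an arbitrary operator M into "symmetric" and "anti-symmetric" parts
# with respect to conjugation by J.
M = rng.standard_normal((6, 6))
S = 0.5 * (M - J @ M @ J)   # commutes with J
A = 0.5 * (M + J @ M @ J)   # anticommutes with J

assert np.allclose(S + A, M)
assert np.allclose(S @ J, J @ S)
assert np.allclose(A @ J, -J @ A)
```

The identities hold for every \(M\), using only \(J^{2}=-I\); this is the mechanism behind the decomposition introduced next.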
This is easy to accomplish: Given any choice of Lorentzian Hodge star operator \(\ast_{{}_{\iota}}\), the curvature operator \(\hat{R}_{{}_{g}}\) decomposes into the sum \[\hat{R}_{{}_{g}}=\underbrace{\frac{1}{2}(\hat{R}_{{}_{g}}-\ast_{{}_{\iota}} \circ\hat{R}_{{}_{g}}\circ\ast_{{}_{\iota}})}_{\text{``$\hat{S}_{{}_{g}}$''}}+ \underbrace{\frac{1}{2}(\hat{R}_{{}_{g}}+\ast_{{}_{\iota}}\circ\hat{R}_{{}_{g}} \circ\ast_{{}_{\iota}})}_{\text{``$\hat{A}_{{}_{g}}$''}}, \tag{42}\] where the "symmetric" and "anti-symmetric" operators \(\hat{S}_{{}_{g}}\) and \(\hat{A}_{{}_{g}}\) satisfy \[\hat{S}_{{}_{g}}\circ\ast_{{}_{\iota}}=\ast_{{}_{\iota}}\circ\hat{S}_{{}_{g}} \quad,\quad\hat{A}_{{}_{g}}\circ\ast_{{}_{\iota}}=-\ast_{{}_{\iota}}\circ\hat {A}_{{}_{g}}.\] In particular, \(\hat{S}_{{}_{g}}\) is \(\ast_{{}_{\iota}}\)-Einstein, hence a complex-linear map on \(\Lambda^{2}_{{}_{\mathbb{C}}}\). Obviously, \(\hat{R}_{{}_{g}}\) is \(\ast_{{}_{\iota}}\)-Einstein \(\iff\hat{A}_{{}_{g}}=0\iff\hat{R}_{{}_{g}}=\hat{S}_{{}_{g}}\). (The symmetric operator \(\hat{S}_{{}_{g}}\) is the analogue of the Weyl curvature 4-tensor \(\widehat{W}\). Indeed, in the purely Riemannian or purely Lorentzian settings, it is given by \(\hat{S}_{{}_{g}}=\widehat{W}+\frac{\operatorname{scal}_{{}_{g}}}{12}I\), where \(\operatorname{scal}_{{}_{g}}\) is the scalar curvature of \(g\).) It is now an easy matter to extend our notion of Petrov Type to any oriented Riemannian 4-manifold \((M,g)\) that admits a nowhere vanishing vector field: **Definition 6** (Petrov Type, general version).: _Let \((M,g)\) be an oriented Riemannian 4-manifold with curvature operator \(\hat{R}_{{}_{g}}\), let \(T\) be a unit length vector field on \(M\), and let \(\ast_{{}_{\iota}}\) be the Hodge star operator of the Lorentzian metric \(g_{{}_{\iota}}:=g-2T^{\flat}\otimes T^{\flat}\). 
Then at any \(p\in M\), \(g\) has Petrov Type I, II, or III if the operator \(\hat{S}_{{}_{g}}:=\frac{1}{2}(\hat{R}_{{}_{g}}-\ast_{{}_{\iota}}\circ\hat{R}_{{}_{g}}\circ\ast_{{}_{\iota}})\), viewed as a complex-linear map \(\hat{S}_{{}_{g}}\colon\Lambda^{2}_{{}_{\mathbb{C}}}\longrightarrow\Lambda^{2}_{{}_{\mathbb{C}}}\), has 3, 2, or 1 linearly independent eigenvectors at \(p\), respectively._

**Theorem 4**.: _Defining the "\(L\)-sectional curvature of \(\hat{S}_{{}_{g}}\)," in analogy with (29), to be_ \[\sec_{{}_{\hat{S}}}(P):=\epsilon_{{}_{\iota}}(P)\langle\hat{S}_{{}_{g}}P,P\rangle_{\!g_{\iota}},\] _it follows that the corresponding results in Theorems 1, 2, and 3 remain true with respect to \(\hat{S}_{{}_{g}}\) and \(\sec_{{}_{\hat{S}}}\)._

Proof.: The analogues of Theorems 1 and 2 follow exactly as before. For the analogue of Theorem 3, first observe that the components \(S_{ijkl}\) satisfy \[S_{ijkl}:=-\langle\hat{S}_{g}(e_{i}\wedge e_{j}),e_{k}\wedge e_{l}\rangle_{\!g_{\iota}}=-\frac{1}{2}\langle(\hat{R}_{g}-*_{\iota}\circ\hat{R}_{g}\circ*_{\iota})(e_{i}\wedge e_{j}),e_{k}\wedge e_{l}\rangle_{\!g_{\iota}}\] \[\overset{(34)}{=}\frac{1}{2}\Big{(}K_{ijkl}+\underbrace{\langle(\hat{R}_{g}\circ*_{\iota})(e_{i}\wedge e_{j}),*_{\iota}(e_{k}\wedge e_{l})\rangle_{\!g_{\iota}}}_{-\epsilon_{ij}\epsilon_{kl}K_{*_{\iota}(ij)\,*_{\iota}(kl)}}\Big{)},\] where \(\epsilon_{ab}:=\epsilon_{\iota}(e_{a}\wedge e_{b})\). Thus \[S_{ijkl}=\frac{1}{2}(K_{ijkl}-\epsilon_{ij}\epsilon_{kl}K_{*_{\iota}(ij)*_{\iota}(kl)}). \tag{43}\] Now suppose that \(P\in G_{+}(p)\) is an eigenvector of \(\hat{S}_{g}\) that is \(g_{\iota}\)-orthogonal to \(T\), and once again write \(T:=e_{1}\) and \(P=e_{3}\wedge e_{4}\) as before.
If \(P\) has eigenvalue \(a+ib\in\mathbb{C}\), then \[\hat{S}_{g}(e_{3}\wedge e_{4})=(a+ib)(e_{3}\wedge e_{4})=b(e_{1}\wedge e_{2})+a(e_{3}\wedge e_{4}),\] so that \[S_{3412}=b\quad,\quad S_{3434}=-a\quad,\quad S_{3413}=S_{3414}=S_{3442}=S_{3423}=0.\] Since Proposition 2 holds with "\(S_{ijkl}\)" in place of "\(K_{ijkl}\)," to show that \(P\) is a critical point of \(\sec_{{}_{L}}\) means showing that \[S_{1334}=S_{1434}=S_{4234}=S_{2334}=0.\] Consider \(S_{1334}\); via (43) and Lemma 4, \[S_{3413}=\frac{1}{2}(\underbrace{K_{3413}}_{-R_{3413}}+\underbrace{K_{1242}}_{R_{1242}})=0\quad\Rightarrow\quad R_{3413}=R_{1242}.\] Thus \[S_{1334}=\frac{1}{2}(\underbrace{K_{1334}}_{R_{1334}}+\underbrace{K_{4212}}_{-R_{4212}})=\frac{1}{2}(R_{1334}-R_{4212})=0.\] Similarly, \(S_{1434}=0\). For \(S_{4234}\), \[S_{3442}=\frac{1}{2}(\underbrace{K_{3442}}_{R_{3442}}-\underbrace{K_{1213}}_{-R_{1213}})=0\quad\Rightarrow\quad R_{3442}=-R_{1213}.\] Thus \[S_{4234}=\frac{1}{2}(\underbrace{K_{4234}}_{R_{4234}}-\underbrace{K_{1312}}_{-R_{1312}})=\frac{1}{2}(R_{4234}+R_{1312})=0.\] Similarly, \(S_{2334}=0\). We conclude that any \(\hat{S}_{g}\)-eigenvector \(P\in G_{+}(p)\) that is orthogonal to \(T\) is necessarily a critical point of \(\sec_{{}_{L}}\). The case when \(P\in G_{-}(p)\) contains \(T\) once again follows similarly. Finally, do note that \(\hat{S}_{g}\) is not self-adjoint with respect to \(\langle\,,\rangle_{\!g_{\iota}}\) _or_ \(\langle\,,\rangle_{\!g}\).

## 5. The Petrov Type of \(g\) via a modified curvature operator

We now consider the second approach mentioned in the Introduction, namely, that of modifying \(\hat{R}_{g}\) so as to make it self-adjoint with respect to the inner product space \((\Lambda^{2},\langle\,,\rangle_{\!\!g_{\iota}})\). This is easily accomplished:

**Definition 7**.: _Let \((M,g)\) be an oriented Riemannian 4-manifold, \(T\) a unit length vector field on \(M\), and \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\).
Define an endomorphism \(\hat{R}_{g}^{\iota}\) on \(\Lambda^{2}\) by setting \(\hat{R}_{g}^{\iota}(v\wedge w)\) to be the unique 2-vector satisfying_ \[\langle\hat{R}_{g}^{\iota}(v\wedge w),x\wedge y\rangle_{\!\!g_{\iota}}:=-R(v,w,x,y)\quad\text{ for all }x,y\in T_{p}M, \tag{44}\] _where \(R\) is the Riemann curvature 4-tensor of \(g\)._ Note first of all that \(\hat{R}_{g}^{\iota}\) is well defined: Because \(R\) is the Riemann curvature 4-tensor of \(g\), its symmetries allow it to be regarded as a symmetric bilinear form \(\Lambda^{2}\times\Lambda^{2}\longrightarrow\mathbb{R}\); one may then realize it as a linear map \(\hat{R}_{g}^{\iota}\colon\Lambda^{2}\longrightarrow\Lambda^{2}\) by taking any nondegenerate inner product \(\langle\,,\rangle\) on \(\Lambda^{2}\) and defining \(\hat{R}_{g}^{\iota}\) by \[\langle\hat{R}_{g}^{\iota}(v\wedge w),x\wedge y\rangle:=-R(v,w,x,y)\quad\text{ for all }v,w,x,y\in T_{p}M.\] In (44), we are simply doing this via \(\langle\,,\rangle_{\!\!g_{\iota}}\), not \(\langle\,,\rangle_{\!\!g}\). (See, e.g., [10, p. 300].) While \(\hat{R}_{g}^{\iota}\) is not quite equal to \(g\)'s curvature operator \(\hat{R}_{g}\) (compare (20) with (56)), it does satisfy two key properties that \(\hat{R}_{g}\) does not: **Lemma 5**.: \(\hat{R}_{g}^{\iota}\) _is self-adjoint, and satisfies the algebraic Bianchi identity, with respect to \(\langle\,,\rangle_{\!\!g_{\iota}}\)._ Proof.: This is immediate from (44). We now replace \(\hat{R}_{g}\) with our new self-adjoint operator \(\hat{R}_{g}^{\iota}\). 
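In matrix terms, Definition 7 and Lemma 5 amount to raising an index of the symmetric bilinear form \(R\) with the Gram matrix of \(\langle\,,\rangle_{g_{\iota}}\) instead of that of \(\langle\,,\rangle_{g}\), which automatically produces a \(\langle\,,\rangle_{g_{\iota}}\)-self-adjoint operator. A hedged numerical sketch (\(G\) models \(\langle\,,\rangle_{g_{\iota}}\) on \(\Lambda^{2}\); the symmetric matrix \(Q\) is a random stand-in for the curvature 4-tensor viewed as a bilinear form, not actual curvature data):

```python
import numpy as np

rng = np.random.default_rng(4)

# G models <,>_{g_iota} on Lambda^2 (three timelike, three spacelike basis
# planes); note that G is its own inverse.
G = np.diag([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])

# Q models the curvature 4-tensor regarded as a symmetric bilinear form.
Q = rng.standard_normal((6, 6)); Q = Q + Q.T

# The analogue of (44): define R_iota by <R_iota x, y>_G = -Q(x, y).
R_iota = -np.linalg.inv(G) @ Q

# R_iota is self-adjoint with respect to G: <R_iota x, y>_G = <x, R_iota y>_G,
# equivalently the matrix G @ R_iota is symmetric.
assert np.allclose(G @ R_iota, (G @ R_iota).T)

x, y = rng.standard_normal(6), rng.standard_normal(6)
assert np.isclose((R_iota @ x) @ G @ y, x @ G @ (R_iota @ y))
```

Self-adjointness (and, likewise, the algebraic Bianchi identity) is inherited directly from the symmetries of the bilinear form, exactly as in Lemma 5.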
For starters, here are the analogues of Definitions 3, 4, and 5:

**Definition 8** (\(\hat{R}_{g}^{\iota}\)-Einstein metric).: _An oriented Riemannian 4-manifold \((M,g)\) is \(\hat{R}_{g}^{\iota}\)-Einstein if there exists a unit length vector field \(T\) on \(M\) such that the operator \(\hat{R}_{g}^{\iota}\) defined by (44) commutes with the Hodge star operator \(*_{\iota}\) of the Lorentzian metric \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\)._

**Definition 9** (\(\hat{R}_{g}^{\iota}\)-sectional curvature).: _Let \((M,g)\) be an oriented Riemannian 4-manifold and \(T\) a unit length vector field on \(M\). Consider the Lorentzian metric \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\) and its inner product \(\langle\,,\rangle_{\!\!g_{\iota}}\) on \(\Lambda^{2}\). Then the function \(\sec_{{}_{\hat{R}^{\iota}}}\), defined on each \(G_{+}(p)\cup G_{-}(p)\subseteq T_{p}M\) by_ \[\sec_{{}_{\hat{R}^{\iota}}}(P):=\epsilon_{\iota}(P)\langle\hat{R}_{g}^{\iota}P,P\rangle_{\!\!g_{\iota}}, \tag{45}\] _is called the \(\hat{R}_{g}^{\iota}\)-sectional curvature of \(g\)._

**Definition 10** (\(\hat{R}_{g}^{\iota}\)-Petrov Type).: _An \(\hat{R}_{g}^{\iota}\)-Einstein metric has \(\hat{R}_{g}^{\iota}\)-Petrov Type I, II, or III at \(p\in M\) if the complex-linear map \(\hat{R}_{g}^{\iota}\colon\Lambda^{2}_{\mathbb{C}}\longrightarrow\Lambda^{2}_{\mathbb{C}}\) has 3, 2, or 1 linearly independent complex eigenvectors at \(p\), respectively._

To begin with, note how the analogue of Theorem 1 is stronger for \(\hat{R}_{g}^{\iota}\)-Einstein metrics: in \(\hat{R}_{g}^{\iota}\)-Petrov Type I, the eigenvectors of \(\hat{R}_{g}^{\iota}\) can be chosen to be three mutually orthogonal spacelike \(2\)-planes, while in Types II and III lightlike eigenvectors necessarily occur. The key tool is the complex-bilinear extension of \(\langle\,,\rangle_{\!\!g_{\iota}}\) to \(\Lambda^{2}_{\mathbb{C}}\), \[\mathbf{g}_{\iota}(\xi,\eta):=\langle\xi,\eta\rangle_{\!\!g_{\iota}}-i\langle\xi,*_{\iota}\eta\rangle_{\!\!g_{\iota}}, \tag{46}\] which is symmetric and nondegenerate, and with respect to which \(\hat{R}_{g}^{\iota}\) is self-adjoint.
}_{{{}_{}_{{}_{}_{{}_{}_{{}_{}_{{}}_{{}_{}_{{}_{}_{{}}_{{}_{{}_{}}_{{}_{{} }_{{}_{{}_{{}_{}_{{}_{}_{{}}_{{}_{}_{{}_{}_{{}}_{{}_{}_{{}}_{{}_{{}_{}}_{{} }_{{{}_{{}_{}_{{}_{}_{{}}_{{}_{{}_{}}_{{}_{{}_{{}}_{{}_{}}_{{}_{{}}_{{}}_{{} }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\,\\\\\\ Finally, note that \[\mathbf{g}_{\mbox{\tiny$\iota$}}(\xi,\xi)=\pm 1\iff\langle\xi,\xi\rangle_{\!\!g_{ \mbox{\tiny$\iota$}}}=\pm 1\mbox{ and }\langle\xi,*_{\mbox{\tiny$\iota$}}\xi\rangle_{\!\!g_{\mbox{\tiny$\iota$}}}=0 \iff\xi\in G_{\pm}(p). \tag{47}\] (Recall that \(\langle\xi,*_{\mbox{\tiny$\iota$}}\xi\rangle_{\!\!g_{\mbox{\tiny$\iota$}}}=0\) is equivalent to \(\xi\wedge\xi=0\), and the latter guarantees decomposability.) With \(\mathbf{g}_{\mbox{\tiny$\iota$}}\) in hand, we now commence with the proof, starting with the case of \(\hat{R}_{\mbox{\tiny$\iota$}}^{\mbox{\tiny$\iota$}}\)-Petrov Type I at \(p\). Thus, let \(\{\xi_{1},\xi_{2},\xi_{3}\}\subseteq\Lambda_{\mbox{\tiny$\circ$}}^{2}\) be a basis of eigenvectors for \(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\), with corresponding eigenvalues \(\lambda_{1},\lambda_{2},\lambda_{3}\in\mathbb{C}\) not necessarily distinct. If \(\lambda_{i}\neq\lambda_{j}\), then because \(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\) is self-adjoint, we must have \(\mathbf{g}_{\mbox{\tiny$\iota$}}(\xi_{i},\xi_{j})=0\), by the standard argument: \[\lambda_{i}\,\mathbf{g}_{\mbox{\tiny$\iota$}}(\xi_{i},\xi_{j})=\mathbf{g}_{ \mbox{\tiny$\iota$}}(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\xi_{i}, \xi_{j})=\mathbf{g}_{\mbox{\tiny$\iota$}}(\xi_{i},\hat{R}_{\mbox{\tiny$g$}}^{ \mbox{\tiny$\iota$}}\xi_{j})=\lambda_{j}\,\mathbf{g}_{\mbox{\tiny$\iota$}}(\xi _{i},\xi_{j}). 
\tag{48}\] It follows that if \(\lambda_{1},\lambda_{2},\lambda_{3}\) are distinct and some \(\xi_{j}\) satisfies \(\mathbf{g}_{\iota}(\xi_{j},\xi_{j})=0\), then that \(\xi_{j}\) will be \(\mathbf{g}_{\iota}\)-orthogonal to _all_ \(2\)-vectors in \(\Lambda^{2}_{\rm c}\), which is impossible as \(\mathbf{g}_{\iota}\) is nondegenerate. Thus if \(\lambda_{1},\lambda_{2},\lambda_{3}\) are distinct, then each \(\mathbf{g}_{\iota}(\xi_{j},\xi_{j})\neq 0\), in which case, since \[\begin{split}\mathbf{g}_{\iota}((a+ib)\xi,(a+ib)\xi)&=\ (a^{2}-b^{2}+i2ab)\,\mathbf{g}_{\iota}(\xi,\xi)\\ &=\ \Big{(}(a^{2}-b^{2})\langle\xi,\xi\rangle_{\!g_{\iota}}+2ab\langle\xi,*_{\iota}\xi\rangle_{\!g_{\iota}}\Big{)}+\\ &\qquad i\Big{(}\!-\!(a^{2}-b^{2})\langle\xi,*_{\iota}\xi\rangle_{\!g_{\iota}}+2ab\langle\xi,\xi\rangle_{\!g_{\iota}}\Big{)},\end{split} \tag{49}\] we may, for each eigenvector \(\xi_{1},\xi_{2},\xi_{3}\), choose \(\alpha_{1},\alpha_{2},\alpha_{3}\in\mathbb{C}\) such that each \(\mathbf{g}_{\iota}(\alpha_{j}\xi_{j},\alpha_{j}\xi_{j})=+1\). Such complex scalar multiplication will neither change their status as eigenvectors of \(\hat{R}_{g}^{\iota}\) nor change their eigenvalues, since \[\hat{R}_{g}^{\iota}((a_{j}+ib_{j})\xi_{j})=\hat{R}_{g}^{\iota}((a_{j}I+b_{j}*_{\iota})\xi_{j})=(a_{j}+ib_{j})\hat{R}_{g}^{\iota}\xi_{j}=\lambda_{j}((a_{j}+ib_{j})\xi_{j}).\] By (47), we are thus ensured of a basis of orthogonal spacelike \(2\)-planes that are eigenvectors of \(\hat{R}_{g}^{\iota}\). Now suppose that two eigenvalues are equal, say \(\lambda_{1}=\lambda_{2}\neq\lambda_{3}\).
Then \(\mathbf{g}_{\mbox{\tiny$\iota$}}(\xi_{3},\xi_{3})\neq 0\), by the same reasoning as before. In fact the two-dimensional \(\lambda_{1}\)-eigenspace must be \(\mathbf{g}_{\mbox{\tiny$\iota$}}\)-nondegenerate, too, hence must contain a \(\mathbf{g}_{\mbox{\tiny$\iota$}}\)-orthogonal basis. Thus if \(\lambda_{1}=\lambda_{2}\neq\lambda_{3}\), then we have a \(\mathbf{g}_{\mbox{\tiny$\iota$}}\)-orthogonal basis \(\{\xi_{1},\xi_{2},\xi_{3}\}\) of eigenvectors of \(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\), no element of which is \(\mathbf{g}_{\mbox{\tiny$\iota$}}\)-lightlike; hence, as before, each can be scaled to yield a spacelike eigenvector. Finally, consider the case of \(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\)-Petrov Type I but with \(\lambda_{1}=\lambda_{2}=\lambda_{3}\). In this case, every \(2\)-vector is an eigenvector of \(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\), hence we may simply choose any three orthogonal spacelike \(2\)-planes in \(G_{+}(p)\) as a basis. This concludes the case of \(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\)-Petrov Type I. If \(g\) has \(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}\)-Petrov Type II at \(p\), then whether or not \(\lambda_{1}\) and \(\lambda_{2}\) are distinct, there is always a \(2\)-vector \(\eta\in\Lambda_{\mbox{\tiny$\circ$}}^{2}\) satisfying \((\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}-\lambda I)\eta=\xi_{1}\) for one of the eigenvectors, say \(\xi_{1}\) (the existence of \(\eta\) follows from standard Jordan-normal theory). Then \[\mathbf{g}_{\mbox{\tiny$\iota$}}(\xi_{1},\xi_{1})=\mathbf{g}_{\mbox{\tiny$ \iota$}}((\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$\iota$}}-\lambda I)\eta,\xi_{1} )=\mathbf{g}_{\mbox{\tiny$\iota$}}(\eta,(\hat{R}_{\mbox{\tiny$g$}}^{\mbox{\tiny$ \iota$}}-\lambda I)\xi_{1})=0, \tag{50}\] so that \(\xi_{1}\) must be \(\mathbf{g}_{\iota}\)-lightlike. 
But this is the case if and only if \[\langle\xi_{1},\xi_{1}\rangle_{\!g_{\iota}}=0\text{ and }\langle\xi_{1},*_{\iota}\xi_{1}\rangle_{\!g_{\iota}}=0,\] which in turn is the case if and only if \(\xi_{1}\) is a lightlike 2-plane; i.e., \(\xi_{1}=v\wedge w\) with \(\langle\xi_{1},\xi_{1}\rangle_{\!g_{\iota}}=0\). Consider now the other eigenvector, \(\xi_{2}\). The first thing to observe is that \(\xi_{2}\) must be \(\mathbf{g}_{\iota}\)-orthogonal to \(\xi_{1}\). Indeed, if \(\lambda_{1}\neq\lambda_{2}\), then this follows directly from (48); if \(\lambda_{1}=\lambda_{2}\), then \[\mathbf{g}_{\iota}(\xi_{1},\xi_{2})=\mathbf{g}_{\iota}((\hat{R}_{g}^{\iota}-\lambda_{1}I)\eta,\xi_{2})=\mathbf{g}_{\iota}(\eta,(\hat{R}_{g}^{\iota}-\lambda_{1}I)\xi_{2})=0.\] Now suppose that \(\mathbf{g}_{\iota}(\xi_{2},\xi_{2})=0\) also, so that both \(\xi_{1}\) and \(\xi_{2}\) are orthogonal lightlike 2-planes. Because \(\mathbf{g}_{\iota}\) is nondegenerate and \(\{\xi_{1},\xi_{2},\eta\}\) is a basis, each \(\mathbf{g}_{\iota}(\xi_{j},\eta)\neq 0\). And yet, because any 2-vector \(\beta:=\alpha_{1}\xi_{1}+\alpha_{2}\xi_{2}\) will be \(\mathbf{g}_{\iota}\)-lightlike and orthogonal to \(\xi_{1},\xi_{2}\) (by (49), \(\mathbf{g}_{\iota}\)-lightlike 2-vectors are invariant under scalar multiplication), we may choose \(\alpha_{1},\alpha_{2}\in\mathbb{C}\) such that \(\mathbf{g}_{\iota}(\beta,\eta)=0\), contradicting the nondegeneracy of \(\mathbf{g}_{\iota}\). Therefore we must have \(\mathbf{g}_{\iota}(\xi_{2},\xi_{2})\neq 0\), in which case we may scale it to satisfy \(\mathbf{g}_{\iota}(\xi_{2},\xi_{2})=+1\). This concludes the case of \(\hat{R}_{g}^{\iota}\)-Petrov Type II. Finally, if \(g\) has \(\hat{R}_{g}^{\iota}\)-Petrov Type III at \(p\), let \(\xi\in\Lambda^{2}_{\rm c}\) denote \(\hat{R}_{g}^{\iota}\)'s one linearly independent eigenvector, with eigenvalue \(\lambda_{1}\).
Once again, there is a 2-vector \(\eta\) satisfying \((\hat{R}_{g}^{\iota}-\lambda_{1}I)\eta=\xi\), so that, via (50), \(\xi\) is necessarily a lightlike 2-plane once again. Yet another advantage of using \(\hat{R}_{g}^{\iota}\) in place of \(\hat{R}_{g}\) is that Proposition 2 can also be significantly strengthened, this time yielding a _direct_ link between the critical points of \(\sec_{\hat{R}^{\iota}_{g}}\) and the eigenvectors of \(\hat{R}_{g}^{\iota}\): **Proposition 3**.: _Let \((M,g)\) be an oriented Riemannian 4-manifold and \(T\) a unit length vector field on \(M\). Let \(\sec_{\hat{R}^{\iota}_{g}}\) be the \(\hat{R}_{g}^{\iota}\)-sectional curvature of \(g\) defined with respect to \(\,g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\). Then at any \(p\in M\), a 2-plane \(P\in G_{\pm}(p)\) is a critical point of \(\,\sec_{\hat{R}^{\iota}_{g}}\) if and only if_ \[\hat{R}_{g}^{\iota}P=aP+bP^{\perp_{\iota}} \tag{51}\] _for some \(a,b\in\mathbb{R}\)._ Proof.: This proof follows as in [10, Lemma, p. 5], except that we are working with \(\hat{R}_{g}^{\iota}\), not \(\hat{R}_{\iota}\). For \(P=e_{3}\wedge e_{4}\in G_{+}(p)\), consider the variation \(P(x):=(e_{3}+x_{3}e_{1}+x_{4}e_{2})\wedge(e_{4}+x_{1}e_{1}+x_{2}e_{2})\), suitably normalized. Criticality of \(P\) gives, once again, \[\frac{\partial}{\partial x_{1}}\bigg{|}_{\mathbf{0}}\sec_{\hat{R}^{\iota}_{g}}\!\big{(}P(x)\big{)}=2\,\epsilon_{\iota}(P)\big{\langle}\hat{R}_{g}^{\iota}(e_{3}\wedge e_{4}),e_{3}\wedge e_{1}\big{\rangle}_{\!g_{\iota}}=0,\]
hence \(R_{3431}=0.\) Similarly, the cases \(i=2,3,4\) yield, respectively, \(R_{3432}=R_{3414}=R_{3424}=0,\) so that \[\hat{R}^{\iota}_{g}P = \hat{R}^{\iota}_{g}(e_{3}\wedge e_{4})\stackrel{{(44)}}{{=}}-R_{3412}\,e_{1}\wedge e_{2}-R_{3434}\,e_{3}\wedge e_{4}=-R_{3434}P-R_{3412}P^{\perp_{\iota}},\] where we recall that \(P^{\perp_{\iota}}=\ast_{\iota}P=e_{1}\wedge e_{2}\). This is precisely in the form (51) with \(a=-R_{3434}\) and \(b=-R_{3412}\). A similar analysis for \(P=e_{1}\wedge e_{2}\in G_{-}(p)\) yields \(R_{1213}=R_{1214}=R_{1242}=R_{1223}=0,\) and thus \[\hat{R}^{\iota}_{g}P = \hat{R}^{\iota}_{g}(e_{1}\wedge e_{2})\stackrel{{(44)}}{{=}}-R_{1212}\,e_{1}\wedge e_{2}-R_{1234}\,e_{3}\wedge e_{4}=-R_{1212}P+R_{1234}P^{\perp_{\iota}},\] where, this time, \(P^{\perp_{\iota}}=\ast_{\iota}P=-e_{3}\wedge e_{4}\). This, too, is in the form (51), with \(a=-R_{1212}\) and \(b=R_{1234}\). Thanks to this, the \(\hat{R}^{\iota}_{g}\)-Petrov Type of an \(\hat{R}^{\iota}_{g}\)-Einstein metric \(g\) is not just bounded, but this time completely determined, by the critical point structure of \(\sec_{\hat{R}^{\iota}_{g}}\); i.e., \(\hat{R}^{\iota}_{g}\)-Einstein Riemannian 4-manifolds possess "normal forms" -- at least up to their Petrov Types: **Theorem 7**.: _Let \((M,g)\) be an \(\hat{R}^{\iota}_{g}\)-Einstein manifold. At any \(p\in M\), the \(\hat{R}^{\iota}_{g}\)-Petrov Type of \(g\) is determined by the number \(n\) of spacelike critical points of \(\sec_{\hat{R}^{\iota}_{g}}\). In particular, \(g\) has_ 1. \(\hat{R}^{\iota}_{g}\)_-Petrov Type I_\(\iff\sec_{\hat{R}^{\iota}_{g}}\) _has_ \(n=3\) _or_ \(n=\infty\) _spacelike critical points._ 2. \(\hat{R}^{\iota}_{g}\)_-Petrov Type II_\(\iff\sec_{\hat{R}^{\iota}_{g}}\) _has_ \(n=1\) _spacelike critical point._ 3.
\(\hat{R}^{\iota}_{g}\)_-Petrov Type III_\(\iff\sec_{\hat{R}^{\iota}_{g}}\) _has_ \(n=0\) _spacelike critical points._ Proof.: This follows directly from Theorem 6, and is the same method of proof as in [10, Theorem, p. 5], although the latter is with respect to the Lorentzian curvature operator \(\hat{R}_{\iota}\), not \(\hat{R}^{\iota}_{g}\). Thanks to Proposition 3, any \(P\in G_{\pm}(p)\) will be a critical point of \(\sec_{\hat{R}^{\iota}_{g}}\) if and only if it is an eigenvector of the complex-linear map \(\hat{R}^{\iota}_{g}\): \[\hat{R}^{\iota}_{g}P=aP+bP^{\perp_{\iota}}=aP+b\ast_{\iota}P=(a+ib)P.\] Because of this equivalence, the spacelike critical points of \(\sec_{\hat{R}^{\iota}_{g}}\) are in one-to-one correspondence with the timelike critical points of \(\sec_{\hat{R}^{\iota}_{g}}\), since \(P\in G_{+}(p)\iff iP=\ast_{\iota}P\in G_{-}(p)\), and since \(P\) is an eigenvector of \(\hat{R}^{\iota}_{g}\iff iP\) is an eigenvector of \(\hat{R}^{\iota}_{g}\). Thus if \(\sec_{\hat{R}^{\iota}_{g}}\) has, say, no spacelike critical points (hence no timelike critical points, either), then \(\hat{R}^{\iota}_{g}\) can only have \(\mathbf{g}_{\iota}\)-lightlike eigenvectors. By Theorem 6, only \(\hat{R}^{\iota}_{g}\)-Petrov Type III satisfies this criterion. Likewise, if \(\sec_{\hat{R}^{\iota}_{g}}\) has precisely one spacelike critical point (hence precisely one timelike critical point), then \(\hat{R}^{\iota}_{g}\) can only have one eigenvector that is not \(\mathbf{g}_{\iota}\)-lightlike. By Theorem 6, only \(\hat{R}_{g}^{\iota}\)-Petrov Type II satisfies this criterion. Finally, if \(\sec_{\hat{R}^{\iota}_{g}}\) has precisely three spacelike critical points, then \(\hat{R}_{g}^{\iota}\) must have that many spacelike eigenvectors. By Theorem 6, only \(\hat{R}_{g}^{\iota}\)-Petrov Type I satisfies this criterion.
And if, in \(\hat{R}_{g}^{\iota}\)-Petrov Type I, an eigenvalue is repeated, then this yields infinitely many spacelike eigenvectors, hence infinitely many spacelike (and timelike) critical points of \(\sec_{\hat{R}^{\iota}_{g}}\). Let us also record the analogues of Definition 6 and Theorem 4. If \(g\) is not \(\hat{R}_{g}^{\iota}\)-Einstein, then as in (42) we may decompose \(\hat{R}_{g}^{\iota}\) as \[\hat{R}_{g}^{\iota}=\underbrace{\frac{1}{2}(\hat{R}_{g}^{\iota}-\ast_{\iota}\circ\hat{R}_{g}^{\iota}\circ\ast_{\iota})}_{\text{``$\hat{S}_{g}^{\iota}$''}}+\underbrace{\frac{1}{2}(\hat{R}_{g}^{\iota}+\ast_{\iota}\circ\hat{R}_{g}^{\iota}\circ\ast_{\iota})}_{\text{``$\hat{A}_{g}^{\iota}$''}}, \tag{52}\] where the "symmetric" and "anti-symmetric" operators \(\hat{S}_{g}^{\iota}\) and \(\hat{A}_{g}^{\iota}\) satisfy \[\hat{S}_{g}^{\iota}\circ\ast_{\iota}=\ast_{\iota}\circ\hat{S}_{g}^{\iota}\quad,\quad\hat{A}_{g}^{\iota}\circ\ast_{\iota}=-\ast_{\iota}\circ\hat{A}_{g}^{\iota}.\] Then \(\hat{S}_{g}^{\iota}\) commutes with \(\ast_{\iota}\), hence is a complex-linear map on \(\Lambda^{2}_{\rm c}\), and \(g\) is \(\hat{R}_{g}^{\iota}\)-Einstein \(\iff\hat{A}_{g}^{\iota}=0\iff\hat{R}_{g}^{\iota}=\hat{S}_{g}^{\iota}.\) Knowing that \(\hat{S}_{g}^{\iota}\) is self-adjoint because both \(\hat{R}_{g}^{\iota}\) and \(\ast_{\iota}\) are so, we can therefore repeat our analysis with \(\hat{S}_{g}^{\iota}\) in place of \(\hat{R}_{g}^{\iota}\); in particular, Theorem 6, Proposition 3, and Theorem 7 all have \(\hat{S}_{g}^{\iota}\)-analogues. Let us collect this information here: **Theorem 8**.: _Every oriented Riemannian 4-manifold admitting a unit length vector field \(T\) can be classified by its \(\hat{S}_{g}^{\iota}\)-Petrov Type (52), defined via the Hodge star operator \(\ast_{\iota}\) of the Lorentzian metric \(g_{\iota}:=g-2T^{\flat}\otimes T^{\flat}\).
The \(\hat{S}_{g}^{\iota}\)-Petrov Type of \(g\) is determined pointwise by the spacelike critical points of its \(\hat{S}_{g}^{\iota}\)-sectional curvature function \(\sec_{\hat{S}^{\iota}_{g}}(P):=\epsilon_{\iota}(P)\langle\hat{S}_{g}^{\iota}P,P\rangle_{\!g_{\iota}}\)._ Finally, let us verify that the Ricci tensor of an \(\hat{R}_{g}^{\iota}\)-Einstein metric is that of Type B in (24): **Proposition 4**.: _The Ricci tensor \(\operatorname{Ric}_{g}\) of an \(\hat{R}_{g}^{\iota}\)-Einstein metric \(g\), relative to any \(g\)-orthonormal frame with \(T:=e_{1}\), takes the form_ \[\operatorname{Ric}_{g}=\begin{bmatrix}\lambda&0&0&0\\ 0&\lambda+\lambda_{2}&\psi_{2}&\psi_{3}\\ 0&\psi_{2}&\lambda+\lambda_{3}&\psi_{4}\\ 0&\psi_{3}&\psi_{4}&\lambda+\lambda_{4}\end{bmatrix}, \tag{53}\] _with \(\lambda:=\operatorname{Ric}_{g}(T,T)\),_ \[\lambda_{2}:=-2\underset{i=3,4}{\sum}R_{TiiT}\ \,\ \ \lambda_{3}:=-2\underset{i=2,4}{\sum}R_{TiiT}\ \,\ \ \lambda_{4}:=-2\underset{i=2,3}{\sum}R_{TiiT}, \tag{54}\] _and_ \[\psi_{2}:=2R_{T23T}\ \,\ \ \psi_{3}:=2R_{T24T}\ \,\ \ \psi_{4}:=2R_{T34T}. \tag{55}\] Proof.: Relative to an oriented \(g_{\iota}\)-orthonormal basis \(\{T:=e_{1},e_{2},e_{3},e_{4}\}\), it is easily verified that if \(\hat{R}_{g}^{\iota}\) commutes with \(\ast_{\iota}\), then its matrix takes the "Einstein" block form (22), which imposes three linear relations -- call them (1), (2), and (3) -- among the curvature components \(R_{TiiT}\), \(R_{TijT}\), and \(R_{ijkl}\). The combinations \((1)+(2)\), \((1)+(3)\), and \((2)+(3)\) are now easily seen to yield precisely the equations in (54), with \(\lambda:=\operatorname{Ric}_{11}=\operatorname{Ric}(T,T)\). Thus with respect to any oriented orthonormal frame with \(e_{1}:=T\), the Ricci tensor of an \(\hat{R}^{\iota}_{g}\)-Einstein metric \(g\) takes the form (53). ## 6. Generalizing beyond Lorentzian metrics We conclude by returning to an observation we made in the Introduction, namely, that our inquiry can be pursued fairly generally for any pair of Riemannian metrics \(g,\tilde{g}\) on any oriented 4-manifold \(M\), regardless of \(M\)'s Euler characteristic. This seems particularly fruitful when one of the metrics is "canonical" in some sense. For example, let \(M=\mathbb{S}^{4}\) and take \(\tilde{g}\) to be the standard round metric \(\mathring{g}\) of constant positive sectional curvature; let \(\mathring{*}\) and \(\langle\,,\rangle_{\mathring{g}}\) denote its Hodge star operator and corresponding (positive-definite) inner product on \(\Lambda^{2}:=\Lambda^{2}(T\mathbb{S}^{4})\), respectively. For any other Riemannian metric \(g\) on \(\mathbb{S}^{4}\), consider the \(\mathring{*}\)-symmetric part of its curvature operator \(\hat{R}_{g}\), \(\hat{S}_{g}:=\frac{1}{2}(\hat{R}_{g}-\mathring{*}\circ\hat{R}_{g}\circ\mathring{*})\), and view it as an operator on the inner product space \((\Lambda^{2},\langle\,,\rangle_{\mathring{g}})\) (as opposed to \((\Lambda^{2},\langle\,,\rangle_{g})\)).
Now we ask: _What will be the geometry of those Riemannian metrics \(g\) on \(\mathbb{S}^{4}\) for which \(\hat{S}_{g}=\hat{R}_{g}\)?_ Or, in order to classify metrics as in [10], we may instead elect to modify \(\hat{R}_{g}\) so as to make it self-adjoint with respect to the inner product \(\langle\,,\rangle_{\mathring{g}}\), by defining an operator \(\hat{R}^{\mathring{g}}_{g}\colon\Lambda^{2}\longrightarrow\Lambda^{2}\) at each \(p\in\mathbb{S}^{4}\) by \[\langle\hat{R}^{\mathring{g}}_{g}(v\wedge w),x\wedge y\rangle_{\mathring{g}}:=-R(v,w,x,y)\quad\text{ for all }v,w,x,y\in T_{p}\mathbb{S}^{4},\] where \(R\) is the Riemann curvature 4-tensor of \(g\). Since \(\hat{R}^{\mathring{g}}_{g}\) is self-adjoint and satisfies the algebraic Bianchi identity with respect to \(\langle\,,\rangle_{\mathring{g}}\), _Theorem 2.1 in [10] would apply here to yield normal forms_: Relative to an oriented \(\mathring{g}\)-orthonormal basis, \(\mathring{*}\) would take the form (18), and the commuting condition \(\hat{R}^{\mathring{g}}_{g}\circ\mathring{*}=\mathring{*}\circ\hat{R}^{\mathring{g}}_{g}\) would then put \(\hat{R}^{\mathring{g}}_{g}\) precisely in the "Einstein" block form (22) (though \(g\) itself would not be Einstein). One may then proceed with exactly the same proof as in Theorem 2.1 of [10], to conclude that, if \[\hat{R}^{\mathring{g}}_{g}\circ\mathring{*}=\mathring{*}\circ\hat{R}^{\mathring{g}}_{g}, \tag{57}\] then at each \(p\in\mathbb{S}^{4}\) there exists a \(\mathring{g}\)-orthonormal basis such that each pair of vectors in this basis spans a critical plane of \(\hat{R}^{\mathring{g}}_{g}\) (or \(\hat{S}^{\mathring{g}}_{g}\), if (57) doesn't hold).
The matrix of \(\hat{R}^{\mathring{g}}_{g}\) relative to this basis would then take the form \[\hat{R}^{\mathring{g}}_{g}=-\begin{bmatrix}A&B\\ B&A\end{bmatrix},\text{ where }A=\begin{bmatrix}a_{1}&&\\ &a_{2}&\\ &&a_{3}\end{bmatrix}\text{ and }B=\begin{bmatrix}b_{1}&&\\ &b_{2}&\\ &&b_{3}\end{bmatrix},\] with each pair \(a_{i},b_{i}\) arising from a critical plane of \(\hat{R}^{\mathring{g}}_{g}\); i.e., a critical point of the quadratic form \(P\mapsto\langle\hat{R}^{\mathring{g}}_{g}P,P\rangle_{\mathring{g}}\), where \(P\) is decomposable and satisfies \(\langle P,P\rangle_{\mathring{g}}=1\). _Finally, observe that nothing would change if we replaced \((\mathbb{S}^{4},\mathring{g})\) with any oriented Riemannian 4-manifold \((M,\tilde{g})\)._
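As a simple sanity check for this generalization -- with the curvature tensor of \(\mathring{g}\) normalized (an assumption on conventions) so that \(R(v,w,x,y)=\langle v,x\rangle_{\mathring{g}}\langle w,y\rangle_{\mathring{g}}-\langle v,y\rangle_{\mathring{g}}\langle w,x\rangle_{\mathring{g}}\) -- one may take \(g=\mathring{g}\) itself. Then \[\langle\hat{R}^{\mathring{g}}_{\mathring{g}}(v\wedge w),x\wedge y\rangle_{\mathring{g}}=-R(v,w,x,y)=-\langle v\wedge w,x\wedge y\rangle_{\mathring{g}},\] so that \(\hat{R}^{\mathring{g}}_{\mathring{g}}=-\operatorname{Id}\) on \(\Lambda^{2}\). This operator trivially commutes with \(\mathring{*}\), so (57) holds, every \(2\)-plane is a critical plane, and the normal form above is realized with \(A=\operatorname{Id}\) and \(B=0\).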
arXiv:2309.12409v1 (Milan Kroemer, Tim Laux; submitted 2023-09-21): http://arxiv.org/abs/2309.12409v1
Quantitative convergence of the nonlocal Allen-Cahn equation to volume-preserving mean curvature flow ###### Abstract. We prove a quantitative convergence result of the nonlocal Allen-Cahn equation to volume-preserving mean curvature flow. The proof uses gradient flow calibrations and the relative entropy method, which has been used in the recent literature to prove weak-strong uniqueness results for mean curvature flow and convergence of the Allen-Cahn equation. A crucial difference in this work is a new notion of gradient flow calibrations. We add a tangential component to the velocity field in order to prove the Gronwall estimate for the relative energy. This allows us to derive the optimal convergence rate without having to show the closeness of the Lagrange-multipliers. **Keywords:** Mean curvature flow, volume-preservation, constrained gradient flows, reaction-diffusion equations, relative entropy method, calibrated geometry, gradient-flow calibrations. **Mathematical Subject Classification**: 53E10; 35K57 ## 1. Introduction We consider the nonlocal Allen-Cahn equation \[\partial_{t}u_{\varepsilon}=\Delta u_{\varepsilon}-\frac{1}{\varepsilon^{2}}W ^{\prime}(u_{\varepsilon})+\lambda_{\varepsilon}\sqrt{2W(u_{\varepsilon})} \tag{1}\] which was first introduced by Golovaty [7]. Here \(\lambda_{\varepsilon}=\lambda_{\varepsilon}(t)\) is a Lagrange multiplier which is given explicitly by \[\lambda_{\varepsilon}(t)\coloneqq-\frac{\int_{\mathbb{R}^{d}}(\Delta u_{ \varepsilon}-\frac{1}{\varepsilon^{2}}W^{\prime}(u_{\varepsilon}))\sqrt{2W(u _{\varepsilon})}\,\mathrm{d}x}{\int_{\mathbb{R}^{d}}2W(u_{\varepsilon})\, \mathrm{d}x}. 
\tag{2}\] This is a natural choice since then the mass of \(\psi_{\varepsilon}\coloneqq\phi\circ u_{\varepsilon}\), where \(\phi(u)=\int_{0}^{u}\sqrt{2W(z)}\,\mathrm{d}z\), is preserved: \[\frac{\mathrm{d}}{\mathrm{d}t}\int(\phi\circ u_{\varepsilon})(x,t)\,\mathrm{d}x=\int_{\mathbb{R}^{d}}\sqrt{2W(u_{\varepsilon}(x,t))}\partial_{t}u_{\varepsilon}(x,t)\,\mathrm{d}x=0. \tag{3}\] This change of variables \(\phi:u_{\varepsilon}\mapsto\psi_{\varepsilon}\) is crucial in studying the Allen-Cahn equation and was discovered by Modica-Mortola [17] and independently by Bogomol'nyi [2]. In the present paper, we derive an optimal quantitative convergence result in the sharp interface limit, see Theorem 1 below. Nonlocal versions of the Allen-Cahn equation were first introduced by Rubinstein and Sternberg [19] as a basic model for coarsening processes which conserve the phase volume. The original model by Rubinstein and Sternberg is \[\partial_{t}u_{\varepsilon}=\Delta u_{\varepsilon}-\frac{1}{\varepsilon^{2}}W^{\prime}(u_{\varepsilon})+\frac{1}{\varepsilon}\lambda_{\varepsilon},\] where \(\lambda_{\varepsilon}=\lambda_{\varepsilon}(t)\) is the Lagrange multiplier associated to the mass constraint \(\int_{\mathbb{R}^{d}}u_{\varepsilon}(x,t)\,\mathrm{d}x=\int_{\mathbb{R}^{d}}u_{\varepsilon}(x,0)\,\mathrm{d}x\) and is explicitly given by \(\lambda_{\varepsilon}(t)=\int_{\mathbb{R}^{d}}\frac{1}{\varepsilon}W^{\prime}(u_{\varepsilon}(x,t))\,\mathrm{d}x\).
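Note that the conservation property (3) for the nonlocal model is an exact consequence of the choice (2): substituting (1) into (3) gives \[\int_{\mathbb{R}^{d}}\sqrt{2W(u_{\varepsilon})}\,\partial_{t}u_{\varepsilon}\,\mathrm{d}x=\int_{\mathbb{R}^{d}}\Big{(}\Delta u_{\varepsilon}-\frac{1}{\varepsilon^{2}}W^{\prime}(u_{\varepsilon})\Big{)}\sqrt{2W(u_{\varepsilon})}\,\mathrm{d}x+\lambda_{\varepsilon}\int_{\mathbb{R}^{d}}2W(u_{\varepsilon})\,\mathrm{d}x=0,\] where the last equality is precisely the definition (2) of \(\lambda_{\varepsilon}\).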
Equation (1) has several advantages over the classical Rubinstein-Sternberg model as the effect of the Lagrange multiplier is amplified close to the diffuse interface. The nonlocal Allen-Cahn equation (1) is the \(L^{2}\)-gradient flow of the Cahn-Hilliard energy \[E_{\varepsilon}[u]=\int_{\mathbb{R}^{d}}\left(\frac{\varepsilon}{2}|\nabla u|^{2}+\frac{1}{\varepsilon}W(u)\right)\,\mathrm{d}x \tag{4}\] restricted to the mass-constrained "submanifold" \(\left\{u\colon\int_{\mathbb{R}^{d}}\phi\circ u\,\mathrm{d}x=m\right\}\subset L^{2}(\mathbb{R}^{d})\) and sped up by the factor \(\frac{1}{\varepsilon}\). This gradient-flow structure can be read off from the optimal energy dissipation relation which holds for any classical solution of (1): \[\frac{\mathrm{d}}{\mathrm{d}t}E_{\varepsilon}[u_{\varepsilon}(\cdot,t)]=-\int_{\mathbb{R}^{d}}\varepsilon(\partial_{t}u_{\varepsilon}(x,t))^{2}\,\mathrm{d}x. \tag{5}\] The investigation of the sharp-interface limit \(\varepsilon\to 0\) of nonlocal versions of the Allen-Cahn equation (1)-(2) started with the matched asymptotic expansion by Golovaty [7].
His formal argument suggests that the limit evolves by the nonlocal evolution equation \[V=-H+\lambda\quad\text{on }\Sigma(t), \tag{6}\] where \(V\) and \(H\) denote the normal velocity and the mean curvature of the evolving surface \(\Sigma(t)=\partial\Omega(t)\), respectively, and \(\lambda=\lambda(t)\) is the Lagrange multiplier corresponding to the volume constraint \(|\Omega(t)|=|\Omega(0)|\). This equation, the volume-preserving mean curvature flow, also has a gradient-flow structure, as can be seen from the energy dissipation relation \[\frac{\mathrm{d}}{\mathrm{d}t}E[\Sigma(t)]=\int_{\Sigma(t)}V(x,t)H(x,t)\,\mathrm{d}\mathcal{H}^{d-1}(x)=-\int_{\Sigma(t)}V^{2}\,\mathrm{d}\mathcal{H}^{d-1}(x), \tag{7}\] which holds for sufficiently regular solutions of (6). Again the evolution is restricted to a "submanifold" \(\left\{\Sigma=\partial\Omega\subset\mathbb{R}^{d}\colon|\Omega|=m\right\}\) which incorporates the volume constraint. Takasao showed under very mild assumptions that solutions to (1)-(2) converge to a weak solution of volume-preserving mean curvature flow in the sense of Brakke [3]; first for ambient dimensions \(d=2,3\) [20] and, most recently, for a slight perturbation of (1)-(2) in all dimensions [21]. Another approach is inspired by the work of Luckhaus and Sturzenhecker [16]: the second author and Simon [14] showed that, under a natural energy-convergence assumption as in [16], the limit is a distributional solution to volume-preserving mean curvature flow, which holds in all spatial dimensions and also in the case of multiple phases, any selection of which may carry a volume constraint. For our proof, we use the relative energy method. In the context of the convergence of phase field models this method was introduced by Fischer, Simon and the second author in [5], but the relative energy is very closely related to the diffuse tilt-excess introduced by Simon and the second author in [14].
It can also be used to incorporate boundary contact, as was shown by Hensel and Moser [9], and Hensel and the second author [8]. As the method does not rely on the maximum principle, it can also be applied for vectorial problems. Liu and the second author [13] combined the relative energy method with weak convergence methods to derive the scaling limit of transitions between the isotropic and the nematic phase in liquid crystals. Fischer and Marveggio [6] showed that the method can also be used for the vectorial Allen-Cahn equation, at least in ambient dimensions \(d=2,3\) and for a prototypical potential with three wells. The nonlocal Allen-Cahn equation is a physically motivated model, which is why its sharp interface limit is of high interest. But it can also be viewed as an approximation scheme to construct (numerically or theoretically) solutions to volume preserving mean curvature flow. Other methods to construct solutions include PDE methods which can be used for short time [4]; versions of the minimizing movements scheme by Almgren, Taylor and Wang [1], as was first done by Mugnai, Seis and Spadaro [18] and later by Julin and Niinikoski [10]; and the thresholding scheme, which is also numerically efficient, see the work of Swartz and the second author [15]. ### Notation The Landau symbol \(O\) will be used frequently. Precisely, by \(a=O(b)\) we mean that there exists a constant \(C\) depending on \(d\), \(T\), and \(\Sigma=(\Sigma(t))_{t\in[0,T]}\), such that \(|a|\leq C|b|\). The signed distance function to \(\Sigma(t)\) will be denoted by \[\mathbf{s}(x,t)\coloneqq\operatorname{dist}(x,\Omega(t))-\operatorname{dist}(x,\mathbb{R}^{d}\setminus\Omega(t)), \tag{8}\] where \(\Omega(t)\) is the region enclosed by \(\Sigma(t)\). The gradient and divergence on \(\mathbb{R}^{d}\) will be denoted by \(\nabla\) and \(\operatorname{div}\), respectively. 
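To close this overview, let us record why the nonlocal term in (1) does not spoil the dissipation relation (5): by the chain rule and (1), \[\frac{\mathrm{d}}{\mathrm{d}t}E_{\varepsilon}[u_{\varepsilon}]=\int_{\mathbb{R}^{d}}\Big{(}-\varepsilon\Delta u_{\varepsilon}+\frac{1}{\varepsilon}W^{\prime}(u_{\varepsilon})\Big{)}\partial_{t}u_{\varepsilon}\,\mathrm{d}x=-\int_{\mathbb{R}^{d}}\varepsilon(\partial_{t}u_{\varepsilon})^{2}\,\mathrm{d}x+\varepsilon\lambda_{\varepsilon}\int_{\mathbb{R}^{d}}\sqrt{2W(u_{\varepsilon})}\,\partial_{t}u_{\varepsilon}\,\mathrm{d}x,\] and the last integral vanishes by the mass conservation (3), which yields (5).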
In the neighborhood of a surface \(\Sigma\) the tangential gradient and divergence will be denoted by \(\nabla_{\Sigma}\) and \(\operatorname{div}_{\Sigma}\), and are explicitly given by \[\nabla_{\Sigma}=(\operatorname{Id}-\nu\otimes\nu)\nabla\quad\text{and}\quad \operatorname{div}_{\Sigma}=(\operatorname{Id}-\nu\otimes\nu):\nabla.\] These operators can also be defined intrinsically on \(\Sigma\) so that we can apply them to functions and vector fields only defined on the surface. ## 2. Main results The main result of this work states that solutions to the nonlocal Allen-Cahn equation with well-prepared initial conditions converge to solutions of volume-preserving mean curvature flow before the onset of singularities. In addition, the theorem provides the optimal convergence rate \(O(\varepsilon)\). For simplicity we assume that the two wells of \(W\) are \(0\) and \(1\) and that the induced surface tension is normalized to \(\sigma\coloneqq\phi(1)=\int_{0}^{1}\sqrt{2W(z)}\,\mathrm{d}z=1\). This is for example the case if \(W(z)=18z^{2}(z-1)^{2}\). **Theorem 1**.: _Let \(\Sigma=(\Sigma(t)=\partial\Omega(t))_{t\in[0,T]}\) be a smooth solution to volume-preserving mean curvature flow according to Definition 1 below and let \(u_{\varepsilon}\) be a solution of the nonlocal Allen-Cahn equation (1) with well-prepared initial conditions according to Definition 3 below. 
Then there exists a constant \(C=C(d,\Sigma,T)<\infty\) such that_ \[\sup_{t\in[0,T]}\int_{\mathbb{R}^{d}}|\psi_{\varepsilon}(x,t)-\chi_{\Omega(t )}(x)|\,\mathrm{d}x\leq C\varepsilon.\] We note that well-prepared initial data can easily be constructed by gluing the optimal profile around \(\Sigma(0)\): **Lemma 1**.: _If \(\Sigma(0)\) is \(C^{3,\alpha}\) for some \(\alpha\in(0,1)\), then there exist constants \((a_{\varepsilon})_{\varepsilon>0}\) with \(a_{\varepsilon}=O(\varepsilon)\) as \(\varepsilon\downarrow 0\) such that_ \[u_{\varepsilon}(x,0)\coloneqq U\left(\frac{-\mathbf{s}(x,\Sigma(0))-a_{ \varepsilon}}{\varepsilon}\right)\quad\text{is well-prepared in the sense of Definition 3,} \tag{9}\] _where \(U\) is the unique solution to \(U^{\prime\prime}=W^{\prime}(U)\) with \(U(-\infty)=0,\,U(+\infty)=1\) and \(U(0)=\frac{1}{2}\)._ **Definition 1**.: We call a family of surfaces \(\Sigma=(\Sigma(t))_{t\in[0,T]}\) a smooth solution to volume-preserving mean curvature flow if there exists \(\alpha\in(0,1)\) such that \(\Sigma(t)\) is \(C^{3,\alpha}\) for all \(t\) and \(\Sigma(t)\) evolves by (6), i.e., \(V=-H+\lambda\), and the normal velocity \(V(t)\) is of class \(C^{1,\alpha}\) in space. Before we give a precise definition of well-preparedness, we need to introduce some definitions. The key tool in our proof is a suitable gradient flow calibration. **Definition 2**.: Let \(\Sigma=(\Sigma(t))_{t\in[0,T]}\) be a one-parameter family of closed surfaces \(\Sigma(t)=\partial\Omega(t)\subset\mathbb{R}^{d}\). Let \(\xi,B\colon\mathbb{R}^{d}\times[0,T]\to\mathbb{R}^{d}\) be two vector fields, let \(\vartheta\colon\mathbb{R}^{d}\times[0,T]\to\mathbb{R}\) and let \(\lambda\colon[0,T]\to\mathbb{R}\). We call the tuple \((\xi,B,\vartheta,\lambda)\) a _gradient-flow calibration for volume-preserving mean curvature flow_ if the following statements hold true. 1. _Regularity_. 
The vector field \(\xi\) and the function \(\vartheta\) satisfy (10) \[\xi\in C^{0,1}(\mathbb{R}^{d}\times[0,T];\mathbb{R}^{d})\quad\text{and}\quad \vartheta\in C^{0,1}(\mathbb{R}^{d}\times[0,T]).\] Furthermore, for each \(t\in[0,T]\) it holds (11) \[B(\cdot,t)\in C^{0,1}(\mathbb{R}^{d};\mathbb{R}^{d}).\] 2. _Normal extension and shortness_. The vector field \(\xi\) extends the exterior unit normal vector field (12) \[\xi(\cdot,t)=\nu(\cdot,t)\quad\text{on }\Sigma(t)\] and it is short away from \(\Sigma\): there exists a constant \(c>0\) such that (13) \[|\xi(x,t)|\leq(1-c\operatorname{dist}^{2}(x,\Sigma(t)))_{+},\] where \((\cdot)_{+}\) denotes the positive part. 3. _Divergence constraint._ There exists a bounded function \(c:[0,T]\to\mathbb{R}\) such that the vector fields \(B(\cdot,t)\) satisfy, for each \(t\in[0,T]\), (14) \[\nabla\cdot B(\cdot,t)-c(t)=O\big{(}\operatorname{dist}(\cdot,\Sigma(t))\big{)},\] and (15) \[\xi\otimes\xi:\nabla B(\cdot,t)=O(\operatorname{dist}(\cdot,\Sigma(t))).\] 4. _Approximate transport equations._ The weight \(\vartheta\) is transported to first order (16) \[\left(\partial_{t}\vartheta+(B\cdot\nabla)\vartheta\right)(\cdot,t)=O\big{(}\operatorname{dist}(\cdot,\Sigma(t))\big{)},\] and the length of \(\xi\) to second order (17) \[\left(\partial_{t}|\xi|^{2}+(B\cdot\nabla)|\xi|^{2}\right)(\cdot,t)=O\big{(}\operatorname{dist}^{2}(\cdot,\Sigma(t))\big{)}.\] Furthermore (18) \[\left(\partial_{t}\xi+(B\cdot\nabla)\xi+(\nabla B)^{\mathsf{T}}\xi\right)(\cdot,t)=O\big{(}\operatorname{dist}(\cdot,\Sigma(t))\big{)}.\] 5. _Geometric evolution equation._ (19) \[B(\cdot,t)\cdot\xi(\cdot,t)+\nabla\cdot\xi(\cdot,t)-\lambda(t)=O\big{(}\operatorname{dist}(\cdot,\Sigma(t))\big{)}.\] 6. _Coercivity of the transported weight_.
It holds \[\vartheta(\cdot,t) >0\quad\text{on }\mathbb{R}^{d}\setminus\Omega(t),\] \[\vartheta(\cdot,t) <0\quad\text{in }\Omega(t),\] \[\sup_{(x,t)\in\mathbb{R}^{d}\times[0,T]}|\vartheta(x,t)| <\infty,\] and there exist constants \(0<c,C<\infty\) such that, on \(\operatorname{supp}\xi\), (20) \[c\operatorname{dist}(\cdot,\Sigma(t))\leq|\vartheta(\cdot,t)|\leq C\operatorname{dist}(\cdot,\Sigma(t)).\] In case such a gradient-flow calibration exists for \(\Sigma\), we call \(\Sigma\) a _calibrated flow_. The main difficulty in this work, compared to previous works using relative energy methods, lies in the divergence constraints (14) and (15) on \(B\), which require a particular construction. These divergence constraints are natural in the following sense. In view of [12], it is useful to choose \(B\) such that its divergence is controlled, since \(\nabla\cdot B=0\) is the localized version of the preservation of the total volume. There, it was chosen such that \(\nabla\cdot B=O(\operatorname{dist}(\cdot,\Sigma))\). Here, we need to relax this constraint to (14) as we additionally want to fix the \(\nu\otimes\nu\) component of the Jacobian \(\nabla B\). Then \(\nabla\cdot B=(I-\nu\otimes\nu)\colon\nabla B=\operatorname{div}_{\Sigma}B\). And since the rate of change of the total surface area is dictated by the PDE, we cannot set \(c(t)=0\). Our ansatz is to add a tangential part to the velocity field, say \(X\). Then \(B=V\nu+X\) on \(\Sigma\), and since \(\operatorname{div}_{\Sigma}(V\nu)=VH\), the divergence constraint \(\nabla\cdot B=c\) on \(\Sigma\) becomes \[\operatorname{div}_{\Sigma}X=c-VH.\] Hence, since \(\int_{\Sigma}\operatorname{div}_{\Sigma}X\,\operatorname{d}\!\mathcal{H}^{d-1}=0\) for any tangential field \(X\) on the closed surface \(\Sigma\), we see that necessarily \[c(t)=\frac{\int_{\Sigma}VH\operatorname{d}\!\mathcal{H}^{d-1}}{\mathcal{H}^{d-1}(\Sigma)}. \tag{21}\] This PDE is underdetermined, so we make the ansatz that \(X\) is a gradient field, i.e., \(X=\nabla_{\Sigma}\varphi\) for some potential \(\varphi\).
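As a quick sanity check of this construction (our remark, not used in the sequel), consider a sphere: volume-preserving mean curvature flow leaves it stationary, and no tangential correction is needed.

```latex
% Sphere example: all geometric quantities are constant on \Sigma(t)=\partial B_r(0),
% so the volume constraint forces the flow to be stationary, and by (21) the
% right-hand side of the divergence constraint vanishes identically:
\Sigma(t)=\partial B_{r}(0):\qquad
H\equiv\frac{d-1}{r},\qquad
\lambda=\fint_{\Sigma}H\,\mathrm{d}\mathcal{H}^{d-1}=\frac{d-1}{r},\qquad
V=-H+\lambda\equiv 0,\qquad
c\equiv 0,\qquad X=0.
```

In general, however, \(VH\) is not constant on \(\Sigma(t)\) and a nontrivial tangential field \(X\) is required.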
Then \(\varphi\) solves the Poisson equation \[-\Delta_{\Sigma}\varphi=VH-c\quad\text{on }\Sigma, \tag{22}\] where \(\Delta_{\Sigma}=\operatorname{div}_{\Sigma}\nabla_{\Sigma}\) is the Laplace-Beltrami operator on \(\Sigma\). Now Theorem 1 rests on the following two propositions. The first one guarantees the existence of a calibration, the second shows that, given a calibration, the Allen-Cahn equation converges. **Proposition 1**.: _If \(\Sigma\) is a smooth solution to volume-preserving mean curvature flow in the sense of Definition 1, then \(\Sigma\) is a calibrated flow._ **Proposition 2**.: _Let \(u_{\varepsilon}\) be a solution to the nonlocal Allen-Cahn equation (1) and let \((\Sigma(t))_{t\in[0,T]}\) be a calibrated flow according to Definition 2. Suppose further that \(\int_{\mathbb{R}^{d}}\psi_{\varepsilon}(x,0)\operatorname{d}\!x=|\Omega(0)|\). Then there exists a constant \(C=C(d,T,\Sigma)\) such that, for all \(t\in(0,T)\), it holds_ \[\frac{\operatorname{d}\!}{\operatorname{d}\!t}\big{(}\mathcal{E}_{\varepsilon}(t)+\mathcal{F}_{\varepsilon}(t)\big{)}\leq C\big{(}\mathcal{E}_{\varepsilon}(t)+\mathcal{F}_{\varepsilon}(t)\big{)}, \tag{23}\] _where \(\mathcal{E}_{\varepsilon}\) and \(\mathcal{F}_{\varepsilon}\) are defined below in (24) and (25), respectively._ We work with the relative energy \[\mathcal{E}_{\varepsilon}(t)\coloneqq\mathcal{E}_{\varepsilon}[u_{\varepsilon},\Sigma](t)\coloneqq E_{\varepsilon}[u_{\varepsilon}(\cdot,t)]+\int_{\mathbb{R}^{d}}\xi(x,t)\cdot\nabla\psi_{\varepsilon}(x,t)\operatorname{d}\!x\] \[= \int_{\mathbb{R}^{d}}\left(\frac{\varepsilon}{2}|\nabla u_{\varepsilon}(x,t)|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon}(x,t))-|\nabla\psi_{\varepsilon}(x,t)|\right)\,\operatorname{d}\!x\] \[+\int_{\mathbb{R}^{d}}\left(1-\xi(x,t)\cdot\nu_{\varepsilon}(x,t)\right)|\nabla\psi_{\varepsilon}(x,t)|\operatorname{d}\!x, \tag{24}\] where \(\psi_{\varepsilon}(x,t)\coloneqq\int_{0}^{u_{\varepsilon}(x,t)}\sqrt{2W(z)}\operatorname{d}\!z\) and
\(\nu_{\varepsilon}(x,t)\coloneqq-\frac{\nabla\psi_{\varepsilon}(x,t)}{|\nabla\psi_{\varepsilon}(x,t)|}\) if \(\nabla\psi_{\varepsilon}(x,t)\neq 0\) and \(\nu_{\varepsilon}(x,t)\coloneqq e\) for some arbitrary \(e\in S^{d-1}\) if \(\nabla\psi_{\varepsilon}(x,t)=0\). It is already clear that the relative energy \(\mathcal{E}_{\varepsilon}\) controls both the discrepancy between the two terms in the energy and the tilt-excess. Furthermore, we define the volume error functional \[\mathcal{F}_{\varepsilon}(t)\coloneqq\mathcal{F}_{\varepsilon}[u_{\varepsilon},\Sigma](t) \coloneqq\int_{\mathbb{R}^{d}}|\psi_{\varepsilon}(x,t)-\chi_{\Omega(t)}(x)||\vartheta(x,t)|\,\mathrm{d}x\] \[=\int_{\mathbb{R}^{d}}(\psi_{\varepsilon}(x,t)-\chi_{\Omega(t)}(x))\vartheta(x,t)\,\mathrm{d}x. \tag{25}\] **Definition 3**.: We call initial conditions \(u_{\varepsilon}(\cdot,0)\)_well-prepared_ if they satisfy the following assumptions: 1. _Mass constraint._ \(\int_{\mathbb{R}^{d}}\psi_{\varepsilon}(x,0)\,\mathrm{d}x=|\Omega(0)|\)_._ 2. _Optimal convergence rate._ \(\mathcal{E}_{\varepsilon}(0)+\mathcal{F}_{\varepsilon}(0)=O(\varepsilon^{2})\)_._ The proofs of Propositions 1 and 2 are deferred to the next sections. Now, based on the propositions, we are able to prove Theorem 1 similarly to [5]. Proof of Theorem 1.: By Gronwall's lemma, (23) implies \[\mathcal{E}_{\varepsilon}(t)+\mathcal{F}_{\varepsilon}(t)\leq C\big{(}\mathcal{E}_{\varepsilon}(0)+\mathcal{F}_{\varepsilon}(0)\big{)}\quad\text{ for all }t\in[0,T].
\tag{26}\] Now, for \(\delta>0\) and \(f\in L^{\infty}(0,\delta)\), we split the square \([0,\delta]^{2}\) into two triangles and apply Fubini's theorem to obtain \[\left(\int_{0}^{\delta}|f(r)|\,\mathrm{d}r\right)^{2}\leq 2\|f\|_{\infty}\int_{0}^{\delta}|f(r)|r\,\mathrm{d}r.\] Let \(\mathcal{U}_{r}(t)\coloneqq\{x:\operatorname{dist}(x,\Sigma(t))<r\}\) denote the tubular neighborhood of \(\Sigma(t)\) with radius \(r\) and let \(\pi_{\Sigma(t)}\coloneqq\operatorname{Id}-\mathbf{s}\nabla\mathbf{s}\otimes\nabla\mathbf{s}\) denote the orthogonal projection onto \(\Sigma(t)\), where \(\mathbf{s}\) is the signed distance function defined in (8). Now let \(\delta>0\) be sufficiently small such that \(\pi_{\Sigma(t)}\) is well defined on \(\mathcal{U}_{\delta}(t)\) and injective for all \(t\in[0,T]\). Using \(0\leq\psi_{\varepsilon}\leq 1\), the Cauchy-Schwarz inequality on \(\Sigma(t)\), and the above estimate in \(r\) for each \(y\in\Sigma(t)\), we compute \[\left(\int_{\mathcal{U}_{\delta}(t)}|\psi_{\varepsilon}(\cdot,t)-\chi_{\Omega(t)}|\,\mathrm{d}x\right)^{2}\] \[\leq C\bigg{(}\int_{\Sigma(t)}\int_{0}^{\delta}|\psi_{\varepsilon}-\chi_{\Omega}|(y+r\nu(y,t),t)\,\mathrm{d}r\,\mathrm{d}\mathcal{H}^{d-1}(y)\] \[\quad+\int_{\Sigma(t)}\int_{0}^{\delta}|\psi_{\varepsilon}-\chi_{\Omega}|(y-r\nu(y,t),t)\,\mathrm{d}r\,\mathrm{d}\mathcal{H}^{d-1}(y)\bigg{)}^{2}\] \[\leq C\int_{\Sigma(t)}\int_{-\delta}^{\delta}|\psi_{\varepsilon}-\chi_{\Omega(t)}|(y+r\nu(y,t),t)\operatorname{dist}(y+r\nu(y,t),\Sigma(t))\,\mathrm{d}r\,\mathrm{d}\mathcal{H}^{d-1}(y)\] \[\leq C\int_{\mathcal{U}_{\delta}(t)}|\psi_{\varepsilon}(x,t)-\chi_{\Omega(t)}(x)|\operatorname{dist}(x,\Sigma(t))\,\mathrm{d}x\leq\ C\mathcal{F}_{\varepsilon}(t).\] In view of (26) and the well-preparedness condition (ii) we obtain Theorem 1. ## 3. Construction of calibration: Proof of Proposition 1 Proof of Proposition 1.: Let \((\Sigma(t))_{t\in[0,T]}\) be a smooth solution to volume-preserving mean curvature flow.
Let \(\delta>0\) be sufficiently small such that \(\pi_{\Sigma}\), with the notation of the proof of Theorem 1, is well defined, injective and of class \(C^{2}\) on \(\mathcal{U}_{\delta}\). Define a smooth cutoff function \(\zeta:\mathbb{R}\to[0,\infty)\) such that \(\zeta(r)=1-r^{2}\) for \(|r|<\delta/2\) and \(\zeta(r)=0\) for \(|r|\geq\delta\), and define \[\xi(\cdot,t)\coloneqq\zeta(\mathbf{s}(\cdot,\Sigma(t)))\nabla\mathbf{s}(\cdot,\Sigma(t)).\] Next, let \(\theta\) be a smooth truncation of the identity, i.e., \(\theta(r)=-\theta(-r),\,\theta(r)=r\) for \(|r|<\delta/2\) and \(\theta(r)=\delta\) for \(r\geq\delta\). Now we define \(\vartheta(x,t)\coloneqq\theta(\mathbf{s}(x,\Sigma(t)))\). Finally we construct the vector field \(B\). Let \(V(\cdot,t)\) denote the normal velocity of the interface \(\Sigma(t)=\{\vartheta(\cdot,t)=0\}\) and let \(\eta\) be a cutoff function such that \(\eta(r)=1\) for \(|r|<\delta\) and \(\eta(r)=0\) for \(|r|\geq 2\delta\). Now consider the ansatz \[B(x,t)\coloneqq\eta(\mathbf{s}(x,\Sigma(t)))((V\nu+X)\circ\pi_{\Sigma(t)}(x))\] for some tangent vector field \(X(\cdot,t):\Sigma(t)\to T\Sigma(t)\). Then \(\nu\otimes\nu:\nabla B=0\) in \(\mathcal{U}_{\delta}(t)\) and hence, on \(\Sigma(t)\), \[\nabla\cdot B =(\operatorname{Id}-\nu\otimes\nu):\nabla B+\nu\otimes\nu:\nabla B\] \[=\operatorname{div}_{\Sigma(t)}B\] \[=\operatorname{div}_{\Sigma(t)}(V\nu+X)\] \[=V\mathrm{div}_{\Sigma(t)}\nu+\operatorname{div}_{\Sigma(t)}X.\] We can construct such an \(X(\cdot,t)\) by solving the PDE \[-\Delta_{\Sigma(t)}\varphi=VH-c\quad\text{on }\Sigma(t),\] where \(c(t)=\fint_{\Sigma(t)}VH\,\mathrm{d}\mathcal{H}^{d-1}\). Then the right-hand side satisfies the compatibility condition \[\int_{\Sigma(t)}(VH-c)\,\mathrm{d}\mathcal{H}^{d-1}=0,\] and existence and uniqueness of weak solutions in \(H^{1}_{(0)}(\Sigma(t))\) can easily be shown with the Lax-Milgram lemma, cf. [11, Lemma 4].
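The solvability argument can be illustrated concretely in the simplest case \(\Sigma\simeq S^{1}\), where the Laplace-Beltrami operator reduces to \(\partial_{\theta}^{2}\). The following minimal numerical sketch (ours, not part of the proof; the right-hand side \(f\) is an arbitrary mean-zero stand-in for \(VH-c\)) solves the periodic Poisson problem spectrally and checks the compatibility condition:

```python
import numpy as np

# On the unit circle, Delta_Sigma = d^2/dtheta^2, and -phi'' = f is solvable
# iff f has mean zero (the compatibility condition); the solution is unique
# up to a constant, fixed here by requiring phi to have mean zero.
n = 256
theta = 2 * np.pi * np.arange(n) / n
f = np.cos(theta) + 0.5 * np.sin(3 * theta)    # mean-zero stand-in for V H - c
assert abs(f.mean()) < 1e-12                   # compatibility condition

k = np.fft.fftfreq(n, d=1.0 / n)               # integer wave numbers
fhat = np.fft.fft(f)
phihat = np.zeros_like(fhat)
phihat[k != 0] = fhat[k != 0] / k[k != 0] ** 2  # -phi'' = f  <=>  k^2 phihat = fhat
phi = np.fft.ifft(phihat).real                  # mean-zero solution

residual = -np.fft.ifft(-(k ** 2) * phihat).real - f
print(np.max(np.abs(residual)))                 # tiny (machine precision)
```

Here the exact solution is \(\varphi(\theta)=\cos\theta+\tfrac{1}{18}\sin 3\theta\), which the spectral solver reproduces up to machine precision; on a general surface one would instead use, e.g., a surface finite element method.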
Since \(\Sigma(t)\) is \(C^{3,\alpha}\) and the normal velocity \(V\) is of class \(C^{1,\alpha}\), the regularity of \(\varphi\) can be improved to \(C^{3,\alpha}\) using Schauder estimates, cf. [12, Proof of Thm. 1]. Now set \(X\coloneqq\nabla_{\Sigma(t)}\varphi\). Then \(B\) is of class \(C^{1,\alpha}\), in particular \(C^{0,1}\), and satisfies the required properties: \[\nu\otimes\nu:\nabla B =0 \text{on }\Sigma(t), \tag{27}\] \[\operatorname{div}B =c \text{on }\Sigma(t), \tag{28}\] and hence by Lipschitz continuity the divergence constraints (14) and (15). Now we compute, on \(\Sigma\), \[\partial_{t}\mathbf{s}+B\cdot\nabla\mathbf{s}=-V+B\cdot\nu=-V+V=0. \tag{29}\] Since both \(|\xi|^{2}=(\zeta\circ\mathbf{s})^{2}\) and \(\vartheta=\theta\circ\mathbf{s}\) are functions of the signed distance and Lipschitz, we immediately obtain (16) and (17). It remains to show (18) and (19). Since \(\zeta^{\prime}(0)=0\), we have, on \(\Sigma\), \[B\cdot\xi+\nabla\cdot\xi-\lambda=B\cdot\nu+|\nabla\mathbf{s}|^{2}\zeta^{\prime}(0)+\zeta(0)\nabla\cdot\nu-\lambda=V+H-\lambda=0.\] By Lipschitz continuity of \(B\) and \(\xi\) we get (19). Finally we compute \[(\partial_{t}\xi+(B\cdot\nabla)\xi+(\nabla B)^{\mathsf{T}}\xi)(\cdot,t)\] \[=\zeta^{\prime}(\mathbf{s})(\partial_{t}\mathbf{s}+B\cdot\nabla\mathbf{s})\nabla\mathbf{s}+\zeta(\mathbf{s})(\partial_{t}\nabla\mathbf{s}+(B\cdot\nabla)\nabla\mathbf{s}+(\nabla B)^{\mathsf{T}}\nabla\mathbf{s}).\] As before, the first term is \(O(\mathrm{dist}(\cdot,\Sigma(t)))\). Thus it remains to compute the second term. Since all quantities in (29) are constant along normal lines, the identity (29) in fact holds in \(\mathcal{U}_{\delta}(t)\), so we may differentiate it and evaluate on \(\Sigma\): \[0 =\nabla(\partial_{t}\mathbf{s}+(B\cdot\nabla)\mathbf{s})\] \[=\partial_{t}\nabla\mathbf{s}+(B\cdot\nabla)\nabla\mathbf{s}+(\nabla B)^{T}\nabla\mathbf{s}\] \[=\partial_{t}\xi+(B\cdot\nabla)\xi+(\nabla B)^{T}\xi.\] This concludes the proof of Proposition 1. ## 4. Relative energy estimate: Proof of Proposition 2 This section is devoted to the proof of the relative energy estimate in Proposition 2.
We will need an appropriate weak formulation of the nonlocal Allen-Cahn equation, which we will later test with the extended velocity field \(B\). It is easy to check that, testing (1) with \(\varepsilon B\cdot\nabla u_{\varepsilon}\), we have for any solution \(u_{\varepsilon}\) of the nonlocal Allen-Cahn equation (1) \[\int(\nabla\cdot B)\Big{(}\frac{\varepsilon}{2}|\nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})\Big{)}\,\mathrm{d}x-\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}|\nabla\psi_{\varepsilon}|\,\mathrm{d}x \tag{30}\] \[=-\int\Big{(}V_{\varepsilon}-\lambda_{\varepsilon}\sqrt{2W(u_{\varepsilon})}\Big{)}\,\nu_{\varepsilon}\cdot B|\nabla u_{\varepsilon}|\,\mathrm{d}x+\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x, \tag{31}\] where \(V_{\varepsilon}\coloneqq\varepsilon\partial_{t}u_{\varepsilon}\) and where we omitted the domain of integration \(\mathbb{R}^{d}\times\{t\}\), for \(t\in(0,T)\), cf. [14, Section 3.2]. The following simple lemma, cf. [5, Lemma 4], states the basic coercivity properties of the relative energy \(\mathcal{E}_{\varepsilon}\). **Lemma 2**.: _There exist constants \(0<c,C<\infty\) such that_ \[\int\Big{(}\sqrt{\varepsilon}|\nabla u_{\varepsilon}|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_{\varepsilon})}\Big{)}^{2}\,\mathrm{d}x\leq 2\mathcal{E}_{\varepsilon}[u_{\varepsilon},\Sigma], \tag{32}\] \[\int|\nu_{\varepsilon}-\xi|^{2}|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\leq 2\mathcal{E}_{\varepsilon}[u_{\varepsilon},\Sigma], \tag{33}\] \[\int|\nu_{\varepsilon}-\xi|^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x\leq 12\mathcal{E}_{\varepsilon}[u_{\varepsilon},\Sigma], \tag{34}\] \[\int\min\big{\{}\mathrm{dist}^{2}(\cdot,\Sigma),c\big{\}}\,\Big{(}\frac{\varepsilon}{2}|\nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})\Big{)}\,\mathrm{d}x\leq C(\Sigma)\mathcal{E}_{\varepsilon}[u_{\varepsilon},\Sigma]. \tag{35}\] Now we are in the position to prove the proposition.
Proof of Proposition 2.: We compute using Gauss' theorem \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{\varepsilon}(t)=\frac{\mathrm{d}}{\mathrm{d}t}E_{\varepsilon}[u_{\varepsilon}(\cdot,t)]+\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}\times\{t\}}\xi\cdot\nabla\psi_{\varepsilon}\,\mathrm{d}x \tag{36}\] \[=-\int_{\mathbb{R}^{d}\times\{t\}}\frac{1}{\varepsilon}(\varepsilon\partial_{t}u_{\varepsilon})^{2}\,\mathrm{d}x-\int_{\mathbb{R}^{d}\times\{t\}}(\nabla\cdot\xi)\sqrt{2W(u_{\varepsilon})}\partial_{t}u_{\varepsilon}\,\mathrm{d}x+\int_{\mathbb{R}^{d}\times\{t\}}\partial_{t}\xi\cdot\nabla\psi_{\varepsilon}\,\mathrm{d}x. \tag{37}\] In the following, we again omit the domain of integration \(\mathbb{R}^{d}\times\{t\}\). We set \(V_{\varepsilon}\coloneqq\varepsilon\partial_{t}u_{\varepsilon}\). Then we see, using that \(\int V_{\varepsilon}\sqrt{2W(u_{\varepsilon})}\,\mathrm{d}x=\frac{\mathrm{d}}{\mathrm{d}t}\int\psi_{\varepsilon}\,\mathrm{d}x=0\), \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{\varepsilon}(t)=\int\left(-\frac{1}{\varepsilon}V_{\varepsilon}^{2}-\frac{1}{\varepsilon}V_{\varepsilon}\big{(}\nabla\cdot\xi-\lambda\big{)}\sqrt{2W(u_{\varepsilon})}-\partial_{t}\xi\cdot\nu_{\varepsilon}|\nabla\psi_{\varepsilon}|\right)\,\mathrm{d}x.\] We add the weak formulation (31), tested with the velocity field \(B\), to obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{\varepsilon}(t)=\int\left(-\frac{1}{\varepsilon}V_{\varepsilon}^{2}-\frac{1}{\varepsilon}V_{\varepsilon}\big{(}\nabla\cdot\xi-\lambda\big{)}\sqrt{2W(u_{\varepsilon})}+\Big{(}V_{\varepsilon}-\lambda_{\varepsilon}\sqrt{2W(u_{\varepsilon})}\Big{)}\,\nu_{\varepsilon}\cdot B|\nabla u_{\varepsilon}|\right)\,\mathrm{d}x\] \[+\int(\nabla\cdot B)\Big{(}\frac{\varepsilon}{2}|\nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})\Big{)}\,\mathrm{d}x-\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\]
\[-\int\nu_{\varepsilon}\cdot\partial_{t}\xi|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[-\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x.\] Decomposing the vector field \(B=(B\cdot\xi)\xi+(\mathrm{Id}-\xi\otimes\xi)B\), completing squares, and adding zero to make the transport term for \(\xi\) appear, we get \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{\varepsilon}(t)+\frac{1}{2}\int\frac{1}{\varepsilon}\Big{(}V_{\varepsilon}+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}\Big{)}^{2}\,\mathrm{d}x+\frac{1}{2}\int\frac{1}{\varepsilon}\Big{|}V_{\varepsilon}\nu_{\varepsilon}-\varepsilon|\nabla u_{\varepsilon}|(B\cdot\xi)\xi\Big{|}^{2}\,\mathrm{d}x\] \[= \frac{1}{2}\int\Big{(}(\nabla\cdot\xi-\lambda)^{2}\frac{1}{\varepsilon}2W(u_{\varepsilon})+(B\cdot\xi)^{2}|\xi|^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\Big{)}\,\mathrm{d}x\] \[+\int\big{(}V_{\varepsilon}\nu_{\varepsilon}\cdot(\mathrm{Id}-\xi\otimes\xi)B-\lambda_{\varepsilon}\sqrt{2W(u_{\varepsilon})}\nu_{\varepsilon}\cdot B\big{)}|\nabla u_{\varepsilon}|\,\mathrm{d}x\] \[+\int\big{(}\nabla\cdot B-\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}+\nu_{\varepsilon}\cdot(B\cdot\nabla)\xi+\xi\cdot(\nu_{\varepsilon}\cdot\nabla)B\big{)}|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[-\int\nu_{\varepsilon}\cdot(\partial_{t}\xi+(B\cdot\nabla)\xi+(\nabla B)^{\mathsf{T}}\xi)|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[+\int(\nabla\cdot B)\Big{(}\frac{\varepsilon}{2}|\nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})-|\nabla\psi_{\varepsilon}|\Big{)}\,\mathrm{d}x\] \[-\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x.
\tag{38}\] Completing another square and using \(\xi\otimes\nu_{\varepsilon}+\nu_{\varepsilon}\otimes\xi=-(\nu_{\varepsilon}-\xi)\otimes(\nu_{\varepsilon}-\xi)+\nu_{\varepsilon}\otimes\nu_{\varepsilon}+\xi\otimes\xi\), we may write the right-hand side of (38) as \[\frac{1}{2}\int\frac{1}{\varepsilon}\Big{(}(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}+\varepsilon|\nabla u_{\varepsilon}|B\cdot\xi\Big{)}^{2}\,\mathrm{d}x\] \[+\frac{1}{2}\int\big{(}|\xi|^{2}-1\big{)}(B\cdot\xi)^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x\] \[-\int(\nabla\cdot\xi-\lambda)B\cdot\xi|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[+\int\big{(}V_{\varepsilon}\nu_{\varepsilon}\cdot(\mathrm{Id}-\xi\otimes\xi)B-\lambda_{\varepsilon}\sqrt{2W(u_{\varepsilon})}\nu_{\varepsilon}\cdot B\big{)}|\nabla u_{\varepsilon}|\,\mathrm{d}x\] \[+\int(\nabla\cdot B)(1-\xi\cdot\nu_{\varepsilon})|\nabla\psi_{\varepsilon}|\,\mathrm{d}x+\int(\nabla\cdot B)\xi\cdot\nu_{\varepsilon}|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[-\int\nabla B\colon(\nu_{\varepsilon}-\xi)\otimes(\nu_{\varepsilon}-\xi)|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[-\int\nu_{\varepsilon}\cdot(\xi\cdot\nabla)B|\nabla\psi_{\varepsilon}|\,\mathrm{d}x+\int\nu_{\varepsilon}\cdot(B\cdot\nabla)\xi|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[-\int(\nu_{\varepsilon}-\xi)\cdot\big{(}\partial_{t}\xi+(B\cdot\nabla)\xi+(\nabla B)^{\mathsf{T}}\xi\big{)}\,|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[-\int\xi\cdot(\partial_{t}\xi+(B\cdot\nabla)\xi)\,|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[+\int(\nabla\cdot B)\Big{(}\frac{\varepsilon}{2}|\nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})-|\nabla\psi_{\varepsilon}|\Big{)}\,\mathrm{d}x\] \[-\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x. \tag{39}\] Two integrations by parts and the symmetry of the Hessian \(\nabla^{2}\psi_{\varepsilon}\) imply
\[\int(B\cdot\nabla)\xi\cdot\nabla\psi_{\varepsilon}\,\mathrm{d}x=\int(\xi \cdot\nabla)B\cdot\nabla\psi_{\varepsilon}\,\mathrm{d}x+\int\big{(}(\nabla \cdot\xi)B-(\nabla\cdot B)\xi\big{)}\cdot\nabla\psi_{\varepsilon}\,\mathrm{d}x.\] Combining this with \(\nabla\psi_{\varepsilon}=-\nu_{\varepsilon}|\nabla\psi_{\varepsilon}|\), we may again replace three terms in (39) by the term \((\nabla\cdot\xi)B\cdot\nu_{\varepsilon}|\nabla\psi_{\varepsilon}|\) so that we get \[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{ \varepsilon}(t)+\frac{1}{2}\int\frac{1}{\varepsilon}\Big{(}V_{\varepsilon}+( \nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}\Big{)}^{2}\,\mathrm{d}x+ \frac{1}{2}\int\frac{1}{\varepsilon}\Big{|}V_{\varepsilon}\nu_{\varepsilon}- \varepsilon|\nabla u_{\varepsilon}|(B\cdot\xi)\xi\Big{|}^{2}\,\mathrm{d}x\\ =&\frac{1}{2}\int\frac{1}{\varepsilon}\Big{(}( \nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}+\varepsilon|\nabla u_{ \varepsilon}|B\cdot\xi\Big{)}^{2}\,\mathrm{d}x+\frac{1}{2}\int\big{(}|\xi|^{2 }-1\big{)}(B\cdot\xi)^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x \\ &-\int(\nabla\cdot\xi-\lambda)(1-\xi\cdot\nu_{\varepsilon})B \cdot\xi|\nabla\psi_{\varepsilon}|\,\mathrm{d}x+\int(\lambda-\lambda_{ \varepsilon})\nu_{\varepsilon}\cdot B|\nabla\psi_{\varepsilon}|\,\mathrm{d}x \\ &+\int\Big{(}V_{\varepsilon}+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{ \varepsilon})}\Big{)}\nu_{\varepsilon}\cdot(\mathrm{Id}-\xi\otimes\xi)B| \nabla u_{\varepsilon}|\,\mathrm{d}x\\ &+\int(\nabla\cdot B)(1-\xi\cdot\nu_{\varepsilon})|\nabla\psi_{ \varepsilon}|\,\mathrm{d}x-\int(\nu_{\varepsilon}-\xi)\cdot\nabla B(\nu_{ \varepsilon}-\xi)|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\\ &-\int(\nu_{\varepsilon}-\xi)\cdot\big{(}\partial_{t}\xi+(B \cdot\nabla)\xi+(\nabla B)^{\mathsf{T}}\xi\big{)}\,|\nabla\psi_{\varepsilon}| \,\mathrm{d}x\\ &-\frac{1}{2}\int\big{(}\partial_{t}|\xi|^{2}+(B\cdot\nabla)|\xi| ^{2}\big{)}\,|\nabla\psi_{\varepsilon}|\,\mathrm{d}x\\ &+\int(\nabla\cdot 
B)\Big{(}\frac{\varepsilon}{2}|\nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})-|\nabla\psi_{\varepsilon}|\Big{)}\,\mathrm{d}x\\ &-\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x.\end{split} \tag{40}\] We argue term-by-term that the right-hand side can be controlled suitably. By and large, the argument is similar to the one in the sharp-interface case in [12], here based on the coercivity properties of \(\mathcal{E}_{\varepsilon}\) collected in Lemma 2. Let us first estimate the terms that are analogous to [12]. For the first term, by Young's inequality we have \[\begin{split}&\frac{1}{2\varepsilon}\Big{(}(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}+\varepsilon|\nabla u_{\varepsilon}|B\cdot\xi\Big{)}^{2}\\ &\leq(\nabla\cdot\xi-\lambda+B\cdot\xi)^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}+(\nabla\cdot\xi-\lambda)^{2}\Big{(}\sqrt{\varepsilon}|\nabla u_{\varepsilon}|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_{\varepsilon})}\Big{)}^{2}.\end{split}\] The contributions of these terms are controlled by \(\mathcal{E}_{\varepsilon}(t)\): the first using (19) in conjunction with (35), the second using (32). The second term in (40) is controlled by (13) in conjunction with (35). The third term is directly controlled by \(\|\nabla\cdot\xi-\lambda\|_{\infty}\|B\cdot\xi\|_{\infty}\mathcal{E}_{\varepsilon}(t)\). The analogous argument holds for the sixth term.
For the fifth term we use Young's inequality: \[\begin{split}&\int\Big{(}V_{\varepsilon}+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}\Big{)}\,\nu_{\varepsilon}\cdot(\mathrm{Id}-\xi\otimes\xi)B|\nabla u_{\varepsilon}|\,\mathrm{d}x\\ &\leq\frac{1}{4}\int\frac{1}{\varepsilon}\left(V_{\varepsilon}+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}\right)^{2}\,\mathrm{d}x+\int(\nu_{\varepsilon}\cdot(\mathrm{Id}-\xi\otimes\xi)B)^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x\\ &\leq\frac{1}{4}\int\frac{1}{\varepsilon}\left(V_{\varepsilon}+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}\right)^{2}\,\mathrm{d}x+\|B\|_{\infty}^{2}\int|\nu_{\varepsilon}-(\nu_{\varepsilon}\cdot\xi)\xi|^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x.\end{split}\] The first term is absorbed in the first term on the left-hand side of (40). The second term is estimated by (34). The seventh term is controlled by (33), since its integrand is bounded by \(\|\nabla B\|_{\infty}|\nu_{\varepsilon}-\xi|^{2}|\nabla\psi_{\varepsilon}|\). For the eighth term we have, using (18) and Young's inequality, \[(\nu_{\varepsilon}-\xi)\cdot\big{(}\partial_{t}\xi+(B\cdot\nabla)\xi+(\nabla B)^{\mathsf{T}}\xi\big{)}\,|\nabla\psi_{\varepsilon}|\] \[\leq\frac{1}{2}|\nu_{\varepsilon}-\xi|^{2}|\nabla\psi_{\varepsilon}|+\frac{1}{2}C\min\big{\{}\mathrm{dist}^{2}(\cdot,\Sigma(t)),c\big{\}}\,|\nabla\psi_{\varepsilon}|.\] Since \(|\nabla\psi_{\varepsilon}|\leq\frac{1}{2}\varepsilon|\nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})\), the eighth term is controlled by (33) and (35). The ninth term is controlled by (17). The second to last term is controlled by \(\|\nabla\cdot B\|_{\infty}\mathcal{E}_{\varepsilon}(t)\). Thus it remains to estimate the fourth term and the last term.
For the last term in (40) we observe that, using \(|\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}-\xi\cdot\nabla B\xi|\leq\|\nabla B\|_{\infty}|\nu_{\varepsilon}-\xi|\) and Young's inequality, \[\int\nu_{\varepsilon}\cdot\nabla B\nu_{\varepsilon}\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x\] \[\leq\int\xi\cdot\nabla B\xi\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x\] \[\quad+\|\nabla B\|_{\infty}\int|\nu_{\varepsilon}-\xi|\sqrt{\varepsilon}|\nabla u_{\varepsilon}|\left(\sqrt{\varepsilon}|\nabla u_{\varepsilon}|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_{\varepsilon})}\right)\,\mathrm{d}x\] \[\leq\int\xi\cdot\nabla B\xi\big{(}\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|\big{)}\,\mathrm{d}x+\|\nabla B\|_{\infty}\int|\nu_{\varepsilon}-\xi|^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x\] \[\quad+\|\nabla B\|_{\infty}\int\left(\sqrt{\varepsilon}|\nabla u_{\varepsilon}|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_{\varepsilon})}\right)^{2}\,\mathrm{d}x.\] Here, the last two terms are bounded by (34) and (32), respectively. We compute, using Young's inequality, \[\int\xi\otimes\xi:\nabla B(\varepsilon|\nabla u_{\varepsilon}|^{2}-|\nabla\psi_{\varepsilon}|)\,\mathrm{d}x \leq\frac{1}{2}\int\left(\xi\otimes\xi:\nabla B\right)^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x\] \[\quad+\frac{1}{2}\int\left(\sqrt{\varepsilon}|\nabla u_{\varepsilon}|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_{\varepsilon})}\right)^{2}\,\mathrm{d}x.\] Using the coercivity estimate (32), the second summand is bounded by \(\mathcal{E}_{\varepsilon}\). For the first term we have, by (15), \[\frac{1}{2}\int\left(\xi\otimes\xi:\nabla B\right)^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x\leq\frac{1}{2}\|\nabla B\|_{\infty}^{2}\int_{\operatorname{supp}\xi}|\mathbf{s}|^{2}\varepsilon|\nabla u_{\varepsilon}|^{2}\,\mathrm{d}x\] which is bounded by \(\mathcal{E}_{\varepsilon}\) by (35).
Next we estimate the fourth term in (40). Since by Gauss' theorem \[\int_{\Omega(t)}\nabla\cdot B\,\mathrm{d}x=\int_{\Sigma(t)}B\cdot\nu\,\mathrm{d}\mathcal{H}^{d-1}=\int_{\Sigma(t)}V\,\mathrm{d}\mathcal{H}^{d-1}=\frac{\mathrm{d}}{\mathrm{d}t}|\Omega(t)|=0,\] we have, using \(\nu_{\varepsilon}|\nabla\psi_{\varepsilon}|=-\nabla\psi_{\varepsilon}\) and an integration by parts, \[\int(\lambda-\lambda_{\varepsilon})\nu_{\varepsilon}\cdot B|\nabla\psi_{\varepsilon}|\,\mathrm{d}x =(\lambda-\lambda_{\varepsilon})\int(\nabla\cdot B)\psi_{\varepsilon}\,\mathrm{d}x\] \[=(\lambda-\lambda_{\varepsilon})\int(\nabla\cdot B)(\psi_{\varepsilon}-\chi_{\Omega(t)})\,\mathrm{d}x.\] Furthermore, since \(\frac{\mathrm{d}}{\mathrm{d}t}\int\psi_{\varepsilon}\,\mathrm{d}x=0=\frac{\mathrm{d}}{\mathrm{d}t}|\Omega(t)|\), we also have \[\int(\nabla\cdot B)(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x\] \[= \int(\nabla\cdot B-c)(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x+\int c(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x\] \[= \int(\nabla\cdot B-c)(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x+c\left(\int_{\mathbb{R}^{d}}\psi_{\varepsilon}(x,t)\,\mathrm{d}x-|\Omega(t)|\right)\] \[= \int(\nabla\cdot B-c)(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x+c\left(\int_{\mathbb{R}^{d}}\psi_{\varepsilon}(x,0)\,\mathrm{d}x-|\Omega(0)|\right).\] By the well-preparedness assumption (i), the second summand vanishes.
By (14) and (20) we have \[(|\lambda|+|\lambda_{\varepsilon}|)\left|\int(\nabla\cdot B-c)(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x\right| \leq C(|\lambda|+|\lambda_{\varepsilon}|)\int|\vartheta||\psi_{\varepsilon}-\chi_{\Omega}|\,\mathrm{d}x\] \[= C(|\lambda|+|\lambda_{\varepsilon}|)\mathcal{F}_{\varepsilon}.\] Therefore we have in total \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{\varepsilon}(t)+\frac{1}{4}\int\frac{1}{\varepsilon}\Big{(}V_{\varepsilon}+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_{\varepsilon})}\Big{)}^{2}\,\mathrm{d}x +\frac{1}{2}\int\frac{1}{\varepsilon}\Big{|}V_{\varepsilon}\nu_{\varepsilon}-\varepsilon|\nabla u_{\varepsilon}|(B\cdot\xi)\xi\Big{|}^{2}\,\mathrm{d}x\] \[\leq C(\mathcal{E}_{\varepsilon}(t)+\mathcal{F}_{\varepsilon}(t)). \tag{41}\] Finally we estimate \(\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{\varepsilon}\) by decomposing it into a term which is bounded by \(\mathcal{E}_{\varepsilon}+\mathcal{F}_{\varepsilon}\) and a small dissipation term which can be absorbed on the left-hand side of (41). We smuggle in \(\int(B\cdot\nabla\vartheta)(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x=-\int\vartheta B\cdot\nabla\psi_{\varepsilon}\,\mathrm{d}x-\int(\nabla\cdot B)\vartheta(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x\) and obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{\varepsilon}(t) =\int\partial_{t}\vartheta(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x+\int\vartheta\partial_{t}\psi_{\varepsilon}\,\mathrm{d}x-\int_{\Sigma}\vartheta V\,\mathrm{d}\mathcal{H}^{d-1}\] \[=\int(\partial_{t}\vartheta+B\cdot\nabla\vartheta)(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x+\int(\nabla\cdot B)\vartheta(\psi_{\varepsilon}-\chi_{\Omega})\,\mathrm{d}x\] \[\quad+\int\vartheta(\partial_{t}\psi_{\varepsilon}+B\cdot\nabla\psi_{\varepsilon})\,\mathrm{d}x,\] where the surface integral vanishes since \(\vartheta(\cdot,t)=0\) on \(\Sigma(t)\). Since \((\partial_{t}\vartheta+B\cdot\nabla\vartheta)=O(\mathrm{dist}(\cdot,\Sigma))\) and \(B\) is Lipschitz, the first two summands are bounded by \(\mathcal{F}_{\varepsilon}\).
It only remains to estimate the last integral, which amounts to estimating the error in the transport equation for \(\psi_{\varepsilon}\). Indeed, decomposing the vector field \(B=(B\cdot\xi)\xi+(\mathrm{Id}-\xi\otimes\xi)B\) once more and applying Young's inequality, we compute \[\int\vartheta(\partial_{t}\psi_{\varepsilon}+B\cdot\nabla\psi_{\varepsilon})\,\mathrm{d}x \leq\int\vartheta\left(\frac{1}{\varepsilon}\sqrt{2W(u_{\varepsilon})}V_{\varepsilon}-\frac{1}{\varepsilon}\sqrt{2W(u_{\varepsilon})}\varepsilon|\nabla u_{\varepsilon}|\nu_{\varepsilon}\cdot(B\cdot\xi)\xi\right)\,\mathrm{d}x\] \[\quad+\int\vartheta|\nabla\psi_{\varepsilon}|B\cdot(\nu_{\varepsilon}-(\xi\cdot\nu_{\varepsilon})\xi)\,\mathrm{d}x\] \[\leq\int\vartheta\frac{1}{\varepsilon}\sqrt{2W(u_{\varepsilon})}\left(V_{\varepsilon}-\varepsilon|\nabla u_{\varepsilon}|\nu_{\varepsilon}\cdot(B\cdot\xi)\xi\right)\,\mathrm{d}x\] \[\quad+\|B\|_{\infty}\int|\vartheta||\nu_{\varepsilon}-(\nu_{\varepsilon}\cdot\xi)\xi||\nabla\psi_{\varepsilon}|\,\mathrm{d}x\] \[\leq 2\int\vartheta^{2}\frac{1}{\varepsilon}W(u_{\varepsilon})\,\mathrm{d}x+\frac{1}{4}\int\frac{1}{\varepsilon}\left(V_{\varepsilon}-\varepsilon|\nabla u_{\varepsilon}|\nu_{\varepsilon}\cdot(B\cdot\xi)\xi\right)^{2}\,\mathrm{d}x\] \[+\|B\|_{\infty}\int|\vartheta||\nu_{\varepsilon}-(\nu_{\varepsilon}\cdot\xi)\xi||\nabla\psi_{\varepsilon}|\,\mathrm{d}x.\] The first term is estimated by (35). The second term is absorbed in the dissipation (41) after adding \(\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{\varepsilon}\) and \(\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_{\varepsilon}\) together.
For the last term we apply Young's inequality once more to obtain \[\int|\vartheta||\nu_{\varepsilon}-(\nu_{\varepsilon}\cdot\xi)\xi ||\nabla\psi_{\varepsilon}|\,\mathrm{d}x \leq\frac{1}{2}\int\vartheta^{2}|\nabla\psi_{\varepsilon}|\, \mathrm{d}x+\int(1-\nu_{\varepsilon}\cdot\xi)|\nabla\psi_{\varepsilon}|\, \mathrm{d}x\] \[\leq\frac{1}{2}\int\vartheta^{2}\left(\frac{\varepsilon}{2}| \nabla u_{\varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon})\right)\, \mathrm{d}x+\mathcal{E}_{\varepsilon}.\] This is again estimated by \(\mathcal{E}_{\varepsilon}(t)\). Therefore \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathcal{E}_{\varepsilon}(t)+\mathcal{F}_ {\varepsilon}(t)\right)\leq C\left(\mathcal{E}_{\varepsilon}(t)+\mathcal{F}_ {\varepsilon}(t)\right).\qed\] Finally we give a short proof of Lemma 1. Proof of Lemma 1.: The proof is similar to the unconstrained case in [5]. For \(a\in\mathbb{R}\) define \(u_{\varepsilon}^{a}(x)\coloneqq U(-\varepsilon^{-1}\mathbf{s}(x)-a)\), where \(U\) is as in (9), and let \(\psi_{\varepsilon}^{a}\coloneqq\phi\circ u_{\varepsilon}^{a}\). Since \(a\mapsto\int\psi_{\varepsilon}^{a}\,\mathrm{d}x\) is continuous and \(\int\psi_{\varepsilon}^{a}\,\mathrm{d}x\to 0\) as \(a\to+\infty\) and \(\int\psi_{\varepsilon}^{a}\,\mathrm{d}x\to\infty\) as \(a\to-\infty\), there exists, for each \(\varepsilon>0\), \(a_{\varepsilon}\in\mathbb{R}\) such that \(\int\psi_{\varepsilon}^{a_{\varepsilon}}\,\mathrm{d}x=|\Omega(0)|\). Furthermore \[\frac{d}{da}\bigg{|}_{a=0}\int\phi\circ u_{\varepsilon}^{a}\, \mathrm{d}x =\int\frac{1}{\varepsilon}\sqrt{2W(u_{\varepsilon}^{0})}U^{\prime }(-\varepsilon^{-1}\mathbf{s})\,\mathrm{d}x=\int\frac{2}{\varepsilon}W(u_{ \varepsilon}^{0})\,\mathrm{d}x\] \[=E_{\varepsilon}(u_{\varepsilon}^{0})\to\mathcal{H}^{d-1}(\Sigma( 0))\neq 0,\] and \(\int\phi\circ U(-\varepsilon^{-1}\mathbf{s})\,\mathrm{d}x=|\Omega(0)|(1+O( \varepsilon))\). Hence \(a_{\varepsilon}=O(\varepsilon)\). 
For simplicity we write \(u_{\varepsilon}=u_{\varepsilon}^{a_{\varepsilon}}\). Now we compute, using \(U^{\prime}(s)=\sqrt{2W(U(s))}\) and \(1-\nabla\mathbf{s}\cdot\xi\leq 1-|\xi|^{2}\leq c\,\mathrm{dist}^{2}(\cdot, \Sigma(0))\), \[\mathcal{E}_{\varepsilon}(0) =\int\left(\frac{\varepsilon}{2}|\nabla u_{\varepsilon}|^{2}+ \frac{1}{\varepsilon}W(u_{\varepsilon})\right)\,\mathrm{d}x+\int\xi\cdot \nabla\psi_{\varepsilon}\,\mathrm{d}x\] \[=\int\left(\frac{1}{2\varepsilon}\left|U^{\prime}\left(-\varepsilon ^{-1}\mathbf{s}(x)-a_{\varepsilon}\right)\right|^{2}+\frac{1}{\varepsilon}W(u_ {\varepsilon}(x))\right)\,\mathrm{d}x-\int\frac{2}{\varepsilon}W(u_{ \varepsilon})\xi\cdot\nabla\mathbf{s}(x)\,\mathrm{d}x\] \[=\int(1-\nabla\mathbf{s}\cdot\xi)\frac{2}{\varepsilon}W(u_{ \varepsilon})\,\mathrm{d}x\] \[\leq c\varepsilon^{2}\int\left(\frac{\mathrm{dist}(x,\Sigma(0))} {\varepsilon}\right)^{2}\frac{2}{\varepsilon}W(u_{\varepsilon})\,\mathrm{d}x.\] Hence \(\mathcal{E}_{\varepsilon}(0)=O(\varepsilon^{2})\). The bulk error \[\mathcal{F}_{\varepsilon}(0)=\int\vartheta(x)(\phi(U(-\varepsilon^{-1} \mathbf{s}(x)-a_{\varepsilon}))-\chi_{\Omega(0)}(x))\,\mathrm{d}x\] is also \(O(\varepsilon^{2})\), since \(c\,\mathrm{dist}(\cdot,\Sigma(0))\leq|\vartheta|\leq C\,\mathrm{dist}(\cdot, \Sigma(0))\) and \(U(s)\to 0\) as \(s\to-\infty\) and \(U(s)\to 1\) as \(s\to+\infty\). Hence \(u_{\varepsilon}\) satisfies condition (ii). ## Acknowledgments This project has received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2047/1 - 390685813.
2305.20008
Number of Equivalence Classes of Rational Functions over Finite Fields
Two rational functions $f,g\in\Bbb F_q(X)$ are said to be {\em equivalent} if there exist $\phi,\psi\in\Bbb F_q(X)$ of degree one such that $g=\phi\circ f\circ\psi$. We give an explicit formula for the number of equivalence classes of rational functions of a given degree in $\Bbb F_q(X)$. This result should provide guidance for the current and future work on classifications of low degree rational functions over finite fields. We also determine the number of equivalence classes of polynomials of a given degree in $\Bbb F_q[X]$.
Xiang-dong Hou
2023-05-31T16:29:54Z
http://arxiv.org/abs/2305.20008v1
# Number of equivalence classes of rational functions over finite fields ###### Abstract. Two rational functions \(f,g\in\mathbb{F}_{q}(X)\) are said to be _equivalent_ if there exist \(\phi,\psi\in\mathbb{F}_{q}(X)\) of degree one such that \(g=\phi\circ f\circ\psi\). We give an explicit formula for the number of equivalence classes of rational functions of a given degree in \(\mathbb{F}_{q}(X)\). This result should provide guidance for the current and future work on classifications of low degree rational functions over finite fields. We also determine the number of equivalence classes of polynomials of a given degree in \(\mathbb{F}_{q}[X]\). Key words and phrases: finite field, general linear group, projective linear group, rational function 2020 Mathematics Subject Classification: 05E18, 11T06, 12E20, 12F20, 20G40 ## 1. Introduction For a nonconstant rational function \(f(X)\) over a field \(\mathbb{F}\), written in the form \(f(X)=P(X)/Q(X)\), where \(P,Q\in\mathbb{F}[X]\), \(Q\neq 0\), and \(\gcd(P,Q)=1\), we define \(\deg f=\max\{\deg P,\deg Q\}\). Then \([\mathbb{F}(X):\mathbb{F}(f)]=\deg f\). By Lüroth's theorem, every subfield \(E\subset\mathbb{F}(X)\) with \([\mathbb{F}(X):E]=d\) is of the form \(\mathbb{F}(f)\) for some \(f\in\mathbb{F}(X)\) with \(\deg f=d\). Let \[G(\mathbb{F})=\{\phi\in\mathbb{F}(X):\deg\phi=1\}. \tag{1.1}\] The group \((G(\mathbb{F}),\circ)\) is isomorphic to the projective linear group \(\operatorname{PGL}(2,\mathbb{F})\) and the Galois group \(\operatorname{Aut}(\mathbb{F}(X)/\mathbb{F})\) of \(\mathbb{F}(X)\) over \(\mathbb{F}\). For \(A=\left[\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right]\in\operatorname{PGL}(2,\mathbb{F})\), its corresponding element in \(G(\mathbb{F})\), denoted by \(\phi_{A}\), is \(\phi_{A}=(aX+b)/(cX+d)\). 
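Concretely, the isomorphism \(A\mapsto\phi_{A}\) turns matrix multiplication into composition of degree-one maps; a minimal numerical sketch over \(\mathbb{Q}\) (our choice of field, purely illustrative):

```python
from fractions import Fraction

def mobius(A):
    """phi_A(x) = (a*x + b)/(c*x + d) for A = ((a, b), (c, d))."""
    (a, b), (c, d) = A
    return lambda x: (a * Fraction(x) + b) / (c * Fraction(x) + d)

def matmul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

A, B = ((2, 1), (1, 1)), ((1, 3), (0, 1))
phi_A, phi_B, phi_AB = mobius(A), mobius(B), mobius(matmul(A, B))

# composition of maps corresponds to the matrix product: phi_A o phi_B = phi_{AB}
for x in (0, 1, Fraction(5, 7), -2):
    assert phi_A(phi_B(x)) == phi_AB(x)
```

Scaling \(A\) by a nonzero constant leaves \(\phi_{A}\) unchanged, which is why the relevant group is \(\operatorname{PGL}(2,\mathbb{F})\) rather than \(\operatorname{GL}(2,\mathbb{F})\).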
For \(\phi\in G(\mathbb{F})\), its corresponding element in \(\operatorname{Aut}(\mathbb{F}(X)/\mathbb{F})\), denoted by \(\sigma_{\phi}\), is the \(\mathbb{F}\)-automorphism of \(\mathbb{F}(X)\) defined by \(\sigma_{\phi}(X)=\phi(X)\). Two rational functions \(f,g\in\mathbb{F}(X)\setminus\mathbb{F}\) are said to be _equivalent_, denoted as \(f\sim g\), if there exist \(\phi,\psi\in G(\mathbb{F})\) such that \(g=\phi\circ f\circ\psi\). This happens if and only if \(\mathbb{F}(g)=\sigma(\mathbb{F}(f))\) for some \(\sigma\in\operatorname{Aut}(\mathbb{F}(X)/\mathbb{F})\). The set \(\mathbb{F}(X)\setminus\mathbb{F}\) equipped with composition \(\circ\) is a monoid and \(G(\mathbb{F})\) is the group of units of \((\mathbb{F}(X)\setminus\mathbb{F},\circ)\). In a parallel setting, one replaces \(\mathbb{F}(X)\) with \(\mathbb{F}[X]\) and \(G(\mathbb{F})\) with the affine linear group \(\operatorname{AGL}(1,\mathbb{F})=\{\phi\in\mathbb{F}[X]:\deg\phi=1\}\). Then \((\mathbb{F}[X]\setminus\mathbb{F},\circ)\) is a submonoid of \((\mathbb{F}(X)\setminus\mathbb{F},\circ)\) and \(\operatorname{AGL}(1,\mathbb{F})\) is its group of units. If two polynomials \(f,g\in\mathbb{F}[X]\setminus\mathbb{F}\) are equivalent as rational functions, i.e., \(g=\phi\circ f\circ\psi\) for some \(\phi,\psi\in G(\mathbb{F})\), then there are \(\alpha,\beta\in\operatorname{AGL}(1,\mathbb{F})\) such that \(g=\alpha\circ f\circ\beta\); see Lemma 8.1. Factorizations in the monoids \((\mathbb{F}(X)\setminus\mathbb{F},\circ)\) and \((\mathbb{F}[X]\setminus\mathbb{F},\circ)\) are difficult questions that have attracted much attention [1, 2, 3, 9, 10, 18, 19]. Factorizations in \((\mathbb{F}(X)\setminus\mathbb{F},\circ)\) are determined by the lattice \(\mathcal{L}(\mathbb{F})\) of the subfields of \(\mathbb{F}(X)\) and vice versa. 
The Galois group \(\operatorname{Aut}(\mathbb{F}(X)/\mathbb{F})\) is an automorphism group of \(\mathcal{L}(\mathbb{F})\) and the \(\operatorname{Aut}(\mathbb{F}(X)/\mathbb{F})\)-orbits in \(\mathcal{L}(\mathbb{F})\) correspond to the equivalence classes in \(\mathbb{F}(X)\setminus\mathbb{F}\). Many intrinsic properties of rational functions are preserved under equivalence. The degree of a rational function in \(\mathbb{F}(X)\setminus\mathbb{F}\) is invariant under equivalence. Equivalent rational functions in \(\mathbb{F}(X)\setminus\mathbb{F}\) have isomorphic arithmetic monodromy groups. The number of ramification points and their ramification indices of a rational function are preserved under equivalence [16]. When \(\mathbb{F}=\mathbb{F}_{q}\), the finite field with \(q\) elements, there is another important invariant: \(|f(\mathbb{P}^{1}(\mathbb{F}_{q}))|\), the number of values of \(f\in\mathbb{F}_{q}(X)\) on the projective line \(\mathbb{P}^{1}(\mathbb{F}_{q})\). In the theory and applications of finite fields, an important question is to understand the polynomials that permute \(\mathbb{F}_{q}\) and the rational functions that permute \(\mathbb{P}^{1}(\mathbb{F}_{q})\) under the aforementioned equivalence. For classifications of low degree permutation polynomials of finite fields, see [4, 6, 7, 14, 17]. Permutation rational functions of \(\mathbb{P}^{1}(\mathbb{F}_{q})\) of degree \(3\) and \(4\) were classified recently [5, 8, 13]. Equivalence of rational functions over finite fields also arises in other circumstances. There is a construction of irreducible polynomials over \(\mathbb{F}_{q}\) using a rational function \(R(X)\in\mathbb{F}_{q}[X]\); the number of irreducible polynomials produced by the construction depends only on the equivalence class of \(R(X)\)[15]. 
It is known that the equivalence classes of rational functions \(f\in\mathbb{F}_{q}(X)\setminus\mathbb{F}_{q}\) such that \(\mathbb{F}_{q}(X)/\mathbb{F}_{q}(f)\) is Galois are in one-to-one correspondence with the classes of conjugate subgroups of \(\operatorname{PGL}(2,\mathbb{F}_{q})\); see [12]. When \(\mathbb{F}=\mathbb{F}_{q}\), there are only finitely many equivalence classes of rational functions in \(\mathbb{F}_{q}(X)\setminus\mathbb{F}_{q}\) with a given degree \(n\). We shall denote this number by \(\mathfrak{N}(q,n)\). Despite its obvious significance, this number was not known previously. The main contribution of the present paper is the determination of \(\mathfrak{N}(q,n)\) for all \(q\) and \(n\) (Theorem 6.1). For example, when \(n=3\), we have \[\mathfrak{N}(q,3)=\begin{cases}2(q+1)&\text{if $q\equiv 1,4\pmod{6}$},\\ 2q&\text{if $q\equiv 2,5\pmod{6}$},\\ 2q+1&\text{if $q\equiv 3\pmod{6}$}.\end{cases}\] The classification of rational functions of degree \(n\leq 2\) over \(\mathbb{F}_{q}\) is straightforward; see Sections 7.1 and 7.2. When \(n=3\) and \(q\) is even, the classification was obtained recently by Mattarei and Pizzato [16] using the fact that such rational functions have at most two ramification points. The case \(n=3\) and \(q\) odd is still unsolved. (In this case, it was shown in [16] that \(\mathfrak{N}(q,3)\leq 4q\).) A complete classification of rational functions over \(\mathbb{F}_{q}\) appears to be out of reach. However, the determination of \(\mathfrak{N}(q,n)\) is an important step towards understanding the equivalence classes of rational functions over finite fields, especially those with low degree. Here is the outline of our approach. There is an action of \(\operatorname{GL}(2,\mathbb{F}_{q})\) on the set of subfields \(F\subset\mathbb{F}_{q}(X)\) with \([\mathbb{F}_{q}(X):F]=n\), and \(\mathfrak{N}(q,n)\) is the number of orbits of this action. 
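Since \(q=2\equiv 2\pmod 6\), the displayed formula predicts \(\mathfrak{N}(2,3)=2q=4\), and this smallest case is within reach of exhaustive search. The sketch below (the bitmask encoding of \(\mathbb{F}_{2}[X]\) is our own convention) enumerates the \(q^{2(n-1)}=16\) fields \(\mathbb{F}_{2}(f)\) with \(\deg f=3\) as spans \(\langle P,Q\rangle\) and counts the orbits of the \(\operatorname{GL}(2,\mathbb{F}_{2})\)-action \(\mathbb{F}_{2}(f)\mapsto\mathbb{F}_{2}(f\circ\phi_{A})\):

```python
from itertools import product

# GF(2)[X] polynomials as bitmasks: bit i = coefficient of X^i
def deg(p): return p.bit_length() - 1

def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, b):
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def ppow(p, e):
    r = 1
    for _ in range(e):
        r = pmul(r, p)
    return r

N = 3
# fields F_2(f) with deg f = 3, encoded as spans <P, Q> = {P, Q, P + Q}
fields = set()
for P in range(1, 16):
    for Q in range(1, 16):
        if Q != P and max(deg(P), deg(Q)) == N and pgcd(P, Q) == 1:
            fields.add(frozenset({P, Q, P ^ Q}))
assert len(fields) == 16                       # q^(2(n-1)) for q = 2, n = 3

GL2 = [(a, b, c, d) for a, b, c, d in product((0, 1), repeat=4) if a * d ^ b * c]

def subst(p, A):
    """Homogenized substitution p -> sum_i p_i (aX+b)^i (cX+d)^(N-i)."""
    a, b, c, d = A
    num, den = 2 * a + b, 2 * c + d            # aX+b and cX+d as bitmasks
    r = 0
    for i in range(N + 1):
        if (p >> i) & 1:
            r ^= pmul(ppow(num, i), ppow(den, N - i))
    return r

def act(A, S):                                 # span of f  ->  span of f o phi_A
    e1, e2 = sorted(S)[:2]                     # a basis of the 2-dimensional span
    f1, f2 = subst(e1, A), subst(e2, A)
    return frozenset({f1, f2, f1 ^ f2})

orbits, todo = [], set(fields)
while todo:
    orbit = {todo.pop()}
    frontier = set(orbit)
    while frontier:
        frontier = {act(A, S) for A in GL2 for S in frontier} - orbit
        orbit |= frontier
    todo -= orbit
    orbits.append(orbit)

assert sum(len(o) for o in orbits) == len(fields)
assert len(orbits) == 4                        # matches N(2,3) = 2q
```

Working with spans quotients out the left action \(f\mapsto\phi\circ f\) first, as in Section 2.1, so the orbit count only has to handle the right action.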
To compute \(\mathfrak{N}(q,n)\) by Burnside's lemma, it suffices to determine the number of such subfields of \(\mathbb{F}_{q}(X)\) fixed by each member \(A\) of \(\operatorname{GL}(2,\mathbb{F}_{q})\). From there on, the computation becomes quite technical and depends on the canonical form of \(A\). The paper is organized as follows: In Section 2, we include some preliminaries and lay out the plan for computing \(\mathfrak{N}(q,n)\). The ingredients of the formula for \(\mathfrak{N}(q,n)\) are computed in Sections 3-5 and the explicit formula for \(\mathfrak{N}(q,n)\) is presented in Section 6. A discussion of low degree rational functions over \(\mathbb{F}_{q}\) is given in Section 7. The last section is devoted to equivalence classes of polynomials over finite fields. The situation is much simpler compared with the case of rational functions. The number of equivalence classes is computed and, as concrete examples, polynomials of degree up to \(5\) are classified. Several counting lemmas used in the paper are gathered in the appendix. ## 2. Preliminaries ### Rational functions and subfields Let \[\mathcal{R}_{q,n}=\{f\in\mathbb{F}_{q}(X):\deg f=n\}. \tag{2.1}\] By Lemma A2, \[|\mathcal{R}_{q,n}|=\begin{cases}q-1&\text{if $n=0$},\\ q^{2n-1}(q^{2}-1)&\text{if $n>0$}.\end{cases}\] For \(f_{1},f_{2}\in\mathbb{F}_{q}(X)\setminus\mathbb{F}_{q}\), we define \(f_{1}\sim f_{2}\) if \(f_{2}=\phi\circ f_{1}\circ\psi\) for some \(\phi,\psi\in G(\mathbb{F}_{q})\) and we define \(f_{1}\stackrel{{\mathrm{L}}}{{\sim}}f_{2}\) if there exists \(\phi\in G(\mathbb{F}_{q})\) such that \(f_{2}=\phi\circ f_{1}\). 
It is clear that \[f_{1}\stackrel{{\mathrm{L}}}{{\sim}}f_{2}\ \Leftrightarrow\ \mathbb{F}_{q}(f_{1})=\mathbb{F}_{q}(f_{2})\] and \[f_{1}\sim f_{2}\ \Leftrightarrow\ \mathbb{F}_{q}(f_{2})=\sigma(\mathbb{F}_{q}(f_{1}))\text{ for some }\sigma\in\operatorname{Aut}(\mathbb{F}_{q}(X)/\mathbb{F}_{q}).\] Recall that \(\mathfrak{N}(q,n)\) denotes the number of \(\sim\) equivalence classes in \(\mathcal{R}_{q,n}\); this number is the main subject of our investigation. For \(f=P/Q\in\mathbb{F}_{q}(X)\setminus\mathbb{F}_{q}\), where \(P,Q\in\mathbb{F}_{q}[X]\), \(\gcd(P,Q)=1\), let \[\mathcal{S}(f)=\langle P,Q\rangle_{\mathbb{F}_{q}}=\{aP+bQ:a,b\in\mathbb{F}_{q}\},\] the \(\mathbb{F}_{q}\)-span of \(\{P,Q\}\). (Throughout this paper, an \(\mathbb{F}_{q}\)-span is denoted by \(\langle\ \rangle_{\mathbb{F}_{q}}\).) Then \(f_{1}\stackrel{{\mathrm{L}}}{{\sim}}f_{2}\Leftrightarrow\mathcal{S}(f_{1})=\mathcal{S}(f_{2})\). By Lüroth's theorem, every subfield \(F\subset\mathbb{F}_{q}(X)\) with \([\mathbb{F}_{q}(X):F]=n<\infty\) is of the form \(F=\mathbb{F}_{q}(f)\), where \(f\in\mathbb{F}_{q}(X)\) is of degree \(n\). The number of such \(F\) is \[\frac{|\mathcal{R}_{q,n}|}{|G(\mathbb{F}_{q})|}=\frac{q^{2n-1}(q^{2}-1)}{q(q^{2}-1)}=q^{2(n-1)}.\] Denote the set of these fields by \(\mathcal{F}_{n}=\{F_{1},\ldots,F_{q^{2(n-1)}}\}\) (Figure 1) and let \(\operatorname{Aut}(\mathbb{F}_{q}(X)/\mathbb{F}_{q})\) act on \(\mathcal{F}_{n}\). Then \(\mathfrak{N}(q,n)\) is precisely the number of orbits of this action. 
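As a sanity check on this count, one can list the spans \(\mathcal{S}(f)=\langle P,Q\rangle_{\mathbb{F}_{2}}\) directly for \(q=2\) (a brute-force sketch; the bitmask encoding of \(\mathbb{F}_{2}[X]\) is our own convention):

```python
# GF(2)[X] polynomials as bitmasks: bit i = coefficient of X^i
def deg(p): return p.bit_length() - 1

def pmod(a, b):
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def count_fields(n):
    """Number of subfields F of GF(2)(X) with [GF(2)(X):F] = n, i.e. of
    spans <P, Q> with P, Q independent, gcd(P, Q) = 1, max degree = n."""
    fields = set()
    for P in range(1, 1 << (n + 1)):
        for Q in range(1, 1 << (n + 1)):
            if Q != P and max(deg(P), deg(Q)) == n and pgcd(P, Q) == 1:
                fields.add(frozenset({P, Q, P ^ Q}))  # nonzero span elements
    return len(fields)

assert count_fields(2) == 4     # q^(2(n-1)) with q = 2, n = 2
assert count_fields(3) == 16    # q^(2(n-1)) with q = 2, n = 3
```

Over \(\mathbb{F}_{2}\), two distinct nonzero polynomials are automatically linearly independent, so the span is just the three-element set \(\{P,Q,P+Q\}\).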
### Conjugacy classes of \(\text{GL}(2,\mathbb{F}_{q})\) Let \[A_{a}=\begin{bmatrix}a&0\\ 0&a\end{bmatrix},\quad a\in\mathbb{F}_{q}^{*},\] \[A_{\{a,b\}}=\begin{bmatrix}a&0\\ 0&b\end{bmatrix},\quad a,b\in\mathbb{F}_{q}^{*},\] \[A_{\{\alpha,\alpha^{q}\}}=\begin{bmatrix}\alpha+\alpha^{q}&-\alpha^{1+q}\\ 1&0\end{bmatrix},\quad\alpha\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q},\] \[B_{a}=\begin{bmatrix}a&a\\ 0&a\end{bmatrix},\quad a\in\mathbb{F}_{q}^{*}.\] Then \[\mathcal{C}:= \{A_{a}:a\in\mathbb{F}_{q}^{*}\}\cup\{A_{\{a,b\}}:a,b\in \mathbb{F}_{q}^{*},\ a\neq b\}\] \[\cup\{A_{\{\alpha,\alpha^{q}\}}:\alpha\in\mathbb{F}_{q^{2}} \setminus\mathbb{F}_{q}\}\cup\{B_{a}:a\in\mathbb{F}_{q}^{*}\} \tag{2.2}\] forms a set of representatives of the conjugacy classes of \(\text{GL}(2,\mathbb{F}_{q})\). Additional information about these representatives is given in Table 1, where \(\text{cent}(A)\) denotes the centralizer of \(A\) in \(\text{GL}(2,\mathbb{F}_{q})\); see [11, §6.3]. ### Burnside's lemma Let \(\text{GL}(2,\mathbb{F}_{q})\) act on \(\mathcal{F}_{n}\) as follows: For \(A=\begin{bmatrix}a&b\\ c&d\end{bmatrix}\in\text{GL}(2,\mathbb{F}_{q})\) and \(\mathbb{F}_{q}(f)\in\mathcal{F}_{n}\), where \(f\in\mathbb{F}_{q}(X)\) is of degree \(n\), \(A(\mathbb{F}_{q}(f))=\mathbb{F}_{q}(f\circ\phi_{A})\), where \(\phi_{A}=(aX+b)/(cX+d)\). By Burnside's lemma, \[\mathfrak{N}(q,n) =\sum_{A\in\mathcal{C}}\frac{\text{Fix}(A)}{|\text{cent}(A)|}\] \[=\frac{1}{q(q-1)^{2}(q+1)}\sum_{a\in\mathbb{F}_{q}^{*}}\text{Fix} (A_{a})+\frac{1}{(q-1)^{2}}\sum_{\{a,b\}\subset\mathbb{F}_{q}^{*},\ a\neq b} \text{Fix}(A_{\{a,b\}})\] \[\quad+\frac{1}{q^{2}-1}\sum_{\{\alpha,\alpha^{q}\}\subset\mathbb{ F}_{q^{2}}\setminus\mathbb{F}_{q}}\text{Fix}(A_{\{\alpha,\alpha^{q}\}})+ \frac{1}{q(q-1)}\sum_{a\in\mathbb{F}_{q}^{*}}\text{Fix}(B_{a}), \tag{2.3}\] where \[\text{Fix}(A)=|\{F\in\mathcal{F}_{n}:A(F)=F\}|.\] Obviously, \[\text{Fix}(A_{a})=|\mathcal{F}_{n}|=q^{2(n-1)}. 
\tag{2.4}\] We will determine \(\text{Fix}(A_{\{a,b\}})\), \(\text{Fix}(A_{\{\alpha,\alpha^{q}\}})\), and \(\text{Fix}(B_{a})\) in the subsequent sections; in doing so, we will need a number of counting lemmas which are given in the appendix. \begin{table} \begin{tabular}{|c|c|c|} \hline \(A\in\mathcal{C}\) & elementary divisors & \(|\text{cent}(A)|\) \\ \hline \(A_{a},\ a\in\mathbb{F}_{q}^{*}\) & \(X-a,\ X-a\) & \(q(q-1)^{2}(q+1)\) \\ \hline \(A_{\{a,b\}},\ a,b\in\mathbb{F}_{q}^{*},\ a\neq b\) & \(X-a,\ X-b\) & \((q-1)^{2}\) \\ \hline \(A_{\{\alpha,\alpha^{q}\}},\ \alpha\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) & \((X-\alpha)(X-\alpha^{q})\) & \(q^{2}-1\) \\ \hline \(B_{a},\ a\in\mathbb{F}_{q}^{*}\) & \((X-a)^{2}\) & \(q(q-1)\) \\ \hline \end{tabular} \end{table} Table 1. Conjugacy classes of \(\text{GL}(2,\mathbb{F}_{q})\) ## 3. Determination of \(\operatorname{Fix}(A_{\{a,b\}})\) Let \(a,b\in\mathbb{F}_{q}^{*}\), \(a\neq b\), and \(c=a/b\). Then \(\phi_{A_{\{a,b\}}}=cX\). Therefore, a field \(\mathbb{F}_{q}(f)\), where \(f\in\mathbb{F}_{q}(X)\setminus\mathbb{F}_{q}\), is fixed by \(A_{\{a,b\}}\) if and only if \(\mathbb{F}_{q}(f(X))=\mathbb{F}_{q}(f(cX))\). **Lemma 3.1**.: _Let \(f\in\mathbb{F}_{q}(X)\) with \(\deg f=n>0\) and \(1\neq c\in\mathbb{F}_{q}^{*}\) with \(o(c)=d\), where \(o(c)\) denotes the multiplicative order of \(c\). Then \(\mathbb{F}_{q}(f(X))=\mathbb{F}_{q}(f(cX))\) if and only if_ \[\mathcal{S}(f)=\langle X^{r_{1}}P_{1}(X^{d}),\,X^{r_{2}}Q_{1}(X^{d})\rangle_{\mathbb{F}_{q}},\] _where \(P_{1},Q_{1}\in\mathbb{F}_{q}[X]\) are monic, \(0\leq r_{1},r_{2}<d\), \(\deg(X^{r_{2}}Q_{1}(X^{d}))<\deg(X^{r_{1}}P_{1}(X^{d}))=n\), and \(\gcd(X^{r_{1}}P_{1},\,X^{r_{2}}Q_{1})=1\)._ Proof.: \((\Leftarrow)\) Obvious. \((\Rightarrow)\) We may assume that \(f=P/Q\), where \(P,Q\in\mathbb{F}_{q}[X]\) are monic, \(\deg P=n\), \(\deg Q=m<n\), \(\gcd(P,Q)=1\), and the coefficient of \(X^{m}\) in \(P\) is \(0\). 
Let \(n\equiv r_{1}\pmod{d}\) and \(m\equiv r_{2}\pmod{d}\), where \(0\leq r_{1},r_{2}<d\). Such a pair \((P,Q)\) is uniquely determined by \(\mathcal{S}(f)\). Since \[\langle P(X),Q(X)\rangle_{\mathbb{F}_{q}}=\mathcal{S}(f)=\mathcal{S}(f(cX))=\langle c^{-n}P(cX),c^{-m}Q(cX)\rangle_{\mathbb{F}_{q}},\] we have \[P(X)=c^{-n}P(cX),\quad Q(X)=c^{-m}Q(cX).\] Thus the coefficient of \(X^{i}\) in \(P(X)\) is \(0\) for all \(i\) with \(i\not\equiv n\pmod{d}\), whence \(P(X)=X^{r_{1}}P_{1}(X^{d})\). In the same way, \(Q(X)=X^{r_{2}}Q_{1}(X^{d})\). Since \(\gcd(P,Q)=1\), we have \(\gcd(X^{r_{1}}P_{1},X^{r_{2}}Q_{1})=1\). In Lemma 3.1, let \(m=\deg(X^{r_{2}}Q_{1}(X^{d}))\). Note that \(\gcd(X^{r_{1}}P_{1},\,X^{r_{2}}Q_{1})=1\) if and only if \(\gcd(P_{1},Q_{1})=1\) plus one of the following: (i) \(r_{1}=r_{2}=0\); (ii) \(r_{1}=0\), \(r_{2}>0\), \(P_{1}(0)\neq 0\); (iii) \(r_{1}>0\), \(r_{2}=0\), \(Q_{1}(0)\neq 0\). When \(r_{1}=r_{2}=0\), i.e., \(n\equiv m\equiv 0\pmod{d}\), the number of the fields \(\mathbb{F}_{q}(f)\) in Lemma 3.1 fixed by \(A_{\{a,b\}}\) is \(q^{-1}\alpha_{m/d,n/d}\), where \[\alpha_{i,j}=|\{(f,g):f,g\in\mathbb{F}_{q}[X]\ \text{monic},\ \deg f=i,\ \deg g=j,\ \gcd(f,g)=1\}|.\] When \(r_{1}=0\) and \(r_{2}>0\), i.e., \(n\equiv 0\pmod{d}\) but \(m\not\equiv 0\pmod{d}\), the number of \(\mathbb{F}_{q}(f)\) fixed by \(A_{\{a,b\}}\) is \(\beta_{n/d,\lfloor m/d\rfloor}\), where \[\beta_{i,j}=|\{(f,g):f,g\in\mathbb{F}_{q}[X]\ \text{monic},\ \deg f=i,\ \deg g=j,\ f(0)\neq 0,\ \gcd(f,g)=1\}|.\] When \(r_{1}>0\) and \(r_{2}=0\), i.e., \(m\equiv 0\pmod{d}\) but \(n\not\equiv 0\pmod{d}\), the number of \(\mathbb{F}_{q}(f)\) fixed by \(A_{\{a,b\}}\) is \(\beta_{m/d,\lfloor n/d\rfloor}\). Define \[\alpha_{j}=|\{(f,g):f,g\in\mathbb{F}_{q}[X]\ \text{monic},\ \deg f<j,\ \deg g=j,\ \gcd(f,g)=1\}|=\sum_{0\leq i<j}\alpha_{i,j}.\] The numbers \(\alpha_{i,j}\), \(\alpha_{j}\) and \(\beta_{i,j}\) are determined in Appendix, Lemmas A1 and A3. 
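The case analysis above can be tested by brute force. For \(q=3\) and \(c=2\) (so \(d=2\)), enumerating the fields \(\mathbb{F}_{3}(f)\) and counting those fixed by \(X\mapsto cX\) reproduces the values \(3\) (for \(n=2\)) and \(7\) (for \(n=3\)) given by the closed forms of Theorem 3.2 below (a sketch; the coefficient-tuple model of \(\mathbb{F}_{3}[X]\) is ours):

```python
from itertools import product

p, c = 3, 2                          # GF(3); phi(X) = cX with o(c) = d = 2

def trim(f):
    f = list(f)
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def deg(f):
    return len(f) - 1                # deg(0) = -1 for the empty tuple

def lin(a, f, b, g):                 # a*f + b*g in GF(3)[X]
    m = max(len(f), len(g))
    f = list(f) + [0] * (m - len(f))
    g = list(g) + [0] * (m - len(g))
    return trim([(a * x + b * y) % p for x, y in zip(f, g)])

def pmod(f, g):
    while f and deg(f) >= deg(g):
        q = f[-1] * pow(g[-1], -1, p) % p
        f = lin(1, f, -q, trim((0,) * (deg(f) - deg(g)) + g))
    return f

def pgcd(f, g):
    while g:
        f, g = g, pmod(f, g)
    return f

def span(P, Q):                      # nonzero elements of <P, Q> over GF(3)
    return frozenset(lin(a, P, b, Q) for a in range(p) for b in range(p)
                     if (a, b) != (0, 0))

def scaled(f):                       # f(X) -> f(cX)
    return trim([x * pow(c, i, p) % p for i, x in enumerate(f)])

def fixed_count(n):
    polys = [trim(f) for f in product(range(p), repeat=n + 1)]
    fields = set()
    for P in polys:
        for Q in polys:
            if P and Q and max(deg(P), deg(Q)) == n and deg(pgcd(P, Q)) == 0:
                S = span(P, Q)
                if len(S) == p * p - 1:          # P, Q independent over GF(3)
                    fields.add(S)
    assert len(fields) == p ** (2 * (n - 1))     # |F_n| = q^(2(n-1))
    return sum(1 for S in fields
               if frozenset(scaled(f) for f in S) == S)

assert fixed_count(2) == 3   # n = 0 mod d: q^(2n/d-2) + (d-1)(q^(2n/d)-1)/(q+1)
assert fixed_count(3) == 7   # n odd: (q^(2*floor(n/d)+1) + 1)/(q+1) = 28/4
```

For \(n=2\) the three fixed fields can also be read off Lemma 3.1 by hand: \(\langle X^{2},1\rangle\) and \(\langle X^{2}+e,X\rangle\) for \(e\in\{1,2\}\).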
**Theorem 3.2**.: _Let \(a,b\in\mathbb{F}_{q}^{*}\), \(a\neq b\), and \(d=o(a/b)\). Then_ \[\operatorname{Fix}(A_{\{a,b\}})=\begin{cases}q^{2n/d-2}+\dfrac{(d-1)(q^{2n/d}-1 )}{q+1}&\text{if }n\equiv 0\pmod{d},\\ \dfrac{q^{2\lfloor n/d\rfloor+1}+1}{q+1}&\text{if }n\not\equiv 0\pmod{d}. \end{cases}\] Proof.: If \(n\equiv 0\pmod{d}\), using Lemmas A1 and A3, we have \[\operatorname{Fix}(A_{\{a,b\}}) =\sum_{\begin{subarray}{c}0\leq m<n\\ m\equiv 0\pmod{d}\end{subarray}}q^{-1}\alpha_{m/d,n/d}+\sum_{\begin{subarray}{c}0 \leq m<n\\ m\not\equiv 0\pmod{d}\end{subarray}}\beta_{n/d,\lfloor m/d\rfloor}\] \[=q^{-1}\sum_{0\leq i<n/d}\alpha_{i,n/d}+\sum_{0\leq i<n/d}(d-1) \beta_{n/d,i}\] \[=q^{-1}\alpha_{n/d}+(d-1)\sum_{0\leq i<n/d}q^{n/d-i-1}(q-1)\frac{q ^{2i+1}+1}{q+1}\] \[=q^{2n/d-2}+\frac{(d-1)(q-1)}{q+1}\sum_{0\leq i<n/d}(q^{n/d}\cdot q ^{i}+q^{n/d-1-i})\] \[=q^{2n/d-2}+\frac{(d-1)(q-1)}{q+1}\Big{(}q^{n/d}\frac{q^{n/d}-1}{ q-1}+\frac{q^{n/d}-1}{q-1}\Big{)}\] \[=q^{2n/d-2}+\frac{(d-1)(q^{2n/d}-1)}{q+1}.\] If \(n\not\equiv 0\pmod{d}\), we have \[\operatorname{Fix}(A_{\{a,b\}}) =\sum_{\begin{subarray}{c}0\leq m<n\\ m\equiv 0\pmod{d}\end{subarray}}\beta_{m/d,\lfloor n/d\rfloor}=\sum_{0\leq i \leq\lfloor n/d\rfloor}\beta_{i,\lfloor n/d\rfloor}\] \[=q^{\lfloor n/d\rfloor}+\sum_{1\leq i\leq\lfloor n/d\rfloor}q^{ \lfloor n/d\rfloor-i}(q-1)\frac{q^{2i}-1}{q+1}\qquad\text{(by Lemma A3)}\] \[=q^{\lfloor n/d\rfloor}+\frac{q-1}{q+1}\sum_{1\leq i\leq\lfloor n/ d\rfloor}(q^{\lfloor n/d\rfloor+1}\cdot q^{i-1}-q^{\lfloor n/d\rfloor-i})\] \[=q^{\lfloor n/d\rfloor}+\frac{q-1}{q+1}\Big{(}q^{\lfloor n/d \rfloor+1}\,\frac{q^{\lfloor n/d\rfloor}-1}{q-1}-\frac{q^{\lfloor n/d\rfloor} -1}{q-1}\Big{)}\] \[=q^{\lfloor n/d\rfloor}+\frac{(q^{\lfloor n/d\rfloor}-1)(q^{ \lfloor n/d\rfloor+1}-1)}{q+1}\] \[=\frac{q^{2\lfloor n/d\rfloor+1}+1}{q+1}.\] ## 4. 
Determination of \(\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}})\) Let \[A=A_{\{\alpha,\alpha^{q}\}}=\begin{bmatrix}\alpha+\alpha^{q}&-\alpha^{1+q}\\ 1&0\end{bmatrix},\quad\alpha\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}.\] We have \[BAB^{-1}=D, \tag{4.1}\] where \[D=\begin{bmatrix}\alpha^{q}&0\\ 0&\alpha\end{bmatrix},\quad B=\begin{bmatrix}1&-\alpha\\ 1&-\alpha^{q}\end{bmatrix}\in\operatorname{GL}(2,\mathbb{F}_{q^{2}}).\] Note that \(\phi_{D}=\alpha^{q-1}X\in G(\mathbb{F}_{q^{2}})\). **Lemma 4.1**.: _Let \(f\in\mathbb{F}_{q}(X)\setminus\mathbb{F}_{q}\) and \(g=f\circ\phi_{B}^{-1}\in\mathbb{F}_{q^{2}}(X)\). Then \(\mathbb{F}_{q}(f)\) is fixed by \(A\) if and only if \(\mathbb{F}_{q^{2}}(g)\) is fixed by \(D\)._ Proof.: We have \[\mathbb{F}_{q^{2}}(g)\text{ is fixed by }D\] \[\Leftrightarrow g\circ\phi_{D}=\psi\circ g\text{ for some }\psi\in G( \mathbb{F}_{q^{2}})\] \[\Leftrightarrow f\circ\phi_{A}=\psi\circ f\text{ for some }\psi\in G( \mathbb{F}_{q^{2}})\] (by (4.1)) \[\Leftrightarrow f\circ\phi_{A}=\psi\circ f\text{ for some }\psi\in G( \mathbb{F}_{q})\] (by Lemma 4.2) \[\Leftrightarrow\mathbb{F}_{q}(f)\text{ is fixed by }A.\] **Lemma 4.2**.: _Let \(f_{1},f_{2}\in\mathbb{F}_{q}(X)\setminus\mathbb{F}_{q}\) be such that there exists \(\psi\in G(\mathbb{F})\), where \(\mathbb{F}\) is an extension of \(\mathbb{F}_{q}\), such that \(f_{2}=\psi\circ f_{1}\). Then there exists \(\theta\in G(\mathbb{F}_{q})\) such that \(f_{2}=\theta\circ f_{1}\)._ Proof.: Let \(f_{i}=P_{i}/Q_{i}\), where \(P_{i},Q_{i}\in\mathbb{F}_{q}[X]\) and \(\gcd(P_{i},Q_{i})=1\). It suffices to show that there exist \(a_{0},b_{0},c_{0},d_{0}\in\mathbb{F}_{q}\) such that \[\begin{bmatrix}a_{0}&b_{0}\\ c_{0}&d_{0}\end{bmatrix}\begin{bmatrix}P_{1}\\ Q_{1}\end{bmatrix}=\begin{bmatrix}P_{2}\\ Q_{2}\end{bmatrix}. 
\tag{4.2}\] By assumption, there exist \(a,b,c,d\in\mathbb{F}\) such that \[\begin{bmatrix}a&b\\ c&d\end{bmatrix}\begin{bmatrix}P_{1}\\ Q_{1}\end{bmatrix}=\begin{bmatrix}P_{2}\\ Q_{2}\end{bmatrix}.\] Write \(\mathbb{F}=\mathbb{F}_{q}\oplus V\) as a direct sum of \(\mathbb{F}_{q}\)-subspaces, and write \(a=a_{0}+a_{1}\), \(b=b_{0}+b_{1}\), \(c=c_{0}+c_{1}\), \(d=d_{0}+d_{1}\), where \(a_{0},b_{0},c_{0},d_{0}\in\mathbb{F}_{q}\) and \(a_{1},b_{1},c_{1},d_{1}\in V\). Then \[\begin{bmatrix}a_{0}&b_{0}\\ c_{0}&d_{0}\end{bmatrix}\begin{bmatrix}P_{1}\\ Q_{1}\end{bmatrix}+\begin{bmatrix}a_{1}&b_{1}\\ c_{1}&d_{1}\end{bmatrix}\begin{bmatrix}P_{1}\\ Q_{1}\end{bmatrix}=\begin{bmatrix}P_{2}\\ Q_{2}\end{bmatrix}.\] Comparing the coefficients in the above gives (4.2). **Lemma 4.3**.: _For \(g\in\mathbb{F}_{q^{2}}(X)\), \(g\circ\phi_{B}\in\mathbb{F}_{q}(X)\) if and only if \(\bar{g}(X)=g(X^{-1})\), where \(\bar{g}\) denotes the rational function obtained by applying \((\ )^{q}\) to the coefficients of \(g\)._ Proof.: Recall that \(\phi_{B}(X)=(X-\alpha)/(X-\alpha^{q})\). Since \(\bar{\phi}_{B}=X^{-1}\circ\phi_{B}\), we have \[g\circ\phi_{B}\in\mathbb{F}_{q}(X) \Leftrightarrow\overline{g\circ\phi_{B}}=g\circ\phi_{B}\] \[\Leftrightarrow\bar{g}\circ X^{-1}\circ\phi_{B}=g\circ\phi_{B}\] \[\Leftrightarrow\bar{g}=g\circ X^{-1}.\] Lemmas 4.1 and 4.3 suggest the following strategy (which we will follow) to determine \(\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}})\): 1. Determine all \(g\in\mathbb{F}_{q^{2}}(X)\) of degree \(n\) such that \(\mathbb{F}_{q^{2}}(g(\alpha^{q-1}X))=\mathbb{F}_{q^{2}}(g(X))\). 2. Among all \(g\)'s in Step 1, determine those such that \(\bar{g}(X)=g(X^{-1})\). 3. Conclude that \(\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}})=|G(\mathbb{F}_{q})|^{-1}\cdot( \text{the number of }g\text{'s in Step 2})\). We now carry out these steps in detail. 
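The identity \(\bar{\phi}_{B}=X^{-1}\circ\phi_{B}\) underlying Lemma 4.3 can be checked pointwise. Over \(\mathbb{F}_{4}=\mathbb{F}_{2}[\alpha]/(\alpha^{2}+\alpha+1)\) a degree-one map is determined by its values at three points of \(\mathbb{P}^{1}(\mathbb{F}_{4})\), so testing a few values suffices (a sketch; the two-bit encoding of \(\mathbb{F}_{4}\) is ours):

```python
# F_4 elements as 2-bit ints: bit 0 = 1, bit 1 = alpha, with alpha^2 = alpha + 1
def mul(a, b):
    r = 0
    if b & 1:
        r ^= a
    if b & 2:
        r ^= a << 1
    if r & 4:
        r ^= 0b111          # reduce alpha^2 -> alpha + 1
    return r

def inv(a):
    return mul(a, a)        # a^3 = 1 for a != 0, so a^(-1) = a^2

alpha = 0b10                # a fixed root of X^2 + X + 1; here q = 2, alpha^q = alpha^2
alpha_q = mul(alpha, alpha)

def phi_B(t):               # phi_B(t) = (t - alpha)/(t - alpha^q); minus is plus in char 2
    return mul(t ^ alpha, inv(t ^ alpha_q))

def phi_B_bar(t):           # phi_B with its coefficients conjugated by x -> x^q
    return mul(t ^ alpha_q, inv(t ^ alpha))

# phi_B_bar = X^{-1} o phi_B wherever both sides are defined and nonzero
for t in (0, 1):
    assert phi_B_bar(t) == inv(phi_B(t))
# at infinity both maps give the ratio of leading coefficients, 1 = 1^{-1}
```

The two finite test points together with infinity pin down both degree-one maps, so the assertions above already verify the identity on all of \(\mathbb{P}^{1}(\mathbb{F}_{4})\).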
**Step 1.** Determine all \(g\in\mathbb{F}_{q^{2}}(X)\) of degree \(n\) such that \(\mathbb{F}_{q^{2}}(g(\alpha^{q-1}X))=\mathbb{F}_{q^{2}}(g(X))\). Let \(d=o(\alpha^{q-1})\). By Lemma 3.1, for \(g\in\mathbb{F}_{q^{2}}(X)\) with \(\deg g=n\), \(\mathbb{F}_{q^{2}}(g(\alpha^{q-1}X))=\mathbb{F}_{q^{2}}(g(X))\) if and only if \[\mathcal{S}(g)=\langle X^{r_{1}}P_{1}(X^{d}),\,X^{r_{2}}Q_{1}(X^{d})\rangle_{ \mathbb{F}_{q^{2}}}, \tag{4.3}\] where \(0\leq r_{1},r_{2}<d\), \(P_{1},Q_{1}\in\mathbb{F}_{q^{2}}[X]\) are monic, \(\deg(X^{r_{2}}Q_{1}(X^{d}))<\deg(X^{r_{1}}P_{1}(X^{d}))=n\), \(\gcd(X^{r_{1}}P_{1},X^{r_{2}}Q_{1})=1\), and \(\langle\ \ \rangle_{\mathbb{F}_{q^{2}}}\) is the \(\mathbb{F}_{q^{2}}\)-span. In (4.3), let \(m=\deg(X^{r_{2}}Q_{1}(X^{d}))\). Note that \(n\equiv r_{1}\pmod{d}\), \(m\equiv r_{2}\pmod{d}\), \(\gcd(P_{1},Q_{1})=1\), and one of the following holds: (i) \(r_{1}=r_{2}=0\); (ii) \(r_{1}=0\), \(r_{2}>0\), \(P_{1}(0)\neq 0\); (iii) \(r_{1}>0\), \(r_{2}=0\), \(Q_{1}(0)\neq 0\). Let \(g\in\mathbb{F}_{q^{2}}(X)\) satisfy (4.3), i.e., \[g=\frac{sX^{r_{1}}P_{1}(X^{d})+tX^{r_{2}}Q_{1}(X^{d})}{uX^{r_{1}}P_{1}(X^{d})+ vX^{r_{2}}Q_{1}(X^{d})}, \tag{4.4}\] where \([\begin{smallmatrix}s&t\\ u&v\end{smallmatrix}]\in\operatorname{GL}(2,\mathbb{F}_{q^{2}})\). **Step 2.** Among all \(g\)'s in Step 1, determine those such that \(\bar{g}(X)=g(X^{-1})\). 
For fixed \(r_{1}\) and \(r_{2}\), let \(N(r_{1},r_{2})\) denote the number of \(g\) satisfying (4.3) and (4.4) such that \(\bar{g}(X)=g(X^{-1})\). Consider Case (i), i.e., \(r_{1}=r_{2}=0\), and write \(k=n/d\). First assume that \(l_{1}=k\) and \(l_{2}\) (\(k/2\leq l_{2}\leq k\)) is fixed. The number of \(g\) satisfying (4.3) and \(\bar{g}(X)=g(X^{-1})\) is \[(q-1)\Gamma_{k,2l_{2}-k},\] where \[\Gamma_{i,j}=|\{(f_{1},f_{2}): f_{1},f_{2}\in\mathbb{F}_{q^{2}}[X]\ \text{are monic and self-dual},\] \[\deg f_{1}=i,\ \deg f_{2}=j,\ \gcd(f_{1},f_{2})=1\}|.\] The number \(\Gamma_{i,j}\) is determined in Appendix, Lemma A5. Next, assume that \(l_{2}=k\) and \(l_{1}<k\) is fixed. By the same argument, the number of \(g\) satisfying (4.3) and \(\bar{g}(X)=g(X^{-1})\) is \((q-1)\Gamma_{k,2l_{1}-k}\). 
Therefore, the total number of \(g\) satisfying (4.3) and \(\bar{g}(X)=g(X^{-1})\) in Case (i) is \[N(0,0) =(q-1)\sum_{k/2\leq l_{2}\leq k}\Gamma_{k,2l_{2}-k}+(q-1)\sum_{k/2 \leq l_{1}<k}\Gamma_{k,2l_{1}-k}\] \[=(q-1)\Big{(}2\sum_{\begin{subarray}{c}0\leq i<k\\ i\equiv k\,(\operatorname{mod}2)\end{subarray}}\Gamma_{i,k}+\Gamma_{k,k}\Big{)}.\] If \(k=2k_{1}\), \[N(0,0) =(q-1)\Big{(}2\sum_{0\leq i<k_{1}}\Gamma_{2i,2k_{1}}+\Gamma_{2k_ {1},2k_{1}}\Big{)}\] \[=(q-1)\Big{[}2\Big{(}q^{2k_{1}-1}(q+1)+\sum_{1\leq i<k_{1}}\frac{ q^{2(k_{1}-i)-1}(q+1)(q^{2}-1)}{q^{2}+1}(q^{4i}-1)\Big{)}\] \[\qquad\qquad+\frac{q(q+1)}{q^{2}+1}(q^{4k_{1}}-q^{4k_{1}-2}-2) \Big{]}\qquad\qquad\qquad\text{(by Lemma A5)}\] \[=(q-1)\Big{[}2q^{2k_{1}-1}(q+1)+2\frac{(q+1)(q^{2}-1)q^{2k_{1}-1} }{q^{2}+1}\sum_{1\leq i<k_{1}}(q^{2i}-q^{-2i})\] \[\qquad\qquad+\frac{q(q+1)}{q^{2}+1}(q^{4k_{1}}-q^{4k_{1}-2}-2) \Big{]}\] \[=(q^{2}-1)\Big{[}2q^{2k_{1}-1}+\frac{2(q^{2}-1)q^{2k_{1}-1}}{q^{2 }+1}\Big{(}q^{2}\frac{1-q^{2(k_{1}-1)}}{1-q^{2}}-(q^{-2}\frac{1-q^{-2(k_{1}-1)} }{1-q^{-2}}\Big{)}\] \[\qquad\qquad+\frac{q}{q^{2}+1}(q^{4k_{1}}-q^{4k_{1}-2}-2)\Big{]}\] \[=(q^{2}-1)q^{4k_{1}-1}.\] If \(k=2k_{1}+1\), \[N(0,0) =(q-1)\Big{(}2\sum_{0\leq i<k_{1}}\Gamma_{2i+1,2k_{1}+1}+\Gamma_{2 k_{1}+1,2k_{1}+1}\Big{)}\] \[=(q-1)\Big{[}2\sum_{0\leq i<k_{1}}\frac{q^{2(k_{1}-i)-1}(q+1)(q^{ 2}-1)}{q^{2}+1}(q^{4i+2}+1)\] \[\qquad\qquad+\frac{q(q+1)}{q^{2}+1}(q^{4k_{1}+2}-q^{4k_{1}}+2) \Big{]}\qquad\qquad\qquad\text{(by Lemma A5)}\] \[=(q-1)\Big{[}2\frac{(q+1)(q^{2}-1)q^{2k_{1}-1}}{q^{2}+1}\sum_{0 \leq i<k_{1}}(q^{2i+2}+q^{-2i})\] \[\begin{cases}s\epsilon=cs,\\ \bar{u}\epsilon=cu,\\ \bar{t}\delta=ct,\\ \bar{v}\delta=cv.\end{cases}\] Under the assumption that \(\det\left[\begin{smallmatrix}s&t\\ u&v\end{smallmatrix}\right]\neq 0\), (4.10) implies that \(c\in\mu_{q+1}\). Write \(\epsilon=\epsilon_{0}^{q-1}\), \(\delta=\delta_{0}^{q-1}\) and \(c=c_{0}^{q-1}\), where \(\epsilon_{0},\delta_{0},c_{0}\in\mathbb{F}_{q^{2}}^{*}\). 
Then (4.10) is satisfied if and only if \[\begin{bmatrix}s&t\\ u&v\end{bmatrix}=\begin{bmatrix}s_{1}c_{0}/\epsilon_{0}&t_{1}c_{0}/\delta_{0} \\ u_{1}c_{0}/\epsilon_{0}&v_{1}c_{0}/\delta_{0}\end{bmatrix},\] where \(s_{1},t_{1},u_{1},v_{1}\in\mathbb{F}_{q}\). Therefore, the number of \(\left[\begin{smallmatrix}s&t\\ u&v\end{smallmatrix}\right]\) satisfying (4.9) is \[(q+1)\left|\mathrm{GL}(2,\mathbb{F}_{q})\right|=q(q^{2}-1)^{2}.\] To recap, when \(d\) is even, \(r_{2}=d/2\) and \(l\) (\((k-1)/2\leq l\leq k-1\)) is fixed, the number of \(g\) satisfying (4.3) and \(\bar{g}(X)=g(X^{-1})\) is \[\frac{1}{q^{2}-1}q(q^{2}-1)^{2}\,\Gamma_{k,2l-k+1}=q(q^{2}-1)\Gamma_{k,2l-k+1}.\] Hence, when \(d\) is even, \[N(0,r_{2})=q(q^{2}-1)\sum_{(k-1)/2\leq l\leq k-1}\Gamma_{k,2l-k+1}=q(q^{2}-1) \sum_{\begin{subarray}{c}0\leq i\leq k-1\\ i\equiv k-1\,(\text{mod}\,2)\end{subarray}}\Gamma_{i,k}.\] In the above, if \(k=2k_{1}\), \[N(0,r_{2}) =q(q^{2}-1)\sum_{1\leq i\leq k_{1}}\Gamma_{2i-1,2k_{1}}\] \[=q(q^{2}-1)\sum_{1\leq i\leq k_{1}}\frac{q^{2k_{1}-(2i-1)-1}(q+1) (q^{2}-1)}{q^{2}+1}(q^{4i-2}+1)\] (by Lemma A5) \[=\frac{q(q+1)(q^{2}-1)^{2}}{q^{2}+1}q^{2k_{1}}\sum_{1\leq i\leq k _{1}}(q^{2i-2}+q^{-2i})\] \[=\frac{(q+1)(q^{2}-1)^{2}q^{2k_{1}+1}}{q^{2}+1}\Big{(}\frac{1-q^ {2k_{1}}}{1-q^{2}}+q^{-2}\frac{1-q^{-2k_{1}}}{1-q^{-2}}\Big{)}\] \[=\frac{(q+1)(q^{2}-1)^{2}q^{2k_{1}+1}}{q^{2}+1}\cdot\frac{q^{-2k_ {1}}(q^{4k_{1}}-1)}{q^{2}-1}\] \[=\frac{q(q+1)(q^{2}-1)(q^{4k_{1}}-1)}{q^{2}+1}.\] If \(k=2k_{1}+1\), \[N(0,r_{2}) =q(q^{2}-1)\sum_{0\leq i\leq k_{1}}\Gamma_{2i,2k_{1}+1}\] \[=q(q^{2}-1)\Big{[}q^{2k_{1}}(q+1)+\sum_{1\leq i\leq k_{1}}\frac{ q^{2k_{1}+1-2i-1}(q+1)(q^{2}-1)}{q^{2}+1}(q^{4i}-1)\Big{]}\] (by Lemma A5) \[=q(q^{2}-1)(q+1)\Big{[}q^{2k_{1}}+\frac{(q^{2}-1)q^{2k_{1}}}{q^{2 }+1}\sum_{1\leq i\leq k_{1}}(q^{2i}-q^{-2i})\Big{]}\] \[=q(q^{2}-1)(q+1)\Big{[}q^{2k_{1}}+\frac{(q^{2}-1)q^{2k_{1}}}{q^{2 }+1}\Big{(}q^{2}\frac{1-q^{2k_{1}}}{1-q^{2}}-q^{-2}\frac{1-q^{-2k_{1}}}{1-q^{-2 
}}\Big{)}\Big{]}\] \[=q(q^{2}-1)(q+1)\frac{1+q^{4k_{1}+2}}{1+q^{2}}\] \[=\frac{q(q+1)(q^{2}-1)(q^{4k_{1}+2}+1)}{q^{2}+1}.\] To summarize, we have \[N(0,r_{2})=\begin{cases}\frac{q(q+1)(q^{2}-1)(q^{2k}-(-1)^{k})}{q^{2}+1}&\text {if $d$ is even,}\\ 0&\text{if $d$ is odd.}\end{cases} \tag{4.11}\] **Case (iii)** Assume \(r_{1}>0\), \(r_{2}=0\) and \(Q_{1}(0)\neq 0\). By (4.4), \[g(X^{-1})=\frac{sX^{n-r_{1}}P_{1}(X^{-d})+tX^{n}Q_{1}(X^{-d})}{uX^{n-r_{1}}P_{1}( X^{-d})+vX^{n}Q_{1}(X^{-d})}.\] Hence \(\bar{g}(X)=g(X^{-1})\) if and only if \[\begin{cases}\bar{s}X^{r_{1}}\overline{P_{1}}(X^{d})+\bar{t}\,\overline{Q_{1}} (X^{d})=c\big{[}sX^{n-r_{1}}P_{1}(X^{-d})+tX^{n}Q_{1}(X^{-d})\big{]},\\ \bar{u}X^{r_{1}}\overline{P_{1}}(X^{d})+\bar{v}\overline{Q_{1}}(X^{d})=c\big{[} uX^{n-r_{1}}P_{1}(X^{-d})+vX^{n}Q_{1}(X^{-d})\big{]}\end{cases}\] for some \(c\in\mathbb{F}_{q^{2}}^{*}\), which is equivalent to \[\begin{cases}\bar{s}X^{r_{1}}\overline{P_{1}}(X^{d})=ctX^{n}Q_{1}(X^{-d}),\\ \bar{u}X^{r_{1}}\overline{P_{1}}(X^{d})=cvX^{n}Q_{1}(X^{-d}),\\ \bar{t}\,\overline{Q_{1}}(X^{d})=csX^{n-r_{1}}P_{1}(X^{-d}),\\ \bar{v}\overline{Q_{1}}(X^{d})=cuX^{n-r_{1}}P_{1}(X^{-d}).\end{cases} \tag{4.12}\] Under the assumption that \(\det\left[\begin{smallmatrix}s&t\\ u&v\end{smallmatrix}\right]\neq 0\), (4.12) implies that \(s,t,u,v\neq 0\) and \(c\in\mu_{q+1}\). Without loss of generality, assume \(s=1\). Then (4.12) becomes \[\begin{cases}\overline{P_{1}}(X)=ctX^{k}Q_{1}(X^{-1}),\\ c\in\mu_{q+1},\\ v=\bar{u}t,\end{cases} \tag{4.13}\] where \(k=(n-r_{1})/d\). Moreover, \[\det\begin{bmatrix}1&t\\ u&v\end{bmatrix}=\det\begin{bmatrix}1&t\\ u&\bar{u}t\end{bmatrix}=t(\bar{u}-u),\] which is nonzero if and only if \(t\in\mathbb{F}_{q^{2}}^{*}\) and \(u\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\). Condition (4.13) implies that \[\widetilde{P}_{1}=X^{k}\overline{P}_{1}(X^{-1})=ctQ_{1}(X),\] where \(\gcd(P_{1},\widetilde{P}_{1})=\gcd(P_{1},Q_{1})=1\). 
On the other hand, to satisfy (4.13) with \(u\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), we first choose monic \(P_{1}(X)\in\mathbb{F}_{q^{2}}[X]\) of degree \(k\) such that \(\gcd(P_{1},\widetilde{P}_{1})=1\); the number of choices of such \(P_{1}\), denoted by \(\Theta_{k}\), is determined in Appendix, Lemma A4. Next, let \(Q_{1}(X)=\epsilon X^{k}\overline{P_{1}}(X^{-1})\), where \(\epsilon\in\mathbb{F}_{q^{2}}^{*}\) is such that \(Q_{1}\) is monic. Afterwards, choose \(c\in\mu_{q+1}\) and \(u\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) arbitrarily, and let \(t\) and \(v\) be uniquely determined by (4.13). Hence the total number of \(g\) satisfying (4.3) and \(\bar{g}(X)=g(X^{-1})\) in Case (iii) is \[N(r_{1},0) =(q+1)(q^{2}-q)\Theta_{k}\] \[=\frac{q(q^{2}-1)}{1+q^{2}}\big{[}(-1)^{k}(1+q)+q^{2k+1}(q-1)\big{]} \text{ (by Lemma A4)}. \tag{4.14}\] **Step 3.** We have \[\text{Fix}(A_{\{\alpha,\alpha^{q}\}})=\frac{1}{|G(\mathbb{F}_{q})|}(\text{the number of $g$'s in Step 2}).\] **Theorem 4.4**.: _Let \(\alpha\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) with \(o(\alpha^{q-1})=d\). Then_ \[\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}})=\begin{cases}q^{2n/d-2}+\dfrac{(q+1)(q^{2n/d}-(-1)^{n/d})}{q^{2}+1}&\text{if $d\mid n$ and $d$ is even},\\ q^{2n/d-2}&\text{if $d\mid n$ and $d$ is odd},\\ \dfrac{1}{1+q^{2}}\big{[}(-1)^{\lfloor n/d\rfloor}(1+q)+q^{2\lfloor n/d\rfloor+1}(q-1)\big{]}&\text{if $d\nmid n$}.\end{cases}\] Proof.: \(1^{\circ}\) Assume that \(d\mid n\) and \(d\) is even. By (4.8) and (4.11), \[\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}}) =\dfrac{1}{q(q^{2}-1)}\Big{[}(q^{2}-1)q^{2n/d-1}+\dfrac{q(q+1)(q^{2}-1)(q^{2n/d}-(-1)^{n/d})}{q^{2}+1}\Big{]}\] \[=q^{2n/d-2}+\dfrac{(q+1)(q^{2n/d}-(-1)^{n/d})}{q^{2}+1}.\] \(2^{\circ}\) Assume that \(d\mid n\) and \(d\) is odd. By (4.8) and (4.11), \[\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}})=\dfrac{1}{q(q^{2}-1)}q^{2n/d-1}=q^{2n/d-2}.\] \(3^{\circ}\) Assume that \(d\nmid n\). 
By (4.14), \[\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}}) =\dfrac{1}{q(q^{2}-1)}\cdot\dfrac{q(q^{2}-1)}{1+q^{2}}\big{[}(-1)^{k}(1+q)+q^{2k+1}(q-1)\big{]}\] \[=\dfrac{1}{1+q^{2}}\big{[}(-1)^{k}(1+q)+q^{2k+1}(q-1)\big{]}.\] ## 5. Determination of \(\operatorname{Fix}(B_{a})\) ### A useful lemma Let \(p=\operatorname{char}\mathbb{F}_{q}\). Every \(f(X)\in\mathbb{F}_{q}[X]\) has a representation \[f(X)=g_{p-1}(X^{p}-X)X^{p-1}+g_{p-2}(X^{p}-X)X^{p-2}+\dots+g_{0}(X^{p}-X), \tag{5.1}\] where \(g_{i}\in\mathbb{F}_{q}[X]\). Define \(\Delta f=f(X+1)-f(X)\). Then \(\Delta^{p}f=0\), and for \(0\leq i\leq p-1\), \[\Delta^{i}f=g_{i}(X^{p}-X)i!+\sum_{j=i+1}^{p-1}g_{j}(X^{p}-X)\Delta^{i}X^{j}.\] It follows that \(g_{i}\) in (5.1) are uniquely determined by \(f\). **Lemma 5.1**.: _Let \(0\leq i\leq p-1\). Then \(\Delta^{i}f=0\) if and only if \(g_{j}=0\) for all \(i\leq j\leq p-1\) in (5.1)._ Proof.: (\(\Leftarrow\)) Obvious. (\(\Rightarrow\)) Assume the contrary. Let \(j_{0}\) be the largest \(j\) such that \(g_{j}\neq 0\). Then \(i\leq j_{0}\leq p-1\). We have \[\Delta^{i}f =g_{j_{0}}(X^{p}-X)\Delta^{i}X^{j_{0}}+\sum_{j<j_{0}}g_{j}(X^{p}-X)\Delta^{i}X^{j}\] \[=g_{j_{0}}(X^{p}-X)\binom{j_{0}}{i}X^{j_{0}-i}+\sum_{j<j_{0}-i}h_{j}(X^{p}-X)X^{j}\qquad\ (h_{j}\in\mathbb{F}_{q}[X])\] \[\begin{cases}\alpha_{n/p}&\text{if }n\equiv 0\pmod{p},\\ 1&\text{if }n=1,\\ \frac{q-1}{q}(\alpha_{(n-1)/p}+\alpha_{(n-1)/p,(n-1)/p})&\text{if }n\equiv 1\pmod{p},\ n>1,\\ 0&\text{otherwise}.\end{cases}\] Recall that \(\alpha_{i}\) and \(\alpha_{i,j}\) are given by Lemma A1. When \(n\equiv 0\pmod{p}\), \[\text{Fix}(B_{a})=\alpha_{n/p}=q^{2n/p-1}.\] When \(n\equiv 1\pmod{p}\) and \(n>1\), \[\operatorname{Fix}(B_{a})=\frac{q-1}{q}(q^{2(n-1)/p-1}+q^{2(n-1)/p}(1-q^{-1}))=q^{2(n-1)/p-1}(q-1).\] To summarize, \[\operatorname{Fix}(B_{a})=\begin{cases}q^{2n/p-1}&\text{if }n\equiv 0\pmod{p},\\ 1&\text{if }n=1,\\ q^{2(n-1)/p-1}(q-1)&\text{if }n\equiv 1\pmod{p},\ n>1,\\ 0&\text{otherwise}.\end{cases} \tag{5.5}\] ## 6. 
The Main Theorem **Theorem 6.1**.: _For \(n\geq 1\), we have_ \[\mathfrak{N}(q,n)=\frac{q^{2n-3}}{q^{2}-1}+\frac{1}{2(q-1)}\mathfrak{A}(q,n)+\frac{1}{2(q+1)}\mathfrak{B}(q,n)+\frac{1}{q}\mathfrak{C}(q,n), \tag{6.1}\] _where_ \[\mathfrak{A}(q,n)=\sum_{\begin{subarray}{c}1<d\,|\,q-1\\ d\,|\,n\end{subarray}}\phi(d)\Big{(}q^{2n/d-2}+\frac{(d-1)(q^{2n/d}-1)}{q+1}\Big{)}+\sum_{\begin{subarray}{c}1<d\,|\,q-1\\ d\,\nmid\,n\end{subarray}}\phi(d)\frac{q^{2\lfloor n/d\rfloor+1}+1}{q+1}, \tag{6.2}\] \[\mathfrak{B}(q,n)= \sum_{\begin{subarray}{c}d\text{ even}\\ d\,|\,\gcd(q+1,n)\end{subarray}}\phi(d)\Big{(}q^{2n/d-2}+\frac{(q+1)(q^{2n/d}-(-1)^{n/d})}{q^{2}+1}\Big{)}\] \[+\sum_{\begin{subarray}{c}d\text{ odd}\\ 1<d\,|\,\gcd(q+1,n)\end{subarray}}\phi(d)q^{2n/d-2}\] \[+\frac{1}{q^{2}+1}\sum_{\begin{subarray}{c}d\,|\,q+1\\ d\,\nmid\,n\end{subarray}}\phi(d)\big{(}(-1)^{\lfloor n/d\rfloor}(1+q)+q^{2\lfloor n/d\rfloor+1}(q-1)\big{)}, \tag{6.3}\] \[\mathfrak{C}(q,n)=\begin{cases}q^{2n/p-1}&\text{if }n\equiv 0\pmod{p},\\ 1&\text{if }n=1,\\ q^{2(n-1)/p-1}(q-1)&\text{if }n\equiv 1\pmod{p},\ n>1,\\ 0&\text{otherwise}.\end{cases} \tag{6.4}\] _In (6.2) and (6.3), \(\phi\) is the Euler function._ Proof.: We have \[\mathfrak{N}(q,n)=\frac{1}{q(q-1)^{2}(q+1)}\sum_{a\in\mathbb{F}_{q}^{*}}\operatorname{Fix}(A_{a})+\frac{1}{(q-1)^{2}}\sum_{\begin{subarray}{c}\{a,b\}\subset\mathbb{F}_{q}^{*}\\ a\neq b\end{subarray}}\operatorname{Fix}(A_{\{a,b\}})\] \[+\frac{1}{q^{2}-1}\sum_{\{\alpha,\alpha^{q}\}\subset\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}}\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}})+\frac{1}{q(q-1)}\sum_{a\in\mathbb{F}_{q}^{*}}\operatorname{Fix}(B_{a}).\] We now compute the four sums in the above. 
\(1^{\circ}\) We have \[\sum_{a\in\mathbb{F}_{q}^{*}}\operatorname{Fix}(A_{a})=(q-1)q^{2(n-1)}.\] \(2^{\circ}\) We have \[\sum_{\begin{subarray}{c}\{a,b\}\subset\mathbb{F}_{q}^{*}\\ a\neq b\end{subarray}}\operatorname{Fix}(A_{\{a,b\}})=\frac{1}{2}\sum_{a\in\mathbb{F}_{q}^{*}}\sum_{b\in\mathbb{F}_{q}^{*}\setminus\{1\}}\operatorname{Fix}(A_{\{ab,a\}})=\frac{q-1}{2}\sum_{b\in\mathbb{F}_{q}^{*}\setminus\{1\}}\operatorname{Fix}(A_{\{b,1\}})\] \[=\frac{q-1}{2}\Big{[}\sum_{\begin{subarray}{c}1<d\,|\,q-1\\ d\,|\,n\end{subarray}}\phi(d)\Big{(}q^{2n/d-2}+\frac{(d-1)(q^{2n/d}-1)}{q+1}\Big{)}+\sum_{\begin{subarray}{c}1<d\,|\,q-1\\ d\,\nmid\,n\end{subarray}}\phi(d)\frac{q^{2\lfloor n/d\rfloor+1}+1}{q+1}\Big{]}\] (by Theorem 3.2) \[=\frac{q-1}{2}\mathfrak{A}(q,n).\] \(3^{\circ}\) By Theorem 4.4, \[\sum_{\{\alpha,\alpha^{q}\}\subset\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}}\operatorname{Fix}(A_{\{\alpha,\alpha^{q}\}})\] \[=\frac{q-1}{2}\Big{[}\sum_{\begin{subarray}{c}d\text{ even}\\ d\,|\gcd(q+1,n)\end{subarray}}\phi(d)\Big{(}q^{2n/d-2}+\frac{(q+1)(q^{2n/d}-(-1)^{n/d})}{q^{2}+1}\Big{)}\] \[+\sum_{\begin{subarray}{c}d\text{ odd}\\ 1<d\,|\gcd(q+1,n)\end{subarray}}\phi(d)q^{2n/d-2}\] \[+\frac{1}{q^{2}+1}\sum_{\begin{subarray}{c}d\,|\,q+1\\ d\,\nmid\,n\end{subarray}}\phi(d)\big{(}(-1)^{\lfloor n/d\rfloor}(1+q)+q^{2\lfloor n/d\rfloor+1}(q-1)\big{)}\Big{]}\] \[=\frac{q-1}{2}\mathfrak{B}(q,n).\] \(4^{\circ}\) By (5.5), \[\sum_{a\in\mathbb{F}_{q}^{*}}\operatorname{Fix}(B_{a})=(q-1)\mathfrak{C}(q,n).\] ## 7. 
\(\mathfrak{N}(q,n)\) for Small n ### \(n=1\) We have \[\mathfrak{A}(q,1)=\sum_{1<d\,|\,q-1}\phi(d)=q-2,\] \[\mathfrak{B}(q,1)=\frac{1}{q^{2}+1}\sum_{1<d\,|\,q+1}\phi(d)\big{[}(1+q)+q(q-1)\big{]}=\sum_{1<d\,|\,q+1}\phi(d)=q+1-1=q,\] \[\mathfrak{C}(q,1)=1.\] Hence \[\mathfrak{N}(q,1)=\frac{q^{-1}}{q^{2}-1}+\frac{1}{2(q-1)}(q-2)+\frac{1}{2(q+1)}q+\frac{1}{q}=1,\] as expected. ### \(n=2\) **Case 1.** Assume \(q\) is even. We have \[\mathfrak{A}(q,2)=\sum_{1<d\,|\,q-1}\phi(d)=q-2,\] \[\mathfrak{B}(q,2)=\frac{1}{q^{2}+1}\sum_{\begin{subarray}{c}1<d\,|\,q+1\\ d\,\nmid\,2\end{subarray}}\phi(d)\big{[}(1+q)+q(q-1)\big{]}=\sum_{1<d\,|\,q+1}\phi(d)=q+1-1=q,\] \[\mathfrak{C}(q,2)=q.\] Hence \[\mathfrak{N}(q,2)=\frac{q}{q^{2}-1}+\frac{1}{2(q-1)}(q-2)+\frac{1}{2(q+1)}q+\frac{1}{q}q=2.\] Since \(X^{2}\) and \(X^{2}+X\) are nonequivalent (\(X^{2}\) is a permutation of \(\mathbb{P}^{1}(\mathbb{F}_{q})\) but \(X^{2}+X\) is not), \[X^{2},\,\,X^{2}+X\] is a list of representatives of the equivalence classes of rational functions of degree \(2\) over \(\mathbb{F}_{q}\). **Case 2.** Assume \(q\) is odd. We have \[\mathfrak{A}(q,2)=\phi(2)\Big{(}1+\frac{q^{2}-1}{q+1}\Big{)}+\sum_{2<d\,|\,q-1}\phi(d)=q+q-1-2=2q-3,\] \[\mathfrak{B}(q,2)= \,\phi(2)\Big{(}1+\frac{(q+1)(q^{2}+1)}{q^{2}+1}\Big{)}+\frac{1}{q^{2}+1}\sum_{\begin{subarray}{c}d\,|\,q+1\\ d\,\nmid\,2\end{subarray}}\phi(d)\big{(}(1+q)+q(q-1)\big{)}\] \[= \,q+2+\sum_{2<d\,|\,q+1}\phi(d)=q+2+q+1-2=2q+1,\] \[\mathfrak{C}(q,2)=0.\] Hence \[\mathfrak{N}(q,2)=\frac{q}{q^{2}-1}+\frac{1}{2(q-1)}(2q-3)+\frac{1}{2(q+1)}(2q+1)=2.\] In this case, a list of representatives of the equivalence classes of rational functions of degree \(2\) over \(\mathbb{F}_{q}\) is given by \[X^{2},\ \frac{X^{2}+b}{X},\] where \(b\) is any fixed nonsquare of \(\mathbb{F}_{q}\). Proof.: It suffices to show that every \(f\in\mathbb{F}_{q}(X)\) of degree \(2\) is equivalent to one of the above two rational functions. If \(f\) is a polynomial, then \(f\sim X^{2}\). 
If \(f\) is not a polynomial, then \(f\sim(X^{2}+aX+b)/X\), where \(b\in\mathbb{F}_{q}^{*}\). Thus \(f\sim(X^{2}+b)/X\). If \(b=c^{2}\) for some \(c\in\mathbb{F}_{q}^{*}\), then \[f \sim\frac{X^{2}+2cX+c^{2}}{X}=\frac{(X+c)^{2}}{X}\sim\frac{X}{(X+c)^{2}}\sim\frac{X-c}{X^{2}}=\frac{1}{X}-c\Big{(}\frac{1}{X}\Big{)}^{2}\] \[\sim X-cX^{2}\sim X^{2}.\] ### \(n=3\) \(1^{\circ}\) Computing \(\mathfrak{A}(q,3)\). First assume \(q\) is even. If \(q-1\equiv 0\pmod{3}\), \[\mathfrak{A}(q,3) =\phi(3)\Big{(}1+\frac{2(q^{2}-1)}{q+1}\Big{)}+\sum_{\begin{subarray}{c}1<d\,\mid\,q-1\\ d\,\nmid\,3\end{subarray}}\phi(d)\] \[=2(1+2(q-1))+q-1-\phi(1)-\phi(3)\] \[=2(2q-1)+q-1-3=5q-6.\] If \(q-1\not\equiv 0\pmod{3}\), \[\mathfrak{A}(q,3)=\sum_{\begin{subarray}{c}1<d\,\mid\,q-1\\ d\,\nmid\,3\end{subarray}}\phi(d)=q-1-\phi(1)=q-2.\] Next, assume \(q\) is odd. If \(q-1\equiv 0\pmod{3}\), \[\mathfrak{A}(q,3) =\phi(3)\Big{(}1+\frac{2(q^{2}-1)}{q+1}\Big{)}+\phi(2)\frac{q^{3}+1}{q+1}+\sum_{3<d\,\mid\,q-1}\phi(d)\] \[=2(1+2(q-1))+q^{2}-q+1+q-1-\phi(1)-\phi(2)-\phi(3)\] \[=2(2q-1)+q^{2}-4=q^{2}+4q-6.\] If \(q-1\not\equiv 0\pmod{3}\), \[\mathfrak{A}(q,3)=\phi(2)\frac{q^{3}+1}{q+1}+\sum_{3<d\,\mid\,q-1}\phi(d)=q^{2}-q+1+q-1-\phi(1)-\phi(2)=q^{2}-2.\] To summarize, \[\mathfrak{A}(q,3)=\begin{cases}5q-6&\text{if }q\equiv 4\pmod{6},\\ q-2&\text{if }q\equiv 2\pmod{6},\\ q^{2}+4q-6&\text{if }q\equiv 1\pmod{6},\\ q^{2}-2&\text{if }q\equiv 3,5\pmod{6}.\end{cases}\] \(2^{\circ}\) Computing \(\mathfrak{B}(q,3)\). First assume \(q\) is even. 
If \(q+1\equiv 0\pmod{3}\), \[\mathfrak{B}(q,3) =\phi(3)+\frac{1}{q^{2}+1}\sum_{\begin{subarray}{c}d\,|\,q+1\\ d\,\nmid\,3\end{subarray}}\phi(d)\big{[}(1+q)+q(q-1)\big{]}\] \[=2+\sum_{\begin{subarray}{c}d\,|\,q+1\\ d\,\nmid\,3\end{subarray}}\phi(d)=2+q+1-\phi(1)-\phi(3)=q.\] If \(q+1\not\equiv 0\pmod{3}\), \[\mathfrak{B}(q,3)=\frac{1}{q^{2}+1}\sum_{\begin{subarray}{c}d\,|\,q+1\\ d\,\nmid\,3\end{subarray}}\phi(d)\big{[}(1+q)+q(q-1)\big{]}=\sum_{\begin{subarray}{c}d\,|\,q+1\\ d\,\nmid\,3\end{subarray}}\phi(d)=q+1-\phi(1)=q.\] Next, assume \(q\) is odd. If \(q+1\equiv 0\pmod{3}\), \[\mathfrak{B}(q,3) =\phi(3)+\frac{1}{q^{2}+1}\Big{[}\phi(2)(-(1+q)+q^{3}(q-1))+\sum_{3<d\,|\,q+1}\phi(d)(1+q+q(q-1))\Big{]}\] \[=2+\frac{1}{q^{2}+1}\Big{[}q^{4}-q^{3}-q-1+(q^{2}+1)\sum_{3<d\,|\,q+1}\phi(d)\Big{]}\] \[=2+\frac{1}{q^{2}+1}\big{[}(q^{2}+1)(q^{2}-q-1)+(q^{2}+1)(q+1-\phi(1)-\phi(2)-\phi(3))\big{]}\] \[=2+q^{2}-q-1+q+1-4=q^{2}-2.\] If \(q+1\not\equiv 0\pmod{3}\), \[\mathfrak{B}(q,3) =\frac{1}{q^{2}+1}\Big{[}\phi(2)(-(1+q)+q^{3}(q-1))+\sum_{3<d\,|\,q+1}\phi(d)((1+q)+q(q-1))\Big{]}\] \[=\frac{1}{q^{2}+1}\big{[}(q^{2}+1)(q^{2}-q-1)+(q^{2}+1)(q+1-\phi(1)-\phi(2))\big{]}\] \[=q^{2}-q-1+q+1-2=q^{2}-2.\] To summarize, \[\mathfrak{B}(q,3)=\begin{cases}q&\text{if }q\text{ is even,}\\ q^{2}-2&\text{if }q\text{ is odd.}\end{cases}\] \(3^{\circ}\) Computing \(\mathfrak{C}(q,3)\). We have \[\mathfrak{C}(q,3)=\begin{cases}q(q-1)&\text{if }p=2,\\ q&\text{if }p=3,\\ 0&\text{otherwise.}\end{cases}\] \(4^{\circ}\) Computing \(\mathfrak{N}(q,3)\). 
If \(q\equiv 1\pmod{6}\), \[\mathfrak{N}(q,3)=\frac{q^{3}}{q^{2}-1}+\frac{1}{2(q-1)}(q^{2}+4q-6)+\frac{1}{2(q+1)}(q^{2}-2)=2(q+1).\] If \(q\equiv 2\pmod{6}\), \[\mathfrak{N}(q,3)=\frac{q^{3}}{q^{2}-1}+\frac{1}{2(q-1)}(q-2)+\frac{1}{2(q+1)}q+\frac{1}{q}q(q-1)=2q.\] If \(q\equiv 3\pmod{6}\), i.e., \(p=3\), \[\mathfrak{N}(q,3)=\frac{q^{3}}{q^{2}-1}+\frac{1}{2(q-1)}(q^{2}-2)+\frac{1}{2(q+1)}(q^{2}-2)+\frac{1}{q}q=2q+1.\] If \(q\equiv 4\pmod{6}\), \[\mathfrak{N}(q,3)=\frac{q^{3}}{q^{2}-1}+\frac{1}{2(q-1)}(5q-6)+\frac{1}{2(q+1)}q+\frac{1}{q}q(q-1)=2(q+1).\] If \(q\equiv 5\pmod{6}\), \[\mathfrak{N}(q,3)=\frac{q^{3}}{q^{2}-1}+\frac{1}{2(q-1)}(q^{2}-2)+\frac{1}{2(q+1)}(q^{2}-2)=2q.\] To summarize, \[\mathfrak{N}(q,3)=\begin{cases}2(q+1)&\text{if }q\equiv 1,4\pmod{6},\\ 2q&\text{if }q\equiv 2,5\pmod{6},\\ 2q+1&\text{if }q\equiv 3\pmod{6}.\end{cases}\] As mentioned in Section 1, rational functions of degree \(3\) in \(\mathbb{F}_{q}(X)\) have been classified for even \(q\) [16]; for odd \(q\), the question is still open. ### \(n=4\) We include the formulas for \(\mathfrak{A}(q,4)\), \(\mathfrak{B}(q,4)\), \(\mathfrak{C}(q,4)\) and \(\mathfrak{N}(q,4)\) but omit the details of the computations. 
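The closed forms above can be checked mechanically. The following sketch (plain Python with exact rational arithmetic; the helper names are ours, not the paper's) evaluates \(\mathfrak{N}(q,n)\) from (6.1), with \(\mathfrak{B}(q,n)\) implemented as the two \(d\mid n\) sums together with the single \(d\nmid n\) sum that the computations in this section use.

```python
from fractions import Fraction
from math import gcd

def phi(m):
    # Euler's totient, naive
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

def char_q(q):
    # characteristic of F_q: the smallest prime factor of the prime power q
    p = 2
    while q % p:
        p += 1
    return p

def frak_A(q, n):  # (6.2)
    s = Fraction(0)
    for d in divisors(q - 1):
        if d == 1:
            continue
        if n % d == 0:
            s += phi(d) * (q**(2*n//d - 2) + Fraction((d - 1) * (q**(2*n//d) - 1), q + 1))
        else:
            s += phi(d) * Fraction(q**(2*(n//d) + 1) + 1, q + 1)
    return s

def frak_B(q, n):  # (6.3)
    s = Fraction(0)
    for d in divisors(q + 1):
        if d == 1:
            continue
        if n % d == 0 and d % 2 == 0:
            s += phi(d) * (q**(2*n//d - 2)
                           + Fraction((q + 1) * (q**(2*n//d) - (-1)**(n//d)), q*q + 1))
        elif n % d == 0:
            s += phi(d) * q**(2*n//d - 2)
        else:
            s += phi(d) * Fraction((-1)**(n//d) * (1 + q) + q**(2*(n//d) + 1) * (q - 1), q*q + 1)
    return s

def frak_C(q, n):  # (6.4)
    p = char_q(q)
    if n == 1:
        return 1
    if n % p == 0:
        return q**(2*n//p - 1)
    if n % p == 1:
        return q**(2*(n - 1)//p - 1) * (q - 1)
    return 0

def frak_N(q, n):  # (6.1)
    return (Fraction(q)**(2*n - 3) / (q*q - 1)
            + frak_A(q, n) / (2*(q - 1))
            + frak_B(q, n) / (2*(q + 1))
            + Fraction(frak_C(q, n), q))

assert frak_N(3, 1) == 1 and frak_N(2, 2) == frak_N(3, 2) == 2
assert frak_N(2, 3) == 4 and frak_N(3, 3) == 7 and frak_N(5, 3) == 10 and frak_N(7, 3) == 16
```

The asserted values reproduce \(\mathfrak{N}(q,1)=1\), \(\mathfrak{N}(q,2)=2\), and the \(\mathfrak{N}(q,3)\) case analysis above.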
\[\mathfrak{A}(q,4)=\begin{cases}-2-q+2q^{2}&\text{if }q\equiv 4,10\pmod{12},\\ -2+q&\text{if }q\equiv 2,8\pmod{12},\\ -10+6q+2q^{2}+q^{3}&\text{if }q\equiv 1\pmod{12},\\ -10+8q+q^{3}&\text{if }q\equiv 5,9\pmod{12},\\ -4+2q^{2}+q^{3}&\text{if }q\equiv 7\pmod{12},\\ -4+2q+q^{3}&\text{if }q\equiv 3,11\pmod{12}.\end{cases}\] \[\mathfrak{B}(q,4)=\begin{cases}-2+q^{2}&\text{if }q\equiv 2,8\pmod{12},\\ q&\text{if }q\equiv 4\pmod{12},\\ -4+4q^{2}+q^{3}&\text{if }q\equiv 11\pmod{12},\\ 2q+2q^{2}+q^{3}&\text{if }q\equiv 3,7\pmod{12},\\ -6-2q+4q^{2}+q^{3}&\text{if }q\equiv 5\pmod{12},\\ -2+2q^{2}+q^{3}&\text{if }q\equiv 1,9\pmod{12}.\end{cases}\] \[\mathfrak{C}(q,4)=\begin{cases}q^{3}&\text{if }p=2,\\ q(q-1)&\text{if }p=3,\\ 0&\text{otherwise}.\end{cases}\] \[\mathfrak{N}(q,4)=\begin{cases}4+3q+q^{2}+q^{3}&\text{if }q\equiv 1\pmod{12},\\ \dfrac{3}{2}q+q^{2}+q^{3}&\text{if }q\equiv 2,8\pmod{12},\\ 1+3q+q^{2}+q^{3}&\text{if }q\equiv 3\pmod{12},\\ 1+2q+q^{2}+q^{3}&\text{if }q\equiv 4\pmod{12},\\ 2+3q+q^{2}+q^{3}&\text{if }q\equiv 5,7\pmod{12},\\ 3+3q+q^{2}+q^{3}&\text{if }q\equiv 9\pmod{12},\\ 3q+q^{2}+q^{3}&\text{if }q\equiv 11\pmod{12}.\end{cases}\] ## 8. Equivalence Classes of Polynomials **Lemma 8.1**.: _Let \(f,g\in\mathbb{F}_{q}[X]\setminus\mathbb{F}_{q}\). Then \(g=\phi\circ f\circ\psi\) for some \(\phi,\psi\in G(\mathbb{F}_{q})\) if and only if \(g=\alpha\circ f\circ\beta\) for some \(\alpha,\beta\in\operatorname{AGL}(1,\mathbb{F}_{q})\)._ Proof.: (\(\Rightarrow\)) Let \(\psi(X)=A(X)/B(X)\). **Case 1.** Assume that \(B(X)=1\). Then \(\psi=A\in\operatorname{AGL}(1,\mathbb{F}_{q})\). Since \(f\circ A=f(A(X))\in\mathbb{F}_{q}[X]\) and \(\phi\circ f\circ A\in\mathbb{F}_{q}[X]\), it follows that \(\phi\in\operatorname{AGL}(1,\mathbb{F}_{q})\). **Case 2.** Assume that \(B(X)\notin\mathbb{F}_{q}\). Let \(B(X)=X+d\) and \(A(X)=aX+b\). Let \(f(X)=X^{n}+a_{n-1}X^{n-1}+\cdots+a_{0}\). 
Then \[f(\psi(X))=\frac{A(X)^{n}+a_{n-1}A(X)^{n-1}B(X)+\cdots+a_{0}B(X)^{n}}{B(X)^{n}}.\] Let \(\phi(X)=(sX+t)/(uX+v)\). Then \[u\big{(}A(X)^{n}+a_{n-1}A(X)^{n-1}B(X)+\cdots+a_{0}B(X)^{n}\big{)}+vB(X)^{n}=1 \tag{8.1}\] and \[g(X)=s\big{(}A(X)^{n}+a_{n-1}A(X)^{n-1}B(X)+\cdots+a_{0}B(X)^{n}\big{)}+tB(X)^{n}.\] By (8.1), \(u\neq 0\) and \[g(X)=su^{-1}(1-vB(X)^{n})+tB(X)^{n}=su^{-1}+(t-su^{-1}v)B(X)^{n}.\] Hence we may assume \(g(X)=X^{n}\). By (8.1) again, \[uf\Big{(}\frac{A(X)}{B(X)}\Big{)}+v = \frac{1}{B(X)^{n}}=\Big{(}\frac{1}{X+d}\Big{)}^{n}=\Big{(}\frac{aX+b}{X+d}-a\Big{)}^{n}(b-ad)^{-n}\] \[= \Big{(}\frac{A(X)}{B(X)}-a\Big{)}^{n}(b-ad)^{-n}.\] So \(f(X)=u^{-1}(b-ad)^{-n}(X-a)^{n}-u^{-1}v\). Hence we may assume \(f(X)=X^{n}\). Then \(f=g\). Because of Lemma 8.1, we define two polynomials \(f,g\in\mathbb{F}_{q}[X]\setminus\mathbb{F}_{q}\) to be _equivalent_ if there exist \(\alpha,\beta\in\operatorname{AGL}(1,\mathbb{F}_{q})\) such that \(g=\alpha\circ f\circ\beta\); the meaning of equivalence between \(f\) and \(g\) is the same whether they are treated as polynomials or as rational functions. Let \[\mathcal{P}_{q,n}=\{f\in\mathbb{F}_{q}[X]:\deg f=n\}\] and let \(\mathfrak{M}(q,n)\) denote the number of equivalence classes in \(\mathcal{P}_{q,n}\). Compared with \(\mathfrak{N}(q,n)\), \(\mathfrak{M}(q,n)\) is much easier to determine. For \(f,g\in\mathbb{F}_{q}[X]\setminus\mathbb{F}_{q}\), define \(f\stackrel{{ L}}{{\sim}}g\) if there exists \(\alpha\in\operatorname{AGL}(1,\mathbb{F}_{q})\) such that \(g=\alpha\circ f\). Let \([f]\) denote the \(\stackrel{{ L}}{{\sim}}\) equivalence class of \(f\). Each \(\stackrel{{ L}}{{\sim}}\) equivalence class has a unique representative \(X^{n}+a_{n-1}X^{n-1}+\cdots+a_{1}X\). 
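For prime \(q\) and small \(n\), the value of \(\mathfrak{M}(q,n)\) can also be found by brute force: orbit \(\mathcal{P}_{q,n}\) directly under \(f\mapsto\alpha\circ f\circ\beta\) with \(\alpha,\beta\in\operatorname{AGL}(1,\mathbb{F}_{q})\). A sketch (plain Python, prime fields only; the helper names are ours), which normalizes \(\alpha\) away by using, in each \(\stackrel{{ L}}{{\sim}}\) class, the unique monic representative with zero constant term:

```python
from itertools import product

def pmul(a, b, p):
    # product of two coefficient lists (lowest degree first), mod p
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def compose(f, g, p):
    # coefficients of f(g(X)) mod p, by Horner's rule
    res = [f[-1] % p]
    for c in reversed(f[:-1]):
        res = pmul(res, g, p)
        res[0] = (res[0] + c) % p
    return res

def num_classes(p, n):
    # number of equivalence classes of degree-n polynomials over F_p (p prime)
    betas = [[b, a] for a in range(1, p) for b in range(p)]  # beta(X) = a*X + b

    def canon(h):
        # the alpha-normalization: make h monic and kill its constant term
        inv = pow(h[-1], p - 2, p)
        out = [(inv * c) % p for c in h]
        out[0] = 0
        return tuple(out)

    reps = set()
    for tail in product(range(p), repeat=n - 1):
        f = [0, *tail, 1]  # monic with zero constant term
        # the class of f is determined by the set {canon(f o beta)}; take its minimum
        reps.add(min(canon(compose(f, b, p)) for b in betas))
    return len(reps)

assert num_classes(2, 2) == 2 and num_classes(3, 2) == 1
assert num_classes(2, 3) == 2 and num_classes(3, 3) == 4 and num_classes(5, 3) == 3
```

By Lemma 8.1, this count agrees with \(\mathfrak{M}(q,n)\); the asserted values match the closed forms obtained from Theorem 8.2 below.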
Let \(\operatorname{AGL}(1,\mathbb{F}_{q})\) act on the set of \(\stackrel{{ L}}{{\sim}}\) equivalence classes in \(\mathbb{F}_{q}[X]\setminus\mathbb{F}_{q}\) as follows: For \(f\in\mathbb{F}_{q}[X]\setminus\mathbb{F}_{q}\) and \(\alpha\in\operatorname{AGL}(1,\mathbb{F}_{q})\), \([f]^{\alpha}=[f\circ\alpha]\). Then \(\mathfrak{M}(q,n)\) is the number of \(\operatorname{AGL}(1,\mathbb{F}_{q})\)-orbits in \(\Omega_{n}:=\{[f]:f\in\mathcal{P}_{q,n}\}\). The information about the conjugacy classes of \(\operatorname{AGL}(1,\mathbb{F}_{q})\) is given in Table 2. For \(\alpha\in\operatorname{AGL}(1,\mathbb{F}_{q})\), let \(\operatorname{Fix}(\alpha)\) be the number of elements in \(\Omega_{n}\) fixed by \(\alpha\). All we have to do is to determine \(\operatorname{Fix}(\alpha)\) for each representative \(\alpha\) in Table 2. Clearly, \[\operatorname{Fix}(X)=q^{n-1}. \tag{8.2}\] Next, we compute \(\operatorname{Fix}(aX)\), where \(a\in\mathbb{F}_{q}^{*}\), \(a\neq 1\). Let \(o(a)=d\). Then \([f]\in\Omega_{n}\) is fixed by \(aX\) if and only if \[f\stackrel{{ L}}{{\sim}}X^{r}h(X^{d}),\] where \(0\leq r<d\), \(n\equiv r\pmod{d}\), \(h\in\mathbb{F}_{q}[X]\) is monic of degree \((n-r)/d\), and \(h(0)=0\) if \(r=0\). Thus \[\operatorname{Fix}(aX) =\begin{cases}q^{n/d-1}&\text{if }d\mid n\\ q^{\lfloor n/d\rfloor}&\text{if }d\nmid n\end{cases}\] \[=q^{\lceil n/d\rceil-1}. \tag{8.3}\] Now we compute \(\operatorname{Fix}(X+1)\). 
For \([f]\in\Omega_{n}\), \[[f]\text{ is fixed by }X+1\] \[\Leftrightarrow f(X+1)=f(X)+a,\text{ where }a\in\mathbb{F}_{q}\] \[\Leftrightarrow f(X)=g(X)+aX,\text{ where }a\in\mathbb{F}_{q},\text{ }g\in\mathbb{F}_{q}[X],\text{ }\Delta g=0\] \[\Leftrightarrow f(X)=h(X^{p}-X)+aX,\text{ where }a\in\mathbb{F}_{q},\text{ }h\in \mathbb{F}_{q}[X],\text{ }p=\operatorname{char}\mathbb{F}_{q}.\] \begin{table} \begin{tabular}{c|c} \hline representative & size of the centralizer \\ \hline \(X\) & \(q(q-1)\) \\ \(aX,\text{ }a\in\mathbb{F}_{q}^{*},\text{ }a\neq 1\) & \(q-1\) \\ \(X+1\) & \(q\) \\ \hline \end{tabular} \end{table} Table 2. Conjugacy classes of \(\operatorname{AGL}(1,\mathbb{F}_{q})\) In the above, we may assume that \(f\) is monic and \(f(0)=0\). Therefore, when \(p\mid n\), \(h\) is of degree \(n/p\) with \(h(0)=0\); when \(p\nmid n\), \(h=0\), \(n=1\) and \(a=1\). So, \[\operatorname{Fix}(X+1)=\begin{cases}q^{n/p-1}\cdot q=q^{n/p}&\text{if }p \mid n,\\ 1&\text{if }n=1,\\ 0&\text{if }p\nmid n\text{ and }n>1.\end{cases} \tag{8.4}\] **Theorem 8.2**.: _Let \(p=\operatorname{char}\mathbb{F}_{q}\). 
We have_ \[\mathfrak{M}(q,n)=\frac{q^{n-2}}{q-1}+\frac{1}{q-1}\sum_{1<d\,|\,q-1}\phi(d)q^ {\lceil n/d\rceil-1}+\begin{cases}q^{n/p-1}&\text{if }p\mid n,\\ q^{-1}&\text{if }n=1,\\ 0&\text{if }p\nmid n\text{ and }n>1.\end{cases}\] Proof.: By Burnside's lemma and (8.2) - (8.4), \[\mathfrak{M}(q,n) = \frac{1}{q(q-1)}\text{Fix}(X)+\frac{1}{q-1}\sum_{a\in\mathbb{F}_ {q}^{*}\setminus\{1\}}\operatorname{Fix}(aX)+\frac{1}{q}\text{Fix}(X+1)\] \[= \frac{q^{n-2}}{q-1}+\frac{1}{q-1}\sum_{1<d\,|\,q-1}\phi(d)q^{ \lceil n/d\rceil-1}+\frac{1}{q}\text{Fix}(X+1),\] where \[\frac{1}{q}\text{Fix}(X+1)=\begin{cases}q^{n/p-1}&\text{if }p\mid n,\\ q^{-1}&\text{if }n=1,\\ 0&\text{if }p\nmid n\text{ and }n>1.\end{cases}\] In Theorem 8.2, we can write \[\frac{q^{n-2}}{q-1}+\frac{1}{q-1}\sum_{1<d\,|\,q-1}\phi(d)q^{ \lceil n/d\rceil-1}\] \[= \frac{q^{n-2}}{q-1}+\frac{1}{q-1}\Big{(}\sum_{d\,|\,q-1}\phi(d)q^ {\lceil n/d\rceil-1}-q^{n-1}\Big{)}\] \[= \frac{1}{q-1}\sum_{d\,|\,q-1}\phi(d)q^{\lceil n/d\rceil-1}+\frac {q^{n-2}-q^{n-1}}{q-1}\] \[= \frac{1}{q-1}\Big{(}\sum_{d\,|\,q-1}\phi(d)(q^{\lceil n/d\rceil- 1}-1)+\sum_{d\,|\,q-1}\phi(d)\Big{)}-q^{n-2}\] \[= \frac{1}{q-1}\sum_{\begin{subarray}{c}d\,|\,q-1\\ d<n\end{subarray}}\phi(d)(q^{\lceil n/d\rceil-1}-1)+1-q^{n-2}.\] Hence \[\mathfrak{M}(q,n)=\frac{1}{q-1}\sum_{\begin{subarray}{c}d\,|\,q-1\\ d<n\end{subarray}}\phi(d)(q^{\lceil n/d\rceil-1}-1)+\begin{cases}1-q^{n-2}+q^{n /p-1}&\text{if }p\mid n,\\ 1&\text{if }n=1,\\ 1-q^{n-2}&\text{if }p\nmid n\text{ and }n>1.\end{cases}\] In the above, the sum \[\frac{1}{q-1}\sum_{\begin{subarray}{c}d\,|\,q-1\\ d<n\end{subarray}}\phi(d)(q^{\lceil n/d\rceil-1}-1)\] can be made more explicit as follows: Write \[\text{lcm}\{1,2,\ldots,n-1\}=\prod_{r\text{ prime}}r^{\nu_{r}},\qquad\nu_{r}= \lfloor\log_{r}(n-1)\rfloor,\] and \[\gcd(\text{lcm}\{1,2,\ldots,n-1\},q-1)=\prod_{r\text{ prime}}r^{u_{r}}.\] Then \[\frac{1}{q-1}\sum_{\begin{subarray}{c}d\,|\,q-1\\ d<n\end{subarray}}\phi(d)(q^{\lceil n/d\rceil-1}-1)\] \[= 
\frac{1}{q-1}\sum_{\begin{subarray}{c}e_{r}\leq u_{r}\\ \prod_{r}r^{e_{r}}\leq n-1\end{subarray}}\phi\Bigl{(}\prod_{r}r^{e_{r}}\Bigr{)}(q^{\lceil n/\prod_{r}r^{e_{r}}\rceil-1}-1)\] \[= \frac{1}{q-1}\sum_{\begin{subarray}{c}e_{r}\leq u_{r}\\ \prod_{r}r^{e_{r}}\leq n-1\end{subarray}}\Bigl{(}\prod_{r}r^{e_{r}}\Bigr{)}\Bigl{(}\prod_{r:e_{r}>0}(1-r^{-1})\Bigr{)}(q^{\lceil n/\prod_{r}r^{e_{r}}\rceil-1}-1).\] As concrete examples, we include the formulas for \(\mathfrak{M}(q,n)\) with \(1\leq n\leq 5\). \[\mathfrak{M}(q,1)=1.\] \[\mathfrak{M}(q,2)=\begin{cases}2&\text{if }p=2,\\ 1&\text{if }p>2.\end{cases}\] \[\mathfrak{M}(q,3)=\begin{cases}2&\text{if }p=2,\\ 4&\text{if }p=3,\\ 3&\text{if }p>3.\end{cases}\] \[\mathfrak{M}(q,4)=\begin{cases}q+5&\text{if }q\equiv 1\pmod{6},\\ 2q+2&\text{if }q\equiv 2\pmod{6},\\ q+3&\text{if }q\equiv 3,5\pmod{6},\\ 2q+4&\text{if }q\equiv 4\pmod{6}.\end{cases}\] \[\mathfrak{M}(q,5)=\begin{cases}q^{2}+2q+8&\text{if }q\equiv 1\pmod{12}\text{ and }p=5,\\ q^{2}+2q+7&\text{if }q\equiv 1\pmod{12}\text{ and }p\neq 5,\\ q^{2}+q+2&\text{if }q\equiv 2,8\pmod{12},\\ q^{2}+2q+3&\text{if }q\equiv 3,11\pmod{12},\\ q^{2}+q+4&\text{if }q\equiv 4\pmod{12},\\ q^{2}+2q+6&\text{if }q\equiv 5\pmod{12}\text{ and }p=5,\\ q^{2}+2q+5&\text{if }q\equiv 5,7,9\pmod{12}\text{ and }p\neq 5.\end{cases}\] With \(\mathfrak{M}(q,n)\) known, it is not difficult to classify polynomials of low degree over \(\mathbb{F}_{q}\). Tables 3 - 7 give the representatives of the equivalence classes in \(\mathcal{P}_{q,n}\) for \(1\leq n\leq 5\). In each of these cases, it is easy to verify that every \(f\in\mathcal{P}_{q,n}\) is equivalent to one of the representatives, and since their total number equals \(\mathfrak{M}(q,n)\), the representatives are pairwise nonequivalent. In these tables, \(\mathcal{C}_{i}\) denotes a system of representatives of the cosets of \(\{x^{i}:x\in\mathbb{F}_{q}^{*}\}\) in \(\mathbb{F}_{q}^{*}\). 
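Theorem 8.2 is straightforward to evaluate directly; the sketch below (plain Python with exact rational arithmetic; the helper names are ours) reproduces the small-degree closed forms listed above.

```python
from fractions import Fraction
from math import gcd

def phi(m):
    # Euler's totient, naive
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def char_q(q):
    # characteristic of F_q: the smallest prime factor of the prime power q
    p = 2
    while q % p:
        p += 1
    return p

def frak_M(q, n):
    # Theorem 8.2
    total = Fraction(q)**(n - 2) / (q - 1)
    for d in range(2, q):
        if (q - 1) % d == 0:
            ceil_nd = -(-n // d)  # ceiling of n/d
            total += Fraction(phi(d) * q**(ceil_nd - 1), q - 1)
    p = char_q(q)
    if n % p == 0:
        total += q**(n // p - 1)
    elif n == 1:
        total += Fraction(1, q)
    return total

# spot checks against the closed forms for 2 <= n <= 5
assert frak_M(4, 2) == 2 and frak_M(5, 2) == 1
assert frak_M(8, 3) == 2 and frak_M(9, 3) == 4 and frak_M(7, 3) == 3
assert frak_M(7, 4) == 7 + 5 and frak_M(8, 4) == 2*8 + 2 and frak_M(9, 4) == 9 + 3
assert frak_M(4, 5) == 4**2 + 4 + 4
```

For prime \(q\) these values can also be confirmed by directly orbiting \(\mathcal{P}_{q,n}\) under \(\operatorname{AGL}(1,\mathbb{F}_{q})\) on both sides.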
_and_ (A2) \[\alpha_{n}=q^{2n-1},\quad n\geq 1.\] Proof.: For (A1), we may assume that \(n-m=d\geq 0\), and it suffices to show that (A3) \[\alpha_{m,m+d}=\begin{cases}q^{d}&\text{if }m=0,\\ q^{2m+d}(1-q^{-1})&\text{if }m>0,\end{cases}\] The pairs \((f,g)\), where \(f,g\in\mathbb{F}_{q}[X]\) are monic, \(\deg f=m\) and \(\deg g=m+d\), are of the form \((hf_{1},hg_{1})\), where \(h,f_{1},g_{1}\in\mathbb{F}_{q}[X]\) are monic, \(\deg f_{1}=m-\deg h\), \(\deg g_{1}=m+d-\deg h\), and \(\gcd(f_{1},g_{1})=1\). Hence \[q^{2m+d}=\sum_{i\geq 0}q^{i}\alpha_{m-i,m+d-i},\] whence \[\sum_{m\geq 0}q^{2m+d}X^{m}=\Bigl{(}\sum_{i\geq 0}q^{i}X^{i}\Bigr{)}\Bigl{(} \sum_{j\geq 0}\alpha_{j,j+d}X^{j}\Bigr{)}.\] \begin{table} \begin{tabular}{c|c|c} \hline \(q\) & representative & number \\ \hline \(q\equiv 1\pmod{6}\) & \(X^{4}+a(X^{2}+X),\ a\in\mathbb{F}_{q}^{*}\) & \(q-1\) \\ & \(X^{4}+aX^{2},\ a\in\mathcal{C}_{2}\) & \(2\) \\ & \(X^{4}+aX,\ a\in\mathcal{C}_{3}\) & \(3\) \\ & \(X^{4}\) & \(1\) \\ \cline{3-3} & & \(q+5\) \\ \hline \(q\equiv 2\pmod{6}\) & \(X^{4}+X^{3}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{4}+X^{2}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{4}+X\) & \(1\) \\ & \(X^{4}\) & \(1\) \\ \cline{3-3} & & \(2q+2\) \\ \hline \(q\equiv 3,5\pmod{6}\) & \(X^{4}+a(X^{2}+X),\ a\in\mathbb{F}_{q}^{*}\) & \(q-1\) \\ & \(X^{4}+aX^{2},\ a\in\mathcal{C}_{2}\) & \(2\) \\ & \(X^{4}+X\) & \(1\) \\ \cline{3-3} & & \(q+3\) \\ \hline \(q\equiv 4\pmod{6}\) & \(X^{4}+X^{3}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{4}+X^{2}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{4}+aX,\ a\in\mathcal{C}_{3}\) & \(3\) \\ & \(X^{4}\) & \(1\) \\ \cline{3-3} & & \(2q+4\) \\ \hline \end{tabular} \end{table} Table 6. 
Equivalence classes of \(\mathcal{P}_{q,4}\) Therefore, \[\sum_{j\geq 0}\alpha_{j,j+d}X^{j}\,=(1-qX)\sum_{m\geq 0}q^{2m+d}X^{m}=q^{d} \Big{(}\sum_{m\geq 0}q^{2m}X^{m}-\sum_{m\geq 0}q^{2m+1}X^{m+1}\Big{)}\] \begin{table} \begin{tabular}{c|c|c} \hline \(q\) & representative & number \\ \hline \(q\equiv 1\pmod{12}\) & \(X^{5}+X^{4}+aX^{2}+bX,\ a,b\in\mathbb{F}_{q}\) & \(q^{2}\) \\ \(p=5\) & \(X^{5}+aX^{3}+bX,\ a\in\mathcal{C}_{2},\ b\in\mathbb{F}_{q}\) & \(2q\) \\ & \(X^{5}+aX^{2},\ a\in\mathcal{C}_{3}\) & \(3\) \\ & \(X^{5}+aX,\ a\in\mathcal{C}_{4}\) & \(4\) \\ & \(X^{5}\) & \(1\) \\ \cline{2-3} & & \(q^{2}+2q+8\) \\ \hline \(q\equiv 1\pmod{12}\) & \(X^{5}+a(X^{3}+X^{2})+bX,\ a\in\mathbb{F}_{q}^{*},\ b\in\mathbb{F}_{q}\) & \(q^{2}-q\) \\ \(p\neq 5\) & \(X^{5}+aX^{3}+bX,\ a\in\mathcal{C}_{2},\ b\in\mathbb{F}_{q}\) & \(2q\) \\ & \(X^{5}+a(X^{2}+X),\ a\in\mathbb{F}_{q}^{*}\) & \(q-1\) \\ & \(X^{5}+aX^{2},\ a\in\mathcal{C}_{3}\) & \(3\) \\ & \(X^{5}+aX,\ a\in\mathcal{C}_{4}\) & \(4\) \\ & \(X^{5}\) & \(1\) \\ & & \(q^{2}+2q+7\) \\ \hline \(q\equiv 2,8\pmod{12}\) & \(X^{5}+a(X^{3}+X^{2})+bX,\ a\in\mathbb{F}_{q}^{*},\ b\in\mathbb{F}_{q}\) & \(q^{2}-q\) \\ & \(X^{5}+X^{3}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{5}+X^{2}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{5}+X\) & \(1\) \\ & \(X^{5}\) & \(1\) \\ \cline{2-3} & & \(q^{2}+q+2\) \\ \hline \(q\equiv 3,11\pmod{12}\) & \(X^{5}+a(X^{3}+X^{2})+bX,\ a\in\mathbb{F}_{q}^{*},\ b\in\mathbb{F}_{q}\) & \(q^{2}-q\) \\ & \(X^{5}+aX^{3}+bX,\ a\in\mathcal{C}_{2},\ b\in\mathbb{F}_{q}\) & \(2q\) \\ & \(X^{5}+X^{2}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{5}+aX,\ a\in\mathcal{C}_{2}\) & \(2\) \\ & \(X^{5}\) & \(1\) \\ \cline{2-3} & & \(q^{2}+2q+3\) \\ \hline \(q\equiv 4\pmod{12}\) & \(X^{5}+a(X^{3}+X^{2})+bX,\ a\in\mathbb{F}_{q}^{*},\ b\in\mathbb{F}_{q}\) & \(q^{2}-q\) \\ & \(X^{5}+X^{3}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{5}+a(X^{2}+X),\ a\in\mathbb{F}_{q}^{*}\) & \(q-1\) \\ & \(X^{5}+aX^{2},\ a\in\mathcal{C}_{3}\) & \(3\) 
\\ & \(X^{5}+X\) & \(1\) \\ & \(X^{5}\) & \(1\) \\ \cline{2-3} & & \(q^{2}+q+4\) \\ \hline \end{tabular} \end{table} Table 7. Equivalence classes of \(\mathcal{P}_{q,5}\) \[=q^{d}\Big{(}1+\sum_{m\geq 1}(q^{2m}-q^{2m-1})X^{m}\Big{)}=q^{d}\Big{(}1+\sum_{m\geq 1}q^{2m}(1-q^{-1})X^{m}\Big{)},\] which is (A3) (with \(j\) in place of \(m\)). For (A2), we have \[\alpha_{n} =\sum_{m=0}^{n-1}\alpha_{m,n}=q^{n}+\sum_{m=1}^{n-1}q^{m+n}(1-q^{-1})\] \[=q^{n}+q^{n}(q-1)\sum_{m=0}^{n-2}q^{m}=q^{n}+q^{n}(q^{n-1}-1)\] \[=q^{2n-1}.\] **Lemma A2**.: _Let \(\mathcal{R}_{q,n}=\{f\in\mathbb{F}_{q}(X):\deg f=n\}\). Then_ \[|\mathcal{R}_{q,n}|=\begin{cases}q-1&\text{if }n=0,\\ q^{2n-1}(q^{2}-1)&\text{if }n>0.\end{cases}\] Proof.: For \(n>0\), we have \[|\mathcal{R}_{q,n}|=(q-1)(2\alpha_{n}+\alpha_{n,n})=(q-1)(2q^{2n-1}+q^{2n}(1-q^{-1}))=q^{2n-1}(q^{2}-1).\] \begin{table} \begin{tabular}{c|c|c} \hline \(q\) & representative & number \\ \hline \(q\equiv 5\pmod{12}\) & \(X^{5}+X^{4}+aX^{2}+bX,\ a,b\in\mathbb{F}_{q}\) & \(q^{2}\) \\ \(p=5\) & \(X^{5}+aX^{3}+bX,\ a\in\mathcal{C}_{2},\ b\in\mathbb{F}_{q}\) & \(2q\) \\ & \(X^{5}+X^{2}\) & \(1\) \\ & \(X^{5}+aX,\ a\in\mathcal{C}_{4}\) & \(4\) \\ & \(X^{5}\) & \(1\) \\ \cline{2-3} \cline{3-3} & & \(q^{2}+2q+6\) \\ \hline \(q\equiv 5,9\pmod{12}\) & \(X^{5}+a(X^{3}+X^{2})+bX,\ a\in\mathbb{F}_{q}^{*},\ b\in\mathbb{F}_{q}\) & \(q^{2}-q\) \\ \(p\neq 5\) & \(X^{5}+aX^{3}+bX,\ a\in\mathcal{C}_{2},\ b\in\mathbb{F}_{q}\) & \(2q\) \\ & \(X^{5}+X^{2}+aX,\ a\in\mathbb{F}_{q}\) & \(q\) \\ & \(X^{5}+aX,\ a\in\mathcal{C}_{4}\) & \(4\) \\ & \(X^{5}\) & \(1\) \\ \cline{2-3} \cline{3-3} & & \(q^{2}+2q+5\) \\ \hline \(q\equiv 7\pmod{12}\) & \(X^{5}+a(X^{3}+X^{2})+bX,\ a\in\mathbb{F}_{q}^{*},\ b\in\mathbb{F}_{q}\) & \(q^{2}-q\) \\ & \(X^{5}+aX^{3}+bX,\ a\in\mathcal{C}_{2},\ b\in\mathbb{F}_{q}\) & \(2q\) \\ & \(X^{5}+a(X^{2}+X),\ a\in\mathbb{F}_{q}^{*}\) & \(q-1\) \\ & \(X^{5}+aX^{2},\ a\in\mathcal{C}_{3}\) & \(3\) \\ & \(X^{5}+aX,\ a\in\mathcal{C}_{2}\) & \(2\) \\
& \(X^{5}\) & \(1\) \\ \cline{2-3} & & \(q^{2}+2q+5\) \\ \hline \end{tabular} \end{table} Table 7. continued For \(m,n\geq 0\), let \[\beta_{m,n}=|\{(f,g):f,g\in\mathbb{F}_{q}[X]\ \text{monic},\ \deg f=m,\ \deg g=n,\ f(0)\neq 0,\ \gcd(f,g)=1\}|.\] **Lemma A3**.: _We have_ \[\beta_{m,n}=\begin{cases}q^{m-n-1}(q-1)\frac{q^{2n+1}+1}{q+1}&\text{if $m>n\geq 0$},\\ q^{n}&\text{if $m=0$},\\ q^{n-m}(q-1)\frac{q^{2m}-1}{q+1}&\text{if $1\leq m\leq n$}.\end{cases}\] Proof.: We have \[\alpha_{m,n}=\beta_{m,n}+\beta_{n,m-1}.\] Therefore, (A4) \[\beta_{m,n} = \alpha_{m,n}-\beta_{n,m-1}=\alpha_{m,n}-(\alpha_{n,m-1}-\beta_{m-1,n-1})\] \[= \alpha_{m,n}-\alpha_{m-1,n}+\beta_{m-1,n-1}=c_{m,n}+\beta_{m-1,n-1},\] where \[c_{m,n} = \alpha_{m,n}-\alpha_{m-1,n}\] \[= \begin{cases}q^{n}&\text{if $m=0$},\\ q^{m-1}(q-1)&\text{if $m>0$},\ n=0,\\ q^{n}(q-2)&\text{if $m=1$},\ n>0,\\ q^{m+n-2}(q-1)^{2}&\text{if $m>1$},\ n>0.\end{cases}\] By (A4), \[\beta_{m,n}=\sum_{i\geq 0}c_{m-i,n-i}.\] When \(m>n\), \[\beta_{m,n} = c_{m,n}+c_{m-1,n-1}+\cdots+c_{m-n,0}\] \[= c_{m,n}+c_{m-1,n-1}+\cdots+c_{m-n+1,1}+q^{m-n-1}(q-1)\] \[= \sum_{i=1}^{n}q^{m-n+2i-2}(q-1)^{2}+q^{m-n-1}(q-1)\] \[= q^{m-n}(q-1)^{2}\,\frac{q^{2n}-1}{q^{2}-1}+q^{m-n-1}(q-1)\] \[= q^{m-n-1}(q-1)\frac{q^{2n+1}+1}{q+1}.\] When \(m\leq n\), \[\beta_{m,n} = c_{m,n}+c_{m-1,n-1}+\cdots+c_{0,n-m}\] \[= c_{m,n}+c_{m-1,n-1}+\cdots+c_{1,n-m+1}+q^{n-m}.\] In the above, if \(m=0\), \[\beta_{0,n}=q^{n};\] if \(m\geq 1\), \[\beta_{m,n} =\sum_{i=2}^{m}q^{n-m+2i-2}(q-1)^{2}+q^{n-m+1}(q-2)+q^{n-m}\] \[=q^{n-m+2}(q-1)^{2}\,\frac{q^{2(m-1)}-1}{q^{2}-1}+q^{n-m+1}(q-2)+q^{n-m}\] \[=q^{n-m}(q-1)\frac{q^{2m}-1}{q+1}.\] Let \(\overline{(\ )}=(\ )^{q}\) be the Frobenius of \(\mathbb{F}_{q^{2}}\) over \(\mathbb{F}_{q}\), and for \(g=\sum_{i=0}^{n}a_{i}X^{i}\in\mathbb{F}_{q^{2}}[X]\), define \(\bar{g}=\sum_{i=0}^{n}\bar{a}_{i}X^{i}\).
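As an aside, the count in Lemma A3 lends itself to a quick machine check. The following sketch (not part of the original argument; it assumes \(q=p\) prime, so that \(\mathbb{F}_{q}=\mathbb{Z}/p\mathbb{Z}\), and the helper names are choices made here) enumerates the pairs counted by \(\beta_{m,n}\) with a hand-rolled Euclidean algorithm and compares the result with the closed formula of the lemma.

```python
from itertools import product

def poly_trim(f):
    """Drop trailing zero coefficients; coefficients are listed from low to high degree."""
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_mod(f, g, p):
    """Remainder of f modulo the nonzero polynomial g over F_p (p prime)."""
    f = poly_trim(list(f))
    while len(f) >= len(g):
        c = f[-1] * pow(g[-1], -1, p) % p
        shift = len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gi) % p
        poly_trim(f)
    return f

def poly_gcd(f, g, p):
    """Euclidean algorithm in F_p[X]; returns a gcd up to a nonzero scalar."""
    f, g = poly_trim(list(f)), poly_trim(list(g))
    while g:
        f, g = g, poly_mod(f, g, p)
    return f

def beta_count(m, n, p):
    """Brute force: coprime monic pairs (f, g), deg f = m, deg g = n, f(0) != 0."""
    count = 0
    for cf in product(range(p), repeat=m):
        f = list(cf) + [1]            # monic of degree exactly m
        if f[0] == 0:                 # enforce f(0) != 0
            continue
        for cg in product(range(p), repeat=n):
            g = list(cg) + [1]        # monic of degree exactly n
            if len(poly_gcd(f, g, p)) == 1:   # gcd is a nonzero constant
                count += 1
    return count

def beta_formula(m, n, q):
    """The closed formula of Lemma A3; all divisions by q + 1 are exact."""
    if m > n:
        return q ** (m - n - 1) * (q - 1) * (q ** (2 * n + 1) + 1) // (q + 1)
    if m == 0:
        return q ** n
    return q ** (n - m) * (q - 1) * (q ** (2 * m) - 1) // (q + 1)
```

Running the comparison for \(p\in\{2,3\}\) and \(0\leq m,n\leq 3\) reproduces all three branches of the lemma; for instance \(\beta_{2,1}=3\) over \(\mathbb{F}_{2}\).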
For \(0\neq g\in\mathbb{F}_{q^{2}}[X]\), define \(\tilde{g}=X^{\deg g}\bar{g}(X^{-1})\); that is, for \(g=a_{m}X^{m}+a_{m-1}X^{m-1}+\cdots+a_{0}\in\mathbb{F}_{q^{2}}[X]\), \(a_{m}\neq 0\), \[\tilde{g}=\bar{a}_{0}X^{m}+\bar{a}_{1}X^{m-1}+\cdots+\bar{a}_{m}.\] Clearly, \(\widetilde{g_{1}g_{2}}=\tilde{g}_{1}\tilde{g}_{2}\), \(\widetilde{X^{m}}=1\), and \(\tilde{\tilde{g}}=g\) if \(g(0)\neq 0\). We say that \(g\) is _self-dual_ if \(\tilde{g}=cg\) for some \(c\in\mathbb{F}_{q^{2}}^{*}\). In this case, \((\bar{a}_{0},\bar{a}_{m})=c(a_{m},a_{0})\), which implies that \(a_{0}/a_{m}\in\mu_{q+1}\) and \(c=\bar{a}_{0}/a_{m}\in\mu_{q+1}\). Define \[\Lambda_{i} =|\{g\in\mathbb{F}_{q^{2}}[X]:g\text{ is monic, self-dual, }\deg g=i\}|,\] \[\Theta_{i} =|\{g\in\mathbb{F}_{q^{2}}[X]:g\text{ is monic, }\deg g=i,\gcd(g,\tilde{g})=1\}|,\] \[\Gamma_{i,j} =|\{(g,h):g,h\in\mathbb{F}_{q^{2}}[X]\text{ monic, self-dual, }\deg g=i,\ \deg h=j,\ \gcd(g,h)=1\}|.\] **Lemma A4**.: _We have_ (A5) \[\Lambda_{i}=\begin{cases}1&\text{if }i=0,\\ (q+1)q^{i-1}&\text{if }i>0,\end{cases}\] \[\Theta_{i}=\frac{1}{1+q^{2}}\big{[}(-1)^{i}(1+q)+q^{2i+1}(q-1)\big{]}.\] Proof.: Every monic \(g\in\mathbb{F}_{q^{2}}[X]\) has a unique representation \(g=g_{1}h\), where \(h=\gcd(g,\tilde{g})\), which is monic and self-dual, and \(g_{1}\in\mathbb{F}_{q^{2}}[X]\) is monic such that \(\gcd(g_{1},\tilde{g}_{1})=1\). Therefore, \[\sum_{i=0}^{l}\Lambda_{i}\Theta_{l-i}=|\{g\in\mathbb{F}_{q^{2}}[X]\text{ monic of degree }l\}|=q^{2l},\] that is, (A6) \[\Big{(}\sum_{i=0}^{\infty}\Lambda_{i}X^{i}\Big{)}\Big{(}\sum_{j=0}^{\infty}\Theta_{j}X^{j}\Big{)}=\sum_{l=0}^{\infty}q^{2l}X^{l}=\frac{1}{1-q^{2}X}.\] Clearly \(\Lambda_{0}=1\). Assume \(l\geq 1\). Let \(g(X)=X^{l}+a_{l-1}X^{l-1}+\cdots+a_{0}\in\mathbb{F}_{q^{2}}[X]\), so \(\tilde{g}(X)=\bar{a}_{0}X^{l}+\bar{a}_{1}X^{l-1}+\cdots+1\).
Then \(g\) is self-dual if and only if (A7) \[\bar{a}_{0}\,(a_{0},\ a_{1},\ \ldots,\ a_{l-1})=(1,\ \bar{a}_{l-1},\ \ldots,\ \bar{a}_{1}).\] If \(l-1\) is even, to satisfy \[\bar{a}_{0}\,(a_{0},\ a_{1},\ \ldots,\ a_{(l-1)/2},\ a_{(l+1)/2},\ \ldots,\ a_{l-1})=(1,\ \bar{a}_{l-1},\ \ldots,\ \overline{a_{(l+1)/2}},\ \overline{a_{(l-1)/2}},\ \ldots,\ \bar{a}_{1}),\] we can choose \(a_{0}\in\mu_{q+1}\), choose \(a_{1},\ldots,a_{(l-1)/2}\in\mathbb{F}_{q^{2}}\) arbitrarily and let \(a_{i}=\overline{a_{l-i}}/\bar{a}_{0}\) for \((l+1)/2\leq i\leq l-1\). Hence \(\Lambda_{l}=(q+1)(q^{2})^{(l-1)/2}=(q+1)q^{l-1}\). If \(l-1\) is odd, to satisfy \[\bar{a}_{0}\,(a_{0},\ a_{1},\ \ldots,\ a_{l/2-1},\ a_{l/2},\ a_{l/2+1},\ \ldots,\ a_{l-1})=(1,\ \bar{a}_{l-1},\ \ldots,\ \overline{a_{l/2+1}},\ \overline{a_{l/2}},\ \overline{a_{l/2-1}},\ \ldots,\ \bar{a}_{1}),\] we can choose \(a_{0}\in\mu_{q+1}\), choose \(a_{1},\ldots,a_{l/2-1}\in\mathbb{F}_{q^{2}}\) arbitrarily, choose \(a_{l/2}\in\mathbb{F}_{q^{2}}\) such that \(\bar{a}_{0}a_{l/2}=\overline{a_{l/2}}\) and let \(a_{i}=\overline{a_{l-i}}/\bar{a}_{0}\) for \(l/2+1\leq i\leq l-1\). Since \(a_{0}\in\mu_{q+1}\), the number of choices for \(a_{l/2}\) is \(q\). Thus we also have \(\Lambda_{l}=(q+1)q(q^{2})^{l/2-1}=(q+1)q^{l-1}\).
Therefore, \[\Lambda_{l}=\begin{cases}1&\text{if $l=0$},\\ (q+1)q^{l-1}&\text{if $l>0$}.\end{cases}\] We then have (A8) \[\begin{array}{rl}\sum_{i=0}^{\infty}\Lambda_{i}X^{i}&=1+\sum_{i=1}^{\infty }(q+1)q^{i-1}X^{i}=\sum_{i=0}^{\infty}(q+1)q^{i-1}X^{i}+1-(q+1)q^{-1}\\ &=\frac{q+1}{q}\frac{1}{1-qX}-\frac{1}{q}=\frac{1+X}{1-qX}.\end{array}\] By (A6) and (A8), \[\begin{array}{rl}\sum_{j=0}^{\infty}\Theta_{j}X^{j}&=\frac{1-qX}{1+X} \cdot\frac{1}{1-q^{2}X}=\frac{1+q}{1+q^{2}}\frac{1}{1+X}+\frac{q(q-1)}{1+q^{2} }\frac{1}{1-q^{2}X}\\ &=\frac{1+q}{1+q^{2}}\sum_{j=0}^{\infty}(-1)^{j}X^{j}+\frac{q(q-1)}{1+q^{2}} \sum_{j=0}^{\infty}q^{2j}X^{j}\\ &=\frac{1}{1+q^{2}}\sum_{j=0}^{\infty}\bigl{[}(-1)^{j}(1+q)+q^{2j+1}(q-1) \bigr{]}X^{j}.\end{array}\] Hence \[\Theta_{j}=\frac{1}{1+q^{2}}\bigl{[}(-1)^{j}(1+q)+q^{2j+1}(q-1)\bigr{]}.\] **Lemma A5**.: _For \(i,j\geq 0\), we have_ \[\Gamma_{i,i+j}=\begin{cases}1&\text{if $i=j=0$},\\ q^{j-1}(q+1)&\text{if $i=0,\ j>0$},\\ \frac{q(q+1)}{q^{2}+1}(q^{2i}-q^{2i-2}-(-1)^{i}2)&\text{if $i>0,\ j=0$},\\ \frac{q^{j-1}(q+1)(q^{2}-1)}{q^{2}+1}(q^{2i}-(-1)^{i})&\text{if $i>0,\ j>0$}.\end{cases}\] Proof.: Each ordered pair \((f,g)\), where \(f,g\in\mathbb{F}_{q^{2}}[X]\) are monic and self-dual with \(\deg f=i\) and \(\deg g=i+j\), has a unique representation \((f,g)=(f_{1}h,g_{1}h)\), where \(f_{1},g_{1},h\in\mathbb{F}_{q^{2}}[X]\) are monic and self-dual and \(\gcd(f_{1},g_{1})=1\). 
Thus \[\sum_{k}\Lambda_{k}\Gamma_{i-k,i+j-k}=\Lambda_{i}\Lambda_{i+j}.\] Therefore, (A9) \[\Bigl{(}\sum_{k\geq 0}\Lambda_{k}X^{k}\Bigr{)}\Bigl{(}\sum_{l\geq 0}\Gamma_{l,l+ j}X^{l}\Bigr{)}=\sum_{i\geq 0}\Lambda_{i}\Lambda_{i+j}X^{i}.\] When \(j=0\), by (A5), (A10) \[\sum_{i\geq 0}\Lambda_{i}\Lambda_{i}X^{i} =1+\sum_{i\geq 1}(q+1)^{2}q^{2(i-1)}X^{i}\] \[=\sum_{i\geq 0}(q+1)^{2}q^{2(i-1)}X^{i}+1-(q+1)^{2}q^{-2}\] \[=(q+1)^{2}q^{-2}\frac{1}{1-q^{2}X}+1-(q+1)^{2}q^{-2}\] \[=\frac{1+(2q+1)X}{1-q^{2}X}.\] Combining (A9), (A8) and (A10) gives \[\sum_{l\geq 0}\Gamma_{l,l}X^{l} =\frac{1-qX}{1+X}\cdot\frac{1+(2q+1)X}{1-q^{2}X}\] \[=\frac{2q+1}{q}-\frac{2q(q+1)}{q^{2}+1}\cdot\frac{1}{1+X}+\frac{( q-1)(q+1)^{2}}{q(q^{2}+1)}\cdot\frac{1}{1-q^{2}X}\] \[=\frac{2q+1}{q}-\frac{2q(q+1)}{q^{2}+1}\sum_{l\geq 0}(-1)^{l}X^{l} +\frac{(q-1)(q+1)^{2}}{q(q^{2}+1)}\sum_{l\geq 0}q^{2l}X^{l}.\] Hence \[\Gamma_{l,l}=\begin{cases}1&\text{if $l=0$},\\ \frac{q(q+1)}{q^{2}+1}(q^{2l}-q^{2l-2}-(-1)^{l}2)&\text{if $l>0$}.\end{cases}\] When \(j>0\), by (A5), (A11) \[\sum_{i\geq 0}\Lambda_{i}\Lambda_{i+j}X^{i} =(q+1)q^{j-1}+\sum_{i\geq 1}(q+1)^{2}q^{2i+j-2}X^{i}\] \[=\sum_{i\geq 0}(q+1)^{2}q^{2i+j-2}X^{i}+(q+1)q^{j-1}-(q+1)^{2}q^{j-2}\] \[=(q+1)^{2}q^{j-2}\frac{1}{1-q^{2}X}-(q+1)q^{j-2}\] \[=q^{j-1}(q+1)\frac{1+qX}{1-q^{2}X}.\] Combining (A9), (A8) and (A11) gives \[\sum_{l\geq 0}\Gamma_{l,l+j}X^{l} =q^{j-1}(q+1)\frac{1-qX}{1+X}\cdot\frac{1+qX}{1-q^{2}X}\] \[=q^{j-1}(q+1)\Big{(}1+\frac{1-q^{2}}{1+q^{2}}\cdot\frac{1}{1+X}- \frac{1-q^{2}}{1+q^{2}}\cdot\frac{1}{1-q^{2}X}\Big{)}\] \[=q^{j-1}(q+1)+\frac{q^{j-1}(q+1)(1-q^{2})}{1+q^{2}}\Big{(}\sum_{l \geq 0}(-1)^{l}X^{l}-\sum_{l\geq 0}q^{2l}X^{l}\Big{)}.\] Hence \[\Gamma_{l,l+j}=\begin{cases}q^{j-1}(q+1)&\text{if }l=0,\\ \frac{q^{j-1}(q+1)(q^{2}-1)}{q^{2}+1}(q^{2l}-(-1)^{l})&\text{if }l>0.\end{cases}\]
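The closed formulas for \(\Lambda_{i}\), \(\Theta_{j}\) and \(\Gamma_{i,i+j}\) can be cross-checked against the convolution identities (A6) and (A9) from which they were derived. The following is an illustrative numeric sketch (the function names are choices made here, not notation from the paper); all integer divisions below are exact, since the bracketed factors vanish modulo \(q^{2}+1\).

```python
def Lam(i, q):
    """Lambda_i of Lemma A4: monic self-dual polynomials of degree i over F_{q^2}."""
    return 1 if i == 0 else (q + 1) * q ** (i - 1)

def Theta(j, q):
    """Theta_j of Lemma A4; the division by 1 + q^2 is exact."""
    return ((-1) ** j * (1 + q) + q ** (2 * j + 1) * (q - 1)) // (1 + q * q)

def Gamma(i, j, q):
    """Gamma_{i, i+j} of Lemma A5, for i, j >= 0."""
    if i == 0:
        return 1 if j == 0 else q ** (j - 1) * (q + 1)
    if j == 0:
        return q * (q + 1) * (q ** (2 * i) - q ** (2 * i - 2) - (-1) ** i * 2) // (q * q + 1)
    return q ** (j - 1) * (q + 1) * (q * q - 1) * (q ** (2 * i) - (-1) ** i) // (q * q + 1)

def check(q, N=8):
    # (A6): the Lambda and Theta series multiply to sum_l q^{2l} X^l.
    for l in range(N):
        assert sum(Lam(i, q) * Theta(l - i, q) for i in range(l + 1)) == q ** (2 * l)
    # (A9): sum_k Lambda_k Gamma_{i-k, i+j-k} = Lambda_i Lambda_{i+j}.
    for i in range(N):
        for j in range(N):
            lhs = sum(Lam(k, q) * Gamma(i - k, j, q) for k in range(i + 1))
            assert lhs == Lam(i, q) * Lam(i + j, q)

for q in (2, 3, 4, 5, 7, 8, 9):
    check(q)
```

For example, over \(\mathbb{F}_{4}\) (so \(q=2\)) this gives \(\Gamma_{1,1}=6\): the monic self-dual polynomials of degree \(1\) are \(X+a\) with \(a\in\mu_{3}\), and two of them are coprime exactly when their roots differ.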
2305.19851
A geometrisation of $\mathbb N$-manifolds
This paper proposes a geometrisation of $\mathbb N$-manifolds of degree $n$ as $n$-fold vector bundles equipped with a (signed) $S_n$-symmetry. More precisely, it proves an equivalence between the categories of $[n]$-manifolds and the category of symmetric $n$-fold vector bundles, by finding that symmetric $n$-fold vector bundle cocycles and $[n]$-manifold cocycles are identical. This extends the already known equivalences of $[1]$-manifolds with vector bundles, and of $[2]$-manifolds with involutive double vector bundles, where the involution is understood as an $S_2$-action.
Malte Heuer, Madeleine Jotz
2023-05-31T13:40:53Z
http://arxiv.org/abs/2305.19851v1
# A Geometrisation of \(\mathbb{N}\)-manifolds ###### Abstract. This paper proposes a _geometrisation_ of \(\mathbb{N}\)-manifolds of degree \(n\) as \(n\)-fold vector bundles equipped with a (signed) \(S_{n}\)-symmetry. More precisely, it proves an equivalence between the categories of \([n]\)-manifolds and the category of symmetric \(n\)-fold vector bundles, by finding that symmetric \(n\)-fold vector bundle cocycles and \([n]\)-manifold cocycles are identical. This extends the already known equivalences of [1]-manifolds with vector bundles, and of [2]-manifolds with involutive double vector bundles, where the involution is understood as an \(S_{2}\)-action. 2010 Mathematics Subject Classification: Primary: 58A50, 20B30. Secondary: 53B05, 53C05, 20B05. ###### Contents * 1 Introduction * 2 Outline, main results and applications * 3 Relation to other work * 1.1 Notation and convention * 2 Partitions and integer partitions * 2.1 Partitions and cube-categories * 2.2 Partitions and signs * 2.3 Integer partitions of natural numbers * 3 The category of \(\mathbb{N}\)-manifolds * 3.1 Definitions * 3.2 The category of split \(\mathbb{N}\)-manifolds * 3.3 \([n]\)-manifold cocycles * 4 Multiple vector bundles and charts * 4.1 Cores of an \(n\)-fold vector bundle * 4.2 Linear splittings and decompositions of an \(n\)-fold vector bundle * 4.3 Iterated highest order cores of an \(n\)-fold vector bundle * 4.4 Proof of Corollary 3.6 in [HL20] * 4.5 Atlases of multiple vector bundles * 5 Symmetric \(n\)-fold vector bundles * 5.1 Global definition of symmetric \(n\)-fold vector bundles. 
* 5.2 Example: the pullback of a symmetric \(n\)-fold vector bundle * 5.3 Iterated highest order cores of symmetric \(n\)-fold vector bundles * 5.4 Decomposed symmetric \(n\)-fold vector bundles * 5.5 Morphisms of decomposed symmetric \(n\)-fold vector bundles * 5.6 Symmetric linear splittings and decompositions * 5.7 Symmetric \(n\)-fold vector bundle atlases * 6 The equivalence of symmetric \(n\)-fold vector bundles with \([n]\)-manifolds ## 1. Introduction An \(\mathbb{N}\)-graded manifold over a smooth manifold \(M\) is a sheaf of \(\mathbb{N}\)-graded, graded commutative, associative, unital \(C^{\infty}(M)\)-algebras over \(M\), that is locally freely generated by finitely many elements of strictly positive degree. \(\mathbb{N}\)-manifolds of degree \(1\) are easily seen to be the exterior algebras of sections of smooth vector bundles. \(\mathbb{N}\)-manifolds of degree \(2\), called for short [2]-manifolds, have recently been geometrised to double vector bundles with a linear _indirect_ involution [11]. (Such double vector bundles were called "symmetric double vector bundles with inverse symmetry" by Pradines [14].) Before that, Lie \(2\)-algebroids i.e. [2]-manifolds with a _homological vector field_, were linked to VB-Courant algebroids by Li-Bland in his thesis [13], see also [14], building up on the correspondence of Courant algebroids with symplectic Lie \(2\)-algebroids [20, 15]. Generally, positively graded manifolds are the geometric objects underlying Lie \(n\)-algebroids, which are widely accepted to infinitesimally describe Lie \(n\)-groupoids. A Lie \(n\)-algebroid is an \([n]\)-manifold equipped with a homological vector field, i.e. a vector field of degree \(1\) that squares to zero. Lie \(n\)-algebroids appeared in the early \(00\)'s in Voronov's study of Lie bialgebroids [21] and in Roytenberg's supergeometric approach to Courant algebroids [20], see also [15]. 
Precisely, Courant algebroids are equivalent to Lie \(2\)-algebroids equipped with a compatible symplectic structure of degree \(2\). This is at the origin of the interest for Lie \(n\)-algebroids in the Poisson community, as this fact leads to a path towards the _integration_ of Courant algebroids - to Lie \(2\)-groupoids with an additional geometric structure. The correspondence of symplectic Lie \(2\)-algebroids with Courant algebroids fits in fact in the more general equivalence of Lie \(2\)-algebroids with VB-Courant algebroids [13, 14], which, in turn, is based on the underlying equivalence between double vector bundles equipped with a linear involution, and positively graded manifolds generated in the degrees \(1\) and \(2\)[14]. Note that involutive double vector bundles dualise to metric double vector bundles, and the two classes of objects are therefore equivalent. Understanding the equivalence between metric or involutive double vector bundles and [2]-manifolds provides a precise dictionary, as summarised below, for linear geometric structures on involutive double vector bundles versus compatible geometric structures on [2]-manifolds. \begin{tabular}{|c|c|} \hline Double vector bundles & Degree \(2\) graded geometry \\ \hline metric double vector bundles & [2]-manifolds \\ metric VB-algebroids & Poisson [2]-manifolds \\ VB-Courant algebroids & Lie \(2\)-algebroids \\ LA-Courant algebroids & Poisson Lie \(2\)-algebroids \\ tangent doubles of Courant algebroids & symplectic Lie \(2\)-algebroids \\ \hline \end{tabular} In particular, several new explicit examples of Lie \(2\)-algebroids, of Poisson [2]-manifolds and of Poisson Lie \(2\)-algebroids arise as the counterparts of known examples of VB-Courant algebroids, of metric VB-algebroids and of LA-Courant algebroids [14, 15, 16]. 
The involution of an involutive double vector bundle can be understood as an \(S_{2}\)-action, while a classical vector bundle, which is commonly known as the geometrisation of a [1]-manifold, is trivially acted upon by the trivial group \(S_{1}\). The goal of this paper, which can be considered a sequel of [11], is to establish an equivalence between the category of \(S_{n}\)_-symmetric \(n\)-fold vector bundles_ and the category of positively graded manifolds of degree \(n\). In other words, this paper geometrises \([n]\)-manifolds via \(n\)-fold vector bundles with a compatible \(S_{n}\)-action. This is the groundwork needed for the geometrisation of Lie \(n\)-algebroids, or of Lie \(n\)-algebroids with additional compatible geometric structures such as symplectic or Poisson structures. The approach in [11] is a classical one; an extension of the construction of vector bundles over a manifold \(M\) from free and locally finitely generated sheaves of \(C^{\infty}(M)\)-modules, using the double vector bundle charts in [10]. Here also, the key to the geometrisation of \([n]\)-manifolds is a precise understanding of the atlases of symmetric \(n\)-fold vector bundles, which, in turn, only exist once it has been proved that symmetric \(n\)-fold vector bundles can always be equivariantly decomposed. The authors prove in [11] that \(n\)-fold vector bundles always admit a linear decomposition, and this paper proves that a _symmetric_\(n\)-fold vector bundle admits a _symmetric_ linear decomposition, and so a symmetric \(n\)-fold vector bundle atlas. Unlike in the case \(n=2\), adding the symmetry to the picture requires a thorough reconsideration of the proof of the decomposition in the general case. This, together with a little gap in [11], is the reason why some parts of [11] are considerably revisited here. 
However, it turns out that the equivalence of \([n]\)-manifolds with symmetric \(n\)-fold vector bundles is obtained in a more straightforward manner by studying cocycles for both types of geometric structures. This paper hence concentrates on showing that \([n]\)-manifold cocycles and symmetric \(n\)-fold vector bundle cocycles are basically the same objects, in particular with the same cocycle conditions. Along the way, an explicit formula for morphisms of split \([n]\)-manifolds and for the composition of two such morphisms is given. The latter is heavily inspired by the correspondence of split \([n]\)-manifolds with decomposed symmetric \(n\)-fold vector bundles, and is, as far as the authors know, the first explicit formula avoiding Koszul signs, or more precisely computing them explicitly. This paper does not describe how the graded functions of an \([n]\)-manifold correspond to special functions on a symmetric \(n\)-fold vector bundle. This is part of a further project in progress. Also, the general equivalence of \(\mathbb{N}\)-manifolds (of arbitrary, even infinite, degree) with symmetric multiple vector bundles is easily deduced from the main result of this paper, but will be carried out elsewhere. The differentiation of Lie \(n\)-groupoids to Lie \(n\)-algebroids is considered folklore in the research community, and goes back to Severa [12]. However, a precise differentiation process carrying out all the details, in particular the multiple Lie brackets - or in other words the homological vector field - has not been published yet. Recently, Kadiyan and Blohmann, as well as independently Du, Fernandes, Ryvkin, and Zhu, have announced a differentiation process that involves simplicial structures closely related to the \(n\)-cube categories used here, and to symmetric \(n\)-fold vector bundles.
This paper hence builds the geometric foundations for the infinitesimal description of Lie \(n\)-groupoids as Lie \(n\)-algebroids, via the intermediate step of symmetric \(n\)-fold vector bundles with an additional geometric structure that will correspond to the homological vector field. ### Outline, main results and applications This paper is organised as follows. Section 2 collects some facts about (ordered) integer partitions, ordered partitions of finite subsets of \(\mathbb{N}\) and their signs, and cube categories. Partitions and integer partitions, as well as the interplay between the two notions, are at the core of many constructions in this paper. Section 3 quickly recalls the definitions of \([n]\)-manifolds and their morphisms. Then it discusses in detail the split case, in particular morphisms of split \(\mathbb{N}\)-manifolds and their composition. \(\mathbb{N}\)-manifold cocycles are then defined as structures that are equivalent to \(\mathbb{N}\)-manifolds. Section 4 gives necessary background on \(n\)-fold vector bundles, as well as on their cores, their linear splittings and their linear decompositions. Then it introduces a new indexing for the iterated highest order cores of an \(n\)-fold vector bundle, and fills a gap in the proof of Corollary 3.6 in [10], establishing the existence of a linear decomposition for each \(n\)-fold vector bundle [10]. This yields that each \(n\)-fold vector bundle has an \(n\)-fold vector bundle atlas [10]. Appendix A refines the correspondence between linear splittings and decompositions of \(n\)-fold vector bundles by showing a uniqueness result, which is needed in this paper. Section 5 introduces symmetric \(n\)-fold vector bundles and their morphisms. It discusses decomposed symmetric \(n\)-fold vector bundles, as well as symmetric linear splittings and decompositions of symmetric \(n\)-fold vector bundles.
Morphisms of decomposed symmetric \(n\)-fold vector bundles, as well as their composition, are discussed in detail since this is crucial for defining symmetric \(n\)-fold vector bundle cocycles. The existence of a symmetric linear decomposition of a symmetric \(n\)-fold vector bundle is proved in Appendix B. Finally, Section 6 constructs from the equality of the cocycles the functors from symmetric \(n\)-fold vector bundles to \([n]\)-manifolds, and from \([n]\)-manifolds to symmetric \(n\)-fold vector bundles, which together establish the equivalence between the two categories. ### Relation to other work Note that the question solved in this paper is already raised in [20]. More precisely, the author of [20] asks how to extend the table on Page 2 with the geometrisation of graded manifolds of higher degree. However, \(n\)-fold vector bundles are _defined_ in this reference as special \(\mathbb{Z}^{n}\)-graded manifolds, see also [21] and [14]. The idea is that a double vector bundle \(D\) with sides \(A\) and \(B\) defines a bigraded algebra \(D[1]_{A}[1]_{B}\), see also [17], and similarly, \(n\)-fold vector bundles define \(n\)-graded algebras. The paper [20] conversely associates to an \([n]\)-manifold \(\mathcal{M}\) its \(n\)-times iterated tangent bundle and establishes using this idea an equivalence between graded manifolds and multiple vector bundles. The approach and result here are different since \([n]\)-manifolds are geometrised by objects in "classical" differential geometry - in the sense that the symmetric \(n\)-fold vector bundles considered here are not treated in the graded setting. ### Acknowledgements The authors thank Leonid Ryvkin and Chenchang Zhu for encouraging them to complete this long-standing project and for discussions on the differentiation of Lie \(n\)-groupoids.
They thank as well Christian Blohmann and Lory Kadiyan for interesting discussions, and again Lory Kadiyan for her careful reading of and useful comments on an early version of this paper. ### Notation and convention The cardinality of a finite set \(S\) is written \(\#S\). As a convention, \(0\) is not considered a natural number. Let \(E\to M\) and \(F\to N\) be smooth vector bundles and let \(\omega\colon E\to F\) be a morphism of vector bundles over a smooth map \(\omega_{0}\colon M\to N\). The dual of \(\omega\) is in general not a morphism of vector bundles, but a morphism \(\omega^{\star}\) of modules over the unital algebra morphism \(\omega_{0}^{*}\colon C^{\infty}(N)\to C^{\infty}(M)\): \[\omega^{\star}\colon\Gamma(F^{*})\to\Gamma(E^{*}),\qquad\omega^{\star}(\epsilon) (m)=\omega_{m}^{*}(\epsilon_{\omega_{0}(m)}) \tag{1}\] for all \(\epsilon\in\Gamma(F^{*})\) and \(m\in M\). The following lemma is immediate (see e.g. the appendix of [11] for a proof). **Lemma 1.1**.: _The map \(\cdot^{\star}\), that sends a morphism of vector bundles \(\omega\colon E\to F\) over \(\omega_{0}\colon M\to N\) to the morphism \(\omega^{\star}\colon\Gamma(F^{*})\to\Gamma(E^{*})\) of modules over \(\omega_{0}^{*}\colon C^{\infty}(N)\to C^{\infty}(M)\), is a bijection._ Let \(E\to M\) be a smooth vector bundle. This paper adopts the convention that the wedge product \(\omega\wedge\eta\in\Gamma(\wedge^{k+l}E^{*})\) of two forms \(\omega\in\Gamma(\wedge^{k}E^{*})\) and \(\eta\in\Gamma(\wedge^{l}E^{*})\) is given by \[\omega\wedge\eta=\frac{(k+l)!}{k!l!}\operatorname{Alt}(\omega\otimes\eta),\] i.e. \[(\omega\wedge\eta)(e_{1},\dots,e_{k+l})=\frac{1}{k!l!}\sum_{\sigma\in S_{k+l}} (-1)^{\sigma}\cdot\omega(e_{\sigma(1)},\dots,e_{\sigma(k)})\cdot\eta(e_{ \sigma(k+1)},\dots,e_{\sigma(k+l)}) \tag{2}\] for \(e_{1},\dots,e_{k+l}\) in the same fiber of \(E\). ## 2. 
Partitions and integer partitions Partitions and integer partitions, together with a notion of sign that partitions define, are at the core of the equivalence between \(\mathbb{N}\)-manifolds and symmetric multiple vector bundles. Partitions of finite subsets of \(\mathbb{N}\) are also important for indexing iterated higher order cores of multiple vector bundles. This section collects all the notions, notations and constructions with partitions that are needed in the rest of the paper. In this paper the following notation is used. For \(\underline{n}:=\{1,\dots,n\}\) the **standard \(n\)-cube category**\(\square^{n}\) is the category with subsets \(I\) of \(\underline{n}\) as objects and with arrows \(I\to J\,\Leftrightarrow\,J\subseteq I\). More generally, an \(n\)**-cube category** is a category that is isomorphic to the standard \(n\)-cube category \(\square^{n}\). ### Partitions and cube-categories Choose a finite subset \(I\subseteq\underline{n}\). A **partition of \(I\)** is a set \(\{I_{1},\dots,I_{l}\}\) of pairwise disjoint and non-empty subsets \(I_{1},\dots,I_{l}\subseteq I\) for some \(l\in\mathbb{N}\), such that \(I=I_{1}\cup\dots\cup I_{l}\). Since the order of the subsets _should not_ have any significance, i.e. \(\{I_{1},I_{2},\dots,I_{l}\}\) is naturally the same partition of \(I\) as \(\{I_{2},I_{1},\dots,I_{l}\}\), for clarity a partition is always assumed to be _ordered_, and if not mentioned otherwise, in the **canonical order**. That is, \(0<\#I_{1}\leq\#I_{2}\leq\dots\leq\#I_{l}\) and the subsets of same cardinality are ordered by lexicographic order. Later on, the fact that partitions are always ordered is also crucial for defining _signs_ of partitions. 
The set of (canonically ordered) partitions of \(I\) is written \(\mathcal{P}(I)\) and a partition \(\{I_{1},\dots,I_{l}\}\) is written as a list \((I_{1},\dots,I_{l})\) if it needs to be emphasized that it is ordered - in particular if it is non-canonically ordered, in which case this is always clearly stated. However, in some situations, it is simpler if the indices of the elements of a partition do not take into account that the partition is ordered (see for instance Lemma 2.2 below), and then the partition is simply written as a set. Each partition \(\rho=(I_{1},\dots,I_{l})\) of \(\underline{n}\) in \(l\) subsets defines an \(l\)**-cube category**\(\Diamond^{\rho}\) with objects the subsets \[J\subseteq\underline{n}\text{ such that for each }i=1,\dots,l,\text{ either }I_{i}\cap J=\emptyset\text{ or }I_{i}\subseteq J \tag{3}\] and with arrows \(J\to J^{\prime}\ \Leftrightarrow\ J^{\prime}\subseteq J\). The \(\#\rho\)-cube category \(\Diamond^{\rho}\) is a full subcategory of \(\square^{n}\). The inclusion functor is written \[i^{\rho}\colon\Diamond^{\rho}\to\square^{n}.\] **Remark 2.1**.: Given \(\rho=(I_{1},\ldots,I_{l})\), with \(l\in\{1,\ldots,n\}\), then the objects of \(\Diamond^{\rho}\) are the unions of elements of \(\rho\). The canonical equivalence with the standard \(l\)-cube category \(\square^{l}\) is given by sending \(J\in\operatorname{Obj}(\square^{l})\) to \(\cup_{j\in J}I_{j}\in\operatorname{Obj}(\Diamond^{\rho})\) and vice-versa. So \(\Diamond^{\rho}\) is really just \(\square^{l}\), but with the "indices" \(I_{1},\ldots,I_{l}\) replacing the natural numbers \(1,\ldots,l\). Then \(\square^{n}\) is simply understood as the \(n\)-cube category over the elements \(1,\ldots,n\), while \(\Diamond^{\rho}\) is the \(l\)-cube category over the elements \(I_{1},\ldots,I_{l}\). This point of view is useful for expressing some of the constructions later in the paper, in particular in Section 4.3, and also for keeping track of them. 
Note that if \(\rho=(I_{1},\ldots,I_{l})\) is a canonically ordered partition of \(I\subseteq\underline{n}\), then \((\#I_{1},\ldots,\#I_{l})\) is a naturally ordered integer partition of \(\#I\). The tuple \((\#I_{1},\ldots,\#I_{l})\) is called the **size** of the partition \((I_{1},\ldots,I_{l})\), and also denoted by \(|I_{1},\ldots,I_{l}|=|\rho|\). In general, integer partitions are assumed to be ordered and written as tuples. If not specified otherwise, they are naturally ordered. For simplicity, a partition of \(\underline{n}\) in \(l\) subsets is sometimes called an \(l\)**-partition** of \(\underline{n}\) and \(l\) is called the **length** of \(\rho\). Let \(\rho\) be such an \(l\)-partition of \(\underline{n}\). Then a **coarsement** of \(\rho\) is a partition \(\rho^{\prime}\) of \(\underline{n}\) that consists of (non-empty) unions of the elements of \(\rho\). The length of \(\rho^{\prime}\) is thus necessarily less than or equal to the length of \(\rho\). The set of coarsements of \(\rho\) is written \(\operatorname{coars}(\rho)\). Given \((J_{1},\ldots,J_{l^{\prime}})\in\operatorname{coars}(\rho)\) and \(i\in\{1,\ldots,l^{\prime}\}\), then \(\rho\cap J_{i}\) is defined to be the (canonically ordered) partition of \(J_{i}\) given by the elements of \(\rho\) it is a union of. A partition \(\rho^{\prime}\) of \(\underline{n}\) is a **refinement** of \(\rho\) if \(\rho\) is a coarsement of \(\rho^{\prime}\). For the study of cores in Section 4.3, it is important to understand the different coarsements of length \(l-2\) of an \(l\)-partition. **Lemma 2.2**.: _Let \(\rho=\{I_{1},\ldots,I_{l}\}\) be a partition of \(\underline{n}\). 
Consider two different coarsements_ \[\rho_{ij}=\left\{I_{i}\cup I_{j},I_{1},\ldots,\widehat{I_{i}},\ldots,\widehat{I_{j}},\ldots,I_{l}\right\}\text{ and }\rho_{rs}=\left\{I_{r}\cup I_{s},I_{1},\ldots,\widehat{I_{r}},\ldots,\widehat{I_{s}},\ldots,I_{l}\right\}\] _of length \(l-1\) of \(\rho\), with \(i<j\) and \(r<s\) in \(\{1,\ldots,l\}\). Then_ \[\operatorname{Obj}\left(\Diamond^{\rho_{ij}}\right)\cap\operatorname{Obj}\left(\Diamond^{\rho_{rs}}\right)=\operatorname{Obj}\left(\Diamond^{\rho^{\prime}}\right)\] _with \(\rho^{\prime}\) the coarsement of length \(l-2\) of \(\rho\) given by_ \[\rho^{\prime}=\left\{\begin{array}{ll}\{I_{i}\cup I_{j}\cup I_{r};\ I_{t}\mid t\in\underline{l},t\neq i,j,r\}&\text{ if }i=s\text{ or }j=s,\\ \{I_{i}\cup I_{j}\cup I_{s};\ I_{t}\mid t\in\underline{l},t\neq i,j,s\}&\text{ if }i=r\text{ or }j=r,\\ \{I_{i}\cup I_{j},I_{r}\cup I_{s};\ I_{t}\mid t\in\underline{l},t\neq i,j,r,s\}&\text{ if }i\neq r,s\text{ and }j\neq r,s.\end{array}\right. \tag{4}\] **Definition 2.3**.: _In the situation of the previous lemma, the \((l-2)\)-cube category \(\Diamond^{\rho^{\prime}}\) is denoted \(\Diamond^{\rho_{ij}}\cap\Diamond^{\rho_{rs}}\), and called the **intersection** of the two \((l-1)\)-cube categories \(\Diamond^{\rho_{ij}}\) and \(\Diamond^{\rho_{rs}}\). It is again a full subcategory of \(\square^{n}\)._ _The partition \(\rho^{\prime}\) is itself sometimes written \(\rho_{ij}\sqcap\rho_{rs}\). It is the unique common \((l-2)\)-coarsement of \(\rho_{ij}\) and \(\rho_{rs}\)._ Proof of Lemma 2.2.: A subset \(K\subseteq\underline{n}\) is an object of \(\Diamond^{\rho_{ij}}\) and of \(\Diamond^{\rho_{rs}}\) if and only if \(K\cap I_{t}=\emptyset\) or \(I_{t}\subseteq K\) for all \(t\in\{1,\ldots,l\}\setminus\{i,j,r,s\}\) and * (i) \(I_{i}\cup I_{j}\subseteq K\) and \(I_{r}\cup I_{s}\subseteq K\), i.e. \(I_{i}\cup I_{j}\cup I_{r}\cup I_{s}\subseteq K\), or * (ii) \((I_{i}\cup I_{j})\cap K=\emptyset\) and \((I_{r}\cup I_{s})\cap K=\emptyset\), i.e.
\((I_{i}\cup I_{j}\cup I_{r}\cup I_{s})\cap K=\emptyset\), or * (iii) \(I_{i}\cup I_{j}\subseteq K\) and \((I_{r}\cup I_{s})\cap K=\emptyset\), or * (iv) \(I_{r}\cup I_{s}\subseteq K\) and \((I_{i}\cup I_{j})\cap K=\emptyset\). If \(\{i,j\}\cap\{r,s\}\neq\emptyset\), the cases (iii) and (iv) are not possible. The intersection \(\{i,j\}\cap\{r,s\}\) then has exactly one element because \(\rho_{ij}\neq\rho_{rs}\), and by (i) and (ii) \(K\) is an object of \(\Diamond^{\rho^{\prime}}\) with one of the first two descriptions of \(\rho^{\prime}\) in (4). If \(\{i,j\}\cap\{r,s\}=\emptyset\) then \(K\) is an object of \(\Diamond^{\rho^{\prime}}\) with the third description of \(\rho^{\prime}\). ### Partitions and signs For \(\sigma\in S_{n}\) and \(I\subseteq\underline{n}\), denote by \(\epsilon(\sigma,I)\) the _sign_ (determined by the parity of the number of inversions) of \(\sigma|_{I}\colon I\to\sigma(I)\), where \(I\) and \(\sigma(I)\) are equipped with the natural order inherited from the natural numbers. That is, \[\epsilon(\sigma,I):=(-1)^{\#\{(i,j)\in I\times I\mid i<j,\sigma(i)>\sigma(j)\}}=\prod_{\begin{subarray}{c}i,j\in I\\ i<j\end{subarray}}\frac{\sigma(j)-\sigma(i)}{j-i}.\] For \(\sigma,\nu\in S_{n}\) and \(I\subseteq\underline{n}\), the product formula \[\epsilon(\sigma\nu,I)=\epsilon(\sigma,\nu(I))\cdot\epsilon(\nu,I) \tag{5}\] is immediate. An ordered partition \(\rho=(I_{1},\ldots,I_{l})\) of \(I\) defines as follows a reordering of the elements of \(I\). If \(I_{j}=\{i_{1}^{j},\ldots,i_{k_{j}}^{j}\}\) for \(j=1,\ldots,l\), with \(i_{1}^{j},\ldots,i_{k_{j}}^{j}\in\mathbb{N}\) naturally ordered, then \[i_{1}^{1},\ldots,i_{k_{1}}^{1},i_{1}^{2},\ldots,i_{k_{2}}^{2},\ldots,i_{1}^{l},\ldots,i_{k_{l}}^{l} \tag{6}\] is a reordering of the elements of \(I\). For instance, the (non-canonically) ordered partition \((\{2\},\{1,4\},\{3\})\) of \(\{1,2,3,4\}\) defines the reordering \(2,1,4,3\) of the numbers \(1,2,3,4\). 
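Since everything below hinges on this inversion count, it may help to see it computed explicitly. The following Python sketch (all function and variable names are ours, not the paper's) implements \(\epsilon(\sigma,I)\), checks the product formula (5) on one example, and reproduces the reordering \(2,1,4,3\) above.

```python
from itertools import combinations

def epsilon(sigma, I):
    """Sign of sigma restricted to I: parity of the number of pairs
    i < j in I with sigma(i) > sigma(j)."""
    I = sorted(I)
    inversions = sum(1 for i, j in combinations(I, 2) if sigma[i] > sigma[j])
    return (-1) ** inversions

# Permutations of {1,2,3,4} given as dicts (example data, chosen by us):
sigma = {1: 2, 2: 3, 3: 1, 4: 4}
nu = {1: 1, 2: 4, 3: 3, 4: 2}

# Product formula (5): epsilon(sigma.nu, I) = epsilon(sigma, nu(I)) * epsilon(nu, I)
sigma_nu = {i: sigma[nu[i]] for i in nu}  # composition sigma after nu
I = {1, 2, 4}
nu_I = {nu[i] for i in I}
assert epsilon(sigma_nu, I) == epsilon(sigma, nu_I) * epsilon(nu, I)

# The ordered partition ({2},{1,4},{3}) of {1,2,3,4} induces the reordering (6):
rho = [{2}, {1, 4}, {3}]
reordering = [i for block in rho for i in sorted(block)]
print(reordering)  # [2, 1, 4, 3]
```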
Given \(n\in\mathbb{N}\) and a (not necessarily naturally) ordered integer partition \((i_{1},\ldots,i_{l})\) of \(n\), set the **canonical partition**\(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{l}}\) of \(\{1,\ldots,n\}=\underline{n}\) to be the unique ordered partition \(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{l}}:=(K_{1},\ldots,K_{l})\) of \(\underline{n}\) with \(\#K_{s}=i_{s}\) for \(s=1,\ldots,l\) and such that the order on \(\underline{n}\) induced by \(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{l}}\) is the natural one. For instance, for the integer partition \((1,2,1,3,3)\) of the number \(10\), the canonical partition \(\rho_{\mathrm{can}}^{1,2,1,3,3}\) of \(\underline{10}\) is the partition \((\{1\},\{2,3\},\{4\},\{5,6,7\},\{8,9,10\})\). Given \(\sigma\in S_{n}\) and an ordered partition \(\rho=(I_{1},\ldots,I_{l})\) of a subset \(I\subseteq\underline{n}\), write \(\sigma(\rho)\) for the ordered partition \((\sigma(I_{1}),\ldots,\sigma(I_{l}))\). Given a subset \(I\subseteq\underline{n}\) and a partition \(\rho=(I_{1},\ldots,I_{l})\) of \(I\), set \(\#I_{j}=:i_{j}\) and \(i=\#I=i_{1}+\ldots+i_{l}\). Consider \(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{l}}=(K_{1},\ldots,K_{l})\) and choose a permutation \(\sigma\in S_{n}\) with \[\sigma(K_{j})=I_{j}\quad\text{ for }\quad j=1,\ldots,l,\] i.e. with \(\rho=\sigma\left(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{l}}\right)\). Set then \[\mathrm{sgn}(\rho):=\frac{\prod_{j=1}^{l}\epsilon(\sigma,K_{j})}{\epsilon( \sigma,\{1,\ldots,i\})},\] the **sign of the partition**\(\rho\). **Lemma 2.4**.: _In the situation above, the sign \(\mathrm{sgn}(\rho)\) does not depend on the choice of \(\sigma\in S_{n}\) with \(\sigma(K_{j})=I_{j}\) for \(j=1,\ldots,l\). 
Hence \(\mathrm{sgn}(\rho)\) equals_ \[\epsilon(\sigma,\{1,\ldots,i\})\] _for a permutation \(\sigma\in S_{n}\) with \(\rho=\sigma\left(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{l}}\right)\) and preserving in addition the order of the sets \(K_{1},\ldots,K_{l}\)._ _As a consequence, \(\mathrm{sgn}(\rho)\) is the sign of the unique permutation \(I\to I\) sending \(I\) in its canonical order to \(I\) in its order induced by \(\rho\) as in (6)._ Proof.: Assume that \(\nu\in S_{n}\) is a second permutation with \(\nu(K_{j})=I_{j}\) for \(j=1,\ldots,l\). Then \(\nu=\lambda\circ\sigma\) with \(\lambda\in S_{n}\) satisfying \(\lambda(I_{j})=I_{j}\) for \(j=1,\ldots,l\). First assume that \(\lambda|_{I}\) is a transposition, say \(\lambda=(s,t)\) with \(s\neq t\) in \(\{1,\ldots,n\}\). Since \(\lambda\) preserves each of the sets \(I_{j}\), \(j=1,\ldots,l\), there exists an \(r\in\{1,\ldots,l\}\) such that \(\lambda|_{I_{j}}=\operatorname{id}_{I_{j}}\) for \(j\in\{1,\ldots,l\}\setminus\{r\}\) and \(\lambda|_{I_{r}}\) is a transposition. Then \[\prod_{j=1}^{l}\epsilon(\lambda,I_{j})=\epsilon(\lambda,I_{r})=-1=\epsilon( \lambda,I). \tag{7}\] The general case follows with (5): a permutation \(\lambda\) preserving each of the sets \(I_{1},\ldots,I_{l}\) can be written as a product of transpositions \(\lambda_{1},\ldots,\lambda_{s}\) preserving these sets. 
Then \[\begin{split}\prod_{j=1}^{l}\epsilon(\lambda,I_{j})&=\prod_{j=1}^{l}\epsilon(\lambda_{s}\circ\ldots\circ\lambda_{1},I_{j})\stackrel{(5)}{=}\prod_{j=1}^{l}\prod_{k=1}^{s}\epsilon(\lambda_{k},(\lambda_{k-1}\circ\ldots\circ\lambda_{1})(I_{j}))\\ &=\prod_{k=1}^{s}\left(\prod_{j=1}^{l}\epsilon(\lambda_{k},(\lambda_{k-1}\circ\ldots\circ\lambda_{1})(I_{j}))\right)\\ &\stackrel{(7)}{=}\prod_{k=1}^{s}\epsilon(\lambda_{k},(\lambda_{k-1}\circ\ldots\circ\lambda_{1})(I))\stackrel{(5)}{=}\epsilon(\lambda,I).\end{split} \tag{8}\] Now the claim follows with a similar computation: \[\frac{\prod_{j=1}^{l}\epsilon(\nu,K_{j})}{\epsilon(\nu,\{1,\ldots,i\})}=\frac{\prod_{j=1}^{l}\epsilon(\lambda\sigma,K_{j})}{\epsilon(\lambda\sigma,\{1,\ldots,i\})}\stackrel{(5)}{=}\frac{\prod_{j=1}^{l}\epsilon(\lambda,I_{j})}{\epsilon(\lambda,I)}\frac{\prod_{j=1}^{l}\epsilon(\sigma,K_{j})}{\epsilon(\sigma,\{1,\ldots,i\})}\stackrel{(8)}{=}\frac{\prod_{j=1}^{l}\epsilon(\sigma,K_{j})}{\epsilon(\sigma,\{1,\ldots,i\})}.\] To see the last claim of the lemma, note that there is a unique bijection \(f\colon I\to\{1,\ldots,i\}\) preserving the natural order of both sets. The sign of this map is hence \(1\), and its composition with \(\sigma\) then gives the bijection \(\sigma\circ f\colon I\to I\), which sends \(I\) in its canonical order to \(I\) in the order induced by \(\rho\) as in (6). Finally, note that given \(i\in\underline{n}\) and an ordered integer partition \((i_{1},\ldots,i_{l})\) of \(i\), the canonical partition \(\rho_{\operatorname{can}}^{i_{1},\ldots,i_{l}}\) of \(\underline{i}\) has sign \(1\) by construction. 
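The last claim of Lemma 2.4 gives a convenient way to compute \(\mathrm{sgn}(\rho)\) without choosing a permutation \(\sigma\in S_{n}\): count the inversions of the reordering (6). A small Python sketch of this (function name ours):

```python
from itertools import combinations

def sgn_partition(rho):
    """Sign of an ordered partition rho = (I_1, ..., I_l): the parity of
    the number of inversions of the reordering of the underlying set
    induced by rho, as in the last claim of Lemma 2.4."""
    reordering = [i for block in rho for i in sorted(block)]
    inversions = sum(1 for a, b in combinations(range(len(reordering)), 2)
                     if reordering[a] > reordering[b])
    return (-1) ** inversions

# A canonical partition has sign 1 by construction:
print(sgn_partition([{1}, {2, 3}, {4}, {5, 6, 7}, {8, 9, 10}]))  # 1

# Exchanging two blocks of sizes a and b multiplies the sign by (-1)^(a*b):
A, B = {1, 2}, {3, 4, 5}
print(sgn_partition([B, A]) == (-1) ** (len(A) * len(B)) * sgn_partition([A, B]))  # True
```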
### Integer partitions of natural numbers For \(n\in\mathbb{N}\) let \(\mathcal{P}(n)\) be the set of naturally ordered integer partitions of the natural numbers \(1\leq j\leq n\): Footnote 1: The reader should be aware that the notations \(\mathcal{P}(n)\) and \(\mathcal{P}(\underline{n})\) are not consistent: \(\mathcal{P}(n)\) is the set of ordered integer partitions of the integers between \(1\) and \(n\), while \(\mathcal{P}(\underline{n})\) is the set of canonically ordered partitions of the set \(\underline{n}\). These conventions differ for simplicity of some of the formulas. \[\mathcal{P}(n):=\left\{(i_{1},\ldots,i_{l})\in\mathbb{N}^{l}\mid l\in\mathbb{N},1\leq i_{1}\leq\ldots\leq i_{l}\text{ and }1\leq i_{1}+\ldots+i_{l}\leq n\right\}.\] Given a partition \(p=(i_{1},\ldots,i_{l})\in\mathcal{P}(n)\), the sum \(\Sigma p\) of \(p\) is simply \[\Sigma p:=\sum_{j=1}^{l}i_{j}.\] Given \(p\in\mathcal{P}(n)\), the number \(\sharp p\) is the length of the list \(p\), i.e. the number of its elements. As explained in the preceding section, there is a bijection between ordered integer partitions of a number \(i\in\mathbb{N}\) and ordered partitions of \(\underline{i}\) inducing the canonical order on the elements of \(\underline{i}\). An integer partition \((i_{1},\ldots,i_{l})\) of \(k\) corresponds via this bijection to the unique partition \(\rho_{\operatorname{can}}^{i_{1},\ldots,i_{l}}\). ## 3. The category of \(\mathbb{N}\)-manifolds This section begins by recalling the definitions of \(\mathbb{N}\)-manifolds and their morphisms, and by discussing in detail the split case. The reader is referred to [10, 11] for more details. 
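The set \(\mathcal{P}(n)\) just defined is finite and easy to enumerate, and the bijection \(p\mapsto\rho_{\operatorname{can}}^{p}\) recalled above is equally concrete. A Python sketch (function names ours):

```python
def P(n):
    """The naturally ordered integer partitions (i_1 <= ... <= i_l)
    with 1 <= i_1 + ... + i_l <= n, i.e. the set P(n) defined above."""
    out = []
    def extend(prefix, total, minimum):
        for i in range(minimum, n - total + 1):
            part = prefix + (i,)
            out.append(part)
            extend(part, total + i, i)
    extend((), 0, 1)
    return out

def rho_can(p):
    """The canonical partition rho_can^p of {1, ..., sum(p)}: consecutive
    blocks whose sizes are prescribed by p."""
    blocks, start = [], 1
    for i in p:
        blocks.append(set(range(start, start + i)))
        start += i
    return blocks

print(sorted(P(3)))              # [(1,), (1, 1), (1, 1, 1), (1, 2), (2,), (3,)]
print(rho_can((1, 2, 1, 3, 3)))  # the partition {1},{2,3},{4},{5,6,7},{8,9,10} of 10
```

Here `sum(p)` and `len(p)` play the roles of \(\Sigma p\) and \(\sharp p\).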
### Definitions An \(\mathbb{N}\)**-manifold**\(\mathcal{M}\) of degree \(n\in\mathbb{N}\) and dimension \((m;r_{1},\ldots,r_{n})\) is a smooth manifold \(M\) of dimension \(m\) together with a sheaf \(C^{\infty}(\mathcal{M})\) of \(\mathbb{N}\)-graded, graded commutative, associative, unital \(C^{\infty}(M)\)-algebras, that is locally freely generated by \(r_{1}+\ldots+r_{n}\) elements \(\xi_{1}^{1},\ldots,\xi_{1}^{r_{1}}\), \(\xi_{2}^{1},\ldots,\xi_{2}^{r_{2}},\ldots\), \(\xi_{n}^{1},\ldots,\xi_{n}^{r_{n}}\) with \(\xi_{i}^{j}\) of degree \(i\) for \(i\in\{1,\ldots,n\}\) and \(j\in\{1,\ldots,r_{i}\}\). A morphism of \(\mathbb{N}\)-manifolds \(\mu\colon\mathcal{M}\to\mathcal{N}\) over a smooth map \(\mu_{0}\colon M\to N\) of the underlying smooth manifolds is a morphism \(\mu^{\star}\colon C^{\infty}(\mathcal{N})\to C^{\infty}(\mathcal{M})\) of sheaves of graded algebras over \(\mu_{0}^{\star}\colon C^{\infty}(N)\to C^{\infty}(M)\). Note that the degree \(0\) elements of \(C^{\infty}(\mathcal{M})\) are precisely the smooth functions on \(M\). In the following, an \(\mathbb{N}\)-manifold of degree \(n\in\mathbb{N}\) is called an \([n]\)**-manifold**. \(|\xi|\) is the degree of a homogeneous element \(\xi\in C^{\infty}(\mathcal{M})\), i.e. an element which can be written as a sum of functions of the same degree, and \(C^{\infty}(\mathcal{M})^{i}\) is the space of elements of degree \(i\) in \(C^{\infty}(\mathcal{M})\). Note that a \([1]\)-manifold over a smooth manifold \(M\) is equivalent to a locally free and finitely generated sheaf of \(C^{\infty}(M)\)-modules, hence to a vector bundle over \(M\). The category of \([n]\)-manifolds is written \([\mathsf{n}]\mathsf{Man}\), and the category of \(\mathbb{N}\)-manifolds is written \(\mathbb{N}\mathsf{Man}\). 
Given an \([n]\)-manifold \(\mathcal{M}\) over a smooth manifold \(M\) and an open subset \(U\subseteq M\), the \([n]\)-manifold \(\mathcal{M}|_{U}\) over \(U\) is defined by \(C^{\infty}(\mathcal{M}|_{U}):=C^{\infty}(\mathcal{M})|_{U}\). For simplicity, it is called the **restriction** of \(\mathcal{M}\) to \(U\). ### The category of split \(\mathbb{N}\)-manifolds Let \(E\) be a smooth vector bundle of rank \(r\) over a smooth manifold \(M\) of dimension \(m\). Assign the degree \(n\) to the fiber coordinates of \(E\). This defines \(E[-n]\), an \([n]\)-manifold of dimension \((m;r_{1}=0,\ldots,r_{n-1}=0,r_{n}=r)\) with generators \(C^{\infty}(E[-n])^{n}=\Gamma(E^{*})\) and \(C^{\infty}(E[-n])^{0}=C^{\infty}(M)\). Now let \(E_{1},E_{2},\ldots,E_{n}\) be smooth vector bundles of finite ranks \(r_{1},\ldots,r_{n}\) over \(M\) and assign the degree \(i\) to the fiber coordinates of \(E_{i}\), for each \(i=1,\ldots,n\). The direct sum \(E=E_{1}\oplus\ldots\oplus E_{n}\) is a graded vector bundle with grading concentrated in degrees \(-1,\ldots,-n\). The \([n]\)-manifold \(E_{1}[-1]\oplus\ldots\oplus E_{n}[-n]\) has local basis sections of \(E_{i}^{*}\) as local generators of degree \(i\), for \(i=1,\ldots,n\), and so dimension \((m;r_{1},\ldots,r_{n})\). The \([n]\)-manifold \(E_{1}[-1]\oplus\ldots\oplus E_{n}[-n]\) is called a **split \([n]\)-manifold**. For instance, in the case \(n=3\), choose three vector bundles \(E_{1}\), \(E_{2}\) and \(E_{3}\) of ranks \(r_{1}\), \(r_{2}\) and \(r_{3}\) over a smooth manifold \(M\). 
Then the \([3]\)-manifold \(\mathcal{M}=E_{1}[-1]\oplus E_{2}[-2]\oplus E_{3}[-3]\) is defined by \(C^{\infty}(\mathcal{M})^{0}=C^{\infty}(M)\), \(C^{\infty}(\mathcal{M})^{1}=\Gamma(E_{1}^{*})\), \(C^{\infty}(\mathcal{M})^{2}=\Gamma(E_{2}^{*}\oplus\wedge^{2}E_{1}^{*})\), \(C^{\infty}(\mathcal{M})^{3}=\Gamma(E_{3}^{*}\oplus E_{1}^{*}\otimes E_{2}^{*}\oplus\wedge^{3}E_{1}^{*})\), and \[C^{\infty}(\mathcal{M})^{4}=\Gamma(E_{1}^{*}\otimes E_{3}^{*}\oplus S^{2}E_{2}^{*}\oplus\wedge^{2}E_{1}^{*}\otimes E_{2}^{*}\oplus\wedge^{4}E_{1}^{*}),\] etc. Set \(E_{j}^{*}\odot E_{k}^{*}\) to mean * \(\wedge\) if \(j=k\) is an odd number, * the symmetric product \(\cdot\) if \(j=k\) is an even number, * \(\otimes\) if \(j\neq k\). Then given an \([n]\)-manifold \(\mathcal{M}=E_{1}[-1]\oplus\ldots\oplus E_{n}[-n]\), its functions of degree \(k\) are elements of \[C^{\infty}(\mathcal{M})^{k}=\bigoplus_{\begin{subarray}{c}1\leq i_{1}\leq\ldots\leq i_{l}\leq n,\\ i_{1}+\ldots+i_{l}=k\end{subarray}}\Gamma\left(E_{i_{1}}^{*}\odot\ldots\odot E_{i_{l}}^{*}\right)\] for all \(k\in\mathbb{N}\). The graded product \(\odot\) on \[C^{\infty}(\mathcal{M})=\bigoplus_{k\in\mathbb{N}}\bigoplus_{\begin{subarray}{c}1\leq i_{1}\leq\ldots\leq i_{l}\leq n,\\ i_{1}+\ldots+i_{l}=k\end{subarray}}\Gamma\left(E_{i_{1}}^{*}\odot\ldots\odot E_{i_{l}}^{*}\right)\] is given by the following lemma. **Lemma 3.1**.: _Let \(\mathcal{M}=E_{1}[-1]\oplus\ldots\oplus E_{n}[-n]\) be a split \([n]\)-manifold over a smooth manifold \(M\) and consider \(\xi_{1},\ldots,\xi_{l}\in C^{\infty}(\mathcal{M})\) such that_ \[\xi_{j}\in\Gamma\left(E_{i_{1}^{j}}^{*}\odot\ldots\odot E_{i_{l_{j}}^{j}}^{*}\right)\] _with \(1\leq i_{1}^{j}\leq\ldots\leq i_{l_{j}}^{j}\leq n\), for \(j=1,\ldots,l\). That is, \(\xi_{j}\) has degree \(d_{j}:=i_{1}^{j}+\ldots+i_{l_{j}}^{j}\). Set \(p_{j}=(i_{1}^{j},\ldots,i_{l_{j}}^{j})\). 
It is an ordered integer partition of \(d_{j}\) for \(j=1,\ldots,l\)._ _The list \(p\) obtained as the union of \(p_{1},\ldots,p_{l}\), and canonically reordered, is then an integer partition of its sum \(d=d_{1}+\ldots+d_{l}\) with \(s:=l_{1}+\ldots+l_{l}\) elements. Consider the canonical partition \(\rho_{\mathrm{can}}^{p}=(K_{1},\ldots,K_{s})\) of \(\{1,\ldots,d\}\) associated to it. The graded product_ \[\xi_{1}\odot\ldots\odot\xi_{l},\] _a section of \(E_{|K_{1}|}^{*}\odot\ldots\odot E_{|K_{s}|}^{*}\), is defined on \(e_{K_{1}}\in E_{|K_{1}|},\ldots,e_{K_{s}}\in E_{|K_{s}|}\) by_ \[\sum_{\begin{subarray}{c}\rho_{\mathrm{can}}^{p}=\rho_{1}\cup\ldots\cup\rho_{l}\text{ as sets and}\\ \rho_{1},\ldots,\rho_{l}\text{ canonically ordered with}\\ |\rho_{j}|=p_{j}\text{ for }j=1,\ldots,l\end{subarray}}\mathrm{sgn}(\rho_{1},\ldots,\rho_{l})\cdot\xi_{1}(e_{K}\mid K\in\rho_{1})\cdot\ldots\cdot\xi_{l}(e_{K}\mid K\in\rho_{l}). \tag{9}\] _Here, \((\rho_{1},\ldots,\rho_{l})\) is the partition \(\rho_{\mathrm{can}}^{p}\) of \(\underline{d}\), but reordered in the order of \(\rho_{1},\ldots,\rho_{l}\)._ Note that here, the notation \(\xi_{j}(e_{K}\mid K\in\rho_{j})\) is short for the following. If \(\rho_{j}=(K_{i_{1}},\ldots,K_{i_{l_{j}}})\), then \[\xi_{j}(e_{K}\mid K\in\rho_{j})=\xi_{j}(e_{K_{i_{1}}},\ldots,e_{K_{i_{l_{j}}}}).\] **Example 3.2**.: Before considering the proof, the reader can convince themselves with a few examples that (9) works as it should. 1. Take \(\xi_{1}\in\Gamma(E_{1}^{*})\), \(\eta_{1}\odot\eta_{2}\in\Gamma(E_{1}^{*}\odot E_{2}^{*})\) and \(\tau_{2}\odot\tau_{3}\in\Gamma(E_{2}^{*}\odot E_{3}^{*})\). Then \(p_{1}=(1)\), \(p_{2}=(1,2)\) and \(p_{3}=(2,3)\), and so \(p=(1,1,2,2,3)\). 
Here \[\rho_{\mathrm{can}}^{p}=(\{1\},\{2\},\{3,4\},\{5,6\},\{7,8,9\})\] and so \((\rho_{1},\rho_{2},\rho_{3})\) range over * \(((\{1\}),(\{2\},\{3,4\}),(\{5,6\},\{7,8,9\}))\) with sign \(1\), * \(((\{2\}),(\{1\},\{3,4\}),(\{5,6\},\{7,8,9\}))\) with sign \(-1\), * \(((\{1\}),(\{2\},\{5,6\}),(\{3,4\},\{7,8,9\}))\) with sign \(1\), * \(((\{2\}),(\{1\},\{5,6\}),(\{3,4\},\{7,8,9\}))\) with sign \(-1\). Therefore a quick computation shows that \((\xi_{1}\odot(\eta_{1}\odot\eta_{2})\odot(\tau_{2}\odot\tau_{3}))(e_{1},e_{2},e_{34},e_{56},e_{789})\) equals \[(\xi_{1}(e_{1})\eta_{1}(e_{2})-\xi_{1}(e_{2})\eta_{1}(e_{1}))\cdot(\eta_{2}(e_{34})\tau_{2}(e_{56})+\eta_{2}(e_{56})\tau_{2}(e_{34}))\cdot\tau_{3}(e_{789}).\] 2. Take \(\xi\in\Gamma(E_{3}^{*})\), \(\eta\in\Gamma(E_{1}^{*})\) and \(\tau\in\Gamma(E_{2}^{*})\). Then \(p_{1}=(3)\), \(p_{2}=(1)\) and \(p_{3}=(2)\) and so \(p=(1,2,3)\) and \(\rho_{\mathrm{can}}^{p}=(\{1\},\{2,3\},\{4,5,6\})\). The only possible list \((\rho_{1},\rho_{2},\rho_{3})\) is here \[((\{4,5,6\}),(\{1\}),(\{2,3\})),\] which has sign \(-1\). Hence \[(\xi\odot\eta\odot\tau)(e_{1},e_{23},e_{456})=-\xi(e_{456})\cdot\eta(e_{1})\cdot\tau(e_{23}),\] which shows \[\xi\odot\eta\odot\tau=-\eta\otimes\tau\otimes\xi.\] 3. The reader is invited to check that \(\xi\odot\eta\) for \(\xi\in\Gamma(\wedge^{k}E_{1}^{*})\) and \(\eta\in\Gamma(\wedge^{l}E_{1}^{*})\) equals \[\xi\odot\eta=\frac{(k+l)!}{k!l!}\operatorname{Alt}(\xi\otimes\eta)=\xi\wedge\eta.\] This example illustrates nicely the compatibility of the graded tensor in Lemma 3.1 with the usual wedge product of sections of \(\Gamma(\wedge^{\bullet}E_{1}^{*})\). Proof of Lemma 3.1.: First, the graded symmetry of this 'product' in the entries \(\xi_{1},\ldots,\xi_{l}\) is checked as follows: It suffices to show that for \(i=1,\ldots,l-1\), exchanging \(\xi_{i}\) with \(\xi_{i+1}\) in (9) is the same as a multiplication by the factor \((-1)^{d_{i}d_{i+1}}\). 
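Before continuing with the proof, note that the bookkeeping in (9) is easy to get wrong by hand, so Example 3.2 (1) can be verified mechanically: enumerate all lists \((\rho_{1},\rho_{2},\rho_{3})\) with the prescribed sizes and compute their signs. A Python sketch (helper names ours):

```python
from itertools import combinations

def sgn_partition(rho):
    """Sign of an ordered partition, via inversions of the induced reordering."""
    reordering = [i for block in rho for i in sorted(block)]
    inv = sum(1 for a, b in combinations(range(len(reordering)), 2)
              if reordering[a] > reordering[b])
    return (-1) ** inv

def splittings(blocks, sizes):
    """All lists (rho_1, ..., rho_l): each rho_j keeps its blocks in
    canonical (min-element) order and has block cardinalities sizes[j];
    together the rho_j use every block of `blocks` exactly once."""
    if not sizes:
        if not blocks:
            yield ()
        return
    want = sizes[0]
    for idx in combinations(range(len(blocks)), len(want)):
        rho = tuple(blocks[i] for i in idx)
        if tuple(len(b) for b in rho) == want:
            rest = [b for i, b in enumerate(blocks) if i not in idx]
            for tail in splittings(rest, sizes[1:]):
                yield (rho,) + tail

# rho_can^p for p = (1,1,2,2,3), as in Example 3.2 (1):
K = [{1}, {2}, {3, 4}, {5, 6}, {7, 8, 9}]
terms = [(s, sgn_partition([b for rho in s for b in rho]))
         for s in splittings(K, [(1,), (1, 2), (2, 3)])]
for s, sign in terms:
    print(s, sign)  # the four lists of Example 3.2 (1), each with its sign
```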
This exchange amounts to exchanging \(p_{i}\) with \(p_{i+1}\) in the list of partitions, and so exchanging the roles of \(\rho_{i}\) and \(\rho_{i+1}\) in each summand of (9), which only amounts to replacing in each summand the factor \(\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})\) by \(\operatorname{sgn}(\rho_{1},\ldots,\rho_{i+1},\rho_{i},\ldots,\rho_{l})\). But given a permutation \(\sigma\in S_{d}\) such that \(\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})=(-1)^{\sigma}\), the composition of \(\sigma\) with \(d_{i}\cdot d_{i+1}\) transpositions on both sides3 gives a permutation \(\sigma^{\prime}\in S_{d}\) such that \(\operatorname{sgn}(\rho_{1},\ldots,\rho_{i+1},\rho_{i},\ldots,\rho_{l})=(-1)^{\sigma^{\prime}}\). Hence Footnote 3: Note that the canonical partitions \(\rho_{\mathrm{can}}^{(p_{1},\ldots,p_{l})}\) and \(\rho_{\mathrm{can}}^{(p_{1},\ldots,p_{i+1},p_{i},\ldots,p_{l})}\) are not equal in general. \[\operatorname{sgn}(\rho_{1},\ldots,\rho_{i+1},\rho_{i},\ldots,\rho_{l})=(-1)^{d_{i}d_{i+1}}\operatorname{sgn}(\rho_{1},\ldots,\rho_{l}).\] Then check that (9) is indeed an element of \(\Gamma\left(E_{|K_{1}|}^{*}\odot\ldots\odot E_{|K_{s}|}^{*}\right)\). The following shows that if \(|K_{i}|=|K_{j}|=a\) for some \(i<j\in\{1,\ldots,s\}\), then exchanging \(e_{K_{i}}\) with \(e_{K_{j}}\) in (9) amounts to multiplying it by \((-1)^{a}=(-1)^{a\cdot a}\). Consider a list of canonically ordered partitions \(\rho_{1},\ldots,\rho_{l}\) the union of which is \(\rho_{\mathrm{can}}^{p}\) and such that \(\rho_{j}\) has size \(p_{j}\) for \(j=1,\ldots,l\). Write \((\rho_{1},\ldots,\rho_{l})=(I_{1},\ldots,I_{s})\). Exchanging \(K_{i}\) and \(K_{j}\) in this list defines a new list \((\rho_{1},\ldots,\rho_{l})^{ij}=(\rho_{1}^{*},\ldots,\rho_{l}^{*})=(I_{1}^{*},\ldots,I_{s}^{*})\) with size \(p\). First assume that the partitions \(\rho_{1}^{*},\ldots,\rho_{l}^{*}\) are all still (canonically) ordered. 
By the last claim of Lemma 2.4, \(\operatorname{sgn}\left((\rho_{1},\ldots,\rho_{l})^{ij}\right)=(-1)^{a}\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})\), since the set \(\underline{d}\) in its order induced by \((\rho_{1},\ldots,\rho_{l})\) is sent to \(\underline{d}\) in its order induced by \((\rho_{1}^{*},\ldots,\rho_{l}^{*})\) by \(a\) transpositions exchanging the elements of \(K_{i}\) with the elements of \(K_{j}\). As a consequence, exchanging \(e_{K_{i}}\) with \(e_{K_{j}}\) in the summand of (9) defined by \((\rho_{1},\ldots,\rho_{l})\) yields \[\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})\cdot\xi_{1}(e_{K}\mid K\in\rho_{1}^{*})\cdot\ldots\cdot\xi_{l}(e_{K}\mid K\in\rho_{l}^{*})\] \[=(-1)^{a}\operatorname{sgn}(\rho_{1}^{*},\ldots,\rho_{l}^{*})\cdot\xi_{1}(e_{K}\mid K\in\rho_{1}^{*})\cdot\ldots\cdot\xi_{l}(e_{K}\mid K\in\rho_{l}^{*}),\] which is \((-1)^{a}\) times the term of (9) defined by \((\rho_{1}^{*},\ldots,\rho_{l}^{*})\). Assume now that \(K_{i}\in\rho_{r}\) and \(K_{j}\in\rho_{t}\) for some \(r,t\in\{1,\ldots,l\}\), and that exchanging \(K_{i}\) with \(K_{j}\) leads to one or both of \(\rho_{r}^{*}\) and \(\rho_{t}^{*}\) not being canonically ordered anymore. If \(r=t\), then exchanging \(e_{K_{i}}\) and \(e_{K_{j}}\) in the summand of (9) defined by \((\rho_{1},\ldots,\rho_{l})\) multiplies the same term by \((-1)^{a}\) since \(\xi_{r}\) is graded-symmetric. Assume next that \(r\neq t\) and replacing \(K_{i}\) by \(K_{j}\) in \(\rho_{r}\) leads to a partition which is not ordered anymore. Without loss of generality, \(K_{i}\) needs to be exchanged with one element \(K_{h}\) in this partition for it to become canonically ordered again. The obtained ordered partition is then denoted by \(\rho_{r}^{*}\) as above. Assume without loss of generality that after this step, \(\rho_{1}^{*},\ldots,\rho_{l}^{*}\) are all ordered. 
By Lemma 2.4, \(\operatorname{sgn}(\rho_{1}^{*},\ldots,\rho_{l}^{*})=(-1)^{2a}\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})=\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})\). Then exchanging \(e_{K_{i}}\) with \(e_{K_{j}}\) in the summand of (9) defined by \((\rho_{1},\ldots,\rho_{l})\) yields \[\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})\cdot\xi_{1}(e_{K}\mid K\in\rho_{1}^{*})\cdot\ldots\cdot(-1)^{a}\cdot\xi_{r}(e_{K}\mid K\in\rho_{r}^{*})\cdot\ldots\cdot\xi_{l}(e_{K}\mid K\in\rho_{l}^{*})\] \[=(-1)^{a}\cdot\operatorname{sgn}(\rho_{1}^{*},\ldots,\rho_{l}^{*})\cdot\xi_{1}(e_{K}\mid K\in\rho_{1}^{*})\cdot\ldots\cdot\xi_{r}(e_{K}\mid K\in\rho_{r}^{*})\cdot\ldots\cdot\xi_{l}(e_{K}\mid K\in\rho_{l}^{*}).\] The remaining cases are treated similarly. The above shows that exchanging \(e_{K_{i}}\) with \(e_{K_{j}}\) in (9) multiplies all summands of (9) by \((-1)^{a}=(-1)^{a\cdot a}\). As a consequence \[(\xi_{1}\odot\ldots\odot\xi_{l})(e_{K_{1}},\ldots,e_{K_{j}},\ldots,e_{K_{i}},\ldots,e_{K_{s}})=(-1)^{a}\cdot(\xi_{1}\odot\ldots\odot\xi_{l})(e_{K_{1}},\ldots,e_{K_{s}}).\] That is, \(\xi_{1}\odot\ldots\odot\xi_{l}\) is graded symmetric. Then show that the product \(\odot\) coincides with the graded-symmetric product on generators. Take \(\xi_{j}\in\Gamma(E_{i_{j}}^{*})\) for \(j=1,\ldots,l\). Set for \(i=1,\ldots,n\) \[M_{i}=\{j\in\{1,\ldots,l\}\mid i_{j}=i\}.\] Set \(\mathcal{S}_{i}\) to be the set of permutations of \(M_{i}\) for \(i=1,\ldots,n\). Assume without loss of generality (see the first step of this proof) that \(1\leq i_{1}\leq\ldots\leq i_{l}\leq n\). Then \(p=(i_{1},\ldots,i_{l})\) and (9) reads as follows on \(e_{K_{1}},\ldots,e_{K_{l}}\), with \(\rho_{\mathrm{can}}^{p}=(K_{1},\ldots,K_{l})\). 
\[(\xi_{1}\odot\ldots\odot\xi_{l})(e_{K_{1}},\ldots,e_{K_{l}})=\sum_{\begin{subarray}{c}\rho_{\mathrm{can}}^{(i_{1},\ldots,i_{l})}=(I_{1},\ldots,I_{l})\text{ as sets}\\ \#I_{j}=i_{j}\text{ for }j=1,\ldots,l\end{subarray}}\operatorname{sgn}(I_{1},\ldots,I_{l})\cdot\xi_{1}(e_{I_{1}})\cdot\ldots\cdot\xi_{l}(e_{I_{l}}).\] A choice of a (non-canonically) ordered partition \((I_{1},\ldots,I_{l})\) of \(\underline{d}\) such that \(\rho_{\mathrm{can}}^{(i_{1},\ldots,i_{l})}=(I_{1},\ldots,I_{l})\) as sets and \(\#I_{j}=i_{j}\) for \(j=1,\ldots,l\) amounts to a choice of order of each of the tuples \[(K_{j}\mid j\in M_{i})\,,\] hence of a permutation \(\sigma_{i}\in\mathcal{S}_{i}\) for \(i=1,\ldots,n\). It is then easy to see that \[\sum_{\begin{subarray}{c}\rho_{\mathrm{can}}^{(i_{1},\ldots,i_{l})}=(I_{1},\ldots,I_{l})\text{ as sets}\\ \#I_{j}=i_{j}\text{ for }j=1,\ldots,l\end{subarray}}\operatorname{sgn}(I_{1},\ldots,I_{l})\cdot\xi_{1}(e_{I_{1}})\cdot\ldots\cdot\xi_{l}(e_{I_{l}})\] is the evaluation at \((e_{K_{1}},\ldots,e_{K_{l}})\) of the block-wise graded skew-symmetrisation of \[\xi_{1}\otimes\ldots\otimes\xi_{l}=\bigotimes_{i=1}^{n}\underbrace{\left(\bigotimes_{j=1+\sum_{r=1}^{i-1}N_{r}}^{\sum_{r=1}^{i}N_{r}}\xi_{j}\right)}_{\in\Gamma\big{(}(E_{i}^{*})^{\otimes N_{i}}\big{)}},\] where \(N_{i}:=\#M_{i}\) for \(i=1,\ldots,n\). That is, \(\xi_{1}\odot\ldots\odot\xi_{l}\) is the block-wise graded skew-symmetrisation of \(\xi_{1}\otimes\ldots\otimes\xi_{l}\). Finally, the associativity of the product (9) is proved. Consider without loss of generality the case \(l=3\), and show as follows that \[(\xi_{1}\odot\xi_{2})\odot\xi_{3}=\xi_{1}\odot\xi_{2}\odot\xi_{3}=\xi_{1}\odot(\xi_{2}\odot\xi_{3}).\] By the graded-symmetry of \(\odot\), it is enough to prove the first equality. Let \(q\) be the partition obtained as the reordered union of \(p_{1}\) and \(p_{2}\). Then \(p\) is the reordered union of \(q\) and \(p_{3}\). 
With the same notation as above, the form \((\xi_{1}\odot\xi_{2})\odot\xi_{3}\) applied to \(e_{K_{1}},\ldots,e_{K_{s}}\) reads \[\sum_{\begin{subarray}{c}\rho_{\mathrm{can}}^{p}=\rho\cup\rho_{3}\text{ as sets,}\\ \rho,\rho_{3}\text{ canonically ordered with}\\ |\rho|=q\text{ and }|\rho_{3}|=p_{3}\end{subarray}}\mathrm{sgn}(\rho,\rho_{3})\cdot(\xi_{1}\odot\xi_{2})(e_{K}\mid K\in\rho)\cdot\xi_{3}(e_{K}\mid K\in\rho_{3})\] \[=\sum_{\begin{subarray}{c}\rho_{\mathrm{can}}^{p}=\rho\cup\rho_{3}\text{ as sets,}\\ \rho,\rho_{3}\text{ canonically ordered with}\\ |\rho|=q\text{ and }|\rho_{3}|=p_{3}\end{subarray}}\mathrm{sgn}(\rho,\rho_{3})\cdot\sum_{\begin{subarray}{c}\rho=\rho_{1}\cup\rho_{2}\text{ as sets,}\\ \rho_{1},\rho_{2}\text{ canonically ordered with}\\ |\rho_{1}|=p_{1}\text{ and }|\rho_{2}|=p_{2}\end{subarray}}\mathrm{sgn}^{\rho}(\rho_{1},\rho_{2})\cdot\xi_{1}(e_{K}\mid K\in\rho_{1})\cdot\xi_{2}(e_{K}\mid K\in\rho_{2})\cdot\xi_{3}(e_{K}\mid K\in\rho_{3}),\] where \(\mathrm{sgn}^{\rho}(\rho_{1},\rho_{2})\) is the sign of the unique permutation of \(I:=\cup\rho\) sending \(I\) in its order induced by \(\rho\) to \(I\) in its order induced by \((\rho_{1},\rho_{2})\), see Lemma 2.4. This is \((\xi_{1}\odot\xi_{2}\odot\xi_{3})(e_{K_{1}},\ldots,e_{K_{s}})\) if \[\mathrm{sgn}(\rho,\rho_{3})\cdot\mathrm{sgn}^{\rho}(\rho_{1},\rho_{2})=\mathrm{sgn}(\rho_{1},\rho_{2},\rho_{3}) \tag{10}\] for ordered partitions \(\rho,\rho_{1},\rho_{2},\rho_{3}\) such that \(\rho_{\mathrm{can}}^{p}=\rho\cup\rho_{3}\) and \(\rho=\rho_{1}\cup\rho_{2}\) as sets, and \(|\rho|=q\), \(|\rho_{3}|=p_{3}\), \(|\rho_{1}|=p_{1}\), \(|\rho_{2}|=p_{2}\). In order to show (10), use the last claim of Lemma 2.4. Let \(I\) be the union of the elements of \(\rho\), i.e. let \(\rho\) be an ordered partition of \(I\subseteq\underline{d}\), and write the elements of \(I\) as \(i_{1},\ldots,i_{d_{1}+d_{2}}\), in the order defined by \(\rho\). 
The elements of \(I_{j}:=\cup\rho_{j}\) are \(i_{1}^{j},\ldots,i_{d_{j}}^{j}\) in the order defined by \(\rho_{j}\), for \(j=1,2,3\). Note that \(I=I_{1}\cup I_{2}\). Then \(\mathrm{sgn}(\rho,\rho_{3})\) is the sign of the unique permutation \(\sigma\) of \(\underline{d}\) sending \(\underline{d}\) in its canonical order to \(\underline{d}\) in the order \[i_{1},\ldots,i_{d_{1}+d_{2}},i_{1}^{3},\ldots,i_{d_{3}}^{3}.\] The number \(\mathrm{sgn}^{\rho}(\rho_{1},\rho_{2})\) is the sign of the unique permutation \(\tau\) of \(I_{1}\cup I_{2}=I\) sending \(I\) in the order \(i_{1},\ldots,i_{d_{1}+d_{2}}\) to \(I\) in the order \[i_{1}^{1},\ldots,i_{d_{1}}^{1},i_{1}^{2},\ldots,i_{d_{2}}^{2},\] and the sign of \((\rho_{1},\rho_{2},\rho_{3})\) is the sign of the unique permutation \(\eta\) of \(\underline{d}\) sending \(\underline{d}\) in its canonical order to \(\underline{d}\) in the order \[i_{1}^{1},\ldots,i_{d_{1}}^{1},i_{1}^{2},\ldots,i_{d_{2}}^{2},i_{1}^{3},\ldots,i_{d_{3}}^{3}.\] Extend \(\tau\) to a permutation of \(\underline{d}\) by setting \(\tau|_{I_{3}}=\mathrm{id}_{I_{3}}\), which does not change its sign. Then \(\eta\) equals \(\tau\circ\sigma\), and so \(\mathrm{sgn}(\rho_{1},\rho_{2},\rho_{3})=(-1)^{\eta}=(-1)^{\tau}(-1)^{\sigma}= \mathrm{sgn}^{\rho}(\rho_{1},\rho_{2})\cdot\mathrm{sgn}(\rho,\rho_{3})\). In this paper the category of split \([n]\)-manifolds is written \(\mathfrak{sl}[n]\mathsf{Man}\). A morphism \[\mu\colon E_{1}[-1]\oplus\ldots\oplus E_{n}[-n]\to F_{1}[-1]\oplus\ldots\oplus F _{m}[-m]\] of split \(\mathbb{N}\)-manifolds over the bases \(M\) and \(N\), respectively, consists of a smooth map \(\mu_{0}\colon M\to N\), and for each \(p=(i_{1},\ldots,i_{l})\in\mathcal{P}(m)\) a morphism \[\mu_{p}\colon E_{i_{1}}\odot E_{i_{2}}\odot\ldots\odot E_{i_{l}}\to F_{i_{1}+ \ldots+i_{l}}\] of vector bundles over \(\mu_{0}\). 
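Concretely, a morphism of split \(\mathbb{N}\)-manifolds is thus a finite family of vector bundle maps, one for each \(p\in\mathcal{P}(m)\). The following Python sketch (function names and the plain-text rendering of \(\odot\) as `(.)` are ours) lists the components \(\mu_{p}\colon E_{i_{1}}\odot\ldots\odot E_{i_{l}}\to F_{\Sigma p}\) for \(m=3\):

```python
def P(n):
    """Naturally ordered integer partitions with sum between 1 and n."""
    out = []
    def extend(prefix, total, minimum):
        for i in range(minimum, n - total + 1):
            part = prefix + (i,)
            out.append(part)
            extend(part, total + i, i)
    extend((), 0, 1)
    return out

def components(m):
    """Index the components mu_p of a morphism into an [m]-manifold:
    mu_p maps E_{i_1} (.) ... (.) E_{i_l} to F_{sum p}."""
    return {p: (" (.) ".join(f"E_{i}" for i in p), f"F_{sum(p)}") for p in P(m)}

for p, (source, target) in sorted(components(3).items()):
    print(p, ":", source, "->", target)
```

For \(m=3\) this prints six components, e.g. `(1, 2) : E_1 (.) E_2 -> F_3`.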
The map \(\mu^{\star}\) sends a degree \(k\) generator \(\xi\in\Gamma(F_{k}^{*})\) to \[\sum_{\begin{subarray}{c}p=(i_{1},\ldots,i_{l})\in\mathcal{P}(m)\\ \Sigma p=k\end{subarray}}\underbrace{\mu_{p}^{\star}(\xi)}_{\in\Gamma(E_{i_{1}}^{*}\odot E_{i_{2}}^{*}\odot\ldots\odot E_{i_{l}}^{*})}\;\in C^{\infty}(E_{1}[-1]\oplus\ldots\oplus E_{n}[-n])^{k}.\] The morphism \(\mu\) is therefore written here \[\mu=\left(\mu_{p}\right)_{p\in\mathcal{P}(m)},\] the smooth map \(\mu_{0}\) being here implicit as the common base map of all the vector bundle morphisms \(\mu_{p}\), \(p\in\mathcal{P}(m)\). In the following, given an ordered integer partition \((i_{1},\ldots,i_{l})\) of a natural number and a list of vectors \(e_{1}\in E_{i_{1}},\ldots,e_{l}\in E_{i_{l}}\), it is useful as before to index this list by \[K_{1},\ldots,K_{l},\] where \((K_{1},\ldots,K_{l})=\rho_{\mathrm{can}}^{(i_{1},\ldots,i_{l})}\). Hence \[e_{K_{1}}\in E_{\#K_{1}},\ldots,e_{K_{l}}\in E_{\#K_{l}}.\] **Lemma 3.3**.: _Let_ \[\mu\colon E_{1}[-1]\oplus\ldots\oplus E_{n}[-n]\to F_{1}[-1]\oplus\ldots\oplus F_{m}[-m]\] _be a morphism of split \(\mathbb{N}\)-manifolds over base manifolds \(M\) and \(N\). Let \(\xi_{1}\in\Gamma(F_{d_{1}}^{*}),\ldots,\xi_{l}\in\Gamma(F_{d_{l}}^{*})\), with \(d_{1}\leq\ldots\leq d_{l}\)._ _Then for each list of ordered integer partitions \(p_{1},\ldots,p_{l}\) such that \(\Sigma p_{j}=d_{j}\), \(j=1,\ldots,l\), the image \(\mu^{\star}(\xi_{1}\odot\ldots\odot\xi_{l})\) has a term in_ \[\Gamma\left(E_{i_{1}}^{*}\odot\ldots\odot E_{i_{s}}^{*}\right),\] _if \(p=(i_{1},\ldots,i_{s})\) is the canonically reordered integer partition \(p_{1}\cup\ldots\cup p_{l}\)._ _More precisely, set \(\rho_{\mathrm{can}}^{p}=(K_{1},\ldots,K_{s})\). 
Then for all \(e_{K_{1}}\in E_{\#K_{1}},\ldots,e_{K_{s}}\in E_{\#K_{s}}\), \(\mu^{\star}(\xi_{1}\odot\ldots\odot\xi_{l})(e_{K_{1}},\ldots,e_{K_{s}})\) equals_ \[\sum_{\begin{subarray}{c}\rho_{1},\ldots,\rho_{l}\text{ ordered partitions}\\ \rho_{\mathrm{can}}^{p}=\rho_{1}\cup\ldots\cup\rho_{l}\text{ as sets}\\ \cup\rho_{1}<\ldots<\cup\rho_{l}\\ \#\cup\rho_{1}=d_{1},\ldots,\#\cup\rho_{l}=d_{l}\end{subarray}}\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})\cdot(\xi_{1}\odot\ldots\odot\xi_{l})\left(\mu_{|\rho_{1}|}(e_{K}\mid K\in\rho_{1}),\ldots,\mu_{|\rho_{l}|}(e_{K}\mid K\in\rho_{l})\right). \tag{11}\] Here, for each \(j\) the morphism \(\mu_{|\rho_{j}|}\) is fed the list of vectors \((e_{K}\mid K\in\rho_{j})\). The latter is a notation for the _ordered_ tuple with order given by the (canonical) order of the partition \(\rho_{j}\). Proof.: Since \(\mu^{\star}\) is a morphism of graded algebras, the image of \(\xi_{1}\odot\ldots\odot\xi_{l}\) under \(\mu^{\star}\) is given by \[\mu^{\star}(\xi_{1}\odot\ldots\odot\xi_{l})=\sum_{\begin{subarray}{c}p_{1},\ldots,p_{l}\in\mathcal{P}(n)\\ \Sigma p_{j}=d_{j}\text{ for }j=1,\ldots,l\end{subarray}}\mu^{\star}_{p_{1}}(\xi_{1})\odot\ldots\odot\mu^{\star}_{p_{l}}(\xi_{l}). \tag{12}\] Assume for simplicity that \(l=2\) and take first \(d_{1}<d_{2}\). Choose as in the claim two ordered integer partitions \(p_{1},p_{2}\) such that \(\Sigma p_{1}=d_{1}\) and \(\Sigma p_{2}=d_{2}\). 
Then \[\mu^{\star}(\xi_{1}\odot\xi_{2})(e_{K_{1}},\ldots,e_{K_{s}})=\sum_{\begin{subarray}{c}q_{1},q_{2}\in\mathcal{P}(n)\\ \Sigma q_{j}=d_{j}\text{ for }j=1,2\\ q_{1}\cup q_{2}=p\text{ as sets}\end{subarray}}\left(\mu^{\star}_{q_{1}}(\xi_{1})\odot\mu^{\star}_{q_{2}}(\xi_{2})\right)(e_{K_{1}},\ldots,e_{K_{s}}).\] By Lemma 3.1 this equals \[\sum_{\begin{subarray}{c}q_{1},q_{2}\in\mathcal{P}(n)\\ \Sigma q_{j}=d_{j}\text{ for }j=1,2\\ q_{1}\cup q_{2}=p\text{ as sets}\end{subarray}}\;\sum_{\begin{subarray}{c}\rho_{1},\rho_{2}\text{ ordered}\\ \rho_{\mathrm{can}}^{p}=\rho_{1}\cup\rho_{2}\text{ as sets}\\ |\rho_{1}|=q_{1},\,|\rho_{2}|=q_{2}\end{subarray}}\operatorname{sgn}(\rho_{1},\rho_{2})\cdot\xi_{1}\left(\mu_{q_{1}}(e_{K}\mid K\in\rho_{1})\right)\cdot\xi_{2}\left(\mu_{q_{2}}(e_{K}\mid K\in\rho_{2})\right)\] \[=\sum_{\begin{subarray}{c}\rho_{1},\rho_{2}\text{ ordered}\\ \rho_{\mathrm{can}}^{p}=\rho_{1}\cup\rho_{2}\text{ as sets}\\ \#\cup\rho_{1}=d_{1},\,\#\cup\rho_{2}=d_{2}\end{subarray}}\operatorname{sgn}(\rho_{1},\rho_{2})\cdot\xi_{1}\left(\mu_{|\rho_{1}|}(e_{K}\mid K\in\rho_{1})\right)\cdot\xi_{2}\left(\mu_{|\rho_{2}|}(e_{K}\mid K\in\rho_{2})\right). \tag{13}\] But since \(\#\cup\rho_{1}=d_{1}<d_{2}=\#\cup\rho_{2}\) for each pair of ordered partitions \(\rho_{1},\rho_{2}\) indexing this sum, the sum equals \[\sum_{\begin{subarray}{c}\rho_{1},\rho_{2}\text{ ordered}\\ \rho_{\mathrm{can}}^{p}=\rho_{1}\cup\rho_{2}\text{ as sets}\\ \#\cup\rho_{1}=d_{1},\,\#\cup\rho_{2}=d_{2}\\ \cup\rho_{1}<\cup\rho_{2}\end{subarray}}\operatorname{sgn}(\rho_{1},\rho_{2})\cdot\xi_{1}\left(\mu_{|\rho_{1}|}(e_{K}\mid K\in\rho_{1})\right)\cdot\xi_{2}\left(\mu_{|\rho_{2}|}(e_{K}\mid K\in\rho_{2})\right).\] Next take \(l=2\) but \(d_{1}=d_{2}=:a\). Choose as above two different ordered integer partitions \(p_{1},p_{2}\) such that \(\Sigma p_{1}=\Sigma p_{2}=a\) and assume for simplicity4 that among all candidates for these two partitions, only \(p_{1}\) and \(p_{2}\) give non-vanishing vector bundle morphisms \(\mu_{p_{1}}\) and \(\mu_{p_{2}}\). 
Then \(\mu^{\star}(\xi_{1}\odot\xi_{2})\) is given by Footnote 4: In general, \(\mu^{\star}(\xi_{1}\odot\xi_{2})\) has more terms, which need to be considered alone or pairwise, as is done here. \[\mu^{\star}(\xi_{1}\odot\xi_{2})=\mu_{p_{1}}^{\star}(\xi_{1})\odot\mu_{p_{1}}^{\star}(\xi_{2})+\mu_{p_{1}}^{\star}(\xi_{1})\odot\mu_{p_{2}}^{\star}(\xi_{2})+\mu_{p_{2}}^{\star}(\xi_{1})\odot\mu_{p_{1}}^{\star}(\xi_{2})+\mu_{p_{2}}^{\star}(\xi_{1})\odot\mu_{p_{2}}^{\star}(\xi_{2}).\] Since \(p_{1}\neq p_{2}\), the reordered unions \(p_{1}\cup p_{1}\), \(p_{2}\cup p_{2}\) and \(p_{1}\cup p_{2}\) are all different. Consider first the term \(\mu_{p_{1}}^{\star}(\xi_{1})\odot\mu_{p_{1}}^{\star}(\xi_{2})\). (The term \(\mu_{p_{2}}^{\star}(\xi_{1})\odot\mu_{p_{2}}^{\star}(\xi_{2})\) is treated in the same manner.) Let again \(p\) be the reordered union \(p_{1}\cup p_{1}\) and set \(\rho_{\text{can}}^{p}=(K_{1},\ldots,K_{s})\). In this case Lemma 3.1 expresses \(\left(\mu_{p_{1}}^{\star}(\xi_{1})\odot\mu_{p_{1}}^{\star}(\xi_{2})\right)(e_{K_{1}},\ldots,e_{K_{s}})\) as a sum over ordered pairs \((\rho_{1},\rho_{2})\) with \(\rho_{\text{can}}^{p}=\rho_{1}\cup\rho_{2}\) as sets and \(|\rho_{1}|=|\rho_{2}|=p_{1}\), and pairing each such \((\rho_{1},\rho_{2})\) with its swap \((\rho_{2},\rho_{1})\) as in the computation below yields \[\sum_{\begin{subarray}{c}\rho_{1},\rho_{2}\text{ ordered}\\ \rho_{\text{can}}^{p}=\rho_{1}\cup\rho_{2}\text{ as sets}\\ |\rho_{1}|=|\rho_{2}|=p_{1}\\ \cup\rho_{1}<\cup\rho_{2}\end{subarray}}\operatorname{sgn}(\rho_{1},\rho_{2})\cdot(\xi_{1}\odot\xi_{2})\left(\mu_{p_{1}}(e_{K}\mid K\in\rho_{1}),\mu_{p_{1}}(e_{K}\mid K\in\rho_{2})\right).\] Consider the terms \(\mu_{p_{1}}^{\star}(\xi_{1})\odot\mu_{p_{2}}^{\star}(\xi_{2})+\mu_{p_{2}}^{\star}(\xi_{1})\odot\mu_{p_{1}}^{\star}(\xi_{2})\). Let \(p\) be the reordered union of \(p_{1}\) and \(p_{2}\) and set \(\rho_{\text{can}}^{p}=(K_{1},\ldots,K_{s})\). 
Here \[\left(\mu_{p_{1}}^{\star}(\xi_{1})\odot\mu_{p_{2}}^{\star}(\xi_{2})+\mu_{p_{2} }^{\star}(\xi_{1})\odot\mu_{p_{1}}^{\star}(\xi_{2})\right)(e_{K_{1}},\ldots,e_ {K_{s}})\] reads \[\sum_{\begin{subarray}{c}\rho_{1},\rho_{2}\text{ ordered}\\ \rho_{\text{can}}^{p}=\rho_{1}\cup\rho_{2}\text{ as sets}\\ |\rho_{1}|=p_{1},|\rho_{2}|=p_{2}\end{subarray}}\text{sgn}(\rho_{1},\rho_{2}) \cdot\xi_{1}\left(\mu_{p_{1}}(e_{K}\mid K\in\rho_{1})\right)\cdot\xi_{2}\left( \mu_{p_{2}}(e_{K}\mid K\in\rho_{2})\right)\] \[+\sum_{\begin{subarray}{c}\rho_{1},\rho_{2}\text{ ordered}\\ \rho_{\text{can}}^{p}=\rho_{1}\cup\rho_{2}\text{ as sets}\\ |\rho_{1}|=p_{1},|\rho_{2}|=p_{2}\end{subarray}}\text{sgn}(\rho_{2},\rho_{1}) \cdot\xi_{1}\left(\mu_{p_{2}}(e_{K}\mid K\in\rho_{2})\right)\cdot\xi_{2}\left( \mu_{p_{1}}(e_{K}\mid K\in\rho_{1})\right).\] For a choice of two partitions \(\rho_{1}\) and \(\rho_{2}\) indexing these sums, either5\(\cup\rho_{1}<\cup\rho_{2}\) or \(\cup\rho_{1}>\cup\rho_{2}\). In both cases the corresponding terms in the sum give together Footnote 5: In fact, for some choices of \(p_{1}\) and \(p_{2}\), \(\cup\rho_{1}<\cup\rho_{2}\) is automatically true. For instance if \(p_{1}=(1,3)\) and \(p_{2}=(2,2)\) the partition \(p\) is \(p=(1,2,2,3)\) and the partitions must be \(\rho_{1}=(\{1\},\{6,7,8\})\) and \(\rho_{2}=(\{2,3\},\{4,5\})\). But for some choices of \(p_{1}\) and \(p_{2}\) the order of \(\cup\rho_{1}\) and \(\cup\rho_{2}\) cannot be predicted. For instance if \(p_{1}=(1,3)\) and \(p_{2}=(1,1,2)\) since then \(p=(1,1,1,2,3)\) and \(\rho_{1}=(\{1\},\{6,7,8\})\) and \(\rho_{2}=(\{2\},\{3\},\{4,5\})\) satisfy \(\cup\rho_{1}<\cup\rho_{2}\) while \(\rho_{1}=(\{2\},\{6,7,8\})\) and \(\rho_{2}=(\{1\},\{3\},\{4,5\})\) satisfy \(\cup\rho_{1}>\cup\rho_{2}\). 
\[\text{sgn}(\rho_{1},\rho_{2})(\xi_{1}\odot\xi_{2})(\mu_{|\rho_{1}|}(e_{K}\mid K\in\rho_{1}),\mu_{|\rho_{2}|}(e_{K}\mid K\in\rho_{2}))\] if \(\cup\rho_{1}<\cup\rho_{2}\) and \[\text{sgn}(\rho_{2},\rho_{1})(\xi_{1}\odot\xi_{2})(\mu_{|\rho_{2}|}(e_{K}\mid K\in\rho_{2}),\mu_{|\rho_{1}|}(e_{K}\mid K\in\rho_{1}))\] if \(\cup\rho_{2}<\cup\rho_{1}\). As a summary \[\left(\mu_{p_{1}}^{\star}(\xi_{1})\odot\mu_{p_{2}}^{\star}(\xi_{2})+\mu_{p_{2}}^{\star}(\xi_{1})\odot\mu_{p_{1}}^{\star}(\xi_{2})\right)(e_{K_{1}},\ldots,e_{K_{s}})\] equals \[\sum_{\begin{subarray}{c}\rho_{1},\rho_{2}\text{ ordered}\\ \rho_{\text{can}}^{p}=\rho_{1}\cup\rho_{2}\text{ as sets}\\ \#\cup\rho_{1}=a=\#\cup\rho_{2}\\ \cup\rho_{1}<\cup\rho_{2}\end{subarray}}\text{sgn}(\rho_{1},\rho_{2})\cdot(\xi_{1}\odot\xi_{2})(\mu_{|\rho_{1}|}(e_{K}\mid K\in\rho_{1}),\mu_{|\rho_{2}|}(e_{K}\mid K\in\rho_{2})).\] The above proves the claim for \(l=2\). The general case for \(l\geq 3\) works in the same manner, but with more different cases to consider. The following result is then an immediate corollary of the preceding lemma. The proof is left to the reader. 
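The combinatorics of footnote 5 can be checked mechanically. The following sketch is an illustration only (not part of the original argument); it assumes that the order \(\cup\rho_{1}<\cup\rho_{2}\) on subsets compares the sorted element lists lexicographically, which is consistent with the examples given in the footnote.

```python
from itertools import permutations

def rho_can(p):
    # canonical ordered partition of {1, ..., sum(p)} into consecutive
    # blocks whose sizes are the parts of p in increasing order
    blocks, start = [], 1
    for size in sorted(p):
        blocks.append(tuple(range(start, start + size)))
        start += size
    return blocks

def ordered_subpartitions(blocks, sizes):
    # ordered tuples of distinct blocks realising the size pattern `sizes`
    for perm in permutations(blocks, len(sizes)):
        if tuple(len(b) for b in perm) == sizes:
            yield perm

def splittings(p1, p2):
    # pairs (rho1, rho2) with |rho1| = p1, |rho2| = p2 and
    # rho1 ∪ rho2 = rho_can^p as sets, p the reordered union of p1 and p2
    blocks = rho_can(p1 + p2)
    for r1 in ordered_subpartitions(blocks, p1):
        rest = [b for b in blocks if b not in r1]
        for r2 in ordered_subpartitions(rest, p2):
            yield r1, r2

def union(rho):
    # the underlying set of rho, sorted, so that tuple comparison is
    # the lexicographic order on the elements
    return tuple(sorted(x for b in rho for x in b))

# p1 = (1,3), p2 = (2,2): every admissible pair has ∪rho1 < ∪rho2
print(all(union(r1) < union(r2) for r1, r2 in splittings((1, 3), (2, 2))))

# p1 = (1,3), p2 = (1,1,2): both relative orders of the unions occur
pairs = list(splittings((1, 3), (1, 1, 2)))
print(any(union(r1) < union(r2) for r1, r2 in pairs),
      any(union(r1) > union(r2) for r1, r2 in pairs))
```

For \(p_{1}=(1,3)\), \(p_{2}=(2,2)\) the first check prints `True`, and for \(p_{1}=(1,3)\), \(p_{2}=(1,1,2)\) both orders occur, exactly as claimed in footnote 5.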
**Corollary 3.4**.: _Given two morphisms_ \[\mu\colon E_{1}[-1]\oplus\ldots\oplus E_{n}[-n]\to F_{1}[-1]\oplus\ldots\oplus F_{m}[-m]\] _and_ \[\nu\colon F_{1}[-1]\oplus\ldots\oplus F_{m}[-m]\to G_{1}[-1]\oplus\ldots\oplus G_{q}[-q]\] _of split \([n]\)-manifolds over smooth maps \(\mu_{0}\colon M\to N\) and \(\nu_{0}\colon N\to Q\), respectively, the composition_ \[\nu\circ\mu=\left((\nu\circ\mu)_{p}\right)_{p\in\mathcal{P}(q)}\] _is defined by_ \[(\nu\circ\mu)_{p}=\sum_{\begin{subarray}{c}\rho_{1},\dots,\rho_{l}\text{ ordered}\\ \text{such that }\rho_{1}\cup\dots\cup\rho_{l}=\rho_{\text{can}}^{p}\text{ as sets}\\ \cup\rho_{1}<\dots<\cup\rho_{l}\end{subarray}}\operatorname{sgn}(\rho_{1},\dots,\rho_{l})\cdot\nu_{(\Sigma|\rho_{1}|,\dots,\Sigma|\rho_{l}|)}\circ\left(\mu_{|\rho_{1}|},\dots,\mu_{|\rho_{l}|}\right) \tag{14}\] _for all \(p\in\mathcal{P}(q)\)._ Here, \((\nu\circ\mu)_{p}\) is fed a list of vectors \[(e_{K_{1}},\dots,e_{K_{s}})\] with \(\rho_{\text{can}}^{p}=(K_{1},\dots,K_{s})\). The term \(\nu_{(\Sigma|\rho_{1}|,\dots,\Sigma|\rho_{l}|)}\circ\left(\mu_{|\rho_{1}|},\dots,\mu_{|\rho_{l}|}\right)\) of (14) indexed by \((\rho_{1},\dots,\rho_{l})\) applied to this list is then precisely \[\nu_{(\Sigma|\rho_{1}|,\dots,\Sigma|\rho_{l}|)}\left(\mu_{|\rho_{1}|}(e_{K}\mid K\in\rho_{1}),\dots,\mu_{|\rho_{l}|}(e_{K}\mid K\in\rho_{l})\right),\] where, as before, \((e_{K}\mid K\in\rho_{1})\) is a notation for the _ordered_ tuple with order given by the partition \(\rho_{1}\), etc. A morphism \(\mu\colon E_{1}[-1]\oplus\dots\oplus E_{n}[-n]\to F_{1}[-1]\oplus\dots\oplus F_{n}[-n]\) of split \([n]\)-manifolds is an **isomorphism** of split \([n]\)-manifolds if it has an inverse. 
More precisely, \(\mu\) is an isomorphism if \(\mu_{0}\) is a diffeomorphism and there exists a morphism \(\nu\colon F_{1}[-1]\oplus\dots\oplus F_{n}[-n]\to E_{1}[-1]\oplus\dots\oplus E_{n}[-n]\) of split \([n]\)-manifolds such that \(\nu_{0}\) is the smooth inverse of \(\mu_{0}\), \[(\nu\circ\mu)_{p}=0\text{ and }(\mu\circ\nu)_{p}=0\] for \(p\in\mathcal{P}(n)\) with \(\sharp p\geq 2\), and \[(\nu\circ\mu)_{p}=\operatorname{id}_{E_{\Sigma p}}\] for \(\sharp p=1\). (This implies as always \((\mu\circ\nu)_{p}=\operatorname{id}_{F_{\Sigma p}}\) for \(\sharp p=1\).) For instance, a collection of \(n\) isomorphisms \(\mu_{(1)}\colon E_{1}\to F_{1}\),..., \(\mu_{(n)}\colon E_{n}\to F_{n}\) over a smooth diffeomorphism \(\mu_{0}\colon M\to N\) gives an isomorphism of \(E_{1}[-1]\oplus\dots\oplus E_{n}[-n]\) with \(F_{1}[-1]\oplus\dots\oplus F_{n}[-n]\) by setting \(\mu_{p}=0\) for \(\sharp p\geq 2\). Any \(\mathbb{N}\)-manifold is non-canonically isomorphic to a split \(\mathbb{N}\)-manifold of the same degree. More precisely, the embedding of the category of split \([n]\)-manifolds in the one of \([n]\)-manifolds is fully faithful and essentially surjective. This is true locally, per definition, and proved globally for instance in [1], following the proof of the \(\mathbb{Z}/2\mathbb{Z}\)-graded version of this theorem, which is called there _Batchelor's theorem_[1], see also [1] and [10]. **Proposition 3.5**.: _Any \([n]\)-manifold is non-canonically isomorphic to a split \([n]\)-manifold._ Note that \([1]\)-manifolds are automatically split since they are just vector bundles with a degree shifting in the fibers, i.e. a \([1]\)-manifold over \(M\) is \(E[-1]\) for some vector bundle \(E\to M\) and \(C^{\infty}(E[-1])=\Gamma(\wedge^{\bullet}E^{*})\), the exterior algebra of \(E^{*}\). 
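To illustrate the composition formula (14) in the smallest nontrivial case, take \(n=m=q=2\). Unwinding the index set (a sketch; it uses that \(\operatorname{sgn}(\rho_{1},\rho_{2})=1\) for the pair \(\rho_{1}=(\{1\})\), \(\rho_{2}=(\{2\})\), which involves no reordering) gives

```latex
\begin{align*}
(\nu\circ\mu)_{(1)}   &= \nu_{(1)}\circ\mu_{(1)},\\
(\nu\circ\mu)_{(2)}   &= \nu_{(2)}\circ\mu_{(2)},\\
(\nu\circ\mu)_{(1,1)} &= \nu_{(2)}\circ\mu_{(1,1)}
   + \nu_{(1,1)}\circ\bigl(\mu_{(1)},\mu_{(1)}\bigr),
\end{align*}
```

since for \(p=(1,1)\) the decompositions of \(\rho^{p}_{\text{can}}=(\{1\},\{2\})\) are \(\rho_{1}=(\{1\},\{2\})\), which gives \(\nu_{(\Sigma|\rho_{1}|)}=\nu_{(2)}\), and \(\rho_{1}=(\{1\})\), \(\rho_{2}=(\{2\})\) with \(\cup\rho_{1}<\cup\rho_{2}\).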
### \([n]\)-manifold cocycles Let \(\mathcal{M}\) be an \([n]\)-manifold over a smooth manifold \(M\) and choose an open cover \((U_{\alpha})_{\alpha\in\Lambda}\) of \(M\) by open sets trivialising \(C^{\infty}(\mathcal{M})\). That is, for each \(\alpha\in\Lambda\), the \(\mathbb{N}\)-graded, graded commutative, associative, unital \(C^{\infty}(M)\)-algebra \(C^{\infty}_{U_{\alpha}}(\mathcal{M})\) is freely generated by its elements \[\xi^{1}_{\alpha,1},\ldots,\xi^{r_{1}}_{\alpha,1},\xi^{1}_{\alpha,2},\ldots,\xi^{r_{2}}_{\alpha,2},\ldots,\xi^{1}_{\alpha,n},\ldots,\xi^{r_{n}}_{\alpha,n}\] with \(\xi^{j}_{\alpha,i}\) of degree \(i\) for \(i\in\{1,\ldots,n\}\) and \(j\in\{1,\ldots,r_{i}\}\). In other words, for each \(\alpha\in\Lambda\) the restriction \(\mathcal{M}|_{U_{\alpha}}\) is isomorphic via a morphism \(\phi_{\alpha}\) over the identity on \(U_{\alpha}\) to the split \([n]\)-manifold \[(U_{\alpha}\times\mathbb{R}^{r_{1}})[-1]\oplus\ldots\oplus(U_{\alpha}\times\mathbb{R}^{r_{n}})[-n],\] where the spaces \(U_{\alpha}\times\mathbb{R}^{r_{j}}\) carry the canonical trivial vector bundle structures over \(U_{\alpha}\), for \(j=1,\ldots,n\). Take \(\alpha,\beta\in\Lambda\) and set \(U_{\alpha\beta}:=U_{\alpha}\cap U_{\beta}\). Then the following diagram of isomorphisms of \([n]\)-manifolds commutes and \(\phi^{\alpha\beta}:=\phi_{\alpha}\circ\phi_{\beta}^{-1}\) is an isomorphism of split \([n]\)-manifolds. By construction, \[\phi^{\alpha\gamma}=\phi^{\alpha\beta}\circ\phi^{\beta\gamma}\] over \(U_{\alpha\beta\gamma}:=U_{\alpha}\cap U_{\beta}\cap U_{\gamma}\), i.e. \[\phi^{\alpha\gamma}_{p}=(\phi^{\alpha\beta}\circ\phi^{\beta\gamma})_{p}\] for all \(p\in\mathcal{P}(n)\). The open cover \(\{U_{\alpha}\}_{\alpha\in\Lambda}\) of \(M\) together with the collection of isomorphisms \[(\phi^{\alpha\beta}\mid\alpha,\beta\in\Lambda)\] satisfying 1. 
\(\phi^{\alpha\gamma}=\phi^{\alpha\beta}\circ\phi^{\beta\gamma}\) over \(U_{\alpha\beta\gamma}\) for all \(\alpha,\beta,\gamma\in\Lambda\) and 2. \(\phi^{\alpha\alpha}=\mathrm{id}_{(U_{\alpha}\times\mathbb{R}^{r_{1}})[-1]\oplus\ldots\oplus(U_{\alpha}\times\mathbb{R}^{r_{n}})[-n]}\) for all \(\alpha\in\Lambda\) is an \([n]\)**-manifold cocycle on \(M\)**. By their very definition, \([n]\)-manifold cocycles on \(M\) are equivalent to \([n]\)-manifolds over \(M\). ## 4. Multiple vector bundles and charts This section recalls the definitions, notation and results of [10] which are needed in the rest of the paper, and studies in more detail the _iterated higher order cores_ of an \(n\)-fold vector bundle. In [10] the authors define as follows an \(n\)-fold vector bundle. This is just a different, in their opinion more convenient, formulation for the \(n\)-fold vector bundles defined by Mackenzie and Gracia-Saz in [1]. **Definition 4.1**.: _An \(n\)**-fold vector bundle** is a covariant functor \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) to the category of smooth manifolds such that, writing \(p^{I}_{J}:=\mathbb{E}(I\to J)\) for \(J\subseteq I\subseteq\underline{n}\),_ 1. _for all_ \(I\subseteq\underline{n}\) _and all_ \(i\in I\)_,_ \(p^{I}_{I\setminus\{i\}}\colon\mathbb{E}(I)\to\mathbb{E}(I\setminus\{i\})\) _has a smooth vector bundle structure, and_ _ 2. _for all_ \(I\subseteq\underline{n}\) _and_ \(i\neq j\in I\)_, the square of projections relating_ \(\mathbb{E}(I)\)_,_ \(\mathbb{E}(I\setminus\{i\})\)_,_ \(\mathbb{E}(I\setminus\{j\})\) _and_ \(\mathbb{E}(I\setminus\{i,j\})\) _is a double vector bundle._ _The smooth manifold \(\mathbb{E}(\emptyset)\) is denoted \(M\) if not mentioned otherwise._ _Given two \(n\)-fold vector bundles \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) and \(\mathbb{F}\colon\square^{n}\to\mathbf{Man}^{\infty}\), a **morphism of \(n\)-fold vector bundles** from \(\mathbb{E}\) to \(\mathbb{F}\) is a natural transformation \(\Phi\colon\mathbb{E}\to\mathbb{F}\) such that the commutative squares formed by \(\Phi(I)\), \(\Phi(I\setminus\{i\})\) and the projections are vector bundle homomorphisms for all \(I\subseteq\underline{n}\) and \(i\in I\). 
The morphism \(\Phi\) is surjective (respectively injective) if each of its components \(\Phi(I)\), \(I\subseteq\underline{n}\), is fibrewise surjective (respectively fibrewise injective). It is then called **an epimorphism** (respectively a **monomorphism**) of \(n\)-fold vector bundles._ Note that such a morphism \(\Phi\colon\mathbb{E}\to\mathbb{F}\) is completely determined by its top map \(\Phi(\underline{n})\colon\mathbb{E}(\underline{n})\to\mathbb{F}(\underline{n})\). However, the definition above is convenient because it can be extended to morphisms of \(\infty\)-fold vector bundles [10]. A subset \(S\subseteq\underline{n}\) defines a full subcategory \(\square^{S}\) of the \(n\)-cube category \(\square^{n}\), with objects the subsets of \(S\). Given an \(n\)-fold vector bundle \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\), the \(S\)**-side** of \(\mathbb{E}\) is then the restriction \(\mathbb{E}_{S}\) of \(\mathbb{E}\) to the subcategory \(\square^{S}\). The functors obtained in this manner are the **sides** of \(\mathbb{E}\). Given two \(n\)-fold vector bundles \(\mathbb{E}\) and \(\mathbb{F}\) and a morphism \(\Phi\colon\mathbb{E}\to\mathbb{F}\) of \(n\)-fold vector bundles, the restriction of \(\Phi\) to the sides \(\mathbb{E}_{S}\) and \(\mathbb{F}_{S}\) is written \(\Phi|_{S}\colon\mathbb{E}_{S}\to\mathbb{F}_{S}\). It is defined by \(\Phi|_{S}(J)=\Phi(J)\colon\mathbb{E}(J)\to\mathbb{F}(J)\) for all \(J\subseteq S\). ### Cores of an \(n\)-fold vector bundle Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be an \(n\)-fold vector bundle, and choose two subsets \(J\subseteq S\subseteq\underline{n}\). 
Then by Proposition 2.18 in [10], the space \[\mathbb{E}_{J}^{S}:=\bigcap_{j\in J}(p_{S\setminus\{j\}}^{S})^{-1}\left(\mathbf{0}_{S\setminus J}^{S\setminus\{j\}}\right)=\left\{e\in\mathbb{E}(S)\ \left|\,p_{S\setminus\{j\}}^{S}(e)=\mathbf{0}_{p_{S\setminus J}^{S}(e)}^{S\setminus\{j\}}\text{ for all }j\in J\right.\right\}\] with \(\mathbf{0}_{S\setminus J}^{S\setminus\{j\}}\colon\mathbb{E}(S\setminus J)\to\mathbb{E}(S\setminus\{j\})\) the composition of zero sections, is an embedded submanifold of \(\mathbb{E}(S)\). (For \(J=\emptyset\) the empty intersection is \(\mathbb{E}(S)\) by convention, and for \(\#J=1\) the "zero section" \(\mathbf{0}_{S\setminus J}^{S\setminus\{j\}}=\mathbf{0}_{S\setminus J}^{S\setminus J}\) is the identity on \(\mathbb{E}(S\setminus J)\), so \(\mathbb{E}_{J}^{S}=\mathbb{E}(S)\).) It has further a vector bundle structure over \(\mathbb{E}(S\setminus J)\) with projection \[p_{S\setminus J}^{S}|_{\mathbb{E}_{J}^{S}}=\mathbb{E}(S\to S\setminus J)|_{\mathbb{E}_{J}^{S}}\colon\mathbb{E}_{J}^{S}\to\mathbb{E}(S\setminus J)\] and with addition defined by \[e_{1}\underset{S\setminus J}{+}e_{2}:=e_{1}\underset{S\setminus\{j\}}{+}e_{2}\] for any \(j\in J\). \(\mathbb{E}^{S}_{J}\) is the total space of an \((\#S-\#J+1)\)-fold vector bundle \(\mathbb{E}^{(S,J)}\), called here the \((S,J)\)-core of \(\mathbb{E}\) and defined as follows. Consider the \((\#S-\#J+1)\)-cube subcategory \(\Diamond^{S}_{J}\) of \(\square^{S}\) with objects the subsets \(I\subseteq S\) with \[I\cap J=\emptyset\text{ or }J\subseteq I\] and with arrows \[I\to I^{\prime}\ :\Leftrightarrow\ I^{\prime}\subseteq I.\] That is, \(\Diamond^{S}_{J}\) is a full subcategory of \(\square^{S}\). Note that if \(S=\underline{n}\) the \((n-\#J+1)\)-cube category \(\Diamond^{\underline{n}}_{J}\) equals the \((n-\#J+1)\)-cube category \(\Diamond^{\rho_{J}}\) with \(\rho_{J}\) the partition of \(\underline{n}\) in the subset \(J\) and \(n-\#J\) subsets with one element each. 
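To make the cube category \(\Diamond^{S}_{J}\) concrete, the following sketch (illustrative only, not from the paper) enumerates its objects and confirms that for \(S=\underline{n}\) and \(\#J=k\) there are \(2^{\,n-k+1}\) of them, as expected of an \((n-k+1)\)-cube category:

```python
from itertools import combinations

def subsets(s):
    # all subsets of s, as frozensets
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def diamond_objects(S, J):
    # objects of the cube category Diamond^S_J: subsets I of S
    # with I ∩ J = ∅ or J ⊆ I
    S, J = frozenset(S), frozenset(J)
    return [I for I in subsets(S) if not (I & J) or J <= I]

# S = {1,2,3,4}, J = {1,2}: 2^(4-2+1) = 8 objects, a 3-cube category,
# matching the partition rho_J = {J, {3}, {4}}
print(len(diamond_objects({1, 2, 3, 4}, {1, 2})))     # 8
print(len(diamond_objects({1, 2, 3, 4}, {1, 2, 3})))  # 4
```

The eight objects in the first case are exactly the unions of the blocks of \(\rho_{J}=\{J,\{3\},\{4\}\}\), in line with the identification \(\Diamond^{\underline{n}}_{J}=\Diamond^{\rho_{J}}\) above.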
The functor \(\mathbb{E}^{(S,J)}\colon\Diamond^{S}_{J}\to\mathbf{Man}^{\infty}\) sends an object \(I\) as above \[\text{to }\ \mathbb{E}(I)=\mathbb{E}_{S}(I)\ \text{ if }\ I\cap J=\emptyset\ \text{ and to }\ \mathbb{E}^{I}_{J}\ \text{ if }J\subseteq I, \tag{15}\] and an arrow \(I\to I^{\prime}\) to \[\left\{\begin{array}{ll}\mathbb{E}(I\to I^{\prime})|_{\mathbb{E}^{I}_{J}} \colon\mathbb{E}^{I}_{J}\to\mathbb{E}^{I^{\prime}}_{J}&\text{ if }J\subseteq I^{\prime} \subseteq I,\\ \mathbb{E}(I\to I^{\prime})\colon\mathbb{E}(I)\to\mathbb{E}(I^{\prime})& \text{ if }I\cap J=\emptyset\text{ and }\\ \mathbb{E}(I\to I^{\prime})|_{\mathbb{E}^{I}_{J}}\colon\mathbb{E}^{I}_{J}\to \mathbb{E}(I^{\prime})&\text{ if }I^{\prime}\cap J=\emptyset\text{ but }J\subseteq I.\end{array}\right. \tag{16}\] Write \(i^{S}_{J}\colon\Diamond^{S}_{J}\to\square^{S}\) for the inclusion functor. Then the assignment \[\tau\colon\operatorname{Obj}\left(\Diamond^{S}_{J}\right)\to\operatorname{ Mor}(\mathbf{Man}^{\infty})\] sending \(I\subseteq S\) with \(I\cap J=\emptyset\) or \(J\subseteq I\) to the smooth embedding \[\tau(I):=\iota_{\mathbb{E}^{(S,J)}(I)}\colon\mathbb{E}^{(S,J)}(I)\hookrightarrow \mathbb{E}_{S}(I)=\mathbb{E}(I)\] defines a natural transformation \[\mathbb{E}^{(S,J)}\longrightarrow\mathbb{E}_{S}\circ i^{S}_{J}.\] In the following a little abuse of notation is made, and such a natural transformation by embeddings between functors from a subcategory of \(\square^{n}\) is called a _natural transformation by embeddings in \(\mathbb{E}\)_. Consider two \(n\)-fold vector bundles \(\mathbb{E}\) and \(\mathbb{F}\). Choose again \(J\subseteq S\subseteq\underline{n}\) and build the \((S,J)\)-cores \(\mathbb{E}^{(S,J)}\) and \(\mathbb{F}^{(S,J)}\). Then a morphism \(\Phi\colon\mathbb{E}\to\mathbb{F}\) of \(n\)-fold vector bundles induces as follows a core morphism \[\Phi^{(S,J)}\colon\mathbb{E}^{(S,J)}\to\mathbb{F}^{(S,J)}. 
\tag{17}\] For all \(I\in\operatorname{Obj}(\Diamond^{S}_{J})\) the map \(\Phi^{(S,J)}(I)\colon\mathbb{E}^{(S,J)}(I)\to\mathbb{F}^{(S,J)}(I)\) is simply the (necessarily smooth) restriction of \(\Phi(I)\) to the embedded domain and codomain \(\mathbb{E}^{(S,J)}(I)\subseteq\mathbb{E}(I)\) and \(\mathbb{F}^{(S,J)}(I)\subseteq\mathbb{F}(I)\), which is well-defined because \(\Phi\) preserves the \(n\)-fold vector bundle structure and so in particular the zeros. Finally note that by definition, the face \(\mathbb{E}_{S\setminus J}\) of \(\mathbb{E}\) is also a face of the core \(\mathbb{E}^{(S,J)}\) since \[\operatorname{Obj}(\square^{S\setminus J})\subseteq\operatorname{Obj}(\Diamond^{S}_{J})\] (\(\square^{S\setminus J}\) is a full subcategory of \(\Diamond^{S}_{J}\)) and \[\mathbb{E}^{(S,J)}(I)=\mathbb{E}(I)=\mathbb{E}_{S\setminus J}(I)\] for all \(I\subseteq S\setminus J\). ### Linear splittings and decompositions of an \(n\)-fold vector bundle Consider a collection \(\mathcal{A}=\{A_{I}\ |\ I\subseteq\underline{n}\}\) of vector bundles \(A_{I}\) over a smooth manifold \(M\), where as a convention, \(A_{\emptyset}\) is taken to be the trivial vector bundle \(M\to M\). Set \[\mathbb{E}^{\mathcal{A}}\colon\square^{n}\to\mathbf{Man}^{\infty},\qquad\mathbb{E}^{\mathcal{A}}(J):=\prod_{I\subseteq J}^{M}A_{I}\] for \(J\subseteq\underline{n}\), where \(\Pi^{M}\) are the fibered products over \(M\). Set \(\mathbb{E}^{\mathcal{A}}(J\to J^{\prime})\) to be the canonical projection \(\prod_{I\subseteq J}^{M}A_{I}\to\prod_{I\subseteq J^{\prime}}^{M}A_{I}\). Then \(\mathbb{E}^{\mathcal{A}}\) with the obvious vector bundle structures is an \(n\)-fold vector bundle, the **decomposed**\(n\)-fold vector bundle defined by \(\mathcal{A}\)[11]. 
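A decomposed \(n\)-fold vector bundle is determined by the ranks of its building bundles. The following sketch (with hypothetical ranks chosen purely for illustration) computes the fibre dimension over \(M\) of each node \(\mathbb{E}^{\mathcal{A}}(J)=\prod^{M}_{I\subseteq J}A_{I}\):

```python
def fibre_rank(ranks, J):
    # fibre dimension over M of E^A(J), the fibred product over M
    # of the building bundles A_I for all nonempty I ⊆ J
    # (A_∅ is the trivial bundle M -> M of rank 0)
    J = frozenset(J)
    return sum(r for I, r in ranks.items() if I and I <= J)

# hypothetical ranks of the building bundles of a decomposed
# double vector bundle (n = 2)
ranks = {frozenset(): 0,
         frozenset({1}): 2,
         frozenset({2}): 3,
         frozenset({1, 2}): 5}

print(fibre_rank(ranks, {1}))     # 2: the side E^A({1}) = A_{1}
print(fibre_rank(ranks, {1, 2}))  # 10 = 2 + 3 + 5: the total space
```

The top node collects one summand for every nonempty subset of \(J\), which is exactly the fibred product defining \(\mathbb{E}^{\mathcal{A}}(\underline{n})\).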
The reader is invited to check that \[(\mathbb{E}^{\mathcal{A}})^{(S,J)}(I)=\prod_{K\in\operatorname{Obj}(\Diamond_{J}^{I})}^{M}A_{K}\] for all \(J\subseteq I\subseteq S\subseteq\underline{n}\). Here, the right-hand side is an embedded submanifold of \(\mathbb{E}^{\mathcal{A}}(I)\) by taking its fibered products with the zero sections of the "missing" bundles. The **vacant decomposed**\(n\)-fold vector bundle defined by \(\mathcal{A}\) is \[\overline{\mathbb{E}^{\mathcal{A}}}\colon\square^{n}\to\mathbf{Man}^{\infty},\quad\overline{\mathbb{E}^{\mathcal{A}}}(J)=\prod_{i\in J}^{M}A_{\{i\}}.\] (Note that it uses only the vector bundles in \(\mathcal{A}\) indexed by one-element sets. Hence it can also be defined by a family of \(n\) vector bundles over \(M\) indexed by the numbers \(1\) to \(n\).) For each \(J\subseteq\underline{n}\) the manifold \(\overline{\mathbb{E}^{\mathcal{A}}}(J)\) is clearly also embedded in \(\mathbb{E}^{\mathcal{A}}(J)\). Denote the embedding by \(\iota(J)\colon\overline{\mathbb{E}^{\mathcal{A}}}(J)\to\mathbb{E}^{\mathcal{A}}(J)\). The collection of these embeddings defines a monomorphism \[\iota\colon\overline{\mathbb{E}^{\mathcal{A}}}\to\mathbb{E}^{\mathcal{A}} \tag{18}\] of \(n\)-fold vector bundles. Write \(\overline{\mathbb{E}^{\mathcal{A}}}=\overline{\mathbb{E}}\) for simplicity. 
For \(\#J\geq 2\), \(J\subseteq S\subseteq\underline{n}\) and all \(I\in\operatorname{Obj}(\Diamond_{J}^{S})\) the space \(\overline{\mathbb{E}}^{(S,J)}(I)\) is here further \[\overline{\mathbb{E}}^{(S,J)}(I)=\prod_{i\in I\setminus J}^{M}A_{\{i\}}\] since \[\overline{\mathbb{E}}^{(S,J)}(I)=\left\{\begin{array}{ll}\overline{\mathbb{E}}_{J}^{I}&\text{ if }J\subseteq I\\ \overline{\mathbb{E}}(I)&\text{ if }I\subseteq S\setminus J,\end{array}\right.\] with \[\overline{\mathbb{E}}_{J}^{I}=\left\{e\in\prod_{i\in I}^{M}A_{\{i\}}\ \bigg{|}\ p_{I\setminus\{j\}}^{I}(e)=\mathbf{0}_{p_{I\setminus\{j\}}^{I}(e)}^{I\setminus\{j\}}\text{ for all }j\in J\right\}=\prod_{\{i\}\in\operatorname{Obj}(\Diamond_{J}^{I})}^{M}A_{\{i\}}=\prod_{i\in I\setminus J}^{M}A_{\{i\}}.\] The name _vacant_[16] comes from the fact that the \((I,I)\)-cores of \(\overline{\mathbb{E}}\) are consequently all trivial, for \(I\subseteq\underline{n}\) with \(\#I\geq 2\). The induced monomorphism \(\iota^{(S,J)}\colon\overline{\mathbb{E}}^{(S,J)}\to(\mathbb{E}^{\mathcal{A}})^{(S,J)}\) is clearly the one defined by the natural embeddings \[\overline{\mathbb{E}}^{(S,J)}(I)=\prod_{i\in I\setminus J}^{M}A_{\{i\}}\hookrightarrow\prod_{K\in\operatorname{Obj}(\Diamond_{J}^{I})}^{M}A_{K}=(\mathbb{E}^{\mathcal{A}})^{(S,J)}(I).\] Let \(\mathbb{E}\) be an \(n\)-fold vector bundle and consider the collection of vector bundles \(\mathbb{E}^{J}_{J}\to M=\mathbb{E}(\emptyset)\) for \(J\subseteq\underline{n}\). In particular, \(\mathbb{E}^{\{i\}}_{\{i\}}=\mathbb{E}(\{i\})=:E_{i}\) are the **lower sides** of \(\mathbb{E}\) and \(\mathbb{E}^{\emptyset}_{\emptyset}=\mathbb{E}(\emptyset)=M\). 
The collection \(\mathcal{A}_{\mathbb{E}}=\big{\{}\,\mathbb{E}^{I}_{I}\,|\,\emptyset\neq I\subseteq\underline{n}\big{\}}\) is the family of _building bundles_ of \(\mathbb{E}\) in the sense that \(\mathbb{E}\) is non-canonically isomorphic to the \(n\)-fold vector bundle \(\mathbb{E}^{\mathcal{A}_{\mathbb{E}}}\), or in other words, \(\mathbb{E}\) is _decomposed by \(\mathcal{A}_{\mathbb{E}}\)_. The proof of the existence of such an isomorphism, called a _decomposition_, is the subject of the following two sections. **Definition 4.2**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be an \(n\)-fold vector bundle._ 1. _The decomposed_ \(n\)_-fold vector bundle_ \(\mathbb{E}^{\mathcal{A}_{\mathbb{E}}}\) _defined as above by_ \(\mathbb{E}\) _is denoted by_ \(\mathbb{E}^{\mathrm{dec}}\)_. The vacant decomposed_ \(n\)_-fold vector bundle_ \(\overline{\mathbb{E}^{\mathcal{A}_{\mathbb{E}}}}\) _is written_ \(\overline{\mathbb{E}}\)_._ 2. _A_ _linear splitting_ _of the_ \(n\)_-fold vector bundle_ \(\mathbb{E}\) _is a monomorphism_ \(\Sigma\colon\overline{\mathbb{E}}\to\mathbb{E}\) _of_ \(n\)_-fold vector bundles, such that for_ \(i=1,\ldots,n\)_,_ \(\Sigma(\{i\})\colon E_{i}\to E_{i}\) _is the identity._ 3. _A_ _decomposition_ _of the_ \(n\)_-fold vector bundle_ \(\mathbb{E}\) _is a natural isomorphism_ \(\mathcal{S}\colon\mathbb{E}^{\mathrm{dec}}\to\mathbb{E}\) _of_ \(n\)_-fold vector bundles over the identity maps_ \(\mathcal{S}(\{i\})=\mathrm{id}_{E_{i}}\colon E_{i}\to E_{i}\) _such that additionally the induced core morphisms_ \(\mathcal{S}^{(I,I)}(\{I\})\) _are the identities_ \(\mathrm{id}_{\mathbb{E}^{I}_{I}}\) _for all_ \(I\subseteq\underline{n}\)_._ By Corollary 3.6 in [10], any \(n\)-fold vector bundle admits a decomposition. Since there appears to be a little gap in the proof of this result in [10], it is revisited in Section 4.4. 
Let \(\mathcal{A}=(A_{I}\mid I\subseteq\underline{n})\), \(\mathcal{B}=(B_{I}\mid I\subseteq\underline{n})\) and \(\mathcal{C}=(C_{I}\mid I\subseteq\underline{n})\) be three families of vector bundles over smooth manifolds \(M\), \(N\) and \(P\) respectively. A morphism \(\tau\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{B}}\) is easily seen to amount to a collection of vector bundle morphisms \[\tau_{\rho}\colon A_{I_{1}}\otimes\ldots\otimes A_{I_{l}}\to B_{I}\] over \(\tau(\emptyset)\colon M\to N\) for all \(\emptyset\neq I\subseteq\underline{n}\) and all \(\rho=(I_{1},\ldots,I_{l})\in\mathcal{P}(I)\), see [10]. Given this collection, the map \(\tau(\underline{n})\colon\mathbb{E}^{\mathcal{A}}(\underline{n})\to\mathbb{E}^{\mathcal{B}}(\underline{n})\) is given by \[(a_{I})_{\emptyset\neq I\subseteq\underline{n}}\quad\mapsto\quad\left(\sum_{\rho=(I_{1},\ldots,I_{l})\in\mathcal{P}(I)}\tau_{\rho}(a_{I_{1}},\ldots,a_{I_{l}})\right)_{\emptyset\neq I\subseteq\underline{n}}.\] Given a second morphism \(\mu\colon\mathbb{E}^{\mathcal{B}}\to\mathbb{E}^{\mathcal{C}}\) of \(n\)-fold vector bundles, the composition \(\mu\circ\tau\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{C}}\) is given by \[(\mu\circ\tau)_{\rho}\,((a_{J})_{J\in\rho})=\sum_{(J_{1},\ldots,J_{l})\in\mathrm{coars}(\rho)}\mu_{(J_{1},\ldots,J_{l})}\Big{(}\tau_{\rho\cap J_{1}}\big{(}(a_{J})_{J\in\rho\cap J_{1}}\big{)},\ldots,\tau_{\rho\cap J_{l}}\big{(}(a_{J})_{J\in\rho\cap J_{l}}\big{)}\Big{)}\] for all \(\emptyset\neq I\subseteq\underline{n}\) and all \(\rho\in\mathcal{P}(I)\). Here, \(\rho\cap J_{k}\) is the canonically ordered partition \(\{I_{s}\in\rho\mid I_{s}\subseteq J_{k}\}\) for \(k=1,\ldots,l\). ### Iterated highest order cores of an \(n\)-fold vector bundle This section studies in detail the _iterated highest order cores_ of an \(n\)-fold vector bundle. **Definition 4.3**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be an \(n\)-fold vector bundle. 
Then the **highest order cores**\(\mathbb{E}^{(\underline{n},J)}\), written simply \(\mathbb{E}^{J}\), for \(J\subseteq\underline{n}\) with \(\#J=2\) are also called \((n-1)\)**-cores of \(\mathbb{E}\)**, since they are modelled on the \((n-1)\)-cube categories \(\Diamond^{\underline{n}}_{J}\). The \((n-2)\)**-cores of \(\mathbb{E}\)** are then defined to be the highest order cores of the \((n-1)\)-cores of \(\mathbb{E}\), hence the \((n-2)\)-cores of the \((n-1)\)-cores of \(\mathbb{E}\)._ _This construction can then be iterated: for \(l\in\{1,\ldots,n-2\}\), the \(l\)**-cores of \(\mathbb{E}\)** are defined to be the \(l\)-cores of the \((l+1)\)-cores of \(\mathbb{E}\). Further, the (unique) \(n\)**-core of \(\mathbb{E}\)** is set by convention to be \(\mathbb{E}\) itself._ _The \(l\)-cores for \(l=1,\ldots,n-1\) are generally called the **iterated highest order cores of \(\mathbb{E}\)**._ By construction an iterated higher order core of \(\mathbb{E}\) is a functor from a full subcategory of \(\square^{n}\) to \(\mathbf{Man}^{\infty}\). As above, an \(l\)-core has a natural transformation by embeddings in the former \((l+1)\)-core in its recursive construction, and all morphisms are restrictions of the morphisms of this \((l+1)\)-core. Then the \(l\)-core has a natural transformation by embeddings in \(\mathbb{E}\). A priori the recursively defined subcategory of \(\square^{n}\) indexing an \(l\)-core is complicated to write down. In addition, two different chains of construction of \(l\)-cores can lead to the same \(l\)-core. The goal of the following theorem is to remedy these problems, by understanding that the collection of \(l\)-cores of \(\mathbb{E}\), for \(l\in\{1,\ldots,n\}\), is simply parametrised by the partitions of \(\underline{n}\) in \(l\) subsets. **Proposition 4.4**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be an \(n\)-fold vector bundle and choose \(l\in\{1,\ldots,n\}\)._ 1. 
_For each_ \(l\)_-core_ \(\mathbb{F}\) _of_ \(\mathbb{E}\) _there is a partition_ \(\rho:=\{I_{1},\ldots,I_{l}\}\) _of_ \(\underline{n}\) _in_ \(l\) _(non-empty) subsets such that_ \(\mathbb{F}\) _is a functor from the_ \(l\)_-cube subcategory_ \(\Diamond^{\rho}\) _of_ \(\square^{n}\) _to_ \(\mathbf{Man}^{\infty}\)_. Conversely, for each partition_ \(\rho\) _of_ \(\underline{n}\) _in_ \(l\) _elements there is an_ \(l\)_-core_ \(\mathbb{F}\colon\Diamond^{\rho}\to\mathbf{Man}^{\infty}\) _of_ \(\mathbb{E}\)_. If two_ \(l\)_-cores are defined on the same subcategory_ \(\Diamond^{\rho}\)_, they are equal._ 2. _Choose an_ \(l\)_-partition_ \(\rho\) _of_ \(\underline{n}\)_, and consider the corresponding_ \(l\)_-core_ \(\mathbb{E}^{\rho}\) _of_ \(\mathbb{E}\)_. Then the_ \(i\)_-cores of_ \(\mathbb{E}^{\rho}\)_, for_ \(i=1,\ldots,l\)_, are the_ \(i\)_-cores of_ \(\mathbb{E}\) _indexed by coarsements of_ \(\rho\)_._ 3. _Assume that_ \(\rho=\{I_{1},\ldots,I_{l}\}\) _is a partition of_ \(\underline{n}\) _in_ \(l\) _subsets and that_ \(\rho_{1}\) _and_ \(\rho_{2}\) _are two different_ \((l-1)\)_-coarsements of_ \(\rho\) _as in Lemma_ 2.2_. Then the_ \((l-2)\)_-core_ \(\mathbb{E}^{\rho_{1}\cap\rho_{2}}\) _is a common_ \((l-2)\)_-core of_ \(\mathbb{E}^{\rho_{1}}\) _and_ \(\mathbb{E}^{\rho_{2}}\)_._ **Definition 4.5**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be an \(n\)-fold vector bundle and let \(\rho\) be a partition of \(\underline{n}\) in \(l\) elements. The unique \(l\)-core of \(\mathbb{E}\) corresponding to \(\rho\) as in the previous proposition is denoted by \(\mathbb{E}^{\rho}\)._ Note that an \(n\)-fold vector bundle has only one \(1\)-core, since there is only one partition of \(\underline{n}\) with \(1\) element. It is the ultracore. Note also that the \(2\)-cores of \(\mathbb{E}\) are indexed by pairs of nonempty subsets \(I,J\subseteq\underline{n}\) with \(I\cap J=\emptyset\), and \(I\cup J=\underline{n}\). 
The corresponding \(2\)-cube category \(\Diamond^{\{I,J\}}\) has then the objects \(\emptyset,I,J,I\cup J=\underline{n}\). The characterisation above shows as well that an \(n\)-fold vector bundle can be naturally understood as its own \(n\)-core, since the partition of \(\underline{n}\) in \(n\) subsets gives with (3) the \(n\)-cube category \(\square^{n}\). Proof of Proposition 4.4.: For \(l=n\), the first statement is clearly true since there is only one partition of \(\underline{n}\) in \(n\) subsets and \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) is its own \(n\)-core. Choose a subset \(J\subseteq\underline{n}\) of cardinality \(2\). Then by Theorem 2.20 in [11], the core \(\mathbb{E}^{(\underline{n},J)}\) is an \((n-1)\)-fold vector bundle \[\mathbb{E}^{J}:=\mathbb{E}^{(\underline{n},J)}\colon\Diamond^{\rho_{J}}\to\mathbf{Man}^{\infty}, \tag{19}\] where as before \(\rho_{J}\) is the partition of \(\underline{n}\) with \(J\) and one-element sets, hence a partition of \(\underline{n}\) with \(n-1\) subsets. By definition there is a one-to-one correspondence between subsets of \(\underline{n}\) of cardinality \(2\) and \((n-1)\)-cores, hence between partitions of \(\underline{n}\) with \(n-1\) elements and \((n-1)\)-cores. So (1) is also true for \(l=n-1\), while (2) is true for \(l=n\) and \(i=n-1\). A partition of \(\underline{n}\) in \(n-2\) subsets has either one subset with \(3\) elements and \(n-3\) subsets with one element each, or two subsets with \(2\) elements each and \(n-4\) subsets with one element each. Of course, here \(n\geq 3\) and if \(n=3\) only the first case is possible, while for \(n\geq 4\) both cases are possible. Consider first the first case, and assume without loss of generality that the partition is \(\rho:=\{\{1,2,3\},\{4\},\ldots,\{n\}\}\). Consider \(J_{1}:=\{1,2\}\) and \(J_{2}:=\{1,3\}\) and the two \((n-1)\)-cores \(\mathbb{E}^{J_{1}}\) and \(\mathbb{E}^{J_{2}}\) as in (19). 
Then \(\lozenge^{\rho_{J_{1}}}\) is the cube category over the elements \(J_{1},\{3\},\ldots,\{n\}\), and \(\lozenge^{\rho_{J_{2}}}\) is the cube category over the elements \(J_{2},\{2\},\{4\},\{5\},\ldots,\{n\}\). Build the \(\{J_{1},\{3\}\}\)-core of \(\mathbb{E}^{J_{1}}\), which is an \((n-2)\)-core of \(\mathbb{E}^{J_{1}}\) and of \(\mathbb{E}\). By definition, it is modeled on the full subcategory \(\lozenge\) of \(\lozenge^{\rho_{J_{1}}}\) with objects \(J\subseteq\underline{n}\) consisting of unions of the sets \(J_{1}\cup\{3\},\{4\},\ldots,\{n\}\). Hence \(\lozenge=\lozenge^{\rho}\). This shows the existence of an \((n-2)\)-core modeled on \(\rho\). The \(\{J_{2},\{2\}\}\)-core of \(\mathbb{E}^{J_{2}}\) is then similarly modeled on the same category \(\lozenge^{\rho}\). These two \((n-2)\)-cores are hence modeled on the same partition of \(\underline{n}\). Choose \(I\in\operatorname{Obj}(\lozenge^{\rho})\). If \(I\cap\{1,2,3\}=\emptyset\), then also \(I\cap J_{1}=I\cap J_{2}=\emptyset\) and \[(\mathbb{E}^{J_{1}})^{\{J_{1},\{3\}\}}(I)=\mathbb{E}^{J_{1}}(I)=\mathbb{E}(I)= \mathbb{E}^{J_{2}}(I)=(\mathbb{E}^{J_{2}})^{\{J_{2},\{2\}\}}(I).\] If \(\{1,2,3\}\subseteq I\) then also \(J_{1},J_{2}\subseteq I\) and so6 Footnote 6: There is a little abuse of notation here; in the formula below \(I\) should formally be replaced by \(\{J_{1};\{t\}\mid t\in I\setminus J_{1}\}\) and \(\{J_{2};\{t\}\mid t\in I\setminus J_{2}\}\), depending on which chain of cores is considered. For the convenience of the reader, it is just written \(I\) and understood from the context. 
\[(\mathbb{E}^{J_{1}})^{\{J_{1},\{3\}\}}(I) =(\mathbb{E}^{J_{1}})^{I}_{\{J_{1},\{3\}\}}=(c^{I}_{J_{1}})^{-1} \left(\mathbf{0}^{I\setminus J_{1}}_{I\setminus\{1,2,3\}}\right)\cap(p^{I}_{3})^{-1}\left(\mathbf{0}^{I\setminus\{3\}}_{I\setminus\{1,2,3\}}\right)\] \[=\mathbb{E}^{I}_{\{1,2,3\}}\] \[=(c^{I}_{J_{2}})^{-1}\left(\mathbf{0}^{I\setminus J_{2}}_{I\setminus\{1,2,3\}}\right)\cap(p^{I}_{2})^{-1}\left(\mathbf{0}^{I\setminus\{2\}}_{I\setminus\{1,2,3\}}\right)\] \[=(\mathbb{E}^{J_{2}})^{I}_{\{J_{2},\{2\}\}}=(\mathbb{E}^{J_{2}})^{\{J_{2},\{2\}\}}(I),\] where the third and fourth equalities are applications of [10, Lemma 2.19]. Since the projections in each \(l\)-core are restrictions of the projections of the \(n\)-fold vector bundle \(\mathbb{E}\), it suffices to show that the images of the two functors \((\mathbb{E}^{J_{1}})^{\{J_{1},\{3\}\}}\) and \((\mathbb{E}^{J_{2}})^{\{J_{2},\{2\}\}}\) on objects of \(\lozenge^{\rho}\) are the same to get that the two functors are equal. Next assume that \(n\geq 4\) and consider the partition \(\rho:=\{\{1,2\},\{3,4\},\{5\},\ldots,\{n\}\}\) of \(\underline{n}\) in \((n-2)\) elements. Set \(J_{1}:=\{1,2\}\) and \(J_{2}:=\{3,4\}\) and consider as before the two \((n-1)\)-cores \(\mathbb{E}^{J_{1}}\) and \(\mathbb{E}^{J_{2}}\) of \(\mathbb{E}\). Then \(\lozenge^{\rho_{J_{1}}}\) is the cube category over the elements \(J_{1},\{3\},\ldots,\{n\}\), and \(\lozenge^{\rho_{J_{2}}}\) is the cube category over the elements \(\{1\},\{2\},J_{2},\{5\},\{6\},\ldots,\{n\}\). Build the \(\{\{3\},\{4\}\}\)-core of \(\mathbb{E}^{J_{1}}\), which is an \((n-2)\)-core of \(\mathbb{E}^{J_{1}}\) and of \(\mathbb{E}\). By definition, it is modeled on the full subcategory \(\lozenge\) of \(\lozenge^{\rho_{J_{1}}}\) with objects \(I\subseteq\underline{n}\) consisting of unions of the sets \(J_{1},J_{2},\{5\},\{6\},\ldots,\{n\}\). Hence \(\lozenge=\lozenge^{\rho}\). This shows the existence of an \((n-2)\)-core modeled on \(\rho\). 
The \(\{\{1\},\{2\}\}\)-core of \(\mathbb{E}^{J_{2}}\) is then similarly modeled on the same category \(\lozenge^{\rho}\). These two \((n-2)\)-cores are hence modeled on the same partition of \(\underline{n}\). In order to show that \[(\mathbb{E}^{J_{1}})^{\{\{3\},\{4\}\}}=(\mathbb{E}^{J_{2}})^{\{\{1\},\{2\}\}}\colon \lozenge^{\rho}\to\mathbf{Man}^{\infty},\] it is again enough to show that the functors are equal on objects of \(\lozenge^{\rho}\). First take \(I\in\operatorname{Obj}(\lozenge^{\rho})\) with \(I\cap J_{1}=\emptyset\) and \(I\cap J_{2}=\emptyset\). Then \[(\mathbb{E}^{J_{1}})^{\{\{3\},\{4\}\}}(I)=\mathbb{E}^{J_{1}}(I)=\mathbb{E}(I)= \mathbb{E}^{J_{2}}(I)=(\mathbb{E}^{J_{2}})^{\{\{1\},\{2\}\}}(I).\] If \(I\in\operatorname{Obj}(\Diamond^{\rho})\) satisfies \(J_{1}\subseteq I\) and \(I\cap J_{2}=\emptyset\), then \[(\mathbb{E}^{J_{1}})^{\{\{3\},\{4\}\}}(I)=\mathbb{E}^{J_{1}}(I)=E^{I}_{J_{1}}\] while \[(\mathbb{E}^{J_{2}})^{\{\{1\},\{2\}\}}(I)=(\mathbb{E}^{J_{2}})^{I}_{\{\{1\},\{2\}\}}=E^{I}_{J_{1}}\] since the \(I\)-face of \(\mathbb{E}^{J_{2}}\) is the \(I\)-face of \(\mathbb{E}\). 
Finally if \(J_{1}\cup J_{2}\subseteq I\), then \[(\mathbb{E}^{J_{1}})^{\{\{3\},\{4\}\}}(I) =(\mathbb{E}^{J_{1}})^{I}_{\{\{3\},\{4\}\}}\] \[=\left\{e\in\mathbb{E}^{J_{1}}(I)\left|p^{I}_{I\setminus\{3\}}(e)=\mathbf{0}^{I\setminus\{3\}}_{p^{I}_{I\setminus J_{2}}(e)}\text{ and }p^{I}_{I\setminus\{4\}}(e)=\mathbf{0}^{I\setminus\{4\}}_{p^{I}_{I\setminus J_{2}}(e)}\right.\right\}\] \[=\left\{e\in\mathbb{E}(I)\left|\begin{array}{c}p^{I}_{I\setminus\{1\}}(e)=\mathbf{0}^{I\setminus\{1\}}_{p^{I}_{I\setminus J_{1}}(e)},\quad p^{I}_{I\setminus\{2\}}(e)=\mathbf{0}^{I\setminus\{2\}}_{p^{I}_{I\setminus J_{1}}(e)}\\ p^{I}_{I\setminus\{3\}}(e)=\mathbf{0}^{I\setminus\{3\}}_{p^{I}_{I\setminus J_{2}}(e)}\text{ and }p^{I}_{I\setminus\{4\}}(e)=\mathbf{0}^{I\setminus\{4\}}_{p^{I}_{I\setminus J_{2}}(e)}\end{array}\right.\right\}\] \[=(\mathbb{E}^{J_{2}})^{\{\{1\},\{2\}\}}(I),\] since the description in the third line is symmetric in \(J_{1}\) and \(J_{2}\). Given an arbitrary \((n-1)\)-core \(\mathbb{E}^{J}\) for \(J\subset\underline{n}\) with \(\#J=2\), its \((n-2)\)-cores are either \((\mathbb{E}^{J})^{\{\{i\},\{j\}\}}\) for \(i,j\in\underline{n}\setminus J\) or \((\mathbb{E}^{J})^{\{J,\{i\}\}}\) for \(i\in\underline{n}\setminus J\). As explained above, in the first case the \((n-2)\)-core is indexed by the partition \(\{J,\{i,j\};\{t\}\mid t\in\underline{n}\setminus(J\cup\{i,j\})\}\), and in the second case by the partition \(\{J\cup\{i\};\{t\}\mid t\in\underline{n}\setminus(J\cup\{i\})\}\). Hence (1) is proved for \(l=n-2\). (2) is true for \(l=n\) and \(i=n-1,n-2\), and \(l=n-1\) and \(i=n-2\). 
(3) for \(l=n\) was proved above at the same time as the equality of the two \((n-2)\)-cores indexed by the same partition, since in the first case \[\{\{1,2,3\},\{4\},\ldots,\{n\}\}=\{J_{1},\{3\},\ldots,\{n\}\}\sqcap\{J_{2},\{2\},\{4\},\{5\},\ldots,\{n\}\}\] and in the second case \[\{\{1,2\},\{3,4\},\{5\},\ldots,\{n\}\}=\{J_{1},\{3\},\ldots,\{n\}\}\sqcap\{\{1\},\{2\},J_{2},\{5\},\{6\},\ldots,\{n\}\}.\] The proof now works recursively since an \(l\)-core of \(\mathbb{E}\) is always an \(l\)-core of an \((l+1)\)-core of \(\mathbb{E}\): Assume that (1) is true for some fixed \(l\in\{2,\ldots,n\}\). Then an \((l-1)\)-core \(\mathbb{F}\) of \(\mathbb{E}\) is an \((l-1)\)-core of an \(l\)-core of \(\mathbb{E}\), hence of an \(l\)-core \(\mathbb{E}^{\rho}\) for some partition \(\rho=\{I_{1},\ldots,I_{l}\}\) of \(\underline{n}\). The \((l-1)\)-core \(\mathbb{F}\) is then defined by a choice of \(i<j\) in \(\{1,\ldots,l\}\) such that \[\mathbb{F}=(\mathbb{E}^{\rho})^{\{I_{i},I_{j}\}}.\] \(\mathbb{F}\) is then modeled on \(\rho^{\prime}=\{I_{i}\cup I_{j},I_{1},\ldots,\widehat{I_{i}},\ldots,\widehat{I_{j}},\ldots,I_{l}\}\), which is an \((l-1)\)-partition of \(\underline{n}\). Conversely, choose a partition \(\rho=\{I_{1},\ldots,I_{l-1}\}\) of \(\underline{n}\) in \((l-1)\) subsets. Then since \(l-1\in\{1,\ldots,n-2\}\) one of the subsets \(I_{1},\ldots,I_{l-1}\) of \(\underline{n}\) must contain more than one element. Without loss of generality, \(I_{l-1}\) does. Write \(I_{l-1}=J_{l-1}\cup J_{l}\) with \(J_{l-1},J_{l}\subseteq\underline{n}\) disjoint and non-empty. Set \(\rho^{\prime}=\{I_{1},\ldots,I_{l-2},J_{l-1},J_{l}\}\). Then \[(\mathbb{E}^{\rho^{\prime}})^{\{J_{l-1},J_{l}\}}\] is an \((l-1)\)-core of \(\mathbb{E}\) indexed by \(\rho\). Computations as above show that this \((l-1)\)-core of \(\mathbb{E}\) does not depend on the choice of \(\rho^{\prime}\) above \(\rho\). 
Precisely, the considerations above show that for \(I=I_{i_{1}}\cup\ldots\cup I_{i_{k}}\in\operatorname{Obj}(\lozenge^{\rho})\), \[\begin{split}\mathbb{E}^{\rho}(I)&=\left\{e\in\mathbb{E}(I)\,\Bigg{|}\begin{array}{c}\text{ For $s=1,\ldots,k$ and all $j\in I_{i_{s}}$:}\\ p^{I}_{I\setminus\{j\}}(e)=\mathbf{0}^{I\setminus\{j\}}_{p^{I}_{I\setminus I_{i_{s}}}(e)}.\end{array}\right\}\\ &=\bigcap_{s=1}^{k}\bigcap_{j\in I_{i_{s}}}(p^{I}_{I\setminus\{j\}})^{-1}\left(\mathbf{0}^{I\setminus\{j\}}_{I\setminus I_{i_{s}}}\right)\end{split} \tag{20}\] which does not depend on the choice of \(\rho^{\prime}\). Therefore (1) holds for all \(l=1,\ldots,n\). By construction (2) holds as well for all \(l=1,\ldots,n\) and all \(i=1,\ldots,l\), and (3) follows again from the one-to-one correspondence between \(l\)-partitions and \(l\)-cores, for all \(l=1,\ldots,n\). **Corollary 4.6**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be an \(n\)-fold vector bundle and let \(\rho=\{I_{1},\ldots,I_{l}\}\) be a partition of \(\underline{n}\) in \(l\) elements. The lower sides of \(\mathbb{E}^{\rho}\) are the vector bundles \(\mathbb{E}^{I_{i}}_{I_{i}}\to M\) for \(i=1,\ldots,l\), and the other building bundles of \(\mathbb{E}^{\rho}\) are the bundles \(\mathbb{E}^{J}_{J}\) for \(J\in\operatorname{Obj}(\lozenge^{\rho})\)._ Proof.: The first statement is clear since \(\lozenge^{\rho}\) is the \(l\)-cube category over the elements \(I_{1},\ldots,I_{l}\). It is also a special case of the second statement. 
For the second statement, choose \(J\in\operatorname{Obj}(\lozenge^{\rho})\), without loss of generality \(J=I_{1}\cup\ldots\cup I_{k}\) for \(1\leq k\leq l\), and compute with (20) \[(\mathbb{E}^{\rho})^{J}_{J}=\left\{e\in\mathbb{E}^{\rho}(J)\,\Big{|}\,\text{for all }s=1,\ldots,k\colon\,p^{J}_{J\setminus I_{s}}(e)=\mathbf{0}^{J\setminus I_{s}}_{p^{J}_{\emptyset}(e)}\right\}=\left\{e\in\mathbb{E}(J)\,\Big{|}\,\text{for all }j\in J\colon\,p^{J}_{J\setminus\{j\}}(e)=\mathbf{0}^{J\setminus\{j\}}_{p^{J}_{\emptyset}(e)}\right\}=\mathbb{E}^{J}_{J}.\] **Example 4.8**.: Consider as before an \(n\)-fold vector bundle \(\mathbb{E}\) and the induced collection of vector bundles \[\mathcal{A}=\left\{\mathbb{E}_{K}^{K}\mid K\subseteq\underline{n}\right\}.\] A partition \(\rho\) of \(\underline{n}\) defines then a \(\#\rho\)-fold vector bundle 
\[\mathbb{E}^{\mathcal{A},\rho}\colon\Diamond^{\rho}\to\mathbf{Man}^{\infty}, \qquad\mathbb{E}^{\mathcal{A},\rho}(J):=\prod_{\begin{subarray}{c}I\subseteq J \\ I\in\mathrm{Obj}(\Diamond^{\rho})\end{subarray}}^{M}\mathbb{E}_{I}^{I}\] for \(J\in\mathrm{Obj}(\Diamond^{\rho})\), and a vacant \(\#\rho\)-fold vector bundle \[\overline{\mathbb{E}^{\rho}}\colon\Diamond^{\rho}\to\mathbf{Man}^{\infty}, \qquad\overline{\mathbb{E}^{\rho}}(J):=\prod_{\begin{subarray}{c}I\subseteq J\\ I\in\rho\end{subarray}}^{M}\mathbb{E}_{I}^{I}\] for \(J\in\mathrm{Obj}(\Diamond^{\rho})\). Here \(\overline{\mathbb{E}^{\rho}}(J)\) is clearly embedded in \(\mathbb{E}^{\mathcal{A},\rho}(J)\), which is embedded in \(\mathbb{E}^{\mathcal{A}}(J)\) for all \(J\in\mathrm{Obj}(\Diamond^{\rho})\) and the collection of these embeddings defines natural transformations \[\overline{\mathbb{E}^{\rho}}\xrightarrow{\iota^{\rho}}\mathbb{E}^{\mathcal{A},\rho}\xrightarrow{\tau^{\mathcal{A},\rho}}\mathbb{E}^{\mathcal{A}}\circ i^{\rho}.\] (Recall that \(i^{\rho}\colon\Diamond^{\rho}\to\square^{n}\) is the inclusion functor.) The \(l\)-cores of \(\mathbb{E}^{\mathcal{A}}\) coincide with the functors \(\mathbb{E}^{\mathcal{A},\rho}\colon\Diamond^{\rho}\to\mathbf{Man}^{\infty}\) given by partitions \(\rho\) of \(\underline{n}\) with \(l\) elements. A decomposition \(\mathcal{S}\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}\) of \(\mathbb{E}\) restricts as in Lemma 4.7 to a decomposition \(\mathcal{S}^{\rho}\colon\mathbb{E}^{\mathcal{A},\rho}\to\mathbb{E}^{\rho}\) of the \(l\)-core \(\mathbb{E}^{\rho}\) such that the following diagram of natural transformations commutes. Consider an \(n\)-fold vector bundle \(\mathbb{E}\). Choose a partition \(\rho=\{I_{1},\ldots,I_{l}\}\) of \(\underline{n}\) in \(l\) subsets and consider two distinct coarsements \(\rho_{1}\) and \(\rho_{2}\) of \(\rho\) in \((l-1)\) subsets. 
Let \(\mathcal{S}^{\rho_{i}}\colon\mathbb{E}^{\mathcal{A},\rho_{i}}\to\mathbb{E}^{\rho_{i}}\) be a decomposition of the \((l-1)\)-core \(\mathbb{E}^{\rho_{i}}\) for \(i=1,2\). The two decompositions \(\mathcal{S}^{\rho_{1}}\) and \(\mathcal{S}^{\rho_{2}}\) are **compatible** if \[(\mathcal{S}^{\rho_{1}})^{\rho_{1}\sqcap\rho_{2}}=(\mathcal{S}^{\rho_{2}})^{\rho_{1}\sqcap\rho_{2}}\colon\mathbb{E}^{\mathcal{A},\rho_{1}\sqcap\rho_{2}}\to\mathbb{E}^{\rho_{1}\sqcap\rho_{2}}. \tag{21}\] Take now more precisely the coarsement \(\underline{\rho}:=\{I_{1}\cup I_{2},I_{3},\ldots,I_{l}\}\) of \(\rho\). Then \((\mathbb{E}^{\rho})_{\{I_{3},\ldots,I_{l}\}}\) is a face of \(\mathbb{E}^{\rho}\) and of \(\mathbb{E}^{\underline{\rho}}\). Consider a linear splitting \(\Sigma\colon\overline{\mathbb{E}^{\rho}}\to\mathbb{E}^{\rho}\) of \(\mathbb{E}^{\rho}\). Then \(\Sigma\) **is compatible with** a decomposition \(\mathcal{S}^{\underline{\rho}}\) of \(\mathbb{E}^{\underline{\rho}}\) if the following diagram commutes \[\begin{CD}(\overline{\mathbb{E}^{\underline{\rho}}})_{\{I_{3},\ldots,I_{l}\}}@>{(\mathcal{S}^{\underline{\rho}}\circ\iota)_{\{I_{3},\ldots,I_{l}\}}}>>(\mathbb{E}^{\underline{\rho}})_{\{I_{3},\ldots,I_{l}\}}\\ @|@|\\ (\overline{\mathbb{E}^{\rho}})_{\{I_{3},\ldots,I_{l}\}}@>{\Sigma|_{\{I_{3},\ldots,I_{l}\}}}>>(\mathbb{E}^{\rho})_{\{I_{3},\ldots,I_{l}\}}\end{CD} \tag{22}\] where \(\iota\) is the canonical natural transformation \[\iota\colon\overline{\mathbb{E}^{\underline{\rho}}}\to\mathbb{E}^{\mathcal{A},\underline{\rho}}.\] The compatibility of \(\Sigma\) with a decomposition of \(\mathbb{E}^{\underline{\rho}}\) for any other coarsement \(\underline{\rho}\) of \(\rho\) with \((l-1)\) elements is defined similarly. ### Proof of Corollary 3.6 in [16] The following theorem is proved in [16, Theorem 3.3]. **Theorem 4.9**.: 1. 
_Let_ \(\mathcal{S}\) _be a decomposition of an_ \(n\)_-fold vector bundle_ \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\)_. Then the composition_ \(\Sigma=\mathcal{S}\circ\tau\colon\overline{\mathbb{E}}\to\mathbb{E}\)_, with_ \(\tau\) _defined as in (_18_), is a splitting of_ \(\mathbb{E}\)_. Furthermore, the core morphisms_ \(\mathcal{S}^{\rho_{J}}\colon\mathbb{E}^{\mathcal{A},\rho_{J}}\to\mathbb{E}^{\rho_{J}}\) _are decompositions of_ \(\mathbb{E}^{\rho_{J}}\) _for all_ \(J\subseteq\underline{n}\) _with_ \(\#J=2\) _and these decompositions and the linear splitting are compatible._ 2. _Conversely, given a linear splitting_ \(\Sigma\) _of_ \(\mathbb{E}\) _and compatible_8 _decompositions of the_ \((n-1)\)_-cores_ \(\mathbb{E}^{\rho_{J}}\) _for_ \(J\subseteq\underline{n}\) _with_ \(\#J=2\)_, there exists a unique decomposition_ \(\mathcal{S}\) _of_ \(\mathbb{E}\) _such that_ \(\Sigma=\mathcal{S}\circ\tau\) _and such that the core morphisms of_ \(\mathcal{S}\) _are given by_ \(\mathcal{S}^{\rho_{J}}=\mathcal{S}^{J}\) _for all_ \(J\)_._ Footnote 8: Pairwise compatible and all compatible with \(\Sigma\). A symmetric version of this result is proved later on (see Proposition B.1) and the details of the proof of this theorem are discussed in Appendix A. Theorem 3.5 in [16] states then that for each \(n\)-fold vector bundle \(\mathbb{E}\), there is a linear splitting \[\Sigma\colon\overline{\mathbb{E}}\to\mathbb{E}\,,\] that is a monomorphism of \(n\)-fold vector bundles from the vacant, decomposed \(n\)-fold vector bundle \(\overline{\mathbb{E}}\) associated to \(\mathbb{E}\). [16] shows this result by proving inductively (over \(n\)) the following two claims: (a) 
Given an \(n\)-fold vector bundle \(\mathbb{E}\), there exist \(n\) linear splittings \(\Sigma_{\underline{n}\setminus\{k\}}\) of \(\mathbb{E}_{\underline{n}\setminus\{k\}}\) for \(k\in\underline{n}\), such that \(\Sigma_{\underline{n}\setminus\{i\}}(I)=\Sigma_{\underline{n}\setminus\{j\}}(I)\) for any \(I\subseteq\underline{n}\setminus\{i,j\}\), i.e. such that \[\Sigma_{\underline{n}\setminus\{i\}}|_{\underline{n}\setminus\{i,j\}}=\Sigma_{\underline{n}\setminus\{j\}}|_{\underline{n}\setminus\{i,j\}}\] for all \(i,j\in\underline{n}\). (b) Given a family of splittings as in (a), there exists a linear splitting \(\Sigma\) of \(\mathbb{E}\) with \[\Sigma|_{\underline{n}\setminus\{k\}}=\Sigma_{\underline{n}\setminus\{k\}}\] for each \(k\in\underline{n}\). The following proposition arises as a corollary of the second claim, which is the missing piece to the proof of Corollary 3.6 in [16]. **Proposition 4.10**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be an \(n\)-fold vector bundle. Assume that each \((n-1)\)-core \(\mathbb{E}^{\rho_{J}}\colon\Diamond^{\rho_{J}}\to\mathbf{Man}^{\infty}\) of \(\mathbb{E}\), for \(J\subseteq\underline{n}\) with \(\#J=2\), has a decomposition \(\mathcal{S}^{J}\colon\mathbb{E}^{\mathcal{A},\rho_{J}}\to\mathbb{E}^{\rho_{J}}\), such that for all \(J_{1},J_{2}\subseteq\underline{n}\) with \(\#J_{1}=\#J_{2}=2\) the decompositions \(\mathcal{S}^{J_{1}}\) and \(\mathcal{S}^{J_{2}}\) are compatible as in (21). Then there exists a splitting_ \[\Sigma\colon\overline{\mathbb{E}}\to\mathbb{E}\] _of \(\mathbb{E}\) that is compatible with all decompositions \(\mathcal{S}^{J}\) as in (22): for all \(J\subseteq\underline{n}\) with \(\#J=2\),_ \[\Sigma|_{\underline{n}\setminus J}=\mathcal{S}^{J}|_{\underline{n}\setminus J}\circ\tau|_{\underline{n}\setminus J}\colon\overline{\mathbb{E}}_{\underline{n}\setminus J}\to\mathbb{E}_{\underline{n}\setminus J}. 
\tag{23}\] Proof.: For each \(J\subseteq\underline{n}\) with \(\#J=2\), the decomposition \(\mathcal{S}^{J}\) of \(\mathbb{E}^{\rho_{J}}\) defines a decomposition \(\mathcal{S}^{J}|_{\underline{n}\setminus J}\) of \(\mathbb{E}_{\underline{n}\setminus J}\) since \(\mathbb{E}_{\underline{n}\setminus J}\) is a face of \(\mathbb{E}^{\rho_{J}}\). Denote by \(\Sigma^{J}\colon\overline{\mathbb{E}}_{\underline{n}\setminus J}\to\mathbb{E} _{\underline{n}\setminus J}\) the induced linear splitting of \(\mathbb{E}_{\underline{n}\setminus J}\), i.e. \[\Sigma^{J}:=\mathcal{S}^{J}|_{\underline{n}\setminus J}\circ\tau|_{\underline{ n}\setminus J}\colon\overline{\mathbb{E}}_{\underline{n}\setminus J}\to\mathbb{E}_{ \underline{n}\setminus J}.\] Consider \(i\in\underline{n}\). Then the \((n-2)\)-fold vector bundles \(\mathbb{E}_{\underline{n}\setminus J}\), for \(i\in J\subseteq\underline{n}\) with \(\#J=2\), are sides of the \((n-1)\)-fold vector bundle \(\mathbb{E}_{\underline{n}\setminus\{i\}}\). Choose two such subsets \(J_{1}\) and \(J_{2}\). Then \(\rho_{J_{1}}\sqcap\rho_{J_{2}}=\{J_{1}\cup J_{2};\{t\}\mid t\in\underline{n} \setminus(J_{1}\cup J_{2})\}\). Since \(\mathcal{S}^{J_{1}}\) and \(\mathcal{S}^{J_{2}}\) coincide on the common core \(\mathbb{E}^{\rho_{J_{1}}\sqcap\rho_{J_{2}}}\) of \(\mathbb{E}^{J_{1}}\) and \(\mathbb{E}^{J_{2}}\), they coincide in particular on its face \(\mathbb{E}_{\underline{n}\setminus(J_{1}\cup J_{2})}\). Hence \(\mathcal{S}^{J_{1}}|_{\underline{n}\setminus(J_{1}\cup J_{2})}=\mathcal{S}^{ J_{2}}|_{\underline{n}\setminus(J_{1}\cup J_{2})}\) and the induced splittings \(\Sigma^{J_{1}}\) and \(\Sigma^{J_{2}}\) satisfy consequently \[\Sigma^{J_{1}}(I)=\Sigma^{J_{2}}(I)\] for all \(I\subseteq(\underline{n}\setminus J_{1})\cap(\underline{n}\setminus J_{2})= \underline{n}\setminus(J_{1}\cup J_{2})\). That is, the splittings \(\Sigma^{J}\) for \(\#J=2\) and \(i\in J\) satisfy (a) above. 
By (b) there is consequently a linear splitting \[\Sigma_{\underline{n}\setminus\{i\}}\colon\overline{\mathbb{E}}_{\underline{n}\setminus\{i\}}\to\mathbb{E}_{\underline{n}\setminus\{i\}}\] of \(\mathbb{E}_{\underline{n}\setminus\{i\}}\), such that \[\Sigma_{\underline{n}\setminus\{i\}}|_{\underline{n}\setminus J}=\Sigma^{J}\] for all \(J\subseteq\underline{n}\) with \(\#J=2\) and \(i\in J\). Now choose \(K\subseteq(\underline{n}\setminus\{i\})\cap(\underline{n}\setminus\{j\})=\underline{n}\setminus\{i,j\}\) for \(i\neq j\in\underline{n}\). Set \(I:=\{i,j\}\). Then \[\Sigma_{\underline{n}\setminus\{i\}}(K)=\Sigma^{I}(K)=\Sigma_{\underline{n}\setminus\{j\}}(K)\] shows that (a) still holds for the \((n-1)\)-sides \(\mathbb{E}_{\underline{n}\setminus\{i\}}\) of \(\mathbb{E}\), for all \(i\in\underline{n}\). As a consequence, by (b) there exists a linear splitting \[\Sigma\colon\overline{\mathbb{E}}\to\mathbb{E}\] of \(\mathbb{E}\) with \[\Sigma|_{\underline{n}\setminus\{i\}}=\Sigma_{\underline{n}\setminus\{i\}}\] for \(i=1,\ldots,n\). The compatibility in (23) with the decompositions of the highest order cores is immediate by construction. Now the complete proof of [16, Corollary 3.6] can be given. **Corollary 4.11**.: _Every \(n\)-fold vector bundle \(\mathbb{E}\) is non-canonically isomorphic to the associated decomposed \(n\)-fold vector bundle \(\mathbb{E}^{\mathcal{A}}\)._ Proof.: First consider all \(2\)-cores of \(\mathbb{E}\). As explained in Proposition 4.4, these are indexed by all possible partitions of \(\underline{n}\) in two (non-empty) subsets. Consider such a partition \(\rho=\{I,\underline{n}\setminus I\}\) for \(\emptyset\neq I\subsetneq\underline{n}\) and choose a linear decomposition of the \(2\)-core \(\mathbb{E}^{\rho}\), which is a double vector bundle with sides \(\mathbb{E}^{I}_{I}\) and \(\mathbb{E}^{\underline{n}\setminus I}_{\underline{n}\setminus I}\) and with core \(\mathbb{E}^{\underline{n}}_{\underline{n}}\). 
(Double vector bundles always admit decompositions, see [1, 1]. See also the introduction of [10] for more historical details.) Choose now for each partition \(\rho=\{I_{1},I_{2},I_{3}\}\) of \(\underline{n}\) in three subsets a linear splitting \(\Sigma^{\rho}\colon\overline{\mathbb{E}^{\rho}}\to\mathbb{E}^{\rho}\) of the \(3\)-core \(\mathbb{E}^{\rho}\) of \(\mathbb{E}\). Then \(\Sigma^{\rho}\) is _automatically_ compatible with the decompositions \[\mathcal{S}^{\{I_{1}\cup I_{2},I_{3}\}},\quad\mathcal{S}^{\{I_{1}\cup I_{3},I_{ 2}\}},\quad\text{ and }\quad\mathcal{S}^{\{I_{2}\cup I_{3},I_{1}\}}\] as in (22) since (22) reads here \[\Sigma^{\rho}(I_{i})=\mathcal{S}^{\{I_{j}\cup I_{k},I_{i}\}}(I_{i})\colon \mathbb{E}_{I_{i}}^{I_{i}}\to\mathbb{E}_{I_{i}}^{I_{i}},\] for \(\{i,j,k\}=\{1,2,3\}\), which is immediate since both \(\Sigma^{\rho}(I_{i})\) and \(\mathcal{S}^{\{I_{j}\cup I_{k},I_{i}\}}(I_{i})\) are the identity on \(\mathbb{E}_{I_{i}}^{I_{i}}\). Secondly, the decompositions \(\mathcal{S}^{\{I_{1}\cup I_{2},I_{3}\}}\), \(\mathcal{S}^{\{I_{1}\cup I_{3},I_{2}\}}\) and \(\mathcal{S}^{\{I_{2}\cup I_{3},I_{1}\}}\) of the \(2\)-cores of \(\mathbb{E}^{\rho}\) are compatible since e.g. \[\{I_{1}\cup I_{2},I_{3}\}\sqcap\{I_{1}\cup I_{3},I_{2}\}=\{\underline{n}\},\] so (21) reduces here to \[\left(\mathcal{S}^{\{I_{1}\cup I_{2},I_{3}\}}\right)^{\{\underline{n}\}}= \left(\mathcal{S}^{\{I_{1}\cup I_{3},I_{2}\}}\right)^{\{\underline{n}\}}: \mathbb{E}_{\underline{n}}^{\underline{n}}\to\mathbb{E}_{\underline{n}}^{ \underline{n}},\] which again is immediate since both decompositions restrict to the identity on the common core \(\mathbb{E}_{\underline{n}}^{\underline{n}}\) of \(\mathbb{E}^{\{I_{1}\cup I_{2},I_{3}\}}\) and \(\mathbb{E}^{\{I_{1}\cup I_{3},I_{2}\}}\). 
By Theorem 4.9, there exists a decomposition \(\mathcal{S}^{\rho}\) of \(\mathbb{E}^{\rho}\) that restricts to \(\Sigma^{\rho}\) and to \(\mathcal{S}^{\{I_{1}\cup I_{2},I_{3}\}}\), \(\mathcal{S}^{\{I_{1}\cup I_{3},I_{2}\}}\) and \(\mathcal{S}^{\{I_{2}\cup I_{3},I_{1}\}}\) on the \(2\)-cores of \(\mathbb{E}^{\rho}\). Since \(\rho\) was an arbitrary partition of \(\underline{n}\) in three subsets, this yields decompositions of all \(3\)-cores, such that the restrictions to two refinements of a same \(2\)-core are automatically compatible: If \(\rho_{1}\) and \(\rho_{2}\) are two \(3\)-partitions of \(\underline{n}\) such that \(\rho_{1}\sqcap\rho_{2}\) is a \(2\)-partition of \(\underline{n}\), then \(\rho_{1}\sqcap\rho_{2}\) indexes the common \(2\)-core of \(\mathbb{E}^{\rho_{1}}\) and \(\mathbb{E}^{\rho_{2}}\), and by the construction above of \(\mathcal{S}^{\rho_{1}}\) and \(\mathcal{S}^{\rho_{2}}\) \[\left(\mathcal{S}^{\rho_{1}}\right)^{\rho_{1}\sqcap\rho_{2}}=\mathcal{S}^{\rho_{1}\sqcap\rho_{2}}=\left(\mathcal{S}^{\rho_{2}}\right)^{\rho_{1}\sqcap\rho_{2}}.\] Use Proposition 4.10 and choose linear splittings of the \(4\)-cores which are compatible with all decompositions of the corresponding \(3\)-cores. By Theorem 4.9 there are decompositions of the \(4\)-cores which are compatible with the chosen splittings and with all decompositions of the \(3\)-cores constructed above. As above, the decompositions of the \(4\)-cores are therefore automatically compatible, and there exist by Proposition 4.10 linear splittings of the \(5\)-cores that are compatible with the decompositions of the \(4\)-cores. Repeat these steps until a decomposition of the \(n\)-core \(\mathbb{E}\) is constructed. ### Atlases of multiple vector bundles [10] shows that an \(n\)-fold vector bundle can be equivalently defined as a smooth manifold endowed with an \(n\)-fold vector bundle atlas. 
The following definition [10] is a straightforward generalisation of Pradines' definition of a double vector bundle atlas [11]. **Definition 4.12**.: _Let \(M\) be a smooth manifold and \(E\) a topological space together with a continuous map \(\pi\colon E\to M\). An **n-fold vector bundle chart on \(E\)** is a tuple_ \[c=(U,\Theta,(V_{I})_{\emptyset\neq I\subseteq\underline{n}}),\] _where \(U\) is an open set in \(M\), for each \(\emptyset\neq I\subseteq\underline{n}\) the space \(V_{I}\) is a fixed (finite dimensional) real vector space, and \(\Theta\colon\pi^{-1}(U)\to U\times\prod_{\emptyset\neq I\subseteq\underline{n}}V_{I}\) is a homeomorphism such that \(\pi=\operatorname{pr}_{1}\circ\Theta\)._ _Two \(n\)-fold vector bundle charts \(c\) and \(c^{\prime}\) are **smoothly compatible** if the "change of chart" \(\Theta^{\prime}\circ\Theta^{-1}\) over \(U\cap U^{\prime}\) has the following form9:_ Footnote 9: Hence it is a morphism of decomposed \(n\)-fold vector bundles, see [12]. \[\big{(}p,(v_{I})_{\emptyset\neq I\subseteq\underline{n}}\big{)}\mapsto\left(p,\left(\sum_{\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)}\omega_{\rho}(p)(v_{I_{1}},\ldots,v_{I_{k}})\right)_{\emptyset\neq I\subseteq\underline{n}}\right)\] _with \(p\in U\cap U^{\prime}\), \(v_{I}\in V_{I}\) and \(\omega_{\rho}\in C^{\infty}(U\cap U^{\prime},\operatorname{Hom}(V_{I_{1}}\otimes\ldots\otimes V_{I_{k}},V_{I}))\) for \(\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)\)._ _A **smooth n-fold vector bundle atlas**\(\mathfrak{A}\) on \(E\) is a set of n-fold vector bundle charts of \(E\) that are pairwise smoothly compatible and such that the set of underlying open sets in \(M\) is a covering of \(M\). As usual, \(E\) is then a smooth manifold and two smooth \(n\)-fold vector bundle atlases \(\mathfrak{A}_{1}\) and \(\mathfrak{A}_{2}\) are **equivalent** if their union is a smooth n-fold vector bundle atlas. 
A smooth \(n\)**-fold vector bundle structure** on \(E\) is an equivalence class of smooth n-fold vector bundle atlases on \(E\). The pair of \(E\) and a smooth \(n\)-fold vector bundle structure on \(E\) is written \(\mathbb{E}\)._ Consider a smooth \(n\)-fold vector bundle atlas, and write \(\{U_{\alpha}\}_{\alpha\in\Lambda}\) for the underlying open covering of \(M\). For \(\alpha,\beta,\gamma\in\Lambda\) the identity \(\Theta_{\gamma}\circ\Theta_{\alpha}^{-1}=\Theta_{\gamma}\circ\Theta_{\beta}^{-1}\circ\Theta_{\beta}\circ\Theta_{\alpha}^{-1}\) on \(\pi^{-1}(U_{\alpha}\cap U_{\beta}\cap U_{\gamma})\) yields the following cocycle conditions. For \(\emptyset\neq I\subseteq\underline{n}\) and \(\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)\): \[\begin{split}&\omega_{\rho}^{\gamma\alpha}(p)(v_{I_{1}},\ldots,v_{I_{k}})=\\ &\sum_{(J_{1},\ldots,J_{l})\in\operatorname{coars}(\rho)}\omega_{(J_{1},\ldots,J_{l})}^{\gamma\beta}(p)\Big{(}\omega_{\rho\cap J_{1}}^{\beta\alpha}(p)\big{(}(v_{I})_{I\in\rho\cap J_{1}}\big{)},\ldots,\omega_{\rho\cap J_{l}}^{\beta\alpha}(p)\big{(}(v_{I})_{I\in\rho\cap J_{l}}\big{)}\Big{)}\,.\end{split} \tag{24}\] Since the involved constructions need to be refined, this section recalls in detail the proof of Corollary 3.10 in [12]: **Corollary 4.13**.: _Definition 4.1 of an \(n\)-fold vector bundle as a functor from the \(n\)-cube category is equivalent to Definition 4.12 of an \(n\)-fold vector bundle as a space with a maximal \(n\)-fold vector bundle atlas._ Proof.: Let \(\mathbb{E}\) be an \(n\)-fold vector bundle. By Corollary 4.11 (see also Theorem 3.2 and Theorem 3.3 in [12]), there is a decomposition \(\mathcal{S}\colon\mathbb{E}^{\operatorname{dec}}\to\mathbb{E}\) of \(\mathbb{E}\), with \(\mathbb{E}^{\operatorname{dec}}\) the decomposed \(n\)-fold vector bundle defined by the family \((\mathbb{E}^{I}_{I})_{I\subseteq\underline{n}}\) of vector bundles over \(M\). 
Set \(E:=\mathbb{E}(\underline{n})\), as usual \(M:=\mathbb{E}(\emptyset)\), and \(\pi=\mathbb{E}(\underline{n}\to\emptyset)\colon E\to M\). For each \(\emptyset\neq I\subseteq\underline{n}\), set \(V_{I}:=\mathbb{R}^{\dim E^{I}_{I}}\), the vector space on which \(E^{I}_{I}\) is modelled. Take a covering \(\{U_{\alpha}\}_{\alpha\in\Lambda}\) of \(M\) by open sets trivialising all the vector bundles \(E^{I}_{I}\); \[\phi^{\alpha}_{I}\colon q^{-1}_{I}(U_{\alpha})\stackrel{{\sim}}{{\longrightarrow}}U_{\alpha}\times V_{I}\] for all \(\emptyset\neq I\subseteq\underline{n}\) and all \(\alpha\in\Lambda\), where \(q_{I}\colon E^{I}_{I}\to M\) is the projection (the restriction of \(p^{I}_{\emptyset}\)). Then define \(n\)-fold vector bundle charts \(\Theta_{\alpha}\colon\pi^{-1}(U_{\alpha})\to U_{\alpha}\times\prod_{\emptyset\neq I\subseteq\underline{n}}V_{I}\) by \[\Theta_{\alpha}=\Big{(}\pi\times(\phi^{\alpha}_{I})_{I\subseteq\underline{n}}\Big{)}\circ\mathcal{S}(\underline{n})^{-1}|_{\pi^{-1}(U_{\alpha})}\colon\pi^{-1}(U_{\alpha})\to U_{\alpha}\times\prod_{\emptyset\neq I\subseteq\underline{n}}V_{I}.\] Given \(\alpha,\beta\in\Lambda\) with \(U_{\alpha}\cap U_{\beta}\neq\emptyset\), the change of chart \[\Theta_{\alpha}\circ\Theta_{\beta}^{-1}\colon(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq\underline{n}}V_{I}\to(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq\underline{n}}V_{I}\] is given by \[\left(p,(v_{I})_{\emptyset\neq I\subseteq\underline{n}}\right)\mapsto\left(p,(\rho_{I}^{\alpha\beta}(p)v_{I})_{\emptyset\neq I\subseteq\underline{n}}\right), \tag{25}\] with \(\rho_{I}^{\alpha\beta}\in C^{\infty}(U_{\alpha}\cap U_{\beta},\operatorname{Gl}(V_{I}))\) the cocycle defined by \(\phi_{I}^{\alpha}\circ(\phi_{I}^{\beta})^{-1}\). The two charts are thus smoothly compatible. Hence this defines an \(n\)-fold vector bundle atlas \(\mathfrak{A}=\{(U_{\alpha},\Theta_{\alpha},(V_{I})_{I\subseteq\underline{n}})\mid\alpha\in\Lambda\}\) on \(E\). 
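To make the change-of-chart condition of Definition 4.12 concrete, here is the case \(n=2\) written out explicitly (a worked instance added for illustration, not in the original; it assumes the ordering convention \(\mathcal{P}(\{1,2\})=\{(\{1,2\}),(\{1\},\{2\})\}\)):

```latex
% Change of chart for a double vector bundle (n = 2),
% with v_1 \in V_{\{1\}}, v_2 \in V_{\{2\}}, v_{12} \in V_{\{1,2\}}:
\big(p,\,v_1,\,v_2,\,v_{12}\big)\;\longmapsto\;
\Big(p,\;\omega_{(\{1\})}(p)\,v_1,\;\omega_{(\{2\})}(p)\,v_2,\;
  \omega_{(\{1,2\})}(p)\,v_{12}+\omega_{(\{1\},\{2\})}(p)\big(v_1,v_2\big)\Big)
```

The transition functions are thus linear in each side direction, while the core direction picks up a bilinear twist in the side coordinates; this is precisely the form of the double vector bundle charts of Pradines recalled before Definition 4.12.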
Conversely, given a space \(E\) with an \(n\)-fold vector bundle structure over a smooth manifold \(M\) as in Definition 4.12, define \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) as follows. Take a maximal atlas \(\mathfrak{A}=\{(U_{\alpha},\Theta_{\alpha},(V_{I})_{I\subseteq\underline{n}})\mid\alpha\in\Lambda\}\) of \(E\); in particular \(\{U_{\alpha}\}_{\alpha\in\Lambda}\) is an open cover of \(M\). For \(\alpha,\beta,\gamma\in\Lambda\) the identity \(\Theta_{\gamma}\circ\Theta_{\alpha}^{-1}=\Theta_{\gamma}\circ\Theta_{\beta}^{-1}\circ\Theta_{\beta}\circ\Theta_{\alpha}^{-1}\) on \(\pi^{-1}(U_{\alpha}\cap U_{\beta}\cap U_{\gamma})\) yields the cocycle conditions (24). Set \(\mathbb{E}(\underline{n}):=E\), \(\mathbb{E}(\emptyset):=M\), and more generally for \(\emptyset\neq I\subseteq\underline{n}\), \[\mathbb{E}(I)=\left.\left(\bigsqcup_{\alpha\in\Lambda}\left(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V_{J}\right)\right)\right/\sim\] with \(\sim\) the equivalence relation defined on \(\bigsqcup_{\alpha\in\Lambda}(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V_{J})\) by \[U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V_{J}\quad\ni\quad\left(p,(v_{J})_{\emptyset\neq J\subseteq I}\right)\quad\sim\quad\left(q,(w_{J})_{\emptyset\neq J\subseteq I}\right)\quad\in\quad U_{\beta}\times\prod_{\emptyset\neq J\subseteq I}V_{J}\] if and only if \(p=q\) and \[(v_{J})_{\emptyset\neq J\subseteq I}=\left(\sum_{\rho=(J_{1},\ldots,J_{k})\in\mathcal{P}(J)}\omega_{\rho}^{\alpha\beta}(p)(w_{J_{1}},\ldots,w_{J_{k}})\right)_{\emptyset\neq J\subseteq I}.\] The relations (24) show the symmetry and transitivity of this relation. 
As in the construction of a vector bundle from vector bundle cocycles, \(\mathbb{E}(I)\) has a unique smooth manifold structure such that \(\pi_{I}\colon\mathbb{E}(I)\to M\), \(\pi_{I}[p,(v_{J})_{\emptyset\neq J\subseteq I}]=p\) is a surjective submersion and such that the maps \[\Theta_{\alpha}^{I}\colon\operatorname{pr}_{I}\left(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V_{J}\right)\to U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V_{J},\qquad\left[p,(v_{J})_{\emptyset\neq J\subseteq I}\right]\mapsto\left(p,(v_{J})_{\emptyset\neq J\subseteq I}\right)\] are diffeomorphisms, where \(\operatorname{pr}_{I}\colon\bigsqcup_{\alpha\in\Lambda}(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V_{J})\to\mathbb{E}(I)\) is the projection to the equivalence classes. \(\mathbb{E}(I)\) comes equipped with \(\#I\) surjective submersions \[p_{I\setminus\{i\}}^{I}\colon\mathbb{E}(I)\to\mathbb{E}(I\setminus\{i\})\] for \(i\in I\), defined in charts by \[U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V_{J}\,\ni\,\left(p,(v_{J})_{\emptyset\neq J\subseteq I}\right)\,\mapsto\,\left(p,(v_{J})_{\emptyset\neq J\subseteq I\setminus\{i\}}\right)\,\in\,U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I\setminus\{i\}}V_{J}\] and it is easy to see that \(\mathbb{E}(I)\) is a vector bundle over \(\mathbb{E}(I\setminus\{i\})\), and that for \(i,j\in I\), \[\begin{CD}\mathbb{E}(I)@>{p_{I\setminus\{i\}}^{I}}>{}>\mathbb{E}(I\setminus\{i\})\\ @V{p_{I\setminus\{j\}}^{I}}V{}V@V{p_{I\setminus\{i,j\}}^{I\setminus\{i\}}}V{}V\\ \mathbb{E}(I\setminus\{j\})@>{p_{I\setminus\{i,j\}}^{I\setminus\{j\}}}>{}>\mathbb{E}(I\setminus\{i,j\})\end{CD}\] is a double vector bundle, with obvious local trivialisations given by the local charts. 
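The combinatorics driving the cocycle conditions (24) — ordered partitions of a finite index set and their coarsenings — can be enumerated concretely. The following Python sketch is illustrative only; it lists the blocks of a partition by their smallest element, one possible convention for the ordering used in the text:

```python
from itertools import combinations

def set_partitions(s):
    """All partitions of the set s, as tuples of blocks (each block a sorted
    tuple), with blocks listed by their smallest element."""
    s = sorted(s)
    if not s:
        yield ()
        return
    first, rest = s[0], s[1:]
    # The block containing the smallest element determines the recursion.
    for k in range(len(rest) + 1):
        for others in combinations(rest, k):
            block = tuple(sorted((first,) + others))
            remaining = [x for x in rest if x not in others]
            for sub in set_partitions(remaining):
                yield (block,) + sub

def coarsenings(rho):
    """Partitions whose blocks are unions of the blocks of rho."""
    out = []
    for grouping in set_partitions(range(len(rho))):
        blocks = tuple(tuple(sorted(x for i in g for x in rho[i])) for g in grouping)
        out.append(tuple(sorted(blocks)))
    return out

# The finest partition of {1,2,3} has 5 coarsenings (= all partitions):
assert len(coarsenings(((1,), (2,), (3,)))) == 5
```

In (24) the sum runs over exactly the coarsenings produced here, with each coarser block \(J_m\) collecting the blocks of \(\rho\) that it contains.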
**Remark 4.14**.: Note that the construction above of an \(n\)-fold vector bundle atlas on \(\mathbb{E}(\underline{n})\) from an \(n\)-fold vector bundle yields an atlas with simpler changes of charts (25) than the most general change of charts allowed by Definition 4.12. This is due to the choice of a _global_ decomposition of the \(n\)-fold vector bundle. Choosing different local decompositions yields an atlas with changes of charts as in Definition 4.12. ## 5. Symmetric \(n\)-fold vector bundles This section introduces _symmetric \(n\)-fold vector bundles_ and their morphisms, their symmetric atlases and the decomposed symmetric \(n\)-fold vector bundles. ### Global definition of symmetric \(n\)-fold vector bundles This section defines symmetric \(n\)-fold vector bundles. Recall that an \(n\)-fold vector bundle is a functor \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\). The \(n\)-cube category \(\square^{n}\) is equipped as follows with a canonical left action \(\Phi\) of the symmetric group \(S_{n}\) by isomorphisms of categories. The permutation \(\sigma\in S_{n}\) acts by \(\Phi_{\sigma}\colon\square^{n}\to\square^{n}\), which maps an object \(I\subseteq\underline{n}\) to the object \(\sigma(I)\subseteq\underline{n}\) and morphisms in the obvious way. This action induces a right action of \(S_{n}\) on the category of \(n\)-fold vector bundles, as follows. **Definition 5.1**.: _For \(\sigma\in S_{n}\) define the \(\sigma\)**-flip** of an \(n\)-fold vector bundle \(\mathbb{E}\) to be the \(n\)-fold vector bundle \(\mathbb{E}^{\sigma}:=\mathbb{E}\circ\Phi_{\sigma}\colon\square^{n}\to\mathbf{Man}^{\infty}\). Since \(\Phi\) is a left action of \(S_{n}\) on \(\square^{n}\), \((\mathbb{E}^{\sigma})^{\tau}=\mathbb{E}^{\sigma\tau}\) for all \(\sigma,\tau\in S_{n}\). 
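The identity \((\mathbb{E}^{\sigma})^{\tau}=\mathbb{E}^{\sigma\tau}\) can be checked mechanically on the level of index sets. In the toy Python sketch below (purely illustrative), an \(n\)-fold vector bundle is replaced by a labelling of the objects \(I\subseteq\underline{n}\) of \(\square^{n}\), and the \(\sigma\)-flip simply precomposes with \(\sigma\):

```python
from itertools import combinations

def subsets(n):
    """All subsets I of {1,...,n}, as frozensets."""
    base = range(1, n + 1)
    return [frozenset(c) for r in range(n + 1) for c in combinations(base, r)]

def flip(E, sigma):
    """sigma-flip on objects: E^sigma(I) = E(sigma(I)).
    E: dict subset -> label; sigma: dict i -> sigma(i)."""
    return {I: E[frozenset(sigma[i] for i in I)] for I in E}

n = 3
E = {I: "E(" + ",".join(map(str, sorted(I))) + ")" for I in subsets(n)}
sigma = {1: 2, 2: 3, 3: 1}
tau = {1: 2, 2: 1, 3: 3}
sigma_tau = {i: sigma[tau[i]] for i in tau}   # composition sigma . tau

# (E^sigma)^tau = E^{sigma tau}, since Phi is a left action:
assert flip(flip(E, sigma), tau) == flip(E, sigma_tau)
```

The assertion holds because \((\mathbb{E}\circ\Phi_{\sigma})\circ\Phi_{\tau}=\mathbb{E}\circ\Phi_{\sigma\tau}\), which is exactly what the dictionary composition computes.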
Given a morphism \(\tau\colon\mathbb{E}\to\mathbb{F}\) of \(n\)-fold vector bundles, there is an obvious morphism \(\tau^{\sigma}\colon\mathbb{E}^{\sigma}\to\mathbb{F}^{\sigma}\) of \(n\)-fold vector bundles defined by \(\tau^{\sigma}(I)=\tau(\sigma(I))\) for all \(I\subseteq\underline{n}\)._ Note that \(\mathbb{E}^{\sigma}\) has the same underlying spaces as \(\mathbb{E}\), but in different positions, which is crucial when considering morphisms of multiple vector bundles. For example, the double vector bundle \((D;A,B;M)\) is different from the double vector bundle \((D;B,A;M)\). **Definition 5.2**.: _An \(n\)-fold vector bundle \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) has a **symmetric structure** if_ 1. _its building bundles satisfy_10_ \(\mathbb{E}^{I}_{I}=\mathbb{E}^{J}_{J}\) _for_ \(\#I=\#J\)_, and_ Footnote 10: The building bundles \(\mathbb{E}^{I}_{I}\) and \(\mathbb{E}^{J}_{J}\) are of course equal up to isomorphism for \(\#I=\#J\). In condition (c), the identity map is then this isomorphism. 2. \(\mathbb{E}\) _is endowed with a left_ \(S_{n}\)_-action_ \(\Psi\colon S_{n}\to\operatorname{Mor}(\mathbb{E})\) _in the sense that_ (a) _for any_ \(\sigma\in S_{n}\)_,_ \(\Psi_{\sigma}\colon\mathbb{E}\to\mathbb{E}^{\sigma}\) _is a morphism of_ \(n\)_-fold vector bundles,_ (b) \(\Psi_{\mathrm{id}}=\mathrm{id}_{\mathbb{E}}\colon\mathbb{E}\to\mathbb{E}\) _and for all_ \(\sigma,\tau\in S_{n}\)_,_ \[\Psi_{\sigma\tau}=(\Psi_{\sigma})^{\tau}\circ\Psi_{\tau}\colon\mathbb{E}\to\mathbb{E}^{\tau}\to\mathbb{E}^{\sigma\tau}=(\mathbb{E}^{\sigma})^{\tau},\] (c) \(\Psi^{I,I}_{\sigma}=\varepsilon(\sigma,I)\cdot\mathrm{id}_{\mathbb{E}^{I}_{I}}\colon\mathbb{E}^{I}_{I}\to\mathbb{E}^{\sigma(I)}_{\sigma(I)}=\mathbb{E}^{I}_{I}\)_._ 
_An \(n\)-fold vector bundle together with a symmetric structure is called a **symmetric \(n\)-fold vector bundle**._ _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) and \(\mathbb{F}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be two symmetric \(n\)-fold vector bundles with actions \(\Phi\) and \(\Psi\), respectively. A **morphism**\(\tau\colon\mathbb{E}\to\mathbb{F}\)**of symmetric \(n\)-fold vector bundles** is a morphism of \(n\)-fold vector bundles which is \(S_{n}\)-equivariant, that is, such that the diagrams_ \[\begin{CD}\mathbb{E}(I)@>{\tau(I)}>{}>\mathbb{F}(I)\\ @V{\Phi_{\sigma}(I)}V{}V@V{\Psi_{\sigma}(I)}V{}V\\ \mathbb{E}(\sigma(I))@>{\tau(\sigma(I))}>{}>\mathbb{F}(\sigma(I))\end{CD}\] _commute for all \(I\subseteq\underline{n}\) and all \(\sigma\in S_{n}\); equivalently, the corresponding diagram of natural transformations commutes for all \(\sigma\in S_{n}\)._ \(\mathsf{SnVB}\) _is the category of symmetric \(n\)-fold vector bundles._ ### Example: the pullback of a symmetric \(n\)-fold vector bundle Let \(\mathbb{E}\) be an \(n\)-fold vector bundle. 
Then the \(n\)**-pullback of \(\mathbb{E}\)** is the set \[P=\left\{(e_{1},\ldots,e_{n})\left|e_{i}\in\mathbb{E}(\underline{n}\setminus\{i\})\right.\text{and }p_{\underline{n}\setminus\{i,j\}}^{\underline{n}\setminus\{i\}}(e_{i})=p_{\underline{n}\setminus\{i,j\}}^{\underline{n}\setminus\{j\}}(e_{j})\in\mathbb{E}(\underline{n}\setminus\{i,j\})\text{ for }i,j\in\underline{n}\right\}\,.\] By Theorem 2.10 in [13], \(P\) is a smooth embedded submanifold of the product \(\mathbb{E}(\underline{n}\setminus\{1\})\times\ldots\times\mathbb{E}(\underline{n}\setminus\{n\})\), and the functor \(\mathbb{P}\) defined by * \(\mathbb{P}(\underline{n})=P\), * \(\mathbb{P}(S)=\mathbb{E}(S)\) for all \(S\subsetneq\underline{n}\) and the vector bundle projections * \(p_{S\setminus\{i\}}^{S}\colon\mathbb{E}(S)\to\mathbb{E}(S\setminus\{i\})\) for all \(S\subsetneq\underline{n}\) and \(i\in S\) * \(p_{\underline{n}\setminus\{i\}}^{\prime}\colon P\to\mathbb{E}(\underline{n}\setminus\{i\})\), \((e_{1},\ldots,e_{n})\mapsto e_{i}\) is an \(n\)-fold vector bundle. Furthermore, define the map \(\pi(\underline{n})\colon\mathbb{E}(\underline{n})\to P\) by \(\pi(\underline{n})\colon e\mapsto(p^{\underline{n}}_{\underline{n}\setminus\{1\}}(e),\ldots,p^{\underline{n}}_{\underline{n}\setminus\{n\}}(e))\). The map \(\pi(\underline{n})\) defines together with \(\pi(J)=\operatorname{id}_{\mathbb{E}(J)}\) for \(J\subsetneq\underline{n}\), a surjective morphism \(\pi\colon\mathbb{E}\to\mathbb{P}\) of \(n\)-fold vector bundles. Note that for each \(i\in\underline{n}\), the top map \(\pi(\underline{n})\colon\mathbb{E}(\underline{n})\to P\) of \(\pi\) is necessarily a vector bundle morphism over the identity on \(\mathbb{E}(\underline{n}\setminus\{i\})\). 
Note also that the building bundle \(\mathbb{P}^{\underline{n}}_{\underline{n}}\to M\) is \[\left\{\left.\left(0_{m}^{\underline{n}\setminus\{1\}},\ldots,0_{m}^{\underline{n}\setminus\{n\}}\right)\in\mathbb{E}(\underline{n}\setminus\{1\})\times\ldots\times\mathbb{E}(\underline{n}\setminus\{n\})\right|m\in M\right\}\,, \tag{26}\] hence the trivial vector bundle \(M\to M\), while by construction \(\mathbb{P}^{I}_{I}=\mathbb{E}^{I}_{I}\) for all \(I\subsetneq\underline{n}\). Let \(\mathbb{E}\) be a symmetric \(n\)-fold vector bundle. This section shows that its pullback \(n\)-fold vector bundle \(\mathbb{P}\colon\square^{n}\to\mathbf{Man}^{\infty}\) (see [13]) is also a symmetric \(n\)-fold vector bundle, and the projection \(\pi\colon\mathbb{E}\to\mathbb{P}\) is a morphism of symmetric \(n\)-fold vector bundles. The building bundles of \(\mathbb{P}\) must satisfy \(\mathbb{P}^{I}_{I}=\mathbb{P}^{J}_{J}\) for \(I,J\subseteq\underline{n}\) with \(\#I=\#J\). This is immediate since for \(J\subsetneq\underline{n}\) the vector bundle \(\mathbb{P}^{J}_{J}\) equals \(\mathbb{E}^{J}_{J}\), and the building bundles of \(\mathbb{E}\) satisfy \(\mathbb{E}^{I}_{I}=\mathbb{E}^{J}_{J}\) for \(I,J\subseteq\underline{n}\) with \(\#I=\#J\). There is further only one subset of \(\underline{n}\) with cardinality \(n\), so the condition (1) in Definition 5.2 is trivially satisfied for that set. 
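In a local decomposed model it is easy to see concretely why the top building bundle of \(\mathbb{P}\) is trivial, as in (26): a compatible tuple \((e_{1},\ldots,e_{n})\) is determined by one coordinate for each nonempty proper subset of \(\underline{n}\), so nothing is left over the top face. The Python sketch below is a toy model over a single base point, with coordinates in \(\{0,1\}\); all names are illustrative:

```python
from itertools import combinations, product

n = 3
full = frozenset(range(1, n + 1))

def subsets_of(S):
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(sorted(S), r)]

def points(S):
    """A point of E(S) over a fixed base point, modelled as one coordinate
    (in {0,1}) for every nonempty J contained in S."""
    Js = [J for J in subsets_of(S) if J]
    return [dict(zip(Js, vals)) for vals in product((0, 1), repeat=len(Js))]

def proj(e, T):
    """Projection E(S) -> E(T) for T contained in S: forget coordinates outside T."""
    return {J: v for J, v in e.items() if J <= T}

# n-pullback: tuples (e_1,...,e_n) with e_i in E(n\{i}) agreeing after projection.
faces = [full - {i} for i in range(1, n + 1)]
P = [tpl for tpl in product(*(points(S) for S in faces))
     if all(proj(tpl[i], faces[i] & faces[j]) == proj(tpl[j], faces[i] & faces[j])
            for i in range(n) for j in range(i + 1, n))]

# One free coordinate per nonempty proper subset of {1,...,n}, and none on top:
assert len(P) == 2 ** (2 ** n - 2)
```

The count \(2^{2^{n}-2}\) reflects exactly one degree of freedom per nonempty proper \(J\subsetneq\underline{n}\), matching \(\mathbb{P}^{I}_{I}=\mathbb{E}^{I}_{I}\) for \(I\subsetneq\underline{n}\) and the triviality of \(\mathbb{P}^{\underline{n}}_{\underline{n}}\).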
Consider the action \(\Psi\) of \(S_{n}\) on \(\mathbb{E}\) and define the following action \(\Phi\) of \(S_{n}\) on \(\mathbb{P}\): for \(\sigma\in S_{n}\) and \(J\subsetneq\underline{n}\), \[\Phi_{\sigma}(J):=\Psi_{\sigma}(J)\colon\mathbb{E}(J)\to\mathbb{E}^{\sigma}(J)=\mathbb{E}(\sigma(J)).\] The map \(\Phi_{\sigma}(\underline{n})\colon P\to P\) is further defined by \[(e_{1},\ldots,e_{n})\mapsto(\Psi_{\sigma}(\underline{n}\setminus\{\sigma^{-1}(1)\})(e_{\sigma^{-1}(1)}),\ldots,\Psi_{\sigma}(\underline{n}\setminus\{\sigma^{-1}(n)\})(e_{\sigma^{-1}(n)}))\in P\,, \tag{27}\] and is automatically smooth. For all \(I\subsetneq\underline{n}\) and all \(\sigma\in S_{n}\), the map \((\Phi_{\sigma})^{I}_{I}\colon\mathbb{P}^{I}_{I}=\mathbb{E}^{I}_{I}\to\mathbb{E}^{I}_{I}\) equals \((\Psi_{\sigma})^{I}_{I}=\epsilon(\sigma,I)\cdot\operatorname{id}_{\mathbb{E}^{I}_{I}}\) since \(\Phi_{\sigma}(I)=\Psi_{\sigma}(I)\). Since \(\mathbb{P}^{\underline{n}}_{\underline{n}}\) is the trivial vector bundle over \(M\) and \((\Phi_{\sigma})^{\underline{n}}_{\underline{n}}\) is obviously the identity on this trivial bundle by (26) and (27), the third condition in Definition 5.2 is satisfied for \(I=\underline{n}\). It remains to show that \(\Phi_{\sigma}\) is a natural transformation for each \(\sigma\in S_{n}\), i.e. that for all \(I\subseteq\underline{n}\) and \(i\in I\) the corresponding square is a vector bundle homomorphism. But again, since \(\Phi_{\sigma}(I)=\Psi_{\sigma}(I)\) for all \(I\subsetneq\underline{n}\), it suffices to check this for \(I=\underline{n}\) and each \(i\in\underline{n}\). This, in turn, follows immediately from the definition of \(\Phi_{\sigma}(\underline{n})\). 
Finally, the equivariance diagrams for \(\pi\) all commute: for \(\sigma\in S_{n}\) and \(I\subsetneq\underline{n}\) this is immediate since \(\pi(I)=\operatorname{id}_{\mathbb{E}(I)}\) and \(\Phi_{\sigma}(I)=\Psi_{\sigma}(I)\), and for \(I=\underline{n}\) the diagram commutes since for all \(e\in E\), \[\begin{split}\pi(\underline{n})(\Psi_{\sigma}(\underline{n})(e))&=(p_{1}(\Psi_{\sigma}(\underline{n})(e)),\dots,p_{n}(\Psi_{\sigma}(\underline{n})(e)))\\ &=\big{(}\Psi_{\sigma}(\underline{n}\setminus\{\sigma^{-1}(1)\})(p_{\sigma^{-1}(1)}(e)),\dots,\Psi_{\sigma}(\underline{n}\setminus\{\sigma^{-1}(n)\})(p_{\sigma^{-1}(n)}(e))\big{)}\\ &=\Phi_{\sigma}(\underline{n})(p_{1}(e),\dots,p_{n}(e))=\Phi_{\sigma}(\underline{n})(\pi(\underline{n})(e)).\end{split}\] ### Iterated highest order cores of symmetric \(n\)-fold vector bundles This section shows that for \(l\in\{1,\dots,n\}\), the family of \(l\)-cores of a symmetric \(n\)-fold vector bundle inherits an \(S_{n}\)-symmetry. More precisely, the \(S_{n}\)-action on \(\mathbb{E}\) restricts to morphisms between the different \(l\)-cores. **Lemma 5.3**.: _Let \(\mathbb{E}\) be a symmetric \(n\)-fold vector bundle with \(S_{n}\)-action \(\Phi\). Choose a partition \(\rho=\{I_{1},\dots,I_{l}\}\) of \(\underline{n}\) into \(\#\rho=l\) subsets. Then for each \(\sigma\in S_{n}\), the partition \(\sigma(\rho)=\{\sigma(I_{1}),\dots,\sigma(I_{l})\}\) again has \(l\) elements, and \(\Phi_{\sigma}\colon\mathbb{E}\to\mathbb{E}^{\sigma}\) restricts to a morphism_ \[\Phi_{\sigma}^{\rho}\colon\mathbb{E}^{\rho}\to(\mathbb{E}^{\sigma(\rho)})^{\sigma}\] of \(l\)-fold vector bundles. 
For each \(J\in\operatorname{Obj}(\lozenge^{\rho})\) the smooth map_ \[\Phi^{\rho}_{\sigma}(J)\colon\mathbb{E}^{\rho}(J)\to\mathbb{E}^{\sigma(\rho)}(\sigma(J))\] _is the restriction to the embedded domain \(\mathbb{E}^{\rho}(J)\subseteq\mathbb{E}(J)\) and the embedded codomain \(\mathbb{E}^{\sigma(\rho)}(\sigma(J))\subseteq\mathbb{E}(\sigma(J))\) of the smooth map_ \[\Phi_{\sigma}(J)\colon\mathbb{E}(J)\to\mathbb{E}(\sigma(J)).\] Proof.: First choose a subset \(J\subseteq\underline{n}\) with two elements and consider the \((n-1)\)-core \(\mathbb{E}^{J}\) of \(\mathbb{E}\). The morphism \(\Phi_{\sigma}\colon\mathbb{E}\to\mathbb{E}^{\sigma}\) of \(n\)-fold vector bundles restricts to a morphism \[(\Phi_{\sigma})^{J}\colon\mathbb{E}^{J}\to(\mathbb{E}^{\sigma})^{J}\] of \((n-1)\)-fold vector bundles. An easy computation shows that \[(\mathbb{E}^{\sigma})^{J}=(\mathbb{E}^{\sigma(J)})^{\sigma} \tag{28}\] with, as usual, \((\mathbb{E}^{\sigma(J)})^{\sigma}(I):=\mathbb{E}^{\sigma(J)}(\sigma(I))\) for all \(I\subseteq\underline{n}\). Assume that for an \(l\)-partition \(\rho=\{I_{1},\ldots,I_{l}\}\) of \(\underline{n}\), \[(\mathbb{E}^{\sigma})^{\rho}=(\mathbb{E}^{\sigma(\rho)})^{\sigma}\] and choose an \((l-1)\)-coarsement \(\rho^{\prime}\) of \(\rho\). Then without loss of generality \(\rho^{\prime}=\{I_{1}\cup I_{2},I_{3},\ldots,I_{l}\}\) and \[(\mathbb{E}^{\sigma})^{\rho^{\prime}}=((\mathbb{E}^{\sigma})^{\rho})^{\rho^{\prime}}=\left(\left(\mathbb{E}^{\sigma(\rho)}\right)^{\sigma}\right)^{\rho^{\prime}}=\left(\left(\mathbb{E}^{\sigma(\rho)}\right)^{\sigma(\rho^{\prime})}\right)^{\sigma}=\left(\mathbb{E}^{\sigma(\rho^{\prime})}\right)^{\sigma},\] since \(\sigma(\rho^{\prime})\) is a coarsement of \(\sigma(\rho)\). In the third equality, (28) is used since the partition \(\rho^{\prime}\) gives a highest order core of \(\mathbb{E}^{\rho}\). The rest of the statement is then immediate. 
### Decomposed symmetric \(n\)-fold vector bundles Consider a smooth manifold \(M\) and a collection of vector bundles \(\mathcal{A}=\{q_{i}\colon A_{i}\to M\mid 1\leq i\leq n\}\). Define a functor \(\mathbb{E}^{\mathcal{A}}\colon\Box^{n}\to\mathbf{Man}^{\infty}\) as follows. Each object \(I\subseteq\underline{n}\) is sent to \[\mathbb{E}^{\mathcal{A}}(I):=\prod_{\emptyset\neq J\subseteq I}^{M}A_{\#J}\,\simeq\,\prod_{i\in\{1,\ldots,n\}}^{M}A_{i}^{\#\{J\subseteq I\mid\#J=i\}},\] as a fibered product of vector bundles over \(M\). That is, \[\mathbb{E}^{\mathcal{A}}(\{1,2\})=A_{1}\times A_{1}\times A_{2},\] \[\mathbb{E}^{\mathcal{A}}(\{2,4,5\})=A_{1}\times A_{1}\times A_{1}\times A_{2}\times A_{2}\times A_{2}\times A_{3},\] etc. For \(I\subseteq\underline{n}\) with \(\#I\geq 1\) and for \(k\in I\), the arrow \(I\to I\setminus\{k\}\) is sent to the canonical vector bundle projection11 Footnote 11: By convention \(A_{0}\) is set to be \(M\). \[p^{I}_{I\setminus\{k\}}\colon\prod_{\emptyset\neq J\subseteq I}^{M}A_{\#J}\to\prod_{J\subseteq I\setminus\{k\}}^{M}A_{\#J}.\] In particular, the arrow \(\{i\}\to\emptyset\) for \(i\in\underline{n}\) is sent to the vector bundle projection \(p^{\{i\}}_{\emptyset}=q_{1}\colon\mathbb{E}^{\mathcal{A}}(\{i\})=A_{1}\to\mathbb{E}^{\mathcal{A}}(\emptyset)=M\). This decomposed \(n\)-fold vector bundle \(\mathbb{E}^{\mathcal{A}}\) satisfies the following two conditions: 1. By construction the building bundles satisfy \((\mathbb{E}^{\mathcal{A}})^{I}_{I}=A_{\#I}=A_{\#J}=(\mathbb{E}^{\mathcal{A}})^{J}_{J}\) for \(\#I=\#J\). 2. The maps \[\Psi_{\sigma}^{\mathcal{A}}(I)\colon\mathbb{E}^{\mathcal{A}}(I) \to\mathbb{E}^{\mathcal{A},\sigma}(I)=\mathbb{E}^{\mathcal{A}}(\sigma(I))\] \[(a_{J})_{\emptyset\neq J\subseteq I} \mapsto(\epsilon(\sigma^{-1},J)a_{\sigma^{-1}(J)})_{\emptyset\neq J\subseteq\sigma(I)}\] for \(\sigma\in S_{n}\) and \(I\subseteq\underline{n}\) define an \(S_{n}\)-symmetry on \(\mathbb{E}^{\mathcal{A}}\). 
For any \(\sigma\in S_{n}\) the \(n\)-fold vector bundle \(\mathbb{E}^{\mathcal{A},\sigma}\) is the decomposed \(n\)-fold vector bundle with \(\mathbb{E}^{\mathcal{A},\sigma}(I)=\mathbb{E}^{\mathcal{A}}(\sigma(I))=\prod_{\emptyset\neq J\subseteq\sigma(I)}^{M}A_{\#J}\). Therefore, \(\Psi_{\sigma}^{\mathcal{A}}\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{A},\sigma}\) is obviously a morphism of \(n\)-fold vector bundles. Furthermore, by definition, \((\Psi_{\sigma}^{\mathcal{A}})_{I}^{I}\) is clearly \(\epsilon(\sigma^{-1},\sigma(I))\cdot\mathrm{id}_{A_{\#I}}=\epsilon(\sigma,I)\cdot\mathrm{id}_{A_{\#I}}\) for all \(I\subseteq\underline{n}\). Obviously, \(\Psi_{\mathrm{id}}^{\mathcal{A}}(I)=\mathrm{id}_{\mathbb{E}^{\mathcal{A}}(I)}\) for all \(I\subseteq\underline{n}\). Choose \(\emptyset\neq J\subseteq\underline{n}\) and take \(\sigma,\tau\in S_{n}\) and \(\left(p,(v_{I})_{\emptyset\neq I\subseteq J}\right)\in\prod_{\emptyset\neq I\subseteq J}^{M}A_{\#I}=\mathbb{E}^{\mathcal{A}}(J)\). Then (29) \[\begin{split}\left(\Psi_{\sigma}^{\mathcal{A}}\circ\Psi_{\tau}^{\mathcal{A}}\right)\left((v_{I})_{\emptyset\neq I\subseteq J}\right)&=\Psi_{\sigma}^{\mathcal{A}}\left(\left(\underbrace{\epsilon(\tau^{-1},I)v_{\tau^{-1}(I)}}_{=:w_{I}\in A_{\#I}}\right)_{\emptyset\neq I\subseteq\tau(J)}\right)\\ &=\left(\left(\epsilon(\sigma^{-1},I)w_{\sigma^{-1}(I)}\right)_{\emptyset\neq I\subseteq\sigma(\tau(J))}\right)\\ &=\left(\left(\epsilon(\sigma^{-1},I)\epsilon(\tau^{-1},\sigma^{-1}(I))v_{\tau^{-1}(\sigma^{-1}(I))}\right)_{\emptyset\neq I\subseteq\sigma(\tau(J))}\right)\\ &\stackrel{{\eqref{eq:S_n}}}{{=}}\left(\left(\epsilon((\sigma\tau)^{-1},I)v_{(\sigma\tau)^{-1}(I))}\right)_{\emptyset\neq I\subseteq\sigma\tau(J)}\right)\\ &=\Psi_{\sigma\tau}^{\mathcal{A}}\left((v_{I})_{\emptyset\neq I\subseteq J}\right).\end{split}\] This shows \((\Psi_{\sigma}^{\mathcal{A}})^{\tau}\circ\Psi_{\tau}^{\mathcal{A}}=\Psi_{\sigma\tau}^{\mathcal{A}}\). 
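The computation (29) can be checked numerically in a toy model. The Python sketch below is illustrative only: it takes \(\epsilon(\sigma,I)\) to be the sign of the bijection \(\sigma|_{I}\colon I\to\sigma(I)\) read against the increasing orderings of both sets (which is how the symbol is understood here; its actual definition lies outside this excerpt), models each \(A_{\#J}\)-coordinate by a scalar, and verifies \((\Psi^{\mathcal{A}}_{\sigma})^{\tau}\circ\Psi^{\mathcal{A}}_{\tau}=\Psi^{\mathcal{A}}_{\sigma\tau}\):

```python
from itertools import combinations

def eps(sigma, I):
    """Sign of sigma|_I : I -> sigma(I), both sets in increasing order."""
    src = sorted(I)
    tgt = sorted(sigma[i] for i in I)
    perm = [tgt.index(sigma[i]) for i in src]
    sign = 1
    for a in range(len(perm)):          # count inversions
        for b in range(a + 1, len(perm)):
            if perm[a] > perm[b]:
                sign = -sign
    return sign

def act(sigma, a):
    """(a_J)_J  ->  (eps(sigma^{-1}, J) * a_{sigma^{-1}(J)})_J, as in the text."""
    inv = {v: k for k, v in sigma.items()}
    out = {}
    for J, v in a.items():
        K = frozenset(sigma[j] for j in J)   # K = sigma(J)
        out[K] = eps(inv, K) * v             # v = a_{sigma^{-1}(K)}
    return out

n = 3
Js = [frozenset(c) for r in range(1, n + 1)
      for c in combinations(range(1, n + 1), r)]
a = {J: i + 1 for i, J in enumerate(Js)}     # arbitrary integer coordinates
sigma = {1: 2, 2: 3, 3: 1}
tau = {1: 2, 2: 1, 3: 3}
sigma_tau = {i: sigma[tau[i]] for i in tau}
assert act(sigma, act(tau, a)) == act(sigma_tau, a)
```

The assertion succeeds because the sign convention above is multiplicative, \(\epsilon(\alpha\beta,I)=\epsilon(\alpha,\beta(I))\cdot\epsilon(\beta,I)\), which is exactly the step used in (29).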
**Definition 5.4**.: _Let \(n\in\mathbb{N}\) and let \(A_{1},\ldots,A_{n}\) be vector bundles over a smooth manifold \(M\)._ 1. _The symmetric_ \(n\)_-fold vector bundle_ \(\mathbb{E}^{\mathcal{A}}\colon\square^{n}\to\mathbf{Man}^{\infty}\) _as defined above is the_ _decomposed symmetric_ \(n\)_-fold vector bundle_ _defined by_ \(\mathcal{A}:=\{A_{1},\ldots,A_{n}\}\)_._ 2. _If_ \(A_{2},\ldots,A_{n}\) _are all trivial vector bundles over_ \(M\)_, then the symmetric decomposed_ \(n\)_-fold vector bundle_ \(\mathbb{E}^{\mathcal{A}}=:\mathbb{E}^{\{A_{1}\}}\colon\square^{n}\to\mathbf{Man}^{\infty}\) _defined by_ \(\mathcal{A}\) _is_ vacant_. The functor_ \(\mathbb{E}^{\{A_{1}\}}\) _is here precisely given by_ \(\mathbb{E}^{\{A_{1}\}}(I)=\prod_{i\in I}^{M}A_{1}=A_{1}^{\#I}\) _for all_ \(I\subseteq\underline{n}\)_. It is the_ _vacant symmetric_ \(n\)_-fold vector bundle_ _defined by the vector bundle_ \(A_{1}\)_._ There exists a canonical monomorphism \(\iota\colon\mathbb{E}^{\{A_{1}\}}\to\mathbb{E}^{\mathcal{A}}\) of symmetric \(n\)-fold vector bundles, given by the canonical embeddings \[\iota(I)\colon\prod_{i\in I}^{M}A_{1}\hookrightarrow\prod_{\emptyset\neq J\subseteq I}^{M}A_{\#J},\quad(a_{i})_{i\in I}\mapsto(a_{J})_{\emptyset\neq J\subseteq I};\quad a_{J}:=\left\{\begin{array}{ll}a_{i}&J=\{i\}\\ 0_{m}^{A_{l}}&\#J=l\geq 2\end{array}\right.\] where \(m\in M\) is the foot point of the tuple \((a_{i})_{i\in I}\). Assume that \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) is a symmetric \(n\)-fold vector bundle and that for \(i\in\{1,\ldots,n\}\), the vector bundle \(A_{i}\) is \(\mathbb{E}_{I}^{I}\) for any \(I\subseteq\underline{n}\) with \(\#I=i\). Then the vacant symmetric decomposed \(n\)-fold vector bundle defined by \(A_{1}\) is written \(\overline{\mathbb{E}}\colon\square^{n}\to\mathbf{Man}^{\infty}\), and the symmetric decomposed \(n\)-fold vector bundle defined by \(\{A_{1},\ldots,A_{n}\}\) is written \(\mathbb{E}^{\mathrm{dec}}\colon\square^{n}\to\mathbf{Man}^{\infty}\). 
If \(\Psi\) is the \(S_{n}\)-action on \(\mathbb{E}\), then the \(S_{n}\)-action on \(\overline{\mathbb{E}}\) is written \(\overline{\Psi}\), and the \(S_{n}\)-action \(\Psi^{\mathcal{A}}\) on \(\mathbb{E}^{\mathrm{dec}}\) is written \(\Psi^{\mathrm{dec}}\). ### Morphisms of decomposed symmetric \(n\)-fold vector bundles Let \(\mathbb{E}^{\mathcal{A}}\colon\square^{n}\to\mathbf{Man}^{\infty}\) and \(\mathbb{E}^{\mathcal{B}}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be two decomposed symmetric \(n\)-fold vector bundles, defined by \(\mathcal{A}=(A_{i}\to M)_{1\leq i\leq n}\) and \(\mathcal{B}=(B_{i}\to N)_{1\leq i\leq n}\), respectively, and with \(S_{n}\)-actions \(\Psi^{\mathcal{A}}\) and \(\Psi^{\mathcal{B}}\). Recall that a morphism of (decomposed) symmetric \(n\)-fold vector bundles from \(\mathbb{E}^{\mathcal{A}}\) to \(\mathbb{E}^{\mathcal{B}}\) is a natural transformation \(\tau\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{B}}\) such that for all objects \(I\) of \(\square^{n}\) and for all \(i\in I\), the corresponding commutative square is a morphism of vector bundles, and such that for all \(\sigma\in S_{n}\), \[\Psi^{\mathcal{B}}_{\sigma}(\underline{n})\circ\tau(\underline{n})=\tau(\underline{n})\circ\Psi^{\mathcal{A}}_{\sigma}(\underline{n}).\] In the following theorem, vector bundle morphisms \(\tau_{(i_{1},\ldots,i_{k})}\colon A_{i_{1}}\otimes\ldots\otimes A_{i_{k}}\to B_{i}\) over \(\tau(\emptyset)\colon M\to N\) are considered, which are indexed by pairs \((i,(i_{1},\ldots,i_{k}))\), with \(i\in\{1,\ldots,n\}\) and \((i_{1},\ldots,i_{k})\) an ordered integer partition of \(i\). 
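The index set of such a family — the ordered integer partitions \((i_{1},\ldots,i_{k})\) of \(i\) — can be enumerated directly. A minimal Python sketch, assuming "ordered integer partition" means a composition (every ordering of the parts counts; if only canonically ordered tuples are meant, restrict the output to weakly increasing tuples):

```python
def compositions(i):
    """Tuples (i_1, ..., i_k) of positive integers with i_1 + ... + i_k = i."""
    if i == 0:
        return [()]
    return [(first,) + rest
            for first in range(1, i + 1)
            for rest in compositions(i - first)]

# There are 2^(i-1) compositions of i >= 1:
assert len(compositions(4)) == 8
```

For instance, the morphisms landing in \(B_{3}\) would be indexed by \((3)\), \((1,2)\), \((2,1)\) and \((1,1,1)\) under this reading.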
Rewrite \(A_{i_{1}}\otimes\ldots\otimes A_{i_{k}}\) as \[A_{1}^{\otimes\#\{s\in\{1,\ldots,k\}|i_{s}=1\}}\otimes\ldots\otimes A_{i}^{ \otimes\#\{s\in\{1,\ldots,k\}|i_{s}=i\}}.\] Then \[\tau_{(i_{1},\ldots,i_{k})}\colon A_{1}^{\otimes\#\{s\in\{1,\ldots,k\}|i_{s}=1 \}}\otimes\ldots\otimes A_{i}^{\otimes\#\{s\in\{1,\ldots,k\}|i_{s}=i\}}\to B _{i}\] is **symmetric** (respectively **skew-symmetric**) **in the \(j\)-entries**, for \(j\in\{1,\ldots,i\}\), if it is symmetric (respectively alternating) on its \(j\)-th factor \(A_{j}^{\otimes\#\{s\in\{1,\ldots,k\}|i_{s}=j\}}\). **Proposition 5.5**.: _A morphism \(\tau\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{B}}\) of decomposed symmetric \(n\)-fold vector bundles is equivalent to a family of vector bundle morphisms \(\tau_{(i_{1},\ldots,i_{k})}\colon A_{i_{1}}\otimes\ldots\otimes A_{i_{k}}\to B _{i}\) over \(\tau(\emptyset)\colon M\to N\), indexed by pairs \((i,(i_{1},\ldots,i_{k}))\), with \(i\in\{1,\ldots,n\}\) and \((i_{1},\ldots,i_{k})\) an ordered integer partition of \(i\), such that for \(j\in\{1,\ldots,i\}\):_ 1. \(\tau_{(i_{1},\ldots,i_{k})}\) _is symmetric in the_ \(j\)_-entries for_ \(j\) _even, and_ 2. 
\(\tau_{(i_{1},\ldots,i_{k})}\) _is skew-symmetric in the_ \(j\)_-entries for_ \(j\) _odd._ _Given such a family, the corresponding morphism of decomposed symmetric \(n\)-fold vector bundles \(\tau\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{B}}\) is given by \(\tau(J)\colon\mathbb{E}^{\mathcal{A}}(J)\to\mathbb{E}^{\mathcal{B}}(J)\),_ \[\tau(J)\left((a_{I})_{\emptyset\neq I\subseteq J}\right)=\left(\underbrace{\sum_{\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)}\operatorname{sgn}(\rho)\cdot\tau_{(\#I_{1},\ldots,\#I_{k})}(a_{I_{1}},\ldots,a_{I_{k}})}_{\in B_{\#I}}\right)_{\emptyset\neq I\subseteq J},\] _for all \(\emptyset\neq J\subseteq\underline{n}\), for \(i\in\{1,\ldots,n\}\) and \((i_{1},\ldots,i_{k})\) a (canonically) ordered integer partition of \(i\)._ _Conversely, given \(\tau\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{B}}\), the morphism \(\tau_{(i_{1},\ldots,i_{k})}\colon A_{i_{1}}\otimes\ldots\otimes A_{i_{k}}\to B_{i}\) is given as follows. Consider the canonical partition \(\rho_{\operatorname{can}}^{i_{1},\ldots,i_{k}}:=(K_{1},\ldots,K_{k})\) of \(\{1,\ldots,i\}\) defined by \((i_{1},\ldots,i_{k})\). Then for \(a_{i_{1}}\in A_{i_{1}},\ldots,a_{i_{k}}\in A_{i_{k}}\), the image_ \[\tau_{(i_{1},\ldots,i_{k})}(a_{i_{1}},\ldots,a_{i_{k}})\in B_{i}=B_{\#\underline{i}}\] _is the \(\underline{i}\)-entry of the image under \(\tau(\underline{n})\) of the tuple \((a_{J})_{\emptyset\neq J\subseteq\underline{n}}\) with_ \[a_{J}=\left\{\begin{array}{ll}0&\mbox{ for }J\neq K_{1},\ldots,K_{k}\\ a_{i_{l}}&\mbox{ for }J=K_{l},\ l\in\{1,\ldots,k\}.\end{array}\right.\] The proof of this statement relies on the following lemma. 
**Lemma 5.6**.: _In the situation of the previous proposition, consider a family of vector bundle morphisms \(\tau_{(i_{1},\dots,i_{k})}\colon A_{i_{1}}\otimes\dots\otimes A_{i_{k}}\to B_{i}\) over \(\tau(\emptyset)\colon M\to N\), indexed by pairs \((i,(i_{1},\dots,i_{k}))\), with \(i\in\{1,\dots,n\}\) and \((i_{1},\dots,i_{k})\) an ordered integer partition of \(i\), such that for \(j\in\{1,\dots,i\}\):_ 1. \(\tau_{(i_{1},\dots,i_{k})}\) _is symmetric in the_ \(j\)_-entries for_ \(j\) _even, and_ 2. \(\tau_{(i_{1},\dots,i_{k})}\) _is skew-symmetric in the_ \(j\)_-entries for_ \(j\) _odd._ _Take a subset \(\emptyset\neq I\subseteq\underline{n}\) and an ordered partition \(\rho=(I_{1},\dots,I_{k})\) of \(I\), as well as a permutation \(\sigma\in S_{n}\). Denote by \((J_{1}^{\rho,\sigma},\dots,J_{k}^{\rho,\sigma})\) the ordered partition \(\sigma(\rho)\). That is, the subsets \(J_{1}^{\rho,\sigma},\dots,J_{k}^{\rho,\sigma}\subseteq\sigma(I)\) are the subsets \(\sigma(I_{1}),\dots,\sigma(I_{k})\), but back in the lexicographic order. Then for \(a_{\sigma(I_{1})}\in A_{\#I_{1}},\dots,a_{\sigma(I_{k})}\in A_{\#I_{k}}\)_ \[\tau_{(\#I_{1},\dots,\#I_{k})}(a_{\sigma(I_{1})},\dots,a_{\sigma(I_{k})})=\frac{\operatorname{sgn}(\sigma(\rho))\cdot\epsilon(\sigma,I)}{\operatorname{sgn}(\rho)\cdot\prod_{l=1}^{k}\epsilon(\sigma,I_{l})}\cdot\tau_{(\#I_{1},\dots,\#I_{k})}\left(a_{J_{1}^{\rho,\sigma}},\dots,a_{J_{k}^{\rho,\sigma}}\right). \tag{30}\] Proof.: Choose a subset \(\emptyset\neq I\subseteq\underline{n}\) and an ordered partition \((I_{1},\dots,I_{k})\in\mathcal{P}(I)\). For simplicity, write in this proof \(\tau\) for \(\tau_{(\#I_{1},\dots,\#I_{k})}\). 1. If \(\sigma\in S_{n}\) satisfies \(\sigma(I_{j})=I_{j}\) for \(j=1,\dots,k\), then \(\sigma(\rho)=\rho\) and \(J_{j}^{\rho,\sigma}=\sigma(I_{j})=I_{j}\) for \(j=1,\dots,k\). Hence both sides of (30) are equal if and only if \[\prod_{j=1}^{k}\epsilon(\sigma,I_{j})=\epsilon(\sigma,I).\] But this equation holds true by (8). 2. 
Consider the case where \(\sigma\) is a permutation preserving \(\rho\), but not each of its elements. That is, here \(\sigma(I_{j})\in\rho\) for \(j=1,\dots,k\). Assume first the more special case where for some \(j<l\in\{1,\dots,k\}\), necessarily with \(\#I_{j}=\#I_{l}\): (31) \[\begin{array}{l}\sigma|_{I_{j}}\colon I_{j}\to I_{l}\text{ and }\sigma|_{I_{l}}\colon I_{l}\to I_{j}\text{ are order-preserving bijections and }\\ \sigma|_{I\setminus(I_{j}\cup I_{l})}=\operatorname{id}_{I\setminus(I_{j}\cup I_{l})}.\end{array}\] In this case \(\sigma|_{I}\) is a product of \(\#I_{j}\) transpositions, and so \(\epsilon(\sigma,I)=(-1)^{\#I_{j}}\). On the other hand, \(\epsilon(\sigma,I_{j})=1=\epsilon(\sigma,I_{l})\) because \(\sigma\) is order-preserving on both sets, and \(\epsilon(\sigma,I_{s})=1\) for all \(s\in\{1,\dots,k\}\setminus\{j,l\}\) since \(\sigma\) is the identity on these sets. Therefore (32) \[\prod_{s=1}^{k}\epsilon(\sigma,I_{s})=1=(-1)^{\#I_{j}}\epsilon(\sigma,I)\] and since \(\rho=\sigma(\rho)\), (30) is here \[\tau\left(a_{I_{1}},\dots,a_{I_{l}},\dots,a_{I_{j}},\dots,a_{I_{k}}\right)=(-1)^{\#I_{j}}\tau\left(a_{I_{1}},\dots,a_{I_{j}},\dots,a_{I_{l}},\dots,a_{I_{k}}\right),\] which holds since \(\tau\) is symmetric in the \(j\)-entries for \(j\) even, and skew-symmetric in the \(j\)-entries for \(j\) odd. If \(\sigma\) is now a general permutation preserving \(\rho\), then its restriction to \(I\) is a composition \[\sigma|_{I}=(\sigma_{r}\circ\dots\circ\sigma_{1}\circ\lambda)|_{I}\] with \(\sigma_{i}|_{I}\) exchanging two of the elements of \(\rho\) as in (31) for \(i=1,\dots,r\) and \(\lambda|_{I}\) a permutation like in the first case and in the proof of Lemma 2.4, i.e. preserving each element of \(\rho\). 
If \(\sigma_{i}\) exchanges two elements of cardinality \(c_{i}\) for \(i=1,\ldots,r\), then \[\prod_{l=1}^{k}\epsilon(\sigma,I_{l})=\prod_{l=1}^{k}\left(\left(\prod_{j=1}^{r}\epsilon(\sigma_{j},\sigma_{j-1}\circ\ldots\circ\sigma_{1}(\lambda(I_{l})))\right)\cdot\epsilon(\lambda,I_{l})\right)\] \[=\left(\prod_{j=1}^{r}\prod_{l=1}^{k}\epsilon(\sigma_{j},\sigma_{j-1}\circ\ldots\circ\sigma_{1}(\lambda(I_{l})))\right)\cdot\prod_{l=1}^{k}\epsilon(\lambda,I_{l})\] \[=\left(\prod_{j=1}^{r}(-1)^{c_{j}}\epsilon(\sigma_{j},\sigma_{j-1}\circ\ldots\circ\sigma_{1}(\lambda(I)))\right)\cdot\epsilon(\lambda,I)\] \[=(-1)^{c_{1}+\ldots+c_{r}}\cdot\epsilon(\sigma,I),\] which shows that (30) follows in this case from (30) for the simpler case above and the symmetry/skew-symmetry of \(\tau\) on its different components. 3. Now take a general permutation \(\sigma\in S_{n}\), but assume first that \(\sigma|_{I_{j}}\colon I_{j}\to\sigma(I_{j})\) is order-preserving for \(j=1,\ldots,k\). The ordered partition \(\sigma(\rho)\) consists of the subsets \(\sigma(I_{1}),\ldots,\sigma(I_{k})\), but reordered. Hence there is a permutation \(\nu\in S_{n}\) that does \[\nu(\sigma(I_{l}))=J_{l}^{\rho,\sigma}\text{ for }l=1,\ldots,k,\] and the restrictions of which to \(\sigma(I_{1}),\ldots,\sigma(I_{k})\) are all order-preserving. As before, consider the canonical partition \(\rho_{\operatorname{can}}^{\#I_{1},\ldots,\#I_{k}}:=(K_{1},\ldots,K_{k})\) defined by \((\#I_{1},\ldots,\#I_{k})\) on \(\{1,\ldots,i\}\) for \(\#I=:i\). Choose a permutation \(\lambda\in S_{n}\) with \(\lambda(K_{l})=I_{l}\) for \(l=1,\ldots,k\) and such that \(\lambda|_{K_{l}}\colon K_{l}\to I_{l}\) is order-preserving for \(l=1,\ldots,k\). 
Then \(\nu\sigma\lambda\in S_{n}\) does \[(\nu\sigma\lambda)(K_{l})=\nu(\sigma(I_{l}))=J_{l}^{\rho,\sigma}\quad\text{ for }l=1,\ldots,k,\] and \[(\nu\sigma\lambda)|_{K_{l}}\colon K_{l}\to J_{l}^{\rho,\sigma}\quad\text{ is order preserving for }l=1,\ldots,k.\] Therefore \[\operatorname{sgn}(\sigma(\rho))=\frac{\prod_{l=1}^{k}\epsilon(\nu\sigma\lambda,K_{l})}{\epsilon(\nu\sigma\lambda,\underline{i})}=\frac{1}{\epsilon(\nu,\sigma(I))\epsilon(\sigma,I)\epsilon(\lambda,\underline{i})}\] and so \[\frac{\operatorname{sgn}(\sigma(\rho))\cdot\epsilon(\sigma,I)}{\operatorname{sgn}(\rho)\cdot\prod_{l=1}^{k}\epsilon(\sigma,I_{l})}=\frac{\epsilon(\nu,\sigma(I))\epsilon(\lambda,\underline{i})}{\epsilon(\lambda,\underline{i})}=\epsilon(\nu,\sigma(I))=\frac{\prod_{l=1}^{k}\epsilon(\nu,\sigma(I_{l}))}{\epsilon(\nu,\sigma(I))}.\] As in the second step above, \(\nu\) is a composition of permutations \(\sigma_{r},\ldots,\sigma_{1}\) as in (31). Conclude using the second case above. Now a general \(\tilde{\sigma}\in S_{n}\) is the composition \(\tilde{\sigma}=\sigma\mu\) of a permutation \(\mu\) that fixes \(\rho\) with a permutation \(\sigma\) as above, i.e. that is order-preserving on each of the subsets \(I_{l}\), for \(l=1,\ldots,k\). Then \(\tilde{\sigma}(\rho)=\sigma(\rho)\), \(\tilde{\sigma}(I_{l})=\sigma(I_{l})\) and so also \(J_{l}^{\rho,\tilde{\sigma}}=J_{l}^{\rho,\sigma}\) for \(l=1,\ldots,k\). Hence (30) follows from the first part of this case, using (8) and (5). 
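The sign bookkeeping in this proof can be spot-checked numerically. The following sketch is not part of the paper: it assumes that \(\epsilon(\sigma,I)\) is the sign of the permutation that \(\sigma\) induces on the ordered set \(I\), and that \(\operatorname{sgn}(\rho)\) is the shuffle sign of the concatenated blocks of \(\rho\); the function names `eps` and `sgn_partition` are ours.

```python
from itertools import permutations

def sign_of(seq):
    """Sign of the permutation sorting `seq` (distinct numbers): parity of inversions."""
    inversions = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
                     if seq[a] > seq[b])
    return -1 if inversions % 2 else 1

def eps(sigma, I):
    """epsilon(sigma, I): sign of the permutation induced by sigma on the set I."""
    return sign_of([sigma[i] for i in sorted(I)])

def sgn_partition(rho):
    """sgn(rho) for an ordered partition rho = (I_1, ..., I_k): the shuffle sign
    of the concatenation of its blocks, each listed in increasing order."""
    return sign_of([x for block in rho for x in sorted(block)])

# Case 1: if sigma preserves each block of rho, then
# prod_j eps(sigma, I_j) = eps(sigma, I), as in identity (8).
I1, I2, I = (1, 3), (2, 4), (1, 2, 3, 4)
for images in permutations(I):
    sigma = dict(zip(I, images))
    if {sigma[i] for i in I1} == set(I1) and {sigma[i] for i in I2} == set(I2):
        assert eps(sigma, I1) * eps(sigma, I2) == eps(sigma, I)

# Equation (32): an order-preserving swap of two blocks of cardinality c
# contributes (-1)^c; since all signs are +/-1, dividing and multiplying agree.
swap_even = {1: 3, 2: 4, 3: 1, 4: 2}   # swaps {1,2} <-> {3,4}, c = 2
assert eps(swap_even, I) * eps(swap_even, (1, 2)) * eps(swap_even, (3, 4)) == (-1) ** 2

swap_odd = {1: 2, 2: 1}                # swaps {1} <-> {2}, c = 1
assert eps(swap_odd, (1, 2)) * eps(swap_odd, (1,)) * eps(swap_odd, (2,)) == (-1) ** 1
print("sign checks passed")
```

Here `sgn_partition` implements \(\operatorname{sgn}(\rho)\) as the sign of the shuffle sorting the concatenation of the blocks, which agrees with the defining quotient \(\prod_{l}\epsilon(\lambda,K_{l})/\epsilon(\lambda,\underline{i})\) for any \(\lambda\) mapping the canonical partition to \(\rho\) order-preservingly.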
Proof of Proposition 5.5.: Recall that a morphism \(\tau\) of decomposed \(n\)-fold vector bundles is given as follows [12] by its component \(\tau(\underline{n})\) on the total space of \(\mathbb{E}^{\mathcal{A}}\): \[\tau(\underline{n})((a_{I})_{\emptyset\neq I\subseteq\underline{n}})=\left(\sum_{\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)}\tau_{\rho}(a_{I_{1}},\ldots,a_{I_{k}})\right)_{\emptyset\neq I\subseteq\underline{n}}\] for all \((a_{I})_{\emptyset\neq I\subseteq\underline{n}}\in\mathbb{E}^{\mathcal{A}}(\underline{n})\), with vector bundle morphisms \[\tau_{\rho}\colon A_{\#I_{1}}\otimes\ldots\otimes A_{\#I_{k}}\to B_{\#I}\] over \(\tau(\emptyset)\colon M\to N\), for \(\emptyset\neq I\subseteq\underline{n}\) and \(\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)\). For \(\sigma\in S_{n}\) the morphism \(\tau(\underline{n})\circ\Psi^{\mathcal{A}}_{\sigma^{-1}}(\underline{n})\) sends \((a_{I})_{\emptyset\neq I\subseteq\underline{n}}\) to \[\left(\tau\circ\Psi^{\mathcal{A}}_{\sigma^{-1}}\right)\left((a_{I})_{\emptyset\neq I\subseteq\underline{n}}\right)=\tau\left(\left(\epsilon(\sigma,I)a_{\sigma(I)}\right)_{\emptyset\neq I\subseteq\underline{n}}\right)\] while the morphism \(\Psi^{\mathcal{B}}_{\sigma^{-1}}(\underline{n})\circ\tau(\underline{n})\) sends it to \[\left(\Psi^{\mathcal{B}}_{\sigma^{-1}}\circ\tau\right)\left((a_{I})_{\emptyset\neq I\subseteq\underline{n}}\right)=\Psi^{\mathcal{B}}_{\sigma^{-1}}\left(\left(\sum_{\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)}\tau_{\rho}(a_{I_{1}},\ldots,a_{I_{k}})\right)_{\emptyset\neq I\subseteq\underline{n}}\right)\] \[=\left(\epsilon(\sigma,I)\cdot\sum_{\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(\sigma(I))}\tau_{\rho}(a_{I_{1}},\ldots,a_{I_{k}})\right)_{\emptyset\neq I\subseteq\underline{n}}\] \[=\left(\sum_{\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)}\epsilon(\sigma,I)\tau_{\sigma(\rho)}\left(a_{J^{\rho,\sigma}_{1}},\ldots,a_{J^{\rho,\sigma}_{k}}\right)\right)_{\emptyset\neq I\subseteq\underline{n}},\] where for 
\(\rho\in\mathcal{P}(I)\) of length \(k\), the tuple \((J^{\rho,\sigma}_{1},\ldots,J^{\rho,\sigma}_{k})\) is the ordered partition \(\sigma(\rho)\) as in Lemma 5.6. If \(\tau_{\rho}=\operatorname{sgn}(\rho)\cdot\tau_{(\#I_{1},\ldots,\#I_{k})}\) for \(\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)\), with \(\tau_{(\#I_{1},\ldots,\#I_{k})}\colon A_{\#I_{1}}\otimes\ldots\otimes A_{\#I_{k}}\to B_{\#I}\) as in the statement of the proposition, then (30) gives immediately \(\tau(\underline{n})\circ\Psi^{\mathcal{A}}_{\sigma^{-1}}(\underline{n})=\Psi^{\mathcal{B}}_{\sigma^{-1}}(\underline{n})\circ\tau(\underline{n})\). Hence a family of vector bundle morphisms \(\tau_{(i_{1},\ldots,i_{k})}\) as in the statement defines a morphism of symmetric \(n\)-fold vector bundles. It remains to show that any morphism of symmetric decomposed \(n\)-fold vector bundles must be of this form. If \(\tau\) is a morphism of symmetric \(n\)-fold vector bundles, then \(\left(\tau(\underline{n})\circ\Psi^{\mathcal{A}}_{\sigma^{-1}}(\underline{n})\right)\left((a_{J})_{\emptyset\neq J\subseteq\underline{n}}\right)=\left(\Psi^{\mathcal{B}}_{\sigma^{-1}}(\underline{n})\circ\tau(\underline{n})\right)\left((a_{J})_{\emptyset\neq J\subseteq\underline{n}}\right)\) for all \((a_{J})_{\emptyset\neq J\subseteq\underline{n}}\in\mathbb{E}^{\mathcal{A}}(\underline{n})\) and all \(\sigma\in S_{n}\). Choose a subset \(I\subseteq\underline{n}\), an ordered partition \((I_{1},\ldots,I_{k})\in\mathcal{P}(I)\) and a permutation \(\sigma\in S_{n}\). 
Evaluating the two sides of this formula on tuples \((a_{J})_{\emptyset\neq J\subseteq\underline{n}}\) with \[a_{J}=0\text{ for }J\neq\sigma(I_{1}),\ldots,\sigma(I_{k})\] yields \[\prod_{l=1}^{k}\epsilon(\sigma,I_{l})\cdot\tau_{\rho}\left(a_{\sigma(I_{1})},\ldots,a_{\sigma(I_{k})}\right)=\epsilon(\sigma,I)\tau_{\sigma(\rho)}\left(a_{J^{\rho,\sigma}_{1}},\ldots,a_{J^{\rho,\sigma}_{k}}\right) \tag{33}\] for all \(a_{\sigma(I_{1})}\in A_{\#I_{1}},\ldots,a_{\sigma(I_{k})}\in A_{\#I_{k}}\) over the same point of \(M\). A study of (33) in two special cases yields the claim as follows. 1. Consider the case where \(\sigma\) is a permutation preserving \(\rho\), but not each of its elements. That is, here \(\sigma(I_{j})\in\rho\) for \(j=1,\ldots,k\). Assume first the more special case of (31). In this case (32) yields that (33) is \[\tau_{\rho}\left(a_{I_{1}},\ldots,a_{I_{j}},\ldots,a_{I_{l}},\ldots,a_{I_{k}}\right)=(-1)^{\#I_{j}}\tau_{\rho}\left(a_{I_{1}},\ldots,a_{I_{l}},\ldots,a_{I_{j}},\ldots,a_{I_{k}}\right),\] which shows that \[\tau_{\rho}\colon A_{1}^{\otimes\#\{s\in\{1,\ldots,k\}|\#I_{s}=1\}}\otimes\ldots\otimes A_{n}^{\otimes\#\{s\in\{1,\ldots,k\}|\#I_{s}=n\}}\to B_{\#I}\] must be symmetric on \(A_{t}^{\otimes\#\{s\in\{1,\ldots,k\}|\#I_{s}=t\}}\) for \(t\) even, and skew-symmetric for \(t\) odd. 2. Now choose a subset \(I\) of \(\underline{n}\) with cardinality \(i\) and choose a partition \(\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)\). Set \(i_{j}:=\#I_{j}\) for \(j=1,\ldots,k\), and consider the canonical partition \(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{k}}=(K_{1},\ldots,K_{k})\) of \(\underline{i}\). Choose \(\sigma\in S_{n}\) with \[\sigma(K_{j})=I_{j}\quad\text{ for }\quad j=1,\ldots,k,\] i.e. with \(\rho=\sigma\left(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{k}}\right)\) and preserving the order of the partitions. 
Set \[\tau_{(i_{1},\ldots,i_{k})}:=\tau_{\rho_{\mathrm{can}}^{i_{1},\ldots,i_{k}}}\colon A_{i_{1}}\otimes\ldots\otimes A_{i_{k}}\to B_{i}.\] Then (33) applied to \(\sigma\) and \(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{k}}\) reads \[\tau_{\rho}\left(a_{I_{1}},\ldots,a_{I_{k}}\right)=\frac{\prod_{j=1}^{k}\epsilon(\sigma,K_{j})}{\epsilon(\sigma,\underline{i})}\cdot\tau_{(i_{1},\ldots,i_{k})}\left(a_{I_{1}},\ldots,a_{I_{k}}\right)=\mathrm{sgn}(\rho)\cdot\tau_{(i_{1},\ldots,i_{k})}\left(a_{I_{1}},\ldots,a_{I_{k}}\right)\] for \(a_{I_{j}}\in A_{i_{j}}\), \(j=1,\ldots,k\). Next the composition of morphisms needs to be understood. Choose three decomposed symmetric \(n\)-fold vector bundles \(\mathbb{E}^{\mathcal{A}}\), \(\mathbb{E}^{\mathcal{B}}\) and \(\mathbb{E}^{\mathcal{C}}\) defined by three families \(\mathcal{A}=(A_{i}\to M\mid i=1,\ldots,n)\), \(\mathcal{B}=(B_{i}\to N\mid i=1,\ldots,n)\) and \(\mathcal{C}=(C_{i}\to P\mid i=1,\ldots,n)\) of vector bundles over the smooth manifolds \(M\), \(N\) and \(P\), respectively. Consider two morphisms \[\eta=(\eta_{p})_{p\in\mathcal{P}(n)}\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{B}}\quad\text{ over }\quad\eta_{0}\colon M\to N\] and \[\tau=(\tau_{p})_{p\in\mathcal{P}(n)}\colon\mathbb{E}^{\mathcal{B}}\to\mathbb{E}^{\mathcal{C}}\quad\text{ over }\quad\tau_{0}\colon N\to P.\] The composition of \(\tau\) with \(\eta\) is given as follows. 
As stated in Proposition 5.5, for \(p=(i_{1},\ldots,i_{k})\in\mathcal{P}(n)\) with \(\sum p=i\), and \(a_{l}\in A_{i_{l}}\) for \(l=1,\ldots,k\), \[(\tau\circ\eta)_{(i_{1},\ldots,i_{k})}(a_{1},\ldots,a_{k})\in C_{i}=C_{\#\underline{i}}\] is the \(\underline{i}\)-entry of the image under \((\tau\circ\eta)(\underline{n})\) of the tuple \((a_{J})_{\emptyset\neq J\subseteq\underline{n}}\) with \[a_{J}=\left\{\begin{array}{ll}0&\text{ for }J\neq K_{1},\ldots,K_{k}\\ a_{l}&\text{ for }J=K_{l},\ l\in\{1,\ldots,k\},\end{array}\right.\] where \(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{k}}:=(K_{1},\ldots,K_{k})\) is the canonical partition of \(\underline{i}\) defined by \((i_{1},\ldots,i_{k})\). Write \(\rho_{\mathrm{can}}^{i_{1},\ldots,i_{k}}=\rho\) for simplicity and compute \[(\tau\circ\eta)_{(i_{1},\ldots,i_{k})}(a_{1},\ldots,a_{k})=\bigl{(}(\tau\circ\eta)(\underline{n})\,\bigl{(}(a_{J})_{\emptyset\neq J\subseteq\underline{n}}\bigr{)}\bigr{)}_{\underline{i}}\] \[=\sum_{\begin{subarray}{c}\rho_{1},\ldots,\rho_{l}\text{ ordered}\\ \text{such that }\rho=\rho_{1}\cup\ldots\cup\rho_{l}\text{ as sets}\\ \cup\rho_{1}<\ldots<\cup\rho_{l}\end{subarray}}\tau_{(\cup\rho_{1},\ldots,\cup\rho_{l})}\Bigl{(}\eta_{\rho_{1}}\bigl{(}(a_{I})_{I\in\rho_{1}}\bigr{)},\ldots,\eta_{\rho_{l}}\bigl{(}(a_{I})_{I\in\rho_{l}}\bigr{)}\Bigr{)}.\] For \(\rho_{1},\ldots,\rho_{l}\) ordered partitions with \(\rho=\rho_{1}\cup\ldots\cup\rho_{l}\) as sets and \(\cup\rho_{1}<\ldots<\cup\rho_{l}\) the term \[\tau_{(\cup\rho_{1},\ldots,\cup\rho_{l})}\Big{(}\eta_{\rho_{1}}\big{(}(a_{I})_{I\in\rho_{1}}\big{)},\ldots,\eta_{\rho_{l}}\big{(}(a_{I})_{I\in\rho_{l}}\big{)}\Big{)}\] equals \[\operatorname{sgn}(\cup\rho_{1},\ldots,\cup\rho_{l})\cdot\prod_{j=1}^{l}\operatorname{sgn}(\rho_{j})\cdot\tau_{(\sum|\rho_{1}|,\ldots,\sum|\rho_{l}|)}\Big{(}\eta_{|\rho_{1}|}\big{(}(a_{I})_{I\in\rho_{1}}\big{)},\ldots,\eta_{|\rho
_{l}|}\big{(}(a_{I})_{I\in\rho_{l}}\big{)}\Big{)}.\] This leads to the following result. **Proposition 5.7**.: _Consider three decomposed symmetric \(n\)-fold vector bundles \(\mathbb{E}^{\mathcal{A}}\), \(\mathbb{E}^{\mathcal{B}}\) and \(\mathbb{E}^{\mathcal{C}}\) over smooth manifolds \(M\), \(N\) and \(P\), respectively, and two morphisms_ \[\eta=(\eta_{p})_{p\in\mathcal{P}(n)}\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{ E}^{\mathcal{B}}\quad\text{ over }\quad\eta_{0}\colon M\to N\] _and_ \[\tau=(\tau_{p})_{p\in\mathcal{P}(n)}\colon\mathbb{E}^{\mathcal{B}}\to\mathbb{ E}^{\mathcal{C}}\quad\text{ over }\quad\tau_{0}\colon N\to P\] _of symmetric \(n\)-fold vector bundles. The morphism_ \[\tau\circ\eta\colon\mathbb{E}^{\mathcal{A}}\to\mathbb{E}^{\mathcal{C}}\quad \text{ over }\quad\tau_{0}\circ\eta_{0}\colon M\to P\] _of symmetric \(n\)-fold vector bundles is defined by_ \[(\tau\circ\eta)_{p}=\sum_{\begin{subarray}{c}\rho_{1},\ldots,\rho_{l}\text{ ordered}\\ \text{such that }\rho_{1}\cup\ldots\cup\rho_{l}=\rho_{\text{can}}^{p}\\ \text{as sets}\\ \cup\rho_{1}<\ldots<\cup\rho_{l}\end{subarray}}\operatorname{sgn}(\rho_{1}, \ldots,\rho_{l})\cdot\tau_{(\sum|\rho_{1}|,\ldots,\sum|\rho_{l}|)}\circ\big{(} \eta_{|\rho_{1}|},\ldots,\eta_{|\rho_{l}|}\big{)}\] _for all \(p\in\mathcal{P}(n)\)._ Proof.: It remains to show that for \((\rho_{1},\ldots,\rho_{l})\) as above with \(\cup\rho_{1}<\ldots<\cup\rho_{l}\), \[\operatorname{sgn}(\rho_{1},\ldots,\rho_{l})=\operatorname{sgn}(\cup\rho_{1}, \ldots,\cup\rho_{l})\cdot\prod_{j=1}^{l}\operatorname{sgn}(\rho_{j}).\] Set \((Q_{1},\ldots,Q_{l}):=\rho_{\text{can}}^{(\#\cup\rho_{1},\ldots,\#\cup\rho_{l})}\). Then for each \(j=1,\ldots,l\) the ordered partition \(p_{j}:=|\rho_{j}|\) defines canonically a partition \((Q_{j}^{1},\ldots,Q_{j}^{d_{j}})\) of \(Q_{j}\). Write \(\rho_{j}=(I_{j}^{1},\ldots,I_{j}^{d_{j}})\). 
Then \[(\rho_{1},\ldots,\rho_{l})=\left(I_{1}^{1},\ldots,I_{1}^{d_{1}},\ldots,I_{l}^{1},\ldots,I_{l}^{d_{l}}\right).\] In the same manner, \[\rho_{\text{can}}^{(p_{1},\ldots,p_{l})}=\left(Q_{1}^{1},\ldots,Q_{1}^{d_{1}},\ldots,Q_{l}^{1},\ldots,Q_{l}^{d_{l}}\right).\] Consider a permutation \(\sigma\in S_{i}\) with \(\sigma(Q_{j}^{r})=I_{j}^{r}\) for \(j=1,\ldots,l\) and \(r=1,\ldots,d_{j}\). Then also \(\sigma(Q_{j})=\cup\rho_{j}\) for \(j=1,\ldots,l\), and \[\operatorname{sgn}(\cup\rho_{1},\ldots,\cup\rho_{l})\cdot\prod_{j=1}^{l}\operatorname{sgn}(\rho_{j})=\frac{\prod_{j=1}^{l}\epsilon(\sigma,Q_{j})}{\epsilon(\sigma,\underline{i})}\cdot\prod_{j=1}^{l}\frac{\prod_{r=1}^{d_{j}}\epsilon(\sigma,Q_{j}^{r})}{\epsilon(\sigma,Q_{j})}=\frac{\prod_{j=1}^{l}\prod_{r=1}^{d_{j}}\epsilon(\sigma,Q_{j}^{r})}{\epsilon(\sigma,\underline{i})}=\operatorname{sgn}(\rho_{1},\ldots,\rho_{l}).\] This completes the proof. **Definition 5.8**.: _The category of decomposed symmetric \(n\)-fold vector bundles defined in this section is written \(\mathsf{dSnVB}\)._ ### Symmetric linear splittings and decompositions Let \(\mathbb{E}\) be a symmetric \(n\)-fold vector bundle. Consider the decomposed symmetric \(n\)-fold vector bundle \(\mathbb{E}^{\rm dec}\) and the vacant decomposed symmetric \(n\)-fold vector bundle \(\overline{\mathbb{E}}\) defined by \(\mathbb{E}\) as in Definition 5.4. 
**Definition 5.9**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be a symmetric \(n\)-fold vector bundle._ _A **symmetric linear splitting** of \(\mathbb{E}\) is a monomorphism \(\Sigma\colon\overline{\mathbb{E}}\to\mathbb{E}\) of symmetric \(n\)-fold vector bundles, such that for \(i=1,\ldots,n\), \(\Sigma(\{i\})\colon\mathbb{E}(\{i\})\to\mathbb{E}(\{i\})\) is the identity._ _A **symmetric decomposition** of \(\mathbb{E}\) is a natural isomorphism \(\mathcal{S}\colon\mathbb{E}^{\rm dec}\to\mathbb{E}\) of symmetric \(n\)-fold vector bundles over the identity maps \(\mathcal{S}(\{i\})=\operatorname{id}_{\mathbb{E}(\{i\})}\colon\mathbb{E}(\{i\})\to\mathbb{E}(\{i\})\) such that additionally the induced core morphisms \(\mathcal{S}^{I}_{I}(I)\) are the identities \(\operatorname{id}_{\mathbb{E}^{I}_{I}}\) for all \(I\subseteq\underline{n}\)._ The following theorem implies that symmetric \(n\)-fold vector bundles always admit a symmetric \(n\)-fold vector bundle atlas. Its proof builds on the results in [10] and Section 4.4 and can be found in Appendix B. **Theorem 5.10**.: _A symmetric \(n\)-fold vector bundle always admits a symmetric decomposition._ ### Symmetric \(n\)-fold vector bundle atlases This section finally discusses the local approach to symmetric \(n\)-fold vector bundles, i.e. in the language of \(n\)-fold vector bundle atlases and cocycles. **Definition 5.11**.: _An \(n\)-fold vector bundle atlas \(\{c_{\alpha}=(U_{\alpha},\Theta_{\alpha},(V_{I})_{\emptyset\neq I\subseteq\underline{n}})\mid\alpha\in\Lambda\}\) as in Definition 4.12 on a smooth surjective submersion \(\pi\colon E\to M\) is called **symmetric** if_ 1. _the model vector spaces satisfy_ \(V_{I}=V_{J}=:V^{\#I}\) _whenever_ \(\#I=\#J\)_, and_ 2. 
_for all_ \(\alpha,\beta\in\Lambda\)_,_ \(\emptyset\neq I,J\subseteq\underline{n}\) _with_ \(\#I=\#J\) _and_ \(\rho_{1}\in\mathcal{P}(I)\)_,_ \(\rho_{2}\in\mathcal{P}(J)\) _with same size_ \(l_{\rho_{1}}=l_{\rho_{2}}=(l_{1},\ldots,l_{k})\)_:_ \[\operatorname{sgn}(\rho_{1})\cdot\omega_{\alpha\beta}^{\rho_{1}}=\operatorname{sgn}(\rho_{2})\cdot\omega_{\alpha\beta}^{\rho_{2}}.\] _Set then \(\omega_{\alpha\beta}^{l_{1},\ldots,l_{k}}:=\operatorname{sgn}(\rho)\cdot\omega_{\alpha\beta}^{\rho}\in C^{\infty}(U_{\alpha}\cap U_{\beta},\operatorname{Hom}(V^{l_{1}}\otimes\ldots\otimes V^{l_{k}},V^{l_{1}+\ldots+l_{k}}))\) for any \(\rho\) with \(l_{\rho}=(l_{1},\ldots,l_{k})\), for each ordered integer partition \((l_{1},\ldots,l_{k})\in\mathcal{P}(n)\) and all \(\alpha,\beta\in\Lambda\)._ 1. _The forms_ \(\omega_{\alpha\beta}^{l_{1},\ldots,l_{k}}\) _are elements of_ \(C^{\infty}(U_{\alpha}\cap U_{\beta},\operatorname{Hom}(V^{l_{1}}\odot\ldots\odot V^{l_{k}},V^{l_{1}+\ldots+l_{k}}))\)_, where_ \(V^{l_{i}}\odot V^{l_{i+1}}\) _means_ * \(\otimes\) _if_ \(l_{i}<l_{i+1}\)_,_ * \(\wedge\) _if_ \(l_{i}=l_{i+1}\) _is an odd number,_ * _the symmetric product_ \(\cdot\) _if_ \(l_{i}=l_{i+1}\) _is an even number._ Hence here for \(\alpha,\beta\in\Lambda\), the change of chart \(\Theta_{\alpha}\circ\Theta_{\beta}^{-1}\colon(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\to(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\) reads \[\left(p,(v_{I})_{\emptyset\neq I\subseteq\underline{n}}\right)\mapsto\left(p,\left(\sum_{\rho=(I_{1},\ldots,I_{k})\in\mathcal{P}(I)}\operatorname{sgn}(\rho)\cdot\omega_{\alpha\beta}^{\#I_{1},\ldots,\#I_{k}}(p)(v_{I_{1}},\ldots,v_{I_{k}})\right)_{\emptyset\neq I\subseteq\underline{n}}\right). 
\tag{34}\] By construction, the space \(E\) as in the definition above is locally diffeomorphic to spaces of the form \[U\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}.\] These are the total spaces of decomposed symmetric \(n\)-fold vector bundles defined as in Section 5.4 by the family of vector bundles \((U\times V^{i}\to U)_{1\leq i\leq n}\). Further, by (34) and Proposition 5.5 the changes of charts \(\Theta_{\alpha}\circ\Theta_{\beta}^{-1}\) are morphisms of symmetric decomposed \(n\)-fold vector bundles \[(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{ \#I}\to(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq \underline{n}}V^{\#I}.\] Hence there is an "indirect" (i.e. _signed_, see [10] in the case \(n=2\)) right \(S_{n}\)-action on any manifold endowed with a symmetric \(n\)-fold vector bundle atlas as above, and the following proposition holds. **Proposition 5.12**.: _A smooth manifold \(E\) endowed with a symmetric \(n\)-fold vector bundle atlas as in Definition 5.11 has a left \(S_{n}\)-action, given in a chart \(\pi^{-1}(U_{\alpha})\) by_ \[\Phi_{\sigma}^{\alpha}\colon U_{\alpha}\times\prod_{\emptyset\neq I\subseteq \underline{n}}V^{\#I}\to U_{\alpha}\times\prod_{\emptyset\neq I\subseteq \underline{n}}V^{\#I},\quad\Phi_{\sigma}^{\alpha}\left(p,(v_{I})_{\emptyset \neq I\subseteq\underline{n}}\right)=\left(p,\left(\epsilon\left(\sigma^{-1}, I\right)v_{\sigma^{-1}(I)}\right)_{\emptyset\neq I\subseteq\underline{n}}\right).\] Proof.: For each \(\sigma\in S_{n}\) and all \(\alpha,\beta\in\Lambda\) the equality \(\Phi_{\sigma}^{\alpha}\circ\Theta_{\alpha}\circ\Theta_{\beta}^{-1}=\Theta_{ \alpha}\circ\Theta_{\beta}^{-1}\circ\Phi_{\sigma}^{\beta}\) holds by (34) and Proposition 5.5. Hence the collection of maps \(\{\Phi_{\sigma}^{\alpha}\}_{\alpha\in\Lambda}\) defines a global map \(\Phi_{\sigma}\colon E\to E\). It remains to check that the obtained map \(\Phi\colon S_{n}\times E\to E\) is a left action. 
It is enough to verify that \[\Phi^{\alpha}\colon S_{n}\times U_{\alpha}\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\to U_{\alpha}\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\] is a left action for each \(\alpha\in\Lambda\). But this is the computation in (29). **Theorem 5.13**.: _An \(n\)-fold vector bundle \(\mathbb{E}\) admits a symmetric structure \(\Phi\) if and only if \(\mathbb{E}(\underline{n})\) can be endowed with a symmetric atlas, such that the induced \(S_{n}\)-action as in Proposition 5.12 is \(\Phi(\underline{n})\)._ Proof.: This proof works as the proof of Corollary 4.13, but with the additional \(S_{n}\)-symmetry taken into account. First let \(\mathbb{E}\) be a symmetric \(n\)-fold vector bundle. By Theorem 5.10, there is a decomposition \(\mathcal{S}\colon\mathbb{E}^{\mathrm{dec}}\to\mathbb{E}\) of \(\mathbb{E}\), with \(\mathbb{E}^{\mathrm{dec}}\) the _symmetric_ decomposed \(n\)-fold vector bundle defined by the family \(\left(q_{l}\colon A_{l}:=\mathbb{E}_{\underline{l}}^{l}\to M\right)_{1\leq l\leq n}\) of vector bundles over \(M\). Set \(E:=\mathbb{E}(\underline{n})\), as usual \(M:=\mathbb{E}(\emptyset)\), and \(\pi:=\mathbb{E}(\underline{n}\to\emptyset)\colon E\to M\). For each \(l\in\{1,\ldots,n\}\), set \(V^{l}:=\mathbb{R}^{\operatorname{rk}A_{l}}\), the vector space on which \(A_{l}=\mathbb{E}_{\underline{l}}^{l}\) is modelled. Take a covering \(\{U_{\alpha}\}_{\alpha\in\Lambda}\) of \(M\) by open sets trivialising all the vector bundles \(A_{1},\ldots,A_{n}\): \[\phi_{l}^{\alpha}\colon q_{l}^{-1}(U_{\alpha})\stackrel{{\sim}}{{\longrightarrow}}U_{\alpha}\times V^{l}\] for all \(1\leq l\leq n\) and all \(\alpha\in\Lambda\). 
Then define \(n\)-fold vector bundle charts \(\Theta_{\alpha}\colon\pi^{-1}(U_{\alpha})\to U_{\alpha}\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\) by \[\Theta_{\alpha}=\left(\operatorname{pr}_{M}\times\left(\phi_{\#I}^{\alpha}\right)_{\emptyset\neq I\subseteq\underline{n}}\right)\circ\mathcal{S}(\underline{n})^{-1}|_{\pi^{-1}(U_{\alpha})}\colon\pi^{-1}(U_{\alpha})\to U_{\alpha}\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}.\] Given \(\alpha,\beta\in\Lambda\) with \(U_{\alpha}\cap U_{\beta}\neq\emptyset\), the change of chart \[\Theta_{\alpha}\circ\Theta_{\beta}^{-1}\colon(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\to(U_{\alpha}\cap U_{\beta})\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\] is given by \[\left(p,(v_{I})_{\emptyset\neq I\subseteq\underline{n}}\right)\mapsto\left(p,\left(\omega_{\alpha\beta}^{\#I}(p)v_{I}\right)_{\emptyset\neq I\subseteq\underline{n}}\right), \tag{35}\] with \(\omega_{\alpha\beta}^{\#I}\in C^{\infty}(U_{\alpha}\cap U_{\beta},\operatorname{Gl}(V^{\#I}))\) the cocycle defined by \(\phi_{\#I}^{\alpha}\circ(\phi_{\#I}^{\beta})^{-1}\). The two charts are hence smoothly compatible and \(\mathfrak{A}=\{(U_{\alpha},\Theta_{\alpha},(V^{\#I})_{\emptyset\neq I\subseteq\underline{n}})\mid\alpha\in\Lambda\}\) is an \(n\)-fold vector bundle atlas on \(E\). It remains to check that this is a _symmetric_ \(n\)-fold vector bundle atlas. But this is immediate since by (35) the morphism \(\omega_{\rho}^{\alpha,\beta}\in C^{\infty}(U_{\alpha}\cap U_{\beta},\operatorname{Hom}(V^{\#I_{1}}\otimes\ldots\otimes V^{\#I_{k}},V^{\#I}))\) is trivial for \(\emptyset\neq I\subseteq\underline{n}\) and \(\rho=(I_{1},\ldots,I_{k})\) an ordered partition of \(I\) _with_ \(k\geq 2\). 
Finally, the action \(\Phi^{\alpha}\) of \(S_{n}\) on \(U_{\alpha}\times\prod_{\emptyset\neq I\subseteq\underline{n}}V^{\#I}\) is such that the diagram commutes, since the decomposition \(\mathcal{S}\) is symmetric. Hence \(\Phi_{\sigma}^{\alpha}\) is \(\Phi_{\sigma}(\underline{n})\) in the chart \(\Theta_{\alpha}\). Conversely, given a space \(E\) with a symmetric \(n\)-fold vector bundle structure over a smooth manifold \(M\) as in Definition 5.11, define \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) as follows. Take a maximal symmetric \(n\)-fold vector bundle atlas \[\mathfrak{A}=\Big{\{}\left(U_{\alpha},\Theta_{\alpha},\left(V^{\#I}\right)_{\emptyset\neq I\subseteq\underline{n}}\right)\Big{|}\;\alpha\in\Lambda\Big{\}}\] and set \(\mathbb{E}(\underline{n})=E\), \(\mathbb{E}(\emptyset)=M\), and more generally for \(\emptyset\neq I\subseteq\underline{n}\), \[\mathbb{E}(I)=\left(\bigsqcup_{\alpha\in\Lambda}\left(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\right)\right)\Bigg{/}\sim\] with \(\sim\) the equivalence relation defined on \(\bigsqcup_{\alpha\in\Lambda}(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J})\) by \[U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\quad\ni\quad\left(p,(v_{J})_{\emptyset\neq J\subseteq I}\right)\quad\sim\quad\left(q,(w_{J})_{\emptyset\neq J\subseteq I}\right)\quad\in\quad U_{\beta}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\] if and only if \(p=q\) and \[v_{J}=\sum_{\rho=(J_{1},\ldots,J_{k})\in\mathcal{P}(J)}\operatorname{sgn}(\rho)\omega_{\alpha\beta}^{\#J_{1},\ldots,\#J_{k}}(p)(w_{J_{1}},\ldots,w_{J_{k}})\] for \(\emptyset\neq J\subseteq I\). Let \(\operatorname{pr}_{I}\colon\bigsqcup_{\alpha\in\Lambda}\left(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\right)\to\mathbb{E}(I)\) be the quotient map. 
Then, as in the proof of Corollary 4.13, \(\mathbb{E}(I)\) has a unique smooth manifold structure such that \(\operatorname{pr}_{I}\colon\mathbb{E}(I)\to M\), \(\operatorname{pr}_{I}[p,(v_{J})_{\emptyset\neq J\subseteq I}]=p\) is a surjective submersion and such that the maps \[\Theta_{\alpha}^{I}\colon\operatorname{pr}_{I}\left(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\right)\to U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J},\qquad\left[p,(v_{J})_{\emptyset\neq J\subseteq I}\right]\mapsto\left(p,(v_{J})_{\emptyset\neq J\subseteq I}\right)\] are diffeomorphisms. This leads to an \(n\)-fold vector bundle \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) as in the proof of Corollary 4.13. The projections \(\operatorname{pr}_{I}\) for \(I\subseteq\underline{n}\) define a morphism \(\operatorname{pr}\colon\widetilde{\mathbb{E}}\to\mathbb{E}\) of \(n\)-fold vector bundles, where the \(n\)-fold vector bundle \(\widetilde{\mathbb{E}}\colon\square^{n}\to\mathbf{Man}^{\infty}\) is defined by \[\widetilde{\mathbb{E}}(I)=\bigsqcup_{\alpha\in\Lambda}\left(U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\right).\] (If \(\Lambda\) is not countable, the images of the objects of \(\square^{n}\) under the functor \(\widetilde{\mathbb{E}}\) are not second-countable topological spaces. However, this is not relevant for the use made of \(\widetilde{\mathbb{E}}\) in this proof.) It remains to show that \(\mathbb{E}\) is symmetric. First, by construction, for \(\emptyset\neq I\subseteq\underline{n}\) \[\mathbb{E}^{I}_{I}=\left.\left(\bigsqcup_{\alpha\in\Lambda}\left(U_{\alpha}\times V^{\#I}\right)\right)\right/\sim\] with \(\sim\) the equivalence relation defined on \(\bigsqcup_{\alpha\in\Lambda}\left(U_{\alpha}\times V^{\#I}\right)\) by \[U_{\alpha}\times V^{\#I}\quad\ni\quad(p,v)\quad\sim\quad(q,w)\quad\in\quad U_{\beta}\times V^{\#I}\] if and only if \(p=q\) and \(v=\omega_{\alpha\beta}^{\#I}(p)(w)\). 
Hence the equality \(\mathbb{E}^{I}_{I}=\mathbb{E}^{\sigma(I)}_{\sigma(I)}\) for \(\sigma\in S_{n}\) is immediate. \(\widetilde{\mathbb{E}}\) is a symmetric \(n\)-fold vector bundle by Section 5.4. Let its \(S_{n}\)-action be denoted by \(\widetilde{\Phi}\). It is easy to check with Lemma 5.6 that \[U_{\alpha}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\quad\ni\quad\left(p,(v_{J})_{\emptyset\neq J\subseteq I}\right)\quad\sim\quad\left(q,(w_{J})_{\emptyset\neq J\subseteq I}\right)\quad\in\quad U_{\beta}\times\prod_{\emptyset\neq J\subseteq I}V^{\#J}\] if and only if for all \(\sigma\in S_{n}\) \[U_{\alpha}\times\prod_{\emptyset\neq J\subseteq\sigma(I)}V^{\#J}\quad\ni\quad\widetilde{\Phi}_{\sigma}\left(p,(v_{J})_{\emptyset\neq J\subseteq I}\right)\quad\sim\quad\widetilde{\Phi}_{\sigma}\left(q,(w_{J})_{\emptyset\neq J\subseteq I}\right)\quad\in\quad U_{\beta}\times\prod_{\emptyset\neq J\subseteq\sigma(I)}V^{\#J}.\] Hence \(\widetilde{\Phi}\) descends along \(\operatorname{pr}\) to a symmetric structure \(\Phi\) on \(\mathbb{E}\), with \(\Phi(\underline{n})\) the action of Proposition 5.12. To conclude this section, the cocycle point of view is spelled out. **Definition 5.14**.: _A **symmetric \(n\)-fold vector bundle cocycle** on a smooth manifold \(M\) is a collection of an open cover \((U_{\alpha})_{\alpha\in\Lambda}\) of \(M\) and of morphisms_ \[\omega^{\alpha\beta}\colon U_{\alpha\beta}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\to U_{\alpha\beta}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\] 
of decomposed symmetric \(n\)-fold vector bundles over the identity on \(U_{\alpha\beta}\), with \(V^{1},\ldots,V^{n}\) fixed vector spaces, such that_ \[\omega^{\alpha\beta}=\omega^{\alpha\gamma}\circ\omega^{\gamma\beta}\colon U_{\alpha\gamma\beta}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\to U_{\alpha\gamma\beta}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\] _for all \(\alpha,\beta,\gamma\in\Lambda\), and_ \[\omega^{\alpha\alpha}=\operatorname{id}\colon U_{\alpha}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\to U_{\alpha}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\] _for all \(\alpha\in\Lambda\)._ In other words, for all \(\alpha,\beta\in\Lambda\), \[\omega^{\alpha\beta}=\left(\omega^{\alpha\beta}_{(i_{1},\ldots,i_{l})}\in C^{\infty}(U_{\alpha\beta},\operatorname{Hom}(V^{i_{1}}\odot\ldots\odot V^{i_{l}},V^{i_{1}+\ldots+i_{l}}))\right)_{(i_{1},\ldots,i_{l})\in\mathcal{P}(n)},\] with the composition given as in Proposition 5.7: for all \(\alpha,\beta,\gamma\in\Lambda\) and all canonically ordered integer partitions \((i_{1},\ldots,i_{l})\in\mathcal{P}(n)\) \[\omega^{\alpha\beta}(p)_{(i_{1},\ldots,i_{l})}=(\omega^{\alpha\gamma}(p)\circ\omega^{\gamma\beta}(p))_{(i_{1},\ldots,i_{l})}\] \[=\sum_{\begin{subarray}{c}\rho_{1},\ldots,\rho_{s}\text{ ordered}\\ \text{such that }\rho_{1}\cup\ldots\cup\rho_{s}=\rho_{\text{can}}^{(i_{1},\ldots,i_{l})}\\ \text{as sets}\\ \cup\rho_{1}<\ldots<\cup\rho_{s}\end{subarray}}\operatorname{sgn}(\rho_{1},\ldots,\rho_{s})\cdot\omega^{\alpha\gamma}_{(\Sigma|\rho_{1}|,\ldots,\Sigma|\rho_{s}|)}\circ\left(\omega^{\gamma\beta}_{|\rho_{1}|},\ldots,\omega^{\gamma\beta}_{|\rho_{s}|}\right),\] for \(p\in U_{\alpha\beta\gamma}\), and for \(p\in U_{\alpha}\) \[\omega^{\alpha\alpha}(p)_{(i_{1},\ldots,i_{l})}=\left\{\begin{array}{cc}\operatorname{id}_{V^{i_{1}}}&l=1\\ 0&l\geq 2\end{array}\right..\] ## 6. 
The equivalence of symmetric \(n\)-fold vector bundles with \([n]\)-manifolds This section constructs the functor from the category of symmetric \(n\)-fold vector bundles to the category of \([n]\)-manifolds, and the functor from the category of \([n]\)-manifolds to the category of symmetric \(n\)-fold vector bundles. The key to both functors is the obvious bijection between \([n]\)-manifold cocycles and symmetric \(n\)-fold vector bundle cocycles. Using this, both functors are constructed in the same manner. Begin by considering a symmetric \(n\)-fold vector bundle \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\), over a smooth manifold \(M:=\mathbb{E}(\emptyset)\). Since \(\mathbb{E}\) is symmetric, it defines \(n\) vector bundles \(A_{1},\ldots,A_{n}\) over \(M\), such that for \(i=1,\ldots,n\) \[A_{i}:=\mathbb{E}_{I}^{I}\quad\text{ for all }\quad I\subseteq\underline{n}\text{ with }\#I=i.\] Let \(r_{i}\) be the rank of \(A_{i}\) for \(i=1,\ldots,n\) and set \(V^{i}:=\mathbb{R}^{r_{i}}\) for simplicity. By Theorem 5.13 the manifold \(\mathbb{E}(\underline{n})\) is equipped with a symmetric \(n\)-fold vector bundle atlas \(\{c_{\alpha}=(U_{\alpha},\Theta_{\alpha})\mid\alpha\in\Lambda\}\) modeled on the vector spaces \(\mathbb{R}^{r_{1}},\ldots,\mathbb{R}^{r_{n}}\), and hence a symmetric \(n\)-fold vector bundle cocycle \[(U_{\alpha})_{\alpha\in\Lambda},\qquad\left(\omega^{\alpha\beta}:=\Theta_{\alpha}\circ\Theta_{\beta}^{-1}\colon U_{\alpha\beta}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\to U_{\alpha\beta}\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\right)_{\alpha,\beta\in\Lambda}.\] Assume that the atlas is maximal. 
Recall that for each pair of indices \(\alpha,\beta\in\Lambda\), the morphism \(\omega^{\alpha\beta}\) amounts to a collection \[\omega^{\alpha\beta}=\left(\omega^{\alpha\beta}_{(i_{1},\ldots,i_{l})}\in C^{\infty}(U_{\alpha\beta},\operatorname{Hom}(V^{i_{1}}\odot\ldots\odot V^{i_{l}},V^{i_{1}+\ldots+i_{l}}))\right)_{(i_{1},\ldots,i_{l})\in\mathcal{P}(n)}.\] By Proposition 5.7 and (14), the \(n\)-fold vector bundle cocycle hence defines the \([n]\)-manifold cocycle \[\left((U_{\alpha})_{\alpha\in\Lambda},\left(\omega_{(i_{1},\dots,i_{l})}^{\alpha\beta}\in C^{\infty}(U_{\alpha\beta},\operatorname{Hom}(V^{i_{1}}\odot\dots\odot V^{i_{l}},V^{i_{1}+\dots+i_{l}}))\right)_{(i_{1},\dots,i_{l})\in\mathcal{P}(n)}\right).\] Denote the corresponding \([n]\)-manifold by \(\mathcal{A}(\mathbb{E})\). This defines the map on objects of an _algebraisation functor_ \[\mathcal{A}\colon\mathsf{SnVB}\to[\mathsf{n}]\mathsf{Man}.\] Consider a morphism \(\Phi\colon\mathbb{E}\to\mathbb{F}\) of symmetric \(n\)-fold vector bundles. Let, as before, \((U_{\alpha},\phi_{\alpha}\mid\alpha\in\Lambda)\) be a symmetric \(n\)-fold vector bundle atlas for \(\mathbb{E}\) modeled as above on \(\mathbb{R}^{r_{1}}=:V^{1},\dots,\mathbb{R}^{r_{n}}=:V^{n}\), and let \((V_{\alpha^{\prime}},\psi_{\alpha^{\prime}}\mid\alpha^{\prime}\in\Lambda^{\prime})\) be a symmetric \(n\)-fold vector bundle atlas for \(\mathbb{F}\) modeled on \(\mathbb{R}^{s_{1}}=:W^{1},\dots,\mathbb{R}^{s_{n}}=:W^{n}\).
Then for each \(\alpha\in\Lambda\) and \(\alpha^{\prime}\in\Lambda^{\prime}\) the map \(\Phi(\underline{n})\colon\mathbb{E}(\underline{n})\to\mathbb{F}(\underline{n})\) defines \[\Phi_{\alpha^{\prime}\alpha}:=\psi_{\alpha^{\prime}}\circ\Phi(\underline{n})\circ\phi_{\alpha}^{-1}\colon(\Phi_{0}^{-1}(V_{\alpha^{\prime}})\cap U_{\alpha})\times\prod_{\emptyset\neq J\subseteq\underline{n}}V^{\#J}\to V_{\alpha^{\prime}}\times\prod_{\emptyset\neq J\subseteq\underline{n}}W^{\#J},\] which is a morphism of symmetric \(n\)-fold vector bundles and hence equivalent to a collection \[\left(\Phi_{\alpha^{\prime}\alpha}^{(i_{1},\dots,i_{l})}\in C^{\infty}(\Phi_{0}^{-1}(V_{\alpha^{\prime}})\cap U_{\alpha},\operatorname{Hom}(V^{i_{1}}\odot\dots\odot V^{i_{l}},W^{i_{1}+\dots+i_{l}}))\right)_{(i_{1},\dots,i_{l})\in\mathcal{P}(n)}.\] For \(\alpha,\beta\in\Lambda\), \(\alpha^{\prime},\beta^{\prime}\in\Lambda^{\prime}\) and all \(x\in\Phi_{0}^{-1}(V_{\alpha^{\prime}\beta^{\prime}})\cap U_{\alpha\beta}\) \[\Phi_{\beta^{\prime}\beta}(x)=\psi_{\beta^{\prime}\alpha^{\prime}}(\Phi_{0}(x))\circ\Phi_{\alpha^{\prime}\alpha}(x)\circ\phi_{\alpha\beta}(x). \tag{36}\] It is easy to see that the collection of morphisms \(\Phi_{\alpha^{\prime}\alpha}\), \(\alpha\in\Lambda,\alpha^{\prime}\in\Lambda^{\prime}\), with (36) is equivalent to \(\Phi\) in the sense that \(\Phi(\underline{n})\) can be fully recovered from it. Such a collection of morphisms is called a **morphism of \(n\)-fold vector bundle cocycles**.
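For \(n=1\), condition (36) recovers a familiar fact, sketched here for orientation: a morphism of ordinary vector bundles \(E\to M\) and \(F\to N\) over \(\Phi_{0}\colon M\to N\) is locally given by matrix-valued maps \(\Phi_{\alpha^{\prime}\alpha}\in C^{\infty}(\Phi_{0}^{-1}(V_{\alpha^{\prime}})\cap U_{\alpha},\operatorname{Hom}(V^{1},W^{1}))\), and \[\Phi_{\beta^{\prime}\beta}(x)=\psi_{\beta^{\prime}\alpha^{\prime}}(\Phi_{0}(x))\circ\Phi_{\alpha^{\prime}\alpha}(x)\circ\phi_{\alpha\beta}(x),\] with \(\psi_{\beta^{\prime}\alpha^{\prime}}:=\psi_{\beta^{\prime}}\circ\psi_{\alpha^{\prime}}^{-1}\) and \(\phi_{\alpha\beta}:=\phi_{\alpha}\circ\phi_{\beta}^{-1}\) the transition functions, simply says that these local matrices transform under the transition functions of the two bundles. Condition (36) extends this picture to all components \(\Phi_{\alpha^{\prime}\alpha}^{(i_{1},\dots,i_{l})}\) at once.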
**Definition 6.1**.: _Let \(C_{M}:=\left((U_{\alpha})_{\alpha\in\Lambda},(\phi^{\alpha\beta})_{\alpha,\beta\in\Lambda}\right)\) be an \(n\)-fold vector bundle cocycle on a smooth manifold \(M\) and let \(C_{N}:=\left((V_{\alpha^{\prime}})_{\alpha^{\prime}\in\Lambda^{\prime}},(\psi^{\alpha^{\prime}\beta^{\prime}})_{\alpha^{\prime},\beta^{\prime}\in\Lambda^{\prime}}\right)\) be an \(n\)-fold vector bundle cocycle on a smooth manifold \(N\)._ _Then a morphism \(\Phi\colon C_{M}\to C_{N}\) of \(n\)-fold vector bundle cocycles over a smooth map \(\Phi_{0}\) is a collection_ \[\left(\left(\Phi_{\alpha^{\prime}\alpha}^{(i_{1},\dots,i_{l})}\in C^{\infty}(\Phi_{0}^{-1}(V_{\alpha^{\prime}})\cap U_{\alpha},\operatorname{Hom}(V^{i_{1}}\odot\dots\odot V^{i_{l}},W^{i_{1}+\dots+i_{l}}))\right)_{(i_{1},\dots,i_{l})\in\mathcal{P}(n)},\alpha\in\Lambda,\alpha^{\prime}\in\Lambda^{\prime}\right)\] _of symmetric \(n\)-fold vector bundle morphisms such that (36) holds._ It is also easy to see that such a morphism of symmetric \(n\)-fold vector bundle cocycles defines a morphism \(\mathbb{E}(C_{M})\to\mathbb{E}(C_{N})\) of symmetric \(n\)-fold vector bundles, where \(\mathbb{E}(C_{M})\) is the symmetric \(n\)-fold vector bundle defined as in the proof of Theorem 5.13. The same discussion for \([n]\)-manifolds gives the notion of morphism of \([n]\)-manifold cocycles, which consists again of exactly the same data as in the last definition. Hence a morphism \(\Phi\colon\mathbb{E}\to\mathbb{F}\) of symmetric \(n\)-fold vector bundles defines a morphism of symmetric \(n\)-fold vector bundle cocycles, hence a morphism \(\mathcal{A}(\Phi)\colon\mathcal{A}(\mathbb{E})\to\mathcal{A}(\mathbb{F})\).
This completes the definition of the functor \[\mathcal{A}\colon\mathsf{SnVB}\to[\mathsf{n}]\mathsf{Man}.\] The _geometrisation functor_ \[\mathcal{G}\colon[\mathsf{n}]\mathsf{Man}\to\mathsf{SnVB}\] is defined in exactly the same manner: starting from an \([n]\)-manifold \(\mathcal{M}\), consider an associated maximal \([n]\)-manifold cocycle, which is a symmetric \(n\)-fold vector bundle cocycle, and hence defines a symmetric \(n\)-fold vector bundle \(\mathbb{E}(\mathcal{M})=:\mathcal{G}(\mathcal{M})\). As above, this extends naturally to morphisms of \([n]\)-manifolds. The two obtained functors define an equivalence between the two categories \(\mathsf{SnVB}\) and \([\mathsf{n}]\mathsf{Man}\): The composition \[\mathcal{G}\circ\mathcal{A}\colon\mathsf{SnVB}\to\mathsf{SnVB}\] sends a symmetric \(n\)-fold vector bundle \(\mathbb{E}\) to the abstract symmetric \(n\)-fold vector bundle defined by a choice of maximal symmetric \(n\)-fold vector bundle cocycle associated to \(\mathbb{E}\). The obtained symmetric \(n\)-fold vector bundle is canonically isomorphic to \(\mathbb{E}\). In the same manner, \[\mathcal{A}\circ\mathcal{G}\colon[\mathsf{n}]\mathsf{Man}\to[\mathsf{n}] \mathsf{Man}\] sends an \([n]\)-manifold \(\mathcal{M}\) to the abstract \([n]\)-manifold defined by a choice of maximal \([n]\)-manifold cocycle for \(\mathcal{M}\). The two \([n]\)-manifolds are canonically isomorphic. This completes the proof of the following theorem. **Theorem 6.2**.: _Let \(n\in\mathbb{N}\). 
The algebraisation and geometrisation functors_ \[\mathcal{A}\colon\mathsf{SnVB}\to[\mathsf{n}]\mathsf{Man}\] _and_ \[\mathcal{G}\colon[\mathsf{n}]\mathsf{Man}\to\mathsf{SnVB}\] _establish an equivalence between the category of symmetric \(n\)-fold vector bundles and the category of \([n]\)-manifolds._ Note that this equivalence can be extended to an equivalence of \(\mathsf{NMan}\) with the category of symmetric multiple vector bundles, but the details will be carried out in a future work.

## Appendix A From linear splittings to decompositions

This section revisits in detail the proof of the following statement in [10] (see Theorem 3.3 there), because its symmetric analogue requires further insight into it; in particular, the uniqueness part of this statement was not explained in detail in [10]. In this section a general \(n\)-fold vector bundle is considered; no \(S_{n}\)-symmetry is assumed. **Proposition A.1**.: _Let \(\mathbb{E}\) be an \(n\)-fold vector bundle. Given a linear splitting \(\Sigma\) of \(\mathbb{E}\) and compatible decompositions \(\mathcal{S}^{J}\colon(\mathbb{E}^{\mathrm{dec}})^{\rho_{J}}\to\mathbb{E}^{\rho_{J}}\) of the highest order cores of \(\mathbb{E}\), for \(J\subseteq\underline{n}\) with \(\#J=2\), there exists a unique decomposition \(\mathcal{S}\) of \(\mathbb{E}\) such that \(\Sigma=\mathcal{S}\circ\iota\) and such that the core morphisms of \(\mathcal{S}\) equal \(\mathcal{S}^{J}\) for all \(J\)._ The decomposition \(\mathcal{S}\colon\mathbb{E}^{\mathrm{dec}}\to\mathbb{E}\) is recursively constructed as follows.
Write \(J_{1},\ldots,J_{\binom{n}{2}}\) for the subsets of \(\underline{n}\) with \(\#J_{k}=2\) and define an increasing chain of \(\binom{n}{2}\) decomposed \(n\)-fold vector bundles: For \(k=0,\ldots,\binom{n}{2}\) set \(\mathcal{A}^{k}=(B^{k}_{I})_{I\subseteq\underline{n}}\) with \[B^{k}_{I}=\left\{\begin{array}{ll}A_{\#I}&\quad\text{if $\#I=1$ or if there is $i\leq k$ such that $J_{i}\subseteq I$;}\\ M&\quad\text{otherwise.}\end{array}\right.\] Set \(\mathbb{E}^{k}:=\mathbb{E}^{\mathcal{A}^{k}}\). There are obvious monomorphisms \[\overline{\mathbb{E}}=\mathbb{E}^{0}\hookrightarrow\mathbb{E}^{1}\hookrightarrow\ldots\hookrightarrow\mathbb{E}^{\binom{n}{2}}=\mathbb{E}^{\mathrm{dec}}\] of \(n\)-fold vector bundles. In particular the spaces \(E^{k}:=\mathbb{E}^{k}(\underline{n})\) can be seen as submanifolds of \(\mathbb{E}^{\mathrm{dec}}(\underline{n})\). Note that additionally \((\mathbb{E}^{\mathrm{dec}})^{\underline{n}}_{J_{i}}(\underline{n})\subseteq\mathbb{E}^{k}(\underline{n})\) for all \(i\leq k\). First set \(\mathcal{S}^{0}:=\Sigma\). Then take \(k\geq 0\) and assume that a monomorphism \(\mathcal{S}^{k}\colon\mathbb{E}^{k}\to\mathbb{E}\) of \(n\)-fold vector bundles has already been constructed, that restricts to \(\Sigma\) on \(\mathbb{E}^{0}\) and to \(\mathcal{S}^{J_{i}}\) on \(E^{k}\cap(E^{\mathrm{dec}})^{\underline{n}}_{J_{i}}\) for \(i=1,\ldots,k\). Take \(\mathbf{x}=(x_{I})_{I\subseteq\underline{n}}\in\mathbb{E}^{k+1}(\underline{n})\) over a base point \(m\in M\). Then in particular \(x_{I}=0_{m}^{A_{\#I}}\) if \(\#I\geq 2\) and there is no \(i\leq k+1\) with \(J_{i}\subseteq I\). Set \(\mathbf{y}:=(y_{I})_{I\subseteq\underline{n}}\in\mathbb{E}^{k}(\underline{n})\subseteq\mathbb{E}^{\rm dec}(\underline{n})\) with \[y_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if either $\#I=1$ or there is $i\leq k$ such that $J_{i}\subseteq I$,}\\ 0_{m}^{A_{\#I}}&\mbox{ otherwise.}\end{array}\right.
\tag{37}\] Set furthermore \(\mathbf{z}:=(z_{I})_{I\subseteq\underline{n}}\in(\mathbb{E}^{\rm dec})_{\overline{J}_{k+1}}^{\underline{n}}\) where \[z_{I}=\left\{\begin{array}{ll}y_{I}&\mbox{ whenever $I\subseteq\underline{n}\setminus J_{k+1}$,}\\ x_{I}&\mbox{ whenever $J_{k+1}\subseteq I$ and there is no $i\leq k$ with $J_{i}\subseteq I$,}\\ 0_{m}^{A_{\#I}}&\mbox{ otherwise.}\end{array}\right. \tag{38}\] Then writing \(J_{k+1}=\{s,t\}\), it is easy to check that \[\mathbf{x}=\mathbf{y}\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}_{p_{s}(\mathbf{y})}^{\underline{n}}\underset{\underline{n}\setminus\{t\}}{+}\mathbf{z}\right)=\mathbf{y}\underset{\underline{n}\setminus\{t\}}{+}\left(\mathbf{0}_{p_{t}(\mathbf{y})}^{\underline{n}}\underset{\underline{n}\setminus\{s\}}{+}\mathbf{z}\right)\,.\] The last equality follows directly from the interchange law in the double vector bundle \((E;E_{\underline{n}\setminus\{s\}},E_{\underline{n}\setminus\{t\}};E_{\underline{n}\setminus\{s,t\}})\) since \(\mathbf{z}\) is in the core of this double vector bundle. The monomorphism \(\mathcal{S}^{k+1}\colon\mathbb{E}^{k+1}\to\mathbb{E}\) is then defined by \[\mathcal{S}^{k+1}(\mathbf{x}):=\mathcal{S}^{k}(\mathbf{y})\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}_{p_{s}\left(\mathcal{S}^{k}(\mathbf{y})\right)}^{\underline{n}}\underset{\underline{n}\setminus\{t\}}{+}\mathcal{S}^{J_{k+1}}(\mathbf{z})\right)\] \[=\mathcal{S}^{k}(\mathbf{y})\underset{\underline{n}\setminus\{t\}}{+}\left(\mathbf{0}_{p_{t}\left(\mathcal{S}^{k}(\mathbf{y})\right)}^{\underline{n}}\underset{\underline{n}\setminus\{s\}}{+}\mathcal{S}^{J_{k+1}}(\mathbf{z})\right)\,.\] Repeating this construction \(\binom{n}{2}\)-times yields the decomposition \(\mathcal{S}=\mathcal{S}^{\binom{n}{2}}\colon\mathbb{E}^{\rm dec}\to\mathbb{E}\) of \(\mathbb{E}\), see [11]. The uniqueness of \(\mathcal{S}\) in the statement needs to be checked next.
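As an illustration, consider the smallest case \(n=2\) (a sketch): there is a single two-element subset \(J_{1}=\{1,2\}=\{s,t\}\), the chain reduces to \(\overline{\mathbb{E}}=\mathbb{E}^{0}\hookrightarrow\mathbb{E}^{1}=\mathbb{E}^{\mathrm{dec}}\), and one step of the recursion already yields \(\mathcal{S}\). For \(\mathbf{x}=(x_{\{1\}},x_{\{2\}},x_{\{1,2\}})\in\mathbb{E}^{\mathrm{dec}}(\underline{2})\) over \(m\in M\), the tuple \(\mathbf{y}=(x_{\{1\}},x_{\{2\}},0_{m}^{A_{2}})\) retains only the side components, \(\mathbf{z}=(0_{m}^{A_{1}},0_{m}^{A_{1}},x_{\{1,2\}})\) retains only the core component, and \[\mathcal{S}(\mathbf{x})=\mathcal{S}^{1}(\mathbf{x})=\Sigma(\mathbf{y})\underset{\underline{2}\setminus\{s\}}{+}\left(\mathbf{0}_{p_{s}(\Sigma(\mathbf{y}))}^{\underline{2}}\underset{\underline{2}\setminus\{t\}}{+}\mathcal{S}^{J_{1}}(\mathbf{z})\right),\] which is the usual description of a decomposition of a double vector bundle in terms of a linear splitting and a decomposition of its core.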
It is sufficient to show that \(\mathcal{S}\) does not depend on the choice of numbering \(J_{1},\ldots,J_{\binom{n}{2}}\) of the subsets of \(\underline{n}\) with two elements. Since this fact is crucial for the proof below of the \(S_{n}\)-invariance of \(\mathcal{S}\), it is proved in detail here. It suffices to show that for each \(k=0,\ldots,\binom{n}{2}-2\), the monomorphism \(\mathcal{S}^{k+2}\) does not depend on the order of the two subsets \(J_{k+1}\) and \(J_{k+2}\). Choose such a \(k\) and set \(J_{k+1}=\{s,t\}\) and \(J_{k+2}=\{p,q\}\). Take \(\mathbf{x}=(x_{I})_{I\subseteq\underline{n}}\in E^{k+2}\) over a base point \(m\in M\). That is, \(x_{I}=0_{m}^{A_{\#I}}\) if \(\#I\geq 2\) and there is no \(i\leq k+2\) with \(J_{i}\subseteq I\). Set \(\mathbf{y}:=(y_{I})_{I\subseteq\underline{n}}\in E^{k}\subseteq\mathbb{E}^{\rm dec}(\underline{n})\) with \[y_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if either $\#I=1$ or there is $1\leq i\leq k$ such that $J_{i}\subseteq I$,}\\ 0_{m}^{A_{\#I}}&\mbox{ otherwise.}\end{array}\right.\] Define further \(\mathbf{a}\in(\mathbb{E}^{\rm dec})^{J_{k+1}}\) and \(\mathbf{b}\in(\mathbb{E}^{\rm dec})^{J_{k+2}}\) by \[a_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if $I\subseteq\underline{n}\setminus J_{k+1}$ and $(\#I=1$ or there exists $i\in\{1,\ldots,k\}:J_{i}\subseteq I$),}\\ x_{I}&\mbox{ if $J_{k+1}\subseteq I$ and there is no $i\in\{1,\ldots,k\}$ with $J_{i}\subseteq I$,}\\ 0_{m}^{A_{\#I}}&\mbox{ otherwise,}\end{array}\right.\] and \[b_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if $I\subseteq\underline{n}\setminus J_{k+2}$ and $(\#I=1$ or there exists $i\in\{1,\ldots,k,k+1\}:J_{i}\subseteq I$),}\\ x_{I}&\mbox{ if $J_{k+2}\subseteq I$ and there is no $i\in\{1,\ldots,k,k+1\}$ with $J_{i}\subseteq I$,}\\ 0_{m}^{A_{\#I}}&\mbox{ otherwise.}\end{array}\right.\] Then \[\mathbf{u}:=\mathbf{y}\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{y})}\underset{\underline{n}\setminus\{t\}}{+}\mathbf{a}\right)\] lies in \(E^{k+1}\) and is described by \[u_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if either $\#I=1$ or there is $1\leq i\leq k+1$ such that $J_{i}\subseteq I$,}\\ 0^{A_{\#I}}_{m}&\mbox{ otherwise.}\end{array}\right.\] The sum \[\mathbf{u}\underset{\underline{n}\setminus\{p\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathbf{u})}\underset{\underline{n}\setminus\{q\}}{+}\mathbf{b}\right)=\left(\mathbf{y}\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{y})}\underset{\underline{n}\setminus\{t\}}{+}\mathbf{a}\right)\right)\underset{\underline{n}\setminus\{p\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathbf{u})}\underset{\underline{n}\setminus\{q\}}{+}\mathbf{b}\right) \tag{39}\] then equals \(\mathbf{x}\). Now exchange the roles of \(J_{k+1}\) and \(J_{k+2}\).
Define \(\mathbf{c}\in(\mathbb{E}^{\mathrm{dec}})^{J_{k+2}}\) and \(\mathbf{d}\in(\mathbb{E}^{\mathrm{dec}})^{J_{k+1}}\) by \[c_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if $I\subseteq\underline{n}\setminus J_{k+2}$ and $(\#I=1$ or there exists $i\in\{1,\ldots,k\}:J_{i}\subseteq I$),}\\ x_{I}&\mbox{ if $J_{k+2}\subseteq I$ and there is no $i\in\{1,\ldots,k\}$ with $J_{i}\subseteq I$,}\\ 0^{A_{\#I}}_{m}&\mbox{ otherwise,}\end{array}\right.\] and \[d_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if $I\subseteq\underline{n}\setminus J_{k+1}$ and $(\#I=1$ or there exists $i\in\{1,\ldots,k,k+2\}:J_{i}\subseteq I$),}\\ x_{I}&\mbox{ if $J_{k+1}\subseteq I$ and there is no $i\in\{1,\ldots,k,k+2\}$ with $J_{i}\subseteq I$,}\\ 0^{A_{\#I}}_{m}&\mbox{ otherwise.}\end{array}\right.\] Then \[\mathbf{v}:=\mathbf{y}\underset{\underline{n}\setminus\{p\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathbf{y})}\underset{\underline{n}\setminus\{q\}}{+}\mathbf{c}\right)\] is described by \[v_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if either $\#I=1$ or there is $i\in\{1,\ldots,k,k+2\}$ such that $J_{i}\subseteq I$,}\\ 0^{A_{\#I}}_{m}&\mbox{ otherwise.}\end{array}\right.\] The sum \[\mathbf{v}\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{v})}\underset{\underline{n}\setminus\{t\}}{+}\mathbf{d}\right)=\left(\mathbf{y}\underset{\underline{n}\setminus\{p\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathbf{y})}\underset{\underline{n}\setminus\{q\}}{+}\mathbf{c}\right)\right)\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{v})}\underset{\underline{n}\setminus\{t\}}{+}\mathbf{d}\right) \tag{40}\] then equals \(\mathbf{x}\).
It is easy to check that the tuples \(\mathbf{a}\), \(\mathbf{b}\), \(\mathbf{c}\) and \(\mathbf{d}\) satisfy \[p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{b})=p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{c}),\quad p^{\underline{n}}_{\underline{n}\setminus\{t\}}(\mathbf{b})=p^{\underline{n}}_{\underline{n}\setminus\{t\}}(\mathbf{c}),\] as well as \[p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathbf{a})=p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathbf{d}),\quad\mbox{ and }\quad p^{\underline{n}}_{\underline{n}\setminus\{q\}}(\mathbf{a})=p^{\underline{n}}_{\underline{n}\setminus\{q\}}(\mathbf{d}).\] It then turns out that \[\mathbf{b}=\mathbf{c}\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{c})}\underset{\underline{n}\setminus\{t\}}{+}\mathbf{e}\right)=\mathbf{c}\underset{\underline{n}\setminus\{t\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{t\}}(\mathbf{c})}\underset{\underline{n}\setminus\{s\}}{+}\mathbf{e}\right)\] and also \[\mathbf{d}=\mathbf{a}\underset{\underline{n}\setminus\{p\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathbf{a})}\underset{\underline{n}\setminus\{q\}}{+}\mathbf{e}\right)=\mathbf{a}\underset{\underline{n}\setminus\{q\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{q\}}(\mathbf{a})}\underset{\underline{n}\setminus\{p\}}{+}\mathbf{e}\right)\] with \(\mathbf{e}\in(\mathbb{E}^{\mathrm{dec}})^{\rho_{J_{k+1}}\cap\rho_{J_{k+2}}}\) defined by \[e_{I}=\left\{\begin{array}{ll}x_{I}&\mbox{ if $I\subseteq\underline{n}\setminus(J_{k+1}\cup J_{k+2})$ and $(\#I=1$ or there exists $i\in\{1,\ldots,k\}:J_{i}\subseteq I$),}\\ x_{I}&\mbox{ if $J_{k+2}\subseteq I\subseteq\underline{n}\setminus J_{k+1}$ and there is no $i\in\{1,\ldots,k\}$ with $J_{i}\subseteq I$,}\\ x_{I}&\mbox{ if $J_{k+1}\subseteq I\subseteq\underline{n}\setminus J_{k+2}$ and there is no $i\in\{1,\ldots,k\}$ with $J_{i}\subseteq I$,}\\ x_{I}&\mbox{ if $J_{k+1}\cup J_{k+2}\subseteq I$ and there is no $i\in\{1,\ldots,k\}$ with $J_{i}\subseteq I$,}\\ 0^{A_{\#I}}_{m}&\mbox{ otherwise.}\end{array}\right.\] Consider now the second term of (39). Inserting \(\mathbf{b}=\mathbf{c}\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{s\}}(\mathbf{c})}\underset{\underline{n}\setminus\{t\}}{+}\mathbf{e}\right)\) and applying first the interchange law in the double vector bundle \((\mathbb{E}(\underline{n}),\mathbb{E}(\underline{n}\setminus\{s\}),\mathbb{E}(\underline{n}\setminus\{q\}),\mathbb{E}(\underline{n}\setminus\{s,q\}))\) and then the interchange law in the double vector bundle \((\mathbb{E}(\underline{n}),\mathbb{E}(\underline{n}\setminus\{t\}),\mathbb{E}(\underline{n}\setminus\{q\}),\mathbb{E}(\underline{n}\setminus\{t,q\}))\) rewrites (39) in such a way that the additions over \(\underline{n}\setminus\{p\}\) and \(\underline{n}\setminus\{q\}\) are carried out first. Applying \(\mathcal{S}^{k}\), \(\mathcal{S}^{J_{k+1}}\) and \(\mathcal{S}^{J_{k+2}}\) to the summands and transforming the result in the same manner with the interchange formulas shows that the value of \(\mathcal{S}^{k+2}(\mathbf{x})\) obtained from (39), i.e. with the ordering \(J_{k+1},J_{k+2}\), equals \[\left(\mathcal{S}^{k}(\mathbf{y})\underset{\underline{n}\setminus\{p\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathcal{S}^{k}(\mathbf{y}))}\underset{\underline{n}\setminus\{q\}}{+}\mathcal{S}^{J_{k+2}}(\mathbf{c})\right)\right)\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{s\}}\left(\mathcal{S}^{k}(\mathbf{y})\underset{\underline{n}\setminus\{p\}}{+}\left(\mathbf{0}^{\underline{n}}_{p^{\underline{n}}_{\underline{n}\setminus\{p\}}(\mathcal{S}^{k}(\mathbf{y}))}\underset{\underline{n}\setminus\{q\}}{+}\mathcal{S}^{J_{k+2}}(\mathbf{c})\right)\right)}\underset{\underline{n}\setminus\{t\}}{+}\mathcal{S}^{J_{k+1}}(\mathbf{d})\right),\] the value of \(\mathcal{S}^{k+2}(\mathbf{x})\) obtained from (40) with the ordering \(J_{k+2},J_{k+1}\), if and only if \[\mathcal{S}^{J_{k+1}}(\mathbf{e})=\mathcal{S}^{J_{k+2}}(\mathbf{e}).\] But since \(\mathbf{e}\) lies in \((\mathbb{E}^{\mathrm{dec}})^{\rho_{J_{k+1}}\cap\rho_{J_{k+2}}}\) and \(\mathcal{S}^{J_{k+1}}\) and \(\mathcal{S}^{J_{k+2}}\) are compatible as in (21), this equality is immediate.

## Appendix B On the existence of symmetric linear splittings and symmetric decompositions

This section proves that any symmetric \(n\)-fold vector bundle admits a (non-canonical) symmetric linear decomposition. The existence of a linear decomposition is proved in [12], see also Section 4.4, so it remains to show that a _symmetric_ decomposition can always be chosen. Recall that a linear splitting \(\Sigma\) of an \(n\)-fold vector bundle \(\mathbb{E}\) and decompositions \(\mathcal{S}^{\rho}\) of the \((n-1)\)-cores of \(\mathbb{E}\) are called **compatible** if (21) and (22) hold.
If \(\mathbb{E}\) is now a symmetric \(n\)-fold vector bundle with symmetric structure \(\Phi\), the decompositions of the \((n-1)\)-cores of \(\mathbb{E}\) as above are furthermore **symmetrically compatible** if additionally \[(\Phi_{\sigma})^{\rho}\circ\mathcal{S}^{\rho}=\mathcal{S}^{\sigma(\rho)}\circ\left(\Psi_{\sigma}^{\mathrm{dec}}\right)^{\rho} \tag{42}\] for all \(\sigma\in S_{n}\) and all \(\rho=\rho_{J}\) with \(J\subseteq\underline{n}\), \(\#J=2\). For simplicity, the restrictions of \(\Phi_{\sigma}\) and \(\Psi_{\sigma}^{\mathrm{dec}}\) to iterated highest order cores of \(\mathbb{E}\) and \(\mathbb{E}^{\mathrm{dec}}\) are simply written \(\Phi_{\sigma}\) and \(\Psi_{\sigma}^{\mathrm{dec}}\), respectively, for \(\sigma\in S_{n}\). That is, (42) is just written \[\Phi_{\sigma}\circ\mathcal{S}^{\rho}=\mathcal{S}^{\sigma(\rho)}\circ\Psi_{\sigma}^{\mathrm{dec}}.\] **Proposition B.1**.: 1. _Let_ \(\mathcal{S}\) _be a symmetric decomposition of an_ \(n\)_-fold vector bundle_ \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\)_. Then the composition_ \(\Sigma=\mathcal{S}\circ\iota\colon\overline{\mathbb{E}}\to\mathbb{E}\)_, with_ \(\iota\) _defined as in (_3_) of Definition_ 5.4_, is a symmetric splitting of_ \(\mathbb{E}\)_. Furthermore, the core morphisms_ \(\mathcal{S}^{\rho_{J}}\colon(\mathbb{E}^{\mathrm{dec}})^{\rho_{J}}\to\mathbb{E}^{\rho_{J}}\) _are decompositions of_ \(\mathbb{E}^{\rho_{J}}\) _for all_ \(J\subseteq\underline{n}\) _with_ \(\#J=2\)_, and these decompositions and the linear splitting are symmetrically compatible as in (_42_)._ 2.
_Conversely, given a symmetric linear splitting_ \(\Sigma\) _of_ \(\mathbb{E}\) _and compatible, symmetrically compatible decompositions_ \(\mathcal{S}^{J}\) _of the highest order cores_ \(\mathbb{E}^{\rho_{J}}\) _for_ \(J\subseteq\underline{n}\) _with_ \(\#J=2\)_, there exists a unique symmetric decomposition_ \(\mathcal{S}\) _of_ \(\mathbb{E}\) _such that_ \(\Sigma=\mathcal{S}\circ\iota\) _and such that the induced core morphisms_ \(\mathcal{S}^{\rho_{J}}\) _of_ \(\mathcal{S}\) _equal_ \(\mathcal{S}^{J}\) _for all_ \(J\)_._ Proof.: 1. Consider a symmetric decomposition \(\mathcal{S}\colon\mathbb{E}^{\mathrm{dec}}\to\mathbb{E}\). Then the composition \(\Sigma=\mathcal{S}\circ\iota\) is a linear splitting of \(\mathbb{E}\) and for all \(J\subseteq\underline{n}\) with \(\#J=2\) the restrictions \(\mathcal{S}^{\rho_{J}}\) are decompositions of \(\mathbb{E}^{\rho_{J}}\) that are compatible with \(\Sigma\), see Proposition 3.3 in [12]. Since \(\iota\) is \(S_{n}\)-invariant, \(\Sigma\) is clearly symmetric: \[\Sigma(I)\circ\overline{\Psi}_{\sigma}(I)=\mathcal{S}(I)\circ\iota(I)\circ\overline{\Psi}_{\sigma}(I)=\mathcal{S}(I)\circ\Psi_{\sigma}^{\mathrm{dec}}(I)\circ\iota(I)\] \[=\Phi_{\sigma}(I)\circ\mathcal{S}(I)\circ\iota(I)=\Phi_{\sigma}(I)\circ\Sigma(I)\] for \(\sigma\in S_{n}\) and \(I\subseteq\underline{n}\). It remains to show that the core morphisms are symmetrically compatible. But this follows immediately from the definition of these objects and from the \(S_{n}\)-invariance of \(\mathcal{S}\) and \(\iota\): \[\Phi_{\sigma}\circ\mathcal{S}^{\rho_{J}}=\left.\left(\Phi_{\sigma}\circ\mathcal{S}\right)\right|_{(\mathbb{E}^{\text{dec}})^{\rho_{J}}}=\left.\left(\mathcal{S}\circ\Psi_{\sigma}^{\text{dec}}\right)\right|_{(\mathbb{E}^{\text{dec}})^{\rho_{J}}}\] \[=\mathcal{S}^{\rho_{\sigma(J)}}\circ\left(\Psi_{\sigma}^{\text{dec}}\right)^{\rho_{J}}\] for all \(J\subseteq\underline{n}\) with \(\#J=2\). 2.
Conversely, assume that an \(S_{n}\)-invariant splitting \(\Sigma\) of \(\mathbb{E}\) and compatible, symmetrically compatible decompositions \(\mathcal{S}^{J}\) of the cores \(\mathbb{E}^{\rho_{J}}\) with \(J\subseteq\underline{n}\), \(\#J=2\) are given as in (b) and (42). Proposition 3.3 of [12] (see Proposition A.1 in Section A above) shows the existence of a unique decomposition \(\mathcal{S}\) of \(\mathbb{E}\) that restricts in the sense of (b) to \(\Sigma\) and the core decompositions \(\mathcal{S}^{J}\). It remains to check that this decomposition is symmetric. Going back to the proof of Proposition A.1 in Section A, fix once and for all the numbering of the sets \(J_{1},\ldots,J_{\binom{n}{2}}\). For each \(\sigma\in S_{n}\), the list of subsets \[J_{1}^{\sigma}:=\sigma(J_{1}),\ldots,J_{\binom{n}{2}}^{\sigma}:=\sigma\left(J_{\binom{n}{2}}\right)\] is again a numbering of the subsets of \(\underline{n}\) with \(2\) elements. Repeating the construction in Section A for this new choice yields again for each \(k=0,\ldots,\binom{n}{2}\) a family \(\mathcal{A}_{\sigma}^{k}=(B_{I}^{k,\sigma})_{I\subseteq\underline{n}}\), \[B_{I}^{k,\sigma}=\left\{\begin{array}{ll}A_{\#I}&\quad\text{if $\#I=1$ or if there is $i\leq k$ such that $\sigma(J_{i})\subseteq I$;}\\ M&\quad\text{otherwise.}\end{array}\right.\] A new chain of decomposed \(n\)-fold vector bundles \[\mathbb{E}^{0}=\overline{\mathbb{E}}=:\mathbb{E}_{\sigma}^{0}\hookrightarrow\mathbb{E}_{\sigma}^{1}\hookrightarrow\ldots\hookrightarrow\mathbb{E}_{\sigma}^{\binom{n}{2}}=\mathbb{E}^{\text{dec}}=\mathbb{E}^{\binom{n}{2}},\] is defined by the families \(\mathcal{A}_{\sigma}^{k}\), \(k=0,\ldots,\binom{n}{2}\), as well as a family of monomorphisms \[\mathcal{S}_{\sigma}^{0}=\mathcal{S}^{0}=\Sigma\colon\overline{\mathbb{E}}\to\mathbb{E},\quad\mathcal{S}_{\sigma}^{1}\colon\mathbb{E}_{\sigma}^{1}\to\mathbb{E},\quad\ldots,\quad\mathcal{S}_{\sigma}^{\binom{n}{2}}=\mathcal{S}^{
\binom{n}{2}}=\mathcal{S}\colon\mathbb{E}^{\text{dec}}\to\mathbb{E}\] of \(n\)-fold vector bundles, where the first and last morphisms are \(\Sigma\) and \(\mathcal{S}\), respectively, the first by construction, and the latter since \(\mathcal{S}\) does not depend on the choice of ordering of the subsets of \(\underline{n}\) with two elements, by Appendix A. Consider \(\mathbf{x}=(x_{J})_{J\subseteq I}\in\mathbb{E}_{\sigma}^{k}(I)\subseteq\mathbb{E}^{\text{dec}}(I)\). Then there is an \(m\in M\) such that for all \(J\subseteq I\): \[x_{J}\left\{\begin{array}{ll}=0_{m}^{A_{\#J}}&\quad\text{if $\#J\geq 2$ and there exists no $i\in\{1,\ldots,k\}$ with $J_{i}\subseteq J$}\\ \in A_{\#J}(m)&\quad\text{otherwise.}\end{array}\right.\] For \(\sigma\in S_{n}\) the image \(\mathbf{x}^{\prime}\) of this point \(\mathbf{x}\) under \(\Psi_{\sigma}^{\text{dec}}(I)\) lives in \(\mathbb{E}^{\text{dec}}(\sigma(I))\) and its coordinates satisfy \[x^{\prime}_{J}=\epsilon(\sigma^{-1},J)x_{\sigma^{-1}(J)}\] \[\left\{\begin{array}{ll}=0_{m}^{A_{\#J}}&\quad\text{if $\#J\geq 2$ and there exists no $i\in\{1,\ldots,k\}$ with $J_{i}\subseteq\sigma^{-1}(J)$}\\ \in A_{\#J}(m)&\quad\text{otherwise}\end{array}\right.\] for all \(J\subseteq\sigma(I)\). Hence \[x^{\prime}_{J}=\epsilon(\sigma^{-1},J)x_{\sigma^{-1}(J)}\] \[\left\{\begin{array}{ll}=0_{m}^{A_{\#J}}&\quad\text{if $\#J\geq 2$ and there exists no $i\in\{1,\ldots,k\}$ with $\sigma(J_{i})\subseteq J$}\\ \in A_{\#J}(m)&\quad\text{otherwise}\end{array}\right.\] \[\left\{\begin{array}{ll}=0_{m}^{A_{\#J}}&\quad\text{if $\#J\geq 2$ and there exists no $i\in\{1,\ldots,k\}$ with $J_{i}^{\sigma}\subseteq J$}\\ \in A_{\#J}(m)&\quad\text{otherwise}\end{array}\right.\]
for all \(J\subseteq\sigma(I)\), which shows that \(\mathbf{x}^{\prime}=\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{x})\in\mathbb{E}_{\sigma}^{k}(\sigma(I))\), and so that \(\Psi_{\sigma}^{\mathrm{dec}}\) restricts to a natural isomorphism \[\Psi_{\sigma}^{\mathrm{dec}}\colon\mathbb{E}^{k}\to(\mathbb{E}_{\sigma}^{k})^{\sigma}.\] Now since \(\mathbb{E}_{\sigma}^{\binom{n}{2}}=\mathbb{E}^{\binom{n}{2}}=\mathbb{E}^{\mathrm{dec}}\) and \(\mathcal{S}_{\sigma}^{\binom{n}{2}}=\mathcal{S}^{\binom{n}{2}}=\mathcal{S}\), it suffices to show inductively that for \(k=0,\ldots,\binom{n}{2}\) and for \(\sigma\in S_{n}\) the diagram \[\begin{CD}\mathbb{E}^{k}@>{\mathcal{S}^{k}}>{}>\mathbb{E}\\ @V{}V{\Psi_{\sigma}^{\mathrm{dec}}}V@V{}V{\Phi_{\sigma}}V\\ \left(\mathbb{E}_{\sigma}^{k}\right)^{\sigma}@>{(\mathcal{S}_{\sigma}^{k})^{\sigma}}>{}>\mathbb{E}^{\sigma}\end{CD}\] of morphisms of \(n\)-fold vector bundles commutes. It is enough to check this at the top level, i.e. to check that \[\begin{CD}\mathbb{E}^{k}(\underline{n})@>{\mathcal{S}^{k}(\underline{n})}>{}>\mathbb{E}(\underline{n})\\ @V{}V{\Psi_{\sigma}^{\mathrm{dec}}(\underline{n})}V@V{}V{\Phi_{\sigma}(\underline{n})}V\\ \mathbb{E}_{\sigma}^{k}(\underline{n})@>{\mathcal{S}_{\sigma}^{k}(\underline{n})}>{}>\mathbb{E}(\underline{n})\end{CD}\] commutes. For \(k=0\) this is obviously the case since \(\Sigma\colon\overline{\mathbb{E}}\to\mathbb{E}\) is a symmetric linear splitting of \(\mathbb{E}\). Assume that the diagram commutes for a \(k\in\left\{0,\ldots,\binom{n}{2}-1\right\}\), and choose as above \(\mathbf{x}=(x_{I})_{I\subseteq\underline{n}}\in\mathbb{E}^{k+1}(\underline{n})\) over a base point \(m\in M\). Define \(\mathbf{y}:=(y_{I})_{I\subseteq\underline{n}}\in\mathbb{E}^{k}(\underline{n})\subseteq\mathbb{E}^{\mathrm{dec}}(\underline{n})\) and \(\mathbf{z}:=(z_{I})_{I\subseteq\underline{n}}\in(\mathbb{E}^{\mathrm{dec}})^{\underline{n}}_{\overline{J}_{k+1}}\) as in (37) and (38).
As above \(\mathbf{x}^{\prime}:=\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{x})\) is then an element of \(\mathbb{E}_{\sigma}^{k+1}(\underline{n})\) and the corresponding objects \(\mathbf{y}^{\prime}\in\mathbb{E}_{\sigma}^{k}(\underline{n})\) and \(\mathbf{z}^{\prime}\in(\mathbb{E}^{\mathrm{dec}})^{\overline{\sigma(J_{k+1})}}\) are then \(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{y})\) and \(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{z})\), respectively. Then, writing \(J_{k+1}=\{s,t\}\), \[\Psi_{\sigma}\left(\mathcal{S}^{k+1}(\mathbf{x})\right)=\Psi_{\sigma}\left(\mathcal{S}^{k}(\mathbf{y})\underset{\underline{n}\setminus\{s\}}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus\{s\}}\left(\mathcal{S}^{k}(\mathbf{y})\right)}\underset{\underline{n}\setminus\{t\}}{+}\mathcal{S}^{J_{k+1}}(\mathbf{z})\right)\right)\] \[=\Psi_{\sigma}\left(\mathcal{S}^{k}(\mathbf{y})\right)\underset{\underline{n}\setminus\{\sigma(s)\}}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus\{\sigma(s)\}}\left(\Psi_{\sigma}\left(\mathcal{S}^{k}(\mathbf{y})\right)\right)}\underset{\underline{n}\setminus\{\sigma(t)\}}{+}\Psi_{\sigma}\left(\mathcal{S}^{J_{k+1}}(\mathbf{z})\right)\right)\] \[=\mathcal{S}_{\sigma}^{k}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{y})\right)\underset{\underline{n}\setminus\{\sigma(s)\}}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus\{\sigma(s)\}}\left(\mathcal{S}_{\sigma}^{k}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{y})\right)\right)}\underset{\underline{n}\setminus\{\sigma(t)\}}{+}\mathcal{S}^{\sigma(J_{k+1})}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{z})\right)\right)\] \[=\mathcal{S}_{\sigma}^{k+1}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{x})\right)\] follows from the induction hypothesis and (42). The following is a more general version of the second statement of the previous theorem, and is the first step of the proof of the existence of a symmetric decomposition of a symmetric \(n\)-fold vector bundle.
**Proposition B.2**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be a symmetric \(n\)-fold vector bundle, and consider an \(l\)-partition \(\rho\) of \(\underline{n}\) for \(1\leq l\leq n\). Consider for each partition \(\rho^{\prime}\in\{\sigma(\rho)\mid\sigma\in S_{n}\}\) a linear splitting \(\Sigma^{\rho^{\prime}}\colon\overline{\mathbb{E}^{\rho^{\prime}}}\to\mathbb{ E}^{\rho^{\prime}}\). For each coarsement \(\underline{\rho}\) in \((l-1)\) subsets of a partition \(\rho^{\prime}\) in the \(S_{n}\)-orbit of \(\rho\) take a linear decomposition \(\mathcal{S}^{\underline{\rho}}\colon(\mathbb{E}^{\mathrm{dec}})^{\underline{ \rho}}\to\mathbb{E}^{\underline{\rho}}\) of the \((l-1)\)-core \(\mathbb{E}^{\underline{\rho}}\) of \(\mathbb{E}\). Assume that:_ 1. _For each_ \(\sigma\in S_{n}\)_, the diagram expressing_ \(\Phi_{\sigma}\circ\Sigma^{\rho}=(\Sigma^{\sigma(\rho)})^{\sigma}\circ\Psi_{\sigma}\) _commutes._ 2. _For each_ \(\sigma\in S_{n}\) _and each coarsement_ \(\underline{\rho}\) _of_ \(\rho\)__ \[\Phi_{\sigma}\circ\mathcal{S}^{\underline{\rho}}=\mathcal{S}^{\sigma( \underline{\rho})}\circ\Psi^{\mathrm{dec}}_{\sigma}\colon(\mathbb{E}^{\mathrm{ dec}})^{\underline{\rho}}\to\left(\mathbb{E}^{\sigma(\underline{\rho})}\right)^{ \sigma}.\] 3. _For two coarsements_ \(\rho_{1}\) _and_ \(\rho_{2}\) _of_ \(\rho\) _in_ \(l-1\) _subsets the two decompositions_ \(\mathcal{S}^{\rho_{1}}\) _and_ \(\mathcal{S}^{\rho_{2}}\) _are compatible as in (_21_)._ 4. \(\Sigma^{\rho}\) _is compatible with_ \(\mathcal{S}^{\underline{\rho}}\) _as in (_22_) for each coarsement_ \(\underline{\rho}\) _of_ \(\rho\) _in_ \(l-1\) _subsets._ _Then Proposition A.1 yields for each \(\sigma\in S_{n}\) a decomposition_ \[\mathcal{S}^{\sigma(\rho)}\colon(\mathbb{E}^{\mathrm{dec}})^{\sigma(\rho)} \to\mathbb{E}^{\sigma(\rho)}\] _of the \(l\)-core \(\mathbb{E}^{\sigma(\rho)}\) of \(\mathbb{E}\), that restricts to \(\Sigma^{\sigma(\rho)}\) and the decompositions of the appropriate \((l-1)\)-cores of \(\mathbb{E}\).
These decompositions satisfy_ \[\Phi_{\sigma}(I)\circ\mathcal{S}^{\rho}(I)=\mathcal{S}^{\sigma(\rho)}(\sigma( I))\circ\Psi^{\mathrm{dec}}_{\sigma}(I)\colon(\mathbb{E}^{\mathrm{dec}})^{\rho}(I) \to\mathbb{E}^{\sigma(\rho)}(\sigma(I))\] _for all \(\sigma\in S_{n}\) and all \(I\in\mathrm{Obj}(\Diamond^{\rho})\)._ Proof.: The proof is very similar to the proof of the second statement in Proposition B.1. Write \(\rho=\{I_{1},\ldots,I_{l}\}\) and build all sets \[J_{1},\ldots,J_{\binom{l}{2}}\in\mathrm{Obj}\left(\Diamond^{\rho}\right),\] each consisting in the union of exactly two elements of \(\rho\). Then for each \(\sigma\in S_{n}\) \[J_{1}^{\sigma}:=\sigma(J_{1}),\ldots,J_{\binom{l}{2}}^{\sigma}:=\sigma\left(J _{\binom{l}{2}}\right)\] is such a list of subsets for the partition \(\sigma(\rho)=\{\sigma(I_{1}),\ldots,\sigma(I_{l})\}\) of \(\underline{n}\). Each such subset \(J_{i}^{\sigma}\) defines a coarsement \(\underline{\rho}\) of \(\sigma(\rho)\), namely the one with \(J_{i}^{\sigma}\) replacing the two elements of \(\sigma(\rho)\) it is a union of. The corresponding splitting \[\mathcal{S}^{\underline{\rho}}\colon\mathbb{E}^{\mathrm{dec},\underline{\rho }}\to\mathbb{E}^{\underline{\rho}}\] which is chosen in the hypotheses of the theorem is named \(\mathcal{S}^{J_{i}^{\sigma}}\) in the following. 
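For orientation, the following small worked instance of these lists is an added illustration, not part of the original argument: for \(l=3\) and \(\rho=\{I_{1},I_{2},I_{3}\}\) there are \(\binom{3}{2}=3\) such unions.

```latex
% Added illustration: l = 3, rho = {I_1, I_2, I_3}, with the
% binom(3,2) = 3 two-element unions listed in an arbitrarily chosen order:
\[
J_{1}=I_{1}\cup I_{2},\qquad J_{2}=I_{1}\cup I_{3},\qquad J_{3}=I_{2}\cup I_{3},
\]
% and, for sigma in S_n, the corresponding lists for sigma(rho):
\[
J_{i}^{\sigma}=\sigma(J_{i}),\qquad i=1,2,3.
\]
% Each J_i^sigma defines the coarsement of sigma(rho) in which the two
% sets it is a union of are replaced by J_i^sigma; for sigma = id, e.g.,
\[
\rho_{J_{1}}=\{I_{1}\cup I_{2},\,I_{3}\}.
\]
```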
As in Section A these lists yield for each \(\sigma\in S_{n}\) and each \(k=1,\ldots,\binom{l}{2}\) a family \(\mathcal{A}^{k}_{\sigma}=(B^{k,\sigma}_{I})_{I\in\mathrm{Obj}(\Diamond^{ \sigma(\rho)})}\), \[B^{k,\sigma}_{I}=\left\{\begin{array}{ll}A_{\#I}&\text{if $I\in\sigma(\rho)$ or if there is $i\leq k$ such that $\sigma(J_{i})\subseteq I$;}\\ M&\text{otherwise.}\end{array}\right.\] A chain of decomposed \(l\)-fold vector bundles \(\Diamond^{\sigma(\rho)}\to\mathbf{Man}^{\infty}\) \[\overline{\mathbb{E}^{\sigma(\rho)}}=:\mathbb{E}^{0}_{\sigma}\hookrightarrow \mathbb{E}^{1}_{\sigma}\hookrightarrow\ldots\hookrightarrow\mathbb{E}^{ \binom{l}{2}}_{\sigma}:=\mathbb{E}^{\mathrm{dec},\sigma(\rho)},\] is defined by the families \(\mathcal{A}^{k}_{\sigma}\), \(k=0,\ldots,\binom{l}{2}\), as well as a family of monomorphisms \[\mathcal{S}^{0}_{\sigma}=\Sigma^{\sigma(\rho)}\colon\overline{\mathbb{E}^{ \sigma(\rho)}}\to\mathbb{E}^{\sigma(\rho)},\quad\mathcal{S}^{1}_{\sigma}\colon \mathbb{E}^{1}_{\sigma}\to\mathbb{E}^{\sigma(\rho)},\quad\ldots,\quad \mathcal{S}^{\binom{l}{2}}_{\sigma}=\mathcal{S}^{\sigma(\rho)}\colon\mathbb{E} ^{\mathrm{dec},\sigma(\rho)}\to\mathbb{E}^{\sigma(\rho)}\] of \(l\)-fold vector bundles \(\Diamond^{\sigma(\rho)}\to\mathbf{Man}^{\infty}\), where the first and last morphisms are \(\Sigma^{\sigma(\rho)}\) and \(\mathcal{S}^{\sigma(\rho)}\), respectively. Consider \[\mathbf{x}=(x_{J})_{\begin{subarray}{c}J\subseteq I\\ J\in\operatorname{Obj}(\Diamond^{\rho})\end{subarray}}\in\mathbb{E}_{\operatorname {id}}^{k}(I)\] for some \(k\in\{1,\dots,\binom{l}{2}\}\) and for \(\sigma=\operatorname{id}\in S_{n}\). 
Then as before there is an \(m\in M\) such that for all \(J\subseteq I\) with \(J\in\operatorname{Obj}(\Diamond^{\rho})\): \[x_{J}\left\{\begin{array}{ll}=0_{m}^{A_{\#J}}&\text{ if }J\not\in\rho\text{ and there exists no }i\in\{1,\dots,k\}\text{ with }J_{i}\subseteq J\\ \in A_{\#J}(m)&\text{ otherwise.}\end{array}\right.\] For \(\sigma\in S_{n}\) the image \(\mathbf{x}^{\prime}\) of this point \(\mathbf{x}\) under \(\Psi_{\sigma}^{\operatorname{dec}}\) lives in \(\mathbb{E}^{\operatorname{dec},\sigma(\rho)}(\sigma(I))\) and its coordinates satisfy \[x_{J}^{\prime} =\epsilon(\sigma^{-1},J)x_{\sigma^{-1}(J)}\] \[\left\{\begin{array}{ll}=0_{m}^{A_{\#J}}&\text{ if }\sigma^{-1}(J) \not\in\rho\text{ and there exists no }i\in\{1,\dots,k\}\text{ with }J_{i}\subseteq\sigma^{-1}(J)\\ \in A_{\#J}(m)&\text{ otherwise}\end{array}\right.\] for all \(J\subseteq\sigma(I)\) with \(J\in\operatorname{Obj}(\Diamond^{\sigma(\rho)})\). Hence \[x_{J}^{\prime} =\epsilon(\sigma^{-1},J)x_{\sigma^{-1}(J)}\] \[\left\{\begin{array}{ll}=0_{m}^{A_{\#J}}&\text{ if }J\not\in \sigma(\rho)\text{ and there exists no }i\in\{1,\dots,k\}\text{ with }\sigma(J_{i})\subseteq J\\ \in A_{\#J}(m)&\text{ otherwise}\end{array}\right.\] for all \(J\subseteq\sigma(I)\) with \(J\in\operatorname{Obj}(\Diamond^{\sigma(\rho)})\), which shows that \(\mathbf{x}^{\prime}=\Psi_{\sigma}^{\operatorname{dec}}(\mathbf{x})\in \mathbb{E}_{\sigma}^{k}(\sigma(I))\), and so that \(\Psi_{\sigma}^{\operatorname{dec}}\) restricts to a natural isomorphism \[\Psi_{\sigma}^{\operatorname{dec}}\colon\mathbb{E}^{k}\to(\mathbb{E}_{\sigma} ^{k})^{\sigma}\] of the functors \(\mathbb{E}^{k},(\mathbb{E}_{\sigma}^{k})^{\sigma}\colon\Diamond^{\rho}\to \mathbf{Man}^{\infty}\). 
Now since \(\mathbb{E}^{\binom{l}{2}}=(\mathbb{E}^{\operatorname{dec}})^{\rho}\), \(\mathbb{E}_{\sigma}^{\binom{l}{2}}=(\mathbb{E}^{\operatorname{dec}})^{\sigma( \rho)}\) and \(\mathcal{S}_{\operatorname{id}}^{\binom{l}{2}}=\mathcal{S}^{\rho}\) as well as \(\mathcal{S}_{\sigma}^{\binom{l}{2}}=\mathcal{S}^{\sigma(\rho)}\), it suffices to show inductively that for \(k=0,\dots,\binom{l}{2}\) and for \(\sigma\in S_{n}\) the diagram \[\begin{CD}\mathbb{E}^{k}_{\operatorname{id}}@>{\mathcal{S}^{k}_{\operatorname{id}}}>{}>\mathbb{E}^{\rho}\\ @V{}V{\Psi_{\sigma}^{\operatorname{dec}}}V@V{}V{\Phi_{\sigma}}V\\ \left(\mathbb{E}_{\sigma}^{k}\right)^{\sigma}@>{(\mathcal{S}_{\sigma}^{k})^{\sigma}}>{}>\left(\mathbb{E}^{\sigma(\rho)}\right)^{\sigma}\end{CD}\] of morphisms of \(l\)-fold vector bundles commutes. It is enough to check this at the top level, i.e. to check that \[\begin{CD}\mathbb{E}^{k}_{\operatorname{id}}(\underline{n})@>{\mathcal{S}^{k}_{\operatorname{id}}(\underline{n})}>{}>\mathbb{E}^{\rho}(\underline{n})\\ @V{}V{\Psi_{\sigma}^{\operatorname{dec}}(\underline{n})}V@V{}V{\Phi_{\sigma}(\underline{n})}V\\ \mathbb{E}_{\sigma}^{k}(\underline{n})@>{\mathcal{S}_{\sigma}^{k}(\underline{n})}>{}>\mathbb{E}^{\sigma(\rho)}(\underline{n})\end{CD}\] commutes. For \(k=0\) this is given by the first assumption in the theorem. Assume that the diagram commutes for a \(k\in\left\{0,\dots,\binom{l}{2}-1\right\}\), and choose \[\mathbf{x}=(x_{I})_{I\in\operatorname{Obj}(\Diamond^{\rho})}\in\mathbb{E}_{ \operatorname{id}}^{k+1}(\underline{n})\] over a base point \(m\in M\). Define \(\mathbf{y}:=(y_{I})_{I\in\operatorname{Obj}(\Diamond^{\rho})}\in\mathbb{E}_{ \operatorname{id}}^{k}(\underline{n})\subseteq\mathbb{E}^{\operatorname{dec}, \rho}(\underline{n})\) by \[y_{I}=\left\{\begin{array}{ll}x_{I}&\text{ if }I\in\rho\text{ or there is }i\leq k\text{ such that }J_{i}\subseteq I,\\ 0_{m}^{A_{\#I}}&\text{ otherwise.}\end{array}\right.\] and \(\mathbf{z}:=(z_{I})_{I\in\mathrm{Obj}(\Diamond^{\rho})}\in\mathbb{E}^{\mathrm{ dec},\rho_{J_{k+1}}}(\underline{n})\) by \[z_{I}=\left\{\begin{array}{cl}y_{I}&\text{ whenever }I\subseteq\underline{n} \setminus J_{k+1},\\ x_{I}&\text{ whenever }J_{k+1}\subseteq I\text{ and there is no }i\leq k\text{ with }J_{i} \subseteq I,\\ 0_{m}^{A_{\#I}}&\text{ otherwise.}\end{array}\right.\] Here, \(\rho_{J_{k+1}}\) is the \((l-1)\)-coarsement of \(\rho\) obtained by replacing by \(J_{k+1}\) the two sets \(J_{k+1}\) is the union of.
Then writing \(J_{k+1}=I_{s}\cup I_{t}\) with \(1\leq s<t\leq l\), it is easy to check that \[\mathbf{x}=\mathbf{y}\underset{\underline{n}\setminus I_{s}}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus I_{s}}(\mathbf{y})}\underset{\underline{n}\setminus I_{t}}{+}\mathbf{z}\right)=\mathbf{y}\underset{\underline{n}\setminus I_{t}}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus I_{t}}(\mathbf{y})}\underset{\underline{n}\setminus I_{s}}{+}\mathbf{z}\right)\,,\] and \(\mathcal{S}_{\mathrm{id}}^{k+1}(\mathbf{x})\) is then defined by \[\mathcal{S}_{\mathrm{id}}^{k+1}(\mathbf{x})=\mathcal{S}_{\mathrm{id}}^{k}(\mathbf{y})\underset{\underline{n}\setminus I_{s}}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus I_{s}}\left(\mathcal{S}_{\mathrm{id}}^{k}(\mathbf{y})\right)}\underset{\underline{n}\setminus I_{t}}{+}\mathcal{S}^{J_{k+1}^{\mathrm{id}}}(\mathbf{z})\right).\] As above \(\mathbf{x}^{\prime}:=\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{x})\) is then an element of \(\mathbb{E}_{\sigma}^{k+1}(\underline{n})\) and the corresponding objects \(\mathbf{y}^{\prime}\in\mathbb{E}_{\sigma}^{k}(\underline{n})\) and \(\mathbf{z}^{\prime}\in\mathbb{E}^{\mathrm{dec},\rho_{J_{k+1}^{\sigma}}}( \underline{n})\) are then \(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{y})\) and \(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{z})\), respectively.
Then \[\Phi_{\sigma}\left(\mathcal{S}_{\mathrm{id}}^{k+1}(\mathbf{x})\right)=\Phi_{\sigma}\left(\mathcal{S}_{\mathrm{id}}^{k}(\mathbf{y})\underset{\underline{n}\setminus I_{s}}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus I_{s}}\left(\mathcal{S}_{\mathrm{id}}^{k}(\mathbf{y})\right)}\underset{\underline{n}\setminus I_{t}}{+}\mathcal{S}^{J_{k+1}^{\mathrm{id}}}(\mathbf{z})\right)\right)\] \[=\Phi_{\sigma}\left(\mathcal{S}_{\mathrm{id}}^{k}(\mathbf{y})\right)\underset{\underline{n}\setminus\sigma(I_{s})}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus\sigma(I_{s})}\left(\Phi_{\sigma}\left(\mathcal{S}_{\mathrm{id}}^{k}(\mathbf{y})\right)\right)}\underset{\underline{n}\setminus\sigma(I_{t})}{+}\Phi_{\sigma}\left(\mathcal{S}^{J_{k+1}^{\mathrm{id}}}(\mathbf{z})\right)\right)\] \[=\mathcal{S}_{\sigma}^{k}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{y})\right)\underset{\underline{n}\setminus\sigma(I_{s})}{+}\left(\mathbf{0}^{\underline{n}}_{p_{\underline{n}\setminus\sigma(I_{s})}\left(\mathcal{S}_{\sigma}^{k}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{y})\right)\right)}\underset{\underline{n}\setminus\sigma(I_{t})}{+}\mathcal{S}^{J_{k+1}^{\sigma}}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{z})\right)\right)\] \[=\mathcal{S}_{\sigma}^{k+1}\left(\Psi_{\sigma}^{\mathrm{dec}}(\mathbf{x})\right)\] follows from the induction hypothesis and (42). **Proposition B.3**.: _Let \(\mathbb{E}\colon\square^{n}\to\mathbf{Man}^{\infty}\) be a symmetric \(n\)-fold vector bundle and consider an \(l\)-partition \(\rho\) of \(\underline{n}\). Assume that_ 1.
_For each_ \(\rho^{\prime}\) _in the_ \(S_{n}\)_-orbit of_ \(\rho\) _and for each_ \((l-1)\)_-coarsement_ \(\underline{\rho}\) _of_ \(\rho^{\prime}\) _there is a decomposition_ \(\mathcal{S}^{\underline{\rho}}\colon\mathbb{E}^{\mathrm{dec},\underline{\rho}} \to\mathbb{E}^{\underline{\rho}}\)_._ 2. _For each pair_ \(\rho_{1},\rho_{2}\) _of_ \((l-1)\)_-coarsements of a partition_ \(\rho^{\prime}\) _in the_ \(S_{n}\)_-orbit of_ \(\rho\)_, the decompositions_ \(\mathcal{S}^{\rho_{1}}\) _and_ \(\mathcal{S}^{\rho_{2}}\) _are compatible as in (_21_)._ 3. _For each_ \(\sigma\in S_{n}\) _and each coarsement_ \(\underline{\rho}\) _of_ \(\rho\)__ \[\Phi_{\sigma}\circ\mathcal{S}^{\underline{\rho}}=(\mathcal{S}^{\sigma(\underline{ \rho})})^{\sigma}\circ\Psi_{\sigma}^{\mathrm{dec}}\colon(\mathbb{E}^{\mathrm{ dec}})^{\underline{\rho}}\to\mathbb{E}^{\sigma(\underline{\rho})}.\] _Then for each_ \(\rho^{\prime}\) _in the_ \(S_{n}\)_-orbit of_ \(\rho\) _there exists a linear splitting_ \[\Sigma^{\rho^{\prime}}\colon\overline{\mathbb{E}^{\rho^{\prime}}}\to\mathbb{E}^{ \rho^{\prime}}\] _of_ \(\mathbb{E}^{\rho^{\prime}}\) _that is compatible as in (_22_) with all decompositions_ \(\mathcal{S}^{\underline{\rho}}\) _for all coarsements_ \(\underline{\rho}\) _of_ \(\rho^{\prime}\)_, and such that_ \[\Phi_{\sigma}\circ\Sigma^{\rho}=(\Sigma^{\sigma(\rho)})^{\sigma}\circ\Psi_{\sigma}.\] _for all_ \(\sigma\in S_{n}\)_._ Proof.: Recall that the following claim is proved in [11, Theorem 3.5], see also Section 4.4: Given an \(n\)-fold vector bundle \(\mathbb{E}\), with linear splittings \(\Sigma_{I}\) of \(\mathbb{E}^{I,\emptyset}\) for all \(I\subsetneq\underline{n}\) with \(\#I=n-1\), such that \(\Sigma_{I_{1}}(J)=\Sigma_{I_{2}}(J)\) whenever \(J\subseteq I_{1}\cap I_{2}\), then there exists a linear splitting \(\Sigma\) of \(\mathbb{E}\) with \(\Sigma(J)=\Sigma_{I}(J)\) whenever \(J\subseteq I\subseteq\underline{n}\). 
For a partition \(\rho\) of \(\underline{n}\) each \(I\in\operatorname{Obj}(\lozenge^{\rho})\) defines a face \(\mathbb{E}^{\rho,I}\) of \(\mathbb{E}^{\rho}\). It is the restriction of \(\mathbb{E}\) to the full subcategory of \(\lozenge^{\rho}\) with objects contained in \(I\). Choose an \(l\)-partition \(\rho\) of \(\underline{n}\). Take an \(l\)-partition \(\rho^{\prime}=\{I_{1},\ldots,I_{l}\}\) of \(\underline{n}\) in the \(S_{n}\)-orbit of \(\rho\) and choose an \((l-1)\)-coarsement \(\underline{\rho}\) of \(\rho^{\prime}\). Then there exist \(1\leq s<t\leq l\) such that \[\underline{\rho}=\{I_{s}\cup I_{t},I_{1},\ldots,I_{s-1},I_{s+1},\ldots,I_{t-1}, I_{t+1},\ldots,I_{l}\}=:\rho_{st}.\] The decomposition \(\mathcal{S}^{\underline{\rho}}\) of \(\mathbb{E}^{\underline{\rho}}\) defines then a decomposition of the face \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}\) of \(\mathbb{E}^{\rho^{\prime}}\) since \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}=\mathbb{E }_{\underline{\rho},\underline{n}\setminus(I_{s}\cup I_{t})}\) is also a face of \(\mathbb{E}^{\underline{\rho}}\). Denote by \(\Sigma_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}\) the induced linear splitting of \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}\). Fix \(s\in\{1,\ldots,l\}\). Then as above the \((l-2)\)-fold vector bundles \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}\), for \(t\in l\setminus\{s\}\), are all sides of the \((l-1)\)-fold vector bundle \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus I_{s}}\). Choose \(t,r\in\underline{l}\setminus\{s\}\). 
Then \[\rho_{st}\sqcap\rho_{sr}=\{I_{s}\cup I_{r}\cup I_{t}\}\cup\{I_{x}\mid x\in \underline{l}\setminus\{r,s,t\}\}.\] Since \(\mathcal{S}^{\rho_{st}}\) and \(\mathcal{S}^{\rho_{sr}}\) coincide on the common core \(\mathbb{E}^{\rho_{st}\sqcap\rho_{sr}}\) of \(\mathbb{E}^{\rho_{st}}\) and \(\mathbb{E}^{\rho_{sr}}\) by (21), they coincide in particular on its face \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{r}\cup I_{t})}\). Hence \(\mathcal{S}^{\rho_{st}}|_{\underline{n}\setminus(I_{s}\cup I_{r}\cup I_{t})}= \mathcal{S}^{\rho_{sr}}|_{\underline{n}\setminus(I_{s}\cup I_{r}\cup I_{t})}\) and the splittings \(\Sigma_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}\) and \(\Sigma_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{r})}\) satisfy consequently \[\Sigma_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}(I)=\Sigma_{ \rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{r})}(I)\] for all \(I\in\operatorname{Obj}(\lozenge^{\rho^{\prime}})\) with \(I\subseteq\underline{n}\setminus(I_{s}\cup I_{t}\cup I_{r})\). By the claim proved in [10, Theorem 3.5] there is consequently a linear splitting \(\Sigma_{\rho^{\prime},\underline{n}\setminus I_{s}}\) of \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus I_{s}}\), such that \[\Sigma_{\rho^{\prime},\underline{n}\setminus I_{s}}|_{\underline{n}\setminus (I_{s}\cup I_{t})}=\Sigma_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I _{t})} \tag{43}\] for all \(t\in\underline{l}\setminus\{s\}\). Apply this to each \(l\)-partition \(\rho^{\prime}\) in the \(S_{n}\)-orbit of \(\rho\) and obtain linear splittings of the \((l-1)\)-sides of \(\mathbb{E}^{\rho^{\prime}}\), that are compatible with the given linear splittings of its \((l-2)\)-sides. 
By the condition (2) in the hypotheses of the theorem, \[\Phi_{\sigma}(J)\circ\Sigma_{\rho^{\prime},\underline{n}\setminus I}(J)=\Sigma_ {\sigma(\rho^{\prime}),\underline{n}\setminus\sigma(I)}(\sigma(J))\circ\Psi_{ \sigma}(J) \tag{44}\] follows immediately for all \(\sigma\in S_{n}\) and for all \(J\in\operatorname{Obj}(\lozenge^{\rho^{\prime}})\) with \(J\subsetneq\underline{n}\setminus I\). Now choose \(\rho^{\prime}\) in the \(S_{n}\)-orbit of \(\rho\) and for \(I\in\rho^{\prime}\) define \[\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}\colon\overline{ \mathbb{E}^{\rho^{\prime},\underline{n}\setminus I}}\to\mathbb{E}^{\rho^{ \prime},\underline{n}\setminus I},\] by \[\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}(J)=\Sigma_{\rho^{ \prime},\underline{n}\setminus I}(J)\] for \(J\in\operatorname{Obj}(\lozenge^{\rho^{\prime}})\) with \(J\subsetneq\underline{n}\setminus I\) and \[\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}(\underline{n} \setminus I)=\frac{1}{n!}\cdot_{K}\sum_{\sigma\in S_{n}}^{K}\left(\Phi_{\sigma^ {-1}}\circ\Sigma_{\sigma(\rho^{\prime}),\underline{n}\setminus\sigma(I)}\circ \Psi_{\sigma}\right)(\underline{n}\setminus I)\] for any \(K\in\rho^{\prime}\setminus\{I\}\), where the scalar multiplication \(\cdot_{K}\) and the addition \(\sum^{K}\) are the scalar multiplication and the addition in the vector bundle \[p_{\underline{n}\setminus(I\cup K)}^{\underline{n}\setminus I}\colon\mathbb{E}^{\rho^{ \prime}}(\underline{n}\setminus I)\to\mathbb{E}^{\rho^{\prime}}(\underline{n} \setminus(I\cup K)).\] This sum is well-defined since for all such \(K\) and for all \(\sigma\in S_{n}\) \[p_{\underline{n}\setminus(I\cup K)}^{\underline{n}\setminus I}\circ\Phi_{\sigma^{-1}}( \underline{n}\setminus\sigma(I))\circ\Sigma_{\sigma(\rho^{\prime}),\underline{n} \setminus\sigma(I)}(\underline{n}\setminus\sigma(I))\circ\Psi_{\sigma}( \underline{n}\setminus I)\] \[\quad=\Phi_{\sigma^{-1}}(\underline{n}\setminus\sigma(I\cup K))\circ p_{\underline{n}\setminus(\sigma(I\cup
K))}^{\underline{n}\setminus\sigma(I)}\circ\Sigma_{\sigma(\rho^{\prime}),\underline{n}\setminus\sigma(I)}(\underline{n}\setminus\sigma(I))\circ\Psi_{\sigma}(\underline{n}\setminus I)\] \[\quad\stackrel{(43)}{=}\Phi_{\sigma^{-1}}(\underline{n}\setminus\sigma(I\cup K))\circ\Sigma_{\sigma(\rho^{\prime}),\underline{n}\setminus\sigma(I\cup K)}(\underline{n}\setminus\sigma(I\cup K))\circ\Psi_{\sigma}(\underline{n}\setminus(I\cup K))\circ p_{\underline{n}\setminus(I\cup K)}^{\underline{n}\setminus I}\] \[\quad\stackrel{(44)}{=}\Sigma_{\rho^{\prime},\underline{n}\setminus(I\cup K)}(\underline{n}\setminus(I\cup K))\circ p_{\underline{n}\setminus(I\cup K)}^{\underline{n}\setminus I},\] which does not depend on \(\sigma\). Hence all the summands project to the same point of \(\mathbb{E}^{\rho^{\prime}}(\underline{n}\setminus(I\cup K))\), and the sum is well-defined. It remains to check that \(\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}(\underline{n}\setminus I)\) is injective. Assume that \[\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}(\underline{n}\setminus I)\left((a_{J})_{J\in\rho^{\prime}\setminus\{I\}}\right)=\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}(\underline{n}\setminus I)\left((b_{J})_{J\in\rho^{\prime}\setminus\{I\}}\right)\] for \(\left((a_{J})_{J\in\rho^{\prime}\setminus\{I\}}\right)\) and \(\left((b_{J})_{J\in\rho^{\prime}\setminus\{I\}}\right)\) in
\(\overline{\mathbb{E}^{\rho^{\prime}}}(\underline{n}\setminus I)\). Then for each \(K\in\rho^{\prime}\setminus\{I\}\) \[\Sigma_{\rho^{\prime},\underline{n}\setminus(I\cup K)}(\underline{n} \setminus(I\cup K))\left((a_{J})_{J\in\rho^{\prime}\setminus\{I,K\}}\right) =\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}( \underline{n}\setminus(I\cup K))\left((a_{J})_{J\in\rho^{\prime}\setminus\{I, K\}}\right)\] \[=p_{\underline{n}\setminus(I\cup K)}^{\underline{n}\setminus I} \left(\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}(\underline{n} \setminus I)\left((a_{J})_{J\in\rho^{\prime}\setminus\{I\}}\right)\right)\] \[=p_{\underline{n}\setminus(I\cup K)}^{\underline{n}\setminus I} \left(\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}(\underline{n} \setminus I)\left((b_{J})_{J\in\rho^{\prime}\setminus\{I\}}\right)\right)\] \[=\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}( \underline{n}\setminus(I\cup K))\left((b_{J})_{J\in\rho^{\prime}\setminus\{ I,K\}}\right)\] \[=\Sigma_{\rho^{\prime},\underline{n}\setminus(I\cup K)}( \underline{n}\setminus(I\cup K))\left((b_{J})_{J\in\rho^{\prime}\setminus\{ I,K\}}\right).\] Since \(\Sigma_{\rho^{\prime},\underline{n}\setminus(I\cup K)}\) is a linear splitting of \(\mathbb{E}_{\rho^{\prime},\underline{n}\setminus(I\cup K)}\), hence a monomorphism of vector bundles, \[(a_{J})_{J\in\rho^{\prime}\setminus\{I,K\}}=(b_{J})_{J\in\rho^{\prime} \setminus\{I,K\}}.\] But since \(K\in\rho^{\prime}\setminus\{I\}\) was arbitrary, this shows \[(a_{J})_{J\in\rho^{\prime}\setminus\{I\}}=(b_{J})_{J\in\rho^{\prime}\setminus \{I\}}.\] This shows that \(\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I}\) is a monomorphism of \((l-1)\)-fold vector bundles, hence a linear splitting of the \((l-1)\)-fold vector bundle \(\mathbb{E}^{\rho^{\prime},\underline{n}\setminus I}\).
By construction the obtained collection of linear splittings of the sides of the \(l\)-fold vector bundles \(\mathbb{E}^{\rho^{\prime}}\) for all \(\rho^{\prime}\) in the \(S_{n}\)-orbit of \(\rho\) satisfies \[\Phi_{\sigma}(J)\circ\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I }(J)=\tilde{\Sigma}_{\sigma(\rho^{\prime}),\underline{n}\setminus\sigma(I)}( \sigma(J))\circ\Psi_{\sigma}(J)\] for all \(I\in\rho^{\prime}\), \(J\in\operatorname{Obj}(\lozenge^{\rho^{\prime}})\) with \(J\subseteq\underline{n}\setminus I\) and all \(\sigma\in S_{n}\). In addition for \(I_{1}\neq I_{2}\in\rho^{\prime}\) and \(J\in\operatorname{Obj}(\lozenge^{\rho^{\prime}})\) such that \[J\subseteq(\underline{n}\setminus I_{1})\cap(\underline{n}\setminus I_{2})= \underline{n}\setminus(I_{1}\cup I_{2}),\] \[\tilde{\Sigma}_{\rho^{\prime},\underline{n}\setminus I_{1}}(J)=\Sigma_{\rho^ {\prime},\underline{n}\setminus(I_{1}\cup I_{2})}(J)=\tilde{\Sigma}_{\rho^{ \prime},\underline{n}\setminus I_{2}}(J).\] As a consequence, there exists a linear splitting \[\Sigma^{\rho^{\prime}}\colon\overline{\mathbb{E}^{\rho^{\prime}}}\to\mathbb{E}^{ \rho^{\prime}}\] of \(\mathbb{E}^{\rho^{\prime}}\) with \[\Sigma^{\rho^{\prime}}|_{\underline{n}\setminus I}=\tilde{\Sigma}_{\rho^{ \prime},\underline{n}\setminus I}\] for all \(I\in\rho^{\prime}\). As above, an averaging of the obtained family of splittings (for all \(\rho^{\prime}\) in the \(S_{n}\)-orbit of \(\rho\)) shows that these linear splittings can be chosen such that \[\Phi_{\sigma}(J)\circ\Sigma^{\rho^{\prime}}(J)=\Sigma^{\sigma(\rho^{\prime})}( \sigma(J))\circ\Psi_{\sigma}(J) \tag{45}\] for all \(J\in\operatorname{Obj}(\lozenge^{\rho^{\prime}})\) and all \(\sigma\in S_{n}\).
The compatibility in (22) with the decompositions of the highest order cores is immediate by construction: For \(I_{s},I_{t}\in\rho^{\prime}\) as above and \(J\in\operatorname{Obj}(\lozenge^{\rho_{st}})\subseteq\operatorname{Obj}( \lozenge^{\rho^{\prime}})\) \[\Sigma^{\rho^{\prime}}(J)=\tilde{\Sigma}_{\rho,\underline{n}\setminus I_{s}}(J) =\Sigma_{\rho^{\prime},\underline{n}\setminus(I_{s}\cup I_{t})}(J)=\mathcal{ S}^{\rho_{st}}\circ\iota(J).\qed\] ### Proof of Theorem 5.10 Finally Theorem 5.10 can be proved. This is the subject of the remainder of this section. For simplicity, the action \(\Psi^{\mathrm{dec}}\) of \(S_{n}\) on the decomposed \(n\)-fold vector bundle \(\mathbb{E}^{\mathrm{dec}}\) is simply written \(\Psi\) in this proof. First consider all \(2\)-cores of \(\mathbb{E}\). As explained in Proposition 4.4, these are indexed by all possible partitions of \(\underline{n}\) in two subsets. Consider such a partition \(\rho=\{I,\underline{n}\setminus I\}\) for \(\emptyset\neq I\subsetneq\underline{n}\) and choose a linear decomposition of the \(2\)-core \(\mathbb{E}^{\rho}\), which is a double vector bundle with sides \(E^{I}_{I}\) and \(E^{\underline{n}\setminus I}_{\underline{n}\setminus I}\) and with core \(E^{\underline{n}}_{\underline{n}}\). 
After such a decomposition \(\mathcal{S}^{\rho^{\prime}}\) of \(\mathbb{E}^{\rho^{\prime}}\) has been chosen for each partition \(\rho^{\prime}\) of \(\underline{n}\) in two sets, choose again a fixed such partition \(\rho=\{I,\underline{n}\setminus I\}\) for \(\emptyset\neq I\subsetneq\underline{n}\) and define \[\mathcal{S}^{\rho}\colon\mathbb{E}^{\mathrm{dec},\rho}\to\mathbb{E}^{\rho}\] by \[\mathcal{S}^{\rho}(\underline{n}):=\frac{1}{n!}\cdot_{E^{I}_{I}}\sum_{\sigma \in S_{n}}^{E^{I}_{I}}\left(\Phi_{\sigma^{-1}}\circ\widetilde{\mathcal{S}^{ \sigma(\rho)}}\circ\Psi_{\sigma}\right)(\underline{n}), \tag{46}\] as well as \[\mathcal{S}^{\rho}(I)=\mathrm{id}_{E^{I}_{I}}\qquad\mathcal{S}^{\rho}( \underline{n}\setminus I)=\mathrm{id}_{E^{\underline{n}\setminus I}_{ \underline{n}\setminus I}}\] and \(\mathcal{S}^{\rho}(\emptyset)=\mathrm{id}_{M}\), respectively. (46) is well-defined since for all \(\sigma\in S_{n}\), \[p^{\underline{n}}_{I}\circ\Phi_{\sigma^{-1}}(\underline{n}) \circ\widetilde{\mathcal{S}^{\sigma(\rho)}}(\underline{n})\circ\Psi_{\sigma} (\underline{n}) =\Phi_{\sigma^{-1}}(\sigma(I))\circ p^{\underline{n}}_{\sigma(I )}\circ\widetilde{\mathcal{S}^{\sigma(\rho)}}(\underline{n})\circ\Psi_{\sigma }(\underline{n})\] \[=\Phi_{\sigma^{-1}}(\sigma(I))\circ\mathrm{id}_{E^{\sigma(I)}_{ \sigma(I)}}\circ p^{\underline{n}}_{\sigma(I)}\circ\Psi_{\sigma}(\underline{ n})\] \[=\Phi_{\sigma^{-1}}(\sigma(I))\circ\Psi_{\sigma}(I)\circ p^{ \underline{n}}_{I}\] \[=\left(\epsilon(\sigma,I)\cdot\mathrm{id}_{E^{I}_{I}}\right) \circ\left(\epsilon(\sigma,I)\cdot\mathrm{id}_{E^{I}_{I}}\right)\circ p^{ \underline{n}}_{I}=p^{\underline{n}}_{I},\] since \(\Phi_{\sigma^{-1}}(\sigma(I))\) and \(\Psi_{\sigma}(I)\), being the restrictions to \(\mathbb{E}^{\sigma(\rho)}(\sigma(I))=E^{\sigma(I)}_{\sigma(I)}\) and \(\mathbb{E}^{\mathrm{dec},\rho}(I)=E^{I}_{I}\) of \(\Phi_{\sigma^{-1}}\) and \(\Psi_{\sigma}\), must equal
\(\epsilon(\sigma^{-1},\sigma(I))\cdot\mathrm{id}_{E^{\sigma(I)}_{\sigma(I)}}= \epsilon(\sigma,I)\cdot\mathrm{id}_{E^{I}_{I}}\) and \(\epsilon(\sigma,I)\cdot\mathrm{id}_{E^{I}_{I}}\), respectively. Further, by the interchange law, (46) does not change if \(I\) is replaced by \(\underline{n}\setminus I\). By construction, \[\Phi_{\sigma}\circ\mathcal{S}^{\rho}=\mathcal{S}^{\sigma(\rho)}\circ\Psi_{ \sigma}\colon\mathbb{E}^{\mathrm{dec},\rho}\to(\mathbb{E}^{\sigma(\rho)})^{\sigma} \tag{47}\] for all \(\sigma\in S_{n}\) and all partitions \(\rho\) of \(\underline{n}\) in two subsets. All the \(2\)-partitions of \(\underline{n}\) have the unique common \(1\)-coarsement \(\underline{\rho}=\{\underline{n}\}\). Since the restriction to the ultracore of each decomposition of a \(2\)-core is the identity \(\mathrm{id}_{E^{\underline{n}}_{\underline{n}}}\), all the obtained \(2\)-core decompositions are compatible as in (21). By Proposition B.3 there exist linear splittings of the \(3\)-cores of \(\mathbb{E}\), which are symmetrically compatible and also compatible with all previously chosen decompositions of the \(2\)-cores. By Proposition B.2 there exist then symmetrically compatible decompositions of all the \(3\)-cores of \(\mathbb{E}\), that are compatible as in (21). A recursive use of Propositions B.3 and B.2 yields then for each \(l=3,\ldots,n\) symmetrically compatible decompositions of all \(l\)-cores, which are compatible as in (21). The claim is then proved at the last step \(l=n\).
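As a sanity check, the following added illustration (not part of the original proof) spells out formula (46) in the lowest case \(n=2\), where it reduces to the familiar symmetrization of a chosen decomposition of a double vector bundle.

```latex
% Added illustration: n = 2. The only 2-partition of {1,2} is
% rho = {{1},{2}}, which is fixed by both elements of S_2, so (46) reads
\[
\mathcal{S}^{\rho}(\underline{2})
=\frac{1}{2}\cdot_{E^{\{1\}}_{\{1\}}}
\left(\widetilde{\mathcal{S}^{\rho}}(\underline{2})
+^{E^{\{1\}}_{\{1\}}}
\left(\Phi_{\tau}\circ\widetilde{\mathcal{S}^{\rho}}\circ\Psi_{\tau}\right)
(\underline{2})\right),
\]
% where tau is the transposition (1 2): the average of a chosen
% decomposition with its image under the symmetry.
```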
arXiv:2308.16546v2
Inference of dynamic hypergraph representations in temporal interaction data
Alec Kirkley
2023-08-31T08:37:10Z
http://arxiv.org/abs/2308.16546v2
# Constructing hypergraphs from temporal data

###### Abstract

A wide range of systems across the social and natural sciences produce temporal data consisting of interaction events among nodes in disjoint sets. Online shopping, for example, generates purchasing events of the form (user, product, time of purchase), and mutualistic interactions in plant-pollinator systems generate pollination events of the form (insect, plant, time of pollination). These data sets can be meaningfully modeled as temporal hypergraph snapshots in which multiple nodes within one set (i.e. online shoppers) share a hyperedge if they interacted with a common node in the opposite set (i.e. purchased the same product) within a given time window, allowing for the application of a range of hypergraph analysis techniques. However, it is often unclear how to choose the number and duration of these temporal snapshots, which have a strong influence on the final hypergraph representations. Here we propose a principled, efficient, nonparametric solution to this longstanding problem by extracting temporal hypergraph snapshots that optimally capture structural regularities in temporal event data according to the minimum description length principle. We demonstrate our methods on real and synthetic datasets, finding that they can recover planted artificial hypergraph structure in the presence of considerable noise and reveal meaningful activity fluctuations in human mobility data.

## I Introduction

The recent fast-paced development of hypergraph modeling tools has opened up many new avenues for understanding the higher order structure of complex systems [1; 2].
In applications as diverse as crime prediction [3], social media analytics [4], and epidemiology [5], temporal data consisting of events involving distinct categories of entities--for example, users and comment threads in social media data or infected persons and locations in epidemiological data--can be represented as temporal hypergraph snapshots, providing a powerful lens with which to view these data sets. In this representation, nodes of one type (e.g. social media users) share a hyperedge if they were involved in an event with a particular node of the other type (e.g. commented on the same thread) within some specified time window. In some applications, there is a physically meaningful time window for the temporal hypergraph snapshots. For example, in epidemiology one may want to set the time scale of co-location in human mobility data to be on the order of days, in order to capture possible transmission risk from infected individuals who visited a given location. In this paper we are interested in situations where the time scale of interest is not clear ahead of time, and one must infer the characteristic time windows based on structural regularities in the event data itself. Such a need arises, for example, when identifying seasonality or anomalies in systems without clear physical time scales such as online shopping [6] or cybersystems [7], as well as in exploratory machine learning analyses of geolocalized events in urban planning [8] and ecology [9], among many other applications in the social and natural sciences. Existing methods for constructing networks or hypergraphs from temporal data often require each temporal event to have some non-zero duration (such representations are also called "interval graphs") [10; 11; 12], but event time intervals can be hard or impossible to infer from many data generating sources, including social media checkins, online purchases, and plant-pollinator interactions.
Other works choose uniform, pre-defined time windows for event aggregation [13], but the precise window size chosen for temporal network aggregation can have a sizeable impact on a wide variety of structural and dynamical characteristics including clustering and other centrality measures [14; 15], latent node geometries [16], consensus dynamics [17], controllability [18], epidemic spreading [19], and ecological processes [20]. Uniform time windows may also fail to capture the "bursty" dynamics of temporal network interactions, in which many events occur within short time periods that are separated by long time periods of inactivity [21; 22; 11]. A natural way to construct hypergraph snapshots from temporal event data that overcomes these problems is to identify time windows within which the events exhibit significant shared structure. Such structural regularities can be readily identified using information theory, which allows us to quantify the level of data compression we can achieve by exploiting these regularities to transmit the data to a receiver. Hypergraph representations that better encapsulate structural regularities in the event data will therefore result in better compression from an information theoretic perspective, which can be operationalized using the Minimum Description Length (MDL) principle [23]. The MDL principle states that the best model among a set of competing models for a given dataset is the one that can describe the data using the fewest symbols by exploiting its structural regularities [24]. The MDL principle is a powerful first-principles framework for model selection which has been employed in a range of graph mining and network science applications including community detection [25; 26; 27], significant subgraph identification [28; 29; 30; 31; 32], and graph summarization [33; 34; 35]. 
A few existing works have examined the aggregation of temporal network data into representative snapshots of varying duration by using regularities within the structure of the event data. In the method of Masuda and Holme [36], time is discretized into small time steps and unipartite interactions that occur within each timestep are aggregated into network snapshots. Then, a distance matrix is computed among these high resolution network snapshots using any user-specified network distance measure, and the snapshots are clustered using a hierarchical clustering algorithm to give a coarse-grained representation of the data. This method is similar in spirit to that of De Domenico et al [37], which aggregates multi-layer networks (which may or may not represent temporal snapshots) using a spectral distance between network layers. Kirkley et al [38] also approach the problem of aggregating multilayer network data, but using a nonparametric MDL approach that is motivated by the exploitation of shared edges in these layers. These methods all differ from the one proposed in this manuscript in a few crucial ways. First, and most importantly, they require the fundamental measured network units (i.e. the disaggregated snapshots) to have meaningful structure in and of themselves in order to compute the network distances and clustering criteria of interest. This requires an initial binning of events into network snapshots that aggregate many events at each time window, leading precisely to the problem studied in this paper. Second, these methods do not focus on constructing hypergraphs from events among disjoint sets of entities, which allows for additional structure (e.g. the degrees of each node set) to be exploited when segmenting event data and addresses a fundamentally different data summarization task. 
In this paper we first derive an objective function which computes the description length of a temporal event data set under a three-part encoding that exploits structural regularities and temporal localization in the events while using a temporal hypergraph representation of the data as an intermediate step. We develop an exact polynomial time dynamic programming algorithm and a fast approximate greedy algorithm that minimize this description length objective to find the MDL-optimal configuration of temporal hypergraph snapshots associated with the event dataset. Our methods are then applied in a variety of experiments involving real and synthetic datasets to demonstrate their utility and performance. We first examine the ability of these algorithms to reconstruct planted hypergraphs in synthetic data, finding that they can recover the planted structure with high accuracy even in the presence of considerable noise. Then we apply our methods to a longitudinal location-based social network (LBSN) dataset of checkins to various locations by app users, finding that we can compress this data to automatically extract meaningful regularities in these human mobility patterns.

## II Methods

### Temporal hypergraph binning from bipartite event data

Suppose we are given a dataset of \(N\) data points ("events") \(\mathcal{X}=\{\mathbf{x}_{1},...,\mathbf{x}_{N}\}\), where each data point \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\) consists of a source node \(s_{i}\), a destination node \(d_{i}\), and a time \(t_{i}\) when the event involving the source \(s_{i}\) and destination \(d_{i}\) occurred. For simplicity we can assume \(\mathcal{X}\) has been ordered in time (i.e. \(t_{i}<t_{i+1}\) for \(i=1,...,N-1\)), so that the entire time period of interest is \([t_{1},t_{N}]\).
We also assume that the source nodes \(\mathcal{S}\) and destination nodes \(\mathcal{D}\) form disjoint sets of sizes \(|\mathcal{S}|=S\) and \(|\mathcal{D}|=D\) respectively, and that we are interested in understanding the interactions of nodes in only one set (e.g. \(\mathcal{S}\)) as mediated by the events in \(\mathcal{X}\). Fig. 1(a) shows an example of an event data set \(\mathcal{X}\) consisting of \(N=10\) events with source nodes \(\mathcal{S}=\{1,2,3,4\}\), destination nodes \(\mathcal{D}=\{A,B,C\}\), and \(T=12\) time steps of size \(\Delta t\) with which we discretize the event times \(\{t_{i}\}\) (see Sec. II.2 for further details). Data in this form occurs in a wide variety of applications. Take as an example human mobility data \(\mathcal{X}\), where an event \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\) represents the presence of individual \(s_{i}\) at location \(d_{i}\) at time \(t_{i}\)--we will study this example in more detail in Sec. III using location-based social network data. In this case, for applications across epidemic modelling [5], sociology [39], and urban planning [40], one may be interested in the co-location patterns among individuals in \(\mathcal{S}\). Alternatively, in recommendation systems applications, purchasing data often consists of events in which a user \(s_{i}\) purchases a product \(d_{i}\) at time \(t_{i}\), and correlations among user purchasing behavior can be used for effective advertising of new products [41]. 
A natural representation of the event data \(\mathcal{X}\) in these and similar settings is as a set of hypergraph snapshots \(\mathcal{G}=\{\mathbf{G}^{(1)},...,\mathbf{G}^{(K)}\}\) corresponding to consecutive non-overlapping time intervals \(\{[t_{min}^{(k)},t_{max}^{(k)}]\}_{k=1}^{K}\) that partition the time interval \([t_{1},t_{N}]\), and where node \(s\in\mathcal{S}\) participates in a hyperedge labeled by \(d\in\mathcal{D}\) within hypergraph \(\mathbf{G}^{(k)}\) if and only if \(s\) is involved in an event with \(d\) in the time interval \([t_{min}^{(k)},t_{max}^{(k)}]\). Here we allow for a node \(s\) to be repeated any number of times within a hyperedge \(d\) to signal multiple events involving \(\{s,d\}\) in a given time window. For maximum generality, we also allow for \(\mathbf{G}^{(k)}\) to have self-loops (hyperedges with a single node) as well as multi-edges (distinct labelled hyperedges containing the same set of nodes). In other words, \(\mathbf{G}^{(k)}\) is not necessarily a _simple_ hypergraph. The hypergraph representation \(\mathbf{G}^{(k)}\) captures all the (potentially indirect) interactions among nodes in \(\mathcal{S}\) that occur via their interactions with nodes in \(\mathcal{D}\) during the time period \([t_{min}^{(k)},t_{max}^{(k)}]\), and can be analyzed using the wealth of newly available tools for higher order networks [1]. For simplicity of presentation, we will write the hypergraph snapshot \(\mathbf{G}^{(k)}\) in its weighted bipartite ("incidence") representation \(\mathbf{G}^{(k)}=\big{\{}(s,d,G_{sd}^{(k)})\big{\}}_{s,d=1}^{S,D}\), where \[G_{sd}^{(k)}=\sum_{(s_{i},d_{i},t_{i})\in\mathcal{X}}\mathbb{1}_{t_{i}\in[t_{min}^{(k)},t_{max}^{(k)}]}\delta_{s_{i},s}\delta_{d_{i},d} \tag{1}\] is the number of events involving the node \(s\) and the hyperedge \(d\) within the \(k\)-th time window.
This representation also naturally permits a symmetric treatment of the source set \(\mathcal{S}\) and destination set \(\mathcal{D}\), in case one is interested in the dual hypergraph representation with node set \(\mathcal{D}\) and hyperedge set \(\mathcal{S}\)--for example, to examine purchasing similarities among products rather than users. To construct a series of hypergraphs \(\mathcal{G}=\{\mathbf{G}^{(k)}\}_{k=1}^{K}\) of the form in Eq. 1 from event data \(\mathcal{X}\), one only needs to make two choices:

1. The number of temporal snapshots ("bins") \(K\).
2. The consecutive non-overlapping time intervals \(\{[t_{min}^{(k)},t_{max}^{(k)}]\}_{k=1}^{K}\).

Equivalently, in discretized time (see Sec. II.2), one just needs to specify the integer-valued interval widths \(\mathbf{\tau}=\{\tau_{1},...,\tau_{K}\}\), where \(\tau_{k}\Delta t=t_{max}^{(k)}-t_{min}^{(k)}\) and \(\sum_{k=1}^{K}\tau_{k}=T\) is the total number of timesteps in our discretization. The integer-valued widths \(\mathbf{\tau}\) alone fully specify the intervals \(\{[t_{min}^{(k)},t_{max}^{(k)}]\}_{k=1}^{K}\) in discretized time because of the consecutive, non-overlapping nature of the intervals discussed above. For this reason, we will refer to \(\mathbf{\tau}\) as the "binning" of the event data \(\mathcal{X}\). Any binning \(\mathbf{\tau}\)--a partition of (discrete) time--induces a partition of the events in \(\mathcal{X}\), which we denote with \(\mathcal{C}=\{C_{1},...,C_{K}\}\). \(C_{k}\), which we call the \(k\)-th "event cluster", is the set of data points \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\) such that \(t_{i}\in[t_{min}^{(k)},t_{max}^{(k)}]\). From \(C_{k}\) we can construct the \(k\)-th hypergraph snapshot \(\mathbf{G}^{(k)}\) using \[G_{sd}^{(k)}=\sum_{(s_{i},d_{i},t_{i})\in C_{k}}\delta_{s_{i},s}\delta_{d_{i},d}. \tag{2}\] We denote the number of events in \(C_{k}\) with \(m_{k}\), and the vector of sizes for all clusters in \(\mathcal{C}\) as \(\mathbf{m}\).
In this way, the integer vector \(\mathbf{\tau}=\{\tau_{1},...,\tau_{K}\}\) indicates the sizes of the \(K\) snapshots in terms of timesteps \(\Delta t\), and the integer vector \(\mathbf{m}=\{m_{1},...,m_{K}\}\) indicates the sizes of the snapshots in terms of the number of events \(\mathbf{x}\in\mathcal{X}\) they contain.

Figure 1: **Diagram of hypergraph binning method.** **(a)** Data set \(\mathcal{X}\), consisting of \(N=10\) events (\(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\)) involving a “source” \(s_{i}\in\mathcal{S}\) and “destination” \(d_{i}\in\mathcal{D}\) interacting at time \(t_{i}\). \(\mathcal{X}\) may, for example, be used to examine co-location patterns from user-location data or co-purchasing patterns among consumers in recommendation system analysis. Time is discretized into \(T\) timesteps to allow for data compression at a desired temporal resolution \(\Delta t=(t_{N}-t_{1})/T\). **(b)** Hypergraphs \(\mathcal{G}=\{\mathbf{G}^{(1)},\mathbf{G}^{(2)}\}\) extracted from partitioning the events \(\mathcal{X}\) into \(K=2\) clusters \(\mathcal{C}=\{\{\mathbf{x}_{1},...,\mathbf{x}_{6}\},\{\mathbf{x}_{7},...,\mathbf{x}_{10}\}\}\) with localized activity patterns. The inferred weighted hypergraphs \(\mathbf{G}^{(k)}\) are shown in both their incidence (bipartite) representation and their standard representation, with sources \(s\) mapped to nodes and destinations \(d\) mapped to hyperedges. **(c)** Three-stage information transmission process used to design a minimum description length objective (Eq. 12) to infer the hypergraphs \(\mathcal{G}\) from event data \(\mathcal{X}\). The data \(\mathcal{X}\) is transmitted at increasing levels of granularity, and the optimal hypergraphs \(\mathcal{G}\) (constructed using clusters \(\mathcal{C}\) of events) are selected as those that minimize the description length of the transmission process.
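As a concrete illustration of Eq. 2, the sketch below partitions a time-ordered event list by a given binning \(\mathbf{\tau}\) and accumulates the weighted incidence counts of each snapshot. The event list and its timestamps are hypothetical, chosen only to match the per-window counts described for Fig. 1(b); the function and variable names are ours, not from the paper's released code.

```python
from collections import Counter

def bin_events(events, tau):
    """Partition time-ordered events (s, d, t) into clusters induced by the
    binning tau (integer widths of consecutive discrete time windows), then
    build each weighted incidence representation {(s, d): G_sd} as in Eq. 2."""
    clusters, start = [], 1  # discrete time steps are taken to be 1-indexed
    for width in tau:
        end = start + width - 1
        clusters.append([e for e in events if start <= e[2] <= end])
        start = end + 1
    hypergraphs = [dict(Counter((s, d) for s, d, _ in C_k)) for C_k in clusters]
    return clusters, hypergraphs

# Hypothetical event list consistent with the counts reported for Fig. 1(b):
events = [(3, 'A', 1), (3, 'A', 2), (4, 'A', 3), (3, 'C', 4), (4, 'C', 5),
          (3, 'A', 6), (1, 'B', 8), (2, 'B', 9), (2, 'B', 10), (4, 'B', 11)]
clusters, graphs = bin_events(events, tau=[7, 5])  # T = 12 time steps
# graphs[0] == {(3, 'A'): 3, (4, 'A'): 1, (3, 'C'): 1, (4, 'C'): 1}
# graphs[1] == {(1, 'B'): 1, (2, 'B'): 2, (4, 'B'): 1}
```

With this binning the cluster sizes are \(\mathbf{m}=(6,4)\), matching the partition \(\mathcal{C}=\{\{\mathbf{x}_{1},...,\mathbf{x}_{6}\},\{\mathbf{x}_{7},...,\mathbf{x}_{10}\}\}\) of Fig. 1(b).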
Note that there may be multiple binnings \(\mathbf{\tau}\) that induce the same event partition \(\mathcal{C}\), since any "empty" time steps \(\Delta t\) (i.e., timesteps in which no events occur) at the boundary of a snapshot can be moved to an adjacent snapshot without changing the number of events occurring in each snapshot. In Fig. 1(b) we show a binning of an event dataset \(\mathcal{X}\) into \(K=2\) bins of widths \(\mathbf{\tau}=\{\tau_{1},\tau_{2}\}=\{7,5\}\), which induces an event partition \(\mathcal{C}=\{\{\mathbf{x}_{1},...,\mathbf{x}_{6}\},\{\mathbf{x}_{7},...,\mathbf{x}_{10}\}\}\) and hypergraphs \(\mathbf{G}^{(1)}=\{(3,A,3),(3,C,1),(4,A,1),(4,C,1)\}\), \(\mathbf{G}^{(2)}=\{(1,B,1),(2,B,2),(4,B,1)\}\). We show each hypergraph in both its bipartite incidence representation (along with its incidence matrix), as well as in its representation with nodes in \(\mathcal{S}=\{1,2,3,4\}\) and hyperedges in \(\mathcal{D}=\{A,B,C\}\).

### Minimum description length binning objective

The method we present in this paper provides a principled, efficient, nonparametric solution to identify hypergraph snapshots \(\mathcal{G}\) of any event dataset \(\mathcal{X}\) using the minimum description length (MDL) principle from information theory, which states that the best model among a set of candidate models is the one that provides the best compression (shortest description) of a dataset [23, 24]. We do this by constructing a three-part encoding that allows us to gradually transmit the data \(\mathcal{X}\) at increasing levels of granularity, with the hypergraphs \(\mathcal{G}\) transmitted as an intermediate step in the process. The less information this transmission process requires, the more the hypergraph binning process has compressed the data \(\mathcal{X}\) by capturing its statistical regularities, and the better the representation \(\mathcal{G}\).
The hypergraphs \(\mathcal{G}\) that result in the most efficient lossless transmission of the data set \(\mathcal{X}\) to a receiver (i.e., the lowest description length) give an MDL-optimal temporal hypergraph representation of \(\mathcal{X}\). In order to construct a lossless MDL objective, we need to discretize the relevant time interval \([t_{1},t_{N}]\) into small, uniform time steps of size \(\Delta t=(t_{N}-t_{1})/T\), where \(T\) is the number of time steps. The parameter \(T\) is technically a free parameter of the method to be chosen by the user, but we show empirically in Sec. III that it has little to no impact on inference results. For this reason we consider the proposed method to be nonparametric since it has no parameters that require tuning by the user other than \(T\), which can be set arbitrarily based on computational limitations (we discuss the time complexity of our methods in Sec. II.3). Given the discretization of time into intervals of width \(\Delta t\), we preprocess the data \(\mathcal{X}\) by rounding each \(t_{i}\) to the value of the closest time step, which will incur an error of at most \(\Delta t/2\) for each \(t_{i}\) and can potentially permit multiple events to occur simultaneously in the same time step. By discretizing time, we can then proceed with developing a lossless transmission scheme that results in perfect reconstruction of the discretized data \(\mathcal{X}\), and whose information content is computed using discrete combinatorial structures. However, due to the rounding, we are in effect performing _lossy_ compression with maximum distortion \(\Delta t/2\) in the time values reconstructed by a receiver. With this discretization in place, we can construct our MDL objective for communicating \(\mathcal{X}\) using the hypergraphs \(\mathcal{G}\) as an intermediate step. 
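The rounding preprocessing step can be sketched as follows. This is a minimal illustration under assumptions of our own: the text does not fix an indexing convention for the time steps, so we take them to be 1-indexed, and the helper name is hypothetical.

```python
def discretize_times(times, T):
    """Round each event time to the nearest of T uniform time steps spanning
    [t_1, t_N]; the rounding error is at most dt / 2 per event, and several
    events may land in the same time step."""
    t1, tN = times[0], times[-1]
    dt = (tN - t1) / T
    # map each t to a step index in 1..T (clamped at the right endpoint)
    steps = [min(T, int(round((t - t1) / dt)) + 1) for t in times]
    return steps, dt

steps, dt = discretize_times([0.0, 0.9, 2.3, 7.7, 10.0], T=10)
# dt = 1.0; steps = [1, 2, 3, 9, 10]
```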
The fundamental mechanism behind our encoding is that we can obtain compression of event data \(\mathcal{X}\) using hypergraphs \(\mathcal{G}\) that are localized in time as well as with respect to sources \(s\) and destinations \(d\). This is made possible by our encoding exploiting the redundancies in the events \(\mathbf{x}_{i}\) that take place within the event clusters \(\mathcal{C}\) corresponding to these hypergraphs. This localization is also consistent with previous findings that bipartite graphs and hypergraphs display heavy-tailed (hyper-)degrees [42, 43, 44], as well as "burstiness" in time [21, 22, 11]. Suppose we want to transmit the (temporally discretized) dataset \(\mathcal{X}\) to a receiver. We will assume that the number of events \(N\), the number of discrete timesteps \(T\), the number of sources (nodes) \(S\), and the number of destinations (hyperedges) \(D\) are known by the receiver. These are all integer constants and are of comparatively negligible information cost to transmit, so we can safely ignore them in our formulation. Suppose now that we do not use any intermediate steps in our transmission process that exploit event redundancies, and instead choose to communicate the data \(\mathcal{X}\) directly to the receiver as a set of completely independent events. The receiver knows there are \(N\) events \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\), each with \(S\) possible sources, \(D\) possible destinations, and \(T\) possible timesteps. Therefore, there are \((SDT)^{N}\) possible configurations of the data \(\mathcal{X}\), and so to specify to the receiver in binary which particular configuration corresponds to our dataset, we need to send a message of length up to approximately \[\mathcal{L}_{0}=\log_{2}((SDT)^{N})=N\log(SDT) \tag{3}\] bits, where we've used the notation \(\log\equiv\log_{2}\) for brevity.
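For the example of Fig. 1(a) (\(N=10\), \(S=4\), \(D=3\), \(T=12\)), the naive description length of Eq. 3 can be worked out directly:

```python
from math import log2

def naive_description_length(N, S, D, T):
    """Eq. 3: bits needed to transmit N independent events, each with
    S * D * T equally likely configurations."""
    return N * log2(S * D * T)

L0 = naive_description_length(N=10, S=4, D=3, T=12)
# L0 = 10 * log2(144) ≈ 71.7 bits
```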
The quantity \(\mathcal{L}_{0}\) is referred to as the _description length_ of the dataset \(\mathcal{X}\) under the naive one-level encoding scheme we've devised, which only uses the global information \(\{N,T,S,D\}\) to constrain the space of possible datasets \(\mathcal{X}\). A smarter way to transmit the data \(\mathcal{X}\) that exploits the redundancies we seek in our hypergraph representation \(\mathcal{G}\) is to transmit \(\mathcal{X}\) to the receiver in three stages by passing binary messages that communicate information at increasing levels of granularity and successively constrain the space of possible datasets \(\mathcal{X}\) until there is only one remaining possibility. Each step in our transmission scheme requires an information content (e.g. description length) given by the logarithm of the number of possible message configurations, as in Eq. 3. We will assume that the number of data bins (i.e. hypergraphs, event clusters) \(K\) is known by the receiver, and can ignore its information along with the other constants above. Crucially, although \(K\) is assumed known by the receiver, it remains a free variable in our description length optimization process, as we will see in Sec. II.3. Our transmission process proceeds as follows:

1. **Transmit aggregate cluster-level information (event cluster sizes \(\mathbf{m}\) and bin widths \(\mathbf{\tau}\)):**

* \(\mathbf{\tau}=\{\tau_{k}\}_{k=1}^{K}\) requires \(\log{T-1\choose K-1}\) bits of information to specify, as it consists of \(K\) positive integers that sum to \(T\).
* \(\mathbf{m}=\{m_{k}\}_{k=1}^{K}\) requires \(\log{N-1\choose K-1}\) bits of information to specify, as it consists of \(K\) positive integers that sum to \(N\).

The total information content of this first stage is therefore given by the sum of these two contributions: \[\mathcal{L}_{1}=\mathcal{L}(\mathbf{\tau},\mathbf{m})=\log{T-1\choose K-1}+\log{N-1\choose K-1}. \tag{4}\]

2.
**Transmit detailed cluster-level information (counts of sources, destinations, and timestamps for each event cluster \(C_{k}\)):**

* The number of instances (bipartite degree) of each source in event cluster \(C_{k}\) is stored in the vector \(\mathbf{s}^{(k)}=\{s_{r}^{(k)}\}_{r=1}^{S}\), with \(s_{r}^{(k)}\) the number of occurrences of source \(r\) in event cluster \(C_{k}\). Transmitting these counts requires \(\log{\left(\!\!\left(\!\!\begin{array}{c}S\\ m_{k}\end{array}\!\!\right)\!\right)}\) bits of information, where \({\left(\!\!\left(\!\!\begin{array}{c}y\\ x\end{array}\!\!\right)\!\right)}={x+y-1\choose y-1}\) is the multiset coefficient counting the number of ways to assign \(x\) objects to \(y\) distinct bins, allowing bins to be empty.
* The number of instances (bipartite degree) of each destination in event cluster \(C_{k}\) is stored in the vector \(\mathbf{d}^{(k)}=\{d_{r}^{(k)}\}_{r=1}^{D}\), with \(d_{r}^{(k)}\) the number of occurrences of destination \(r\) in event cluster \(C_{k}\). Transmitting these counts requires \(\log{\left(\!\!\left(\!\!\begin{array}{c}D\\ m_{k}\end{array}\!\!\right)\!\right)}\) bits of information.
* The number of events \(\mathbf{x}_{i}\) in event cluster \(C_{k}\) that occur at each discrete time step within the temporal boundaries of the cluster is stored in the vector \(\mathbf{n}^{(k)}=\{n_{t}^{(k)}\}_{t=1}^{\tau_{k}}\). Here, \(n_{t}^{(k)}\) is the number of events within event cluster \(C_{k}\) that fall into the \(t\)-th time step within the cluster's boundary (there are \(\tau_{k}\) time steps to choose from). Transmitting these counts requires \(\log{\left(\!\!\left(\!\!\begin{array}{c}\tau_{k}\\ m_{k}\end{array}\!\!\right)\!\right)}\) bits of information.
The total information content of this second stage is therefore given by the sum of these three contributions for each cluster \(C_{k}\): \[\mathcal{L}_{2}=\sum_{k=1}^{K}\mathcal{L}(\mathbf{s}^{(k)},\mathbf{d}^{(k)},\mathbf{n}^{(k)}|\tau_{k},m_{k}) \tag{5}\] \[=\sum_{k=1}^{K}\left[\log{\left(\!\!\left(\!\!\begin{array}{c}S\\ m_{k}\end{array}\!\!\right)\!\right)}+\log{\left(\!\!\left(\!\!\begin{array}{c}D\\ m_{k}\end{array}\!\!\right)\!\right)}+\log{\left(\!\!\left(\!\!\begin{array}{c}\tau_{k}\\ m_{k}\end{array}\!\!\right)\!\right)}\right]. \tag{6}\]

3. **Transmit the events \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\) within each cluster \(C_{k}\), which fully specifies \(\mathcal{X}\)**: We have the following three constraints based on previously transmitted information \[\sum_{r=1}^{S}s_{r}^{(k)}=m_{k}, \tag{7}\] \[\sum_{r=1}^{D}d_{r}^{(k)}=m_{k}, \tag{8}\] \[\sum_{t=1}^{\tau_{k}}n_{t}^{(k)}=m_{k}. \tag{9}\] Therefore, the number of non-negative integer-valued 3D tensors with margins defined by \(\mathbf{s}^{(k)},\mathbf{d}^{(k)},\mathbf{n}^{(k)}\) is the number of possibilities for \(\mathcal{X}\), and the logarithm of this quantity is the information content of this last step. However, this quantity itself is difficult to compute, so we can break up this last step into two stages, one of which involves the hypergraphs we are looking for:

* Transmit the hypergraph \(\mathbf{G}^{(k)}\) given the bipartite degree constraints \(\mathbf{s}^{(k)},\mathbf{d}^{(k)}\). This requires \(\log\Omega(\mathbf{s}^{(k)},\mathbf{d}^{(k)})\) bits of information, where \(\Omega(\mathbf{s}^{(k)},\mathbf{d}^{(k)})\) is the number of non-negative integer-valued matrices with margins \(\mathbf{s}^{(k)},\mathbf{d}^{(k)}\). \(\log\Omega(\mathbf{s}^{(k)},\mathbf{d}^{(k)})\) is in general difficult to compute exactly, but can be approximated in order \(\mathrm{O}(S+D)\) time using the effective columns approximation of [45].
* Transmit the final data points \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\) in cluster \(C_{k}\) given the hypergraph \(\mathbf{G}^{(k)}\) and the time step counts \(\mathbf{n}^{(k)}\). Transmitting these requires \(\log\Omega(\mathbf{G}^{(k)},\mathbf{n}^{(k)})\) bits of information, where \(\Omega(\mathbf{G}^{(k)},\mathbf{n}^{(k)})\) is the number of non-negative integer-valued matrices with margins \(\mathbf{n}^{(k)}\) and \(\{G_{sd}^{(k)}\}_{s,d=1}^{S,D}\). Using the approximation in [45], \(\log\Omega(\mathbf{G}^{(k)},\mathbf{n}^{(k)})\) can be estimated in order \(\mathrm{O}(SD+\tau_{k})\) time.

The total information content of this third stage is therefore given by the sum of these two contributions for each cluster \(C_{k}\): \[\mathcal{L}_{3}=\sum_{k=1}^{K}\big{[}\mathcal{L}(\mathbf{G}^{(k)}|\mathbf{s}^{(k)},\mathbf{d}^{(k)})+\mathcal{L}(C_{k}|\mathbf{G}^{(k)},\mathbf{n}^{(k)})\big{]} \tag{10}\] \[=\sum_{k=1}^{K}\big{[}\log\Omega(\mathbf{s}^{(k)},\mathbf{d}^{(k)})+\log\Omega(\mathbf{G}^{(k)},\mathbf{n}^{(k)})\big{]}. \tag{11}\]

Summing the description length of each stage, we have a total description length of \[\mathcal{L}_{total}(\mathcal{X},\mathbf{\tau})=\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{3}, \tag{12}\] where we've explicitly noted the functional dependence of \(\mathcal{L}\) on the binning \(\mathbf{\tau}\), since the description length of the data \(\mathcal{X}\) under our transmission scheme--including the hypergraphs \(\mathcal{G}\)--can be computed when \(\mathbf{\tau}\) is known. Fig. 1(c) shows a schematic of the three stages described above, for the event dataset in Fig. 1(a). In the next section we describe how to minimize the description length of Eq. 12 over all binnings \(\mathbf{\tau}\) to find the MDL set of hypergraphs \(\mathcal{G}\) that best summarize the temporal event data \(\mathcal{X}\).

### Optimization and model selection

The description length of Eq.
12 amounts to a one-dimensional clustering objective over binnings \(\mathbf{\tau}\). Therefore, if our objective consisted of independent terms for each cluster (akin to the objective for K-means clustering), then we could optimize it exactly using a dynamic programming approach [46, 47, 48, 38]. Only the first term in Eq. 12 couples clusters together, but in the regime we are interested in, where \(K\ll T,N\), we can rewrite Eq. 12 as (up to irrelevant constant factors) \[\mathcal{L}_{total}(\mathcal{X},\mathbf{\tau})=\sum_{k=1}^{K}\mathcal{L}_{cluster}^{(k)}, \tag{13}\] where \[\mathcal{L}_{cluster}^{(k)}=\log(N-1)(T-1)\] \[+\log\left(\!\!\left(\!\!\begin{array}{c}S\\ m_{k}\end{array}\!\!\right)\!\right)\left(\!\!\left(\!\!\begin{array}{c}D\\ m_{k}\end{array}\!\!\right)\!\right)\left(\!\!\left(\!\!\begin{array}{c}\tau_{k}\\ m_{k}\end{array}\!\!\right)\!\right) \tag{14}\] \[+\left[\log\Omega(\mathbf{s}^{(k)},\mathbf{d}^{(k)})+\log\Omega(\mathbf{G}^{(k)},\mathbf{n}^{(k)})\right].\] We can now minimize our MDL objective in Eq. 13 using a dynamic program. The key intuition behind this is that since the objective in Eq. 13 is a sum of independent terms over clusters in one dimension, its minimum over the first \(j\) time steps--i.e., the optimal binning \(\mathbf{\tau}\) restricted to these first \(j\) time steps--must consist of the optimal binning up to some time step \(i\in\{1,...,j\}\) (excluding the \(i\)-th time step) plus a final cluster of time steps \(i,...,j\). In other words, for all \(j\in[1,T]\) we have \[\mathcal{L}_{\text{MDL}}^{(j)}=\min_{i\in[1,j]}\left\{\mathcal{L}_{\text{MDL}}^{(i-1)}+\mathcal{L}_{cluster}^{([i,j])}\right\}, \tag{15}\] where \(\mathcal{L}_{\text{MDL}}^{(j)}\) is the minimum value of Eq. 13 when we include only the first \(j\) time steps \(\Delta t\), and \(\mathcal{L}_{cluster}^{([i,j])}\) is the cluster-level description length of Eq. 14 evaluated at the cluster containing consecutive time steps \(\{i,...,j\}\).
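The recursion of Eq. 15 can be sketched as a generic dynamic program over an arbitrary additive per-cluster cost. The toy cost below is ours, standing in for Eq. 14 (the full cost, including the \(\log\Omega\) approximations, is omitted for brevity); the function names are hypothetical.

```python
def mdl_binning(T, cluster_cost):
    """Exact dynamic program for Eq. 15: minimize a sum of independent
    per-cluster costs over all binnings of T consecutive time steps.
    cluster_cost(i, j) is the cost of one cluster covering steps i..j
    (1-indexed, inclusive). Returns (optimal cost, list of bin widths)."""
    best = [0.0] * (T + 1)  # best[j] = minimal cost over the first j steps
    back = [0] * (T + 1)    # back[j] = start step of the last cluster
    for j in range(1, T + 1):
        best[j], back[j] = min(
            (best[i - 1] + cluster_cost(i, j), i) for i in range(1, j + 1))
    # recover the optimal binning tau from the backpointers
    tau, j = [], T
    while j > 0:
        i = back[j]
        tau.append(j - i + 1)
        j = i - 1
    return best[T], tau[::-1]

# Toy cost: fixed overhead per cluster plus a quadratic width penalty, so the
# optimum trades off having few clusters against having narrow ones.
cost, tau = mdl_binning(6, lambda i, j: 3.0 + (j - i + 1) ** 2)
# cost == 21.0, tau == [2, 2, 2]
```

Because the whole unconstrained space of binnings is searched, the number of bins \(K\) falls out of the optimization rather than being fixed in advance, mirroring the model-selection behavior described in the text.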
Setting \(\mathcal{L}_{\text{MDL}}^{(0)}=0\) and iterating over all \(j\in[1,T]\), we find the minimum of the description length in Eq. 13, giving us the optimal binning \(\mathbf{\tau}\) for the data \(\mathcal{X}\) according to the MDL principle. In addition to finding the exact optimum over binnings \(\mathbf{\tau}\), this dynamic programming approach has the advantage of automatically selecting the optimal number of bins \(K\), since the entire unconstrained space of binnings \(\mathbf{\tau}\) is explored by the algorithm. The objective function in Eq. 13 will naturally penalize high values of \(K\) since we will waste information describing the clusters and increase the total description length if \(K\) is too high. On the other hand, Eq. 13 will also naturally penalize values of \(K\) that are too low, since we will waste information describing the events within each cluster if they are too heterogeneous and/or spread over too large a time period. The MDL-optimal binning \(\mathbf{\tau}\) therefore balances the information required to describe the clusters and the information required to describe the data within each cluster by selecting an appropriate number of clusters \(K\) using the data itself. To quantify the extent to which our method has achieved compression over a naive one-level encoding, we could take the ratio of the optimal description length \(\mathcal{L}_{\text{MDL}}\) from Eq. 13 to the description length of Eq. 3. However, in our case it is of more interest to determine how much of a compression gain we achieve when we use an optimal configuration of multiple hypergraphs to summarize the temporal event data \(\mathcal{X}\), versus using only a single hypergraph that aggregates all the events together. We therefore construct an _inverse compression ratio_ \(\eta\) which computes our compression gain as \[\eta=\frac{\mathcal{L}_{\text{MDL}}}{\mathcal{L}(K=1)}, \tag{16}\] where \(\mathcal{L}(K=1)\) is the description length of Eq.
13 when all events are put into a single event cluster. A value \(\eta=1\) implies that the event dataset \(\mathcal{X}\) is not compressible using multiple hypergraphs, while \(\eta\ll 1\) implies that the event dataset \(\mathcal{X}\) can be greatly compressed using a representation of multiple hypergraphs. Due to the last two terms in Eq. 14, evaluating \(\mathcal{L}_{cluster}^{([i,j])}\) has a time complexity that scales like \(\text{O}(N(j-i)/T+SD+(j-i))\), assuming evenly spaced data in time and using the approximation in [45] for the number \(\Omega\) of non-negative integer matrices with fixed margins. Iterating over all \(j\) and \(i\) in the recursion then gives a total complexity of roughly \(\text{O}((SD+N+T)T^{2})\) for the dynamic programming algorithm using a naive implementation. However, we can speed up the method by saving the \(\mathcal{L}_{cluster}^{([i,j])}\) values as they are computed (requiring \(\text{O}(T^{2})\) space), since \(\mathcal{L}_{cluster}^{([i,j])}\) can be computed from \(\mathcal{L}_{cluster}^{([i,j-1])}\) in constant time if time step \(j\) has no events. This speed-up results in a total time complexity of roughly \(\text{O}(T^{2})\) for the dynamic program when many time steps have no events. Despite this speed-up, the time complexity of our exact dynamic programming solution may be too high for practical applications that involve large time periods or that require high values of \(T\) for sufficient temporal resolution. In such cases we can use a greedy heuristic optimization method where we start with all time steps \(i=1,...,T\) in their own cluster and iteratively merge the pair of adjacent clusters that gives the greatest decrease in the description length in Eq. 13. We save the description length changes induced by all proposed merges (including those that were sub-optimal) and perform greedy merges until all time steps are in a single cluster.
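The greedy merging procedure just described can be sketched as follows. This is a simplified, pure-Python illustration: `cluster_cost(i, j)` is a hypothetical stand-in for Eq. 14, and this naive version re-evaluates every adjacent merge each round rather than caching proposed description length changes, so it does not attain the near-linear runtime of the paper's implementation.

```python
def greedy_binning(cluster_cost, T):
    """Greedy agglomerative binning: start from singleton time-step
    clusters and repeatedly merge the adjacent pair of clusters that
    most reduces the total description length, tracking the best
    configuration seen across all K = T, ..., 1."""
    bins = [(i, i) for i in range(1, T + 1)]
    total = sum(cluster_cost(i, j) for i, j in bins)
    best_total, best_bins = total, list(bins)
    while len(bins) > 1:
        # Evaluate every adjacent merge; take the best even if it
        # increases the objective, so that every K gets visited.
        candidates = []
        for k in range(len(bins) - 1):
            a, _ = bins[k]
            _, d = bins[k + 1]
            delta = (cluster_cost(a, d)
                     - cluster_cost(*bins[k]) - cluster_cost(*bins[k + 1]))
            candidates.append((delta, k, (a, d)))
        delta, k, merged = min(candidates)
        bins[k:k + 2] = [merged]
        total += delta
        if total < best_total:
            best_total, best_bins = total, list(bins)
    return best_total, best_bins
```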
We then pick the value of \(K\) for which the total description length was minimized over our set of merges. This greedy optimization method is not guaranteed to find the exact optimum, but it has a time complexity that is nearly \(\text{O}(T)\) in practice, as for each \(K=T,...,1\) we will only have to update the \(\log\Omega\) terms for two merge pairs (those involving clusters adjacent to the one most recently merged). Empirically, this greedy method appears to achieve MDL values that are nearly optimal but with much faster runtimes than the dynamic programming approach. See Sec. III.1 and Appendix A for numerical experiments computing the runtimes and inverse compression ratios achieved by the two algorithms on synthetic and real datasets respectively. Code for the algorithms presented in this paper can be found at [https://github.com/aleckirkley/hypergraph-binning](https://github.com/aleckirkley/hypergraph-binning).

## III Results

### Reconstruction of synthetic data

To examine the performance of the algorithms presented in Sec. II.3, we can generate synthetic data consisting of planted event clusters \(\mathcal{C}\) with binnings \(\mathbf{\tau}\) and test the ability of these algorithms to recover the planted clusters at various levels of injected noise. We generated synthetic datasets with \(N\in\{200,500,1000\}\), \(T\in\{50,500\}\), \(K\in\{2,5,10\}\), and \(S=D=5\) (the results did not depend on \(S\) and \(D\)) in order to examine a range of model settings for the reconstruction tests. The synthetic event clusters \(\mathcal{C}\) are generated by first drawing a partition of the \(N\) events and \(T\) time steps into \(K\) sets uniformly at random, then drawing the time step counts \(\mathbf{n}^{(k)}\) uniformly at random within each cluster.
To control the level of heterogeneity across the \(K\) synthetic event clusters--which in turn controls the level of noise in the partition of the events, and consequently the reconstruction difficulty--our synthetic model includes a parameter \(\gamma\geq 0\) which determines the localization of the edges \((s,d)\) within hypergraph \(\mathbf{G}^{(k)}\) on sources \(s\in\mathcal{S}\) and destinations \(d\in\mathcal{D}\). More specifically, for each cluster \(C_{k}\) we independently generate the bipartite degrees \(\mathbf{s}^{(k)}\) and \(\mathbf{d}^{(k)}\) from a Dirichlet-Multinomial distribution with \(m_{k}\) trials and concentration parameter \(\gamma\mathbf{1}\), then draw the bipartite graph \(\mathbf{G}^{(k)}\) at random from the set of non-negative integer matrices with row and column sums \(\mathbf{s}^{(k)}\) and \(\mathbf{d}^{(k)}\) using the algorithm of [49]. This will create more localized bipartite degree distributions within hypergraph \(\mathbf{G}^{(k)}\) (thus, more localized edge weights \(G_{sd}^{(k)}\)) and a higher variance across clusters as \(\gamma\to 0\). The concentration parameter \(\gamma\) thus serves as a tunable parameter that determines the distinguishability of the generated synthetic clusters, with \(\gamma\to 0\) increasing the distinguishability of the clusters (i.e., increasing the signal-to-noise ratio). In Fig. 2 we plot the results of our synthetic reconstruction experiments. In Fig. 2(a) we show the inverse compression ratio \(\eta\) (Eq. 16) versus the logarithm of the planted heterogeneity \(\gamma\) for \(N\in\{200,500,1000\}\), averaged over 30 simulations at each combination of \(K\) and \(T\). Error bars represent three standard errors in the mean values estimated from the simulations, and the solid and dotted curves correspond to the dynamic programming and greedy algorithms described in Sec. II.3 respectively. We set \(\Delta t=1\) for the reconstruction simulations.
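The Dirichlet-Multinomial degree sampling described above can be sketched in pure Python as below. This is an illustration under our own naming; it omits the final step of drawing \(\mathbf{G}^{(k)}\) uniformly from the non-negative integer matrices with the sampled margins via the algorithm of [49].

```python
import random

def sample_cluster_degrees(m_k, S, D, gamma, seed=0):
    """Sample bipartite source/destination degree sequences for one
    synthetic event cluster from a Dirichlet-Multinomial distribution
    with m_k trials and concentration parameter gamma * 1.
    Smaller gamma concentrates the m_k events on fewer nodes."""
    rng = random.Random(seed)

    def dirichlet(n):
        # A Dirichlet(gamma, ..., gamma) draw via normalized Gamma variates.
        g = [rng.gammavariate(gamma, 1.0) for _ in range(n)]
        total = sum(g)
        return [x / total for x in g]

    def multinomial(m, p):
        # m independent categorical draws with probabilities p.
        counts = [0] * len(p)
        for _ in range(m):
            counts[rng.choices(range(len(p)), weights=p)[0]] += 1
        return counts

    return multinomial(m_k, dirichlet(S)), multinomial(m_k, dirichlet(D))
```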
We can see that as the noise level \(\gamma\) increases from \(\gamma=10^{-3}\) to \(\gamma=1\), the synthetic event data \(\mathcal{X}\) become less and less compressible, and that a larger number of events \(N\) results in better compression at all noise levels due to additional statistical evidence for the structure of each cluster. These results indicate that substantial data compression is possible using our algorithm, even at relatively high noise levels. We can also see very similar compression performance between the exact and greedy algorithms, indicating that the greedy method is achieving near-optimal compression at much lower computational cost than the dynamic programming method. As there are multiple binnings \(\mathbf{\tau}\) that could correspond to any given set of event clusters \(\mathcal{C}\)--any binning that preserves the event partition while shifting the cluster boundaries in time--there is always a high level of uncertainty in the reconstruction of \(\mathbf{\tau}\), even with perfectly distinguishable event clusters \(\mathcal{C}\). We therefore quantify the reconstruction accuracy in our simulations by representing an event partition \(\mathcal{C}\) as a 1D partition of the temporally ordered event indices \(\{1,...,N\}\), then computing a mutual information measure between the 1D partitions corresponding to the planted clusters \(\mathcal{C}_{\text{pl}}\) and the clusters \(\mathcal{C}_{\text{in}}\) inferred by our algorithm. However, standard mutual information-based measures [50] are poorly suited for contiguous, low-dimensional partitions such as the ones we compare here, because they compute the partition similarity relative to an unconstrained (i.e., not necessarily contiguous) space of partitions that is much larger in size and much less structured than the space of contiguous partitions [38, 51].
This results in artificially inflated values of the mutual information between contiguous partitions that may have little correlation other than that induced by their contiguity. With this in mind, here we construct a contiguity-corrected adjusted mutual information (CCAMI) to compute the similarity between the event clusters \(\mathcal{C}_{\mathrm{pl}}\) and \(\mathcal{C}_{\mathrm{in}}\), which is given by \[\mathrm{CCAMI}(\mathcal{C}_{\mathrm{pl}},\mathcal{C}_{\mathrm{in}})=\frac{ \mathrm{MI}(\mathcal{C}_{\mathrm{pl}},\mathcal{C}_{\mathrm{in}})-\langle \mathrm{MI}(\mathcal{C}_{\mathrm{pl}},\mathcal{C}_{\mathrm{in}})\rangle_{c}}{ \mathrm{max}\{\mathrm{H}(\mathcal{C}_{\mathrm{pl}}),\mathrm{H}(\mathcal{C}_{ \mathrm{in}})\}-\langle\mathrm{MI}(\mathcal{C}_{\mathrm{pl}},\mathcal{C}_{ \mathrm{in}})\rangle_{c}}, \tag{17}\] where \[\mathrm{MI}(\mathcal{C}_{\mathrm{pl}},\mathcal{C}_{\mathrm{in}})=\mathrm{H}( \mathcal{C}_{\mathrm{pl}})+\mathrm{H}(\mathcal{C}_{\mathrm{in}})-\mathrm{H}( \mathcal{C}_{\mathrm{pl}},\mathcal{C}_{\mathrm{in}}) \tag{18}\] is the standard mutual information between partitions \(\mathcal{C}_{\mathrm{pl}}\) and \(\mathcal{C}_{\mathrm{in}}\), and \(\langle\mathrm{MI}(\mathcal{C}_{\mathrm{pl}},\mathcal{C}_{\mathrm{in}})\rangle _{c}\) is the expected value of this mutual information over all possible contiguous partitions with the same numbers of groups as \(\mathcal{C}_{\mathrm{in}}\) and \(\mathcal{C}_{\mathrm{pl}}\). Eq. 17 quantifies how much information is shared between the planted event partition \(\mathcal{C}_{\mathrm{pl}}\) and our inferred event partition \(\mathcal{C}_{\mathrm{in}}\), relative to all pairs of contiguous partitions with the same numbers of clusters. 
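A minimal Python sketch of the CCAMI computation of Eq. 17 follows, with the null term \(\langle\mathrm{MI}\rangle_{c}\) estimated by averaging over random contiguous partitions. Function names are illustrative, not from the paper's code.

```python
import math
import random
from collections import Counter

def mutual_info(a, b):
    """MI (in nats) between two partitions of the same ordered index
    set, given as equal-length lists of cluster labels."""
    N = len(a)
    H = lambda c: -sum(n / N * math.log(n / N) for n in c.values())
    return H(Counter(a)) + H(Counter(b)) - H(Counter(zip(a, b)))

def random_contiguous(N, K, rng):
    """A random partition of indices 0..N-1 into K contiguous clusters,
    returned as a nondecreasing label list."""
    cuts = sorted(rng.sample(range(1, N), K - 1)) + [N]
    labels, prev = [], 0
    for k, c in enumerate(cuts):
        labels += [k] * (c - prev)
        prev = c
    return labels

def ccami(planted, inferred, samples=100, seed=0):
    """Contiguity-corrected AMI (Eq. 17): the MI between the two
    partitions, adjusted by the expected MI over random contiguous
    partitions with the same numbers of clusters."""
    rng = random.Random(seed)
    N = len(planted)
    mi = mutual_info(planted, inferred)
    null = sum(
        mutual_info(random_contiguous(N, len(set(planted)), rng),
                    random_contiguous(N, len(set(inferred)), rng))
        for _ in range(samples)) / samples
    H = lambda p: -sum(n / N * math.log(n / N) for n in Counter(p).values())
    return (mi - null) / (max(H(planted), H(inferred)) - null)
```

Identical partitions score 1 by construction, while pairs of random contiguous partitions score near 0 on average.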
In practice, \(\langle\mathrm{MI}(\mathcal{C}_{\mathrm{pl}},\mathcal{C}_{\mathrm{in}})\rangle _{c}\) is difficult to compute analytically, so we estimate it using an average over 100 random draws of contiguous partitions of the \(N\) events into \(|\mathcal{C}_{\mathrm{in}}|\) and \(|\mathcal{C}_{\mathrm{pl}}|\) clusters. In Fig. 2(b) we show the reconstruction accuracy of our experiments as a function of the noise level \(\gamma\), with the same parameter settings as in Fig. 2(a). Consistent with the improved compression at lower \(\gamma\) seen in Fig. 2(a), we can see better reconstruction accuracy as \(\gamma\) decreases and for synthetic datasets with a greater number of events \(N\) when \(\gamma\leq 10^{-1}\). When the noise level increases to \(\gamma>10^{-1}\), we observe a sharper drop in reconstruction accuracy for greater \(N\), likely a result of finite-size smoothing of the phase transition-like behavior in model detectability [52]. We can also see that the exact and greedy algorithms differ non-negligibly in reconstruction accuracy--in contrast to compression, as shown in Fig. 2(a)--since in the low-noise regime the CCAMI values are noticeably lower for the greedy method than for the exact method. This discrepancy in the greedy algorithm's relative performance between Fig. 2(a) and Fig. 2(b) indicates that in some cases good data compression can be achieved for a variety of different partitions of the events in \(\mathcal{X}\). To verify the time complexities estimated in Sec. II.3, in Fig. 3(a) we plot the average run time for the dynamic programming algorithm (left axis) and greedy algorithm (right axis) as a function of the number of time steps \(T\), for the reconstruction simulations in Fig. 2.
Along with these points we plot the regression lines obtained from fits of the form \(\log(\mathrm{Runtime})=\beta_{1}\log(T)+\beta_{2}\), which are labelled with their least-squares estimates for the exponent \(\hat{\beta}_{1}\). We can see that the dynamic program has a run time that scales approximately like \(\mathrm{O}(T^{2})\), and that the greedy algorithm has a run time that is slightly worse than linear in the number of time steps \(T\), while the absolute run times for the greedy algorithm are much faster than those for the dynamic program. In Fig. 3(b), we plot the average inverse compression ratio \(\eta\) of the reconstruction experiments as a function of the number of time steps \(T\). We can see that the compression is essentially identical for all values of \(T\), up to fluctuations. This indicates that the results of our algorithm are independent of the specific choice of temporal resolution \(T\) as long as it is in a reasonable range (roughly at least on the order of the number of events \(N\)) that does not merge large time periods into single time steps.

Figure 2: **Synthetic reconstruction performance.** **(a)** Average inverse compression ratio (Eq. 16) versus the logarithm of the planted level of cluster heterogeneity \(\gamma\), for \(N\in\{200,500,1000\}\) (line colors in red, blue, and yellow respectively). The exact dynamic programming algorithm results are shown with solid lines and circular markers, while the greedy algorithm results are shown with dotted lines and triangular markers. **(b)** Reconstruction accuracy, as quantified by the contiguity-corrected AMI (CCAMI, Eq. 17), over the same set of experiments. Averages for each panel are taken over 30 simulations with the parameters \(\{S,D,K,T\}\) described in Sec. III.1, and error bars represent 3 standard errors in the mean.

### FourSquare checkins in NYC neighborhoods

To examine the performance of our method on real event data \(\mathcal{X}\), we apply the algorithms of Sec.
II.3 to a dataset of FourSquare checkins in New York City collected from April 2012 to February 2013 [53, 54]. In this dataset, each checkin event \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\in\mathcal{X}\) denotes a FourSquare checkin by a user \(s_{i}\) at venue \(d_{i}\) at time \(t_{i}\). Location-based social network (LBSN) data of this form are often used in urban planning, epidemiology, and sociology to understand human mobility co-location patterns [55, 56, 5], where users \(s,s^{\prime},s^{\prime\prime},...\in\mathcal{S}\) are co-located if they check in at the same venue \(d\in\mathcal{D}\) within some pre-defined time window. We can use the MDL method described in Sec. II to automatically extract a set of representative hypergraphs \(\mathcal{G}\) from the LBSN checkin data \(\mathcal{X}\) that capture homogeneous user activity patterns at different points in time. This allows us to, for instance, perform market segmentation to identify users \(s\in\mathcal{S}\) with similar consumption patterns at different points in the year, or to identify seasonality in the congestion patterns at different venues. To preprocess the FourSquare checkin data for analysis, we used neighborhood boundary shapefiles [57] to map the (latitude, longitude) pairs of the checkins to neighborhoods in NYC. We kept only the 1000 users and venues in the dataset with the most checkins and neighborhoods with at least 100 checkins over the 10-month period, with the aim of reducing biases and noise from users and venues with very infrequent app usage. The final dataset used in the analysis had \(N=\) 64,366 events and \(S=D=\) 1,000 users and venues spread across 91 neighborhoods. In Fig. 4 we show the results of applying our exact dynamic programming method for hypergraph binning to the checkins for each neighborhood separately.
Figure 3: **Reconstruction parameter sensitivities.** **(a)** Average run time (in seconds) of reconstruction experiments (Fig. 2) versus number of time steps \(T\), for both algorithms described in Sec. II.3. The performance of the exact dynamic programming algorithm is shown on the left axis, while that of the greedy algorithm is shown on the right axis. Regression lines of the form \(\log(\text{Runtime})=\beta_{1}\log(T)+\beta_{2}\), labeled with their least-squares estimates for the exponent \(\hat{\beta}_{1}\), are shown as dotted lines. **(b)** Inverse compression ratio \(\eta\) (Eq. 16) versus \(T\) for the experiments conducted at different values of \(N\). Averages for each panel are taken over 30 simulations with the parameters \(\{S,D,K\}\) described in Sec. III.1 (the averages in panel (a) also allow \(N\) to vary), and error bars represent 3 standard errors in the mean.

This neighborhood-level analysis allows us to more easily visualize the inferred hypergraphs as well as perform cross-sectional comparisons across the neighborhoods regarding their event homogeneity, temporal burstiness, and compressibility. In our inference we set \(\Delta t=1\) day. In Fig. 4(a) we show the inferred hypergraphs \(\mathcal{G}=\{\mathbf{G}^{(1)},\mathbf{G}^{(2)},\mathbf{G}^{(3)},\mathbf{G}^{(4)}\}\) for the neighborhood (Bay Terrace, Queens) for which our method resulted in the highest level of data compression (\(\eta=0.68\)). The hypergraphs \(\mathbf{G}^{(k)}\), which are ordered chronologically left to right, are shown in their incidence representation, with the width of edge \((s,d)\) proportional to the edge weight \(G^{(k)}_{sd}\), which counts the number of events that contain user \(s\) and venue \(d\) in the time period corresponding to hypergraph \(\mathbf{G}^{(k)}\). Source and destination nodes in this representation are scaled in size proportionally to their weighted bipartite degrees (e.g. frequency of occurrence in events within the time period) and labelled by unique user and venue ids respectively for each neighborhood. We can see that the four inferred hypergraphs in Fig. 4(a) are very structurally distinct from one another. In the first time period inferred by our method (from the start of the study until July 9), corresponding to hypergraph \(\mathbf{G}^{(1)}\), we observe that a large portion of the activity was dominated by user 'u89', who visited venues 'v2' through 'v7' as well as 'v9', the last of which was also visited by the rest of the users except 'u31'. User 'u31' also made a substantial number of checkins during this period, but only to venue 'v8'. The inclusion of these two distinct activity patterns (checkins by 'u89' and 'u31') in the hypergraph \(\mathbf{G}^{(1)}\) is a result of both users performing their checkins consistently over the time period corresponding to the first hypergraph. In the second hypergraph (corresponding to the time period July 10 - October 26), we can see that user 'u31' is still making consistent checkins at venue 'v8', but that user 'u89' is no longer making checkins. There is also turnover in the other users and venues. In the third hypergraph (corresponding to October 27 - November 1) we see a very different checkin activity pattern, which corresponds to the tropical storm Hurricane Sandy hitting New York City. Here we see many users (too many to be labelled in the figure) checking in at location 'v1', Throgs Neck Bridge between Queens and the Bronx, likely signalling evacuation and return to the city. In the fourth hypergraph (corresponding to November 2 through the end of the study period), we see a return to normal with a somewhat similar activity structure as in the second hypergraph, where most checkins are performed by users 'u31' and 'u136' at venues 'v8' and 'v9' respectively.
Figure 4: **FourSquare checkins in NYC neighborhoods.** The dataset, which aggregated checkins from April 2012 to February 2013 in New York City [53; 54], consists of events \(\mathbf{x}_{i}=(s_{i},d_{i},t_{i})\in\mathcal{X}\) that denote a FourSquare checkin by a user \(s_{i}\) at venue \(d_{i}\) at time \(t_{i}\). **(a)** Inferred hypergraphs for the Bay Terrace neighborhood, for which our method resulted in the highest level of data compression (\(\eta=0.68\)). The hypergraphs are ordered chronologically left to right and shown in their incidence representation, with the width of edge \((s,d)\) proportional to the edge weight \(G^{(k)}_{sd}\) which counts the number of events that contain user \(s\) and venue \(d\) in the time window. Source and destination nodes are scaled proportionally to their frequency of occurrence and labelled by unique user and venue ids respectively for each neighborhood. **(b)** Inferred hypergraph for Melrose, for which our method resulted in the lowest level of data compression (\(\eta=1\)). **(c)** Histogram of the temporal event gap ratio \(\alpha\) (Eq. 19) for all neighborhoods with \(K>1\). **(d)** Histogram of the edge Jensen-Shannon Divergence JSD\({}_{\text{Edges}}\) (Eq. 23) for all neighborhoods with \(K>1\), with mean indicated using the dotted black line. **(e)** Fraction of all events (blue) and inferred temporal bin boundaries (red) that took place within each month, across all neighborhoods.

The checkin data for this neighborhood is easily compressed using our method due to these four very distinct periods of high localization of the events onto a few users and venues. In contrast, in Fig. 4(b), we see a very different story for Melrose in the Bronx. Here we see that the event data was optimally compressed into a single hypergraph (i.e. \(K=\eta=1\)), for which checkin activity is dominated by user 'u1' and venue 'v2'. (Note that these are not the same as user 'u1' and venue 'v2' in Bay Terrace, since these abbreviated user and venue IDs were generated separately for the two neighborhoods in the figure.) The neighborhood-level set of events for Melrose is incompressible using multiple hypergraphs, since it does not have multiple distinct periods of activity, instead exhibiting consistent checkins by one user at one venue. To quantify the extent of temporal localization in the hypergraphs inferred with our method, we define a temporal event gap ratio \(\alpha\) as the ratio of the median inter-event time within clusters to the median inter-event time between clusters, or \[\alpha=\frac{\text{median}(\{t_{i+1}-t_{i}|c_{i}=c_{i+1}\}_{i=1}^{N-1})}{\text{median}(\{t_{i+1}-t_{i}|c_{i}\neq c_{i+1}\}_{i=1}^{N-1})}, \tag{19}\] where \(c_{i}\in\{1,...,K\}\) is the event cluster index of the \(i\)-th event \(\mathbf{x}_{i}\). \(\alpha<1\) when the events within the event clusters tend to be more localized in time than the events on the borders of the clusters, and \(\alpha>1\) when the opposite is true. In Fig. 4(c), we plot a histogram of the ratio \(\alpha\) for all neighborhoods analyzed that had an inferred \(K>1\). We can see that the inferred hypergraphs in all but 4 neighborhoods had events that were more temporally localized than the pairs of events that transitioned between hypergraphs (\(\alpha<1\)), indicating that our method is identifying periods of temporally localized activity in the LBSN data. To examine the localization of inferred hypergraphs on sources and destinations relative to the overall localization in the dataset \(\mathcal{X}\), we compute the expected reduction in uncertainty for predicting the source and destination \((s,d)\) of a randomly chosen edge in \(\mathcal{G}\) versus the fully aggregated hypergraph \(\mathbf{G}_{0}=\bigcup_{k=1}^{K}\mathbf{G}^{(k)}\).
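The temporal event gap ratio of Eq. 19 is straightforward to compute from temporally ordered events; a small Python sketch (the function name is ours):

```python
import statistics

def temporal_gap_ratio(times, labels):
    """Temporal event gap ratio alpha (Eq. 19): the median inter-event
    time within clusters divided by the median inter-event time between
    clusters. times holds temporally ordered event times t_i and labels
    the corresponding cluster indices c_i; assumes K > 1 so that at
    least one between-cluster transition exists."""
    gaps = list(zip(times[1:], labels[1:], times[:-1], labels[:-1]))
    within = [t2 - t1 for t2, c2, t1, c1 in gaps if c1 == c2]
    between = [t2 - t1 for t2, c2, t1, c1 in gaps if c1 != c2]
    return statistics.median(within) / statistics.median(between)
```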
If the edges \((s,d)\) in each hypergraph \(\mathbf{G}^{(k)}\) are much more highly localized on source/destination pairs than in the overall data \(\mathcal{X}\), it is substantially easier to predict the label \((s,d)\) of a randomly chosen edge in \(\mathcal{G}\) than in \(\mathbf{G}_{0}\). The reduction in our predictive uncertainty in going from \(\mathbf{G}_{0}\) to \(\mathcal{G}\) can be quantified by the Generalized Jensen-Shannon Divergence [58, 59], given by \[\text{JSD}_{\text{edges, unnormalized}}=\text{H}(\mathbf{G}_{0})-\sum_{k=1}^{K} \frac{m_{k}}{N}\text{H}(\mathbf{G}^{(k)}), \tag{20}\] where \[\text{H}(\mathbf{G}_{0})= -\sum_{s,d=1}^{S,D}\left(\frac{\sum_{k=1}^{K}G_{sd}^{(k)}}{N} \right)\log\left(\frac{\sum_{k=1}^{K}G_{sd}^{(k)}}{N}\right), \tag{21}\] \[\text{H}(\mathbf{G}^{(k)})= -\sum_{s,d=1}^{S,D}\left(\frac{G_{sd}^{(k)}}{m_{k}}\right)\log \left(\frac{G_{sd}^{(k)}}{m_{k}}\right) \tag{22}\] are the Shannon entropies of the edges in the aggregated graph and inferred hypergraphs respectively. One can show that Eq. 20 is bounded below by \(0\) (due to the concavity of entropy) and above by \(H(\mathbf{G}_{0})\), so we can rescale the JSD to \([0,1]\) by dividing by this upper bound, thus \[\text{JSD}_{\text{edges}}=1-\frac{1}{\text{H}(\mathbf{G}_{0})}\sum_{k=1}^{K} \frac{m_{k}}{N}\text{H}(\mathbf{G}^{(k)}). \tag{23}\] Eq. 23 tells us the fraction of information (in terms of predictive power for the edge labels) we lose by using the aggregated hypergraph \(\mathbf{G}_{0}\) instead of the cluster-level hypergraphs \(\mathcal{G}=\{\mathbf{G}^{(k)}\}_{k=1}^{K}\). A value of \(\text{JSD}_{\text{edges}}\approx 0\) indicates little edge localization within the clusters relative to the overall dataset, and a value of \(\text{JSD}_{\text{edges}}\approx 1\) indicates high edge localization within the clusters relative to the overall dataset. In Fig. 
4(d), we plot a histogram of \(\text{JSD}_{\text{Edges}}\) for all neighborhoods analyzed that had an inferred \(K>1\), with the distribution mean of \(\text{JSD}_{\text{Edges}}\approx 0.15\) indicated with the vertical black line. The mean JSD value of \(0.15\) indicates a relatively high average level of localization among the edges \((s,d)\) within each hypergraph. We can also observe that all but 5 neighborhoods had an information gain of at least 5% relative to the overall data \(\mathcal{X}\), indicating non-negligible localization of the sources/destinations in the hypergraphs inferred with our method. Finally, in Fig. 4(e) we plot a histogram showing the fraction of all events that took place within each month (blue) and the fraction of inferred temporal snapshot boundaries that took place within each month (red). Here we can observe substantial differences in these distributions, indicating that the inferred boundaries are to some extent negatively correlated with the temporal density of events. For example, there is a sizable drop in event frequency from May to June, and we see a spike in the number of inferred temporal boundaries in June, suggesting that the drop in events provided sufficient statistical evidence for the formation of a new cluster boundary in the information theoretically optimal binnings. We also see large discrepancies for September and October, where there are comparatively many boundaries but few events. This may be correlated with the uptick in the overall density of events, seasonal fluctuations in consumer behavior, and Hurricane Sandy (for October). In Appendix A, we run additional tests using the neighborhood-level event data, in order to understand the discrepancies between the exact dynamic programming algorithm and the fast greedy agglomerative algorithm of Sec. II.3 when applied to this dataset. 
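The normalized edge Jensen-Shannon divergence of Eq. 23 can be computed directly from the cluster-level edge weights. In the sketch below, representing each \(\mathbf{G}^{(k)}\) as a dict of edge counts is our own choice for illustration, not the paper's data structure.

```python
import math

def jsd_edges(graphs):
    """Normalized edge Jensen-Shannon divergence (Eq. 23). Each element
    of graphs is one cluster-level graph G^(k), given as a dict mapping
    (source, destination) edges to counts G_sd^(k); summing them
    elementwise gives the aggregate graph G_0."""
    def H(g):
        m = sum(g.values())
        return -sum(w / m * math.log(w / m) for w in g.values() if w > 0)
    g0 = {}
    for g in graphs:
        for edge, w in g.items():
            g0[edge] = g0.get(edge, 0) + w
    N = sum(g0.values())
    # 1 - (weighted mean cluster entropy) / (aggregate entropy), per Eq. 23.
    return 1.0 - sum(sum(g.values()) / N * H(g) for g in graphs) / H(g0)
```

Clusters with fully disjoint, localized edges give a value near 1, while clusters whose edge distributions match the aggregate give a value near 0.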
One can also examine the fluctuations in large-scale checkin patterns at the level of the entire city by aggregating the events over all neighborhoods. In the top two panels of Fig. 5(a), we show the binnings obtained when applying our exact dynamic programming method and greedy agglomerative method to the aggregated dataset representing the top 1000 users and venues in the FourSquare dataset across all neighborhoods. Different colors distinguish the \(K=4\) different temporal bins obtained by each of these algorithms, and the number of checkins for each day of the study is plotted as a solid black line. In the bottom two plots of Fig. 5(a), we show the partitions obtained using naive binning heuristics with the same number of clusters (\(K=4\))--a partition of the events into time windows of equal duration (third plot), and a partition of the events into time windows with an equal number of events (fourth plot). We can see substantial shifts in the cluster boundaries for these methods relative to the MDL-based methods, despite the (relatively strong) constraint of fixing the value of \(K\). The MDL-based methods have inferred boundaries in the gaps with sparse event data around days 145 (exact method) and 162 (greedy method), due to their focus on temporally localized clusters of events, while the other methods have boundaries that are uncorrelated with temporal event density. However, a binning heuristic that only looks for temporal gaps in events will also fail to reproduce the partitions obtained by the MDL methods, since we see some sizable gaps included within the inferred clusterings of both methods, where there is enough statistical evidence for the algorithm to create a contiguous cluster due to localization of the events on sources and destinations. In Fig. 5(b), we plot a matrix of the CCAMI values between each pair of partitions among the four shown in Fig. 5(a). 
We observe that, despite the apparent visual similarity of some pairs of partitions, the information shared between many pairs is only modestly more than what one would expect in random partitions of the time interval into \(K=4\) clusters. We can also see that--despite achieving an inverse compression ratio (\(\eta_{greedy}=0.758\)) comparable to that of the optimal partition obtained by the exact dynamic programming algorithm (\(\eta_{exact}=0.750\))--the greedy partition is actually quite different from the true MDL-optimal partition when considering the strong constraints imposed by contiguity (\(\text{CCAMI}=0.42\)). In fact, the greedy MDL partition is less similar to the true MDL-optimal partition than the partition obtained by simply splitting the interval into \(K=4\) windows of equal duration. This highlights the importance of our exact dynamic programming solution, and is consistent with findings in network community detection that identify high levels of degeneracy in the near-optimal partitions of networks [60; 61; 62; 63]. In Fig. 5(c), we plot summary statistics of the inferred hypergraphs using each binning method. Mirroring Fig. 5(a), we can see high variability in the number of events within the clusters across the four partitions. We can also see that the exact MDL approach has the best balance of edge localization (\(\text{JSD}_{\text{Edges}}=0.0648\)) and temporal localization (\(\alpha=0.0012\)) among its inferred event clusters. While it is only the second-best method regarding each metric individually--e.g., it has the second-highest \(\text{JSD}_{\text{Edges}}\) value and the second-lowest \(\alpha\) value--the top performers in \(\text{JSD}_{\text{Edges}}\) (Uniform Sizes) and \(\alpha\) (Greedy MDL) are the worst performers regarding \(\alpha\) and \(\text{JSD}_{\text{Edges}}\) respectively.
Figure 5: **FourSquare checkins across all of NYC.** **(a)** Binnings obtained when applying the exact dynamic programming method (top plot) and greedy agglomerative method (second plot) of Sec. II.3 to the set of checkins aggregated across all neighborhoods in NYC, with the number of checkins for each day of the study plotted as a solid black line underneath. Colors distinguish the \(K=4\) different temporal bins inferred by each of these algorithms. The bottom two plots show the partitions obtained by naively partitioning the events into \(K=4\) time windows of equal duration and into time windows with an equal number of events (third and fourth plots respectively). **(b)** CCAMI matrix among all pairs of the four partitions shown in panel (a). **(c)** Table of summary statistics for the partitions in panel (a).

## IV Conclusion

In this paper we develop a principled, efficient, nonparametric approach for inferring representative hypergraph snapshots from temporal event data based on the MDL principle. Our approach considers the problem of transmitting the data to a receiver in multiple stages of increasing granularity, with the hypergraph snapshots as an intermediate step. The configuration of hypergraphs that minimizes the description length of this transmission process is then selected as the MDL-optimal hypergraph representation of the data. Our method automatically performs model selection for the number and composition of the hypergraphs with no parameter tuning. We employ an exact dynamic programming algorithm to identify the hypergraphs that minimize our description length objective in polynomial time, as well as a fast greedy agglomerative algorithm that achieves near-optimal configurations with substantially reduced run times.
We demonstrate that our methods are able to consistently reconstruct synthetic data with planted hypergraph structure even with appreciable noise, and can reveal meaningful representative structures in real location-based social network data to understand human mobility patterns. There are a number of ways our methods can be extended in future work. In this paper we explore a data encoding that exploits redundancy provided by degree heterogeneity in the incidence representations of the representative hypergraphs within the data, but one can in principle exploit other structure as well to develop efficient encodings. This could include communities, transitivity, or overlaps among hyperedges, among many other types of hypergraph structure. One can also impose asymmetry in the encoding between the source and destination nodes to more clearly highlight the desired hypergraph structure, or incorporate other relevant metadata on the edges such as weights or the temporal duration of events. ###### Acknowledgements. This research was supported in part by the HKU-100 Start Up Grant and the HKU Institute of Data Science Research Seed Fund. The author thanks Shihui Feng for helpful discussions.
2309.09997
Rely-guarantee Reasoning about Concurrent Memory Management: Correctness, Safety and Security
Formal verification of concurrent operating systems (OSs) is challenging, in particular the verification of the dynamic memory management due to its complex data structures and allocation algorithm. An incorrect specification and implementation of the memory management may lead to system crashes or exploitable attacks. This article presents the first formal specification and mechanized proof of a concurrent memory management for a real-world OS concerning a comprehensive set of properties, including functional correctness, safety and security. To achieve the highest assurance evaluation level, we develop a fine-grained formal specification of the Zephyr RTOS buddy memory management, which closely follows the C code, easing validation of the specification and the source code. The rely-guarantee-based compositional verification technique has been enforced over the formal model. To support formal verification of the security property, we extend our rely-guarantee framework PiCore by a compositional reasoning approach for integrity. Whilst the security verification of the design shows that it preserves the integrity property, the verification of the functional properties shows several problems. These verification issues are translated into finding three bugs in the C implementation of Zephyr, after inspecting the source code corresponding to the design lines breaking the properties.
Yongwang Zhao, David Sanan
2023-09-17T03:41:10Z
http://arxiv.org/abs/2309.09997v1
# Rely-guarantee Reasoning about Concurrent Memory Management: Correctness, Safety and Security ###### Abstract. Formal verification of concurrent operating systems (OSs) is challenging, in particular the verification of the dynamic memory management due to its complex data structures and allocation algorithm. An incorrect specification and implementation of the memory management may lead to system crashes or exploitable attacks. This article presents the first formal specification and mechanized proof of a concurrent memory management for a real-world OS concerning a comprehensive set of properties, including functional correctness, safety and security. To achieve the highest assurance evaluation level, we develop a fine-grained formal specification of the Zephyr RTOS buddy memory management, which closely follows the C code, easing validation of the specification and the source code. The rely-guarantee-based compositional verification technique has been enforced over the formal model. To support formal verification of the security property, we extend our rely-guarantee framework PiCore by a compositional reasoning approach for integrity. Whilst the security verification of the design shows that it preserves the integrity property, the verification of the functional properties shows several problems. These verification issues are translated into finding three bugs in the C implementation of Zephyr, after inspecting the source code corresponding to the design lines breaking the properties. Key words and phrases: Rely-guarantee, Concurrent OS Kernel, Formal Verification, Memory Management, Isabelle/HOL + Footnote †: This article is an extended version of (Xilin et al., 2020).
reuse when no longer needed. The buddy memory allocation technique (Zhao and Daidi, 2018) is a memory allocation algorithm that splits memory into halves or quarters to try to satisfy a memory request in a best-fit manner. Buddy memory allocation has been widely applied in OS kernels (e.g. Linux kernel and Zephyr RTOS 1). Since program variables and data are stored in the allocated memory, correct, safe and secure memory management is extremely critical for the whole system. An incorrect specification and implementation of the memory management may lead to system crashes and exploitable attacks. Footnote 1: [https://www.zephyrproject.org/](https://www.zephyrproject.org/) Formal verification has been intensively conducted on OS kernels in recent years (Kennedy et al., 2018; Wang et al., 2018). Most of these efforts focus on sequential OS kernels and assume that there is no in-kernel concurrency (e.g. seL4 (Kennedy et al., 2018)). Concurrent kernels allow interleaved execution of kernel/user modules due to user thread preemption, I/O interrupts and execution in multicore architectures.
Some related work on building and verifying concurrent kernels is covered in (Kennedy et al., 2018; Wang et al., 2018; Wang et al., 2018). However, formal verification of concurrent OS kernels still presents several open challenges. For instance, the formal verification in (Kennedy et al., 2018) concerns kernels with device drivers using a verification framework that does not support preemptive and multicore concurrency. As a consequence, it is only possible to verify interrupt handlers for device drivers that do not share data with non-handler kernel code. Formal verification of OS memory management has been studied in CertiKOS (Kennedy et al., 2018; Zhao and Daidi, 2018), seL4 (Kennedy et al., 2018; Wang et al., 2018), Verisoft (Versic et al., 2018), and in the hypervisors from (Kennedy et al., 2018; Wang et al., 2018). Algorithms and implementations of dynamic memory allocation have been formally specified and verified in an extensive number of works (Kennedy et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Concurrency is only studied in (Kennedy et al., 2018; Wang et al., 2018), considering much simpler data structures and algorithms than our work. Moreover, only (Kennedy et al., 2018) studies buddy memory allocation, and only over very abstract data structures. Finally, formal verification of security properties, e.g. integrity and confidentiality, is still challenging for concurrent OS kernels (Kennedy et al., 2018; Wang et al., 2018) and has not been studied for concurrent memory management before. Confidentiality refers to protecting information from being accessed by unauthorized parties. In general, it is not preserved by memory allocation at the OS level. For instance, in Zephyr releasing a memory block does not clear its contents, so the information may be accessed by other threads.
Thus, we consider the integrity of concurrent memory management in this article, which means that the allocated memory of a thread cannot be altered by other threads. This article concentrates on the formal specification and verification of functional correctness, safety and security properties of the concurrent buddy memory management of Zephyr. _Zephyr_ is an open-source, state-of-the-art RTOS managed by the Linux Foundation for connected, resource-constrained devices, and built with security and safety design in mind. It has been deployed in IoT gateways, safety shoes, and smart watches, among other devices. It uses a buddy memory allocation algorithm optimized for RTOSs, and its kernel is fully concurrent, allowing multiple threads to concurrently manipulate shared memory pools with fine-grained locking. We apply the PiCore rely-guarantee framework (Zebry, 2018) to the verification of Zephyr. The compositionality of rely-guarantee makes it possible to handle the complexity of the memory allocation algorithm used in Zephyr, and of its data structures. ### Challenges Formal verification of concurrent memory management, in particular the buddy memory allocation in Zephyr, is challenging work. 1. _Fine-grained concurrency of the execution of the memory services and of the shared memory structure_: On one hand, thread preemption and interruption make the kernel execution of memory services concurrent. Memory allocation usually uses fine-grained locking for threads. When manipulating a shared memory pool, memory services in a thread lock the pool by disabling interruptions inside critical sections that are kept as small as possible. On the other hand, memory pools are shared by threads at fine-grained parts of their structure. When a thread is splitting a memory block into smaller ones to get a suitable block size, another thread may be coalescing partner blocks in the same pool into a larger one.
That is, one thread may be manipulating a block sub-tree of a memory pool while another thread is manipulating a different block sub-tree of the same pool. 2. _Complex data structure and algorithm of buddy memory management_: to achieve high performance, data structures and algorithms in Zephyr are laid out in a complex manner. First, the buddy memory allocation can split large blocks into smaller ones, allowing blocks of different sizes to be allocated and released efficiently while limiting memory fragmentation concerns. Seeking performance, Zephyr uses a multi-level structure where each level has a bitmap and a linked list of free memory blocks. The levels of bitmaps actually form a forest of quad trees of bits. Memory addresses are used as references to memory blocks, so the algorithm has to deal with address alignment and computation concerning the block size at each level, increasing the complexity of its verification. Second, the allocation algorithm supports various temporal modes. If a block of the desired size is unavailable, a thread can optionally wait for one to become available. There are three different modes for waiting threads: waiting forever, waiting with a timeout, and no wait. Before a thread changes its state to waiting, it invokes rescheduling in the allocation service and thus is preempted by ready threads. The system must guarantee that each service eventually returns from each of the waiting modes. 3. _Verification of safety and security of concurrent memory management is difficult_: first, a complex algorithm and data structures imply equally complex invariants over them, which the formal model must preserve as functional safety properties. These invariants have to guarantee that the multi-level bitmaps are well shaped and consistent with the multi-level free lists. To prevent memory leaks and block overlapping, precise reasoning must keep track of both numerical and shape properties.
Second, as a security property we verify the integrity of allocated memory blocks among threads, i.e. one thread cannot modify blocks allocated by other threads. In this context, formal verification of integrity on OS kernels needs to consider the system events (e.g. kernel services, interrupt handlers), and hence integrity is seen as a property of state-event based information-flow security (IFS) [(31; 8)]. Works on state-event IFS [(31; 35; 44)] tackle sequential systems and cannot be applied to the verification of concurrent OS kernels either. Although there is some work on the verification of concurrent IFS [(32; 28; 33)], it focuses on language-based IFS, which cannot be used for the verification of integrity of a concurrent operating system. Therefore the verification of state-event IFS is still an open challenge. ### Approach and Contributions The safety and security properties in this article concern the small steps inside memory services: they must be preserved by every internal step of the services. For instance, in the case of Zephyr RTOS, a safety property is that memory blocks do not overlap each other even during internal steps of the allocation and release services. It is therefore necessary to find a verification approach that allows reasoning at such a fine-grained level of detail. In this article, we apply the rely-guarantee reasoning technique to verify the Zephyr concurrent memory management. This work uses PiCore [(50)], a two-level event-based rely-guarantee framework in Isabelle/HOL for the specification and verification of concurrent reactive systems (CRSs). PiCore supports concurrent OS features such as modelling shared-variable concurrency of multiple threads, interruptible execution of handlers, self-suspending threads, and rescheduling. PiCore separates the specification and verification into two levels. The top level introduces the notion of "events" into the rely-guarantee method for system reactions.
This level defines the events composing a system, and how and when they are triggered. It supports the reactive semantics of interrupt handlers (e.g. kernel services, scheduler) in OSs, which makes formal specifications of OSs much simpler than those represented by pure programs (e.g. in [3]). The second level focuses on the specification and reasoning of the behaviour of the events composing the first level. PiCore parametrizes the second level using a rely-guarantee interface, making it easy to reuse existing rely-guarantee frameworks for imperative programs. PiCore concurrent constructs allow the specification of Zephyr multi-thread interleaving, fine-grained locking, and thread preemption. Compositionality of rely-guarantee makes it feasible to prove the functional correctness of Zephyr and invariants over its data structures. In this article, we first formalize the data structures of Zephyr memory pools in Isabelle/HOL, and we analyze their structural properties. The properties clarify the constraints and consistency of quad trees, free block lists, memory pool configuration, and waiting threads. These properties constitute the safety of the memory management. They are defined as invariants whose preservation under the execution of services is formally verified. The set of properties is comprehensive for buddy memory allocation, since the memory separation property at the memory-block level discussed below can be derived from them. Second, we consider memory separation as the security of Zephyr at two levels: the memory-block level and the thread level. At the memory-block level, memory separation ensures that the memory blocks of a memory pool cover the whole address space of the pool, but do not overlap each other. This property is necessary to prevent memory leaks and can be derived from the well-shapedness properties of quad trees defined in the invariants. For memory separation at the thread level, we consider the aforementioned memory integrity among threads.
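The memory-block-level separation property can be pictured with a small check over the leaf blocks of a pool. The sketch below is our own illustration, not Zephyr's code or the paper's Isabelle formalization: it assumes blocks are listed in address order, with block sizes quartered per level as described later in Section 2.1, and all names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical type for illustration: a block is identified by its
 * (level, block) pair, as in the pool formalization. */
typedef struct { int level; int block; } blk_t;

/* Size in bytes of a block at a given level: max_sz / 4^level. */
static uint32_t blk_size(uint32_t max_sz, int level)
{
    uint32_t sz = max_sz;
    while (level-- > 0)
        sz /= 4;
    return sz;
}

/* Offset of a block inside the pool buffer. */
static uint32_t blk_start(uint32_t max_sz, blk_t b)
{
    return (uint32_t)b.block * blk_size(max_sz, b.level);
}

/* Memory separation at the block level: the blocks, listed in address
 * order, must tile [0, n_max * max_sz) with no gaps and no overlaps. */
static bool blocks_separated(uint32_t max_sz, uint32_t n_max,
                             const blk_t *bs, int n)
{
    uint32_t next = 0;
    for (int i = 0; i < n; i++) {
        if (blk_start(max_sz, bs[i]) != next)
            return false;               /* gap or overlap detected */
        next += blk_size(max_sz, bs[i].level);
    }
    return next == n_max * max_sz;      /* full coverage of the pool */
}
```

In the formal development this role is played by the well-shapedness invariants on the quad trees; the executable check only conveys the intuition of "cover the pool, never overlap".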
To tackle this, we extend PiCore with a compositional reasoning approach for integrity on event-based concurrent systems. This approach redefines the concept of integrity in terms of fine-grained semantics and uses rely-guarantee, as in the core of PiCore, to reason about the integrity of events by means of observable equivalence among threads. Third, together with the formal verification of Zephyr, we aim at the highest evaluation assurance level (EAL 7) of the Common Criteria (CC) [9], which the Zephyr project declared last year as its candidate standard for security certification. Therefore, we develop a fine-grained, low-level formal specification of the buddy memory management. The specification closely follows the Zephyr C code, and thus enables the _code-to-spec_ review required by the EAL 7 evaluation, covering all the data structures and imperative statements present in the implementation. The functional correctness of the memory management is specified by pre- and post-conditions of each service and compositionally proved by the rely-guarantee proof system of PiCore. Finally, we carry out the formal verification of functional correctness, invariant preservation, and memory separation using the extended rely-guarantee proof system of PiCore. It supports total correctness for loops where fairness does not need to be considered. The formal verification shows the preservation of memory integrity; however, it revealed three functional and safety bugs in the C code: an incorrect block split, an incorrect return from the kernel services, and non-termination of a loop. Two of them are critical and have been repaired in the latest release of Zephyr. The third bug causes non-termination of the allocation service when trying to allocate a block of a larger size than the maximum allowed.
To the best of our knowledge, this article presents the first formal specification and mechanized proof of correctness, safety and security for the concurrent memory allocation of a realistic operating system. The formal specification and proofs in this article are completely developed in Isabelle/HOL. All the Isabelle/HOL sources are available at [https://lvpgroup.github.io/tosem2021/](https://lvpgroup.github.io/tosem2021/). We summarize the main contributions of this article as follows. 1. A comprehensive set of critical properties for concurrent buddy memory management, including functional correctness, safety by invariants, and security by memory separation. In particular, we clarify the constraints and consistency of the complicated structure of buddy memory pools. 2. The first compositional reasoning approach for state-event based integrity, its application to a concurrent OS kernel, and its formal proof for the Zephyr concurrent memory management. 3. The first verified formal specification for concurrent buddy memory allocation, which corresponds to the low-level design specification in the CC EAL 7 evaluation. 4. Critical bugs found in the functional correctness and safety of the Zephyr C code, which have been repaired in the latest release of Zephyr. ### Roadmap Fig. 1 summarizes the main results presented in this article. First, Section 2 presents the preliminaries of this article, including the buddy memory management in Zephyr (Section 2.1) and our previous PiCore framework (Section 2.2). In Section 3, we formalize the memory data structures, and the safety and security properties of buddy memory pools. We define the memory structures in Section 3.1, the invariant properties in Section 3.2, and the memory separation properties in Section 3.3. The formal specification of the memory allocation and release services of Zephyr is presented in Section 4.
For the compositional verification of security for Zephyr, we propose a security property _integrity_ for PiCore specifications in Section 5.2 and we discuss the compositional verification approach in Section 5.3. In Section 6, we show the rely-guarantee proofs of Zephyr. We first give the correctness specification by rely-guarantee conditions in Section 6.1, which embeds the invariant and memory separation properties. We then present the proofs of partial correctness (Section 6.2), termination (Section 6.3), safety (Section 6.4) and security (Section 6.5) of the memory services. This article is an extension of our previous paper [49]. Compared to [49], (1) we add security properties about memory separation at the memory-block level and the thread level; (2) we present the invariants in a more comprehensive and formal way; (3) for memory separation in concurrent settings, we extend our PiCore framework (Peng et al., 2019) with a compositional verification approach for integrity, and then we apply the new PiCore framework to the rely-guarantee reasoning of Zephyr RTOS; (4) we add the security proof and present more comprehensive proofs of correctness and safety; (5) finally, we add a comparison to related work and discuss the limitations of our work. Figure 1. Outline of Main Results ## 2. Preliminaries ### Concurrent Memory Management in Zephyr RTOS In Zephyr, a memory pool is a kernel object that allows memory blocks to be dynamically allocated, from a designated memory region, and released back into the pool. Its C code implementation is shown in the left part of Fig. 2. The right part of this figure shows the formalization of the memory pool, which will be discussed in the next section. A memory pool's buffer (\(*buf\)) is an \(n\_max\)-size array of blocks of \(max\_sz\) bytes at level 0, with no wasted space between them. The size of the buffer is thus \(n\_max\times max\_sz\) bytes long.
Zephyr tries to satisfy a memory request by splitting available blocks into smaller ones, fitting the requested size as closely as possible. Each "level 0" block is a quad-block that can be split into four smaller "level 1" blocks of equal size. Likewise, each level 1 block is itself a quad-block that can be split again. At each level, the four smaller blocks become _buddies_ or _partners_ to each other. The block size at level \(l\) is thus \(max\_sz/4^{l}\). The pool is initially configured with the parameters \(n\_max\) and \(max\_sz\), together with a third parameter \(min\_sz\). \(min\_sz\) defines the minimum size for an allocated block and must be a multiple of four, i.e., there exists an \(X>0\) such that \(min\_sz=4\times X\). Memory pool blocks are recursively split into quarters until blocks of the minimum size are obtained, at which point no further split can occur. The depth at which \(min\_sz\) blocks are allocated is \(n\_levels\) and satisfies \(max\_sz=min\_sz\times 4^{n\_levels}\). Figure 2. Data Structure of Memory Pool in Zephyr v1.8.0 and Its Formalization Every memory block is composed of a _level_; a _block_ index within its level, ranging from 0 to \((n\_max\times 4^{level})-1\); and the \(data\) as a pointer to the block start address, which is equal to \(buf+(max\_sz/4^{level})\times block\). We use the tuple \((level,block)\) to uniquely represent a block within a pool \(p\). A memory pool keeps track of how its buffer space has been split using a linked list _free_list_ with the start addresses of the free blocks at each level. To improve the performance of coalescing partner blocks, memory pools maintain a bitmap at each level to indicate the allocation status of each block in the level. This structure is represented by a C union of an integer _bits_ and an array _bits_p_. The implementation can allocate bitmaps at levels smaller than \(max\_inline\_levels\) using only the integer _bits_.
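The address arithmetic above can be illustrated by the following C sketch. The helper names are ours, not Zephyr's (the real implementation uses shifts and macros), but the formulas mirror the text: block size \(max\_sz/4^{level}\) and start address \(buf+(max\_sz/4^{level})\times block\).

```c
#include <stdint.h>

/* Size in bytes of a block at the given level: max_sz / 4^level.
 * (Illustrative helper only; names are not Zephyr's.) */
static uint32_t block_size(uint32_t max_sz, int level)
{
    uint32_t sz = max_sz;
    for (int l = 0; l < level; l++)
        sz /= 4;                /* each level quarters the block size */
    return sz;
}

/* Start address of block (level, block) inside the pool buffer:
 * buf + (max_sz / 4^level) * block, as in the text. */
static uint8_t *block_ptr(uint8_t *buf, uint32_t max_sz, int level, int block)
{
    return buf + (uint32_t)block * block_size(max_sz, level);
}
```

For example, with \(max\_sz = 1024\), block \((1,3)\) is 256 bytes long and starts at offset 768 of the buffer.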
However, the number of blocks at levels higher than \(max\_inline\_levels\) makes it necessary to allocate the bitmap information using the array _bits_p_. In such a design, the levels of bitmaps actually form a forest of complete quad trees. The bit \(i\) in the bitmap of level \(j\) is set to 1 for the block \((j,i)\) iff it is a free block, i.e., it is in the free list at level \(j\). Otherwise, the bitmap for such a block is set to 0. Zephyr provides two kernel services, \(k\_mem\_pool\_alloc\) and \(k\_mem\_pool\_free\), for memory allocation and release respectively. The main part of the C code of \(k\_mem\_pool\_alloc\) is shown in Fig. 3 in a compact manner. When an application requests a memory block, Zephyr first computes \(alloc\_l\) and \(free\_l\) (Lines 7 - 16). \(alloc\_l\) is the level with the size of the smallest block that will satisfy the request, and \(free\_l\), with \(free\_l\leqslant alloc\_l\), is the lowest level where there are free memory blocks. Since the services are concurrent, when the service tries to allocate a free block _blk_ from level \(free\_l\) (Line 17), blocks at that level may be allocated or merged into a bigger block by other concurrent threads. In such a case the service will back out (Line 18) and tell the main function \(k\_mem\_pool\_alloc\) to retry. If \(blk\) is successfully locked for allocation, then it is broken down to level \(alloc\_l\) (Lines 20 - 22). The allocation service \(k\_mem\_pool\_alloc\) supports a _timeout_ parameter that allows threads to wait for that pool for a period of time when the call does not succeed. If the allocation fails (Line 36) and the timeout is not \(K\_NO\_WAIT\), the thread is suspended (Line 40) in a linked list _wait_q_ and the context is switched to another thread (Line 41).
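The computation of \(alloc\_l\) can be pictured as follows. This is a simplified sketch under our own naming, not the actual Zephyr code of Fig. 3, which additionally handles alignment and size rounding.

```c
#include <stdint.h>

/* Lowest level (deepest split) whose block size still satisfies the
 * requested size: a simplified stand-in for the alloc_l computation. */
static int compute_alloc_level(uint32_t max_sz, int n_levels, uint32_t size)
{
    int alloc_l = 0;
    uint32_t sz = max_sz;       /* block size at level 0 */
    /* descend while the next, quarter-sized level still fits the request */
    while (alloc_l < n_levels - 1 && sz / 4 >= size) {
        sz /= 4;
        alloc_l++;
    }
    return alloc_l;
}
```

Since block sizes shrink with depth, any level with free blocks at or above this depth can be split down to \(alloc\_l\), which is why \(free\_l\leqslant alloc\_l\) holds.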
Interruptions are always enabled in both services, with the exception of the code of the functions \(alloc\_block\) and \(break\_block\), which invoke _irq_lock_ and _irq_unlock_ to respectively disable and enable interruptions. Similar to \(k\_mem\_pool\_alloc\), the execution of \(k\_mem\_pool\_free\) is interruptible as well. ### The PiCore Rely-guarantee Framework The abstract syntax of the PiCore language (Zhu et al., 2020) is shown in Fig. 4. The syntax for events distinguishes basic events pending to be triggered from already triggered events that are under execution. A basic event is defined as \(\textbf{Event}\ (l,g,P)\), where \(l\) is the event name, \(g\) the guard condition, and \(P\) the body of the event. When \(\textbf{Event}\ (l,g,P)\) is triggered, its body begins to be executed and it becomes a triggered event \(\lfloor P\rfloor\). The execution of \(\lfloor P\rfloor\) just simulates the program \(P\). Events are parametrized in the meta-logic as "\(\lambda(plist,\kappa)\). \(\textbf{Event}\ (l,g,P)\)", where \(plist\) is the list of input parameters, and \(\kappa\) is the identifier of the event system that the event belongs to. These parameters are not part of the syntax of events in order to make the guard \(g\) and the event body \(P\), as well as the rely and guarantee relations, more flexible, allowing different instances of the relations to be defined for different values of \(plist\) and \(\kappa\). Fig. 5 illustrates an _event_ in the concrete syntax of PiCore. Instead of defining a language for programs, PiCore reuses existing languages and their rely-guarantee proof systems. At the system reaction level, PiCore considers a reactive system as a set of event handlers, called _event systems_, responding to stimuli from the environment. The execution of an event system concerns the continuous evaluation of the guards of the events with their input arguments.
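The _irq_lock_/_irq_unlock_ pair follows a save-and-restore discipline: _irq_lock_ disables interrupts and returns a key encoding the previous interrupt state, which _irq_unlock_ later restores, so nested critical sections compose correctly. The following toy model (our stand-ins, not Zephyr's actual primitives) conveys this discipline:

```c
#include <stdbool.h>

/* Toy interrupt-state flag; in a real kernel this lives in a CPU register. */
static bool irqs_enabled = true;

/* Disable interrupts, returning a key that records the previous state. */
static unsigned int mock_irq_lock(void)
{
    unsigned int key = irqs_enabled ? 1u : 0u;
    irqs_enabled = false;
    return key;
}

/* Restore the interrupt state saved by the matching mock_irq_lock(). */
static void mock_irq_unlock(unsigned int key)
{
    irqs_enabled = (key != 0u);
}
```

With this discipline, an inner lock/unlock pair inside an already locked section leaves interrupts disabled until the outer unlock runs, which is what keeps the small critical sections of \(alloc\_block\) and \(break\_block\) safe to nest.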
From the set of events whose associated guard condition holds in the current state, one event is non-deterministically selected to be triggered, and then its body is executed. After the event finishes, the evaluation of guards starts again looking for the next event to be triggered. We call the semantics of event systems _reactive semantics_, where the event context shows the event currently being executed. A CRS is modeled as the _parallel composition_ of event systems that are concurrently executed. PiCore supports the verification of two different kinds of properties in the rely-guarantee proof system for reactive systems: pre- and post-conditions of events, and invariants in the fine-grained execution of events. A rely-guarantee specification for a system is a quadruple \(RGCond=\langle pre,R,G,pst\rangle\), where \(pre\) is the pre-condition, \(R\) is the rely condition, \(G\) is the guarantee condition, and \(pst\) is the post-condition.

Figure 3: The C Source Code of Memory Allocation in Zephyr v1.8.0

The intuitive meaning of a valid rely-guarantee specification for a parallel component \(P\), denoted by \(\Sigma\models P\) **sat** \(\langle pre,R,G,pst\rangle\), is that if \(P\) is executed from an initial state \(s\in pre\) and any environment transition belongs to the rely relation \(R\), then the state transitions carried out by \(P\) belong to the guarantee relation \(G\) and the final states belong to \(pst\). \(\Sigma\) is used to represent the static configuration of programs, such as environments for procedure declarations. We have defined a rely-guarantee axiomatic proof system for the PiCore specification language to prove the validity of rely-guarantee specifications. Soundness of the proof system with regard to the definition of validity has been proven in Isabelle/HOL. Some of the rules composing the axiomatic reasoning system are shown in Fig. 6. A predicate \(P\) is stable w.r.t. 
a relation \(R\), represented as \(stable(P,R)\), when for any pair of states \((s,t)\) such that \(s\in P\) and \((s,t)\in R\), it holds that \(t\in P\). The intuitive meaning is that an environment represented by \(R\) does not affect the satisfiability of \(P\). The parallel rule in Fig. 6 establishes the compositionality of the proof system, by which the verification of a parallel specification can be reduced first to the verification of the individual event systems and then to the verification of the individual events. It is necessary that each event system \(\mathcal{PS}(\kappa)\) satisfies its specification \(\langle pres_{\kappa},R_{\kappa},G_{\kappa},psts_{\kappa}\rangle\) (Premise 1); the pre-condition for the parallel composition implies all the event systems' pre-conditions (Premise 2); the overall post-condition must be a logical consequence of all post-conditions of the event systems (Premise 3); since an action transition of the concurrent system is performed by one of its event systems, the guarantee condition \(G_{\kappa}\) of each event system must be a subset of the overall guarantee condition \(G\) (Premise 4); an environment transition \(R_{\kappa}\) for an event system \(\kappa\) corresponds to a transition from the overall environment \(R\) (Premise 5); and an action transition of an event system \(\kappa\) must be included in the rely condition of any other event system \(\kappa^{\prime}\), where \(\kappa\neq\kappa^{\prime}\) (Premise 6).

Figure 4: Abstract Syntax of PiCore Language

Figure 5: An Example of Event in Concrete Syntax

Figure 6: Subset of Rely-guarantee Proof Rules in PiCore

PiCore considers invariants of CRSs in safety verification. To show that \(inv\) is preserved by a system \(\mathcal{PS}\), it suffices to show the invariant verification theorem as follows. 
Theorem 2.1 (Invariant Verification).: _For a formal specification \(\mathcal{PS}\) and \(\Sigma\), a state set \(init\), a rely condition \(R\), a guarantee condition \(G\), and \(inv\), if_
* \(\Sigma\vdash\mathcal{PS}\) **sat** \(\langle init,R,G,post\rangle\)_;_
* \(init\subseteq\{s.\;inv(s)\}\)_;_
* \(stable(\{s.\;inv(s)\},R)\) _and_ \(stable(\{s.\;inv(s)\},G)\) _are satisfied,_
_then \(inv\) is preserved by \(\mathcal{PS}\) w.r.t. \(init\) and \(R\)._

This theorem indicates that (1) the system satisfies its rely-guarantee specification \(\langle init,R,G,post\rangle\), (2) \(inv\) initially holds in the set of initial states, and (3) each action transition as well as each environment transition preserves \(inv\). Invariant verification is later decomposed into the verification of individual events by the proof system of PiCore.

## 3. Defining Structures and Properties of Buddy Memory Pools

This section formalizes the whole data structure of memory pools in Zephyr. Based on that formalization, we define the safety properties as a comprehensive set of invariants, and the security property as a two-level memory separation. The memory separation at memory-block level can be derived from the invariants on the memory.

### Structure of Memory Pools

As a specification of the low-level design, we use abstract data types to represent the complete structure of memory pools. The formalization of the memory pool in Zephyr is shown in the right part of Fig. 2. We use an abstract reference _ref_ in Isabelle to define pointers to memory pools. Starting addresses of memory blocks, memory pools, and unsigned integers in the implementation are defined as _natural_ numbers (_nat_). Linked lists used in the implementation for the elements _levels_ and _free_list_, together with the bitmaps used in _bits_ and _bits_p_, are defined as a _list_ type. C _structs_ are modelled in Isabelle as _records_ with the same names as in the implementation and comprising the same data. 
There are two exceptions to this: (1) \(k\_mem\_block\_id\) and \(k\_mem\_block\) are merged into one single record, and (2) the union in the struct \(k\_mem\_pool\_lvl\) is replaced by a single list representing the bitmap, and thus _max_inline_level_ is removed. Threads may concurrently split and coalesce memory blocks during the execution of the allocation and release services. The Zephyr implementation uses a bitmap to represent the state of a memory block. The bit \(j\) of the bitmap for a level \(i\) is set to \(1\) iff the memory address of the memory block \((i,j)\) is in the free list at level \(i\). A bit \(j\) at a level \(i\) is set to \(0\) under the following conditions: (1) its corresponding memory block is allocated (_ALLOCATED_), (2) the memory block has been split (_DIVIDED_), (3) the memory block is being split in the allocation service (_ALLOCATING_) (Line 21 in Fig. 3), (4) the memory block is being coalesced in the release service (_FREEING_), or (5) the memory block does not exist (_NOEXIST_). Instead of only using a binary representation, our formal specification models the bitmap using a datatype _BlockState_ that is composed of these cases together with _FREE_. The reason for this decision is to simplify proving that the bitmap shape is well-formed. In particular, this representation makes it less complex to verify the case in which the descendant of a free block is a non-free block: this is the case where the last free block has not been split and therefore lower levels do not exist. We illustrate the structure of a memory pool in Fig. 7. The top of the figure shows the real memory of the first block at level \(0\).

### Invariant

The structural properties clarify the constraints on and the consistency of quad trees, free block lists, the memory pool configuration, and waiting threads. All of them are formulated as invariants on the kernel state and have been formally verified on the formal specification in Isabelle/HOL.

#### 3.2.1. Well-shaped bitmaps

We say that the logical memory block \(j\) at a level \(i\) physically exists iff the value of the bit \(j\) at the level \(i\) is _ALLOCATED_, _FREE_, _ALLOCATING_, or _FREEING_, represented by the predicate _is_memblock_. We do not consider blocks marked as _DIVIDED_ as physical blocks, since such a block is only a logical block containing other blocks. A valid forest is defined by the following rules: (1) the parent bit of an existing memory block is _DIVIDED_ and its child bits are _NOEXIST_, denoted by the predicate _noexist_bits_ that checks, for a given bitmap \(b\) and a position \(j\), that the bits \(b\,!\,j\) to \(b\,!\,(j+3)\) are set to _NOEXIST_; (2) the parent bit of a _DIVIDED_ block is also _DIVIDED_; and (3) the child bits of a _NOEXIST_ bit are also _NOEXIST_ and its parent cannot be a _DIVIDED_ block. The property is defined as the predicate **inv-bitmap**(\(s\)) as follows, where \(s\) is the system state of the Zephyr memory management.

**inv-bitmap** \(s\equiv\forall p\in\) _mem-pools_ \(s\). **let** \(mp=\) _mem-pool-info_ \(s\) \(p\) **in**
\(\forall i<\) _length_ (_levels_ \(mp\)). **let** \(bts=\) _bits_ (_levels_ \(mp\,!\,i\)) **in**
\(\forall j<\) _length_ \(bts\).
(_is-memblock_ (\(bts\,!\,j\)) \(\longrightarrow\) (\(i>0\longrightarrow\) (_bits_ (_levels_ \(mp\,!\,(i-1)\))) \(!\) (\(j\) _div_ \(4\)) \(=\) _DIVIDED_) \(\wedge\) (\(i<\) _length_ (_levels_ \(mp\)) \(-\,1\longrightarrow\) _noexist-bits_ \(mp\) (\(i+1\)) (\(j*4\)))) \(\wedge\)
(\(bts\,!\,j=\) _DIVIDED_ \(\longrightarrow i>0\longrightarrow\) (_bits_ (_levels_ \(mp\,!\,(i-1)\))) \(!\) (\(j\) _div_ \(4\)) \(=\) _DIVIDED_) \(\wedge\)
(\(bts\,!\,j=\) _NOEXIST_ \(\longrightarrow i<\) _length_ (_levels_ \(mp\)) \(-\,1\longrightarrow\) _noexist-bits_ \(mp\) (\(i+1\)) (\(j*4\))) \(\wedge\)
(\(bts\,!\,j=\) _NOEXIST_ \(\wedge\ i>0\longrightarrow\) (_bits_ (_levels_ \(mp\,!\,(i-1)\))) \(!\) (\(j\) _div_ \(4\)) \(\neq\) _DIVIDED_)

In Isabelle, _mem_pools_ \(s\) captures the set of pools in state \(s\), and _mem_pool_info_ \(s\) \(p\) gets the memory pool referred to by \(p\). For a list \(l\), \(l\,!\,i\) gets the \(i\)th element. There are two additional properties on bitmaps. First, the address space of any memory pool cannot be empty, i.e., the bits at level \(0\) have to be different from _NOEXIST_. Second, the allocation algorithm may split a memory block into smaller ones, but not the blocks at the lowest level (i.e., level \(n\_levels-1\)); therefore the bits at the lowest level have to be different from _DIVIDED_, being invalid if divided. The first property is defined as **inv-bitmap0**(\(s\)) and the second as **inv-bitmapn**(\(s\)).

Figure 7. Structure of Memory Pools

**inv-bitmap0** \(s\equiv\forall p\in\) _mem-pools_ \(s\). **let** _bits0_ \(=\) _bits_ (_levels_ (_mem-pool-info_ \(s\) \(p\)) \(!\) \(0\)) **in** \(\forall i<\) _length_ _bits0_. _bits0_ \(!\) \(i\neq\) _NOEXIST_

**inv-bitmapn** \(s\equiv\forall p\in\) _mem-pools_ \(s\). **let** _bitsn_ \(=\) _bits_ (_levels_ (_mem-pool-info_ \(s\) \(p\)) \(!\) (_length_ (_levels_ (_mem-pool-info_ \(s\) \(p\))) \(-\) \(1\))) **in** \(\forall i<\) _length_ _bitsn_. _bitsn_ \(!\) \(i\neq\) _DIVIDED_

#### 3.2.2. Consistency of the memory configuration

The configuration of a memory pool is set when it is initialized. Since the minimum block size is aligned to 4 bytes, there must exist an \(n>0\) such that the maximum size of a pool is equal to \(4\times n\times 4^{n\_levels}\), relating the number of levels of a level 0 block with its maximum size. Moreover, the number of blocks at level 0 and the number of levels have to be greater than zero, since the memory pool cannot be empty. The number of levels is equal to the length of the pool _levels_ list. Finally, the length of the bitmap at level \(i\) has to be \(n\_max\times 4^{i}\). This property is defined as **inv-mempool-info**(\(s\)).

**inv-mempool-info** \(s\equiv\forall p\in\) _mem-pools_ \(s\). 
**let** \(mp=\) _mem-pool-info_ \(s\) \(p\) **in**
(\(\exists n>0.\) _max-sz_ \(mp=4*n*4^{\,n\_levels\ mp}\)) \(\wedge\) _n-max_ \(mp>0\) \(\wedge\) _n-levels_ \(mp>0\) \(\wedge\) _n-levels_ \(mp=\) _length_ (_levels_ \(mp\)) \(\wedge\) (\(\forall i<\) _length_ (_levels_ \(mp\)). _length_ (_bits_ (_levels_ \(mp\,!\,i\))) \(=\) _n-max_ \(mp*4^{i}\))

#### 3.2.3. No partner fragmentation

The memory release algorithm in Zephyr coalesces free partner memory blocks into blocks as large as possible for all the descendants of the root level, without including it. Thus, a memory pool does not contain four _FREE_ partner bits. This is checked by the _partner_bits_ predicate. Note that the blocks of a pool at level 0 are never coalesced. This property is defined as the **inv-bitmap-not4free**(\(s\)) predicate as follows.

**inv-bitmap-not4free** \(s\equiv\forall p\in\) _mem-pools_ \(s\). **let** \(mp=\) _mem-pool-info_ \(s\) \(p\) **in** \(\forall i<\) _length_ (_levels_ \(mp\)). **let** \(bts=\) _bits_ (_levels_ \(mp\,!\,i\)) **in** (\(\forall j<\) _length_ \(bts\). \(i>0\longrightarrow\neg\) _partner-bits_ \(bts\) \(j\))

#### 3.2.4. Validity of free block lists

The free list at each level keeps the starting addresses of the free memory blocks of that level. The memory management ensures that the addresses in the list are valid, i.e., they are different from each other and aligned to the _block size_, which at a level \(i\) is \(max\_sz/4^{i}\). Moreover, a memory block is in the free list iff the corresponding bit of the bitmap is _FREE_. This property is defined as the **inv-bitmap-freelist**(\(s\)) predicate as follows.

**inv-bitmap-freelist** \(s\equiv\forall p\in\) _mem-pools_ \(s\). **let** \(mp=\) _mem-pool-info_ \(s\) \(p\) **in**
\(\forall i<\) _length_ (_levels_ \(mp\)). **let** \(bts=\) _bits_ (_levels_ \(mp\,!\,i\)); \(fl=\) _free-list_ (_levels_ \(mp\,!\,i\)) **in**
(\(\forall j<\) _length_ \(bts\). \(bts\,!\,j=\) _FREE_ \(\longleftrightarrow\) _buf_ \(mp+j*\)(_max-sz_ \(mp\) _div_ \(4^{i}\)) \(\in\) _set_ \(fl\)) \(\wedge\)
(\(\forall j<\) _length_ \(fl\). (\(\exists n.\ n<\) _n-max_ \(mp*4^{i}\wedge fl\,!\,j=\) _buf_ \(mp+n*\)(_max-sz_ \(mp\) _div_ \(4^{i}\)))) \(\wedge\) _distinct_ \(fl\)

#### 3.2.5. Non-overlapping of memory pools

The memory spaces of the set of pools defined in a system must be disjoint, so the set of memory addresses of a pool does not belong to the memory space of any other pool. This property is defined as the **inv-pools-notoverlap**(\(s\)) predicate as follows, where _pool-addrs_ \(mp\) denotes the set of memory addresses covered by the pool \(mp\).

**inv-pools-notoverlap** \(s\equiv\forall p_{1}\ p_{2}.\ p_{1}\in\) _mem-pools_ \(s\wedge p_{2}\in\) _mem-pools_ \(s\wedge p_{1}\neq p_{2}\longrightarrow\) _pool-addrs_ (_mem-pool-info_ \(s\) \(p_{1}\)) \(\cap\) _pool-addrs_ (_mem-pool-info_ \(s\) \(p_{2}\)) \(=\varnothing\)

#### 3.2.6. Validity of waiting queues

Threads in the waiting queue of a memory pool, i.e., threads waiting for available memory blocks, have to be in a _BLOCKED_ state. Conversely, a _BLOCKED_ thread must be in the waiting queue of some pool, each waiting queue contains no repeated threads, and a thread cannot wait on two different pools at the same time. This property is defined as the **inv-thd-waitq**(\(s\)) predicate as follows.

**inv-thd-waitq** \(s\equiv\)
(\(\forall p\in\) _mem-pools_ \(s\). \(\forall t\in\) _set_ (_wait-q_ (_mem-pool-info_ \(s\) \(p\))). _thd-state_ \(s\) \(t=\) _BLOCKED_) \(\wedge\)
(\(\forall t.\) _thd-state_ \(s\) \(t=\) _BLOCKED_ \(\longrightarrow\) (\(\exists p\in\) _mem-pools_ \(s\). \(t\in\) _set_ (_wait-q_ (_mem-pool-info_ \(s\) \(p\))))) \(\wedge\)
(\(\forall p\in\) _mem-pools_ \(s\). _dist-list_ (_wait-q_ (_mem-pool-info_ \(s\) \(p\)))) \(\wedge\)
(\(\forall p\ q.\ p\in\) _mem-pools_ \(s\wedge q\in\) _mem-pools_ \(s\wedge p\neq q\longrightarrow\) (\(\nexists t.\ t\in\) _set_ (_wait-q_ (_mem-pool-info_ \(s\) \(p\))) \(\wedge t\in\) _set_ (_wait-q_ (_mem-pool-info_ \(s\) \(q\)))))

#### 3.2.7. Consistency of freeing and allocating blocks

During allocation and release of a memory block, blocks of the tree may temporarily be manipulated during the coalescing and division processes. A block can be manipulated by only one thread at a time, and the state bit of a block being temporarily manipulated has to be _FREEING_ or _ALLOCATING_. Moreover, each of these memory blocks is being manipulated by at most one thread. 
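Read operationally, the three rules of **inv-bitmap** (Section 3.2.1) are local checks on every bit of every level. The following self-contained C sketch checks them over small per-level arrays; the state codes, array layout, and function names are ours, and the transient _ALLOCATING_/_FREEING_ states are folded into "existing" for brevity:

```c
/* Check of the three inv-bitmap rules over per-level bitmaps.
 * lv[i] is the bitmap of level i, with len[i] entries. */
enum { F, A, D, N };  /* FREE, ALLOCATED, DIVIDED, NOEXIST */

static int is_memblock(int b) { return b == F || b == A; }

static int inv_bitmap(const int *const *lv, const int *len, int nlevels) {
    for (int i = 0; i < nlevels; i++)
        for (int j = 0; j < len[i]; j++) {
            int b = lv[i][j];
            /* (1) an existing block: parent DIVIDED, children NOEXIST */
            if (is_memblock(b)) {
                if (i > 0 && lv[i - 1][j / 4] != D) return 0;
                if (i < nlevels - 1)
                    for (int c = 4 * j; c < 4 * j + 4; c++)
                        if (lv[i + 1][c] != N) return 0;
            }
            /* (2) the parent of a DIVIDED block is DIVIDED */
            if (b == D && i > 0 && lv[i - 1][j / 4] != D) return 0;
            /* (3) children of NOEXIST are NOEXIST; its parent not DIVIDED */
            if (b == N) {
                if (i < nlevels - 1)
                    for (int c = 4 * j; c < 4 * j + 4; c++)
                        if (lv[i + 1][c] != N) return 0;
                if (i > 0 && lv[i - 1][j / 4] == D) return 0;
            }
        }
    return 1;
}
```

A level-0 block split into four children, \(\{D\}\) over \(\{F,A,F,F\}\), satisfies the rules, while a _FREE_ level-0 block with an existing child violates rule (1).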
This property is defined as the **inv-aux-vars**(\(s\)) predicate as follows.

**inv-aux-vars** \(s\equiv\)
(\(\forall t\ n.\) _freeing-node_ \(s\) \(t=\) _Some_ \(n\longrightarrow\) _get-bit_ (_mem-pool-info_ \(s\)) (_pool_ \(n\)) (_level_ \(n\)) (_block_ \(n\)) \(=\) _FREEING_) \(\wedge\)
(\(\forall n.\) _get-bit_ (_mem-pool-info_ \(s\)) (_pool_ \(n\)) (_level_ \(n\)) (_block_ \(n\)) \(=\) _FREEING_ \(\longrightarrow\) (\(\exists t.\) _freeing-node_ \(s\) \(t=\) _Some_ \(n\))) \(\wedge\)
(\(\forall t\ n.\) _allocating-node_ \(s\) \(t=\) _Some_ \(n\longrightarrow\) _get-bit_ (_mem-pool-info_ \(s\)) (_pool_ \(n\)) (_level_ \(n\)) (_block_ \(n\)) \(=\) _ALLOCATING_) \(\wedge\)
(\(\forall n.\) _get-bit_ (_mem-pool-info_ \(s\)) (_pool_ \(n\)) (_level_ \(n\)) (_block_ \(n\)) \(=\) _ALLOCATING_ \(\longrightarrow\) (\(\exists t.\) _allocating-node_ \(s\) \(t=\) _Some_ \(n\))) \(\wedge\)
(\(\forall t_{1}\ t_{2}\ n.\ t_{1}\neq t_{2}\longrightarrow\neg\)(_freeing-node_ \(s\) \(t_{1}=\) _Some_ \(n\wedge\) _freeing-node_ \(s\) \(t_{2}=\) _Some_ \(n\)) \(\wedge\neg\)(_allocating-node_ \(s\) \(t_{1}=\) _Some_ \(n\wedge\) _allocating-node_ \(s\) \(t_{2}=\) _Some_ \(n\)))

### Memory Separation

The security property requires the separation of the memory spaces of the pools and of the memory blocks within a pool. The predicate **addr-in-block** \(mp\) \(addr\) \(i\) \(j\) states that the physical block \((i,j)\) of the pool \(mp\) contains \(addr\). Here, we use relative addresses for \(addr\). The property is defined as the **mem-part**(\(s\)) predicate as follows. 
**addr-in-block** \(mp\) \(addr\) \(i\) \(j\equiv i<\) _length_ (_levels_ \(mp\)) \(\wedge\ j<\) _length_ (_bits_ (_levels_ \(mp\,!\,i\))) \(\wedge\) _is-memblock_ ((_bits_ (_levels_ \(mp\,!\,i\)) \(!\) \(j\)) \(\wedge\ j*\)(_max-sz_ \(mp\) _div_ \(4^{i}\)) \(\leqslant addr\ \wedge\ addr<(j+1)*\)(_max-sz_ \(mp\) _div_ \(4^{i}\))

**mem-part** \(s\equiv\forall p\in\) _mem-pools_ \(s\). **let** \(mp=\) _mem-pool-info_ \(s\) \(p\) **in** \(\forall addr<\) _max-sz_ \(mp*\) _n-max_ \(mp\). \(\exists!\,(i,j).\) **addr-in-block** \(mp\) \(addr\) \(i\) \(j\)

## 4. Formalizing Zephyr Memory Management

### 4.1. Execution Model of Zephyr

#### 4.1.1. Language constructs for the specification

PiCore provides _Await_ processes represented by **AWAIT** \(b\) **THEN** \(P\) **END**. The body \(P\) is executed atomically if and only if the boolean condition \(b\) holds, not progressing otherwise. **ATOM** \(P\) **END** denotes an _Await_ statement for which its guard is \(True\). Threads and kernel processes have their own execution context and local states. Each of them is modelled in PiCore as a set of events called an _event system_ and denoted as **ESYS** \(\mathcal{S}\equiv\{\mathcal{E}_{0},\ldots,\mathcal{E}_{n}\}\). The operational semantics of an event system is the _sequential composition_ of the execution of the events composing it. 
Finally, PiCore has a construct for the parallel composition of event systems, \(esys_{0}\parallel\ldots\parallel esys_{n}\), which interleaves the execution of the events composing each event system \(esys_{i}\) for \(0\leq i\leq n\).

#### 4.1.2. Execution model of Zephyr

After being initialized, an OS kernel can be considered a reactive system that stays in an _idle_ loop until it receives an interruption, which is handled by an interruption handler. Whilst the execution of an interrupt handler is atomic in sequential kernels, it can be interrupted in concurrent kernels [6, 46], allowing services invoked by threads to be interrupted and resumed later. In the execution model of Zephyr, we consider a scheduler \(\mathcal{S}\), a timer, and a set of threads \(t_{1},...,t_{n}\). In this model, the execution of the scheduler is atomic, since kernel services cannot interrupt it. But kernel services can be interrupted via the scheduler, i.e., the execution of a memory service invoked by a thread \(t_{i}\) may be interrupted by the kernel scheduler to execute a thread \(t_{j}\). Fig. 8 illustrates the Zephyr execution model, where solid lines represent execution steps of the threads/kernel services and dotted lines represent the suspension of the thread/code. For instance, the execution of \(k\_mem\_pool\_free\) in thread \(t_{1}\) is interrupted by the scheduler, and the context is switched to thread \(t_{2}\), which invokes \(k\_mem\_pool\_alloc\). During the execution of \(t_{2}\), the kernel service may suspend the thread and switch to another thread \(t_{n}\) by calling _rescheduling_. Later, the execution is switched back to \(t_{1}\), which continues the execution of \(k\_mem\_pool\_free\) in a state possibly different from the one in which it was interrupted. The event systems of Zephyr are illustrated in the right part of Fig. 8. A user thread \(t_{i}\) invokes the allocation/release services, thus the event system for \(t_{i}\) is \(esys_{t_{i}}\), a set composed of the events _alloc_ and _free_. 
The input parameters of these events correspond to the arguments of the service implementation, which are constrained by the guard of each service. Together with the system users, we model the event system for the scheduler, \(esys_{sched}\), consisting of a unique event _sched_ whose argument is a thread \(t\) to be scheduled when \(t\) is in the _READY_ state. The formal specification of the memory management is the parallel composition of the event systems for the threads, the scheduler, and the timer.

Figure 8. An Execution Model of Zephyr Memory Management

#### 4.1.3. Thread context and preemption

Events are parametrized by a thread identifier used to access the execution context of the thread invoking them. As shown in Figure 8, the execution of an event executed by a thread can be stopped by the scheduler and resumed later. This behaviour is modelled using a global variable _cur_ indicating the thread that is currently scheduled and being executed. The model conditions the execution of events parametrized by \(t\) to the case in which \(t\) is scheduled. This is achieved by the expression \(t\blacktriangleright p\equiv\textbf{AWAIT}\ cur=t\ \textbf{THEN}\ p\ \textbf{END}\), so that an event invoked by a thread \(t\) only progresses when \(t\) is scheduled. This scheme allows using rely-guarantee reasoning for the concurrent execution of threads on mono-core architectures, where only the scheduled thread is able to modify the memory.

### Formal Specification of Memory Management Services

This section discusses the formal specification of the memory management services. These services deal with the initialization of pools, and with memory allocation and release.

#### 4.2.1. System state

The system state includes the memory model introduced in Section 3, together with the thread under execution, represented by the variable _cur_, and the variables local to the memory services. 
The local variables are used to keep temporary changes to the structure, guards of conditional and loop statements, and index accesses. The memory model is represented as a set _mem_pools_ storing the references of all memory pools and a mapping _mem_pool_info_ to query a pool by its reference. Local variables are modelled as total functions from threads to variable values, representing that the event is accessing the thread context. In the formal model of the events, we represent the access to a state component \(c\) by _'c_, and the value of a local component \(c\) for the thread \(t\) is represented as _'c t_. The local variables _allocating_node_ and _freeing_node_ are relevant for the memory services, storing the temporary blocks being split/coalesced by the allocation/release services respectively. The memory blocks allocated by a thread are stored in the local variable _mblocks_, as discussed in the previous section.

#### 4.2.2. Memory pool initialization

Zephyr defines and initializes memory pools at compile time by constructing a static variable of type _struct_ \(k\_mem\_pool\). The implementation initializes each pool with _n_max_ level 0 blocks of _max_sz_ bytes each. The bitmap of level 0 is set to 1 and its free list contains all level 0 blocks. Bitmaps and free lists of the other levels are initialized to 0 and to the empty list respectively. In the formal model, we specify a state corresponding to the initial state of the implementation and we show that it belongs to the set of states satisfying the invariant.

#### 4.2.3. Memory allocation/release services

The C code of Zephyr uses the recursive function _free_block_ to coalesce free partner blocks and the _break_ statement to stop the execution of a loop statement, neither of which is supported by the imperative language of PiCore. 
The formal specification overcomes this by transforming the recursion into a loop controlled by the recursion condition, and by using a control variable to exit a loop when the condition triggering the break is satisfied. Additionally, the memory management services use the atomic body _irq_lock(); P; irq_unlock();_ to keep interruption handlers _reentrant_ by disabling interruptions. We model this behaviour in the specification using an **ATOM** statement, preventing the service from being interrupted at that point. The rest of the formal specification closely follows the implementation, where variables are modified using higher-order functions changing the state as the code does. The reason for using Isabelle/HOL functions is that PiCore does not provide a semantics for expressions, relying instead on state transformers built from higher-order functions to change the state. Fig. 9 illustrates the PiCore specification of the _free_block_ function invoked by _k_mem_pool_free_ when releasing a memory block. The code accesses the following variables: _lsz_, _lsize_, and _lbl_ to keep information about the current level; _blk_, _bn_, and _bb_ to represent the address and number of the block currently being accessed; _freeing_node_ to represent the node being freed; and \(i\) to iterate over blocks. Additionally, the model includes the component _free_block_r_ to model the recursion condition. To simplify the representation, the model uses predicates and functions to access and modify the state. We refer readers to the Isabelle/HOL sources for the complete specification of these functions and the complete formal model. In the C code, _free_block_ is a recursive function with two conditions: (1) the block being released belongs to a level higher than zero, since blocks at level zero cannot be merged; and (2) the partner bits of the block being released are _FREE_, so they can be merged into a bigger block. 
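The iterative coalescing that these two conditions drive can be sketched over per-level state arrays (a simplification; state codes and layout are ours, and free-list bookkeeping is omitted). Releasing a block whose three partners are free merges the four into their parent one level up, and the walk continues while partners remain free:

```c
/* Iterative sketch of free_block over per-level bitmaps.
 * lv[i] is the bitmap of level i. */
enum { FREE, ALLOCATED, DIVIDED, FREEING, NOEXIST };

static void free_block(int **lv, int level, int j, int *out_level, int *out_j) {
    lv[level][j] = FREEING;                     /* release the block */
    while (level > 0) {
        int base = (j / 4) * 4, partners_free = 1;
        for (int k = base; k < base + 4; k++)   /* condition (2): partners FREE */
            if (k != j && lv[level][k] != FREE)
                partners_free = 0;
        if (!partners_free)
            break;
        for (int k = base; k < base + 4; k++)   /* merge: the children disappear */
            lv[level][k] = NOEXIST;
        j /= 4;
        level--;                                /* the parent is now being freed */
        lv[level][j] = FREEING;
    }
    lv[level][j] = FREE;                        /* recursion stops: block is free */
    *out_level = level;
    *out_j = j;
}
```

Releasing block 174 with blocks 172, 173 and 175 free merges the four into block 43; if a partner of block 43 is allocated, the walk stops there and block 43 ends up _FREE_ at the level above.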
We represent (1) with the predicate \({}^{\prime}lbl\ t>0\) and (2) with the predicate _partner_bit_free_. The formal specification follows the same structure, translating the recursive function into a loop that is controlled by a variable mimicking the recursion. The recursive process of _free_block_ is illustrated in Fig. 10. The formal specification of _free_block_ first releases the allocated memory block \(bn\), setting it to _FREEING_. Then, the loop statement sets _free_block_ to _FREE_ (Line 5 in Fig. 9), and checks in Line 7 that the iteration/recursion condition holds. If the condition holds, the partner bits are set to _NOEXIST_ and their addresses are removed from the free list of the level (Lines 12 - 15). Then, it sets the parent block bit to _FREEING_ (Lines 17 - 22) and updates the variables controlling the current block and level numbers, before going back to the beginning of the loop. If the iteration condition does not hold, it sets the bit to _FREE_, adds the block to the free list (Lines 24 - 28), and sets the loop condition to false to end the procedure.

Figure 9: The PiCore Specification of _free_block_

In the example of Fig. 10, block 174 is released by a thread, and since its partner blocks (blocks 172, 173 and 175) are free, Zephyr coalesces the four blocks and sets their parent block 43 to _FREEING_. The coalescence continues iteratively if the partners of block 43 are all free. The main part of the C code of the \(k\_mem\_pool\_free\) service and its complete formalization are shown in Appendices A and B respectively.

#### 4.2.4. Formal specification of the memory management

The PiCore specification of the memory management of Zephyr is finally defined as follows. The events for the scheduler and the timer are simple. The _schedule_ event chooses a _READY_ thread \(t\) to be executed and sets the current thread to _READY_. 
The _tick_ event just increases the _tick_ variable in the system state by one. The _tick_ variable is used for the _timeout_ waiting mode of memory allocation.

\[\begin{split} Mem\_Spec\equiv\lambda\kappa.\ \textbf{case}\ \kappa\ \textbf{of}\ (\mathcal{T}\ t)\Rightarrow(\bigcup b.\ \{free(b)@t\})\cup(\bigcup(p,sz,to).\ \{alloc(p,sz,to)@t\})\\ |\ \mathcal{S}\Rightarrow\bigcup t.\ \{schedule(t)\}\\ |\ \textbf{Timer}\Rightarrow\{tick\}\end{split} \tag{1}\]

## 5. Compositional Reasoning about Integrity in PiCore

In this section, we present a compositional reasoning approach for the verification of integrity in PiCore. We use the notion of integrity from (Safar et al., 2016), which provides a formalism for the specification of security policies. We first define the integrity of a PiCore specification. Then, we show that reasoning about the integrity of the system can be decomposed to its events. For convenience, we first briefly introduce the operational semantics and computations of PiCore, which are the foundation of the integrity property and its compositional reasoning.

### Operational Semantics and Computations of PiCore

The semantics of PiCore is defined via transition rules between configurations. We define a configuration \(\mathcal{C}\) in PiCore as a triple \((\sharp,s,x)\) where \(\sharp\) is a specification, \(s:S\) is a system state, and \(x:\mathcal{K}\rightarrow\mathcal{E}\) is an event context. The event context indicates which event is currently being executed in an event system \(\kappa\). \(\sharp_{\mathcal{C}}\), \(s_{\mathcal{C}}\), and \(x_{\mathcal{C}}\) represent the projection of each component of the tuple \(\mathcal{C}=(\sharp,s,x)\).
Transition rules in events, event systems, and parallel event systems have the form \(\Sigma\vdash(\sharp_{1},s_{1},x_{1})\xrightarrow{\delta}_{\Box}(\sharp_{2},s_{2},x_{2})\), where \(\delta=t@\kappa\) is a label indicating the type of transition, the subscript "\(\Box\)" (\(e\), \(es\) or \(pes\)) indicates the transition objects, and \(\Sigma\) holds static configuration for programs (e.g. an environment for procedure declarations). Here \(t\) indicates a program action \(c\) or an occurrence of an event \(\mathcal{E}\), and \(@\kappa\) means that the action occurs in event system \(\kappa\). Environment transition rules have the form \(\Sigma\vdash(\sharp,s,x)\xrightarrow{env}_{\Box}(\sharp,s^{\prime},x^{\prime})\). Intuitively, a transition made by the environment may change the state but not the event context nor the specification. The parallel composition of event systems is fine-grained, since small steps in events are interleaved in the semantics of PiCore.

Figure 10. Coalescing Memory Blocks in _free_block_

A _computation_ of PiCore is a sequence of transitions. We define the set of computations of all parallel event systems with static information \(\Sigma\) as \(\Psi(\Sigma)\), which is a set of lists of configurations inductively defined as follows. The singleton list is always a computation (1). Two consecutive configurations are part of a computation if they are the initial and final configurations of an environment (2) or action transition (3).
\[\begin{cases}(1)\ [(\mathcal{PS},s,x)]\in\Psi(\Sigma)\\ (2)\ (\mathcal{PS},s_{1},x_{1})\#cs\in\Psi(\Sigma)\Longrightarrow(\mathcal{PS},s_{2},x_{2})\#(\mathcal{PS},s_{1},x_{1})\#cs\in\Psi(\Sigma)\\ (3)\ \Sigma\vdash(\mathcal{PS}_{2},s_{2},x_{2})\xrightarrow{\delta}_{pes}(\mathcal{PS}_{1},s_{1},x_{1})\wedge(\mathcal{PS}_{1},s_{1},x_{1})\#cs\in\Psi(\Sigma)\\ \qquad\Longrightarrow(\mathcal{PS}_{2},s_{2},x_{2})\#(\mathcal{PS}_{1},s_{1},x_{1})\#cs\in\Psi(\Sigma)\end{cases}\]

Computations for events and event systems are defined in a similar way. We use \(\Psi(\Sigma,\mathcal{PS})\) to denote the set of computations of a parallel event system \(\mathcal{PS}\). The function \(\Psi(\Sigma,\mathcal{PS},s,x)\) denotes the computations of \(\mathcal{PS}\) starting from an initial state \(s\) and event context \(x\).

The semantics of PiCore is compositional: a computation \(\omega\) of \(\mathcal{PS}\) can be decomposed into a set of computations \(\tilde{\omega}\) of its event systems. Computations in \(\tilde{\omega}\) have the same state and event context sequence, and they do not have component transitions at the same time. \(\omega\) also has the same state and event context sequence as \(\tilde{\omega}\). Furthermore, a transition of \(\omega\) is labelled \(\delta\) if this is the label in one of the computations \(\tilde{\omega}\) at the corresponding position, and it is an environment transition if this is the case in all computations \(\tilde{\omega}\) at the corresponding position. We use the _conjoin_ notation \(\omega\propto\tilde{\omega}\) to denote this compositionality, and \(\tilde{\omega}^{\kappa}\) denotes the computation of \(\kappa\).
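The inductive clauses (1) - (3) can be read as a simple membership test on lists of configurations. The Python sketch below is an illustrative rendering of ours, not part of the PiCore formalization: it checks that every adjacent pair of configurations is either an environment transition, which may change the state but keeps the specification and event context, or a program step allowed by a supplied transition relation.

```python
def is_computation(confs, is_pes_step):
    """Clause (1): a singleton list is a computation.
    Clauses (2)/(3): each adjacent pair of configurations (spec, state, ctx)
    is an environment transition (spec and event context unchanged) or a
    program (pes) step admitted by is_pes_step."""
    for c2, c1 in zip(confs, confs[1:]):
        env_step = c2[0] == c1[0] and c2[2] == c1[2]  # spec and context unchanged
        if not (env_step or is_pes_step(c2, c1)):
            return False
    return True
```

For instance, an environment may change only the state component, while a change of specification requires a program step.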
### Integrity in PiCore

In general, the definition of integrity relies on a security configuration for the system, i.e. its security policies, as well as a state machine representation of the system.

#### 5.2.1. Security Configuration

In order to discuss the security of a PiCore specification \(\mathcal{PS}\), we assume a set of security domains \(\mathcal{D}\) and a security policy \(\leadsto\) that restricts the allowable flow of information among those domains. The security policy \(\leadsto\) is a reflexive relation on \(\mathcal{D}\). \(d_{1}\leadsto d_{2}\) means that actions performed by \(d_{1}\) can influence subsequent outputs seen by \(d_{2}\). \(\not\leadsto\) is the complement relation of \(\leadsto\). We call \(\leadsto\) and \(\not\leadsto\) the _interference_ and _noninterference_ relations respectively.

Each event has an associated execution domain responsible for invoking it. Traditional formulations in information-flow security assume a static mapping from events to domains, such that the domain of an event can be determined solely from the event itself [35, 44]. For flexibility, we use a dynamic mapping, represented by a function \(dom\_e:S\times\mathcal{K}\times\mathcal{E}\rightarrow\mathcal{D}\), where \(dom\_e(s,\kappa,ev)\) is the execution domain of event \(ev\) on context \(\kappa\) in state \(s\). The specification \(\mathcal{PS}\) is _view-partitioned_ if, for each domain \(d\in\mathcal{D}\), there is an equivalence relation \(\stackrel{{ d}}{{\sim}}\) on \(S\). For convenience, we define \(\mathcal{C}_{1}\stackrel{{ d}}{{\sim}}\mathcal{C}_{2}\triangleq s_{\mathcal{C}_{1}}\stackrel{{ d}}{{\sim}}s_{\mathcal{C}_{2}}\).

#### 5.2.2. State Machine Representation of PiCore Specifications

The state-event IFS is usually defined on a state machine. Here, we construct an equivalent state machine for a PiCore specification. The states of the machine are the configurations of the PiCore semantics.
The security of PiCore considers small-step actions of systems. A small-step action in the machine is identified by the label of a transition, the event that the action belongs to, and the domain that triggers the event; it is thus a triple \((\delta,ev,d)\). The label of the transition can represent an occurrence of an event \(ev_{a}\), or \(c\) when it is an internal step of an event already triggered; in that case \(x_{\mathcal{C}}(\kappa)\) stores the event being executed. We construct a nondeterministic state machine for a closed PiCore specification as follows. Here, a closed specification means that we do not consider the environment of the whole system, i.e. the rely condition of the system is the identity relation.

Definition 5.1.: A state machine of a specification \(\mathcal{PS}\) is a quadruple \(\mathcal{M}=\langle\Delta,A,step,C_{0}\rangle\), where

* \(\Delta\) is the set of configurations.
* \(A\) is the set of actions. An action is a triple \(a=\langle\delta,ev,d\rangle\), where \(\delta\) is a transition label in the PiCore semantics, \(ev\) is an event, and \(d\) is a domain. The notations \(\delta_{a}\), \(ev_{a}\) and \(d_{a}\) respectively denote the projections of the components of an action \(a\).
* \(step:A\rightarrow\mathbb{P}(\Delta\times\Delta)\) is the transition function, where \(step(a)=\{(C,C^{\prime})\mid\Sigma\vdash C\xrightarrow{\delta_{a}}_{pes}C^{\prime}\wedge((\delta_{a}=ev_{a}@\kappa\wedge dom\_e(s_{C},\kappa,ev_{a})=d_{a})\vee(\delta_{a}=c@\kappa\wedge ev_{a}=x_{C}(\kappa)\wedge dom\_e(s_{C},\kappa,ev_{a})=d_{a}))\}\).
* \(C_{0}\) is the initial configuration \((\sharp_{0},s_{0},x_{0})\).

Based on the function \(step\), we define the function \(run\) as follows to represent the execution of a sequence of actions.
\[\begin{cases}run(Nil)=Id\\ run(a\#as)=step(a)\circ run(as)\end{cases}\]

We prove the following lemma to ensure that the state machine is an equivalent representation of the PiCore specification.

Lemma 5.2 (Equivalence of PiCore and Its State Machine Representation).: _The state machine defined in Definition 5.1 is an equivalent representation of PiCore, i.e., if \((C_{1},C_{2})\in run(as)\), then \(\exists\omega.\ \omega\in\Psi(\Sigma,\mathcal{PS})\wedge\omega_{0}=C_{1}\wedge last(\omega)=C_{2}\wedge(\forall j<len(\omega)-1.\ (\omega_{j},\omega_{j+1})\in step(as_{j}))\), and vice versa._

### Compositional Reasoning

In order to decompose the integrity reasoning of the system to its events, we define a form of integrity on events as follows.

Definition 5.4 (Integrity on Events).: The integrity on the events composing a parallel event system \(\mathcal{PS}\) is defined as

\[\forall ev\ d\ s\ s^{\prime}\ \kappa.\ ev\in evts(\mathcal{PS})\wedge(s,s^{\prime})\in G_{\Gamma(ev)}\wedge(dom\_e(s,\kappa,ev)\not\leadsto d)\longrightarrow s\overset{d}{\sim}s^{\prime}\]

where the \(evts(\mathcal{PS})\) function returns all the events defined in a specification \(\mathcal{PS}\). We assume a function \(\Gamma:evts(\mathcal{PS})\to RGCond\), where \(RGCond\) is the type of rely-guarantee specifications, to give the rely-guarantee specification of each event in \(\mathcal{PS}\). \(G_{\Gamma(ev)}\) is the guarantee condition in the rely-guarantee specification of an event \(ev\). The integrity on events requires that when an event \(ev\) is executed, the interaction of \(ev\) with the environment affects only those domains to which the domain executing \(ev\) is allowed to send information, according to the relation \(\leadsto\). Different from the integrity on actions of parallel event systems in Definition 5.3, the integrity here considers the global effects of events on the environment. Next, we show the compositionality of integrity, i.e. the integrity on events implies the integrity on parallel event systems.
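Definition 5.1's \(step\) and the recursive \(run\) amount to composing relations over configurations. The Python rendering below is our own; the toy counter machine, its states, and its `inc`/`dec` actions are invented for illustration, and the fold executes actions front-to-back, one reasonable reading of the definition.

```python
def compose(r, s):
    """Relation composition: (a, c) is in the result iff some b links them."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def run(step, actions, states):
    """run(Nil) = Id; run(a # as) composes step(a) with the rest."""
    rel = {(q, q) for q in states}   # the identity relation Id
    for a in actions:                # fold the action list into one relation
        rel = compose(rel, step(a))
    return rel

# Toy machine: configurations are counters 0..5, actions move them by one.
STATES = range(6)
def step(a):
    d = 1 if a == "inc" else -1
    return {(q, q + d) for q in STATES if q + d in STATES}
```

Running `["inc", "inc", "dec"]` relates each start state to the states reachable through exactly those three actions.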
First, Lemma 5.5 shows the consistency of the event context in computations of a closed \(\mathcal{PS}\): a program transition of \(\mathcal{PS}\), \(\Sigma\vdash\omega_{i}\xrightarrow{c@\kappa}_{pes}\omega_{i+1}\), must be a transition of the event currently under execution.

Lemma 5.5.: _For any closed \(\mathcal{PS}\), if \(\forall ev\in evts(\mathcal{PS}).\ is\_basic(ev)\), that is, all events in \(\mathcal{PS}\) are basic events, then for any computation \(\omega\) of \(\mathcal{PS}\), we have_

\[\forall i<len(\omega)-1,\kappa.\ (\Sigma\vdash\omega_{i}\xrightarrow{c@\kappa}_{pes}\omega_{i+1})\longrightarrow(\exists ev\in evts(\mathcal{PS}).\ x_{\omega_{i}}(\kappa)=ev)\]

Proof.: For the computation \(\omega\), we have its conjoined computations \(\tilde{\omega}\) such that \(\omega\propto\tilde{\omega}\). Hence, for a program transition \(\Sigma\vdash\omega_{i}\xrightarrow{c@\kappa}_{pes}\omega_{i+1}\), we have that \(\Sigma\vdash\tilde{\omega}_{i}^{\kappa}\xrightarrow{c@\kappa}_{es}\tilde{\omega}_{i+1}^{\kappa}\). Since all events in \(\mathcal{PS}\) are basic events, all events in the event system \(\mathcal{PS}(\kappa)\) are basic events too. Thus, there must be an event occurrence transition \(\Sigma\vdash\tilde{\omega}_{m}^{\kappa}\xrightarrow{ev@\kappa}_{es}\tilde{\omega}_{m+1}^{\kappa}\) where \(m<i\) and \(ev\in evts(\mathcal{PS})\). This transition sets the event context of \(\kappa\) to \(ev\), and all transitions between \(m\) and \(i\) are program or environment transitions, which do not change the event context.

Second, Lemma 5.6 shows the compositionality of the guarantee conditions of events in a valid and closed parallel event system, i.e., any component transition must preserve the guarantee condition of the current event.

Lemma 5.6.: _For any closed \(\mathcal{PS}\), if_ 1.
\(\mathcal{C}_{0}=(\mathcal{PS},s_{0},x_{0})\)_._ 2. _events in_ \(\mathcal{PS}\) _are basic events, i.e.,_ \(\forall ev\in evts(\mathcal{PS})\)_. is_basic(ev)._ 3. _events in_ \(\mathcal{PS}\) _satisfy their rely-guarantee specifications, i.e.,_ \(\forall ev\in evts(\mathcal{PS})\)_._ \(\Sigma\vdash ev\) _sat_ \(\Gamma(ev)\)_._ 4. _from (3) we have_ \(\Sigma\vdash\mathcal{PS}\) _sat_ \(\langle\{s_{0}\},\{\},UNIV,UNIV\rangle\)_._

_then for any computation \(\omega\in\Psi(\mathcal{PS},s_{0},x_{0})\), we have_

\[\forall i<len(\omega)-1,\kappa.\ (\Sigma\vdash\omega_{i}\xrightarrow{c@\kappa}_{pes}\omega_{i+1})\longrightarrow(s_{\omega_{i}},s_{\omega_{i+1}})\in G_{\Gamma(x_{\omega_{i}}(\kappa))}\]

Proof.: From assumption (4), we know that the rely-guarantee specifications of the events in assumption (3) are compositional, i.e. the execution of each event in all computations of \(\Psi(\mathcal{PS},s_{0},x_{0})\) preserves its specification. For the computation \(\omega\), we have its conjoined computations \(\tilde{\omega}\) such that \(\omega\propto\tilde{\omega}\). Hence, for a program transition \(\Sigma\vdash\omega_{i}\xrightarrow{c@\kappa}_{pes}\omega_{i+1}\), we have that \(\Sigma\vdash\tilde{\omega}_{i}^{\kappa}\xrightarrow{c@\kappa}_{es}\tilde{\omega}_{i+1}^{\kappa}\). Next, we apply induction on \(\sharp_{\tilde{\omega}_{0}^{\kappa}}\):

1. \(\sharp_{\tilde{\omega}_{0}^{\kappa}}=\{\mathcal{E}_{0},\ ...,\ \mathcal{E}_{m}\}\): the execution of an event system is a sequence of executions of its composing events, so the program transition is a transition of one of its events. We split the computation \(\tilde{\omega}^{\kappa}\) into a set of computations of events, and the program transition is in a computation of an event \(ev\in\{\mathcal{E}_{0},\ ...,\ \mathcal{E}_{m}\}\). By Lemma 5.5 and \(\Sigma\vdash ev\)**sat**\(\Gamma(ev)\), we have the conclusion.

2.
\(\sharp_{\tilde{\omega}_{0}^{\kappa}}=ev\succ\mathcal{S}\): first, if the transition belongs to the execution of \(ev\), we have \(x_{\tilde{\omega}_{i}^{\kappa}}(\kappa)=ev\) by Lemma 5.5 and the semantics of event occurrence. Moreover, the execution of \(ev\succ\mathcal{S}\) before position \(i\) is the same as that of \(ev\), and \(\Sigma\vdash ev\)**sat**\(\Gamma(ev)\). Hence \((s_{\omega_{i}},s_{\omega_{i+1}})\in G_{\Gamma(x_{\omega_{i}}(\kappa))}\). Second, if the transition belongs to the execution of \(\mathcal{S}\), we have the conclusion by the inductive case (1).

By these two lemmas and the equivalence in Lemma 5.2, we have the following theorem for the compositionality of integrity.

Theorem 5.7 (Compositionality of Integrity).: _For a closed parallel event system \(\mathcal{PS}\), if_

1. \(\mathcal{C}_{0}=(\mathcal{PS},s_{0},x_{0})\)_._
2. _events in_ \(\mathcal{PS}\) _are basic events, i.e.,_ \(\forall ev\in evts(\mathcal{PS})\)_._ \(is\_basic(ev)\)_._
3. _events in_ \(\mathcal{PS}\) _satisfy their rely-guarantee specifications, i.e.,_ \(\forall ev\in evts(\mathcal{PS})\)_._ \(\Sigma\vdash ev\)**sat**\(\Gamma(ev)\)_._
4. \(\Sigma\vdash\mathcal{PS}\)**sat**\(\langle\{s_{0}\},\{\},UNIV,UNIV\rangle\)_._
5. \(\mathcal{PS}\) _satisfies the integrity on its events._

_then \(\mathcal{PS}\) preserves the integrity property._

We require that all events in \(\mathcal{PS}\) are basic events to ensure that the event context in computations of \(\mathcal{PS}\) is consistent before the execution of an event. This is a reasonable assumption, since anonymous events are only used to represent intermediate specifications during the execution of events and they do not modify the event context. Assumption (4) ensures the compositionality of the rely-guarantee specifications of the events in assumption (3), i.e. the execution of each event in all computations of \(\Psi(\mathcal{PS},s_{0},x_{0})\) preserves its specification.
It is a highly relaxed condition and easy to prove. First, we only consider closed concurrent systems starting from the initial state \(s_{0}\); thus, the precondition only contains the initial state and the rely condition is empty. Second, we do not constrain the behaviour of the parallel event system, so the guarantee condition is the universal set. Third, integrity only concerns the action transitions, not the final state; thus, the postcondition is the universal set.

## 6. Rely-Guarantee Proof of Zephyr

We have proven the correctness of the buddy memory management in Zephyr using the rely-guarantee proof system of PiCore. We ensure functional correctness of each kernel service w.r.t. the defined pre/post conditions, termination of loop statements in the kernel services, the separation of local variables of threads, safety by invariant preservation, and security by memory separation. In this section, we introduce how these properties are specified and verified using the PiCore rely-guarantee proof system. The safety and security properties verified in this article are in fact embedded in the guarantee conditions of the events of the memory management specification. We first present the rely-guarantee specification of the events.

### Correctness of the Specification

Using the compositional reasoning of PiCore, the correctness of Zephyr memory management can be specified and verified through the rely-guarantee specification of each event.
The guarantee conditions of the two memory services are the same, defined as:

\[\textbf{Mem-pool-guar}\ t\equiv\overbrace{Id}^{(1)}\cup\ (\overbrace{gvars\_conf\_stable}^{(2)}\cap\ \dots)\]

**Mem-pool-alloc-pre**\ \(t\equiv\{s.\ inv\ s\ \wedge\ allocating\_node\ s\ t=None\ \wedge\ freeing\_node\ s\ t=None\}\)

**Mem-pool-alloc-post**\ \(t\ p\ sz\ timeout\equiv\)
\(\{s.\ inv\ s\ \wedge\ allocating\_node\ s\ t=None\ \wedge\ freeing\_node\ s\ t=None\)
\(\wedge\ (timeout=FOREVER\longrightarrow\)
\(\quad(ret\ s\ t=ESIZEERR\ \wedge\ mempoolalloc\_ret\ s\ t=None\ \vee\)
\(\quad ret\ s\ t=OK\ \wedge\ (\exists mblk.\ mempoolalloc\_ret\ s\ t=Some\ mblk\ \wedge\ mblk\_valid\ s\ p\ sz\ mblk)))\)
\(\wedge\ (timeout=NOWAIT\longrightarrow\)
\(\quad((ret\ s\ t=ENOMEM\ \vee\ ret\ s\ t=ESIZEERR)\ \wedge\ mempoolalloc\_ret\ s\ t=None)\ \vee\)
\(\quad(ret\ s\ t=OK\ \wedge\ (\exists mblk.\ mempoolalloc\_ret\ s\ t=Some\ mblk\ \wedge\ mblk\_valid\ s\ p\ sz\ mblk)))\)
\(\wedge\ (timeout>0\longrightarrow\)
\(\quad((ret\ s\ t=ETIMEOUT\ \vee\ ret\ s\ t=ESIZEERR)\ \wedge\ mempoolalloc\_ret\ s\ t=None)\ \vee\)
\(\quad(ret\ s\ t=OK\ \wedge\ (\exists mblk.\ mempoolalloc\_ret\ s\ t=Some\ mblk\ \wedge\ mblk\_valid\ s\ p\ sz\ mblk)))\}\)

If a thread requests a memory block in mode _FOREVER_, it may successfully allocate a valid memory block, or fail (_ESIZEERR_) if the requested size is larger than the size of the memory pool. If the thread requests a memory block in mode _NOWAIT_, it may also get _ENOMEM_ as a result if there are no available blocks. If the thread requests in mode _TIMEOUT_, it gets the result _ETIMEOUT_ if no block becomes available within _timeout_ milliseconds. The property is indeed weak: even if the memory has a block able to satisfy the requested size before invoking the allocation service, another thread running concurrently may take the block first during the execution of the service.
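The three timeout cases of **Mem-pool-alloc-post** can be captured by a small executable predicate. This Python sketch is a simplification we wrote, not the Isabelle definition: it abstracts the state into just the return code and the optional block, and the constant names mirror but do not reproduce the specification.

```python
OK, ENOMEM, ETIMEOUT, ESIZEERR = "OK", "ENOMEM", "ETIMEOUT", "ESIZEERR"
FOREVER, NOWAIT = "FOREVER", "NOWAIT"

def alloc_post_ok(timeout, ret, block):
    """Is an observed (ret, block) outcome allowed by the post-condition
    for the given timeout mode? `timeout` is FOREVER, NOWAIT, or a number."""
    if ret == OK:
        return block is not None          # success always hands back a valid block
    if block is not None:
        return False                      # a failure never hands back a block
    if timeout == FOREVER:
        return ret == ESIZEERR            # FOREVER only fails on oversized requests
    if timeout == NOWAIT:
        return ret in (ENOMEM, ESIZEERR)  # may also fail when no block is free now
    return ret in (ETIMEOUT, ESIZEERR)    # timeout > 0: may time out
```

Note how _ENOMEM_ is admissible only in NOWAIT mode, which is exactly the distinction the post-condition enforces.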
For the same reason, the released block may be taken by another concurrent thread before the end of the release service.

### Proof of Partial Correctness

In the PiCore system, the verification of a rely-guarantee specification is carried out by inductively applying the proof rules to each system event and discharging the proof obligations the rules generate. Typically, these proof obligations require proving stability of the pre- and post-conditions, i.e. that changes made by the environment preserve them, and showing that a statement executed from a state in the precondition yields a state in the postcondition. The final theorem of functional correctness is as follows.

**Theorem 6.1** (Functional Correctness of Memory Management).:

\[\Sigma\vdash Mem\_Spec\ \textbf{sat}\ \langle\{s_{0}\},\{\},Guar,UNIV\rangle\]

_where \(Guar=tick\_guar\ \cup\ schedule\_guar\ \cup\ (\bigcup t.\ Mem\_pool\_guar\ t)\)._

We consider the memory management as a closed system, i.e., the environment is the empty set. In the initial state \(s_{0}\), we assume that (1) the memory blocks at level 0 of all memory pools are free and not split, (2) the current thread is **None**, (3) the states of all threads are _READY_, and (4) the thread wait queue of each memory pool is empty. \(s_{0}\) satisfies the invariants. By the Par rule in Fig. 6, the proof of Theorem 6.1 can be decomposed into proving that each event satisfies the rely-guarantee specification introduced in Section 6.1. We proved the following lemmas for the memory services. A detailed proof sketch and the intermediate conditions of Lemma 6.3 are shown in Appendix B.
**Lemma 6.2** (Functional Correctness of the Allocation Service).:

\[\Sigma\vdash Mem\_pool\_alloc\ t\ p\ sz\ to\ \textbf{sat}\ \langle Mem\_pool\_alloc\_pre\ t,\ Mem\_pool\_rely\ t,\ Mem\_pool\_guar\ t,\ Mem\_pool\_alloc\_post\ t\ p\ sz\ to\rangle\]

**Lemma 6.3** (Functional Correctness of the Release Service).:

\[\Sigma\vdash Mem\_pool\_free\ t\ b\ \textbf{sat}\ \langle Mem\_pool\_free\_pre\ t,\ Mem\_pool\_rely\ t,\ Mem\_pool\_guar\ t,\ Mem\_pool\_free\_post\ t\ b\rangle\]

### Proof of Termination

To prove loop termination, loop invariants are parametrized with a logical variable \(\alpha\). It suffices to show total correctness of a loop statement by the following proposition, where \(loopin(\alpha)\) is the parametrized invariant, in which the logical variable is used to find a convergence relation showing that the number of iterations of the loop is finite.

\[\begin{split}\Sigma\vdash P\ \textbf{sat}\ \langle loopin(\alpha)\cap\{\ \alpha>0\ \},R,G,\{\exists\beta<\alpha.\ loopin(\beta)\}\rangle\\ \wedge\ loopin(\alpha)\cap\{\ \alpha>0\ \}\subseteq\{\ b\ \}\ \wedge\ loopin(0)\subseteq\{\ \neg b\ \}\\ \wedge\ \forall s\in loopin(\alpha).\ (s,t)\in R\longrightarrow\exists\beta\leqslant\alpha.\ t\in loopin(\beta)\end{split}\]

For instance, to prove termination of the loop statement in \(free\_block\) shown in Fig. 9, we define the loop invariant with the logical variable \(\alpha\) as follows. Here, \(\{\ inv\ \}\) defines the set of states satisfying the \(inv\) predicate; it is equivalent to \(\{s.\ inv\ s\}\).
\(\mathbf{mp\_free\_loopinv}\ t\ b\ \alpha\equiv\{\ \dots\ \wedge\ \mathit{inv}\ \wedge\ \mathit{level}\ b<\mathit{length}\ \dots\ \}\)

(The complete definition of the loop invariant, which also constrains the bits of the blocks being coalesced, is given in the Isabelle/HOL sources.)

The first two predicates are obviously satisfied as we have shown before. The third predicate is satisfied since \(Mem\_pool\_guar\ t\) is stable with the invariant, and the guarantee conditions of \(tick\) and \(schedule\) do not change the memory.

### Proof of Security

To apply the compositional reasoning approach to prove the integrity of a parallel event system, we instantiate the security configuration of Section 5.2 for the specification of the memory management, i.e. \(Mem\_Spec\) in Equation (1). The \(\leadsto\) relation on domains is instantiated by the _interference_ function below. The first two rules mean that the Timer can only interfere with itself. The third rule means that a thread can interfere with itself but not with other threads. The _scheduler_ can interfere with all threads and the _timer_. A special case in the memory management of Zephyr is that threads can interfere with the _scheduler_ too, because the memory allocation service may block the current thread and reschedule to other threads. This is very different from the interference relation in separation kernels (e.g. [30]) and ARINC 653 OSs (e.g. [51]), where the scheduler must not be interfered with by partitions or processes for the purpose of temporal separation.

\[\begin{cases}\textbf{Timer}\leadsto c=(c=Timer)\\ c\leadsto\textbf{Timer}=(c=Timer)\\ (\mathcal{T}\ t)\leadsto(\mathcal{T}\ r)=(t=r)\\ otherwise...=True\end{cases}\]

The state equivalence relation \(s\overset{d}{\sim}r\) is instantiated as follows. It requires, for instance, that two states are equivalent for a thread \(t\) iff the allocated memory blocks of \(t\) in the two states are the same.
\[\begin{cases}s\overset{\mathcal{S}}{\sim}r=(cur\ s=cur\ r)\\ s\overset{(\mathcal{T}\ t)}{\sim}r=(mblocks\ s\ t=mblocks\ r\ t)\\ s\overset{\textbf{Timer}}{\sim}r=(tick\ s=tick\ r)\end{cases}\]

The domain function \(dom\_e\) is instantiated as follows, which is straightforward.

\[\begin{cases}dom\_e\ s\ (\mathcal{T}\ t)\ (alloc(p,sz,to)@t)=\mathcal{T}\ t\\ dom\_e\ s\ (\mathcal{T}\ t)\ (free(b)@t)=\mathcal{T}\ t\\ dom\_e\ s\ \mathcal{S}\ (schedule(t))=\mathcal{S}\\ dom\_e\ s\ \textbf{Timer}\ tick=\textbf{Timer}\end{cases}\]

Following the compositional reasoning approach of Section 5.3, to prove the integrity of the memory management of Zephyr we first have to show the integrity of the memory services, stated as the following lemma.

Lemma 6.5 (Integrity of Memory Services).:

\[\forall ev\ u\ s_{1}\ s_{2}\ \kappa.\ ev\in evts(Mem\_Spec)\wedge(s_{1},s_{2})\in G_{\Gamma(ev)}\wedge(dom\_e\ s_{1}\ \kappa\ ev)\not\leadsto u\longrightarrow s_{1}\overset{u}{\sim}s_{2}\]

Proof.: By case analysis on \(ev\) and the observing domain \(u\):

* if \(ev\) is a memory service \(alloc\) or \(free\) invoked by a thread \(t\), then \(dom\_e\ s_{1}\ \kappa\ ev\) is the thread \(\mathcal{T}\ t\):
  * if \(u\) is a thread \(r\), then we have \(t\neq r\). This case is proved because the memory separation property in the guarantee condition **Mem-pool-guar** ensures that the allocated memory of other threads is not changed.
* if \(ev\) is the \(schedule\) event, then \(dom\_e\ s_{1}\ \kappa\ ev\) is the scheduler \(\mathcal{S}\):
  * if \(u\) is the scheduler, the case is proved since a domain can interfere with itself.
  * if \(u\) is the timer, the case is proved since the \(schedule\) event does not change the \(tick\) variable.
  * if \(u\) is a thread \(t\), the case is proved since the \(schedule\) event does not change the memory.
* if \(ev\) is the \(tick\) event, the case is proved since the \(tick\) event changes neither the current thread nor the memory.
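The instantiated policy and observation relations can be executed directly. The Python sketch below is our encoding, not the Isabelle model; the dictionary-based state with `cur`, `tick`, and `mblocks` fields is an invented abstraction of the real kernel state. It checks the condition of Lemma 6.5 for a single transition: no domain that the acting domain may not interfere with observes a change.

```python
TIMER, SCHED = "Timer", "S"   # thread domains are tuples ("T", t)

def interferes(c, d):
    """The interference relation instantiated for the memory-management spec."""
    if c == TIMER or d == TIMER:
        return c == d                      # the timer only interferes with itself
    if isinstance(c, tuple) and isinstance(d, tuple):
        return c == d                      # a thread only interferes with itself
    return True                            # scheduler <-> threads may interfere

def equiv(d, s, r):
    """State equivalence for domain d: the part of the state d observes."""
    if d == SCHED:
        return s["cur"] == r["cur"]
    if d == TIMER:
        return s["tick"] == r["tick"]
    return s["mblocks"].get(d[1]) == r["mblocks"].get(d[1])   # d = ("T", t)

def integrity_ok(acting, s, s2, domains):
    """No domain that `acting` may not interfere with observes any change."""
    return all(equiv(u, s, s2) for u in domains if not interferes(acting, u))
```

A thread freeing its own block passes the check; a (hypothetical, buggy) transition in which it touches another thread's blocks fails it.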
Finally, we have the following theorem showing the integrity of the memory management.

Theorem 6.6 (Integrity of Memory Management).: _The PiCore specification_ **Mem_Spec** _in Equation (1) satisfies the integrity property._

Proof.: The proof is straightforward by Theorem 5.7, Theorem 6.1 and Lemma 6.5. It remains to prove \(\forall ev\in evts(Mem\_Spec).\ is\_basic(ev)\), which is straightforward since all events are basic events according to the definition of \(Mem\_Spec\).

## 7. Result and Evaluation

### Verification Effort

The verification conducted in this work is on Zephyr v1.8.0. The C code of the buddy memory management is \(\approx\) 400 lines, not counting blank lines and comments. Table 1 shows statistics for the effort and size of the proofs in the Isabelle/HOL theorem prover. In total, the models and mechanized verification consist of \(\approx\) 34,000 lines of specification and proof (LOSP), and the total effort is \(\approx\) 26 person-months (PM), of which the security proof in PiCore takes 4 PMs. The specification and proofs of PiCore are reusable for the verification of other systems. We developed \(\approx\) 18,200 LOSP for the concurrent memory management of Zephyr, roughly 40 times the size of the C code, a ratio largely due to the in-kernel concurrency; invariant proofs represent the largest part. This took 14 PMs. Since the safety and security properties are represented by the guarantee conditions of the memory management services, the final theorems showing safety and security are relatively small, taking 400 LOSP.

### Bugs Found in Zephyr

During the formal verification, we found 3 bugs in the C code of Zephyr. The first two bugs are critical and have been repaired in the latest release of Zephyr. To avoid the third one, callers to \(k\_mem\_pool\_alloc\) have to constrain the size argument.
Table 1. Specification and Proof Statistics

| **PiCore Language** | **LOSP** | **PM** |
|---|---|---|
| Language and Proof Rules | 700 | 8 (first four items) |
| Lemmas of Language/Semantics | 3,000 | |
| Soundness | 7,100 | |
| Invariant | 100 | |
| Security | 4,800 | 4 |
| **Total** | **15,700** | **12** |

| **Memory Management** | **LOSP** | **PM** |
|---|---|---|
| Specification | 400 | 14 (all five items) |
| Auxiliary Lemmas/Invariant | 1,700 | |
| Rely-guarantee Proof of Allocation | 10,700 | |
| Rely-guarantee Proof of Free | 5,000 | |
| Proof of Safety and Security | 400 | |
| **Total** | **18,200** | **14** |

**(1) Incorrect block split**: this bug is located in the loop at Line 20 of the \(k\_mem\_pool\_alloc\) service, shown in Fig. 3. The _level_empty_ function checks whether a pool \(p\) has blocks in the free list at level \(alloc\_l\). Concurrent threads may release a memory block at that level, making the call to _level_empty_(\(p\), _alloc_l_) return _false_ and stopping the loop. In that case, the service allocates a memory block of bigger capacity at a level \(i\), but it still sets the level number of the block to _alloc_l_ at Line 23. The service thus hands a larger block to the requesting thread, causing an internal fragmentation of \(max\_sz/4^{i}-max\_sz/4^{alloc\_l}\) bytes. When this block is released, it is inserted into the free list at level _alloc_l_ instead of level \(i\), causing an external fragmentation of \(max\_sz/4^{i}-max\_sz/4^{alloc\_l}\) bytes. The bug is fixed by removing the condition _level_empty_(\(p\), _alloc_l_) in our specification.

**(2) Incorrect return from \(k\_mem\_pool\_alloc\)**: this bug is found at Line 36 in Fig. 3.
When a suitable free block is allocated by another thread, the _pool_alloc_ function returns _EAGAIN_ at Line 18 to ask the thread to retry the allocation. When a thread invokes \(k\_mem\_pool\_alloc\) in _FOREVER_ mode and this case happens, the service returns _EAGAIN_ immediately. However, a thread invoking \(k\_mem\_pool\_alloc\) in _FOREVER_ mode should keep retrying until it succeeds. We repair the bug by removing the condition \(ret==EAGAIN\) at Line 36. As explained in the comments of the C code, _EAGAIN_ should not be returned to threads invoking the service. Moreover, the _EAGAIN_ returned at Line 48 actually corresponds to a timeout. Thus, we introduce a new return code _ETIMEOUT_ in our specification. **(3) Non-termination of \(k\_mem\_pool\_alloc\)**: we discussed earlier that the loop statement at Lines 34 - 47 in Fig. 3 does not terminate in general. However, it should terminate in certain cases, and this requirement is violated in the C code. When a thread requests a memory block in _FOREVER_ mode and the requested size is larger than _max_sz_, the maximum size of blocks, the loop at Lines 34 - 47 in Fig. 3 never finishes, since _pool_alloc_ always returns _ENOMEM_. The reason is that the "_return_ _ENOMEM_" at Line 15 does not distinguish between two cases, \(alloc\_l<0\) and \(free\_l<0\). In the first case, the requested size is larger than \(max\_sz\) and the kernel service should return immediately. In the second case, there are no free blocks larger than the requested size and the service retries until some free block becomes available. We repair the bug by splitting the _if_ statement at Lines 13 - 16 into these two cases and introducing a new return code _ESIZEERR_ in our specification. Then, we change the condition at Line 36 to check that the returned value is _ESIZEERR_ instead of _ENOMEM_. ### Further Related Work Klein et al. 
(2018) presented the first formal verification of the functional correctness and security properties of a general-purpose OS kernel in Isabelle/HOL, which took roughly 20 person-years for 10,000 lines of C code. To reduce the cost of formal verification, Yang and Hawblitzel (Yang and Hawblitzel, 2018) demonstrated the mechanical verification of Verve, an operating system and run-time system, ensuring both safety and correctness using Boogie and the Z3 SMT solver with only 2-3 lines of proof annotation per executable statement. Nelson et al. (2018) proposed an approach to designing, implementing, and formally verifying the functional correctness of Hyperkernel with a high degree of proof automation and a low proof burden using the Z3 SMT solver. However, none of these works considered concurrent OS kernels. Examples of recent progress in the formal verification of concurrent OS kernels are CertiKOS with multicore support (Kennedy et al., 2018), a practical verification framework for preemptive OS kernels to reason about interrupts (Kennedy et al., 2018), and a compositional verification of interruptible OS kernels with device drivers (Kennedy et al., 2018). The Verisoft team (Brands et al., 2018; Kennedy et al., 2018) applied the VCC framework to formally verify Hyper-V, a widely deployed multiprocessor hypervisor by Microsoft consisting of 100 kLOC of concurrent C code and 5 kLOC of assembly. To ease formal verification, a large portion of related work makes assumptions about the targeted OS kernel. The formal verification of seL4 [21] changed the C code of the L4 microkernel and thus disabled in-kernel concurrency. Nelson et al. made the Hyperkernel [34] interface finite, avoiding unbounded loops, recursion, and complex data structures. As opposed to brand-new research systems developed for verifiability (e.g. CertiKOS [16]), this article presents the first verification of a third-party, existing and realistic concurrent OS. 
Our formal specification in PiCore completely corresponds to the execution behavior, with fine-grained concurrency, of the Zephyr C code. Formal verification of OS memory management has been studied in sequential and concurrent OS kernels, such as CertiKOS [16; 43], seL4 [21; 22], Verisoft [1], and in the hypervisors from [4; 5], where only the works in [4; 16] considered concurrency. Compared to buddy memory allocation, the data structures and algorithms verified in [16] are relatively simple, without block split/coalescence or multiple levels of free lists and bitmaps. The work in [4] only considered virtual mapping, but not allocation or deallocation of memory areas. Algorithms and implementations of dynamic memory allocation have been formally specified and verified in an extensive number of works [11; 12; 13; 29; 41; 48]. However, buddy memory allocation was only studied in [13], which did not consider concrete data structures (e.g. bitmaps) or concurrency. A memory model [39] provides the necessary abstraction to separate the behaviour of a program from the behaviour of the memory it reads and writes. There are many formalizations of memory models in the literature, e.g., [14; 26; 27; 42; 45], some of which only create an abstract specification of the services for memory allocation and release [14; 27; 45]. Our article presents the first formal specification and mechanized proof for the concurrent memory allocation of a realistic operating system. Integrity is a form of information-flow security (IFS), which deals with the problem of preventing improper release and modification of information in complex systems. Language-based IFS [36] defines security policies on programming languages and concerns the data confidentiality among program variables. Compositional verification of language-based IFS has been conducted in [28; 32; 33]. Formal verification of IFS on OS kernels needs to consider the events (e.g. 
kernel services, interrupt handlers) rather than on pure programs. Therefore, state-event IFS [31; 35; 44] is usually applied to OS kernels (e.g. [8; 10; 30; 51]). However, formal verification of state-event IFS for concurrent systems (e.g. OS kernels) has not been addressed in the literature. Our article presents the first integrity verification of concurrent OS kernels. ### Limitations and Discussion The state of the art in formal verification of OS kernels focuses on the implementation level (e.g. [16; 21; 34; 47]). One limitation of this work is that formal verification is enforced at the level of the low-level design specification. The first consideration behind this decision is that our work aims at the highest evaluation assurance level (EAL 7) of the Common Criteria (CC) [9], which was declared as the candidate standard for security certification by the Zephyr project. With regard to EAL 7, a main requirement on the functional specification addressed by formal methods is a complete formal and modular design of the Target of Evaluation (TOE) with security proofs, rather than mandating formal verification at the source code level. In this article, we develop a fine-grained low-level formal specification of Zephyr. The specification closely follows the Zephyr C code, and thus enables the _code-to-spec_ review required by the EAL 7 evaluation, covering all the data structures and imperative statements present in the implementation. Second, formally verifying the functional correctness, safety and security of concurrent C programs, in particular the memory management of Zephyr, is not well supported by state-of-the-art C verifiers (e.g. VCC [24][7], Frama-C [18], CBMC). _Simpl_[40] is a generic imperative language embedded into Isabelle/HOL that was designed as an intermediate language for program verification. 
In the seL4 project, the C code was translated into _Simpl_ and then into a state monad representation by the _CParser_ and _AutoCorres_ tools, which do not support concurrent C programs. Though we have extended _Simpl_ to _CSimpl_ with concurrent statements and a rely-guarantee proof system in (S compose the state of different modules of OS kernels whilst making few changes to the functional specification and formal proof.
2305.19689
Assessing Word Importance Using Models Trained for Semantic Tasks
Many NLP tasks require automatically identifying the most significant words in a text. In this work, we derive word significance from models trained to solve semantic tasks: Natural Language Inference and Paraphrase Identification. Using an attribution method aimed at explaining the predictions of these models, we derive importance scores for each input token. We evaluate their relevance using a so-called cross-task evaluation: Analyzing the performance of one model on an input masked according to the other model's weights, we show that our method is robust with respect to the choice of the initial task. Additionally, we investigate the scores from the syntax point of view and observe interesting patterns, e.g. words closer to the root of a syntactic tree receive higher importance scores. Altogether, these observations suggest that our method can be used to identify important words in sentences without any explicit word importance labeling in training.
Dávid Javorský, Ondřej Bojar, François Yvon
2023-05-31T09:34:26Z
http://arxiv.org/abs/2305.19689v1
# Assessing Word Importance Using Models Trained for Semantic Tasks ###### Abstract Many NLP tasks require automatically identifying the most significant words in a text. In this work, we derive word significance from models trained to solve semantic tasks: Natural Language Inference and Paraphrase Identification. Using an attribution method aimed at explaining the predictions of these models, we derive importance scores for each input token. We evaluate their relevance using a so-called cross-task evaluation: Analyzing the performance of one model on an input masked according to the other model's weights, we show that our method is robust with respect to the choice of the initial task. Additionally, we investigate the scores from the syntax point of view and observe interesting patterns, e.g. words closer to the root of a syntactic tree receive higher importance scores. Altogether, these observations suggest that our method can be used to identify important words in sentences without any explicit word importance labeling in training. ## 1 Introduction The ability to decide which words in a sentence are semantically important plays a crucial role in various areas of NLP (e.g. compression, paraphrasing, summarization, keyword identification). One way to compute (semantic) word significance for compression purposes is to rely on syntactic patterns, using Integer Linear Programming techniques to combine several sources of information Clarke and Lapata (2006); Filippova and Strube (2008). Xu and Grishman (2009) exploit the same cues, with significance scores computed as a mixture of TF-IDF and surface syntactic cues. A similar approach estimates word importance for summarization Hong and Nenkova (2014) or learns these significance scores from word embeddings Schakel and Wilson (2015); Sheikh et al. (2016). Significance scores are also useful in an entirely different context, that of explaining the decisions of Deep Neural Networks (DNNs). 
This includes investigating and interpreting hidden representations via auxiliary probing tasks Adi et al. (2016); Conneau et al. (2018); quantifying the importance of input words in the decisions computed by DNNs by analyzing attention patterns Clark et al. (2019); or using attribution methods based on attention Vashishth et al. (2019), back-propagation Sundararajan et al. (2017) or perturbation techniques Guan et al. (2019); Schulz et al. (2020). Along these lines, DeYoung et al. (2020) present a benchmark for evaluating the quality of model-generated rationales compared to human rationales. In this study, we propose to use such techniques to compute semantic significance scores in an innovative way. We demand the scores to have these intuitive properties: (a) Content words are more important than function words; (b) Scores are context-dependent; (c) Removing low-score words minimally changes the sentence meaning. For this, we train models for two semantic tasks, Natural Language Inference and Paraphrase Identification, and use the attribution approach of De Cao et al. (2020) to explain the models' predictions. We evaluate the relevance of scores using the so-called _cross-task evaluation_: Analyzing the performance of one model on an input masked according to the other model's weights. We show that our method is robust with respect to the choice of the initial task and fulfills all our requirements. Figure 1: The first pass (yellow plain arrows): A premise and hypothesis are passed to the NLI model. The interpreter takes both text inputs \(x^{p}\), \(x^{h}\), and hidden states \(h^{p}\) of the NLI model’s encoder. It generates a binary mask \(z^{p}\) which is used to mask \(x^{p}\), resulting in \(\hat{x}^{p}\). The second pass (green dashed arrows): \(\hat{x}^{p}\) is passed to the NLI model together with the original hypothesis. The divergence \(D_{*}\) minimizes the difference between predicted distributions \(y\) and \(\hat{y}\) of these two passes. 
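The cross-task masking step just outlined, removing the tokens that model B's interpreter scored lowest before feeding the input to model A, can be sketched as follows. This is a toy illustration with hypothetical token/score pairs, not the paper's implementation:

```python
def mask_lowest(tokens, scores, drop_frac):
    """Drop the `drop_frac` lowest-scored tokens, keeping original order."""
    k = int(len(tokens) * drop_frac)           # number of tokens to remove
    by_score = sorted(range(len(tokens)), key=lambda i: scores[i])
    keep = sorted(by_score[k:])                # indices of surviving tokens
    return [tokens[i] for i in keep]

# If model B scored "a" lowest, a 40% drop removes it first:
# mask_lowest(["a", "cat", "sat"], [0.1, 0.9, 0.5], 0.4) -> ["cat", "sat"]
```

The masked sequence is then passed to model A, and its accuracy is compared against a baseline that removes the same number of tokens at random.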
Additionally, hinting at the fact that trained hidden representations encode a substantial amount of linguistic information about morphology (Belinkov et al., 2017), syntax (Clark et al., 2019; Hewitt and Manning, 2019), or both (Peters et al., 2018), we also analyze the correlations of our scores with syntactic patterns. ## 2 Method We assume that sentence-level word significance (or word importance) is assessed by the amount of contribution to the overall meaning of the sentence. This means that removing low-scored words should only slightly change the sentence meaning. The method we explore to compute significance scores repurposes attribution techniques originally introduced to explain the predictions of a DNN trained for a specific task. Attribution methods typically compute sentence-level scores for each input word, identifying the ones that contribute most to the decision. By explicitly targeting semantic prediction tasks, we hope to extract attribution scores that correlate well with semantic significance. Our significance scoring procedure thus consists of two main components: an underlying model and an interpreter. The underlying model is trained to solve a semantic task. We select two tasks: Natural Language Inference (NLI) -- classifying the relationship of a premise-hypothesis pair into entailment, neutrality or contradiction -- and Paraphrase Identification (PI) -- determining whether a pair of sentences have the same meaning. The interpreter relies on the attribution method proposed by De Cao et al. (2020), seeking to mask the largest possible number of words in a sentence, while at the same time preserving the underlying model's decision obtained from the full sentence pair. 
The interpreter thus minimizes a loss function comprising two terms: an \(L_{0}\) term, on the one hand, forces the interpreter to maximize the number of masked elements; a divergence term \(D_{*}\), on the other hand, aims to diminish the difference between the predictions of the underlying model when given (a) the original input or (b) the masked input. We take the outputs of the interpreter, i.e. the attribution scores, as probabilities that given words are not masked. Following De Cao et al. (2020), these probabilities are computed assuming an underlying Hard Concrete distribution on the closed interval \([0,1]\), which assigns a non-zero probability to the extreme values (0 and 1) (Fig. 9, De Cao et al., 2020). During interpreter training, a reparametrization trick is used (so that the gradient can be propagated backwards) to estimate its parameters. Given the Hard Concrete distribution output, the attribution score for a token expresses the expectation of sampling a non-zero value, meaning that the token should be kept (Section 2, Stochastic masks, De Cao et al., 2020). We illustrate the process in Figure 1. ## 3 Experimental Setup ### Underlying Models We use a custom implementation of a variant of the Transformer architecture (Vaswani et al., 2017) which comprises two encoders sharing their weights, one for each input sentence. This design choice is critical as it allows us to compute importance weights of isolated sentences, which is what we need to do at inference time. We then concatenate the encoder outputs into one sequence, from which a fully connected layer predicts the class, inspired by the Sentence-BERT (Reimers and Gurevych, 2019) architecture. See Appendix A.1 for a discussion of the architecture choice, and for datasets, implementation and training details. Figure 2: Average scores for each POS category for the NLI model (left) and PI model (right). ### Interpreter We use the attribution method introduced by De Cao et al. (2020). 
The interpreter consists of classifiers, each processing the hidden states of one layer and predicting the probability of keeping or discarding each input token. See Appendix A.2 for datasets, implementation and training details.1 Footnote 1: Our source code with the license specification is available at [https://github.com/J4VORSKV/word-importance](https://github.com/J4VORSKV/word-importance) ## 4 Analysis In our analysis of the predicted masks, we only consider the last-layer classifier, rescaling the values so that the lowest value and the highest value within one sentence receive the scores of zero and one, respectively. All results use the SNLI validation set. ### Content Words are More Important We first examine the scores that are assigned to content and function words. We compute the average score for each POS tag (Zeman et al., 2022) and display the results in Figure 2. For both models, Proper Nouns, Nouns, Pronouns, Verbs, Adjectives and Adverbs have leading scores. Determiners, Particles, Symbols, Conjunctions and Adpositions are scored lower. We observe an inconsistency of the PI model scores for Punctuation. We suppose this reflects idiosyncrasies of the PI dataset: Some items contain two sentences within one segment, and these form a paraphrase pair only when the other segment also consists of two sentences. Therefore, the PI model is more sensitive to Punctuation than expected. We also notice that the estimated importance of the X category varies widely, which is expected since this category is, by definition, a mixture of diverse word types. Overall, these results fulfil our requirement that content words achieve higher scores than function words. ### Word Significance is Context-Dependent We then question the ability of the interpreter to generate context-dependent attributions, contrasting with purely lexical measures such as TF-IDF. 
To answer this question, we compute the distribution of differences between the lowest and highest scores for words having at least 100 occurrences in the training and 10 in the validation data, excluding tokens containing special characters or numerals. The full distribution is plotted in Figure 3. Scores extracted from both models show increased distribution density towards larger differences, confirming that significance scores are not lexicalized, but instead strongly vary according to the context for the majority of words. The greatest difference in scores for the PI model is around 0.5; the analysis of the NLI model brings this difference even closer to 1. We explain this by the nature of the datasets: It is more likely that the NLI model's decision relies mostly on one or on a small group of words, especially in the case of contradictions. ### Cross-Task Evaluation In this section, we address the validity of the importance scores. We evaluate the models using the so-called _cross-task evaluation_: For model A, we take its validation dataset and gradually remove a portion of the lowest-scored tokens according to the interpreter of model B. We then collect the predictions of model A on the malformed inputs and compare them to a baseline where we randomly remove the same number of tokens. We evaluate both models in this setting; however, since the results for both models have similar properties, we report here only the analysis of the PI model in Table 1. See Appendix B for the NLI model results. Table 1 reports large differences in performance when the tokens are removed according to our scores, compared to random removal. When one third of the tokens from both sentences is discarded, the PI model performance decreases by 2.5%, whereas a random removal causes a 15.1% drop (Table 1, 4th row and 4th column). The models differ most when a half of the tokens are removed, resulting in a difference in accuracy of 18.3% compared to the baseline (Table 1, 6th row and 6th column). 
Examining performance up to the removal of 20% of tokens, the differences between the random and importance-based word removal are not so significant, probably because of the inherent robustness of the PI model, which mitigates the effect of the (random) removal of some important tokens. On the other hand, removing half of the tokens is bound to have strong effects on the accuracy of the PI model, especially when some important words are removed (in the random deletion scheme); this is where removing words based on their low importance score makes the largest difference. At higher dropping rates, the random and the importance-based method tend to remove increasingly similar sets of words, and their scores tend to converge (in the limiting case of 100% removal, both strategies have exactly the same effect). Figure 3: The NLI model (left), PI model (right) and the distribution of differences between the maximal and minimal value for each token. Overall, these results confirm that our method is robust with respect to the choice of the initial task and that it delivers scores that actually reflect word importance. ### Important Words are High in the Tree Linguistic theories differ in ways of defining dependency relations between words. One established approach is motivated by the 'reducibility' of sentences Lopatkova et al. (2005), i.e. the gradual removal of words while preserving the grammatical correctness of the sentence. In this section, we study how such relationships are also observable in attributions. We collected syntactic trees of input sentences with UDPipe Straka (2018),2 which reflect syntactic properties of the UD format Zeman et al. (2022).3 When processing the trees, we discard punctuation and compute the average score of all tokens for every depth level in the syntactic trees. We display the first 5 depth levels in Table 2. 
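The per-depth averaging described above can be sketched as follows, a minimal example assuming 1-based head indices from a dependency parse (0 marking the root), not the exact UDPipe post-processing, and omitting the punctuation filtering:

```python
from collections import defaultdict

def token_depth(heads, i):
    """1-based depth of token i, following head links up to the root (head 0)."""
    d = 1
    while heads[i] != 0:
        i = heads[i] - 1  # heads are 1-based token positions
        d += 1
    return d

def avg_score_per_depth(heads, scores):
    """Average importance score at each depth level of the syntactic tree."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i, s in enumerate(scores):
        d = token_depth(heads, i)
        sums[d] += s
        counts[d] += 1
    return {d: sums[d] / counts[d] for d in sums}
```

For instance, for heads `[0, 1, 1]` (a root with two dependents) and scores `[1.0, 0.5, 0.1]`, depth 1 averages 1.0 and depth 2 averages 0.3.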
Footnote 2: [https://lindat.mff.cuni.cz/services/udpipe/](https://lindat.mff.cuni.cz/services/udpipe/) Footnote 3: UD favors relations between content words; function words are systematically leaves in the tree. However, having function words as leaves better matches our perspective of information importance flow, unlike in Gerdes et al. (2018). \begin{table} \end{table} Table 1: The accuracy of the PI model when a given percentage of the least important input tokens are removed from the first sentence (rows) or the second (columns) according to the NLI model’s weights. Each cell contains the model accuracy (left), the difference in comparison to the randomized baseline model (right) and an arrow denoting the increase (\(\uparrow\)) or decrease (\(\downarrow\)) in performance of our model compared to the baseline. The difference of values in _italics_ is _not_ statistically significant (\(p<0.01\)). \begin{table} \begin{tabular}{l|c|c|c|c|c} & \multicolumn{2}{c|}{**NLI Model**} & \multicolumn{2}{c|}{**PI Model**} & \\ \hline \hline **Depth** & **Avg** & **Std** & **Avg** & **Std** & **Count** \\ \hline 1 & **0.52** & 0.35 & **0.64** & 0.31 & 9424 \\ 2 & **0.36** & 0.36 & **0.53** & 0.39 & 27330 \\ 3 & **0.23** & 0.31 & **0.40** & 0.35 & 26331 \\ 4 & 0.22 & 0.31 & 0.33 & 0.36 & 7183 \\ 5 & 0.22 & 0.30 & 0.35 & 0.35 & 1816 \\ \end{tabular} \end{table} Table 2: Importance scores of tokens for each depth in syntactic trees. Statistically significant differences between the current and next row are bolded (\(p<0.01\)). We can see that tokens closer to the root in the syntactic tree obtain higher scores on average. We measure the correlation between scores and tree levels, resulting in a Spearman coefficient of -0.31 for the NLI model and -0.24 for the PI model. Negative coefficients correctly reflect the tendency of the scores to decrease at lower tree levels. It thus appears that attributions are well correlated with word positions in syntactic trees, revealing a relationship between semantic importance and syntactic position. ### Dependency Relations We additionally analyze dependency relations occurring more than 100 times by computing the score difference between child and parent nodes, and averaging them for each dependency type. In Table 3, we depict relations which have noteworthy properties with respect to significance scores (the full picture is in Appendix C). Negative scores denote a decrease of word significance from a parent to its child. \begin{table} \begin{tabular}{l|c|c|c|c|c} & \multicolumn{2}{c|}{**NLI Model**} & \multicolumn{2}{c|}{**PI Model**} & \\ \hline \hline **Dependency Relation** & **Avg** & **Std** & **Avg** & **Std** & **Count** \\ \hline det, case, cop, cc, punct, mark & -0.50 & 0.37 & -0.37 & 0.49 & 34034 \\ advcl, acl, acl, scomp & 0.11 & 0.43 & 0.06 & 0.38 & 2789 \\ \end{tabular} \end{table} Table 3: Average score differences between child and parent nodes for selected dependency relations (excerpt). We make the following observations. 
The first row of the table illustrates dependencies that have no or a very limited contribution to the overall meaning of the sentence. Looking at the corresponding importance scores, we observe that they are consistently negative, which is in line with our understanding of these dependencies. The second row corresponds to cases of clausal relationships. Here we see an increase in importance scores. This can be explained by the fact that the dependents in these relationships are often heads of a clause, and thus contribute, probably more than their governor, to the sentence meaning. This shows the models' ability to detect some deep syntactic connections. The last block represents relations that are not consistent across the models. Nominal Subject is judged less important by the NLI model than by the PI model. As mentioned in Section 4.1, Punctuation differs similarly. Elements of Compound are preferred in different orders depending on the model. On the other hand, all other relation types are consistent: Ranking each type of dependency relation based on its average score and calculating the correlation across our models results in a Spearman coefficient of 0.73. This reveals a strong correlation between importance and syntactic roles. ## 5 Conclusion In this paper, we have proposed a novel method to compute word importance scores using attribution methods, aiming to explain the decisions of models trained for semantic tasks. We have shown that these scores have desirable and meaningful properties: Content words are more important, and scores are context-dependent and robust with respect to the underlying semantic task. In our future work, we intend to exploit these word importance scores in various downstream applications. ## Limitations Our method of identifying important words requires a dataset for a semantic task (in our case NLI or PI), which limits its applicability. 
This requirement also prevents us from generalizing our observations too broadly: we tested our method only on one high-resource language, for which both dependency parsers and NLI / PI datasets are available. Our analysis also lacks a comparison to other indicators of word significance. ## Acknowledgements The work has been partially supported by the grants 272323 of the Grant Agency of Charles University, 19-26934X (NEUREM3) of the Czech Science Foundation and SVV project number 260 698. A part of this work has been done at Laboratoire Interdisciplinaire des Sciences du Numérique (LISN) in Orsay, France.
2309.08451
Modified temperature redshift relation and UHECR propagation
We re-examine the interactions of ultra-high energy cosmic rays (UHECRs) with photons from the cosmic microwave background (CMB) under a changed, locally non-linear temperature redshift relation $T(z)$. This changed temperature redshift relation has recently been suggested by the postulate of subjecting thermalised and isotropic photon gases such as the CMB to an SU(2) rather than a U(1) gauge group. This modification of $\Lambda$CDM is called SU(2)$_{\rm CMB}$, and some cosmological parameters obtained by SU(2)$_{\rm CMB}$ seem to be in better agreement with local measurements of the same quantities, in particular $H_0$ and S$_8$. In this work, we apply the reduced CMB photon density under SU(2)$_{\rm CMB}$ to the propagation of UHECRs. This leads to a higher UHECR flux just below the ankle in the cosmic ray spectrum and slightly more cosmogenic neutrinos under otherwise equal conditions for emission and propagation. Most prominently, the proton flux is significantly increased below the ankle ($5\times10^{18}$ eV) for hard injection spectra and without considering the effects of magnetic fields. The reduction in CMB photon density also favours a decreased cosmic ray source evolution than the best fit using $\Lambda$CDM. In consequence, it seems that SU(2)$_{\rm CMB}$ favours sources that evolve as the star formation rate (SFR), like starburst galaxies (SBG) and gamma-ray bursts (GRB), over active galactic nuclei (AGNs) as origins of UHECRs. We conclude that the question about the nature of primary sources of UHECRs is directly affected by the assumed temperature redshift relation of the CMB.
Janning Meinert, Leonel Morejón, Alexander Sandrock, Björn Eichmann, Jonas Kreidelmeyer, Karl-Heinz Kampert
2023-09-15T14:56:56Z
http://arxiv.org/abs/2309.08451v2
# Modified temperature redshift relation and UHECR propagation ###### Abstract We re-examine the interactions of ultra-high energy cosmic rays (UHECRs) with photons from the cosmic microwave background (CMB) under a changed, locally non-linear temperature redshift relation \(T(z)\). This changed temperature redshift relation is motivated by the postulate of subjecting thermalised and isotropic photon gases such as the CMB to an SU(2) rather than a U(1) gauge group. This modification of \(\Lambda\)CDM is called SU(2)\({}_{\rm CMB}\), and some cosmological parameters obtained by SU(2)\({}_{\rm CMB}\) seem to be in better agreement with local measurements of the same quantities. In this work, we apply the reduced CMB photon density under SU(2)\({}_{\rm CMB}\) to the propagation of UHECRs. This leads to a higher UHECR flux just below the ankle in the cosmic ray spectrum and slightly more cosmogenic neutrinos under otherwise equal conditions for emission and propagation. Most prominently, the proton flux is significantly increased below the ankle (\(5\times 10^{18}\) eV) for hard injection spectra and without considering the effects of magnetic fields. The reduction in CMB photon density also favours a weaker cosmic ray source evolution than the best fit using \(\Lambda\)CDM. In consequence, it seems that SU(2)\({}_{\rm CMB}\) favours sources that evolve as the star formation rate (SFR), like starburst galaxies (SBG) and gamma-ray bursts (GRB), over active galactic nuclei (AGNs) as origins of UHECRs. We conclude that the question about the nature of primary sources of UHECRs is tightly knit with the actual temperature redshift relation of the CMB. 
SU(2) Yang-Mills thermodynamics; cosmological model; ultra-high energy cosmic rays; cosmogenic neutrinos

…values obtained by distance ladders (Riess et al., 2022). 
Similarly, the mild tension in \(\sigma_{8}\) and \(\Omega_{\rm m,0}\) (Troster et al., 2020) is alleviated with \(\sigma_{8}=0.709\pm 0.020\) and \(\Omega_{\rm m,0}=0.384\pm 0.006\) (Hahn et al., 2019), especially when trying to break the degeneracy independently of any cosmological model (Miyatake et al., 2022). The currently poor agreement of the SU(2)\({}_{\rm CMB}\) fit at low multipoles can be attributed to neglecting photon screening effects (Hofmann et al., 2022). Furthermore, SU(2)\({}_{\rm CMB}\) seems to be in tension with big bang nucleosynthesis (BBN), as the local baryon density in that model, \(\omega_{\rm b,0}=0.0173\pm 0.0002\), is significantly lower than \(\omega_{\rm b,0}=0.02166\pm 0.00015\) as obtained from measurements of the primordial deuterium abundance (Cooke et al., 2018). However, the low baryon density could mitigate the missing baryon problem (Shull et al., 2012). The relatively high recombination redshift has mainly two consequences. First, the fit to the CMB multipoles does not work unless more matter is introduced to the cosmological model at some point after recombination (Hahn et al., 2019). This is resolved by splitting the dark matter content into two parts, one introduced before recombination and one afterwards. This places some constraints on the nature of dark matter, preferring ultralight dark matter classes such as fuzzy dark matter (Meinert & Hofmann, 2021). The second consequence of this high recombination redshift is that the CMB photon density is stretched over longer periods of time, which effectively reduces the CMB spatial density and increases the interaction lengths of ultra-high energy cosmic rays. While there are many attempts to measure the \(T(z)\)-relation of the CMB indirectly, for example with absorber clouds (Riechers et al., 2022), it is not obvious that those indirect methods are sensitive to the actual CMB temperatures at finite redshifts (Hofmann & Meinert, 2023). 
They might only probe the temperature of the absorber clouds and the blackbody nature of the CMB. The temperature redshift relation of the CMB therefore needs to be determined directly. The consequences for UHECR interactions with an SU(2)\({}_{\rm CMB}\) description have been discussed previously, considering only the handedness of the photons, SU(2)\({}_{\rm L}\) (Tipler & Piasecki, 2018). A fully consistent understanding of the SU(2)\({}_{\rm CMB}\) model requires applying Yang-Mills thermodynamics and obtaining the modified \(T(z)\). Furthermore, the effect of this modified temperature redshift relation on the CMB density produces non-trivial redshift dependences of the UHECR interactions that need to be considered in depth. The purpose of this paper is to discuss the multi-messenger implications of employing the SU(2)\({}_{\rm CMB}\) modified temperature redshift relation consistently. Firstly, the modified \(T(z)\) relation is outlined in section 2. The consequences of this relation for the interactions of UHECRs are discussed in section 3. Section 4 compares fits of the UHECR spectral energy and composition measured by the Pierre Auger Observatory with both U(1) and SU(2) \(T(z)\) relations. The corresponding cosmogenic neutrino fluxes are presented in section 5. ## 2 _T(z)_ Relation of SU(2)\({}_{\rm CMB}\) In the following, we briefly review the \(T(z)\) relation of deconfining SU(2)\({}_{\rm CMB}\) thermodynamics. For a longer version of the argument, the reader is referred to Hahn & Hofmann (2018); Hofmann & Meinert (2023). The core idea is that the additional degrees of freedom in an SU(2) gauge group lead to the topological constant \(1/4^{1/3}\), so that the \(T(z)\) relation for large \(z\gg 1\) is given as \[T(z)/T_{0}=\left(\frac{1}{4}\right)^{1/3}(1+z)\,,\quad(T(z)\gg T(z=0)). 
\tag{1}\] To derive this constant, we assume energy conservation in a flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe: \[\frac{{\rm d}\rho}{{\rm d}a}=-\frac{3}{a}\left(\rho+P\right)\,, \tag{2}\] where \(\rho\) denotes the energy density, and \(P\) the pressure of the deconfined phase in SU(2) thermodynamics. The scale factor \(a\) is dimensionless, \(a(T(z=0))=1\), and related to the redshift \(z\) according to \(1/a=z+1\). Eq. (2) has the solution \[a = \exp\left(-\frac{1}{3}\int_{T(z=0)}^{T}\frac{{\rm d}T^{\prime}}{s(T^{\prime})}\,\frac{1}{T^{\prime}}\frac{{\rm d}\rho}{{\rm d}T^{\prime}}\right)\,, \tag{3}\] where the entropy density \(s\) is defined as \(s=(\rho+P)/T\). By using the Legendre transformation \[\rho=T\frac{{\rm d}P}{{\rm d}T}-P\,, \tag{4}\] the term \(\kappa\equiv\frac{1}{T}\frac{{\rm d}\rho}{{\rm d}T}\) can be expressed as \[\kappa=\frac{1}{T}\frac{{\rm d}\rho}{{\rm d}T}=\frac{{\rm d}^{2}P}{{\rm d}T^{2}}=\frac{{\rm d}s}{{\rm d}T}\,. \tag{5}\] Substituting Eq. (5) into Eq. (3) finally yields \[a=\exp\left(-\frac{1}{3}\log\frac{s(T)}{s(T(z=0))}\right)\,. \tag{6}\] The formal solution (6) is valid for any thermal and conserved fluid subject to expansion in an FLRW universe. If the function \(s(T)\) is known, then \(T(z)\) can be derived. The ground state of the deconfining phase is independent of the \(T(z)\) relation, since the equation of state for ground-state pressure \(P^{\rm gs}\) and energy density \(\rho^{\rm gs}\) is \(P^{\rm gs}=-\rho^{\rm gs}\) (see also Hofmann, 2016). Asymptotic freedom occurs nonperturbatively for \(T(z)\gg T(z=0)\) (Gross & Wilczek, 1973; Politzer, 1973; Hofmann, 2016), and therefore \(s(T)\) is proportional to \(T^{3}\). Due to a decoupling of massive vector modes at \(T(z=0)\), the excitations represent a free photon gas. Therefore, \(s(T(z=0))\) is also proportional to \(T^{3}(z=0)\). Correspondingly, the ratio \(s(T)/s(T(z=0))\) in Eq. 
(6) reads \[\frac{s(T)}{s(T(z=0))}=\frac{g(T)}{g(T(z=0))}\left(\frac{T}{T(z=0)}\right)^{3} \,,\ (T\gg T(z=0))\,, \tag{7}\] where \(g\) refers to the number of relativistic degrees of freedom at the respective temperatures. SU(2) has one massless gauge mode with two polarisations and two massive gauge modes with three polarisations each, so \(g(T)=1\times 2+2\times 3=8\); for U(1) there is only one massless mode, \(g(T(z=0))=1\times 2=2\). Substituting this into Eq. (7), inserting the result into Eq. (6), and solving for \(T\), we arrive at the high-temperature \(T(z)\) relation \[T(z) = \left(\frac{1}{4}\right)^{1/3}(z+1)\,T(z=0)\,,\ \ (T\gg T(z=0))\,. \tag{8}\] Due to two massive vector modes contributing to \(s(T)\) at low temperatures, the \(T(z)\) relation is modified to \[T(z)={\cal S}(z)(z+1)\,T(z=0)\,,\ \ (T\geq T(z=0))\,, \tag{9}\] where the nonlinear function \({\cal S}(z)\) is depicted in Fig. 1 and derived in Hahn et al. (2019). The function \({\cal S}(z)\) can be approximated reasonably well by the analytical expression \[{\cal S}(z)_{{\rm SU(2)}}\approx\exp(-1-1.7\,z)+\left(\frac{1}{4}\right)^{1/3}\,, \tag{10}\] which satisfies \({\cal S}(0)\approx 1\). This approximation is used in section 3; the full numerical solution is applied in all following sections. ## 3 Changes in propagation length In this section, we discuss the changes to the propagation of ultra-high energy cosmic rays produced by employing the modified temperature redshift relation \(T(z)\) from SU(2)\({}_{\rm CMB}\) as derived in the previous section, Eqs. (8) and (9). 
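For illustration, the \(T(z)\) relation of Eqs. (9)-(10) can be evaluated numerically. The following sketch is our own; it assumes the exponent \(-1-1.7z\) in the analytical approximation of \({\cal S}(z)\), fixed by the boundary condition \({\cal S}(0)\approx 1\), together with \(T_{0}=2.725\) K:

```python
import math

T0 = 2.725  # local CMB temperature in K (z = 0)

def S_su2(z):
    """Approximate nonlinear factor S(z), cf. Eq. (10);
    S(0) ~ 1 and S(z -> infinity) -> (1/4)^(1/3)."""
    return math.exp(-1.0 - 1.7 * z) + 0.25 ** (1.0 / 3.0)

def T_su2(z):
    """SU(2)_CMB temperature redshift relation, Eq. (9)."""
    return S_su2(z) * (1.0 + z) * T0

def T_lcdm(z):
    """Conventional linear LambdaCDM relation (S(z) = 1)."""
    return (1.0 + z) * T0

for z in (0.0, 1.0, 5.0):
    print(f"z={z}: T_SU2={T_su2(z):.3f} K, T_LCDM={T_lcdm(z):.3f} K, "
          f"ratio={T_su2(z)/T_lcdm(z):.3f}")
```

At \(z=0\) the two relations coincide to better than 1 %, while for \(z\gg 1\) the SU(2) temperature is suppressed by the factor \(1/4^{1/3}\approx 0.63\), and hence the CMB photon density (\(\propto T^{3}\)) by its cube, \(1/4\).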
The redshift dependence of the CMB temperature results in scaling and shifting of the differential CMB photon number density \(n_{\rm CMB}(\epsilon,z)\), \[n_{\rm CMB}(\epsilon,z)=\left(\frac{T(z)}{T_{0}}\right)^{2}\,n_{\rm CMB}\left( \epsilon\left(\frac{T(z)}{T_{0}}\right)^{-1},0\right)\,, \tag{11}\] where \(\epsilon\) is the energy of the photons, and \(n_{\rm CMB}\) as derived from the Planck distribution is \[n_{\rm CMB}(\epsilon,z)=\frac{1}{\pi^{2}c^{3}\hbar^{3}}\frac{\epsilon^{2}}{\exp( \epsilon/k_{B}T(z))-1}\,, \tag{12}\] where \(k_{B}\) is the Boltzmann constant. The redshift dependence of UHECR interactions with the CMB is reflected in the expression for the energy loss length (Berezinskii et al. 1990), \[-\frac{1}{E}\frac{{\rm d}E}{{\rm d}x}=\frac{k_{B}T}{2\pi^{2}\Gamma^{2}\hbar^{3}c^{3}}\int_{\epsilon_{0}}^{\infty}{\rm d}\epsilon^{\prime}\,\sigma(\epsilon^{\prime})\,f(\epsilon^{\prime})\,\epsilon^{\prime}\left\{-\ln\left[1-\exp\left(-\frac{\epsilon^{\prime}}{2\Gamma\,k_{B}T}\right)\right]\right\}\,,\] where \(E\) is the energy and \(\Gamma\) the Lorentz boost of the UHECRs, \(\epsilon^{\prime}\) the photon energy in the rest frame of the particle with threshold \(\epsilon_{0}\), \(\sigma(\epsilon^{\prime})\) the cross-section of the corresponding interaction (photodisintegration, photomeson, pair production), and \(f(\epsilon^{\prime})\) the average inelasticity of the interaction. The scaling of the CMB density produces a corresponding scaling of the interaction rates \(\lambda(\Gamma,z)\): \[\lambda(\Gamma,z)=\left(\frac{T(z)}{T_{0}}\right)^{3}\,\lambda\left(\frac{T (z)}{T_{0}}\,\Gamma,z=0\right)\,. \tag{13}\] The comparison of the energy loss lengths for U(1) and SU(2) is shown in Fig. 2 (protons) and in Fig. 3 (iron) for \(z=1\). The interaction processes with the CMB are represented separately (photopion, photodisintegration, pair production), while they are grouped into one curve for the extragalactic background light (EBL, dotted dark red). 
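The scaling relations of Eqs. (11) and (13) can be checked directly against the Planck distribution. A minimal sketch in dimensionless units (constants set to one; the approximate \({\cal S}(z)\) from section 2, with exponent \(-1-1.7z\) as fixed by \({\cal S}(0)\approx 1\), is our assumption):

```python
import math

def n_planck(eps, T):
    """Differential Planck photon number density, Eq. (12),
    in units where pi^2 c^3 hbar^3 = k_B = 1."""
    return eps ** 2 / math.expm1(eps / T)

T0 = 1.0

def T_of_z(z):
    """SU(2)_CMB T(z) with the approximate S(z)."""
    return (math.exp(-1.0 - 1.7 * z) + 0.25 ** (1.0 / 3.0)) * (1.0 + z) * T0

# Eq. (11): n(eps, z) = r^2 * n(eps / r, 0) with r = T(z)/T0.
z, eps = 1.0, 3.0
r = T_of_z(z) / T0
lhs = n_planck(eps, T_of_z(z))
rhs = r ** 2 * n_planck(eps / r, T0)
print(lhs, rhs)  # identical up to rounding

# Eq. (13): the total photon density, and hence the interaction rate,
# scales as (T(z)/T0)^3; compare SU(2) to LambdaCDM (T/T0 = 2) at z = 1.
print((r / 2.0) ** 3)  # SU(2)/LambdaCDM photon density ratio at z = 1
```

The resulting density ratio of roughly \(0.34\) at \(z=1\) is consistent with the nearly threefold increase in propagation lengths at that redshift discussed below.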
For protons at redshift \(z=1\) the energy loss length at the GZK-limit (E\(\sim\) 50 EeV) is shifted by a factor of \(\sim\)2 to higher energies for SU(2) and the propagation lengths for both pair production and photopion production are increased by nearly a factor 3. For iron nuclei at the same redshift, the corresponding photodisintegration limit is also shifted to higher energies by a factor \(\sim\)2 for the SU(2). However, because the energy loss lengths are also increased due to the reduced CMB density, the interactions with the EBL are the dominant ones for cosmic ray energies below \(10^{20}\) eV and therefore the total energy loss length is not increased as much as in the case of protons. This is representative of the case for all intermediate nuclear species with masses between the proton and iron. The increase in energy loss lengths implies the expansion of the horizon for UHECRs: for protons at all energies, for nuclei at the highest energies starting from about \(\sim 10^{19}\) eV. With such an increase, protons from sources at redshift 1 and energies \(1-40\) EeV would propagate for several hundreds of megaparsecs more than in the case of the U(1), whereas protons at higher energies (where the photopion interactions prevail) would propagate for more than ten megaparsecs. These increases of propagation horizons are only important when the contribution from distant sources is the dominant one. As the redshift evolves to the present, the U(1) and SU(2)\({}_{\rm CMB}\) densities converge and by distances of 20 Mpc from Earth the loss lengths differ by only 1.5 %. Thus, although protons can propagate further away from sources beyond \(\sim\)200 Mpc in the SU(2) case, they completely lose their energy before reaching our galaxy and only the secondary neutrinos reach us, much like in the U(1) case. Figure 1: Plot of function \(\mathcal{S}(z)\) in Eq. (9) for SU(2)\({}_{\rm CMB}\) in solid. 
The conventional \(T(z)\) relation of the CMB, as used in the cosmological standard model \(\Lambda\)CDM, corresponds to the dashed line \(\mathcal{S}(z)\equiv 1\). The high-temperature value \(1/4^{1/3}\) is approximated by the dotted line \(\mathcal{S}(z)=0.63\). Figure 3: As Fig. 2, for the propagation length of iron nuclei. Figure 2: Propagation length of protons at redshift \(z=1\) as a function of the initial particle energy. The normal U(1) and the SU(2)-induced \(T(z)\) propagation lengths are shown as dashed and full lines, respectively. Nonetheless, protons coming from sources marginally closer are able to reach our galaxy: at a distance of 200 Mpc, Eq. (13) yields a reduction in the interaction rates of \(\sim\)9 % for the SU(2) scenario, see Fig. 2. For nuclei the increased propagation is, however, much less relevant, since their propagation lengths are limited to a few tens of Mpc. For such distances, the reduction in interaction rates with the CMB is \(2-4\) % for SU(2). However, those interactions are overshadowed by the dominant interactions with the EBL. ## 4 Observational consequences for UHECR energy spectra We evaluate the impact on the propagation of UHECRs by employing the fit obtained by Heinze et al. (2019) to data from the Pierre Auger Observatory (Aab et al., 2017) under a conventional temperature redshift relation (\(\Lambda\)CDM). The changes in spectral energy and composition produced with the same fit values under SU(2)\({}_{\rm CMB}\) are obtained by employing the modified \(T(z)\)-relation. The propagation of UHECRs was performed using PriNCe (Heinze et al., 2019), an efficient code to integrate the transport equations for the evolution of cosmic rays at cosmological scales. It includes all the relevant interactions and allows for custom modifications; however, it does not account for the effect of magnetic fields. 
The propagation scenario considers a population of sources with a continuous distribution in redshift proportional to \((1+z)^{m}\) with source evolution parameter \(m\) obtained from the fit. The sources are assumed to be isotropically distributed and to eject a rigidity-dependent spectral energy flux according to \[J_{A}(E)=\mathcal{J}_{A}\,f_{\rm cut}(E,Z_{A},R_{\rm max})\,\left(1+z\right)^ {m}\,\left(\frac{E}{E_{0}}\right)^{-\gamma}, \tag{14}\] with five nuclear mass groups indicated by the index \({}_{A}\) (denoting the nuclear species \({}^{1}\)H, \({}^{4}\)He, \({}^{14}\)N, \({}^{28}\)Si, and \({}^{56}\)Fe). They share the same spectral index \(\gamma\) and the maximal rigidity \(R_{\rm max}=E_{\rm max}/Z_{A}\). The cutoff of the injection spectra \(f_{\rm cut}\) is defined as \[f_{\rm cut}(E)=\begin{cases}1,&E<Z_{A}R_{\rm max}\\ \exp\left(1-E/(Z_{A}R_{\rm max})\right),&E>Z_{A}R_{\rm max}.\end{cases} \tag{15}\] \(\mathcal{J}_{A}\) represents the flux of particles of species \(A\) emitted per unit of time, comoving volume, and energy. The elemental injection fractions \(f_{A}\) are defined as \(f_{A}=\mathcal{J}_{A}/(\Sigma_{A^{\prime}}\,\mathcal{J}_{A^{\prime}})\) at the reference energy \(E_{0}=10^{18}\) eV. Integrating over the injected fluxes \(J_{A}\) leads to the integral fractions of the energy density \(I_{A}\), which are independent of the choice of \(E_{0}\): \[I_{A}=\frac{\int_{E_{\rm min}}^{\infty}J_{A}E\,dE}{\Sigma_{A^{ \prime}}\int_{E_{\rm min}}^{\infty}J_{A^{\prime}}E\,dE}=\frac{\int_{E_{\rm min }}^{\infty}f_{A}\,f_{\rm cut}(E,Z_{A})\,E^{1-\gamma}dE}{\Sigma_{A^{\prime}} \int_{E_{\rm min}}^{\infty}f_{A^{\prime}}\,f_{\rm cut}(E,Z_{A^{\prime}})\,E^ {1-\gamma}dE}, \tag{16}\] where \(E_{\rm min}=10^{18}\) eV. For the sake of completeness, we provide both \(f_{A}\) and \(I_{A}\) in the following sections. 
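The injection model of Eqs. (14)-(16) is straightforward to sketch in code. Below, a simple trapezoidal evaluation of the integral fractions \(I_{A}\) (the helper names are ours), assuming the best fit values \(\gamma=-0.8\) and \(R_{\rm max}=1.6\times 10^{18}\) V together with the injection fractions \(f_{A}\) of Table 1:

```python
import math

E_MIN = 1e18     # eV, lower limit of Eq. (16)
GAMMA = -0.8     # spectral index (Table 1)
R_MAX = 1.6e18   # V, maximal rigidity (Table 1)

def f_cut(E, Z):
    """Injection cutoff of Eq. (15)."""
    return 1.0 if E < Z * R_MAX else math.exp(1.0 - E / (Z * R_MAX))

def energy_integral(f_A, Z, n=100000):
    """Trapezoidal estimate of the numerator of Eq. (16);
    the cutoff makes the integrand negligible far above Z * R_MAX."""
    E_max = 60.0 * Z * R_MAX
    h = (E_max - E_MIN) / n
    total = 0.0
    for i in range(n + 1):
        E = E_MIN + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f_A * f_cut(E, Z) * E ** (1.0 - GAMMA)
    return total * h

# Injection fractions f_A (Table 1 percentages / 100) and charges Z_A.
species = {"He": (0.820, 2), "N": (0.173, 7), "Si": (0.006, 14), "Fe": (0.0002, 26)}
raw = {A: energy_integral(f, Z) for A, (f, Z) in species.items()}
I = {A: v / sum(raw.values()) for A, v in raw.items()}
print({A: round(100 * v, 1) for A, v in I.items()})  # I_A in %
```

The resulting fractions closely reproduce the \(I_{A}\) column of Table 1, illustrating how the hard spectral index shifts the injected energy budget towards the heavier, higher-rigidity species.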
For SU(2)\({}_{\rm CMB}\) the following cosmological parameters were used for the propagation: the Hubble parameter \(H_{0}=74.24\) km s\({}^{-1}\)Mpc\({}^{-1}\), a dark energy fraction of \(\Omega_{\Lambda}=0.616\), and the local matter density \(\Omega_{\rm m,0}=0.384\), compare with Hahn et al. (2019). For U(1)\({}_{\rm CMB}\) (\(\Lambda\)CDM) the values from the Planck Collaboration were used (Aghanim et al., 2020, p. 15, Table 2), where \(H_{0}=67.36\) km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{\Lambda}=0.6847\) and \(\Omega_{\rm m,0}=0.3153\) (TT,TE,EE+lowE+lensing). The best fit parameters of Heinze et al. (2019) can be seen in Table 1. Figure 4 shows a comparison of the fluxes obtained with the same parameters by employing the \(T(z)\) relation for SU(2) (solid lines) vs employing the \(\Lambda\)CDM relation (dashed lines). The resulting total flux for SU(2) is virtually unchanged for energies above \(6\times 10^{18}\) eV, while the fluxes for individual nuclear groups show slightly more pronounced peaks. This effect is a consequence of the modest increase in the horizons. At the same time, the reduction in the pair production losses produces sharper peaks, because the effect of energy redistribution corresponding to the U(1) case is less prominent for SU(2). For protons at the lowest energies, the differences are much more pronounced due to the change in pair production rates as the energies approach \(10^{18}\) eV from above. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline EBL & Gilmore et al. & Element & \(f_{A}\) \% & \(I_{A}\) \% \\ \hline models & TALYS \& Sibyll 2.3c & H & 0.0 & 0.0 \\ \hline redshifts & \(1-0\) & He & 82.0 & 9.91 \\ \hline \(\gamma\) & \(-0.8\) & N & 17.3 & 69.99 \\ \hline \(R_{\rm max}\) & \(1.6\times 10^{18}\) V & Si & 0.6 & 16.91 \\ \hline \(m\) & 4.2 & Fe & 0.02 & 3.19 \\ \hline \end{tabular} \end{table} Table 1: Best fit parameters from Heinze et al. (2019), Table 3. 
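The two parameter sets imply slightly different distance scales for the propagation. As an aside, this can be made concrete with a flat-universe comoving distance sketch (our own illustration; radiation is neglected, which is adequate at these redshifts):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def comoving_distance(z, H0, Om, OL, n=10000):
    """Comoving distance in Mpc for a flat FLRW universe,
    D_C = c * int_0^z dz' / H(z'), via trapezoidal integration."""
    h = z / n
    total = 0.0
    for i in range(n + 1):
        zp = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w / (H0 * math.sqrt(Om * (1.0 + zp) ** 3 + OL))
    return C_KM_S * total * h

d_su2 = comoving_distance(1.0, 74.24, 0.384, 0.616)     # SU(2)_CMB parameters
d_lcdm = comoving_distance(1.0, 67.36, 0.3153, 0.6847)  # Planck LambdaCDM
print(round(d_su2), round(d_lcdm))  # Mpc
```

With the higher \(H_{0}\) and matter density of SU(2)\({}_{\rm CMB}\), a source at \(z=1\) sits at a somewhat smaller comoving distance than under the Planck parameters.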
Figure 4: Spectral fit to the 2017 Auger spectral flux data (Aab et al., 2017) from the best fit parameters in Heinze et al. (2019), see Tab. 1. The fluxes of the normal U(1) and modified SU(2) temperature redshift relations are shown as dashed and full lines, respectively. The \(\chi^{2}\) only considers data points above the ankle region (white dots), as was done in Heinze et al. (2019). Figure 5: Spectral fit using a gradient descent algorithm to the 2017 Auger spectral flux data, \(X_{\rm max}\) and \(\sigma(X_{\rm max})\) data (Aab et al., 2017). The fluxes of the normal U(1) and modified SU(2) temperature redshift relations are shown as dashed and full lines, respectively. The \(\chi^{2}\) was computed including all the white dots. An improved fit is found by employing a gradient descent algorithm (Perrotta, 2020, p. 33 ff) including spectral data, \(X_{\rm max}\) data, and \(\sigma(X_{\rm max})\) data from 2017 (Aab et al., 2017) above \(1\times 10^{18}\) eV. Table 2 presents the values for this fit, and Fig. 5 shows the flux comparison. For this fit, the proton excess of SU(2)\({}_{\rm CMB}\) below the ankle is reduced, and the main contributing factor is the shallower source evolution (\(m=2.7\)), in contrast to the stronger evolution \(m=4.2\) for U(1) in Heinze's best fit. The injected chemical composition and the spectral index are only mildly changed, which suggests that the shallower source evolution is enough to compensate for the increased proton horizon and the pileup below the ankle. Note that the proton fraction below the ankle is still too high, in disagreement with the chemical composition inferred from the \(X_{\rm max}\) data (see also appendix B, Fig. 10 c). Below the ankle, an additional galactic component with a heavier composition is expected. To better illustrate the SU(2) impact on UHECR propagation, Fig. 
6 contrasts the cosmic ray fluxes resulting from a conventional U(1) propagation employing the best fit parameters from Table 2 and scaling the CMB photon density by different factors, as shown in the curve labels. The red dotted line in Fig. 6 corresponds to SU(2)\({}_{\rm L}\), where CMB photons interact only with half of the UHECRs due to their handedness. The excess in proton flux below the ankle is correlated with the CMB photon density, because these protons come from the disintegration of nuclei. However, this relation depends on the injection spectral index, and it is hard to distinguish an increased proton flux from an additional UHECR source and source evolution. Detailed directional studies which also consider the effects of magnetic fields, as well as a better understanding of the chemical composition below the ankle, are necessary in order to favour or disfavour the correlation between the slope of the UHECR flux below the ankle and \(T(z)\). Note also that only hard spectra, i.e. \(\gamma\leq 0\), can significantly increase the UHECR flux below the ankle, because of the larger contribution of the highest energies in secondary protons. Soft injection spectra, e.g. \(\gamma\approx 2\) as expected from shock acceleration, do not significantly increase the UHECR flux under SU(2)\({}_{\rm CMB}\). ## 5 Cosmogenic neutrinos The expected cosmogenic neutrino fluxes are shown in Fig. 7 for the modified temperature redshift relation under SU(2)\({}_{\rm CMB}\) and the normal \(T(z)\), for the best fit values from the gradient descent method, Table 2. The neutrino fluxes for SU(2)\({}_{\rm CMB}\) peak at slightly higher energies and are slightly increased. The former feature is a consequence of the changed redshift dependence, which increases the energy of the GZK limit in SU(2)\({}_{\rm CMB}\) compared to U(1). The latter effect results from the increase in the propagation horizon of the source protons. 
Figure 8 shows that changes in the CMB density only affect the cosmogenic neutrino flux for energies around \(10^{17}\) eV. The peak at around \(10^{15}\) eV, stemming mostly from the decay of neutrons from photodisintegration (see e.g. Ave et al. (2005)), is mostly unaffected except for being slightly narrower due to reduced pair production losses. The reduction of the CMB density by 25% matches the neutrino flux obtained with the best fit for SU(2)\({}_{\rm CMB}\) very well; however, from Fig. 6 it can be seen that the CR flux is also strongly increased for energies above the ankle, in comparison to \(\Lambda\)CDM. In addition to the cosmogenic neutrinos, the photopion production with the CMB also generates \(\gamma\)-rays, and the resulting flux at Earth in the case of an SU(2)\({}_{\rm CMB}\) would be slightly enhanced compared to the U(1)\({}_{\rm CMB}\) due to the increased horizon in the absence of \(\gamma\gamma\)-pair production. However, \(\gamma\gamma\)-pair production and inverse Compton \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline EBL & Gilmore et al. & Element & \(f_{A}\) \% & \(I_{A}\) \% \\ \hline models & TALYS \& Sibyll 2.3c & H & 0.001 & 0.0 \\ \hline redshifts & \(1-0\) & He & 82.273 & 9.74 \\ \hline \(\gamma\) & \(-0.89\) & N & 17.364 & 76.91 \\ \hline \(R_{\rm max}\) & \(1.74\times 10^{18}\) V & Si & 0.354 & 11.63 \\ \hline \(m\) & 2.7 & Fe & 0.009 & 1.72 \\ \hline \end{tabular} \end{table} Table 2: Best fit gradient descent parameters. Figure 6: The effect of seven modified CMB photon densities on the total cosmic ray flux is shown in comparison to the normal U(1) temperature redshift relation, as obtained in Heinze et al. (2019) (navy blue, dashed), on top of Auger data from 2017. The best fit parameters of the gradient descent method are used, compare Table 2. The total CR flux for an SU(2) \(T(z)\) relation is shown in navy blue. 
\(0.5\times\) U(1) is shown in red dotted lines, \(0.75\times\) U(1) orange dashed, \(1.25\times\) U(1) yellow dot-dashed, \(1.5\times\) U(1) green dotted, \(1.75\times\) U(1) light blue dashed, and \(2\times\) U(1) purple dot-dashed. Figure 7: The cosmogenic neutrino flux obtained from the gradient descent fit, Table 2. SU(2)\({}_{\rm CMB}\) is shown in navy blue; normal \(\Lambda\)CDM with the corresponding cosmological parameters and U(1) photon propagation is shown as a navy blue dashed line. The pink shaded area represents the projected sensitivity of the IceCube-Gen2 radio upgrade after 5 years of observation, compare Fig. 5 in Aartsen et al. (2019). The dotted line indicates the expected sensitivity of GRAND200k after 3 years (Alvarez-Muniz et al., 2020). The dark purple and green dashed lines show 90% CL limits from the IceCube and Pierre Auger Collaborations, respectively (Aartsen et al., 2018; Aab et al., 2019). scattering with the EBL are the dominant interactions, in particular for \(\gamma\)-ray energies \(\lesssim 100\) TeV; therefore, after the cascading of the photons we expect no significant difference in the cosmogenic \(\gamma\)-ray flux between SU(2)\({}_{\rm CMB}\) and U(1)\({}_{\rm CMB}\). ## 6 Summary and outlook In this paper, we examined the impact of a locally non-linear modification of the CMB temperature redshift relation \(T(z)\) on the fit to ultra-high energy cosmic rays and the corresponding cosmogenic neutrinos. This changed temperature redshift relation is motivated by subjecting CMB photons to an SU(2) rather than a U(1) gauge group. While the temperature redshift relation is today (\(z=0\)) the same as in \(\Lambda\)CDM, the spectral CMB density increases more slowly and non-linearly for small redshifts (\(z\lesssim 2\)) under SU(2)\({}_{\rm CMB}\), and then linearly with a scaling factor of \(1/4^{1/3}\approx 0.63\) in comparison to the normal \(T(z)\) under \(\Lambda\)CDM, compare Fig. 1. 
The reduction of the CMB densities is found to significantly affect the interaction lengths of UHECRs with CMB photons in the redshift range of relevance for UHECR propagation, resulting in extended horizons for protons and UHECR nuclei. However, the increase in interaction lengths has only a modest effect on the observed UHECR flux due to interactions with the EBL, which then become dominant for the energies of relevance. Hence, a comparison to an existing fit of UHECRs yields a similar flux of UHECR nuclei but differs considerably for protons, where a pronounced bump appears below the ankle for SU(2)\({}_{\rm CMB}\). In order to agree with Auger data in the case of a hard injection spectrum, a shallower source evolution of cosmic ray sources of \(m\approx 2.7\) is needed, which is more in line with SBGs and GRBs than with AGNs. This is in agreement with recent studies that consider arrival directions and extragalactic magnetic fields for energies beyond the ankle (\(\geq 5\times 10^{18}\) eV) (Bister, 2023). While the confirmation of the SU(2)\({}_{\rm CMB}\) description requires further studies, the present work provides constraints for its validity. The independent determination of the redshift evolution of UHECR sources has the potential to reject the SU(2)\({}_{\rm CMB}\) temperature redshift relation for _hard_ injection spectra: for a steeper cosmic ray source evolution, the predicted proton contribution below the ankle would be in tension with observations. Since there is currently no firm preference for a specific UHECR source class (Abreu et al., 2022), we would like to add a modified \(T(z)\), in particular that of SU(2)\({}_{\rm CMB}\), to the discussion. This adds another tool to discriminate potential source classes; conversely, constraining the sources by other means while simultaneously improving the knowledge of the UHECR composition may lead to a direct probe of the \(T(z)\) relation of the CMB in the future. 
## 7 Data availability The authors welcome requests to collaborate and will share the modifications of Jonas Heinze's original program PriNCe as used in this study accordingly. ## 8 Acknowledgements JM acknowledges insightful discussions with Ralf Hofmann and Wolfgang Rhode. This work is supported by the Vector Foundation under grant number P2021-0102 and by the SFB 1491 (Project A3). LM's work is supported by the DFG under grant number 445990517 (KA 710).
2309.07789
SOT-MRAM-Enabled Probabilistic Binary Neural Networks for Noise-Tolerant and Fast Training
We report the use of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) to implement a probabilistic binary neural network (PBNN) for resource-saving applications. The in-plane magnetized SOT (i-SOT) MRAM not only enables field-free magnetization switching with high endurance (> 10^11), but also hosts multiple stable probabilistic states with a low device-to-device variation (< 6.35%). Accordingly, the proposed PBNN outperforms other neural networks by achieving an 18x increase in training speed, while maintaining an accuracy above 97% under the write and read noise perturbations. Furthermore, by applying the binarization process with an additional SOT-MRAM dummy module, we demonstrate an on-chip MNIST inference performance close to the ideal baseline using our SOT-PBNN hardware.
Puyang Huang, Yu Gu, Chenyi Fu, Jiaqi Lu, Yiyao Zhu, Renhe Chen, Yongqi Hu, Yi Ding, Hongchao Zhang, Shiyang Lu, Shouzhong Peng, Weisheng Zhao, Xufeng Kou
2023-09-14T15:25:36Z
http://arxiv.org/abs/2309.07789v2
# SOT-MRAM-Enabled Probabilistic Binary Neural Networks for Noise-Tolerant and Fast Training ###### Abstract We report the use of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) to implement a probabilistic binary neural network (PBNN) for resource-saving applications. The in-plane magnetized SOT (i-SOT) MRAM not only enables field-free magnetization switching with high endurance (\(>10^{11}\)), but also hosts multiple stable probabilistic states with a low device-to-device variation (\(<6.35\%\)). Accordingly, the proposed PBNN outperforms other neural networks by achieving an 18\(\times\) increase in training speed, while maintaining an accuracy above 97% under the write and read noise perturbations. Furthermore, by applying the binarization process with an additional SOT-MRAM dummy module, we demonstrate an on-chip MNIST inference performance close to the ideal baseline using our SOT-PBNN hardware. ## I Introduction With the advent of artificial intelligence, a seismic shift is observed in computing paradigms. As we move towards handling larger volumes of data and higher task complexities, new architectures for artificial neural networks (ANNs) have sprung up in numerous applications [1]. Conventionally, ANNs have relied on high-precision floating-point arithmetic to obtain optimal computational results. However, these approaches always require substantial computing and memory resources. Therefore, as the number of operations and data volume increase, the training process inevitably slows. In addition, the performance of these deterministic networks is heavily affected by discrepancies between standard training datasets and actual input data, which invariably result in reduced accuracy [2]. Alternatively, probabilistic-featured PBNNs, which introduce stochasticity within the network to enhance robustness, have been proposed to facilitate the convergence to global optima [3]. 
Consequently, the noise-tolerant PBNNs could offer accelerated training, robustness, and resource-saving capabilities for image/video classification and natural language processing. In principle, the key to PBNNs lies in the conversion of probabilities into deterministic outcomes via data sampling [4]. This necessitates that the PBNN hardware withstands a large number of sampling operations, while the power consumption for each operation needs to be as low as possible. In this context, the non-destructive electrical manipulation of magnetic moments inherently allows MRAM to possess high endurance and energy-efficient write/read characteristics. More importantly, the magnetization switching probability can be well-controlled by the injection current level via spin-orbit torque, therefore making MRAM a suitable building block to construct PBNNs. Inspired by the above scenario (Fig. 1), we utilize the SOT-MRAM platform to harness the PBNN advantages. Devices across the 8-inch wafer exhibit highly consistent performance in terms of low resistance variation and identical probabilistic switching curves with repeatable state variables. By implementing both the vector-matrix multiplication (VMM) and the binarization operations with SOT-MRAM, we demonstrate a noise-tolerant and fast-training PBNN with an on-chip MNIST digit recognition accuracy of 90%. ## II Field-Free Probabilistic Switching of In-Plane Magnetized SOT-MRAM ### _Device characterizations and SOT-driven probabilistic magnetization switching_ High-quality magnetic tunnel junction (MTJ)/heavy metal (HM) thin films were prepared on 8-inch Si/SiO\({}_{2}\) wafers by magnetron sputtering. To enable field-free operation, we adopted the i-SOT MRAM configuration, in which an elliptical-shaped MTJ design ensures in-plane magnetic anisotropy.
Accordingly, the spin current generated in the HM layer is parallel to the magnetization direction (\(M\)) of the free layer; therefore, the SOT is exerted to directly switch \(M\) without the presence of an assisting magnetic field (Fig. 2). After film growth, a large-scale SOT-MRAM array was fabricated, with a typical device size of 0.7 \(\upmu\)m \(\times\) 2 \(\upmu\)m. High-resolution transmission electron microscope (HR-TEM) images in Fig. 3 visualize the sharp hetero-interfaces. Subsequently, reliable field-free SOT-driven magnetization switching was demonstrated (Fig. 4), where the response time of the SOT-MRAM is below 400 ps, and the hysteresis window (_i.e._, the switching voltage for a 400 ps pulse is \(V_{\text{C}}\)) of the \(R\)-\(V\) curve is modulated by the pulse width. Moreover, because of the non-destructive SOT switching mechanism, the recorded parallel resistance (\(R_{\text{P}}\)) and antiparallel resistance (\(R_{\text{AP}}\)) in Fig. 5 did not experience any distortion after \(10^{11}\) write/read cycles (_i.e._, the tunneling magnetoresistance ratio of \(\sim\)70% is sufficient for the VMM operation in PBNNs). Besides, by changing the input voltage around \(V_{0}\) (the voltage of 50% switching probability), multiple switching probability states (_i.e._, network weights) are obtained (Fig. 6), and their corresponding probabilities are repeatable (the values were deduced by counting the number of \(R_{\text{P}}\)-to-\(R_{\text{AP}}\) switching events during 500 samplings per voltage). Apart from single SOT device characterizations, device-to-device variations also play an important role in determining the overall network functionality. In this regard, Fig. 7 confirms that 8 randomly selected SOT-MRAM devices from the array all yielded 11 well-defined intermediate probabilistic states, with an average variation of 6.35% in the examined operating range.
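The voltage-controlled switching probability described above can be mimicked with a toy model. The sigmoid shape and the parameters \(V_{0}\) and width below are hypothetical stand-ins for the measured curves of Fig. 6; the counting over 500 samplings mirrors the procedure quoted in the text:

```python
import math
import random

def p_switch(v, v0=0.5, width=0.05):
    """Hypothetical sigmoid model of the switching probability vs. write
    voltage; v0 is the 50%-probability voltage. Real curves are measured."""
    return 1.0 / (1.0 + math.exp(-(v - v0) / width))

def estimate_p(v, n=500, rng=None):
    """Estimate the probability as described in the text: count the
    R_P -> R_AP switching events over n samplings at a fixed voltage."""
    rng = rng or random.Random(0)
    return sum(1 for _ in range(n) if rng.random() < p_switch(v)) / n

# Binomial standard error sqrt(p(1-p)/n) is about 2.2% at p = 0.5, n = 500,
# commensurate with resolving 11 well-separated probability levels.
print(p_switch(0.5), estimate_p(0.5))
```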
In the meantime, the standard deviation of the \(R_{\text{P}}\) (\(R_{\text{AP}}\)) normal distribution curve, which was collected from 100+ devices across the 8-inch wafer, is found to be 4.72% (4.79%), as shown in Fig. 8. Besides, we need to point out that such a small resistance variation has a negligible impact on the MNIST classification accuracy of the subsequent simulations and PBNN on-chip validation. Consequently, the proposed SOT-MRAM provides stable and uniform probabilistic switching states, thereby laying a solid foundation for the design and implementation of PBNN. ## III SOT-MRAM-Enabled PBNN Implementation ### PBNN network structure and process flow for MNIST test To demonstrate the SOT-MRAM-enabled PBNN, we developed a network that consists of two convolutional layers, two max-pooling layers, and three fully connected layers (_i.e._, all layers are constructed by SOT-MRAM) for standard MNIST handwritten digit recognition test (Fig. 9). Utilizing the faster training speed of PBNN, our PyTorch simulation results in Fig. 10 show that the ideal classification accuracy of PBNN quickly exceeds 98% after only 4 epochs, whereas other networks require at least 20 epochs under full-precision conditions [5, 6, 7]. Equivalently, the PBNN system can realize a significant training time reduction of 6\(\times\) to 18\(\times\) (Fig. 11). Another advantage of PBNN is its resource-saving feature. For instance, the entire network needs only eight quantized states for both weights and activations to achieve an accuracy above 98% at 30 epochs (Fig. 12). Furthermore, the PBNN displays a salient noise-tolerant property against the write and read errors. According to the error-awareness simulation data in Fig. 13, it is seen that the training result remains almost constant with respect to the write error, which may benefit from the natural stochasticity of probabilistic switching. 
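How such stochastic weights enter a forward pass can be illustrated with a toy layer. The binarization rule \(p(+1)=(1+w)/2\) and the tiny fully connected layer below are illustrative assumptions only; in the paper the sampling itself is performed by SOT-MRAM switching, and the network also contains convolutional and max-pooling layers:

```python
import random

def binarize_stochastic(w, rng):
    """Sample a real-valued weight w in [-1, 1] to +/-1 with
    p(+1) = (1 + w) / 2 (illustrative stochastic binarization rule)."""
    return 1 if rng.random() < (1.0 + w) / 2.0 else -1

def pbnn_fc_layer(x, weights, rng):
    """Toy fully connected binary layer with a sign activation."""
    out = []
    for row in weights:
        acc = sum(xi * binarize_stochastic(w, rng) for xi, w in zip(x, row))
        out.append(1 if acc >= 0 else -1)
    return out

rng = random.Random(0)
print(pbnn_fc_layer([1, -1, 1], [[0.9, -0.9, 0.9], [-0.9, 0.9, -0.9]], rng))
```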
On the other hand, even though the increase of the read error lowers the classification accuracy of PBNN to 90%, its performance is still better than that of other neural network counterparts [8]. Considering that the measured write and read errors of the SOT-MRAM devices are less than 6.35% (Fig. 7) and 4.8% (Fig. 8) respectively, the overall accuracy of our SOT-PBNN can reach the 97.33% benchmark in the ideal scenario. ### Hardware implementation of SOT-MRAM PBNN Guided by the PyTorch simulation, we further designed an on-chip PBNN system based on the in-plane magnetized SOT-MRAM array. As illustrated in Fig. 14, the row devices are selected through an SWL decoder, while the column devices are selected by write and read voltages from a digital-to-analog converter (DAC). Afterwards, a transimpedance amplifier (TIA) converts the current accumulated during the VMM operation into a voltage signal, which is binarized before passing to the next network layer. Given that the SOT-MRAM resistance changes with the read voltage, the binarization process in our SOT-PBNN system cannot be performed using a fixed-value resistor. Instead, to enable on-chip current comparison, we allocated an additional [2 \(\times\) n] dummy cell corresponding to the [m \(\times\) n] MRAM array cell. As a result, the binarization is achieved by writing \(R_{\text{AP}}\) and \(R_{\text{P}}\) into two MRAM devices along the same column of the dummy cell, and then determining the resistance state of a single MRAM using half of the summed currents under the same read voltage. The overview of the integrated SOT-MRAM PBNN chip is shown in Fig. 15. As a proof-of-concept, the VMM operation was validated experimentally in a [16 \(\times\) 1] array. 
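The VMM current accumulation and the dummy-cell binarization described above can be sketched in a few lines. All resistance and voltage values are illustrative, chosen only to respect the \(\sim\)70% TMR ratio quoted earlier (\(R_{\text{AP}}\approx 1.7R_{\text{P}}\)):

```python
def vmm_column_current(v_read, resistances):
    """Ideal accumulated column current I_sum = sum_i V / R_i
    (Ohm's law plus Kirchhoff current summation at the output port)."""
    return sum(v_read / r for r in resistances)

def binarize_with_dummy(i_out, v_read, r_p, r_ap, n_cols=1):
    """Dummy-cell comparator sketch: two MRAM cells written to R_P and R_AP
    supply a reference equal to half of their summed read currents, against
    which the accumulated VMM current is binarized."""
    i_ref = 0.5 * (v_read / r_p + v_read / r_ap)
    return 1 if i_out > n_cols * i_ref else 0

r_p, r_ap, v = 1000.0, 1700.0, 0.54          # illustrative values
i_sum = vmm_column_current(v, [r_ap] * 16)   # 16 cells, all anti-parallel
print(i_sum)
print(binarize_with_dummy(v / r_p, v, r_p, r_ap))   # P-state cell -> 1
print(binarize_with_dummy(v / r_ap, v, r_p, r_ap))  # AP-state cell -> 0
```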
Specifically, after all 16 serial-connected SOT-MRAM devices were initialized in the anti-parallel state (_i.e._, weight assignment), the accumulated current (\(I_{\text{out}}\)) measured at the output port is highly consistent with the ideal value (\(I_{\text{sum}}=\sum_{i=1}^{16}I_{\text{SOT},i}\)) under different read voltages (Fig. 16). It is also noted that with a maximum read voltage of 0.54 V, the average output current variation is only 4.31%, again demonstrating the low read error of our SOT-MRAM devices. Concurrently, the on-chip comparator function was evaluated from 9 randomly selected devices. Although the difference between \(R_{\text{P}}\) and \(R_{\text{AP}}\) becomes narrower as the read voltage increases, the reference resistance \(R_{\text{ref}}\) of the dummy cell is continuously kept within the resistance gap, hence verifying a wide operating range of the SOT-MRAM dummy cell (Fig. 17). Finally, we selected an 8-level activation quantization and 3-level weight quantization for handwritten digit recognition. After transferring the weight values to MRAM, we conducted inference using the MNIST dataset. From the measured and simulated results in Fig. 18, it is seen that the inference current distributions share the same features versus the assigned weight information, therefore yielding the correct digit judgement. Based on the inference results after 100 training sessions, we have obtained an on-chip classification accuracy of over 90% in our integrated SOT-PBNN chip (Fig. 19). ## IV Conclusion Compared with other memristive devices and neural networks, the SOT-MRAM-enabled PBNN elaborated in this work shows advantages including long endurance, stable states, fast training, and robustness against input variations (Table 1). Our work provides a compelling framework for the design of reliable neural networks for low-power applications with limited computational resources.
## Acknowledgment This work is supported by the National Key R&D Program of China (2021YFA0715503), the NSFC Programs (11904230, 62004013), the Shanghai Rising-Star Program (21QA1406000), and the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001).
2305.19662
Implementation of the SCAN Exchange-Correlation Functional with Numerical Atomic Orbitals
Kohn-Sham density functional theory (DFT) is nowadays widely used for electronic structure theory simulations, and the accuracy and efficiency of DFT rely on approximations of the exchange-correlation functional. By inclusion of the kinetic energy density $\tau$, the meta-generalized-gradient approximation (meta-GGA) family of functionals achieves better accuracy and flexibility while retaining the efficiency of semi-local functionals. The SCAN meta-GGA functional has been proven to yield accurate results for solid and molecular systems. We implement meta-GGA functionals with both numerical atomic orbitals and plane wave basis in the ABACUS package. Apart from the exchange-correlation potential, we also discuss the evaluation of force and stress. To validate our implementation, we perform finite-difference tests and convergence tests with the SCAN meta-GGA functional. We further test water hexamers, weakly interacting molecules of the S22 dataset, as well as 13 semiconductors. The results show satisfactory agreements with previous calculations and available experimental values.
Renxi Liu, Daye Zheng, Xinyuan Liang, Xinguo Ren, Mohan Chen, Wenfei Li
2023-05-31T08:58:10Z
http://arxiv.org/abs/2305.19662v1
# Implementation of the SCAN Exchange-Correlation Functional with Numerical Atomic Orbitals ###### Abstract Kohn-Sham density functional theory (DFT) is nowadays widely used for electronic structure theory simulations, and the accuracy and efficiency of DFT rely on approximations of the exchange-correlation functional. By inclusion of the kinetic energy density \(\tau\), the meta-generalized-gradient approximation (meta-GGA) family of functionals achieves better accuracy and flexibility while retaining the efficiency of semi-local functionals. The SCAN meta-GGA functional has been proven to yield accurate results for solid and molecular systems. We implement meta-GGA functionals with both numerical atomic orbitals and plane wave basis in the ABACUS package. Apart from the exchange-correlation potential, we also discuss the evaluation of force and stress. To validate our implementation, we perform finite-difference tests and convergence tests with the SCAN meta-GGA functional. We further test water hexamers, weakly interacting molecules of the S22 dataset, as well as 13 semiconductors. The results show satisfactory agreements with previous calculations and available experimental values. ## I Introduction Kohn-Sham density functional theory (DFT) [1; 2] is nowadays one of the most popular paradigms in electronic structure theory. In Kohn-Sham DFT, the many-electron system is replaced by an auxiliary system of non-interacting electrons, and all many-body interactions are carried by the exchange-correlation functional \(E_{\rm{xc}}\). The electronic density is solved self-consistently in an iterative way, the process of which is called the self-consistent field (SCF) method. However, the exact form of the exchange-correlation functional remains unknown, and approximations have to be made. In practice, the accuracy of Kohn-Sham density functional theory simulations largely depends on the choice of the approximated exchange-correlation functional. 
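The SCF procedure mentioned above, in caricature: feed a density into an update map and iterate with mixing until input and output agree. The fixed-point map below (`math.cos`) is a pure placeholder for the real density update, which in DFT means rebuilding and diagonalizing the Kohn-Sham Hamiltonian each cycle:

```python
import math

def scf_fixed_point(update, rho0, mix=0.3, tol=1e-10, max_iter=200):
    """Schematic self-consistent-field loop: simple linear mixing of the
    input and output "densities" until the residual drops below tol."""
    rho = rho0
    for _ in range(max_iter):
        rho_new = update(rho)
        if abs(rho_new - rho) < tol:
            return rho_new
        rho = (1.0 - mix) * rho + mix * rho_new
    raise RuntimeError("SCF not converged")

rho_star = scf_fixed_point(math.cos, 0.5)   # toy fixed point rho = cos(rho)
print(rho_star)
```

Linear mixing damps the update exactly as charge mixing does in a real SCF cycle; without it, an update map with a steep slope can oscillate instead of converging.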
[3] Most approximations of the exchange-correlation (XC) functionals can be classified according to Jacob's Ladder [4]. Moving up on the ladder, each rung introduces additional ingredients into the functional, making it more complicated and correcting some deficiencies of the lower rungs [5], until the heaven of chemical accuracy is reached. The first three rungs of the ladder are the local density approximation (LDA), the generalized gradient approximation (GGA), and the meta-generalized gradient approximation (meta-GGA), respectively. They are classified as local or semi-local functionals because the energy functional takes the form of \(E_{\rm{xc}}[\rho(\mathbf{r})]=\int e_{\rm{xc}}(\mathbf{r})d\mathbf{r}\), where \(\rho(\mathbf{r})\) is the electron density and the exchange-correlation energy density \(e_{\rm{xc}}(\mathbf{r})\) depends only on local physical quantities at each real-space point \(\mathbf{r}\). As a result of this locality, the first three rungs of functionals are highly efficient in calculations on supercomputers. The highest rung of semi-local functionals is the meta-GGA functional. [6] In meta-GGA, the exchange-correlation potential depends on the kinetic energy density \(\tau(\mathbf{r})\) in addition to the electron density \(\rho(\mathbf{r})\) and its derivative \(\nabla\rho(\mathbf{r})\). The inclusion of \(\tau(\mathbf{r})\) not only makes the meta-GGA functional more flexible than the first two rungs but also allows it to differentiate single-orbital regions from overlap regions. [6; 7; 8; 9] As a result, meta-GGA functionals are capable of producing accurate results on a variety of systems, including both molecules and solids [8; 6]. One of the most widely used meta-GGA functionals is the SCAN functional proposed by Sun, Ruzsinszky, and Perdew. [10] The SCAN functional satisfies all 17 known physical constraints of the exact exchange-correlation functional.
It has been shown to produce accurate predictions of bulk properties for semiconducting oxides, including formation enthalpies [11], equilibrium lattice constants, cohesive energies, bulk moduli [12; 13], and transition pressures [14]. SCAN also yields qualitatively correct descriptions of liquid water and ice [15; 16; 17; 18; 19; 20]. The success of SCAN has been attributed to its ability to give a first-principles description of the medium-range van der Waals interactions [16; 21], which are absent in LDA and GGA functionals. Furthermore, when combined with the Fock exchange, the SCAN0 functional can be obtained. Recently, the SCAN0 hybrid functional has been shown to produce satisfactory results for both model systems [22] and liquid water [23]. Previously, meta-GGA functionals including the kinetic energy density have been implemented with the plane-wave (PW) basis [24; 25] and the Gaussian-type orbital basis [26; 27]. Yet, to our knowledge, an implementation of the meta-GGA functional with a numerical atomic orbital (NAO) basis including forces and stress is still absent in the literature. Compared to the widely used PW basis in condensed systems with periodic boundary conditions, the NAO basis [28; 29; 30; 31; 32] offers highly efficient basis sets for large-scale first-principles calculations, as the number of basis functions needed is typically much smaller than that of the plane-wave basis set. [33; 34; 35; 36] NAOs are also strictly localized in real space, resulting in a sparse Hamiltonian matrix suitable for linear-scaling methods. [37; 38] However, complications arise when evaluating the forces and stress based on the NAO basis. In particular, besides the Hellmann-Feynman terms, there are also contributions from the Pulay [39] and orthogonal [33] terms.
In this work, we implement the meta-GGA SCAN functional for both PW and NAO basis sets in the electronic structure software ABACUS. [70] In particular, we derive the formulas of the energy term, the potential term, the forces, and the stress of the meta-GGA functionals with the systematically improvable NAO basis sets. [28; 29; 40] We follow the treatment in previous literature [24; 25; 26; 27; 41] in evaluating the action of the meta-GGA potential on the electronic wave functions. We also note that our implementation is slightly different from that of the NAO-based ONETEP [42] software in the detailed implementation of the meta-GGA potential term [5], on which more will be elaborated in Sec. II. We also perform systematic tests on the SCAN functional over a variety of systems. The accuracy of the stress calculation is verified by comparing it with the finite-difference results. For the plane-wave basis, we follow the formulation of Yao et al. [25]. The rest of the paper is organized as follows. In Sec. II, we present the formulas of the meta-GGA and its implementation with NAO basis sets, including the corresponding numerical operations in ABACUS. Sec. III shows the results of applying the SCAN meta-GGA functional on a variety of systems. Conclusions are drawn in Sec. IV. ## II Methods ### Evaluation of \(\tau\)-dependent XC Potential The total energy of a system, as described by the Kohn-Sham DFT [2], is partitioned as \[E_{\text{KS}}[\rho]=T_{s}[\rho]+E_{H}[\rho]+E_{\text{xc}}[\rho]+\int\rho(\mathbf{r})v_{ext}(\mathbf{r})d\mathbf{r}, \tag{1}\] where \(T_{s}[\rho]\), \(E_{H}[\rho]\) and \(E_{\text{xc}}[\rho]\) are the non-interacting kinetic term, the Hartree term, and the exchange-correlation term, respectively. The \(v_{ext}(\mathbf{r})\) term represents the external potential. As mentioned in Sec.
I, one of the key challenges in Kohn-Sham DFT is to develop an accurate and computationally efficient approximation to the exchange-correlation functional \(E_{xc}\). Compared to the LDA and GGA functionals, the meta-GGA functionals are more flexible due to the inclusion of the kinetic energy density dependence. The general form of the \(\tau\)-dependent meta-GGA exchange-correlation functional is [10; 27; 44; 43] \[E_{\text{xc}}[\rho]=\int e_{\text{xc}}\Big{[}\rho(\mathbf{r}),\nabla\rho(\mathbf{r}),\tau(\mathbf{r})\Big{]}d\mathbf{r}, \tag{2}\] where \(\rho(\mathbf{r})\) and \(\nabla\rho(\mathbf{r})\) are the electron density and its gradient, respectively, and \(\tau(\mathbf{r})\) is the kinetic energy density, defined as a summation over all occupied Kohn-Sham orbitals \[\tau(\mathbf{r})=\frac{1}{2}\sum_{i=1}^{\text{occ}}\left|\nabla\psi_{i}(\mathbf{r})\right|^{2}. \tag{3}\] To perform self-consistent electronic iterations, we need to evaluate the exchange-correlation potential \(v_{\text{xc}}(\mathbf{r})\) by taking the derivative of \(E_{\text{xc}}\) with respect to the electron density as \[v_{\text{xc}}(\mathbf{r})= \frac{\delta E_{\text{xc}}}{\delta\rho(\mathbf{r})}\] \[= \Big{[}\frac{\partial e_{\text{xc}}}{\partial\rho}-\nabla\cdot(\frac{\partial e_{\text{xc}}}{\partial\nabla\rho})\Big{]}+\int\frac{\delta e_{\text{xc}}}{\delta\tau(\mathbf{r}^{\prime})}\frac{\delta\tau(\mathbf{r}^{\prime})}{\delta\rho(\mathbf{r})}d\mathbf{r}^{\prime}. \tag{4}\] In our implementation, derivatives of the energy functional, namely \(\frac{\partial e_{\text{xc}}}{\partial\rho}\), \(\frac{\partial e_{\text{xc}}}{\partial\nabla\rho}\) and \(\frac{\delta e_{\text{xc}}}{\delta\tau}\), are obtained from the LIBXC package. [45] Since the first two terms of Eq. 4 in the square bracket also appear in GGA functionals, we focus on the last term, which is specific to the meta-GGA functionals.
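As a concrete illustration of Eq. 3, the kinetic energy density can be evaluated on a real-space grid. The 1D finite-difference sketch below is purely illustrative and unrelated to the actual ABACUS implementation:

```python
import math

def gradient(f, dx):
    """Central finite-difference gradient on a 1D grid (endpoints left 0)."""
    g = [0.0] * len(f)
    for j in range(1, len(f) - 1):
        g[j] = (f[j + 1] - f[j - 1]) / (2.0 * dx)
    return g

def kinetic_energy_density(orbitals, dx):
    """tau(r) = 1/2 sum_i |grad psi_i|^2 over occupied orbitals (Eq. 3)."""
    tau = [0.0] * len(orbitals[0])
    for psi in orbitals:
        dpsi = gradient(psi, dx)
        for j, g in enumerate(dpsi):
            tau[j] += 0.5 * g * g
    return tau

# single test orbital psi = sin(kx): tau should equal 1/2 k^2 cos^2(kx)
dx, k = 0.01, 2.0
xs = [j * dx for j in range(400)]
tau = kinetic_energy_density([[math.sin(k * x) for x in xs]], dx)
print(tau[200], 0.5 * k * k * math.cos(k * xs[200]) ** 2)
```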
We henceforth refer to it as \(V^{\tau}(\mathbf{r})\), and denote the corresponding operator as \(\hat{V}^{\tau}\). Complications arise for meta-GGA functionals because the kinetic energy density \(\tau\) introduces orbital dependence into the potential \(V^{\tau}(\mathbf{r})\). Although \(V^{\tau}(\mathbf{r})\) is still recognized as a density functional, as the kinetic energy density \(\tau\) implicitly relies on the electronic density, it is challenging to explicitly write \(V^{\tau}(\mathbf{r})\) in a \(\rho\)-dependent form. To be more specific, we first note that \(\tau\) can be rewritten in terms of the Kohn-Sham orbitals \(\{\psi_{i}\}\) and eigenvalues \(\{\epsilon_{i}\}\) \[\tau(\mathbf{r})=\tau[\{\psi_{i}\},\{\epsilon_{i}\}](\mathbf{r})=\frac{1}{2}\sum_{i}\Theta(\mu-\epsilon_{i})|\nabla\psi_{i}(\mathbf{r})|^{2}, \tag{5}\] where \(\Theta\) is the Heaviside step function, and \(\mu\) is the chemical potential. Then, we apply the chain rule to Eq. 4 and obtain \[V^{\tau}(\mathbf{r})= \int\frac{\delta e_{\text{xc}}(\mathbf{r})}{\delta\tau(\mathbf{r}^{\prime})}\frac{\delta\tau(\mathbf{r}^{\prime})}{\delta\rho(\mathbf{r})}d\mathbf{r}^{\prime}\] \[= \sum_{i=1}^{\text{occ}}\int\frac{\delta e_{\text{xc}}(\mathbf{r})}{\delta\tau(\mathbf{r}^{\prime})}\Big{[}\frac{\delta\tau(\mathbf{r}^{\prime})}{\delta\psi_{i}(\mathbf{r}^{\prime\prime})}\frac{\delta\psi_{i}(\mathbf{r}^{\prime\prime})}{\delta\rho(\mathbf{r})}+\frac{\delta\tau(\mathbf{r}^{\prime})}{\delta\epsilon_{i}}\frac{\delta\epsilon_{i}}{\delta\rho(\mathbf{r})}\Big{]}d\mathbf{r}^{\prime}. \tag{6}\] Therefore, in order to express \(V^{\tau}(\mathbf{r})\) as a local and multiplicative potential, it is necessary to evaluate \(\frac{\delta\psi_{i}(\mathbf{r}^{\prime})}{\delta\rho(\mathbf{r})}\) and \(\frac{\delta\epsilon_{i}}{\delta\rho(\mathbf{r})}\), which is non-trivial.
One way to deal with the issue is to use the optimized effective potential (OEP) method [46], which removes the explicit orbital dependence of exchange-correlation potentials by further applying the chain rule and evaluates newly derived terms using the response theory. Although the OEP method has been applied to meta-GGA functionals [47; 41], it is numerically challenging because an additional set of self-consistent equations must be solved along with the original Kohn-Sham SCF equations [46]. Alternatively, one may observe that it is not the operator \(\hat{V}^{\tau}\) itself, but rather its action on the Kohn-Sham orbitals, namely \(\hat{V}^{\tau}\psi_{i}\), that is of practical importance. Therefore, by inserting \[\frac{\delta\rho(\mathbf{r})}{\delta\psi_{i}(\mathbf{r}^{\prime})}=2\psi_{i}( \mathbf{r}^{\prime})\delta(\mathbf{r}-\mathbf{r}^{\prime}), \tag{7}\] into \(\hat{V}^{\tau}\psi_{i}\), one arrives at \[\hat{V}^{\tau}\psi_{i}(\mathbf{r}) = \frac{1}{2}\int\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{ r}^{\prime})}\frac{\delta\tau(\mathbf{r}^{\prime})}{\delta\psi_{i}(\mathbf{r})} \mathrm{d}\mathbf{r}^{\prime} \tag{8}\] \[= -\frac{1}{2}\nabla\cdot\left[\frac{\delta e_{\mathrm{xc}}}{ \delta\tau(\mathbf{r})}\nabla\psi_{i}(\mathbf{r})\right].\] By doing so, we convert the original problem of evaluating functional derivative of \(\frac{\delta e_{\mathrm{xc}}}{\delta\rho}\) into evaluating \(\frac{\delta e_{\mathrm{xc}}}{\delta\psi}\), which could be explicitly written out. However, this operation turns \(V^{\tau}(\mathbf{r})\) into a non-multiplicative operator \[\hat{V}^{\tau}(\mathbf{r})=-\frac{1}{2}\nabla\cdot\left[\frac{\delta e_{xc}} {\delta\tau(r)}\nabla\right], \tag{9}\] which implies that the value of \(\tau\)-dependent XC potential varies depending on the Kohn-Sham wave function \(\psi_{i}\). [48] The formulation in Eq. 8 was first brought up in Ref. 
[26] to evaluate the Becke-Roussel exchange functional [49] and has been widely adopted in the implementations of meta-GGA functionals. [24; 25; 46; 47; 48; 49; 50; 51] It has been given different names in the literature, such as the "orbital-based density-functional derivative method (ODDM)", [46] or "functional derivatives of \(\tau\)-dependent functionals with respect to the orbitals (FDO)". [5; 48] In this work, we adopt the same framework. ### Kinetic energy density and Hamiltonian in NAOs In the numerical atomic orbital basis, the Kohn-Sham orbitals are expanded as a linear combination of atomic orbitals as \[\psi_{i}(\mathbf{r})=\sum_{\mu}C_{i\mu}\chi_{\mu}(\mathbf{r}), \tag{10}\] where \(C_{i\mu}\) denotes the coefficients, \(\chi_{\mu}\) denotes the atomic orbitals, and \(i\) is the index for the Kohn-Sham wave function \(\psi_{i}(\mathbf{r})\). Each atomic basis function is located on a certain atom in the system: \(\chi_{\mu}(\mathbf{r})=\bar{\chi}_{\mu}(\mathbf{r}-\mathbf{X_{a}})\), where \(\mathbf{X_{a}}\) is the position of atom \(a\). In ABACUS, systems are treated with periodic boundary conditions (PBCs) with multiple \(k\)-point sampling. For clarity of discussion, we start with a derivation in the non-periodic case, followed by a short extension to the periodic scenario. We discuss the two major added components of the meta-GGA functional as compared to the GGA functionals, i.e., the kinetic energy density \(\tau\) and the contribution of the \(\tau\)-dependent part to the Hamiltonian matrix \(\langle\chi_{\mu}|\hat{V}_{\tau}|\chi_{\nu}\rangle\).
First, the \(\tau\) term is given by \[\tau(\mathbf{r}) =\frac{1}{2}\sum_{i=1}^{occ}f_{i}[\nabla\psi_{i}(\mathbf{r})]^{*}\cdot[\nabla\psi_{i}(\mathbf{r})]\] \[=\frac{1}{2}\sum_{\mu\nu}\rho_{\mu\nu}[\nabla\chi_{\mu}(\mathbf{r})]\cdot[\nabla\chi_{\nu}(\mathbf{r})], \tag{11}\] where \(f_{i}\) is the occupation number of state \(i\) while the density matrix \(\rho_{\mu\nu}\) takes the form of \[\rho_{\mu\nu}=\sum_{i=1}^{occ}f_{i}C_{i\mu}^{*}C_{i\nu}. \tag{12}\] Under the PBCs, Kohn-Sham wave functions obey the Bloch theorem and take the form of \[\psi_{i\mathbf{k}}(\mathbf{r})=\frac{1}{\sqrt{N}}\sum_{\mathbf{R}}\sum_{\mu}C_{i\mu}(\mathbf{k})\chi_{\mu\mathbf{R}}(\mathbf{r})e^{i\mathbf{k}\cdot\mathbf{R}}, \tag{13}\] where \(N\) is the number of unit cells in the Born-von-Karman supercell, \(\chi_{\mu\mathbf{R}}=\bar{\chi}_{\mu}(\mathbf{r}-\mathbf{X}_{a}-\mathbf{R})\) is the atomic orbital located on atom \(a\) in the cell \(\mathbf{R}\), and \(\mathbf{k}\) labels the \(k\)-points in the first Brillouin zone. With this, the density matrix in real space takes the form of \[\rho_{\mu\nu}(\mathbf{R})=\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}\sum_{i=1}^{occ}f_{i}C_{i\mu}^{*}(\mathbf{k})C_{i\nu}(\mathbf{k})e^{-i\mathbf{k}\cdot\mathbf{R}}, \tag{14}\] where \(N_{\mathbf{k}}\) denotes the number of \(k\)-points sampled in the first Brillouin zone. In this regard, the kinetic energy density in PBCs is written as \[\tau(\mathbf{r}) =\frac{1}{2}\frac{1}{N_{\mathbf{k}}}\sum_{\mathbf{k}}\sum_{i=1}^{occ}f_{i}[\nabla\psi_{i\mathbf{k}}(\mathbf{r})]^{*}\cdot[\nabla\psi_{i\mathbf{k}}(\mathbf{r})] \tag{15}\] \[=\frac{1}{2}\sum_{\mu\nu}\sum_{\mathbf{R}}\rho_{\mu\nu}(\mathbf{R})[\nabla\chi_{\mu\mathbf{0}}(\mathbf{r})]\cdot[\nabla\chi_{\nu\mathbf{R}}(\mathbf{r})]. 
\tag{16}\] Second, the \(\tau\)-dependent XC matrix element, \(\langle\chi_{\mu}|\hat{V}_{\tau}|\chi_{\nu}\rangle\) term, is obtained using integration by parts \[\hat{V}_{\mu\nu}^{\tau} =\langle\chi_{\mu}|\hat{V}^{\tau}|\chi_{\nu}\rangle\] \[=-\frac{1}{2}\int\chi_{\mu}(\mathbf{r})\nabla\cdot\left[\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}\nabla\chi_{\nu}(\mathbf{r})\right]\mathrm{d}\mathbf{r} \tag{17}\] \[=\frac{1}{2}\int[\nabla\chi_{\mu}(\mathbf{r})]\cdot[\nabla\chi_{\nu}(\mathbf{r})]\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}\mathrm{d}\mathbf{r}. \tag{18}\] We note that such a derivation was first introduced in Ref. [26]. Eq. 18 has been implemented in the Gaussian software [26; 50], where the derivatives of Gaussian orbitals can be obtained analytically. Meanwhile, the ONETEP team implemented the formula of Eq. 17, where the gradient operator is applied in reciprocal space, because the derivative of the non-orthogonal generalized Wannier function basis adopted in ONETEP may lose its locality. [5] With PBCs, the \(\tau\)-dependent contribution in real space is written as \[\hat{V}_{\mu\nu}^{\tau}(\mathbf{R})=\frac{1}{2}\int[\nabla\chi_{\mu\mathbf{0}}(\mathbf{r})]\cdot[\nabla\chi_{\nu\mathbf{R}}(\mathbf{r})]\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}\mathrm{d}\mathbf{r}. \tag{19}\] When obtaining the expansion coefficients of Kohn-Sham orbitals \(C_{i\mu}(\mathbf{k})\), the operator is transformed into the corresponding \(k\)-space representation \(\hat{V}_{\mu\nu}^{\tau}(\mathbf{k})=\sum_{\mathbf{R}}\hat{V}_{\mu\nu}^{\tau}(\mathbf{R})e^{-i\mathbf{k}\cdot\mathbf{R}}\) through Fourier Transform. In ABACUS, derivatives are directly evaluated from the numerical atomic orbitals, so we adopt Eq. 18.
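The integration by parts connecting Eq. 17 and Eq. 18 is easy to verify numerically for orbitals that vanish at the boundary. The 1D toy check below uses Gaussian "orbitals" and an arbitrary smooth stand-in for \(\delta e_{\rm xc}/\delta\tau\) (illustrative only):

```python
import math

dx = 0.01
xs = [-8.0 + j * dx for j in range(1601)]
chi_m = [math.exp(-x ** 2) for x in xs]            # toy orbital chi_mu
chi_n = [math.exp(-(x - 0.5) ** 2) for x in xs]    # toy orbital chi_nu
f_xc = [1.0 + 0.1 * math.sin(x) for x in xs]       # stand-in for de_xc/dtau

def d(g):
    """Central-difference derivative; endpoints set to 0 (tails vanish)."""
    return [0.0] + [(g[j + 1] - g[j - 1]) / (2.0 * dx)
                    for j in range(1, len(g) - 1)] + [0.0]

dchi_m, dchi_n = d(chi_m), d(chi_n)
# Eq. 17 form: -1/2 * integral of chi_mu * div( f * grad chi_nu )
div = d([f * g for f, g in zip(f_xc, dchi_n)])
v17 = -0.5 * sum(a * b for a, b in zip(chi_m, div)) * dx
# Eq. 18 form: +1/2 * integral of (grad chi_mu).(grad chi_nu) * f
v18 = 0.5 * sum(a * b * f for a, b, f in zip(dchi_m, dchi_n, f_xc)) * dx
print(v17, v18)
```

Because the central-difference operator is antisymmetric under the discrete inner product and the Gaussian tails vanish at the grid edges, the two numbers agree to machine precision, mirroring the continuum identity.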
Additionally, we note that the evaluation of both \(\tau(\mathbf{r})\) and \(\hat{V}_{\mu\nu}^{\tau}\) requires expanding the basis functions \(\{|\chi_{\mu}\rangle\}\) (and their derivatives) in real space. ABACUS carries out operations involving such expansions with real-space FFT (Fast Fourier Transform) grid points as the basic operation unit. A more detailed discussion of these operations can be found in Section II.3. With the above details of evaluating \(\tau\) and \(\hat{V}^{\tau}\), we are able to run SCF calculations using meta-GGA functionals. However, to enable structural relaxation or _ab-initio_ molecular dynamics, the forces and stresses are also needed. ### \(\tau\)-dependent Force and Stress in NAOs The \(\tau\)-dependent force term on atom \(a\) takes the form of \[F_{a}^{\tau} =-\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\mathrm{Tr}[\hat{V}^{\tau}]\] \[=-\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\sum_{\mu\nu}\rho_{\mu\nu}\langle\chi_{\mu}|\hat{V}^{\tau}|\chi_{\nu}\rangle. \tag{20}\] In practice, the derivative operation results in three terms. The first term is the derivative of the density matrix \(\rho_{\mu\nu}\) with respect to the atom positions, also known as the "orthogonal" term [33]. This term vanishes if an orthogonal set of basis functions is used. The second term is the derivative of the operator with respect to the atom positions, namely the Hellmann-Feynman term. The last one is the Pulay term [39], involving the derivative of the basis functions with respect to the atom positions. In fact, since the \(V^{\tau}(\mathbf{r})\) term is independent of the atom positions, the Hellmann-Feynman term vanishes. In particular, when the plane-wave basis is employed, the Pulay and orthogonal terms are absent, so the \(\tau\)-dependent part of meta-GGA functionals makes no contribution to the total force. For NAO basis sets, however, the Pulay and orthogonal terms need to be considered.
As discussed in the literature [33], the orthogonal term arises from the requirement of maintaining the orthogonality of the Kohn-Sham orbitals \(\{\psi_{i}\}\), and it is obtained using the following transformation \[\sum_{\mu\nu}\hat{H}_{\mu\nu}\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\rho_{\mu\nu}=-\sum_{\mu\nu}E_{\mu\nu}\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}S_{\mu\nu}, \tag{21}\] where \(S_{\mu\nu}=\langle\chi_{\mu}|\chi_{\nu}\rangle\) is the overlap matrix of the atomic basis and \(E_{\mu\nu}\) is the energy-weighted density matrix. The orthogonal term is evaluated for all components of the Hamiltonian matrix, so no extra work is required specifically for the \(\tau\)-dependent part of meta-GGA functionals. The Pulay term is given by \[F_{a}^{\tau,Pulay} =-\sum_{\mu\nu}\rho_{\mu\nu}\Big{[}\left\langle\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\chi_{\mu}\Big{|}\hat{V}^{\tau}\Big{|}\chi_{\nu}\right\rangle\] \[+\left\langle\chi_{\mu}\Big{|}\hat{V}^{\tau}\Big{|}\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\chi_{\nu}\right\rangle\Big{]}\] \[=-2\sum_{\mu\nu}\rho_{\mu\nu}\left\langle\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\chi_{\mu}\Big{|}\hat{V}^{\tau}\Big{|}\chi_{\nu}\right\rangle, \tag{22}\] where we have used the fact that the density matrix \(\rho_{\mu\nu}\) is symmetric. As for the derivative of the basis functions with respect to the atom positions, the derivative is nonzero only for basis functions located on atom \(a\); for \(\mu\in a\), we have \[\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\chi_{\mu}(\mathbf{r}) =\frac{\mathrm{d}}{\mathrm{d}\mathbf{X_{a}}}\bar{\chi}_{\mu}(\mathbf{r}-\mathbf{X_{a}})\] \[=-\nabla\bar{\chi}_{\mu}(\mathbf{r}-\mathbf{X_{a}})\] \[=-\nabla\chi_{\mu}(\mathbf{r}). \tag{23}\] Substituting Eq. 18 and Eq. 23 into Eq.
22, we obtain the expression of the Pulay force on atom \(a\) along the \(\alpha\) direction (\(\alpha\) and \(\beta\) take the values of \(x,y,z\)) \[F_{a\alpha}^{\tau,Pulay}=2\sum_{\mu\in a,\nu}\rho_{\mu\nu}\langle\frac{\partial}{\partial\alpha}\chi_{\mu}\Big{|}\hat{V}^{\tau}\Big{|}\chi_{\nu}\rangle\] \[=\sum_{\mu\in a,\nu}\rho_{\mu\nu}\int\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}\sum_{\beta}\left[\frac{\partial^{2}}{\partial\alpha\partial\beta}\chi_{\mu}(\mathbf{r})\right]\left[\frac{\partial}{\partial\beta}\chi_{\nu}(\mathbf{r})\right]\mathrm{d}\mathbf{r}. \tag{24}\] Similar to the evaluation of the Hamiltonian matrix elements under PBCs, the Pulay force under PBCs takes the form of \[F_{a\alpha}^{\tau,Pulay}= \sum_{\mathbf{R}}\sum_{\mu\in a,\nu}\rho_{\mu\nu}(\mathbf{R})\int\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}\] \[\sum_{\beta}\left[\frac{\partial^{2}}{\partial\alpha\partial\beta}\chi_{\mu}(\mathbf{r})\right]\left[\frac{\partial}{\partial\beta}\chi_{\nu\mathbf{R}}(\mathbf{r})\right]\mathrm{d}\mathbf{r}, \tag{25}\] where \(\rho_{\mu\nu}(\mathbf{R})\) is defined in Eq. 14. Similarly, the \(\tau\)-dependent part contributes to the orthogonal and Pulay terms in the total stress. On the one hand, the orthogonal term is calculated along with the other components of the Hamiltonian, as noted in a previous work [52]. On the other hand, the Pulay stress in the \(\alpha\beta\) direction is (\(\alpha,\beta,\gamma\) take values of \(x,y,z\)) \[\sigma^{\tau,Pulay}_{\alpha\beta}= -\frac{1}{2\Omega}\sum_{\mu\nu}\rho_{\mu\nu}\int\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}(r^{\beta}-X_{a}^{\beta})\] \[\sum_{\gamma}\left[\frac{\partial^{2}}{\partial\alpha\partial\gamma}\chi_{\mu}(\mathbf{r})\right]\left[\frac{\partial}{\partial\gamma}\chi_{\nu}(\mathbf{r})\right]\mathrm{d}\mathbf{r}, \tag{26}\] where \(\Omega\) is the volume of the cell, while \(r^{\beta}\) and \(X_{a}^{\beta}\) denote the components of \(\mathbf{r}\) and \(\mathbf{X_{a}}\) in the \(\beta\) direction.
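The sign and factor structure of Eq. 24 can be checked numerically. Below is a hedged NumPy toy model (an assumption-laden sketch, not the ABACUS implementation): with a frozen density matrix, one Gaussian model orbital per atom, and a model energy \(E=\int c\,\tau^{2}\,\mathrm{d}\mathbf{r}\) (so that \(\delta e_{\mathrm{xc}}/\delta\tau=2c\tau\)), the Pulay force of Eq. 24 is compared against a finite difference of the energy with respect to the atom position. All parameters are made up for illustration.

```python
import numpy as np

n, L, c = 32, 12.0, 0.3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dV = (L / n) ** 3
rho = np.array([[1.6, 0.3], [0.3, 0.4]])       # frozen model density matrix
alphas, centers0 = [0.7, 0.9], [-0.6, 0.8]     # one orbital per atom (model)

def derivs(alpha, x0):
    """Gradient d_b chi and second derivatives d_a d_b chi of a Gaussian."""
    u = np.stack([X - x0, Y, Z])
    g = np.exp(-alpha * (u**2).sum(0))
    d1 = -2 * alpha * u * g                                    # (3, grid)
    d2 = (4 * alpha**2 * np.einsum("a...,b...->ab...", u, u)
          - 2 * alpha * np.eye(3).reshape(3, 3, 1, 1, 1)) * g  # (3, 3, grid)
    return d1, d2

def tau_of(centers):
    d1 = np.array([derivs(a, x0)[0] for a, x0 in zip(alphas, centers)])
    return 0.5 * np.einsum("mn,mdabc,ndabc->abc", rho, d1, d1)

def energy(centers):                           # model E = int c tau^2 dr
    return c * (tau_of(centers) ** 2).sum() * dV

# Analytic Pulay force on atom 0 along x (Eq. 24 with delta e/delta tau = 2 c tau)
d1 = np.array([derivs(a, x0)[0] for a, x0 in zip(alphas, centers0)])
d2_0 = derivs(alphas[0], centers0[0])[1]       # second derivatives, mu on atom 0
kernel = 2 * c * tau_of(centers0)
F_pulay = sum(rho[0, nu] * (kernel * np.einsum("b...,b...->...",
             d2_0[0], d1[nu])).sum() * dV for nu in range(2))

# Finite-difference reference: F = -dE/dX_0 (density matrix held fixed)
h = 1e-3
Ep = energy([centers0[0] + h, centers0[1]])
Em = energy([centers0[0] - h, centers0[1]])
assert abs(F_pulay + (Ep - Em) / (2 * h)) < 1e-6 + 1e-3 * abs(F_pulay)
```

Holding \(\rho_{\mu\nu}\) fixed isolates the Pulay term: in this toy setting the orthogonal term is absent by construction and the Hellmann-Feynman term vanishes, as discussed above.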
Furthermore, by considering the PBCs, the Pulay stress can be written as \[\sigma^{\tau,Pulay}_{\alpha\beta}= -\frac{1}{2\Omega}\sum_{\mathbf{R}}\sum_{\mu\nu}\rho_{\mu\nu}(\mathbf{R})\int\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}(r^{\beta}-X_{a}^{\beta})\] \[\sum_{\gamma}\left[\frac{\partial^{2}}{\partial\alpha\partial\gamma}\chi_{\mu\mathbf{0}}(\mathbf{r})\right]\left[\frac{\partial}{\partial\gamma}\chi_{\nu\mathbf{R}}(\mathbf{r})\right]\mathrm{d}\mathbf{r}. \tag{27}\] From the expressions of the Pulay force and stress terms, we note that they also require the evaluation of the atomic basis functions and their derivatives in real space; more details are presented in Sec. II.3. ### Operations on Real-Space Grids In ABACUS, in order to obtain the NAO values on real-space grids and their derivatives with respect to the atom positions, we adopt uniform real-space grid points \(\{\mathbf{r}\}\) under PBCs as the basic units of grid operations. Values calculated on these grids are either directly assigned physical quantities, such as the electron density \(\rho(\mathbf{r})\) or the kinetic energy density \(\tau(\mathbf{r})\), or accumulated (integrated), as in the case of matrix elements, forces, or stresses. The major physical quantities include the \(\tau\)-dependent XC functional, the kinetic energy density \(\tau\) (Eq. 11), the \(\tau\)-dependent exchange-correlation potential \(V_{\mu\nu}^{\tau}\) (Eq. 18), the \(\tau\)-dependent forces \(F_{a\alpha}^{\tau,Pulay}\) (Eq. 25), and the \(\tau\)-dependent stress \(\sigma^{\tau,Pulay}_{\alpha\beta}\) (Eq. 27). These physical quantities can be evaluated based on three classes of basic grid operations: evaluation of the basis orbitals \(\{\chi_{\mu}(\mathbf{r})\}\) and their derivatives, transformation of a vector by the density matrix, \(f_{\mu}(\mathbf{r})=\sum_{\nu}\rho_{\mu\nu}g_{\nu}(\mathbf{r})\), and arithmetic operations.
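The density-matrix transformation \(f_{\mu}(\mathbf{r})=\sum_{\nu}\rho_{\mu\nu}g_{\nu}(\mathbf{r})\) can be sketched as follows. This is a NumPy toy model with hypothetical atom and basis sizes; ABACUS performs the corresponding sub-matrix products with LAPACK, as described in the next subsection. The point illustrated is that partitioning the indices by atom turns the operation into contiguous block products.

```python
import numpy as np

rng = np.random.default_rng(0)
atom_basis = {0: range(0, 3), 1: range(3, 5)}   # hypothetical basis indices per atom
nbasis, ngrid = 5, 7
rho = rng.standard_normal((nbasis, nbasis))
rho = 0.5 * (rho + rho.T)                       # symmetric model density matrix
g = rng.standard_normal((nbasis, ngrid))        # g_nu(r) sampled on grid points

# Blocked evaluation: one contiguous sub-matrix product per atom pair
f = np.zeros((nbasis, ngrid))
for idx1 in atom_basis.values():
    for idx2 in atom_basis.values():
        i1, i2 = list(idx1), list(idx2)
        f[i1] += rho[np.ix_(i1, i2)] @ g[i2]

assert np.allclose(f, rho @ g)                  # agrees with the dense product
```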
For the first operation, i.e., the evaluation of \(\chi_{\mu}\) and its derivatives on the grid points, we separate the basis function into a radial part and an angular part, \[\chi_{\mu}(\mathbf{r})=R_{\mu}(r)Y_{lm}(\theta,\phi), \tag{28}\] where the radial part is a linear combination of spherical Bessel functions, generated by minimizing the spillage of the wave functions between the atomic orbital calculations and the converged plane wave calculations for dimer systems [28], while the angular part is a real spherical harmonic. As for the derivatives of the numerical atomic orbitals, we first make the following transformation, \[\chi_{\mu}(\mathbf{r})=\frac{R_{\mu}(r)}{r^{l}}\cdot\left[r^{l}Y_{lm}(\theta,\phi)\right]. \tag{29}\] The term in the square bracket gives the so-called 'solid' spherical harmonics, which can be readily expressed in Cartesian coordinates. For example, the solid spherical harmonic with \(l=2,m=2\) can be written as \[r^{2}Y_{22}(\theta,\phi)=\frac{1}{4}\sqrt{\frac{15}{\pi}}(x^{2}-y^{2}), \tag{30}\] and its gradient is \[\nabla[r^{2}Y_{22}]=\frac{1}{4}\sqrt{\frac{15}{\pi}}(2x\hat{x}-2y\hat{y}). \tag{31}\] The gradients of the basis functions are then calculated using the product rule \[\nabla\chi_{\mu} =\left(\nabla\frac{R_{\mu}}{r^{l}}\right)\cdot r^{l}Y_{lm}+\frac{R_{\mu}}{r^{l}}\cdot\nabla(r^{l}Y_{lm})\] \[=\hat{r}\frac{rR_{\mu}^{\prime}-lR_{\mu}}{r^{l+1}}\cdot(r^{l}Y_{lm})+\frac{R_{\mu}}{r^{l}}\cdot\nabla(r^{l}Y_{lm}). \tag{32}\] Figure 1: Stress errors \(\Delta p\) (in kBar) of GaAs with respect to different lattice constants. Both NAO basis sets (DZP and TZDP) and PW basis sets are used in ABACUS. The error is defined as the difference between the stress computed by the finite-difference method (set to 0) and the stress computed by the analytic method. The parameter Ecut in the legend denotes the kinetic energy cutoff adopted in these calculations.
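Eqs. 30 and 31 can be verified numerically; the snippet below is a small self-contained check (not library code) that compares a central finite difference of \(r^{2}Y_{22}\) against the analytic gradient, and also confirms that this solid harmonic is a harmonic polynomial (vanishing Laplacian).

```python
import numpy as np

c = 0.25 * np.sqrt(15.0 / np.pi)
f = lambda p: c * (p[0] ** 2 - p[1] ** 2)                      # r^2 Y_22, Eq. 30
grad = lambda p: np.array([2 * c * p[0], -2 * c * p[1], 0.0])  # Eq. 31

p = np.array([0.3, -0.7, 0.5])                # arbitrary test point
h = 1e-6
fd = np.array([(f(p + h * e) - f(p - h * e)) / (2 * h) for e in np.eye(3)])
assert np.allclose(fd, grad(p), atol=1e-6)

# Solid harmonics are harmonic polynomials: the Laplacian vanishes
hl = 1e-3
lap = sum((f(p + hl * e) - 2 * f(p) + f(p - hl * e)) / hl**2 for e in np.eye(3))
assert abs(lap) < 1e-8
```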
As for the second operation, i.e., \(f_{\mu}(\mathbf{r})=\sum_{\nu}\rho_{\mu\nu}g_{\nu}(\mathbf{r})\) (transforming a vector by the density matrix), we note that in ABACUS, the density matrix is stored in a form such that the basis functions on a pair of atoms \((a_{1},a_{2})\) define a contiguous sub-matrix. Therefore, we partition the vector \(g_{\nu}(\mathbf{r})\) into segments according to the atoms to which the index \(\nu\) belongs. Matrix-vector multiplications are then carried out by calling the LAPACK [53] subroutines. With the above operations, we are able to calculate the needed physical quantities. Taking the Pulay force in Eq. 25 as an example, we perform the following five steps for each grid point. First, we evaluate the derivative terms \(\frac{\partial^{2}}{\partial\alpha\partial\beta}\chi_{\mu}(\mathbf{r})\) and \(\frac{\partial}{\partial\beta}\chi_{\nu}(\mathbf{r})\). Second, a multiplication is performed to obtain \(g_{\nu\beta}(\mathbf{r})=\frac{\delta e_{\mathrm{xc}}}{\delta\tau(\mathbf{r})}\frac{\partial}{\partial\beta}\chi_{\nu}(\mathbf{r})\). Third, we transform \(g_{\nu\beta}\) by the density matrix: \(f_{\mu\beta}=\sum_{\nu}\rho_{\mu\nu}g_{\nu\beta}\). Fourth, another multiplication is performed: \(h_{\alpha\beta}=\sum_{\mu}\frac{\partial^{2}\chi_{\mu}}{\partial\alpha\partial\beta}f_{\mu\beta}\). Finally, \(h_{\alpha\beta}\) is accumulated to generate the corresponding force component. In practice, several adjacent grid points (typically a 2\(\times\)2\(\times\)2 cube) are grouped to exploit the efficiency of matrix-matrix multiplications. Furthermore, parallelization is achieved by distributing the independent units onto different processes. ## III Results and Discussions Previous studies showed that the SCAN functional yields significantly more accurate results on the systems considered below as compared to the PBE functional.
[10; 54] Here we first compare the results from the SCAN functional with those obtained from finite difference calculations. We also study the convergence behavior of the stress with respect to the radius cutoff of the numerical atomic orbitals and the energy cutoff of the FFT grid. We use the ABACUS 3.0.3 package together with the LIBXC 5.2.3 package to perform the meta-GGA calculations. We validate the implementation on a series of materials, including the water hexamers, the weakly interacting molecules of the S22 dataset [55], as well as 13 semiconducting materials. Figure 4: Relative binding energies of water hexamers with four different structures, i.e., the prism, cage, book, and cyclic. We set the binding energy of the prism structure as zero. The values calculated from the ABACUS package with the use of the SCAN XC functional and the TZDP basis set are in blue. The red line is from Ref. [15], where the Gaussian package with the aug-cc-pvtz basis sets is adopted. Figure 3: Stress of GaAs calculated with the DZP (red solid line) and the TZDP (green solid line) basis sets with different radius cutoffs ranging from 6.0 to 10.0 a.u. The orange line marks the stress calculated with the PW basis and a 150 Ry energy cutoff (Ecut). Figure 2: Stresses of GaAs calculated with the PW (the orange line), DZP (the red line), and TZDP (the green line) basis sets with different energy cutoffs for the uniform real-space grid. In all of the tests, an energy cutoff of 100 Ry is used unless specially noted. The optimized Norm-conserving Vanderbilt (ONCV) pseudopotentials [56] generated by the Perdew-Burke-Ernzerhof [57] (PBE) exchange-correlation functional are used throughout the tests. We adopt two sets of NAO basis sets, namely, the double zeta orbitals with a polar orbital (DZP) and the triple zeta orbitals with two polar orbitals (TZDP). The detailed numbers of orbitals used in the DZP and TZDP basis sets are listed in Table I.
DFT calculations on the four water hexamers and the weakly interacting clusters are carried out in a 20\(\times\)20\(\times\)20 Å\({}^{3}\) cell, where the \(\Gamma\)-point sampling of the Brillouin zone is used. We use a radius cutoff of 10 a.u. for the TZDP basis sets in order to achieve the desired level of accuracy in these two tests, as shown in Table I. In the test of semiconductors, the radius cutoffs of the elements are also listed in Table I.
In addition, a Monkhorst-Pack (MP) \(k\)-mesh of 6\(\times\)6\(\times\)6 is used for the self-consistent calculations. In the tests of the water hexamers and semiconductors, the structures are fully relaxed until the largest atomic force is less than 0.01 eV/Å. ### Finite-Difference Test and Convergence Test of Stress We compare the analytic stress results with those obtained from the finite-difference (FD) method. The stress tensor is defined as \[\sigma_{\alpha\beta}=-\frac{1}{\Omega}\frac{\partial E_{tot}}{\partial\epsilon_{\alpha\beta}}|_{\epsilon=0}, \tag{33}\] where \(\alpha,\beta=x,y,z\). The strain tensor \(\epsilon_{\alpha\beta}\) stands for an infinitesimal deformation of the crystal lattice, and \(\Omega\) denotes the volume of the cell. We take GaAs as an example.
\begin{table} \begin{tabular}{c c c} \hline & DZP & TZDP \\ \hline H (1e, 10 a.u.) & 2s, 3p (5 orbitals) & 3s, 6p (9 orbitals) \\ \hline C (4e, 6/10 a.u.), N (5e, 7/10 a.u.), O (6e, 10 a.u.), & 2s, 6p, 5d (13 orbitals) & 3s, 9p, 10d (22 orbitals) \\ Si (4e, 8 a.u.), P (5e, 9 a.u.), As (5e, 9 a.u.) & & \\ \hline Ga (13e, 9 a.u.), Ge (14e, 9 a.u.), & 2s, 6p, 10d, 7f (25 orbitals) & 3s, 9p, 15d, 14f (41 orbitals) \\ In (13e, 9 a.u.), Sb (15e, 9 a.u.) & & \\ \hline Al (11e, 9 a.u.) & 4s, 12p, 5d (21 orbitals) & 6s, 18p, 10d (34 orbitals) \\ \hline \end{tabular} \end{table} Table 1: Two different types of numerical atomic orbitals for 13 elements, i.e., DZP and TZDP, used in the calculations. The number of valence electrons and the radius cutoff for the NAOs (in a.u.) are given in the parenthesis for each element. For the DZP and TZDP basis sets, the number of NAOs for each angular momentum is shown, and the total number of atomic orbitals for each atom is listed in the parenthesis. In the semiconductor calculations, we use 6 and 7 a.u. NAOs for C and N, respectively. In the water hexamer and S22 tests, we adopt 10 a.u. NAOs for C and N. Fig. 1 shows the comparison results for the calculated stresses obtained by using both the NAO and PW basis sets in ABACUS. The stress difference between the analytic method and the FD method is shown, where the pressure \(p=\frac{1}{3}\sum_{\alpha=1}^{3}\sigma_{\alpha\alpha}\) is calculated by the FD method by applying a \(\pm 0.15\%\) isotropic deformation to the cell. We note that the errors obtained from the PW basis with an energy cutoff of 100 Ry are larger than 5 kBar for several cases. This is caused by the Pulay stress arising from the variation of the plane wave basis set under strain, which is not considered in the finite difference calculations.
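The bookkeeping of this finite-difference procedure (pressure from a \(\pm 0.15\%\) isotropic deformation) can be sketched with a toy energy-volume model. This is a schematic Python check with made-up model parameters, not a DFT calculation: for an isotropic strain \(\epsilon\), the cell volume scales as \(V\to(1+\epsilon)^{3}V\), and the pressure \(p=\frac{1}{3}\sum_{\alpha}\sigma_{\alpha\alpha}\) reduces to \(-\mathrm{d}E/\mathrm{d}V\).

```python
# Toy harmonic equation of state (arbitrary units; parameters are made up)
V0, B0, E0 = 45.0, 0.5, -10.0
E = lambda V: E0 + 0.5 * B0 / V0 * (V - V0) ** 2   # model total energy E(V)
p_exact = lambda V: -B0 * (V - V0) / V0            # p = -dE/dV for this model

V = 44.0
eps = 0.0015                                       # +-0.15% isotropic deformation
Vp, Vm = (1 + eps) ** 3 * V, (1 - eps) ** 3 * V
p_fd = -(E(Vp) - E(Vm)) / (Vp - Vm)                # finite-difference pressure

assert abs(p_fd - p_exact(V)) < 1e-4
```

In the actual tests, \(E(V_{\pm})\) are total energies from two SCF calculations on the deformed cells, and the FD pressure is compared against the analytic stress of Eq. 33.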
Notably, the effect diminishes when a converged basis set is used. [52; 58] By increasing the energy cutoff to 150 Ry, the errors of the PW calculations are reduced to the order of 0.1 kBar. On the other hand, the errors of the NAO basis are much smaller at 100 Ry, already on the order of 0.1 kBar. These results are expected because, in DFT calculations with NAO basis sets, the energy cutoff only affects the number of real-space grid points; since the basis functions remain the same, no additional Pulay stress is present. The effect of the Pulay stress is also reflected in the slow convergence of the stress with the energy cutoff for the PW basis [52]. As shown in Fig. 2, the stresses calculated with both the DZP and TZDP basis sets converge to within 1 kBar at an energy cutoff of 120 Ry, while a cutoff of 230 Ry is required for the PW calculations to achieve the same level of accuracy. Moreover, the convergence of the stress using the SCAN functional in the PW basis is slower than that of the PBE functional [52], for which convergence to 1 kBar is obtained at an energy cutoff of 150 Ry. Indeed, it has been shown that the SCAN functional is numerically less stable than GGA functionals and often requires a denser real-space grid. [59; 60] We also test the convergence of the stress with respect to the radial cutoff of the NAO basis functions. In previous tests with the PBE functional [52], results using the NAO basis converge to about 1 kBar at a radial cutoff of 8 a.u. and agree well with the results from the PW basis. Here we perform the same test with the SCAN functional and observe a similar pattern. As shown in Fig. 3,
by choosing the radial cutoff to be 8 a.u., both the DZP and TZDP basis sets yield converged results. ### Water Hexamer Weak interactions, including hydrogen bonds (HBs) and van der Waals interactions, play a key role in the interactions between molecules. It is well known that GGA-level functionals within the framework of KSDFT largely overestimate the HB strength, due to self-interaction errors and an inadequate treatment of the medium- and long-ranged van der Waals interactions, which substantially undermines the accuracy for molecular interactions. [61; 16] By incorporating the kinetic energy density in the XC functional, the SCAN meta-GGA functional is able to capture the short- and intermediate-ranged van der Waals interactions and provides a more accurate description of hydrogen bonds. Therefore, we first establish the accuracy of DFT calculations with the NAO basis set by evaluating the interaction strength between molecules and comparing the results with those obtained from the plane wave and Gaussian basis sets.
A typical example of a weakly interacting system is the water hexamer. [15] Here we choose four stable configurations of the water hexamer, namely the prism, cage, book, and cyclic structures. [62; 63; 64] High-precision quantum chemistry methods, such as coupled cluster singles and doubles with perturbative triples (CCSD(T)) and Moller-Plesset perturbation theory (MP2) [67], predict the prism structure of the water hexamer to be the most stable structure with the lowest energy, followed by the cage, book, and cyclic structures. [62] However, most GGA-level functionals and hybrid functionals fail to predict the correct energy ordering for the four water hexamers due to the lack of a proper description of the van der Waals interactions. [62] On the contrary, the SCAN functional has been shown to correctly predict the order of the water hexamers with the Gaussian basis extrapolated to the complete basis set limit. [15] Utilizing the ABACUS package with the newly implemented SCAN functional, we calculate the energies of the four water hexamer clusters with the TZDP NAO basis set and align the results by setting the total energy of the prism structure to 0; the results are shown in Fig. 4. We find that the SCAN functional with NAO basis sets predicts the correct energy ordering and agrees well with the results listed in Ref. [15]. We also perform a basis set extrapolation, and the difference in the calculated energies between the complete basis limit and the TZDP basis is less than 3 meV per water molecule. Therefore, we conclude that our results are reliable.
### Weakly Interacting Molecules We also test the accuracy of the SCAN functional on a wider range of weakly interacting systems from the S22 dataset [55]. Here we adopt the NAO basis sets provided by ABACUS. The dataset contains three groups of weakly interacting molecular systems, where the interactions are dominated by hydrogen bonds, dispersion interactions, and mixed interactions, respectively. We calculate the interaction energies with the SCAN functional and the TZDP basis set using the ABACUS package; the results are listed in Table 2. The basis set superposition error (BSSE) is corrected using the counterpoise method. [68] We also list the SCAN results with the Gaussian basis [10]. In addition, we include the SCAN results [54] calculated with the plane wave basis within the framework of the projector augmented wave (PAW) method [69], as well as the CCSD(T) results with the Gaussian basis set [65]. The resulting interaction energy errors with respect to these reference SCAN data are within 0.3 kcal/mol, and most are within 0.1 kcal/mol, showing that the accuracy of the SCAN functional with the TZDP basis set is sufficient for the description of the weak interactions between molecules. The largest deviations (about 0.2 kcal/mol) come from the benzene-HCN and indole-benzene dimers.
However, we note that the deviation between the SCAN functional with the PW basis and the CCSD(T) method with the Gaussian basis \begin{table} \begin{tabular}{l c c c} & SCAN (TZDP basis) & SCAN (Gaussian[10], PW[54] basis) & CCSD(T) (Gaussian basis[65]) \\ \hline \multicolumn{4}{c}{7 hydrogen-bonded complexes} \\ NH\({}_{3}\) dimer (C\({}_{2h}\)) & 3.14 & 3.14, 3.12 & 3.15 \\ H\({}_{2}\)O dimer (C\({}_{s}\)) & 5.44 & 5.39, 5.43 & 5.00 \\ Formic acid dimer (C\({}_{2h}\)) & 20.91 & 20.63, 20.93 & 18.75 \\ Formamide dimer (C\({}_{2h}\)) & 16.25 & 16.39, 16.54 & 16.06 \\ Uracil dimer (C\({}_{2h}\)) & 20.12 & 20.33, 20.49 & 20.64 \\ 2-pyridone-2-aminopyridine (C\({}_{1}\)) & 16.76 & 16.69, 16.85 & 16.94 \\ Adenine-thymine WC (C\({}_{1}\)) & 15.68 & 15.88, 15.99 & 16.55 \\ \multicolumn{4}{c}{8 dispersion-bound complexes} \\ CH\({}_{4}\) dimer (D\({}_{3d}\)) & 0.38 & 0.37, 0.35 & 0.53 \\ C\({}_{2}\)H\({}_{4}\) dimer (D\({}_{2d}\)) & 1.03 & 1.07, 1.02 & 1.48 \\ Benzene-CH\({}_{4}\) (C\({}_{3}\)) & 0.93 & 0.89, 0.87 & 1.45 \\ Benzene dimer (C\({}_{2h}\)) & 1.02 & 1.14, 1.07 & 2.66 \\ Pyrazine dimer (C\({}_{s}\)) & 2.74 & 2.71, 2.65 & 4.26 \\ Uracil dimer (C\({}_{2}\)) & 8.07 & 8.00, 7.96 & 9.78 \\ Indole-benzene (C\({}_{1}\)) & 2.09 & 2.19, 2.12 & 4.52 \\ Adenine-thymine (C\({}_{1}\)) & 8.83 & 8.69, 8.65 & 11.86 \\ \multicolumn{4}{c}{7 mixed complexes} \\ C\({}_{2}\)H\({}_{4}\)-C\({}_{2}\)H\({}_{2}\) (C\({}_{2v}\)) & 1.40 & 1.35, 1.34 & 1.50 \\ Benzene-H\({}_{2}\)O (C\({}_{s}\)) & 3.41 & 3.30, 3.28 & 3.28 \\ Benzene-NH\({}_{3}\) (C\({}_{s}\)) & 2.07 & 2.00, 1.99 & 2.32 \\ Benzene-HCN (C\({}_{s}\)) & 4.28 & 4.08, 4.06 & 4.54 \\ Benzene dimer (C\({}_{2v}\)) & 1.58 & 1.50, 1.48 & 2.72 \\ Indole-benzene (C\({}_{s}\)) & 4.26 & 4.07, 4.07 & 5.63 \\ Phenol dimer (C\({}_{1}\)) & 5.96 & 5.91, 5.91 & 7.10 \\ \end{tabular} \end{table} Table 2: Interaction energies (in kcal/mol) for the dimers in S22 sets from the SCAN meta-GGA functional with the TZDP basis sets (using the ABACUS 
package), the SCAN results (with the Gaussian basis and plane wave basis, respectively) from Refs. [10; 54], and the CCSD(T) results (with the Gaussian basis) from Ref. [65]. could be as large as 0.3 kcal/mol (in the case of the formic acid dimer), indicating that this level of deviation is expected when comparing results obtained with different basis sets. ### Semiconductors Besides the molecular systems, we also take 13 semiconductors as examples to test the SCAN functional with both the NAO and PW basis sets. For the semiconductors, we compute the lattice constants \(a\) and bulk moduli \(B\) with the DZP and TZDP NAO basis sets, as well as with the plane wave basis. The data are listed in Table 3 and compared with Ref. [54] and experiment [66]. The bulk moduli \(B\) are calculated by \[B=V\frac{\partial^{2}E}{\partial V^{2}}|_{V=V_{0}}, \tag{34}\] where \(V\) is the volume per atom, \(E\) is the energy per atom, and \(V_{0}\) denotes the volume per atom that minimizes the energy. By comparing the NAO results with those from the plane wave basis, it can be seen that the calculated lattice constants have converged to within 0.01 Å for most semiconductors already at the DZP basis set level, except for a few cases, such as SiC, AlN, and InP. For the bulk moduli, most semiconductors converge to within 1 GPa at the DZP level, except for a few cases like diamond and AlP. Compared with the reference data calculated using the PAW method [54], the lattice constants from the NAO basis mostly fall within 0.03 Å of the reference results, except for Ge and InSb, where the errors are 0.06 and 0.08 Å, respectively.
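Eq. 34 is typically evaluated by fitting calculated energy-volume points. A minimal Python sketch (with synthetic data standing in for the DFT total energies, and a simple quadratic fit rather than a full equation-of-state fit) is:

```python
import numpy as np

# Synthetic E(V) data from a harmonic model (stand-in for DFT total energies)
V0_true, B_true, E0 = 40.0, 0.9, -50.0
V = np.linspace(0.94 * V0_true, 1.06 * V0_true, 9)
E = E0 + 0.5 * B_true / V0_true * (V - V0_true) ** 2

# Quadratic fit E(V) = c2 V^2 + c1 V + c0
c2, c1, c0 = np.polyfit(V, E, 2)
V0 = -c1 / (2 * c2)          # volume that minimizes the energy
B = V0 * 2 * c2              # B = V d^2E/dV^2 at V = V0 (Eq. 34)

assert abs(V0 - V0_true) < 1e-5 and abs(B - B_true) < 1e-5
```

With real DFT data, a dedicated equation-of-state form (e.g. Birch-Murnaghan) is usually preferred over a plain quadratic, but the extraction of \(V_{0}\) and \(B\) follows the same pattern.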
The calculated bulk moduli also show satisfactory agreement with experiment, with most falling within 10 GPa of the experimental results, showing that the SCAN functional produces highly accurate results in simulations of bulk semiconductor materials. Therefore, we conclude that the results with the NAO basis are consistent with those of the PW basis in simulations of bulk systems using the SCAN meta-GGA functional. We also show that the accuracy of NAO simulations at the DZP level is sufficient for most of the tested systems. In general, TZDP is needed only for cases where high accuracy is required. ## IV Conclusions In conclusion, we implemented meta-GGA functionals with both numerical atomic orbital and plane-wave basis sets in the electronic structure software ABACUS. We adopted the formulation commonly referred to as the ODDM [46] or FDO [5] method, rather than the numerically challenging OEP method. Apart from the \(\tau\)-dependent contribution to the Hamiltonian matrix, we also implemented the forces and stresses for both basis sets.
To validate our stress implementation, we compared the stress calculated using the finite difference method with the analytic results and obtained satisfactory agreement between the two methods. The convergence of the results with respect to the basis sets was also discussed. We then calculated the binding energies of water hexamers, the interaction energies of weakly interacting molecules in the S22 dataset, as well as the lattice constants and bulk moduli of 13 semiconductors. The results were consistent with the experimental and previous computational values. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{DZP} & \multicolumn{2}{c}{TZDP} & \multicolumn{2}{c}{PW} & \multicolumn{2}{c}{Exp} & \multicolumn{2}{c}{Ref} \\ & \(a\) & \(B\) & \(a\) & \(B\) & \(a\) & \(B\) & \(a\) & \(B\) & \(a\) \\ \hline Diamond & 3.56 & 444 & 3.56 & 454 & 3.56 & 449 & 3.55 & 443 & 3.55 \\ Si & 5.45 & 92 & 5.45 & 91 & 5.45 & 94 & 5.42 & 99 & 5.43 \\ Ge & 5.58 & 73 & 5.58 & 73 & 5.59 & 77 & 5.64 & 76 & 5.66 \\ SiC & 4.38 & 218 & 4.37 & 218 & 4.36 & 220 & 4.35 & 225 & 4.35 \\ AlN & 4.39 & 218 & 4.37 & 215 & 4.36 & 219 & 4.37 & 202 & 4.36 \\ AlP & 5.48 & 91 & 5.48 & 93 & 5.47 & 93 & 5.45 & 86 & 5.47 \\ AlAs & 5.69 & 78 & 5.69 & 78 & 5.68 & 78 & 5.65 & 77 & 5.67 \\ GaN & 4.49 & 179 & 4.49 & 181 & 4.48 & 181 & 4.52 & 210 & 4.50 \\ GaP & 5.42 & 90 & 5.42 & 91 & 5.41 & 81 & 5.44 & 89 & 5.45 \\ GaAs & 5.66 & 69 & 5.65 & 69 & 5.66 & 67 & 5.64 & 76 & 5.66 \\ InP & 5.88 & 68 & 5.87 & 68 & 5.86 & 69 & 5.86 & 71 & 5.89 \\ InAs & 6.11 & 55 & 6.10 & 56 & 6.10 & 56 & 6.05 & 58 & 6.09 \\ InSb & 6.46 & 44 & 6.45 & 44 & 6.45 & 44 & 6.47 & 46 & 6.52 \\ \hline \hline \end{tabular} \end{table} Table 3: Lattice constants (\(a\) in Å) and bulk moduli (\(B\) in GPa) calculated with the NAO (DZP and TZDP) and PW basis sets. The experimental results (Exp) extrapolated to 0 K are cited from Ref. [66], and the reference data (Ref) are cited from Ref. [54].
We expect that our implementation of the meta-GGA exchange-correlation functionals in ABACUS will facilitate further method development and applications in the future.

## Acknowledgement

We thank Tianqi Zhao, Xingliang Peng, Chun Cai, Qi Ou, and Xiaokuang Bai for improving the ABACUS package in various respects, including adding test examples and setting up environments and workflows. We also thank Jianwei Sun for discussions on the meta-GGA functional and helpful suggestions on the manuscript. The work of M.C., R.L., and X.L. was supported by the National Science Foundation of China under Grant Nos. 12122401, 12074007, and 12135002. The authors gratefully acknowledge funding support from the AI for Science Institute, Beijing (ASI). Computational work in this study benefits from the use of the high-performance computing platform of Peking University and the Bohrium platform supported by DP Technology.
arXiv:2309.07951
Non-standard axion electrodynamics and the dual Witten effect
Ben Heidenreich, Jacob McNamara, Matthew Reece
2023-09-14T18:00:00Z
http://arxiv.org/abs/2309.07951v2
# Non-standard axion electrodynamics and the dual Witten effect

###### Abstract

Standard axion electrodynamics has two closely related features. First, the coupling of a massless axion field to photons is quantized, in units proportional to the electric gauge coupling squared. Second, the equations of motion tell us that a time-dependent axion field in a background magnetic field sources an effective electric current, but a time-dependent axion field in a background electric field has no effect. These properties, which manifestly violate electric-magnetic duality, play a crucial role in experimental searches for axions. Recently, electric-magnetic duality has been used to motivate the possible existence of non-standard axion couplings, which can both violate the usual quantization rule and exchange the roles of electric and magnetic fields in axion electrodynamics. We show that these non-standard couplings can be derived from SL(2,Z) duality, but that they come at a substantial cost: in non-standard axion electrodynamics, all electrically charged particles become dyons when the axion traverses its field range, in a dual form of the standard Witten effect monodromy. This implies that there are dyons near the weak scale, leads to a large axion mass induced by Standard Model fermion loops, and dramatically alters Higgs physics. We conclude that non-standard axion electrodynamics, although interesting to consider in abstract quantum field theory, is not phenomenologically viable.
###### Contents

* 1 Introduction and Central Argument
* 2 Standard Axion Electrodynamics
  * 2.1 Derivation of coupling quantization
  * 2.2 Revisiting the assumptions
  * 2.3 The Witten monodromy and anomaly inflow
* 3 Duality and Non-Standard Axion Electrodynamics
  * 3.1 Electric-magnetic duality for a free photon
  * 3.2 The standard axion-photon coupling
  * 3.3 Alternative axion-photon couplings
  * 3.4 Comparison to prior literature
* 4 Phenomenological Assessment
  * 4.1 The dual Witten monodromy
  * 4.2 More general couplings
* 5 Conclusions
* A Working backwards: equations of motion to a standard action
  * A.1 Relating the Sokolov-Ringwald equations to standard electrodynamics
  * A.2 Quantization of the generalized axion couplings

## 1 Introduction and Central Argument

In this paper, we study axion electrodynamics: the interaction of a periodic scalar field \(\theta\cong\theta+2\pi\) (the axion) with a \(\mathrm{U}(1)\) gauge field \(A\) (the photon) with field strength \(F\) through a topological, Chern-Simons-type interaction: \[\int\mathrm{d}^{4}x\,\sqrt{|g|}\left(-\frac{1}{2}f^{2}\partial_{\mu}\theta \partial^{\mu}\theta-\frac{1}{4e^{2}}F_{\mu\nu}F^{\mu\nu}\right)+\frac{n}{8 \pi^{2}}\int\theta F\wedge F, \tag{1}\] where in the last term we use the differential form notation \(F=\frac{1}{2}F_{\mu\nu}\,\mathrm{d}x^{\mu}\wedge\mathrm{d}x^{\nu}\) to emphasize the topological nature of the interaction. It is a well-known fact that a consistent quantum field theory with this action obeys a quantization condition, \[n\in\mathbb{Z}, \tag{2}\] where \(A\) is normalized such that the minimally charged particle has charge 1, and we assume the spacetime background is restricted to spin 4-manifolds. This quantization condition has important applications for the couplings of axion fields of interest in real-world particle physics, including the QCD axion [1, 2, 3, 4] or more general axion-like particles. Such particles are the subject of intense experimental scrutiny.
Models where \(n\) is an order-one integer provide natural targets for such experimental searches (though very large integer values of \(n\) are also possible, in principle [5, 6]).

Given the action (1), one can derive an axionic modification of Maxwell's equations [7], which takes the form:

\[\begin{aligned}\mathbf{\nabla}\cdot\mathbf{E}&=\rho-g_{a\gamma\gamma}\,\mathbf{B}\cdot\mathbf{\nabla}a\,, & \mathbf{\nabla}\times\mathbf{E}&=-\frac{\partial\mathbf{B}}{\partial t}-\mathbf{J}_{\mathrm{M}}\,,\\ \mathbf{\nabla}\cdot\mathbf{B}&=\rho_{\mathrm{M}}\,, & \mathbf{\nabla}\times\mathbf{B}&=\frac{\partial\mathbf{E}}{\partial t}+\mathbf{J}-g_{a\gamma\gamma}\left(-\mathbf{B}\frac{\partial a}{\partial t}+\mathbf{E}\times\mathbf{\nabla}a\right)\,.\end{aligned}\tag{3}\]

Here \(\rho,\mathbf{J}\) are the usual electric charge density and current, \(\rho_{\mathrm{M}},\mathbf{J}_{\mathrm{M}}\) are the (hypothetical) magnetic charge density and current, \(a(x)=f\theta(x)\) is the canonically normalized axion field, \(\mathbf{E},\mathbf{B}\) are the canonically normalized electric and magnetic fields, and \[g_{a\gamma\gamma}=\frac{ne^{2}}{4\pi^{2}f} \tag{4}\] is the axion-photon coupling, proportional to the integer \(n\).1 The equations (3) manifestly break electric-magnetic duality. For example, a time-dependent axion field in a background magnetic field leads to an effective _electric_ current, sourcing \(\mathbf{\nabla}\times\mathbf{B}\). Many searches for axion dark matter rely on this coupling. Furthermore, we see that an axion gradient aligned with a magnetic field behaves as an effective _electric_ charge density. The axion does not source effective magnetic charge densities or currents. This breaking of electric-magnetic duality is also reflected in the fact that it is the electric coupling \(e\) that appears in the numerator of (4), rather than the magnetic coupling (which is _inversely_ proportional to \(e\)).
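To get a feel for the magnitudes involved, eq. (4) can be evaluated numerically; here \(n=1\) and \(f=10^{12}\,\mathrm{GeV}\) are illustrative choices for the example, not values taken from the text:

```python
import math

alpha = 1.0 / 137.035999  # fine-structure constant
n = 1                     # quantized coupling integer (illustrative)
f_GeV = 1.0e12            # axion decay constant in GeV (assumed value)

e_sq = 4.0 * math.pi * alpha                     # e^2 in natural units
g_agg = n * e_sq / (4.0 * math.pi**2 * f_GeV)    # eq. (4), in GeV^-1
# Equivalently g_agg = n * alpha / (pi * f), roughly 2e-15 GeV^-1 here.
```

A non-standard coupling proportional to \(1/e^{2}\) instead of \(e^{2}\) would be enhanced by the large factor \((4\pi/e^{2})^{2}\), which is why the proposals discussed below attracted experimental interest.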
The fact that axion electrodynamics breaks electric-magnetic duality has spurred some authors to propose non-standard formulations of axion electrodynamics, which aim either to restore electric-magnetic duality [13] or to break it in alternative ways [14, 15, 16]. These non-standard formulations not only allow for \(g_{a\gamma\gamma}\propto 1/e^{2}\), implying much stronger couplings, but also introduce new terms in (3); for example, allowing \(\mathbf{E}\frac{\partial a}{\partial t}\) to source \(\mathbf{\nabla}\times\mathbf{E}\). Some of the proposed modifications have begun to receive attention in the context of the design or interpretation of experiments [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] or astrophysical observations [29]. Thus, it is important to understand to what extent such non-standard axion electrodynamics is theoretically and phenomenologically viable. We will focus our attention on the formulation in [15], which constitutes the bulk of this literature. (The alternative introduced in [13] is on a less sound footing, since it does not follow from any known action, but it shares the same new terms in the equations of motion that we will argue are phenomenologically excluded.)

In this paper, we consider an in-principle well-motivated theoretical alternative to standard axion electrodynamics, namely, to implement a coupling of the form \(\theta F^{\prime}\wedge F^{\prime}\) where \(F^{\prime}\) is an SL(2,\(\mathbb{Z}\)) dual of the standard field strength \(F\).2 We show that such a coupling leads to equations of motion that can be written in the form studied in [15]. However, these equations have an important implication. In non-standard axion electrodynamics, every electrically charged particle undergoes a monodromy, becoming a dyon in the presence of an axion field that evolves around the circle from \(\theta=0\) to \(\theta=2\pi\).
We will argue that this is inconsistent with the physics of our universe, and in particular with the existence of light, weakly coupled, chiral fermion fields that obtain a mass only from electroweak symmetry breaking. Thus, although non-standard axion electrodynamics is interesting from the viewpoint of quantum field theory, it is already excluded as a theory of real-world particle phenomenology.

Throughout the paper, we use standard quantum field theory formalism, rather than the less standard Zwanziger approach that appears in recent work like [15]. Nothing is lost by doing so, but to reassure devotees of that formalism, we emphasize that our key results rely only on the equations of motion away from singular sources, not on the precise fashion in which these (massive) sources are quantized.

Footnote 2: To be precise, the coupling takes this form in the \(F^{\prime}\) duality frame. See §3.3 for the form of the coupling in the \(F\) duality frame.

Let us now sketch out our argument, to be explained more precisely in subsequent sections. Our reasoning relies crucially on the Witten effect [31]: in an environment with nonzero \(\theta\), a magnetic monopole with unit magnetic charge acquires a fractional electric charge \(\frac{n\theta}{2\pi}\). A simple argument for this (originating in [30]; also see [32]) is to consider a monopole carrying purely magnetic charge in a local environment with zero \(\theta\), surrounded by a region in which a nonzero value of \(\theta\) turns on at larger radius. The equation for \(\mathbf{\nabla}\cdot\mathbf{E}\) implies that the radial \(\mathbf{B}\) field sourced by the monopole, together with the radial axion gradient \(\mathbf{\nabla}\theta\), will source an electric field at larger radii. (See Fig. 1.) Thus, an observer at larger distances will see an electric field that appears to have been sourced by a particle with nonzero electric charge.
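This Gauss-law argument can be checked numerically. From \(\mathbf{\nabla}\cdot\mathbf{E}=\rho-g_{a\gamma\gamma}\mathbf{B}\cdot\mathbf{\nabla}a\), the magnitude of the induced charge is \(g_{a\gamma\gamma}\,g_{m}f\theta_{0}\) for any monotone radial profile \(a(r)\) running from \(0\) to \(f\theta_{0}\); with the Dirac value \(g_{m}=2\pi/e\) and (4), that is \(n\theta_{0}/2\pi\) in units of \(e\). A minimal sketch (all numerical values illustrative, natural units):

```python
import numpy as np

# For a monopole field B_r = g_m/(4*pi*r^2) and a radial axion profile a(r)
# interpolating from 0 at the core to f*theta0 far away, the volume integral
# of B . grad(a) equals g_m * f * theta0, independent of the profile shape.
g_agg = 1.0e-3           # axion-photon coupling (assumed value)
g_m = 2.0 * np.pi        # Dirac magnetic charge for e = 1
f, theta0 = 1.0, 1.3     # decay constant and asymptotic theta (assumed)

r = np.linspace(1e-4, 60.0, 400_001)
a = f * theta0 * (1.0 - np.exp(-r / 2.0))  # any monotone profile works
B_r = g_m / (4.0 * np.pi * r**2)
da_dr = np.gradient(a, r)

integrand = B_r * da_dr * 4.0 * np.pi * r**2   # reduces to g_m * a'(r)
integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))
Q_eff = g_agg * integral  # magnitude of the induced charge seen from afar
```

The profile independence is the point: only the endpoint values of \(a(r)\) matter, which is why shrinking the \(\theta=0\) core (as in the next step of the argument) leaves the observed charge unchanged.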
If we shrink the size of the region around the monopole with \(\theta=0\), the effective charge observed from afar doesn't change. Thus, in the limit that we embed the monopole in an environment with constant \(\theta\) everywhere, we conclude that it is a dyon, of electric charge \(\frac{n\theta}{2\pi}\). If we continuously vary \(\theta\) from \(0\) to \(2\pi q\) (with \(q\in\mathbb{Z}\)), a magnetic monopole becomes a dyon with \(nq\) full units of electric charge, even though (due to its periodicity) the \(\theta\) value in its environment has returned to its starting point. In general, the dyon's mass will increase in this process. This phenomenon, in which the theory is periodic as a function of \(\theta\) but a given particle will transmute into other particles when \(\theta\) continuously varies around its circle, is known as "monodromy," and it arises in contexts as simple as the familiar problem of a quantum-mechanical particle on a circle (reviewed in, e.g., [33, 34]). Other straightforward arguments for the Witten effect, independent of the UV completion of the theory, appear in [35, 36]. Because it plays a central role in our argument, below we will use the phrase "Witten monodromy" to mean the monodromy in the dyon spectrum under \(\theta\to\theta+2\pi\) induced by the Witten effect. (See Fig. 2.) To dispel any lingering doubts, in §2.3 we will review an argument that does not refer to magnetic monopoles at all but solely focuses on the construction of a dual magnetic gauge field in regions away from point charges, which directly demonstrates the Witten monodromy.

Now, suppose that rather than the standard equations (3), we had a modified equation in which \(\mathbf{\nabla}\cdot\mathbf{B}\) is sourced by a term of the form \(\mathbf{E}\cdot\mathbf{\nabla}a\). Such a term appears explicitly in the proposed modified equations in [13, 15].
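As toy bookkeeping (not the full SL(2,\(\mathbb{Z}\)) action), the Witten monodromy acts on charges \((q_{e},q_{m})\) as \((q_{e},q_{m})\mapsto(q_{e}+nq_{m},q_{m})\) per \(2\pi\) shift of \(\theta\), while a term sourcing \(\mathbf{\nabla}\cdot\mathbf{B}\) generates the mirror map on magnetic charge:

```python
def witten_monodromy(q_e, q_m, n=1, windings=1):
    # Standard Witten effect: magnetic charge feeds electric charge
    # under each 2*pi shift of theta.
    for _ in range(windings):
        q_e += n * q_m
    return q_e, q_m

def dual_witten_monodromy(q_e, q_m, n=1, windings=1):
    # Dual monodromy of non-standard axion electrodynamics:
    # electric charge feeds magnetic charge instead.
    for _ in range(windings):
        q_m += n * q_e
    return q_e, q_m

monopole_after = witten_monodromy(0, 1, n=1, windings=3)        # (3, 1): a dyon
electron_after = dual_witten_monodromy(-1, 0, n=1, windings=3)  # (-1, -3)
```

Note that a purely electric charge is untouched by the standard map, whereas under the dual map every electrically charged particle is dragged into a dyon; that asymmetry is what the phenomenological argument below exploits.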
It leads to a magnetic dual of the Witten effect: in a \(\theta\) background, such a term would imply that a particle that has purely _electric_ charge in a region of zero \(\theta\) acquires an effective _magnetic_ charge \(\frac{n\theta}{2\pi}\) when in a region of nonzero \(\theta\). One might reasonably ask if this is a well-defined claim. It is perfectly reasonable, and even standard, to _define_ electric charge to be the charge carried by an electron, so that it carries zero magnetic charge by definition. Indeed, it is well-known that there are several different useful ways to define charge in the presence of Chern-Simons terms [37]. However, independent of one's preferred definitions, an invariant physical fact remains: there is a dual Witten monodromy. That is to say, if we continuously vary \(\theta\) from \(0\) to \(2\pi q\) (with \(q\in\mathbb{Z}\)), the ordinary electron would become a dyon state with magnetic charge \(nq\). Because these are two different states in the same theory, this is an invariant physical fact, not an artifact of a particular definition of charge.

Figure 1: A classic argument for the Witten effect [30]. A magnetic monopole in a region of zero \(\theta\) appears to be a dyon far away in a region with \(\theta\neq 0\).

Unlike the standard Witten effect, this dual monodromy is a phenomenological disaster. We claim that it is completely impossible. In the world around us, we do not observe a collection of light dyon states with the mass of the electron and arbitrary amounts of magnetic charge. Thus, in the process of varying \(\theta\) from \(0\) to \(2\pi\) and turning the electron into a dyon, the electron mass should increase (dramatically!) as \(\theta\) turns on. As a result, electron loops would generate a large perturbative mass for the axion.3 However, this is only the start of the problem, as the electron is a _chiral_ fermion in the Standard Model.
The electron obtains a mass only via electroweak symmetry breaking, and \(\theta\) is a neutral scalar, so turning it on cannot violate electroweak symmetry. At best, we can couple \(\theta\) to a Higgs-dependent electron mass term. This implies an infinite tower of dyon states all obtaining a mass from the Higgs, which would drive the Higgs field to strong coupling and significantly alter Standard Model predictions for Higgs properties.4 All told, there is no way to modify axion electrodynamics and obtain anything resembling the Standard Model coupled to a light axion.

Footnote 3: If the axion in question were the QCD axion, this effect would dominate over the contribution from QCD instantons and spoil the solution to the Strong CP problem.

Footnote 4: We expect that such a theory, with an infinite tower of states obtaining mass from the Higgsing of a nonabelian gauge theory, is actually inconsistent even at the formal level. However, even if such a theory exists formally, it is certainly not compatible with observed physics.

This is our central argument: modifying axion electrodynamics would require that the electron (and every other elementary charged particle) obtain a magnetic charge in an axion background, which is impossible given the chiral structure of the Standard Model and the desire for a light axion. Before returning to this point, we will first review the physics of axion electrodynamics and electric-magnetic duality in more detail below, in the interest of providing a clear pedagogical reference and a more complete argument. We will highlight some other interesting and under-appreciated physics along the way.

The outline of the paper is as follows: in §2, we review standard axion electrodynamics and prove the quantization condition (2). We also give a straightforward derivation of the Witten monodromy.
In §3, we discuss electric-magnetic duality and explain how it allows non-standard axion electrodynamics evading the quantization condition in the context of U(1) gauge theory with no charged matter coupled to an axion. In §4, we argue that non-standard axion electrodynamics is incompatible with the Standard Model (for the reason we have just explained above). Finally, in §5 we offer some concluding remarks. In appendix A we systematically compare our approach with [15] and derive quantization rules for the generalized axion couplings.

Figure 2: The Witten monodromy. As \(\theta\) increases the monopole (red point) gradually acquires electric charge, ending up with a full charge quantum at \(\theta=2\pi\). The complete dyon spectrum (gray points) has now returned to its original configuration, reflecting the periodicity of \(\theta\), even though individual dyons have different charges than they started out with.

## 2 Standard Axion Electrodynamics

In this section we review standard axion electrodynamics and derive its various features, such as the Witten monodromy. Readers interested in a more in-depth treatment of many of these ideas may also wish to consult the TASI lectures [38] by one of the authors.

### Derivation of coupling quantization

We begin by giving the simple derivation that the action (1) only defines a consistent quantum field theory when \(n\in\mathbb{Z}\). A consistent quantum field theory can be studied on a variety of spacetime backgrounds. This is a necessity for theories that can be consistently coupled to gravity. In particular, we will consider the Euclidean continuation of the theory on a 4-manifold (without boundary) \(M\), in which the Chern-Simons term \(\int\theta F\wedge F\) acquires an extra factor of \(\mathrm{i}\). Our quantum field theory is defined by a path integral summing over field configurations for \(\theta\) and \(A\). We will begin with four key assumptions:
1. The axion field is periodic (we often say it is a "compact scalar"): \(\theta\cong\theta+2\pi\). In particular, this allows for field configurations in which the value of \(\theta\) _winds_ around a circle in spacetime. This means that \(\theta\) itself is not a well-defined (gauge invariant) variable, whereas \(\mathrm{e}^{\mathrm{i}\theta}\) is. We can think of \(\theta\mapsto\theta+2\pi\) as a gauge transformation.

2. The photon's gauge group is \(\mathrm{U}(1)\), which is compact. Gauge transformations take the form \(A\mapsto A+\mathrm{i}g^{-1}\,\mathrm{d}g\), where \(g(x)=\mathrm{e}^{\mathrm{i}\alpha(x)}\) takes values in \(\mathrm{U}(1)\). The distinction between this and the related non-compact gauge group \(\mathbb{R}\), both of which have the Lie algebra \(\mathfrak{u}(1)\cong\mathbb{R}\), is that the gauge transformations for \(\mathrm{U}(1)\) can wind around circles in spacetime, allowing for non-trivial disorder operators such as 't Hooft lines.

3. The axion field \(\theta\) is invariant under \(\mathrm{U}(1)\) gauge transformations of \(A\).

4. The gauge field \(A\) (along with its field strength \(F\)) is invariant under the \(2\pi\) shift of \(\theta\).

The path integral sums over all field configurations for \(\theta\) and \(A\), which, because of their respective periodicity properties, include topologically nontrivial field configurations. For example, field configurations can have a winding number of the axion around a 1-cycle \(C\): \[\frac{1}{2\pi}\int_{C}\mathrm{d}\theta=w(C)\in\mathbb{Z} \tag{5}\] and a magnetic flux of the gauge field through a 2-cycle \(S\): \[\frac{1}{2\pi}\int_{S}F=m(S)\in\mathbb{Z}.
\tag{6}\]

In more mathematical jargon, we can think of \(w(C)\) and \(m(S)\) as information about classes in integer cohomology, \([\frac{1}{2\pi}\,\mathrm{d}\theta]\in H^{1}(M,\mathbb{Z})\) and \([\frac{1}{2\pi}F]\in H^{2}(M,\mathbb{Z})\).5 As in the familiar case of the Dirac monopole, a nontrivial topology means that we can't define the fields \(\theta\) and \(A\) globally, but we can patch them together on different coordinate charts such that, on overlaps, they agree up to gauge transformations. The field strengths \(\mathrm{d}\theta\) and \(F\) are defined globally. Once we specify any field configuration \((\theta,A)\) lying in a particular cohomology class, then the _differences_ \((\theta-\theta^{\prime},A-A^{\prime})\) between this and any other field configuration \((\theta^{\prime},A^{\prime})\) specified by the same classes are globally well-defined. This allows us to separate the path integral into a discrete sum over topological classes, together with a continuous integral over field configurations without regard to topology.

Now we rely on a mathematical fact that we will not prove (see, e.g., [39, 40]): if a 2-form \(\omega\) is a representative of a class in integer cohomology, then the 4-form \(\omega\wedge\omega\) is _also_ a representative of a class in integer cohomology. In other words, once we have chosen an \(F\) such that (6) holds, we are also guaranteed that \[\frac{1}{4\pi^{2}}\int_{M}F\wedge F\in\mathbb{Z}\qquad\text{(any $M$)}. \tag{7}\] This is sufficient to derive a quantization condition on axion-photon couplings, but we can do slightly better. For describing real-world physics, we can restrict to spacetime manifolds on which it is possible to define fermion fields. These are known as spin manifolds, and it turns out that on a spin manifold the integer (7) is always even. That is, we have: \[\frac{1}{8\pi^{2}}\int_{M}F\wedge F\in\mathbb{Z}\qquad\text{(any spin $M$)}.
\tag{8}\]

Now, the action (1) is manifestly invariant under \(\mathrm{U}(1)\) gauge transformations but is not invariant under the shift \(\theta\mapsto\theta+2\pi\). However, physical quantities depend only on the exponentiated Euclidean action, \(\exp(-S_{E}[A,\theta])\), because this appears in the path integral measure. We have: \[\theta\mapsto\theta+2\pi:\qquad\mathrm{e}^{-S_{E}[A,\theta]}\mapsto\mathrm{e}^{-S_{E}[A,\theta]}\exp\left[-\frac{\mathrm{i}n}{4\pi}\int F\wedge F\right]. \tag{9}\] Now, for _every_ field configuration that we sum over in the path integral, the integral appearing in the last factor of (9) is of the form \(8\pi^{2}k\) for some \(k\in\mathbb{Z}\), and hence the factor takes the form \(\exp[-2\pi\mathrm{i}nk]\). This is always 1 if \(n\in\mathbb{Z}\), but in general is not 1 if \(n\notin\mathbb{Z}\). This proves (2).

### Revisiting the assumptions

Our proof was straightforward, but relied on four assumptions. Let's revisit them one by one:

1. The axion was assumed to be periodic. If \(\theta\) is a non-compact field, there is no \(\theta\mapsto\theta+2\pi\) gauge redundancy, and the whole argument falls apart. On the other hand, we have good reasons for studying periodic axion fields, beyond the fact that compactness is often taken to be part of the definition of an axion. UV completions give rise to compact axions: a pseudo-Nambu-Goldstone boson of an approximate \(\mathrm{U}(1)\) global symmetry, or a zero mode of a higher-dimensional \(\mathrm{U}(1)\) gauge field, is intrinsically compact. A compact scalar can only admit periodic terms in its potential,6 which opens up the possibility that the potential is dominated by exponentially small instanton effects.7 For a generic non-compact scalar, it would be difficult to explain why the field is light and why its dominant source of shift-symmetry breaking originates from coupling to gluons, so it is unlikely to solve the Strong CP problem.
In short, if we want to drop the compactness assumption on the axion, we are not considering a traditional axion at all, and we have to modify the entire structure of the model.

2. The gauge group was taken to be U(1) rather than \(\mathbb{R}\). If this assumption is dropped, there can be no magnetic flux, \(\int F=0\) for any 2-cycle, and \(\int F\wedge F=0\) for any 4-manifold. Then the axion-photon coupling can take on any real value. However, there are compelling arguments that consistency of black hole physics forbids \(\mathbb{R}\) gauge groups from arising in quantum gravity [46], so we do not expect this case to be relevant in the real world.8

Footnote 8: It should also be noted that the non-standard axion couplings considered in §3 are impossible if the electromagnetic gauge group is \(\mathbb{R}\), because only U(1) has the necessary \(\mathrm{SL}(2,\mathbb{Z})\) self-duality required to make the non-standard axion periodic.

3. The axion field \(\theta\) was assumed to be invariant under U(1) gauge transformations. If it were not, it would get eaten via the Stueckelberg mechanism, and give the photon a mass. To make \(\exp(\mathrm{i}S)\) gauge invariant, we would have to add anomalous charged matter as in the 4d Green-Schwarz mechanism. A massive photon scenario is not the case of interest for us, but because our argument made use of invariance under \(\theta\) gauge transformations but not \(A\) gauge transformations, dropping this assumption would also not change the conclusion.

4. The gauge field strength \(F\) was assumed not to change under the gauge transformation \(\theta\mapsto\theta+2\pi\). This may seem innocuous, but in fact it is the weakest point in the argument. We will explain the possible alternative, a dual Witten monodromy, in §3.

The first two assumptions can't be evaded by flowing from a UV theory in which they hold to an IR theory in which they do not.
If one begins with multiple U(1) gauge fields and higgses, the surviving massless gauge field has a compact U(1) gauge group. Similarly, if one begins with multiple axion fields and then gives a mass to some of them, either via a periodic potential or through a Stueckelberg mechanism in which they are eaten by a gauge field, a surviving light axion is always compact [47, 48]. This fact has proven useful in diagnosing some mistaken analyses of multi-axion models in the literature.

### The Witten monodromy and anomaly inflow

Next, we show that the monodromy associated with the Witten effect for a dynamical axion can be derived in a very straightforward way, without referring to pointlike monopoles at all. This makes it clear that it is an effect within the low-energy effective field theory associated with the action (1), independent of details of the UV completion (in contrast to some claims [15]). This argument is simply a special case of the much more general phenomenon of _anomaly inflow_ in the presence of Chern-Simons terms [49]; this was also recently pointed out in [50].

To derive the Witten monodromy, let's first recall what it means to introduce a magnetic dual gauge field \(A_{\mathrm{M}}\). The field strength of the magnetic dual gauge field should be the Hodge dual of the usual gauge field strength, up to normalization. Specifically, in free Maxwell theory without a \(\theta\) term and without an axion coupling, we would define \[\frac{1}{2\pi}\,{\rm d}A_{\rm M}=-\frac{1}{e^{2}}\star F.\qquad\mbox{(no axion)} \tag{10}\] The integral of the left-hand side gives the magnetic flux of the gauge field \(A_{\rm M}\), which is minus the electric flux of the original gauge field \(A\), which we know to be measured by the right-hand side.

Now, the reason that the equation (10) makes sense is that Maxwell's equations, in the absence of any electric charges or currents, tell us that \({\rm d}\star F=0\), i.e., that the electric flux density \(\star F\) is _closed_.
Any closed form is _locally_ exact, which means that in any given region, we can find a solution \(A_{\rm M}\) to the equation (10). There is no guarantee that \(\star F\) is exact, which means that we may not be able to _globally_ define \(A_{\rm M}\), but this is fine: we can define it locally in different coordinate patches, with agreement on the overlaps to construct a gauge bundle. Also, solutions to (10) are not unique: if \(A_{\rm M}\) solves the equation, so does \(A_{\rm M}-{\rm d}\alpha_{\rm M}\) for any \(\alpha_{\rm M}\). This is the expected magnetic gauge redundancy. Gauge transformations of \(A\) do not act on \(A_{\rm M}\), and vice versa.

For axion electrodynamics with the action (1) (and \(n=1\), for simplicity), introducing \(A_{\rm M}\) is not so straightforward. The reason is that we now have an equation of motion \[\frac{1}{e^{2}}\,{\rm d}\star F=\frac{1}{4\pi^{2}}\,{\rm d}\theta\wedge F. \tag{11}\] The electric flux density \(\star F\) is no longer closed, even away from charged particles, in the presence of a varying axion field. This means that we can no longer find a solution to (10); it is simply not the right way to locally define a magnetic gauge field \(A_{\rm M}\). However, we can rewrite (11) in the form \[{\rm d}\left(\frac{1}{e^{2}}\star F-\frac{1}{4\pi^{2}}\theta F\right)=0, \tag{12}\] which is equivalent away from magnetic monopoles (where \({\rm d}F\neq 0\)). We have only been discussing equations that hold locally away from charged objects, so this restriction is fine. It motivates introducing the magnetic gauge field \(A_{\rm M}\) with the new definition \[\frac{1}{2\pi}\,{\rm d}A_{\rm M}=-\frac{1}{e^{2}}\star F+\frac{1}{4\pi^{2}}\theta F.\qquad\mbox{(with axion)} \tag{13}\]

Just as before, we can always _locally_ solve this equation, thanks to (12). Again, solutions are not unique, which corresponds to the gauge redundancy of \(A_{\rm M}\). Furthermore, gauge transformations of \(A\) do not affect \(A_{\rm M}\).
However, we now have a new subtlety: the equation that we are solving for \(A_{\rm M}\) depends on \(\theta\), which is itself not gauge invariant. In particular, if we construct a solution \(A_{\rm M}{}^{(0)}\) to (13) and then perform a gauge transformation \(\theta\mapsto\theta+2\pi\), our original \(A_{\rm M}{}^{(0)}\) will no longer be a solution. Instead, we have a new solution \(A_{\rm M}{}^{(1)}=A_{\rm M}{}^{(0)}+A\). Said differently, the magnetic gauge field \(A_{\rm M}\) is _not gauge invariant_ under the gauge transformation \(\theta\mapsto\theta+2\pi\). It transforms as: \[\theta\mapsto\theta+2\pi:\qquad A_{\rm M}\mapsto A_{\rm M}+A. \tag{14}\] This result is the key equation specifying the Witten monodromy: an object with pure magnetic charge acquires one unit of electric charge under a complete shift of the axion around its field space.

This derivation of the Witten monodromy (14) is very clean, since we only asked about how to define a magnetic gauge field _away_ from any sources like monopoles or electrons. Thus, it is clear that the result has nothing to do with any divergences one might find in the cores of such objects, or any limiting procedure as in the argument we reviewed in the introduction. Nonetheless, it also implies the standard claims about dyonic modes on a magnetic monopole, through an anomaly inflow argument.

A heavy magnetically charged object can be described by an effective theory living on its worldline \(C\). Ordinarily, the dependence of the action of this object on \(A_{\rm M}\) would look like \(S_{\rm M}=\int_{C}A_{\rm M}\). This is not a gauge-invariant action, but it is invariant when exponentiated, just as a standard Wilson loop is. However, in axion electrodynamics this is no longer true, because \(\exp({\rm i}S_{\rm M})\) is not invariant under (14). To fix this, we must add additional ingredients to our theory that cancel out the change in \(S_{\rm M}\).
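Concretely, a one-line check using only (14) shows that under \(\theta\mapsto\theta+2\pi\) the exponentiated worldline action picks up a single electric Wilson-line factor:

```latex
\theta\mapsto\theta+2\pi:\qquad
\mathrm{e}^{\mathrm{i}S_{\rm M}}
=\mathrm{e}^{\mathrm{i}\int_{C}A_{\rm M}}
\;\mapsto\;
\mathrm{e}^{\mathrm{i}\int_{C}(A_{\rm M}+A)}
=\mathrm{e}^{\mathrm{i}S_{\rm M}}\,\mathrm{e}^{\mathrm{i}\int_{C}A},
```

so any added worldline ingredient must supply a compensating phase \(\mathrm{e}^{-\mathrm{i}\int_{C}A}\), i.e., one unit of electric charge.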
A minimal approach is to add a compact boson \(\sigma\cong\sigma+2\pi\) that shifts under an \(A\) gauge transformation, i.e., \[A\mapsto A-{\rm d}\alpha:\qquad\sigma\mapsto\sigma-\alpha. \tag{15}\] This allows us to define a consistent worldline action \[S_{\rm M}=\int_{C}\left[A_{\rm M}-\frac{\theta}{2\pi}({\rm d}\sigma+A)\right]. \tag{16}\] (The full action \(S_{\rm M}\) will also include a monopole mass term that depends on the proper length of \(C\) as well as a kinetic term for \(\sigma\), but these are not relevant for our current discussion, which focuses only on charges.) By construction, (16) is invariant under both \(A\) gauge transformations and \(\theta\) gauge transformations. The degree of freedom \(\sigma\) behaves as a quantum-mechanical particle on a ring, which is the familiar dyonic degree of freedom on the monopole (originally discovered in the context of the 't Hooft-Polyakov monopole [51]). Here we see that the existence of this degree of freedom, or some other one with a similar ability to cancel the change in \(S_{\rm M}\) under \(\theta\mapsto\theta+2\pi\), is a fundamental consistency requirement on the theory.9 Footnote 9: One might wonder if this argument can be evaded by imposing a \(\theta=0\) boundary condition on the monopole worldline. However, for dynamical monopoles (as opposed to ’t Hooft lines), this implies a strong coupling of the monopole to the axion. In fact, it is not really an alternative theory at all, it is just the limiting case where dyonic excitations become infinitely heavy (equivalently, the \(\sigma\) kinetic term goes to zero). This is not an innocuous limit to take. For instance, monopole loop effects on the axion (as in [52]) are not exponentially suppressed in this limit. This argument is a particular case of a very general phenomenon: dualizing gauge fields in the presence of Chern-Simons terms produces magnetic gauge fields that are not invariant under electric gauge transformations. 
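Returning to the worldline action (16), its invariance under \(\theta\mapsto\theta+2\pi\) can be made explicit. Using (14) and holding \(A\) and \(\sigma\) fixed,

```latex
S_{\rm M}\;\mapsto\;\int_{C}\left[A_{\rm M}+A-\Bigl(\frac{\theta}{2\pi}+1\Bigr)({\rm d}\sigma+A)\right]
 =S_{\rm M}-\int_{C}{\rm d}\sigma
 =S_{\rm M}-2\pi w,
```

with \(w\in\mathbb{Z}\) the winding of the compact boson \(\sigma\) around the closed worldline, so \(\exp({\rm i}S_{\rm M})\) is invariant, as required.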
Consistency then requires that magnetically charged objects admit zero modes that can be excited to give them electric charge. These modes are said to arise by anomaly inflow [49]. An exactly analogous argument tells us that axion strings admit chiral charge-carrying excitations. An even more well-known example, with ample experimental verification, is the existence of edge modes in quantum Hall systems, which are described by Chern-Simons terms in \((2+1){\rm d}\) with chiral electrically charged modes on the \((1+1){\rm d}\) boundary. The monodromy (14)--rather than the details of the localized worldline mode required by anomaly inflow--will play the key role in our arguments below. Before moving on, let us make two other brief comments about the Witten effect. First, one might wonder what would have happened if we had traded the \(\theta F\) term in (12) for an \(A\wedge{\rm d}\theta\) term. In this case, \(A_{\rm M}\) would have been defined differently, and would directly shift under an ordinary electric gauge transformation. One can work through the details, and find that (despite different intermediate steps) the physical conclusions are the same. Second, our argument above was about axion electrodynamics, and in particular \({\rm d}\theta\) played a key role in the discussion starting from (11). The Witten effect in a theory with a constant \(\theta\) term, rather than a dynamical axion, is slightly more subtle. Nonetheless, it can again be derived from general principles. Perhaps the most straightforward way to convince oneself of its validity is to dimensionally reduce to 2d QED with a \(\theta\) term by compactifying on a closed 2-manifold with flux, then study the 2d theory on a spatial circle. This theory is equivalent to the quantum mechanics of a particle on a ring with a \(\theta\) term, which is a familiar (and straightforward) problem to solve. 
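This reduction can be made concrete. For a particle on a ring with a \(\theta\) term, in a common normalization (the moment-of-inertia parameter `I_mom` below is our placeholder, not from the text), the spectrum is \(E_{n}\propto(n-\theta/2\pi)^{2}\), and the monodromy is visible directly:

```python
import math

def energy(n, theta, I_mom=1.0):
    """Energy levels of a particle on a ring with a theta term:
    the canonical momentum is quantized, p = n, and is shifted by theta/(2*pi)."""
    return (n - theta / (2 * math.pi)) ** 2 / (2 * I_mom)

theta = 1.3  # arbitrary value of the theta angle

# Shifting theta by 2*pi maps the spectrum to itself, with levels relabeled n -> n + 1:
levels_before = sorted(energy(n, theta) for n in range(-5, 6))
levels_after = sorted(energy(n + 1, theta + 2 * math.pi) for n in range(-5, 6))
assert all(abs(x - y) < 1e-12 for x, y in zip(levels_before, levels_after))
```

Each individual level is \(\theta\) dependent, but the spectrum as a whole returns to itself after a full period, which is exactly the monodromy described below.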
The Witten effect here appears in the fact that the canonical momentum shifts in the presence of a nonzero \(\theta\). As a result, the entire spectrum of the quantum mechanical theory is \(\theta\) dependent, and exhibits monodromy. In fact, this textbook problem in ordinary quantum mechanics is exactly the same as the theory on the monopole worldline.

## 3 Duality and Non-Standard Axion Electrodynamics

In this section, we review electric-magnetic duality in order to motivate non-standard axion-photon couplings which evade the formal arguments presented in §2.1. In particular, we find the possibility of greatly enhanced axion-photon couplings proportional to \(1/e^{2}\), in agreement with the results of [15]. However, we also find that precisely when these enhanced axion-photon couplings appear, a dual version of the Witten monodromy leads to electric charges acquiring a magnetic charge when we take \(\theta\mapsto\theta+2\pi\).

### Electric-magnetic duality for a free photon

It is well-known that the theory of a free U(1) gauge field has an SL(2,\(\mathbb{Z}\)) duality group, generated by the matrices \[S=\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\quad T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}. \tag{17}\] A general element of SL(2,\(\mathbb{Z}\)) has the form \[\Lambda=\begin{pmatrix}a&b\\ c&d\end{pmatrix},\quad a,b,c,d\in\mathbb{Z},\quad ad-bc=1. \tag{18}\] The electric and magnetic potentials transform in a 2-dimensional representation:10 Footnote 10: To be clear, we are not considering a formulation of the theory where \(A\) and \(A_{\rm M}\) are both integrated over in the path integral. One should really think of this equation as a shorthand for the transformation of physical quantities like Wilson and ’t Hooft lines, and electric and magnetic fluxes.
\[\begin{pmatrix}A_{\rm M}{}^{\prime}\\ A^{\prime}\end{pmatrix}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}A_{\rm M}\\ A\end{pmatrix}, \tag{19}\] whereas the electric and magnetic currents \(J_{\rm E}\) and \(J_{\rm M}\) transform in the dual representation: \[\begin{pmatrix}J_{\rm M}{}^{\prime}&J_{\rm E}{}^{\prime}\end{pmatrix}=\begin{pmatrix} J_{\rm M}&J_{\rm E}\end{pmatrix}\Lambda^{-1}=\begin{pmatrix}J_{\rm M}&J_{\rm E} \end{pmatrix}\begin{pmatrix}d&-b\\ -c&a\end{pmatrix}, \tag{20}\] ensuring that the coupling \(A\wedge J_{\rm E}+A_{\rm M}\wedge J_{\rm M}\) is SL(2,\(\mathbb{Z}\)) invariant. Another equivalent way to write (20) is \[\begin{pmatrix}J_{\rm E}{}^{\prime}\\ -J_{\rm M}{}^{\prime}\end{pmatrix}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}J_{\rm E}\\ -J_{\rm M}\end{pmatrix}. \tag{21}\] The coupling constant and \(\theta\) angle are packaged into a complex background field \[\tau=\frac{\theta}{2\pi}+{\rm i}\frac{2\pi}{e^{2}}. \tag{22}\] This transforms as \[S:\tau\mapsto-\frac{1}{\tau},\quad T:\tau\mapsto\tau+1, \tag{23}\] or more generally \[\tau\mapsto\frac{a\tau+b}{c\tau+d} \tag{24}\] under the matrix (18). The \(T\) operation corresponds to a \(2\pi\) shift of \(\theta\). In the duality frame where our fundamental gauge field is \(A\), we have electric and magnetic field strengths \[F={\rm d}A,\quad F_{\rm M}=-\frac{2\pi}{e^{2}}\star F+\frac{\theta}{2\pi}F, \tag{25}\] where \(F_{\rm M}={\rm d}A_{\rm M}\) (where the gauge field \(A_{\rm M}\), like \(A\) itself, need only be locally well-defined, i.e., it is a connection on a U(1) bundle rather than a 1-form globally). Here we see explicitly that under the \(T\) operation \(\theta\mapsto\theta+2\pi\), we have \(F_{\rm M}\mapsto F_{\rm M}+F\) and hence \(A_{\rm M}\mapsto A_{\rm M}+A\). This is exactly the Witten monodromy (14) that we derived in §2.3, which we see is intrinsically part of the standard SL(2,\(\mathbb{Z}\)) formulation of electromagnetic duality.
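As a quick sanity check of the group action (17), (23), (24), here is a minimal numeric sketch (the helper names `mobius` and `matmul` are ours):

```python
def mobius(m, tau):
    """Mobius action (24) of an SL(2,Z) matrix m = ((a, b), (c, d)) on the
    complexified coupling tau = theta/(2*pi) + i*2*pi/e^2."""
    (a, b), (c, d) = m
    return (a * tau + b) / (c * tau + d)

def matmul(m1, m2):
    """2x2 integer matrix product."""
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

S = ((0, 1), (-1, 0))
T = ((1, 1), (0, 1))
tau = 0.3 + 1.7j  # an arbitrary point in the upper half-plane

assert abs(mobius(T, tau) - (tau + 1)) < 1e-12       # T: tau -> tau + 1
assert abs(mobius(S, tau) - (-1 / tau)) < 1e-12      # S: tau -> -1/tau
assert matmul(S, S) == ((-1, 0), (0, -1))            # S^2 = -1 ...
assert abs(mobius(matmul(S, S), tau) - tau) < 1e-12  # ... which acts trivially on tau
# Matrix multiplication composes with the Mobius action:
assert abs(mobius(matmul(S, T), tau) - mobius(S, mobius(T, tau))) < 1e-12
```

The last assertion is the statement that (24) really defines a group action: acting with a product of matrices is the same as acting with each factor in turn.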
The magnetic flux quantization condition (6) holds as a topological constraint on the \(A\) field configurations we sum over in the path integral, whereas the analogous electric flux quantization condition \[-\frac{1}{2\pi}\int_{S}F_{\rm M}=\int_{S}\left(\frac{1}{e^{2}}\star F-\frac{ \theta}{4\pi^{2}}F\right)=e(S)\in\mathbb{Z} \tag{26}\] holds by the equations of motion. The appearance of \(\theta\) in this condition, which is a direct consequence of the equations of motion, is one manifestation of the Witten effect. It is possible to derive the SL(2,\(\mathbb{Z}\)) invariance directly from the path integral. The \(T\) operation is the \(2\pi\) shift of \(\theta\), which leaves the theory invariant for the reason we derived in the previous section. The \(S\) operation can be derived by integrating in additional fields in the path integral and then integrating out all but one of the new fields [53, 54]. The partition function is not SL(2,\(\mathbb{Z}\)) invariant, but rather transforms as a modular form [53]. After this operation one finds that the electric flux quantization condition (26) is now a topological constraint on the dual gauge field configurations that we now sum over in the path integral, whereas the magnetic flux quantization condition (6) now holds via equations of motion. Thus, the question of whether flux quantization is topological or dynamical is not an invariant fact in U(1) gauge theory, but an artifact of a chosen duality frame. The theory of a free U(1) gauge field has no particles with electric or magnetic charge, but it does have line operators which one can think of as infinitely heavy, static electrically and magnetically charged objects with which one can probe the theory. 
In particular, there is a Wilson line operator defined for integer \(q\) (corresponding to a representation of U(1)) and curves \(C\): \[W_{q}(C)=\exp\left[{\rm i}q\int_{C}A\right] \tag{27}\] and an 't Hooft line operator \(T_{p}(C)\) defined for a _magnetic_ charge \(p\in\mathbb{Z}\) and curve \(C\). In terms of the magnetic dual gauge field, the 't Hooft line operator is similar to a Wilson operator: \[T_{p}(C)=\exp\left[{\rm i}p\int_{C}A_{\rm M}\right]. \tag{28}\] In terms of the electric gauge field \(A\) over which we perform the path integral in the standard duality frame, we can define the 't Hooft operator by excising a small tube around \(C\) and imposing a boundary condition on the field that the magnetic flux \(m(S)\) through a surface \(S\) linking \(C\) with linking number \(\ell(S,C)\) is \(\ell(S,C)p\). One further has an infinite collection of dyonic line operators \(L_{p,q}(C)\) with magnetic charge \(p\) and electric charge \(q\), which can be thought of as the fusion of \(q\) minimal-charge Wilson lines and \(p\) minimal-charge 't Hooft lines. The duality group acts on this collection of line operators via the map (19).

### The standard axion-photon coupling

Consider the theory with a dynamical axion \(\theta(x)\) coupling to \(F\wedge F\) for an otherwise free photon, as in (1) (with \(n=1\), for convenience). This promotes the _real_ part of the background field \(\tau\) in (22) to a dynamical field that we sum over in the path integral, but not the imaginary part. Clearly, treating different components of \(\tau\) differently in this way explicitly breaks the SL(2,\(\mathbb{Z}\)) duality symmetry. This is reflected in the equations of motion, in the factor of \(e^{2}\) in the coupling of the axion to photons when the fields are canonically normalized, and in the spectrum of line and surface operators in the theory.
One way to think of the theory with an axion is that we have now _gauged_ the \(\mathbb{Z}\) subgroup of SL(2,\(\mathbb{Z}\)) generated by \(T\), because the \(T\) operation corresponds to the gauge redundancy \(\theta\mapsto\theta+2\pi\). The \(T\) operation acts trivially on the field strength \(F\) (consistent with our fourth assumption in §2), but it acts nontrivially on the magnetic field strength \(F_{\rm M}\), which shifts to \(F_{\rm M}+F\) (the Witten monodromy). In the theory with an axion, this means that \(F_{\rm M}\) is not a gauge invariant operator, even though \(F\) is! This breaking of electric-magnetic duality has an important implication for the physics of magnetic charges. In particular, the 't Hooft operator (28) is no longer a genuine (gauge invariant) operator, as it transforms under \(\theta\mapsto\theta+2\pi\). This is precisely the same issue that we discussed for physical monopoles in §2.3, which can be resolved by introducing a localized mode \(\sigma\cong\sigma+2\pi\) on the curve \(C\) transforming under \(A\) gauge transformations as (15). As explained in [55], we can then define an 't Hooft operator to include a path integral over this mode: \[\widehat{T}_{p}(C)=\int{\cal D}\sigma\exp\left[{\rm i}p\int_{C}\left(A_{\rm M} -\frac{\theta}{2\pi}({\rm d}\sigma+A)\right)\right]. \tag{29}\] Even in the absence of dynamical monopoles, then, the anomaly inflow phenomenon manifests itself in the spectrum of line operators in the theory. Rather than the entire family of \(L_{p,q}(C)\) operators, we now have Wilson operators \(W_{q}(C)\) and 't Hooft operators \(\widehat{T}_{p}(C)\), but the dyonic lines have been subsumed into the 't Hooft operator thanks to its localized degree of freedom.11 This reflects the breaking of SL(2,\(\mathbb{Z}\)) by the axion coupling. Footnote 11: One might also attempt to define \(L_{p,q}(C)\) line operators for all \(p\) and \(q\) by imposing a \(\theta=0\) boundary condition along the line.
A detailed assessment of the full range of allowed boundary conditions and line operators is beyond the scope of this work.

### Alternative axion-photon couplings

We have seen that the standard axion-photon coupling breaks SL(2,\(\mathbb{Z}\)) in a specific way. It gauges a specific \(\mathbb{Z}\) subgroup of SL(2,\(\mathbb{Z}\)) corresponding to powers of \(T\), which leaves Wilson line operators and magnetic 1-form surface operators untouched, while modifying 't Hooft lines and electric 1-form surface operators. This is reflected in the effective electric, but not magnetic, charges and currents that appear in the axionic modification of Maxwell's equations (3). Given the structure of SL(2,\(\mathbb{Z}\)), it is clear that this choice of axion coupling, and its privileging of electric over magnetic currents, was not a unique choice. We can define a different type of axion-photon coupling for every SL(2,\(\mathbb{Z}\)) element \(\Lambda\) (18), which defines a new duality frame. To do so, we follow a simple procedure. First, transform to the new frame; then, add a properly quantized axion coupling \(\theta F\wedge F\) in the new frame; then, transform back. Let's begin in the frame where \(A\) is the gauge field that couples to the usual electric charge carried by the electron. In this frame, we will take the complexified gauge coupling in the case where the axion field \(\theta(x)=0\) to be given by \[\tau_{0}=\frac{\theta_{0}}{2\pi}+{\rm i}\frac{2\pi}{e_{0}^{2}}. \tag{30}\] The subscripts \(0\) signal that these are constants, independent of the axion field \(\theta(x)\). Now, we perform an SL(2,\(\mathbb{Z}\)) transformation as given by (20), (19) and (24) to the \(A^{\prime}\) frame. In this frame, the constant complexified gauge coupling is \(\tau_{0}^{\prime}=(a\tau_{0}+b)/(c\tau_{0}+d)\). Then we add, in this frame, a new term to the action: \[\delta S=\frac{k}{8\pi^{2}}\int\theta(x)F^{\prime}(x)\wedge F^{\prime}(x).
\tag{31}\] We assume that \(F^{\prime}\) is invariant under \(\theta\mapsto\theta+2\pi\), and hence \(k\in\mathbb{Z}\) as derived in §2.1. The addition (31) changes the effective complexified gauge coupling in the \(A^{\prime}\) frame to \[\tau^{\prime}(x)=\tau_{0}^{\prime}+\frac{k}{2\pi}\theta(x). \tag{32}\] Now, we transform _back_ to the original frame with \(\Lambda^{-1}\) to find the field-dependent complexified gauge coupling \[\tau(x)=\frac{d\tau^{\prime}(x)-b}{-c\tau^{\prime}(x)+a}=\frac{\tau_{0}+d(c\tau_{0}+d)\frac{k}{2\pi}\theta(x)}{1-c(c\tau_{0}+d)\frac{k}{2\pi}\theta(x)}. \tag{33}\] This expression fully captures how the axion field \(\theta(x)\) couples to the standard photon field: \(\mathrm{Re}\,\tau(x)\) determines the coupling to \(F\wedge F\), and \(\mathrm{Im}\,\tau(x)\) determines the coupling to \(F\wedge\star F\). The full expressions for the real and imaginary parts of (33) are complicated, but we can take a look at their expansion to linear order in \(\theta(x)\) to see how \(1/e_{0}^{2}\) and \(\theta_{0}\) are corrected: \[\frac{1}{e^{2}(x)}\equiv\frac{1}{2\pi}\mathrm{Im}\,\tau(x)=\frac{1}{e_{0}^{2}}\left[1+2c\left(d+c\frac{\theta_{0}}{2\pi}\right)\frac{k\theta(x)}{2\pi}+\cdots\right],\] \[\vartheta(x)\equiv 2\pi\mathrm{Re}\,\tau(x)=\theta_{0}+\left[\left(d+c\frac{\theta_{0}}{2\pi}\right)^{2}-c^{2}\left(\frac{2\pi}{e_{0}^{2}}\right)^{2}\right]k\theta(x)+\cdots. \tag{34}\] where \(\cdots\) refers to terms of order \(\theta(x)^{2}\) or higher, and we have used the notation \(\vartheta(x)\) for the effective coefficient of \(\frac{1}{8\pi^{2}}F\wedge F\), to distinguish it from the axion field \(\theta(x)\). Let us highlight some key features of these results:

* By continuity, if \(e_{0}^{2}\) is small then the effective coupling \(e^{2}(x)\) remains small for small axion field values. However, larger axion field values \(\theta(x)\sim O(1)\) can drive the coupling to be strong.
* When \(c=0\), the axion coupling has the expected quantized form: in this case we necessarily have \(d=\pm 1\), so the coefficient is \(k\in\mathbb{Z}\).
* When \(c\neq 0\), the axion coupling in \(\vartheta\) is strongly enhanced. In particular, canonical normalization multiplies the coupling by \(e_{0}^{2}\), so the canonical coupling for \(c\neq 0\) is proportional to \(1/e_{0}^{2}\) instead of \(e_{0}^{2}\) itself. This is as one would expect, from the exchange of electric and magnetic couplings under Dirac quantization.
* When \(c\neq 0\), we also observe a coupling of the axion to the standard kinetic term of the photon. Generically (unless \(\theta_{0}=-\frac{2\pi d}{c}\)), the coupling is linear.
* When \(c\neq 0\), the curve \(\tau(\theta)\) in the upper half-plane traced out by varying \(\theta\) is a circle tangent to the real axis at \(\tau=-d/c\) (a point approached asymptotically as \(\theta\to\pm\infty\)) and passing through the point \(\tau_{0}\).

We illustrate the curve \(\tau(\theta)\) in two examples in Fig. 3, starting with a purely imaginary \(\tau_{0}\). The first is a standard axion coupling, where \(\theta\mapsto\theta+2\pi n\) shifts \(\tau\mapsto\tau+n\). The second is a coupling in an \(S\)-dual frame, where shifting \(\theta\mapsto\theta+2\pi n\) acts on \(\tau\) with the SL(2,\(\mathbb{Z}\)) transformation \(S^{-1}T^{n}S\). In this case, \(\tau(\theta)\) is a circle passing through the origin. These results evade the argument for the axion coupling quantization given in §2.1 for the reason anticipated in §2.2: the coupling (31) is defined in a frame where \(F^{\prime}\) is invariant under \(\theta\mapsto\theta+2\pi\), which means that the original field strength \(F\) is _not_ invariant under this operation. This violates the fourth assumption in the argument.
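The linearized expressions (34) can be checked mechanically from (33); here is a sketch in sympy (symbol names are ours; `delta` and `gamma` are the combinations that (39) below names \(\delta\) and \(\gamma\)):

```python
import sympy as sp

theta, theta0, k = sp.symbols('theta theta0 k', real=True)
e0 = sp.symbols('e0', positive=True)
c, d = sp.symbols('c d', integer=True)

tau0 = theta0 / (2 * sp.pi) + sp.I * 2 * sp.pi / e0**2
u = c * tau0 + d

# Field-dependent complexified coupling, eq. (33)
tau = (tau0 + d * u * k * theta / (2 * sp.pi)) / (1 - c * u * k * theta / (2 * sp.pi))

# Expand to linear order in the axion field theta(x)
lin = sp.expand(sp.expand_complex(sp.series(tau, theta, 0, 2).removeO()))
inv_e2 = sp.im(lin) / (2 * sp.pi)     # 1/e^2(x)
vartheta = 2 * sp.pi * sp.re(lin)     # effective theta-angle

delta = d + c * theta0 / (2 * sp.pi)
gamma = 2 * sp.pi * c / e0**2

# Compare with the linearized result, eq. (34)
assert sp.simplify(inv_e2 - (1 + 2 * c * delta * k * theta / (2 * sp.pi)) / e0**2) == 0
assert sp.simplify(vartheta - (theta0 + (delta**2 - gamma**2) * k * theta)) == 0
```

The same expansion makes the \(c\neq 0\) enhancement visible: the correction to \(\vartheta\) carries the factor \(\gamma^{2}\propto 1/e_{0}^{4}\).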
Indeed, the composition of \(\Lambda\) followed by \(T\) followed by \(\Lambda^{-1}\) corresponds to the SL(2,\(\mathbb{Z}\)) element \[T^{\prime}=\begin{pmatrix}1+cd&d^{2}\\ -c^{2}&1-cd\end{pmatrix}, \tag{35}\] under which \[A\mapsto(1-cd)A-c^{2}A_{\rm M}. \tag{36}\] Thus, an immediate consequence of having an axion coupling that is not quantized in the standard way is that electrically charged particles acquire a magnetic charge when \(\theta\mapsto\theta+2\pi\), in a dual form of the Witten monodromy (14). We will return to this point in §4, but first let us discuss how our results relate to non-standard axion electrodynamics in the literature.

Figure 3: The complexified coupling \(\tau\) as a function of the axion \(\theta\), as in (33) with \(\tau_{0}=1.7{\rm i}\) and \(k=1\). The blue line is the standard \(\theta F\wedge F\) axion coupling, corresponding to the choice \(c=0\), \(d=1\). In this case, the axion only affects the usual theta angle captured by \({\rm Re}\,\tau\). The orange curve is the case where the axion couples in an \(S\)-dual frame, with \(d=0\) and \(c=-1\). In this case, \(\tau\) asymptotically approaches zero for large values of the axion. In both cases, the dots on the curve correspond to shifts of the axion by multiples of \(2\pi\), which map \(\tau_{0}\) to values related by an SL(2,\(\mathbb{Z}\)) matrix. The faint gray lines in the background trace the boundaries of different SL(2,\(\mathbb{Z}\)) fundamental domains.

### Comparison to prior literature

The expression for \(\tau(x)\) in (33) is clunky, so let's see how we can rearrange our results to more closely resemble those that have previously appeared in the literature. In the \(A^{\prime}\) frame, the equations of motion are simple: \[\mathrm{d}F^{\prime} =0,\] \[\frac{1}{e^{\prime 2}}\,\mathrm{d}\star F^{\prime} =\frac{k}{4\pi^{2}}\,\mathrm{d}\theta\wedge F^{\prime},\] \[f^{2}\,\mathrm{d}\star\mathrm{d}\theta =\frac{k}{8\pi^{2}}F^{\prime}\wedge F^{\prime}.
\tag{37}\] The primed gauge coupling is independent of \(\theta(x)\) and given by \[e^{\prime 2}=e_{0}^{2}\left[c^{2}\left(\frac{2\pi}{e_{0}^{2}}\right)^{2}+ \left(d+c\frac{\theta_{0}}{2\pi}\right)^{2}\right]. \tag{38}\] Because these quantities will recur throughout the discussion below, it is useful to define \[\gamma\equiv c\left(\frac{2\pi}{e_{0}^{2}}\right),\quad\delta\equiv d+c\frac{ \theta_{0}}{2\pi}, \tag{39}\] so that \(e^{\prime 2}=e_{0}^{2}\left(\gamma^{2}+\delta^{2}\right)\). In terms of the usual field strength \(F\) and its magnetic dual \(F_{\mathrm{M}}\), the primed field strength is \(F^{\prime}=cF_{\mathrm{M}}+dF\). In the special case where \(\theta(x)=0\), we further have \(F_{\mathrm{M}}=-\frac{2\pi}{e_{0}^{2}}\star F+\frac{\theta_{0}}{2\pi}F\). Let us define, in the general case, a quantity \(\mathcal{F}\) that is related to \(F^{\prime}\) in the same way that \(F\) is when \(\theta(x)=0\). That is, \(\mathcal{F}\) is defined by \[F^{\prime}=\left(d+c\frac{\theta_{0}}{2\pi}\right)\mathcal{F}-c\frac{2\pi}{e_ {0}^{2}}\star\mathcal{F}=\delta\mathcal{F}-\gamma\star\mathcal{F}. \tag{40}\] Then we necessarily have \(\mathcal{F}\to F\) when \(\theta(x)\to 0\). Now, we simply substitute the expression (40) into the equations for \(\mathrm{d}F^{\prime}\) and \(\mathrm{d}\star F^{\prime}\) in (37) and then solve for \(\mathrm{d}\mathcal{F}\) and \(\mathrm{d}\star\mathcal{F}\). We obtain: \[\mathrm{d}\star\mathcal{F}+\frac{ke_{0}^{2}}{4\pi^{2}}\left(- \gamma\delta\,\mathrm{d}\theta\wedge\star\mathcal{F}+\delta^{2}\,\mathrm{d} \theta\wedge\mathcal{F}\right) =0,\] \[\mathrm{d}\mathcal{F}+\frac{ke_{0}^{2}}{4\pi^{2}}\left(-\gamma^{2} \,\mathrm{d}\theta\wedge\star\mathcal{F}+\gamma\delta\,\mathrm{d}\theta \wedge\mathcal{F}\right) =0. \tag{41}\] Notice that \(\mathcal{F}\) can't be interpreted as a field strength, because \(\mathrm{d}\mathcal{F}\neq 0\) (even away from singular points like the core of a monopole). 
However, it does agree with \(F\) in the limit \(\theta\to 0\). The equations (41) closely resemble the equations of motion that were obtained in [15]. In particular, we can identify the interaction terms in our equations with the three axion-photon couplings there via \[g_{AB}=\frac{ke_{0}^{2}}{4\pi^{2}}\gamma\delta,\quad g_{AA}=\frac{ke_{0}^{2}}{ 4\pi^{2}}\delta^{2},\quad g_{BB}=\frac{ke_{0}^{2}}{4\pi^{2}}\gamma^{2}, \tag{42}\] (up to normalization and sign conventions). The field strength in the original duality frame is related to \(\mathcal{F}\) via \[F=\mathcal{F}+\frac{k\theta(x)}{2\pi}c\left(\delta\mathcal{F}-\gamma\star \mathcal{F}\right). \tag{43}\] The axion equation of motion takes the form \[f^{2}\,\mathrm{d}\star\mathrm{d}\theta=\frac{1}{2e_{0}^{2}}\left[(g_{AA}-g_{BB}) \mathcal{F}\wedge\mathcal{F}-2g_{AB}\mathcal{F}\wedge\star\mathcal{F}\right] \tag{44}\] (where we have used that, in Minkowski signature, \(\star\mathcal{F}\wedge\star\mathcal{F}=-\mathcal{F}\wedge\mathcal{F}\); compare (34)). What we have found is that the axion couplings defined in a different SL(2,\(\mathbb{Z}\)) frame are essentially the new couplings of [15], with the following caveats:

* The three couplings are not independent: in our normalization, \(g_{AB}^{2}=g_{AA}g_{BB}\). (However, this constraint can be relaxed for a more general coupling; see §4.2 and appendix A.)
* The couplings obey a nontrivial quantization condition, in the sense that three integers (\(c\), \(d\), and \(k\)) fully determine their dependence on the fundamental parameters \(e_{0}\) and \(\theta_{0}\).
* The field strength \(\mathcal{F}\) for which the equations take the simple form (41) is not a field strength in the usual sense, as is clear from the fact that it is not a closed form.
* When \(c\neq 0\), the axion-photon coupling \(g_{BB}\), after canonically normalizing, can be \(\propto 1/e_{0}^{2}\) rather than \(\propto e_{0}^{2}\). (We already noted this in the linearized analysis around (34).)
However, precisely when this large coupling appears, the electron acquires magnetic charge when \(\theta\mapsto\theta+2\pi\). This last point, the "dual Witten monodromy," is crucial for understanding whether nonstandard axion-photon couplings are phenomenologically viable. In Appendix A, we give a somewhat different and more complete perspective, starting with a set of equations of the form (41) (but with completely undetermined coefficients) and systematically working out how they can map onto equations involving a closed field strength and its magnetic dual. This leads us to a very general family of functions \(\tau(x)\) encoding how an axion can couple to gauge fields. Some of these couplings are simply periodic functions, where the coupling explicitly depends on \(\sin(n\theta(x))\) and \(\cos(n\theta(x))\). In complete theories, we expect that such couplings are suppressed by the axion mass squared, because effects that can generate such couplings can also, in general, generate an axion potential with the same spurions for violation of the continuous axion shift symmetry. We also find a set of couplings that precisely correspond to the SL(2,\(\mathbb{Z}\)) family of Chern-Simons couplings that we have just discussed, as well as more general couplings of the type discussed in §4.2.

## 4 Phenomenological Assessment

In this section, we present our main arguments regarding the phenomenological viability of nonstandard axion electrodynamics. We find that the dual Witten monodromy, which is implied by the presence of non-standard axion-photon couplings, is incompatible with the Standard Model, and so non-standard axion-photon couplings are phenomenologically excluded.

### The dual Witten monodromy

When we couple the axion to the photon in an SL(2,\(\mathbb{Z}\)) dual frame, the ordinary photon field \(A\) is no longer invariant under \(\theta\mapsto\theta+2\pi\): it shifts as in (36), and in particular, acquires a term proportional to \(A_{\mathrm{M}}\).
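The matrix (35) and the shift (36) can be verified with a few lines of symbolic algebra (a sketch, with conventions as in (18)-(19)):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', integer=True)
Lam = sp.Matrix([[a, b], [c, d]])
Lam_inv = sp.Matrix([[d, -b], [-c, a]])  # inverse of Lam when a*d - b*c = 1
T = sp.Matrix([[1, 1], [0, 1]])

# Conjugate the T operation into the Lambda frame and impose det = 1
Tp = sp.expand(Lam_inv * T * Lam).subs(a * d, 1 + b * c)

# Eq. (35)
assert Tp == sp.Matrix([[1 + c * d, d**2], [-c**2, 1 - c * d]])

# Action on the doublet (A_M, A), as in (19)
AM, A = sp.symbols('A_M A')
shifted = Tp * sp.Matrix([AM, A])

# Eq. (36): A -> (1 - c*d)*A - c**2*A_M
assert sp.expand(shifted[1] - ((1 - c * d) * A - c**2 * AM)) == 0
```

For \(c\neq 0\) the second row mixes \(A_{\rm M}\) into \(A\), which is the statement that electric charges pick up magnetic charge under \(\theta\mapsto\theta+2\pi\).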
As a result, every electrically charged particle in the theory must acquire magnetic charge and become a dyon when \(\theta\mapsto\theta+2\pi\). This is just the dual of the usual Witten monodromy, as explained in §2.3. Let's begin by commenting on some general features of this dual Witten effect, before discussing it in the context of the Standard Model in particular. Consider an electrically charged particle, say the electron, at \(\theta=0\). If we continuously vary \(\theta\) from \(0\) to \(2\pi\), this particle will become a dyon. In order to have a reasonable QFT, it must be the case that its mass changes during this process. Otherwise, we would find an infinite degeneracy of dyons by tracking this state to \(\theta=2\pi n\) for all integer \(n\). Thus, the spectrum of dyonic excitations of the electron (or any other charged particle) should exhibit a monodromy, as depicted in Fig. 4. The electron mass should increase as \(\theta\) increases, while the mass of some other dyon state will decrease, and that state will become the new electron at \(\theta=2\pi\). This is the same sort of behavior that we see in the context of the usual Witten effect, where the magnetic monopole mass increases as \(\theta\) varies (see, e.g., [56]). Because the mass of the dyons depends on \(\theta\), we can integrate them out to obtain an effective potential \(V(\theta)\) for the axion [52]. In the usual duality frame, where the Witten effect applies to magnetic monopoles, we integrate out very heavy monopole states with small dyonic splittings. Because the monopole is a heavy semiclassical object, we should not treat it with a weakly-coupled monopole field. Instead, we include magnetic monopoles in the path integral by summing over the different paths that heavy monopole worldlines can take (see, e.g., [57]).
The sum over dyons can be recast as a sum over a winding of the dyon collective coordinate around the monopole loop, in which case the calculation admits a saddle point approximation where such monopole loops with dyonic winding can be thought of as a type of instanton [52, 58].

Figure 4: The mass spectrum of the electron (or any other charged particle) and associated dyonic excitations, in theories with a dual Witten effect. At \(\theta=0\), the lightest state is the electron with mass \(m_{e}\), and the first dyon appears at mass \(m_{D}\). The blue curve tracks the mass of the electron as \(\theta\) varies and it becomes a heavy dyon. The orange curves are dyonic states at \(\theta=0\); a different dyonic state plays the role of the electron at different nonzero integer values of \(\theta/(2\pi)\).

When the axion couples in a non-standard duality frame, the character of our calculation changes. Now the Witten effect implies that a light, weakly-coupled particle like the electron becomes a dyon as \(\theta\) varies. We treat such particles as fields in the path integral, rather than heavy semiclassical worldlines. In particular, the saddle point from the monopole calculation would now lie at small proper time and would have small action. Thus, there is no exponential suppression in the axion mass arising from electrically charged fermion loops, and the calculation lies in a regime in which we do not trust semiclassical methods. Instead, standard perturbative methods should give a reasonable estimate. We assume that the mass term of a charged Dirac fermion \(\Psi\) is approximately \[\left[m_{\Psi}+m_{D}\left(n-\frac{\theta}{2\pi}\right)^{2}+\cdots\right]\overline {\Psi}\Psi, \tag{45}\] to quadratic order in the axion, where \(n\) labels which dyon state we are considering and the \(\cdots\) represent terms of higher order in \(\theta\).
Given such a term, and focusing on the lightest state \(n=0\) near \(\theta=0\), we estimate an axion mass from the one-loop Feynman diagram in Fig. 5: \[m_{\theta}^{2}\sim\frac{1}{16\pi^{2}}m_{\Psi}m_{D}\frac{\Lambda^{2}}{f^{2}} \log\frac{\Lambda}{m_{\Psi}}. \tag{46}\] It seems reasonable to expect that \(\Lambda\) could be of order the dyon mass \(m_{D}\), since that is a scale where new physics enters. From this expression, we see that this contribution to the axion mass is potentially much larger than standard instanton contributions. So far, we have kept the discussion rather general: our only assumption is that the "standard" duality frame is the one in which \(A\) couples to light, weakly-interacting charged particles. In this case, the standard \(\theta F\wedge F\) coupling ensures that only monopole loops contribute to the axion potential, allowing for an exponentially small (semiclassical) axion mass. In any other frame, where the coupling takes the form \(\theta F^{\prime}\wedge F^{\prime}\), we expect the axion to obtain a large mass. Now, let's turn to a more realistic axion phenomenology, with the axion coupled to the Standard Model photon. Every charged particle contributes in loops like Fig. 5. In particular, the top quark does. Taking \(m_{\Psi}=m_{t}\) and taking \(m_{D}\sim\Lambda\sim 1\,\text{TeV}\), we immediately see that a top quark loop contributes an axion mass \[m_{\theta}\sim 30\,\text{eV}\,\frac{10^{12}\,\text{GeV}}{f}. \tag{47}\] This completely overwhelms the standard axion mass. There is no possibility of suppressing \(m_{D}\) or \(\Lambda\), since we have already probed physics up to the TeV scale. There is no a priori reason for these contributions to the axion potential to have the same phase as the QCD contribution, so this would spoil the solution to the Strong CP problem. In fact, the situation is even worse than this. 
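As an order-of-magnitude sanity check of the estimate (46) and the quoted value (47) (inputs as in the text; the prefactor in (46) is itself only an estimate, so only the scale matters):

```python
import math

# Inputs, in GeV: top mass, dyon scale ~ cutoff ~ 1 TeV, f = 1e12 GeV
m_psi = 173.0   # top quark mass
m_D = 1e3       # dyon mass scale
Lam = 1e3       # cutoff Lambda
f = 1e12        # axion decay constant

# Eq. (46): m_theta^2 ~ (1/16 pi^2) m_psi m_D (Lambda^2/f^2) log(Lambda/m_psi)
m_theta_sq = (1 / (16 * math.pi**2)) * m_psi * m_D * (Lam**2 / f**2) \
             * math.log(Lam / m_psi)
m_theta_eV = math.sqrt(m_theta_sq) * 1e9  # convert GeV -> eV

# Lands at tens of eV, consistent with the quoted ~30 eV in (47)
assert 10 < m_theta_eV < 100
```

For comparison, a standard QCD axion with \(f=10^{12}\,\text{GeV}\) has a mass of order \(\mu\text{eV}\), so this contribution dominates by many orders of magnitude, which is the point of the text.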
Figure 5: If the electrically charged fermion \(\Psi\) becomes a dyon by coupling to the axion \(\theta\), then a loop of fermions can generate an axion potential. The corresponding contribution to the axion mass term can be estimated from this one-loop diagram.

The Standard Model fermions are chiral and weakly interacting, and we assume that their interactions with the axion are also weak (a reasonable assumption, if we are discussing anything resembling standard axion phenomenology). Thus, not only the usual fermion mass \(m_{\Psi}\) but also the dyon mass term \(m_{D}\) in (45) should be proportional to the Higgs vev. It does not make sense to imagine that the electron acquires a mass in a \(\theta\) background in the limit that the Higgs field has no expectation value. This immediately tells us that \(m_{D}\) should not be far above the TeV scale, because there is a unitarity bound on the strength of the interaction of the Higgs with the fermion [59]. In fact, the _entire tower_ of dyons should have this property! In the limit that the Higgs vev is turned off, the electron should not acquire a mass as \(\theta\) varies, and we can vary it over many cycles to find that every dyon becomes massless. This is a complete disaster for field theory, with an infinite tower of massless states. Although the nonzero Higgs vev means that our universe does not strictly reside in this limit, the theory nonetheless predicts a tower of particles, all with mass tied to the electroweak scale and interacting with the Higgs. These particles run in loops in processes involving the Higgs boson, and it is difficult to see how the theory could remain weakly coupled in any sense. Concretely, all of these dyons appear in triangle diagrams contributing to the Higgs couplings to photons and (for dyonic excitations of quarks) to gluons, which are empirically known to be approximately the values predicted by the Standard Model.
These measurements, independent of any speculation about how a viable model of strongly-coupled QFT could accommodate all of the dyon states, are sufficient to phenomenologically exclude such a model. We expect that a stronger statement is true, that such a theory (with an infinite tower of dyons obtaining mass via the Higgs mechanism) is simply inconsistent. In recent years there has been intensive study of theories in which infinite towers of particles become massless at a point in scalar field space (e.g., [60, 61, 62, 63, 64, 65, 66, 67]). When the particles can be treated as approximately elementary, the loop effects of these particles modify the scalar kinetic term and make the scalar \(\to 0\) limit an infinite distance limit [61, 62]. In the case we are discussing, this scalar would be the Higgs. Such infinite distance limits are believed to happen only in weak-coupling limits of quantum gravity, in which the scalar field controlling the tower's mass in all known limits parametrizes either the volume of decompactifying extra dimensions or the tension of an emergent light string [63]. None of these examples resemble the Higgs boson; for example, none of the scalar fields carry nonabelian gauge charge. There is another alternative: when the states in the tower are strongly interacting (as the dyons are expected to be), the origin of field space may not lie at infinite distance, but instead may be a strongly interacting CFT.12 In examples, the gauge theory under which the particles in the tower are charged is emergent, and does not exist at the origin of field space. It is unclear if a Standard Model-like theory could ever arise by perturbing such a theory onto a Higgs branch. Perhaps the most general argument we can give is that integrating out a tower of particles of increasingly large charge under a gauge symmetry is generically expected to drive that gauge theory to weak coupling (e.g., [69]). 
For the dyon tower, it is the magnetic charge that grows as we ascend the tower, and so we would expect the _magnetic_ coupling to be driven small. Accordingly, the electric coupling would become _large_--the opposite of what we see in the Standard Model. However, we emphasize that even if our doubts are ill-founded and a theory with chiral fermions accompanied by dyonic towers actually exists, it is not phenomenologically viable for the reasons discussed above; none of the conclusions of this paper rest on this paragraph.

Footnote 12: See, e.g., [68] for examples of this phenomenon in 5d, which should lead to similar 4d examples upon dimensional reduction.

### More general couplings

In our discussion so far, we have focused on axion couplings that take the standard form \(\theta F\wedge F\) in some duality frame. This corresponds to the requirement that the gauge transformation \(\theta\mapsto\theta+2\pi\) acts on the complex coupling parameter \(\tau\) via an SL(2,\(\mathbb{Z}\)) element of the form \(\Lambda^{-1}T^{n}\Lambda\). (These are known as the parabolic elements of SL(2,\(\mathbb{Z}\)), those with the absolute value of the trace equal to 2.) One could also consider an even more general axion coupling: given any SL(2,\(\mathbb{Z}\)) element \(\Lambda=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\), we could consider _any_ function \(\tau(\theta)\) with a \(\Lambda\)-twisted periodicity property, i.e., \[\tau(\theta+2\pi)=\frac{a\tau(\theta)+b}{c\tau(\theta)+d}. \tag{48}\] Then \(\theta\mapsto\theta+2\pi\) returns the theory to itself up to a duality transformation. Here we will offer some brief remarks about these more general possibilities. The simplest examples of such more general functions are those that correspond to the duality frames we have already discussed, but with additional periodic contributions to \(\tau(\theta)\). For instance, we might have a coupling of the form \(\sin(\theta)F\wedge F\).
Because this is manifestly gauge invariant, it can come with an arbitrary coupling constant. Such couplings arise in ordinary quantum field theories, and are generally expected to be suppressed by the mass of the axion squared, because physics that generates such a coupling could also generate a periodic potential \(V(\theta)\). An interesting, well-known, example is the contribution to the axion-photon coupling arising from the axion mixing with the pion. This contribution is larger than one might naively expect, because the axion and pion masses both arise from QCD dynamics [6, 8, 9, 10, 11, 12]. It gives an \(O(1)\) contribution \(\delta n\) modifying the prefactor in the coupling (4). In the KSVZ model [70, 71], where \(n=0\) in (1), the pion mixing generates the only axion-photon coupling and we are in the \(\Lambda=1\) case. In other models, like the DFSZ model [72, 73], where \(n\neq 0\), we have \(\Lambda=T^{n}\) but now the function \(\tau(\theta)\) is the sum of a term linear in \(\theta\) and a term periodic in \(\theta\). Now, consider the most general case. The only SL(2,\(\mathbb{Z}\)) elements under which electrically charged particles do not acquire magnetic charge are those of the form \(T^{n}\), corresponding to standard axion couplings, or \(-T^{n}\), in which case the \(\theta\mapsto\theta+2\pi\) operation is accompanied by charge conjugation. Any other choice, then, will imply that the electrically charged particles of the Standard Model have a family of dyonic excitations, with associated phenomenological difficulties. If \(\Lambda\) is an element of infinite order (either the parabolic type already discussed, or a hyperbolic element with absolute value of the trace \(>2\)), then we have an infinite tower of dyon modes, and the theory is pathological for the reasons discussed in §4.1.
One more interesting possibility remains: \(\Lambda\) could be a nontrivial SL(2,\(\mathbb{Z}\)) element of finite order (an elliptic element, with absolute value of the trace \(<2\)). Apart from the \(\mathbb{Z}_{2}\) subgroup generated by charge conjugation, which is not of interest to us, SL(2,\(\mathbb{Z}\)) has finite subgroups isomorphic to \(\mathbb{Z}_{3}\), \(\mathbb{Z}_{4}\), and \(\mathbb{Z}_{6}\). For example, \(S\) itself is an element of order 4, while \(ST\) has order 6. In such cases, one would have only a finite number of dyonic excitations of charged particles. Could the Standard Model be coupled to an axion with such a finite monodromy orbit? We believe that this is again problematic, though less pathological than the case with an infinite tower of dyons. Some of the arguments of §4.1 continue to apply: dyon loops would again generate large corrections to the axion potential. We would again expect that the dyonic partners of Standard Model fermions can only obtain a mass from electroweak symmetry breaking, so they would have mass near the TeV scale, and would also alter the Higgs couplings to photons and gluons away from their Standard Model predictions. (If such a theory exists and could be reconciled with precision Higgs physics, it would provide a novel motivation for searches for monopoles and dyons at the TeV scale, like [74, 75].) From a more theoretical viewpoint, one should take care that the SL(2,\(\mathbb{Z}\)) elements that act on the theory are not anomalous [53, 76]. Another problem, when \(\Lambda\) is of even order, is that a power of \(\Lambda\) corresponds to charge conjugation, which is not a symmetry of the Standard Model. One would need more elaborate model-building to make sense of this. Finally, it is not at all clear what form a UV completion of such a coupling could take.
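The finite orders quoted above are easy to verify by direct matrix multiplication. The sketch below checks that \(S\) has order 4 and \(ST\) order 6 in SL(2,\(\mathbb{Z}\)), and that both are elliptic (absolute value of the trace \(<2\)).

```python
def matmul(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def order(M, max_n=12):
    """Smallest n >= 1 with M^n equal to the identity (None if > max_n)."""
    I = [[1, 0], [0, 1]]
    P = I
    for n in range(1, max_n + 1):
        P = matmul(P, M)
        if P == I:
            return n
    return None

S = [[0, -1], [1, 0]]
T = [[1, 1], [0, 1]]
ST = matmul(S, T)
order_S, order_ST = order(S), order(ST)
trace_S = S[0][0] + S[1][1]     # 0: elliptic, |trace| < 2
trace_ST = ST[0][0] + ST[1][1]  # 1: elliptic, |trace| < 2
```
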
Because the coupling \(\tau(\theta)\) is a periodic function in this case, we would tend to expect the coefficients of such couplings to be highly suppressed, for similar reasons to the standard periodic couplings mentioned above. A case in which \(\theta\mapsto\theta+2\pi\) generates a finite monodromy could be thought of as a coupling generated by a novel sort of "fractional instanton," and would be expected to have an exponentially small coefficient. All of these considerations make it highly unlikely that a consistent theory of an axion coupled to the Standard Model with a finite monodromy orbit could exist.

## 5 Conclusions

It is a familiar fact about a wide variety of axion theories that the axion coupling to photons is quantized in units of \(e^{2}/(8\pi^{2}f)\), when the fields are canonically normalized. Recently this conventional wisdom has been called into question, especially by [15]. In agreement with that work, we find that a wider variety of axion couplings to U(1) gauge theory are possible. These correspond to the possibility that \(\theta\mapsto\theta+2\pi\) is accompanied by a nontrivial SL(2, \(\mathbb{Z}\)) electromagnetic duality transformation. However, we conclude that these non-standard theories of axion electrodynamics are incompatible with the real world, due to the existence of electrically charged chiral fermions in the Standard Model, which would acquire dyonic excitations if such non-standard axion couplings exist. This is inconsistent with the Standard Model as a weakly coupled effective field theory in which electroweak symmetry is broken only by the Higgs boson, as indicated by experimental results. The Witten effect, and in particular the monodromy of the spectrum of charged objects that arises under \(\theta\mapsto\theta+2\pi\), played a key role in our discussion. We reviewed a simple argument for the inevitability of the Witten monodromy in §2.3.
Standard axion electrodynamics can be thought of as gauging the \(\mathbb{Z}\) subgroup of SL(2,\(\mathbb{Z}\)) generated by \(T\). One class of non-standard couplings can be thought of as instead gauging the \(\mathbb{Z}\) subgroup whose elements are powers of the element \(\Lambda^{-1}T\Lambda\in\text{SL}(2,\mathbb{Z})\). The spectrum still undergoes a monodromy under this \(\mathbb{Z}\) subgroup, but for nontrivial \(\Lambda\), electrically charged particles are part of a tower of dyons carrying magnetic charge. Another class of non-standard couplings has only a finite monodromy orbit, but still implies that electrically charged particles become dyons as the axion field value varies. All of these possibilities are excluded by Higgs physics. As mentioned in §4.2, the quantization of the axion-photon coupling applies only to the Chern-Simons coupling, not to additional couplings like \(\sin(\theta)F\wedge F\) that are manifestly gauge invariant. An important such contribution arises from the QCD axion's mixing with the pion. There are some subtleties in the Chern-Simons couplings themselves. First, the quantization rule depends on the basic quantum of U(1) charge; if we discovered a particle of hypercharge 1/12, for instance, our conclusion about the allowed base unit of the axion-photon coupling would change. In the Standard Model, an additional subtlety arises from the global structure of the gauge group, which is ambiguous since there are elements of the center of SU(2)\({}_{\text{L}}\) and SU(3)\({}_{\text{C}}\) that act on all known fields in the same way as elements of U(1)\({}_{\text{Y}}\)[77]. This allows the existence of field configurations with correlated fractional topological charges [78, 79, 80], which lead to quantization rules that correlate the axion couplings to gluons and photons [81, 82, 83].
In general non-abelian gauge theories, one can also modify the path integral to include only field configurations with topological charge a multiple of some base unit \(p\neq 1\)[84, 85, 86, 87, 88]. Finally, even more exotic generalized axion couplings are known to arise in various examples, such as Kaluza-Klein reduction of 5d gauge theory. In this case, we find a coupling of the form \(\theta^{3}H\wedge H\) of an axion \(\theta\) to the KK field strength \(H\), which is not invariant under \(\theta\mapsto\theta+2\pi\). However, it appears as part of a monodromy with a different gauge field that has field strength \(F\), in a structure of the schematic form \(\theta F\wedge F+\theta^{2}H\wedge F+\theta^{3}H\wedge H\) (with appropriate coefficients). Under \(\theta\mapsto\theta+2\pi\), we have \(F\mapsto F-H\), which ensures consistency of the whole structure. Such generalized theta terms have recently been examined in [89, 90]. These examples are qualitatively similar to the SL(2,Z) alternatives we have discussed in this paper, in the sense that they rely on gauge field strengths that transform nontrivially under \(2\pi\) shifts of the axion. From the phenomenological standpoint, they lead to weaker axion interactions than the standard couplings, so they do not seem to pose an interesting loophole. The physics of axion-photon couplings is very rich, with a number of subtleties and interesting applications of topology in quantum field theory. Nonetheless, the equations of axion electrodynamics (3), presented by Sikivie already forty years ago [7], are the correct equations that should guide experimental searches for an axion or axion-like particle coupling to photons. 
## Acknowledgments

MR thanks Prateek Agrawal and John Terning for raising thought-provoking questions (in conversations a few years ago) about how electric-magnetic duality relates to axion physics, and Kevin Zhou for providing references to some of the literature on non-standard formulations of axion electrodynamics. MR also thanks Anton Sokolov for an email exchange, and Eduardo Garcia-Valdecasas for comments on a draft. BH is supported by NSF grant PHY-2112800. JM is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632. MR is supported in part by the DOE Grant DE-SC0013607 and the NASA Grant 80NSSC20K0506.

## Appendix A Working backwards: equations of motion to a standard action

In the main text, we started with a manifestly well-defined axion coupling in some choice of SL(2,\(\mathbb{Z}\)) duality frame, and then showed that an appropriate definition of a non-standard field strength \(\mathcal{F}\) could recast the equations of motion in the form (41) that was studied in [15]. In this appendix, we work in the other direction. Beginning with a hypothesized set of equations of motion for \(\mathcal{F}\) coupled to a shift-symmetric scalar \(\phi\), we re-express them in terms of a standard field strength \(F\) with complexified gauge coupling determined by \(\phi\). The latter admits a standard quantization via a generalized Maxwell action, allowing us to determine the quantization conditions on the axion-photon couplings.

### Relating the Sokolov-Ringwald equations to standard electrodynamics

Motivated by [15], we consider classical electrodynamic equations of the form:13

Footnote 13: These are related to the equations in [15] via \(\big{(}\begin{smallmatrix}g_{11}&g_{12}\\ g_{21}&g_{22}\end{smallmatrix}\big{)}^{\text{(here)}}=\big{(}\begin{smallmatrix}g_{aAB}&-g_{aAA}\\ g_{aBB}&-g_{aAB}\end{smallmatrix}\big{)}^{\text{(there)}}\).
\[\begin{split}\text{d}\star\mathcal{F}+\text{d}\phi\wedge(g_{11}\star\mathcal{F}-g_{12}\mathcal{F})&=0,\\ \text{d}\mathcal{F}+\text{d}\phi\wedge(-g_{21}\star\mathcal{F}+g_{22}\mathcal{F})&=0.\end{split} \tag{49}\] Note that we can set \(g_{22}=-g_{11}\) after a field redefinition \(\mathcal{F}\to\exp\bigl{[}-\frac{g_{11}+g_{22}}{2}\phi\bigr{]}\mathcal{F}\), so we assume this to be the case henceforward. These equations have the virtue that they are invariant under constant axion shifts, \(\phi\to\phi+\delta\phi\). However, since the field-strength tensor \(\mathcal{F}\) is not closed, we cannot introduce a gauge potential \(\mathcal{A}\) such that \(\mathcal{F}=\text{d}\mathcal{A}\) in the standard way, making quantization difficult. Instead of using the Zwanziger formalism as in [15], we aim to rewrite these equations in standard form via a field redefinition, i.e., we seek functions \(F(\mathcal{F},\phi)\), \(e(\phi)\) and \(\theta(\phi)\) such that (49) becomes: \[\mathrm{d}F=0,\qquad\mathrm{d}F_{\mathrm{M}}=0,\qquad\text{where}\qquad F_{\mathrm{M}}\equiv-\frac{2\pi}{e^{2}(\phi)}\star F+\frac{\theta(\phi)}{2\pi}F. \tag{50}\] Here \(F=\mathrm{d}A\) is a standard \(U(1)\) gauge field with gauge coupling \(e(\phi)\) and theta angle \(\theta(\phi)\), whose quantization is well known (see §2, §3).
To do so, let us assume that \(F|_{\phi=0}=\mathcal{F}\).14

Footnote 14: More generally, if \(F|_{\phi=0}=\alpha\mathcal{F}+\beta\star\mathcal{F}\) for constants \(\alpha,\beta\) then we first rewrite (49) in terms of \(\mathcal{F}^{\prime}=\alpha\mathcal{F}+\beta\star\mathcal{F}\): \[\mathrm{d}\star\mathcal{F}^{\prime}+\mathrm{d}\phi\wedge(g_{11}^{\prime}\star\mathcal{F}^{\prime}-g_{12}^{\prime}\mathcal{F}^{\prime})=0,\qquad\text{where}\qquad\begin{pmatrix}g_{11}^{\prime}&g_{12}^{\prime}\\ g_{21}^{\prime}&g_{22}^{\prime}\end{pmatrix}=\begin{pmatrix}\alpha&\beta\\ -\beta&\alpha\end{pmatrix}\begin{pmatrix}g_{11}&g_{12}\\ g_{21}&g_{22}\end{pmatrix}\begin{pmatrix}\alpha&\beta\\ -\beta&\alpha\end{pmatrix}^{-1}.\]

With foresight, we first define \[\mathcal{F}_{\mathrm{M}}\equiv-\frac{2\pi}{e_{0}^{2}}\star\mathcal{F}+\frac{\theta_{0}}{2\pi}\mathcal{F},\qquad\text{where}\qquad e_{0}=e(0),\qquad\theta_{0}=\theta(0). \tag{51}\] In terms of \(\mathcal{F},\mathcal{F}_{\mathrm{M}}\), (49) becomes: \[\mathrm{d}\mathcal{F}_{\mathrm{M}}+\mathrm{d}\phi\wedge(k_{11}\mathcal{F}_{\mathrm{M}}+k_{12}\mathcal{F})=0, \tag{52}\] \[\mathrm{d}\mathcal{F}+\mathrm{d}\phi\wedge(k_{21}\mathcal{F}_{\mathrm{M}}+k_{22}\mathcal{F})=0,\] and \(k_{22}=-k_{11}\). Thus, by construction \(F|_{\phi=0}=\mathcal{F}\) and \(F_{\mathrm{M}}|_{\phi=0}=\mathcal{F}_{\mathrm{M}}\). More generally, for \(\phi\neq 0\): \[\begin{pmatrix}F_{\mathrm{M}}\\ F\end{pmatrix}=\begin{pmatrix}a(\phi)&b(\phi)\\ c(\phi)&d(\phi)\end{pmatrix}\begin{pmatrix}\mathcal{F}_{\mathrm{M}}\\ \mathcal{F}\end{pmatrix},\qquad\text{where}\qquad\begin{pmatrix}a&b\\ c&d\end{pmatrix}\bigg{|}_{\phi=0}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}. \tag{53}\] To determine the functions \(a(\phi),b(\phi),c(\phi),d(\phi)\), we impose Maxwell's equations \(\mathrm{d}F=\mathrm{d}F_{\mathrm{M}}=0\) and apply (52).
This yields the differential equation \[\frac{\mathrm{d}}{\mathrm{d}\phi}\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&k_{22}\end{pmatrix},\qquad\text{so that}\qquad\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\exp\biggl{[}\phi\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&k_{22}\end{pmatrix}\biggr{]}, \tag{54}\] upon imposing the \(\phi=0\) boundary condition (53). Using (51), (53), we obtain \[F_{\mathrm{M}}=-a\frac{2\pi}{e_{0}^{2}}\star\mathcal{F}+\biggl{(}a\frac{\theta_{0}}{2\pi}+b\biggr{)}\mathcal{F},\qquad F=-c\frac{2\pi}{e_{0}^{2}}\star\mathcal{F}+\biggl{(}c\frac{\theta_{0}}{2\pi}+d\biggr{)}\mathcal{F}. \tag{55}\] Solving the second equation for \(\mathcal{F}\) and substituting into the first, one finds after some algebra that \[F_{\mathrm{M}}=-\operatorname{Im}\biggl{(}\frac{a\tau_{0}+b}{c\tau_{0}+d}\biggr{)}\star F+\operatorname{Re}\biggl{(}\frac{a\tau_{0}+b}{c\tau_{0}+d}\biggr{)}F,\qquad\text{where}\qquad\tau_{0}\equiv\frac{\theta_{0}}{2\pi}+\mathrm{i}\frac{2\pi}{e_{0}^{2}}. \tag{56}\] Thus, the axion-dependent coupling constants are given by \[\tau(\phi)\equiv\frac{\theta(\phi)}{2\pi}+\frac{2\pi\mathrm{i}}{e(\phi)^{2}}=\frac{a(\phi)\tau_{0}+b(\phi)}{c(\phi)\tau_{0}+d(\phi)}. \tag{57}\] This is simply a \(\mathrm{PSL}(2,\mathbb{R})\) transformation of the \(\phi=0\) coupling \(\tau_{0}\) by the matrix \(\big{(}\begin{smallmatrix}a(\phi)&b(\phi)\\ c(\phi)&d(\phi)\end{smallmatrix}\big{)}\) defined in (54). We can now write an action leading to (49): \[S=-\frac{(2\pi f)^{2}}{2}\!\int\mathrm{d}\phi\wedge\star\mathrm{d}\phi+\frac{1}{4\pi}\int F\wedge[\mathrm{Re}\,\tau(\phi)F-\mathrm{Im}\,\tau(\phi)\star F],\\ \text{where}\qquad F=\mathrm{d}A,\qquad\tau(\phi)=\frac{a\tau_{0}+b}{c\tau_{0}+d},\qquad\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\exp\!\left[\phi\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&-k_{11}\end{pmatrix}\right]\!.
\tag{58}\] Here \(f\) is the axion decay constant, \(\tau_{0}=\frac{\theta_{0}}{2\pi}+\mathrm{i}\frac{2\pi}{e_{0}^{2}}\) is the complexified gauge coupling at \(\phi=0\), and \(k_{11},k_{12},k_{21}\) are additional real constants. Defining \(\mathcal{F}\equiv\mathrm{Re}\big{[}\frac{F+\mathrm{i}\star F}{c\tau_{0}+d} \big{]}\) and following the same steps as above in reverse, one recovers (49) with \[\begin{pmatrix}g_{11}&g_{12}\\ g_{21}&g_{22}\end{pmatrix}=\begin{pmatrix}\frac{2\pi}{e_{0}^{2}}&\frac{\theta_ {0}}{2\pi}\\ 0&1\end{pmatrix}^{-1}\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&-k_{11}\end{pmatrix}\begin{pmatrix}\frac{2\pi}{e_{0}^{2}}&\frac{\theta_ {0}}{2\pi}\\ 0&1\end{pmatrix}. \tag{59}\] To complete the comparison with [15], we consider the axion equation of motion: \[(2\pi f)^{2}\,\mathrm{d}\star\mathrm{d}\phi=\frac{1}{4\pi}F\wedge[-\,\mathrm{ Re}\,\tau^{\prime}(\phi)F+\mathrm{Im}\,\tau^{\prime}(\phi)\star F]. \tag{60}\] Re-expressing this in terms of \(\mathcal{F}\) using the relation \(\mathcal{F}+\mathrm{i}\star\mathcal{F}=\frac{F+\mathrm{i}\star F}{c\tau_{0}+d}\) and applying \[(c\tau_{0}+d)^{2}\tau^{\prime}(\phi)=2k_{11}\tau_{0}+k_{12}-\tau_{0}^{2}k_{21 }=(g_{21}+g_{12}+2\mathrm{i}g_{11})\,\mathrm{Im}\,\tau_{0}, \tag{61}\] we find that \[(2\pi fe_{0})^{2}\,\mathrm{d}\star\mathrm{d}\phi=-\frac{g_{21}+g_{12}}{2} \mathcal{F}\wedge\mathcal{F}+g_{11}\mathcal{F}\wedge\star\mathcal{F}. \tag{62}\] This matches with [15] up to signs. Note that in the special case \(g_{11}=0\), \(g_{21}=-g_{12}\), \(\mathcal{F}\) decouples from the axion equation of motion (62). 
To understand why, note that in this case \[\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\begin{pmatrix}\cos(g_{12}\phi)-\frac{\mathrm{Re}\,\tau_{0}}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)&\frac{|\tau_{0}|^{2}}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)\\ -\frac{1}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)&\cos(g_{12}\phi)+\frac{\mathrm{Re}\,\tau_{0}}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)\end{pmatrix}, \tag{63}\] using (52), (54), so that \[\tau(\phi)=\frac{\big{[}\cos(g_{12}\phi)-\frac{\mathrm{Re}\,\tau_{0}}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)\big{]}\tau_{0}+\frac{|\tau_{0}|^{2}}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)}{-\frac{1}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)\tau_{0}+\cos(g_{12}\phi)+\frac{\mathrm{Re}\,\tau_{0}}{\mathrm{Im}\,\tau_{0}}\sin(g_{12}\phi)}=\frac{\cos(g_{12}\phi)-\mathrm{i}\sin(g_{12}\phi)}{\cos(g_{12}\phi)-\mathrm{i}\sin(g_{12}\phi)}\tau_{0}=\tau_{0}. \tag{64}\] As a result \(\phi\) and \(A\) decouple from each other in (58).

### Quantization of the generalized axion couplings

So far, we have shown that the generalized axion electrodynamics equations derived in [15] follow from a standard action (58) at the classical level (up to a likely sign error in the axion equation of motion given in [15]). The quantization of (58) is straightforward, along the lines discussed in §2, §3. In particular, given the assumptions discussed in §2.2, we are interested in the case where \(F\) is a (holomorphically normalized) \(U(1)\) gauge field and \(\phi\cong\phi+1\) is a compact scalar. Then we are forced to impose the consistency condition that the monodromy matrix lies within \(\mathrm{SL}(2,\mathbb{Z})\): \[\Lambda_{1}\equiv\begin{pmatrix}a&b\\ c&d\end{pmatrix}\biggr{|}_{\phi=1}=\exp\biggl{[}\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&-k_{11}\end{pmatrix}\biggr{]}\in\mathrm{SL}(2,\mathbb{Z}), \tag{65}\] so that the shift symmetry \(\phi\cong\phi+1\) is exact.
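Returning to the special case (63) and (64): the cancellation \(\tau(\phi)=\tau_{0}\) can be confirmed numerically. The sketch below builds the matrix (63) for arbitrary sample values of \(\tau_{0}\) and \(g_{12}\phi\) (chosen purely for illustration), applies the fractional linear transformation (57), and checks that \(\tau_{0}\) is recovered and that the matrix has unit determinant.

```python
import math

tau0 = 0.3 + 2.0j  # sample complexified coupling (arbitrary choice)
x = 0.7            # sample value of g12 * phi (arbitrary choice)

re, im = tau0.real, tau0.imag
cx, sx = math.cos(x), math.sin(x)

# Entries of the matrix in Eq. (63) for the case g11 = 0, g21 = -g12:
a = cx - (re / im) * sx
b = (abs(tau0) ** 2 / im) * sx
c = -(1.0 / im) * sx
d = cx + (re / im) * sx

tau_phi = (a * tau0 + b) / (c * tau0 + d)  # Eq. (57)
det = a * d - b * c                        # should equal 1 (SL(2,R))
```
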
Written out explicitly, the precise form of the constraint (65) depends on the sign of \(\vartheta^{2}\equiv k_{11}^{2}+k_{12}k_{21}\). We first consider the case where \(\vartheta^{2}>0\), for which we obtain: \[\Lambda_{1}=\begin{pmatrix}\cosh\vartheta+k_{11}\frac{\sinh\vartheta}{\vartheta }&k_{12}\frac{\sinh\vartheta}{\vartheta}\\ k_{21}\frac{\sinh\vartheta}{\vartheta}&\cosh\vartheta-k_{11}\frac{\sinh \vartheta}{\vartheta}\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z}). \tag{66}\] Taking the trace, we conclude that \(\cosh\vartheta=\frac{n}{2}\) for \(n\in\mathbb{Z}\), \(n>2\). A general solution then takes the form \(k_{ij}=\frac{\vartheta}{2\sinh\vartheta}n_{ij}\) for integers \(n_{ij}\) satisfying \(n_{11}^{2}+n_{12}n_{21}=n^{2}-4\) with \(n_{12}\) and \(n_{21}\) even. In other words, given a monodromy matrix \(\bigl{(}\begin{smallmatrix}a_{1}&b_{1}\\ c_{1}&d_{1}\end{smallmatrix}\bigr{)}\in\mathrm{SL}(2,\mathbb{Z})\) with trace \(a_{1}+d_{1}>2\), the couplings \(k_{ij}\) are fixed to be \[\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&-k_{11}\end{pmatrix}=\frac{\cosh^{-1}\bigl{(}\frac{a_{1}+d_{1}}{2} \bigr{)}}{\sqrt{\bigl{(}\frac{a_{1}+d_{1}}{2}\bigr{)}^{2}-1}}\begin{pmatrix} \frac{a_{1}-d_{1}}{2}&b_{1}\\ c_{1}&\frac{d_{1}-a_{1}}{2}\end{pmatrix}, \tag{67}\] so the choice of a monodromy matrix \(\Lambda_{1}\in\mathrm{SL}(2,\mathbb{Z})\) with trace \(\mathrm{Tr}\,\Lambda_{1}>2\) fully fixes the couplings. Next, consider the case \(\vartheta^{2}=0\), for which \[\Lambda_{1}=\begin{pmatrix}1+k_{11}&k_{12}\\ k_{21}&1-k_{11}\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z}), \tag{68}\] so that \(k_{ij}\in\mathbb{Z}\) with \(k_{11}^{2}+k_{12}k_{21}=0\). 
As before, this implies that the couplings \(k_{ij}\) are fully fixed by the monodromy matrix \(\Lambda_{1}\) when \(\mathrm{Tr}\,\Lambda_{1}=2\): \[\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&-k_{11}\end{pmatrix}=\begin{pmatrix}\frac{a_{1}-d_{1}}{2}&b_{1}\\ c_{1}&\frac{d_{1}-a_{1}}{2}\end{pmatrix}, \tag{69}\] except that the special case \(\Lambda_{1}=\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)\) need not imply trivial couplings, as we will see. Finally, consider the case \(\vartheta^{2}<0\). Defining \(\theta^{2}\equiv-\vartheta^{2}=-k_{11}^{2}-k_{12}k_{21}\), one finds \[\Lambda_{1}=\begin{pmatrix}\cos\theta+k_{11}\frac{\sin\theta}{\theta}&k_{12} \frac{\sin\theta}{\theta}\\ k_{21}\frac{\sin\theta}{\theta}&\cos\theta-k_{11}\frac{\sin\theta}{\theta} \end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z}). \tag{70}\] Taking the trace, we conclude that \(\cos\theta\in\{0,\pm\frac{1}{2},\pm 1\}\), so that \(\theta\) is a multiple of \(\pi/2\) or \(\pi/3\). For \(\cos\theta\in\{0,\pm\frac{1}{2}\}\), a general solution takes the form \(k_{ij}=\frac{\theta}{2\sin\theta}n_{ij}\) for integers \(n_{ij}\) satisfying \(n_{11}^{2}+n_{12}n_{21}=4\cos^{2}\theta-4\) with \(n_{12}\) and \(n_{21}\) even. In other words, \[\begin{pmatrix}k_{11}&k_{12}\\ k_{21}&-k_{11}\end{pmatrix}=\frac{\theta}{\sin\theta}\begin{pmatrix}\frac{a_{1}-d _{1}}{2}&b_{1}\\ c_{1}&\frac{d_{1}-a_{1}}{2}\end{pmatrix}, \tag{71}\] where \(\theta\) is any solution to \(\cos\theta=\frac{a_{1}+d_{1}}{2}\). Thus, a choice of monodromy matrix \(\Lambda_{1}\) satisfying \(\operatorname{Tr}\Lambda_{1}\in\{0,\pm 1\}\) plus a choice of branch cut for \(\theta=\cos^{-1}\bigl{(}\frac{\operatorname{Tr}\Lambda_{1}}{2}\bigr{)}\), \(\theta>0\) uniquely fixes the \(k_{ij}\). Finally, in the case \(\cos\theta=\pm 1\) there is no further constraint on the \(k_{ij}\) beyond \(k_{11}^{2}+k_{12}k_{21}=-\theta^{2}\). 
To summarize, the necessary and sufficient conditions for the \(k_{ij}\) quantization rules to be satisfied are as follows. First, we pick a monodromy matrix \(\Lambda_{1}\in\operatorname{SL}(2,\mathbb{Z})\) satisfying either \(\operatorname{Tr}\Lambda_{1}\geqslant-1\) or \(\Lambda_{1}=-1_{2\times 2}\). If \(\operatorname{Tr}\Lambda_{1}\geqslant 2\) and \(\Lambda_{1}\neq 1_{2\times 2}\) then this uniquely fixes the \(k_{ij}\) via (67) or (69). If \(-1\leqslant\operatorname{Tr}\Lambda_{1}\leqslant 1\) then this fixes the \(k_{ij}\) via (71) after a choice of branch cut for \(\theta=\cos^{-1}\bigl{(}\frac{\operatorname{Tr}\Lambda_{1}}{2}\bigr{)}\), \(\theta>0\). Finally, when \(\Lambda_{1}=\pm 1_{2\times 2}\) we require \(k_{11}^{2}+k_{12}k_{21}=-(n\pi)^{2}\) for \(n\in\mathbb{Z}\) where \(n\) is even (odd) when \(\Lambda_{1}=1_{2\times 2}\) (\(\Lambda_{1}=-1_{2\times 2}\)). Note that only in the last case can the \(k_{ij}\) be varied continuously consistent with the quantization rules. Otherwise they are discretely quantized.
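Since a traceless \(2\times 2\) matrix \(K\) satisfies \(K^{2}=\vartheta^{2}\,1\) by Cayley-Hamilton, its exponential has the closed form \(\exp K=\cosh\vartheta\,1+\frac{\sinh\vartheta}{\vartheta}K\), valid for either sign of \(\vartheta^{2}\) if \(\vartheta\) is allowed to be imaginary. The sketch below (helper names are ours, for illustration only) packages the case summary above: it maps a monodromy matrix \(\Lambda_{1}\) to the couplings \(k_{ij}\) via (67), (69) or (71), taking the principal branch in the elliptic case, and round-trips a hyperbolic, a parabolic, and an elliptic sample through the exponential.

```python
import cmath
import math

def expm_traceless(k11, k12, k21):
    """exp of the traceless matrix K = [[k11, k12], [k21, -k11]] via the
    closed form cosh(th)*1 + (sinh(th)/th)*K with th^2 = k11^2 + k12*k21;
    a complex th handles the hyperbolic and elliptic cases uniformly."""
    th = cmath.sqrt(k11 ** 2 + k12 * k21)
    f = cmath.sinh(th) / th if abs(th) > 1e-12 else 1.0
    ch = cmath.cosh(th)
    return [[(ch + k11 * f).real, (k12 * f).real],
            [(k21 * f).real, (ch - k11 * f).real]]

def k_from_monodromy(L):
    """Couplings k_ij fixed by the monodromy matrix Lambda_1, following
    Eqs. (67), (69) and (71); the elliptic case uses the principal
    branch of arccos. Helper naming is ours, for illustration only."""
    (a, b), (c, d) = L
    t = (a + d) / 2.0
    if t > 1.0:                   # hyperbolic: Tr > 2, Eq. (67)
        pref = math.acosh(t) / math.sqrt(t ** 2 - 1.0)
    elif t == 1.0:                # parabolic: Tr = 2, Eq. (69)
        pref = 1.0
    else:                         # elliptic: |Tr| < 2, Eq. (71)
        th = math.acos(t)
        pref = th / math.sin(th)
    return pref * (a - d) / 2.0, pref * b, pref * c

samples = [
    [[2.0, 1.0], [1.0, 1.0]],   # hyperbolic (trace 3)
    [[1.0, 1.0], [0.0, 1.0]],   # parabolic (T)
    [[0.0, -1.0], [1.0, 0.0]],  # elliptic (S)
]
errors = []
for L in samples:
    k11, k12, k21 = k_from_monodromy(L)
    M = expm_traceless(k11, k12, k21)
    errors.append(max(abs(M[i][j] - L[i][j]) for i in range(2) for j in range(2)))
max_error = max(errors)
```
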
arXiv:2310.20255

Title: Asymptotic normalization coefficients for $\alpha+{}^{12}{\rm C}$ synthesis and the $S$-factor for ${}^{12}{\rm C}(\alpha,\,\gamma){}^{16}{\rm O}$ radiative capture

Abstract: The $^{12}{\rm C}(\alpha,\gamma)^{16}$O reaction, determining the survival of carbon in red giants, is of interest for nuclear reaction theory and nuclear astrophysics. A specific feature of the $^{16}$O nuclear structure is the presence of two subthreshold bound states, (6.92 MeV, 2$^+$) and (7.12 MeV, 1$^-$), that dominate the behavior of the low-energy $S$-factor. The strength of these subthreshold states is determined by their asymptotic normalization coefficients (ANCs), which need to be known with high accuracy. Recently, using a model-independent extrapolation method, Blokhintsev {\it et al.} [Eur. Phys. J. A {\bf 59} (2023) 162] determined the ANCs for the $\alpha$-particle removal taking into account three subthreshold states in $^{16}$O. The goal of this paper is to address four main problems elucidating the impact of the subthreshold ANCs on the low-energy $S$-factor. Firstly, we analyse the connection between variations of the subthreshold ANCs and the low-energy $S$-factor, in particular, at the most effective energy of $300$ keV. Secondly, we calculate contributions to the $S(300\,{\rm keV})$-factor from the subthreshold $1^{-}$ and $2^{+}$ resonances, that are controlled by the subthreshold ANCs. We also evaluate the contribution of the uncertainties of the subthreshold ANCs to the budget of the low-energy $S$-factor uncertainty, especially, the $S(300\,{\rm keV})$-factor. Thirdly, we analyse interference of the subthreshold resonances (SRs) with higher resonances and with the $E1$ and $E2$ direct captures to the ground state. Finally, we investigate a correlated effect of the subthreshold and ground-state ANCs on the low-energy $S$-factor and, in particular, on the $S(300\,{\rm keV})$-factor.

Authors: A. M. Mukhamedzhanov, R. J. deBoer, B. F. Irgaziev, L. D. Blokhintsev, A. S. Kadyrov, D. A. Savin

Published: 2023-10-31T08:20:46Z

Link: http://arxiv.org/abs/2310.20255v4
New ANCs for \(\alpha+{}^{12}\mathrm{C}\) synthesis obtained using extrapolation method and the \(S\)-factor for \({}^{12}\mathrm{C}(\alpha,\,\gamma){}^{16}\mathrm{O}\) radiative capture

###### Abstract

**Background:** The \({}^{12}\mathrm{C}(\alpha,\,\gamma){}^{16}\mathrm{O}\) reaction, determining the survival of carbon in red giants, is of interest for nuclear reaction theory and nuclear astrophysics. Numerous attempts to obtain the astrophysical factor of the \({}^{12}\mathrm{C}(\alpha,\,\gamma){}^{16}\mathrm{O}\) reaction, both experimental and theoretical, have been made for almost 50 years. A specific feature of the \({}^{16}\mathrm{O}\) nuclear structure is the presence of two subthreshold bound states, \((6.92\,\mathrm{MeV},2^{+})\) and \((7.12\,\mathrm{MeV},1^{-})\), dominating the behavior of the low-energy \(S\)-factor. The strength of these subthreshold states is determined by their asymptotic normalization coefficients (ANCs), which need to be known with high accuracy. Recently, using the model-independent extrapolation method, Blokhintsev _et al._ [Eur. Phys. J. A **59**, 162 (2023)] determined the ANCs for the three subthreshold states in \({}^{16}\mathrm{O}\).

**Purpose:** In this paper, using these newly determined ANCs, we calculated the low-energy astrophysical \(S\)-factors for the \({}^{12}\mathrm{C}(\alpha,\,\gamma){}^{16}\mathrm{O}\) radiative capture.

**Method:** The \(S\)-factors are calculated within the framework of the \(R\)-matrix method using the AZURE2 code.

**Conclusion:** Our total \(S\)-factor includes the resonance \(E1\) and \(E2\) transitions to the ground state of \({}^{16}\mathrm{O}\) interfering with the corresponding direct captures and cascade radiative captures to the ground state of \({}^{16}\mathrm{O}\) through four subthreshold states: \(0^{+}_{2},\,3^{-},\,2^{+}\) and \(1^{-}\). Since our ANCs are higher than those used by deBoer _et al._ [Rev. Mod. Phys.
**89**, 035007 (2017)], the present total \(S\)-factor at the most effective astrophysical energy of \(300\,\mathrm{keV}\) is \(174\) keVb versus \(137\) keVb of that work. Accordingly, our calculated reaction rate at low temperatures (\(T_{9}<2\)) is higher than the one given in the aforesaid paper. ## I Introduction The \({}^{12}\mathrm{C}/^{16}\mathrm{O}\) ratio in the red giants has been attracting substantial scientific attention for a long time [1; 2]. While \({}^{12}\mathrm{C}\) is formed via the triple-\(\alpha\) fusion, oxygen is the result of the \({}^{12}\mathrm{C}(\alpha,\,\gamma){}^{16}\mathrm{O}\) radiative capture reaction, which determines the survival of carbon. Numerous attempts to obtain the astrophysical factor of the \({}^{12}\mathrm{C}(\alpha,\,\gamma){}^{16}\mathrm{O}\) reaction, both experimental and theoretical, have been made for almost 50 years (see [1; 2; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] and references therein). The latest comprehensive and thorough review of the state of the art has been presented by deBoer _et al._ [2]. A specific feature of the \({}^{16}\mathrm{O}\) nuclear structure is the presence of two subthreshold bound states [1], \((6.92\,\mathrm{MeV},2^{+})\) and \((7.12\,\mathrm{MeV},1^{-})\). These subthreshold bound states govern the behavior of the low-energy \(S\)-factor of the \({}^{12}\mathrm{C}(\alpha,\,\gamma){}^{16}\mathrm{O}\) reaction (see [2] and references therein) through the dominating \(E1\) and \(E2\) resonance radiative captures to the ground state of \({}^{16}\mathrm{O}\). Nuclear excited states below the particle emission threshold typically undergo \(\gamma\) decay to lower-lying states. These decays result in the initial excited states having their own natural width. In the case when \(\gamma\) emission is the only open decay channel, the natural radiative width \(\Gamma_{\gamma}\) is typically \(\sim\)1 eV.
If a particle bound excited state lies very close to the particle threshold, the natural width can result in the tail of the wave function extending above the particle threshold. As a result of this tail, the subthreshold bound state can behave like a resonance state in a capture reaction [19]. Such states are often referred to as subthreshold resonance states and they can play an important role in determining reaction rates of astrophysical radiative capture reactions. In the \(R\)-matrix approach the resonance radiative capture amplitude in simplified notations is given by \[M_{R}(E)=\frac{\sqrt{\Gamma_{l}(E)}\,\sqrt{\Gamma_{\gamma}(E)}}{E_{0}-E+i\,\Gamma(E)}. \tag{1}\] Here \(E\) is the relative energy of the colliding particles, \(E_{0}\) is the real part of the resonance energy, and \(\Gamma_{l}(E)\) is the resonance width in the partial wave \(l\). The energy dependence of the resonance width means that in the \(R\)-matrix method a background non-resonant term is included. \(\Gamma_{\gamma}(E)\) is the radiative width of the resonance decay to the ground state. It is expressed in terms of the internal and channel reduced width amplitudes. The channel term is expressed in terms of the ANC of the ground state of \({}^{16}\mathrm{O}\). We did not determine this ANC using the extrapolation method because it is located quite far from the threshold (the binding energy of the \(\alpha\)-particle in the \({}^{16}\)O ground state is 7.16 MeV). In this paper, the ground-state ANC of 58 fm\({}^{-1/2}\) was taken from [2]. \(\Gamma(E)=\Gamma_{l}(E)+\Gamma_{\gamma}(E)\) is the total width.
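The energy dependence implied by Eq. (1) can be sketched numerically. The snippet below is a toy illustration only (the actual calculations in this work use AZURE2 with energy-dependent widths and background levels); all numerical values are hypothetical.

```python
import math

def resonance_amplitude(E, E0, Gamma_l, Gamma_gamma):
    """Simplified resonance amplitude of Eq. (1) with
    energy-independent partial and radiative widths."""
    Gamma_total = Gamma_l + Gamma_gamma
    return (math.sqrt(Gamma_l) * math.sqrt(Gamma_gamma)
            / complex(E0 - E, Gamma_total))

# hypothetical level parameters (MeV)
E0, Gl, Gg = 2.42, 0.40, 1e-8
peak = abs(resonance_amplitude(E0, E0, Gl, Gg)) ** 2
off = abs(resonance_amplitude(E0 + 0.5, E0, Gl, Gg)) ** 2
# with constant widths, |M_R|^2 is maximal at E = E0
```

With constant widths, \(|M_{R}(E_{0})|^{2}=\Gamma_{l}\Gamma_{\gamma}/\Gamma^{2}\); a real \(R\)-matrix fit replaces the constants by \(\Gamma_{l}(E)\) and \(\Gamma_{\gamma}(E)\).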
The width of the subthreshold resonance is given as [19; 20] \[\Gamma_{l}(E)= 2\,P_{l}(E,R_{ch})\,\gamma_{l}^{2}\] \[= P_{l}(E,R_{ch})\,\frac{1}{\mu\,R_{ch}}C_{l}^{2}\,W_{-\eta_{s},\,l+1/2}^{2}(2\,\kappa_{s}\,R_{ch}), \tag{2}\] where \(\gamma_{l}^{2}\) is the observable reduced width of the bound state in the partial wave \(l\), \(P_{l}(E,R_{ch})\) is the penetrability factor calculated at the channel radius \(R_{ch}\), \(\mu\) is the reduced mass of the interacting particles, \(C_{l}\) is the ANC of the bound state. \(W_{-\eta_{s},\,l+1/2}(2\,\kappa_{s}\,R_{ch})\) is the Whittaker function describing the radial behavior of the bound-state wave function in the external region (\(r>R_{ch}\)) where the nuclear interaction between the particles can be neglected, \(\kappa_{s}\) and \(\eta_{s}\) are the wave number and the Coulomb parameter of the bound state, respectively. From Eq. (2) it follows that the radiative capture to the ground state through the subthreshold resonance is determined by the ANC of the subthreshold state. This is why so much effort has been put towards determining the ANCs of the \(\alpha\)-particle removal from the subthreshold bound states of \({}^{16}\)O (see [2]). The different experimental and theoretical methods of determining the ANCs are discussed in [21]. The ANCs of the \(1^{-}\) and \(2^{+}\) subthreshold bound states in \({}^{16}\)O available in the literature are listed in Table 1 of Ref. [2]. We notice large discrepancies between the ANCs determined by different techniques, suggesting that further efforts are needed to pinpoint the ANCs of the two near-threshold bound states in \({}^{16}\)O. In a series of papers [22; 23; 24; 25] we developed and applied a novel method for determining the ANCs through extrapolation of the elastic scattering phase shifts to the subthreshold bound-state poles.
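Equation (2) can be evaluated directly once the penetrability is known. The sketch below uses the Whittaker \(W\) function from the `mpmath` library and treats \(P_{l}\) as a precomputed input (obtaining it requires Coulomb wave functions, which are outside this sketch); all inputs are assumed to be in mutually consistent units, and the sample numbers are hypothetical.

```python
from mpmath import whitw  # Whittaker function W_{k,m}(z)

def gamma_l_width(P_l, C_l, mu, R_ch, eta_s, l, kappa_s):
    """Width of a subthreshold resonance, Eq. (2):
    Gamma_l = 2 * P_l * gamma_l^2, where the observable reduced width is
    gamma_l^2 = C_l^2 * W_{-eta_s, l+1/2}(2*kappa_s*R_ch)^2 / (2*mu*R_ch)."""
    W = whitw(-eta_s, l + 0.5, 2 * kappa_s * R_ch)
    reduced_width_sq = C_l ** 2 * W ** 2 / (2 * mu * R_ch)
    return 2 * P_l * float(reduced_width_sq)

# hypothetical inputs in consistent units
width = gamma_l_width(P_l=0.01, C_l=1.0, mu=1.0, R_ch=1.0,
                      eta_s=1.5, l=1, kappa_s=1.0)
```

Note that \(2\,P_{l}\,\gamma_{l}^{2}=P_{l}\,C_{l}^{2}\,W^{2}/(\mu R_{ch})\), matching the second line of Eq. (2).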
In particular, in [24; 25] we applied this method to determine the ANCs for \(\alpha\)-particle removal from the four subthreshold states of \({}^{16}\)O: \(0_{2}^{+},\,3^{-},\,2^{+}\) and \(1^{-}\). The ANC values for the channels \({}^{16}\)O\({}^{*}\rightarrow\alpha+{}^{12}\)C obtained by various methods are compared in Table 1. Unfortunately, the uncertainties of the experimental phase shifts are unknown. Therefore, in this paper, in order to estimate the uncertainties of the ANCs obtained by the extrapolation method, we assumed a 5% uncertainty in the experimental phase shifts. Taking into account the role of the ANCs in determining the \(S\)-factors for the \({}^{12}\)C(\(\alpha\), \(\gamma\))\({}^{16}\)O radiative capture, here we present the low-energy \(S\)-factors calculated within the \(R\)-matrix approach using the ANCs obtained by the extrapolation method and given in Table 1. Note that the uncertainty of the ANC for the \(0_{2}^{+}\) state determined by applying the extrapolation method is significantly higher than for the three weaker bound subthreshold states presented in Table 1. The reason is that the farther the pole corresponding to the bound state lies from the threshold in the energy plane, the lower the accuracy of the ANC obtained by extrapolating the elastic-scattering phase shifts to that pole. Since our main goal is to check the impact of the newly determined ANCs of the subthreshold states of \({}^{16}\)O\({}^{*}\) in the channel \(\alpha+{}^{12}\)C, the \(S\)-factors are calculated only at low energies \(E<3\) MeV, where we can trace the impact of the ANCs. We compare the \(S\)-factors calculated using the central values of our ANCs, given in the last two rows of Table 1, with the results from the review [2].
## II \(S\)-factors for \({}^{12}\)C(\(\alpha\), \(\gamma\))\({}^{16}\)O Radiative Capture to the Ground State of \({}^{16}\)O ### \(S\)-factors for resonance \(E1\) and \(E2\) captures to the ground state First we present the results of the calculations of the two dominant low-energy \(S\)-factors corresponding to the resonance \(E1\) and \(E2\) transitions to the ground state. The resonance \(E1\) transition is contributed by the subthreshold resonance \((7.12\,{\rm MeV},1^{-})\), the first above-threshold resonance \((9.51\,{\rm MeV},1^{-})\), and higher \(1^{-}\) resonances. Similarly, the resonance \(E2\) transition is dominated by the subthreshold resonance \((6.92\,{\rm MeV},2^{+})\), the lowest above-threshold resonance \((9.84\,{\rm MeV},2^{+})\) and higher \(2^{+}\) resonances. Since we constrain our consideration to low energies (\(E<3\) MeV), where we can check the impact of the ANCs of the subthreshold states, it is enough to take into account the two lowest levels plus background resonances for the \(E1\) and \(E2\) resonance transitions. Details and parameters of the calculations are given in [2]. The only difference for the \(E1\) transition in the current calculations versus those performed in [2] is the replacement of the ANC of \(2.08\times 10^{14}\) fm\({}^{-1/2}\) for the \((7.12\,{\rm MeV},1^{-})\) subthreshold state with our ANC of \(2.27\times 10^{14}\) fm\({}^{-1/2}\) from Table 1. Similarly, for the \(E2\) transition the ANC of the subthreshold bound state \((6.92\,{\rm MeV},2^{+})\) of \(1.14\times 10^{5}\) fm\({}^{-1/2}\) used in [2] is replaced with our ANC of \(1.42\times 10^{5}\) fm\({}^{-1/2}\) from Table 1. The \(S\)-factors for the resonance \(E1\) and \(E2\) transitions to the ground state are shown in Fig. 1. We need to add some important comments about the calculation of the \(E1\) and \(E2\) resonance captures to the ground state of \({}^{16}\)O.
To be precise, we took into account the interference of the \(E1\) and \(E2\) resonance captures to the ground state with the \(E1\) and \(E2\) direct captures to the ground state. In the \(R\)-matrix approach the normalization of the direct capture amplitude is determined by the ANC for the \(\alpha\)-particle removal from the ground state of \({}^{16}\)O. The values of this ANC published in the literature vary significantly [18]. The ground-state ANC of \(337\pm 45\) fm\({}^{-1/2}\) found in [18] from the heavy-ion induced transfer reaction requires a higher value of \((1.55\pm 0.09)\times 10^{5}\) fm\({}^{-1/2}\) for the ANC of the 6.92 MeV excited state to reconcile with the \(S\)-factor from [2]. This value is close to, but slightly higher than, our ANC for the 6.92 MeV state (see Table 1). In the present calculations we adopted a low value of this ANC of 58 fm\({}^{-1/2}\), because our goal was to compare with the results from [2] by varying only the ANCs. Hence in the current calculations the interference of the \(E1\) and \(E2\) resonance and direct transitions is small. ### Radiative capture to subthreshold states and the total \(S\)-factor for \({}^{12}\)C(\(\alpha\), \(\gamma\))\({}^{16}\)O radiative capture Besides the \(E1\) and \(E2\) resonance captures to the ground state, we also calculated the captures to the four excited bound states (6.05 MeV, \(0^{+}_{2}\)), (6.13 MeV, \(3^{-}\)), (6.92 MeV, \(2^{+}\)) and (7.12 MeV, \(1^{-}\)) using the ANCs from the last two rows of Table 1. These transitions include direct and resonance captures. In the \(R\)-matrix approach, the direct radiative capture amplitude is given by the external matrix element, which describes the capture at \(r>R_{ch}\) [6]. Hence the overall normalization of the direct capture amplitude is determined by the ANC of the final bound state [21; 33]. Figure 2 depicts all the \(S\)-factors calculated using the ANCs obtained by the extrapolation procedure (see Table 1).
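The coherent treatment of resonance and direct amplitudes described above amounts to squaring the sum rather than summing the squares. A minimal numeric illustration (hypothetical amplitudes, not fitted values):

```python
M_res = complex(0.3, 0.4)  # hypothetical resonance amplitude
M_dc = complex(0.05, 0.0)  # hypothetical (real) direct-capture amplitude

coherent = abs(M_res + M_dc) ** 2
incoherent = abs(M_res) ** 2 + abs(M_dc) ** 2
interference = 2 * (M_res.conjugate() * M_dc).real

# coherent = incoherent + interference; the cross term can be
# constructive or destructive depending on the relative phase
```

When the direct amplitude is small, as for the 58 fm\({}^{-1/2}\) ground-state ANC adopted here, the cross term is correspondingly small.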
In this figure we show not only the \(S\)-factors for the radiative captures to the four excited states of \({}^{16}\)O but also the \(S\)-factors for the \(E1\) and \(E2\) resonance captures to the ground state of \({}^{16}\)O (see Fig. 1) and the total \(S\)-factor, which is the sum of all six \(S\)-factors depicted in Fig. 2. ## III Results The results of the comparison of the \(S\)-factors from the current paper and the ones from [2] are presented in Table 2. Table 2 is very instructive. The difference between the current \(S\)-factors and the ones from [2] is caused by variation of only one parameter, the ANC of the corresponding subthreshold state. Hence we can check the correlation between the uncertainties of the ANCs of the subthreshold bound states and the corresponding \(S\)-factors. In Table 3 we compare the variation of the squares of the ANCs with the variation of the corresponding \(S(300\,{\rm keV})\)-factors. Here \(\Delta C_{l}^{2}=C_{l(pr)}^{2}/C_{l(RMP)}^{2}-1\) and \(\Delta S(300\,{\rm keV})=S_{pr}(300\,{\rm keV})/S_{RMP}(300\,{\rm keV})-1\). Quantities with the subscript "pr" are calculated using the present ANCs and those with the subscript "RMP" are from [2]. Figure 1: The \(S\)-factors for the \({}^{12}\)C(\(\alpha\), \(\gamma\))\({}^{16}\)O radiative capture reaction to the ground state through \(1^{-}\) and \(2^{+}\) resonances. The black stars and solid dots are experimental data from [5] for the \(E1\) and \(E2\) transitions, respectively. The green dash-dot-dotted and dash-dotted lines are the \(S\)-factors from [2] for the resonance \(E1\) and \(E2\) transitions, respectively. The red solid and dashed lines are the resonance \(S\)-factors for the \(E1\) and \(E2\) transitions obtained using our ANCs, see the last row of Table 1. Another important point that needs to be discussed is the impact of the ground-state ANC of \({}^{16}\)O. In this paper we employed the ground-state ANC of 58 fm\({}^{-1/2}\), which was used in [2].
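The correlation reported in Table 3 can be reproduced directly from the ANCs quoted in Sec. II and the \(S(300\,{\rm keV})\) values of Table 2:

```python
# ANCs (fm^-1/2): present work vs Ref. [2]
C_1m_pr, C_1m_rmp = 2.27e14, 2.08e14  # (7.12 MeV, 1-) state
C_2p_pr, C_2p_rmp = 1.42e5, 1.14e5    # (6.92 MeV, 2+) state

def delta(pr, rmp):
    # fractional change relative to Ref. [2]
    return pr / rmp - 1

dC2_E1 = delta(C_1m_pr ** 2, C_1m_rmp ** 2)  # ~19%
dC2_E2 = delta(C_2p_pr ** 2, C_2p_rmp ** 2)  # ~55%

# S(300 keV) in keVb (Table 2): present vs Ref. [2]
dS_E1 = delta(98, 85)  # ~15%
dS_E2 = delta(70, 45)  # ~56%
```

The near equality of \(\Delta C_{l}^{2}\) and \(\Delta S(300\,{\rm keV})\) for each multipole reflects the approximately linear dependence of the subthreshold-resonance contribution on \(C_{l}^{2}\) through Eq. (2).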
As we mentioned above, for such a low ground-state ANC the interference of the resonance \(E1\) and \(E2\) transitions with the direct captures to the ground state is small. However, in the recent paper [18] a higher value of the ground-state ANC of 337 fm\({}^{-1/2}\) was deduced from the analysis of the \({}^{12}\)C(\({}^{11}\)B,\({}^{7}\)Li)\({}^{16}\)O reaction. Our calculations of the \(E1\) and \(E2\) transitions to the ground state with the ground-state ANC of 337 fm\({}^{-1/2}\) and our ANCs from Table 1 for the subthreshold \(1^{-}\) and \(2^{+}\) states, which include the interference of the resonance and direct captures, resulted in \(S(300\,{\rm keV})=99\) keVb and \(S(300\,{\rm keV})=57\) keVb for the \(E1\) and \(E2\) transitions, respectively. Thus, a significant increase of the ground-state ANC of \({}^{16}\)O together with our higher subthreshold ANCs from Table 1 changes the \(E1\) \(S(300\,{\rm keV})\) astrophysical factor very little but significantly decreases the \(E2\) \(S(300\,{\rm keV})\) value compared with the \(S(300\,{\rm keV})\) obtained for the lower ANC (see Table 2). The total \(S\)-factor \(S(300\,{\rm keV})=156\) keVb, which is \begin{table} \begin{tabular}{|c|c|c|} \hline Transitions & \(S(300\,{\rm keV})\) & \(S(300\,{\rm keV})\) \\ \hline Resonance to ground state & Present & Ref. [2] \\ \hline \(E1\) & 98 & 85 \\ \hline \(E2\) & 70 & 45 \\ \hline \(E1\) + \(E2\) & 168 & 130 \\ \hline Cascade & Present & Ref. [2] \\ \hline \(0_{2}^{+}+3^{-}+2^{+}+1^{-}\) & 6 & 7 \\ \hline Total & Present & Ref. [2] \\ \hline \(E1+E2+\)Cascade & 174 & 137 \\ \hline \end{tabular} \end{table} Table 2: Comparison of the current \(S\)-factors and the \(S\)-factors from [2] for transition to the ground state of \({}^{16}\)O at the most effective astrophysical energy of 300 keV. The resonance \(E1\) and \(E2\) transitions include interference with the direct \(E1\) and \(E2\) captures to the ground state.
The cascade transition is the radiative capture to the ground state proceeding through four subthreshold bound states of \({}^{16}\)O: \(0_{2}^{+}\), \(3^{-}\), \(2^{+}\) and \(1^{-}\). The \(S\)-factors are given in units of keVb. Figure 3: The total \(S\)-factors for the \({}^{12}\)C(\(\alpha,\,\gamma\))\({}^{16}\)O reaction given by the \(E1+E2+{\rm cascade}\) transitions to the ground state of \({}^{16}\)O for three different calculations. The solid red line is the present total \(S\)-factor for the ground-state ANC of 58 fm\({}^{-1/2}\) and the subthreshold-state ANCs from Table 1; the blue dash-dotted line is similar to the solid red line but for the ground-state ANC of 337 fm\({}^{-1/2}\); the green dashed line is the total \(S\)-factor from [2]. The black pentagons with the error bars are the experimental data from [3]. \begin{table} \begin{tabular}{|c|c|c|} \hline Transitions & \(\Delta C_{l}^{2}\) \% & \(\Delta S(300\,{\rm keV})\) \% \\ \hline \(E1\) & 19 & 15 \\ \hline \(E2\) & 55 & 56 \\ \hline \(E1\) + \(E2\) & & 29 \\ \hline \end{tabular} \end{table} Table 3: Correlation of the uncertainties of the squares of the ANCs and the uncertainties of the \(S(300\,{\rm keV})\)-factors. Figure 2: All the calculated \(S\)-factors. The \(S\)-factors for the \(E1\) and \(E2\) resonance radiative captures to the ground state are presented by the red short-dashed line and the magenta long-dashed line, respectively. The \(S\)-factors for the cascade radiative captures: the radiative capture to the 7.12 MeV state - dotted line; the radiative capture to the 6.92 MeV state - cyan dash-dotted line; the radiative capture to the 6.13 MeV state - green dash-dotted line; the radiative capture to the 6.05 MeV state - blue dash-dot-dotted line. The solid black line is the total \(S\)-factor given by the sum of all the \(S\)-factors shown in this figure.
obtained for the high ground-state ANC and the higher subthreshold ANCs, moves closer to the \(S(300\,{\rm keV})=130\) keVb obtained with the lower ANCs in [2] (where the cascade is not included). We would like to mention that one of the main uncertainties of the ground-state ANC obtained in [18] is caused by the uncertainty of the ANC for \({}^{11}\)B \(\rightarrow\alpha+{{}^{7}}\)Li, which was not properly discussed in [18]. Our findings indicate a correlation between the ground-state ANC of \({}^{16}\)O and the ANCs of the subthreshold \(2^{+}\) state, in agreement with [18]. From this point of view, considering the results from [2] and the current ANC for the \(2^{+}\) state as realistic, we can assume that the ground-state ANC of \({}^{16}\)O, which fits the \(S\)-factor from [2], should be less than 337 fm\({}^{-1/2}\). Figure 3 shows the total \(S\)-factors contributed by the sum of the \(E1+E2\) resonance transitions to the ground state of \({}^{16}\)O (interference with the direct captures is taken into account) plus the cascade transition to the ground state of \({}^{16}\)O. The explanation of the three different lines is given in the caption to the figure. One can see that the increase of the ground-state ANC and the subthreshold ANCs moves the present results closer to the ones from [2] calculated using lower ANCs. ## IV Summary The \(S\)-factors for the \({}^{12}\)C(\(\alpha\), \(\gamma\))\({}^{16}\)O reaction are calculated within the \(R\)-matrix approach using the AZURE2 code and the recently determined ANCs for the four subthreshold states \(0^{+}_{2}\), \(3^{-}\), \(2^{+}\), \(1^{-}\) and a low ground-state ANC of 58 fm\({}^{-1/2}\) of \({}^{16}\)O. The ANCs are obtained through extrapolation of the elastic scattering phase shifts to the subthreshold bound-state poles. The results are compared with the ones obtained in [2] using the subthreshold ANCs available in the literature.
Higher subthreshold bound-state ANCs used in the present calculations lead to a higher \(S(300\,{\rm keV})\)-factor and higher low-temperature reaction rates (see Appendix). We also discuss a correlation between the ground-state ANC of \({}^{16}\)O and the ANCs of the subthreshold states. ###### Acknowledgements. A.M.M. acknowledges the support from the US DOE National Nuclear Security Administration under Award Number DE-NA0003841 and from the DOE Grant No. DE-FG02-93ER40773. R.J.D. utilized resources from the Notre Dame Center for Research Computing and was supported by the National Science Foundation through grant Nos. PHY-1713857 and PHY-2011890, and the Joint Institute for Nuclear Astrophysics through grant No. PHY-1430152 (JINA Center for the Evolution of the Elements). A.S.K. acknowledges the support from the Australian Research Council. ## Appendix: Reaction rates In Table 4 we present the comparison of the reaction rates calculated using the present total \(S\)-factor with the ones from [2]. For easier comparison, the tabulated low-temperature reaction rates are calculated at the same temperatures as in [2]. Since the present \(S\)-factor is larger than that from [2], at low temperatures \(T_{9}<2.0\) our reaction rates exceed the reaction rates from [2]. However, since we constrain our calculations to the low-energy \(S\)-factor, at temperatures \(T_{9}\geq 2\) the reaction rates from [2] exceed ours.
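That the low-temperature rates follow the low-energy \(S\)-factor can be motivated by the standard Gamow-window picture. The sketch below is textbook material, not the integration procedure behind Table 4: it assumes a constant \(S\)-factor and a Gamow energy \(E_{G}\approx 0.978\,Z_{1}^{2}Z_{2}^{2}\mu\) MeV with \(\mu\approx 3\) u for \(\alpha+{}^{12}\)C, and locates the most effective energy at \(T_{9}=0.2\).

```python
import math

E_G = 0.978 * (2 * 6) ** 2 * 3.0  # Gamow energy for alpha+12C, ~423 MeV
kT = 0.08617 * 0.2                # kT in MeV at T9 = 0.2

def gamow_window(E):
    # integrand shape of the nonresonant rate for constant S(E):
    # exp(-sqrt(E_G/E) - E/kT)
    return math.exp(-math.sqrt(E_G / E) - E / kT)

# numeric peak of the integrand on a fine grid (MeV)
Es = [0.05 + 1e-3 * i for i in range(1000)]
E_peak = max(Es, key=gamow_window)

# analytic Gamow peak: E_0 = (sqrt(E_G) * kT / 2)**(2/3) ~ 0.3 MeV,
# close to the "most effective energy" of 300 keV quoted above
E0_analytic = (math.sqrt(E_G) * kT / 2) ** (2 / 3)
```

This is why the comparison at 300 keV in Table 2 is the relevant figure of merit for the \(T_{9}<2\) rates.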
2309.04280
On the lattice of fuzzy rough sets
By the means of lower and upper fuzzy approximations we define quasiorders. Their properties are used to prove our main results. First, we characterize those pairs of fuzzy sets which form fuzzy rough sets w.r.t. a t-similarity relation $\theta$ on $U$, for certain t-norms and implicators. Then we establish conditions under which fuzzy rough sets form lattices. We show that for the $\min$ t-norm and any S-implicator defined by the $\max$ co-norm with an involutive negator, the fuzzy rough sets form a complete lattice, whenever $U$ is finite or the range of $\theta$ and of the fuzzy sets is a fixed finite chain.
Dávid Gégény, Sándor Radeleczki
2023-09-08T11:57:29Z
http://arxiv.org/abs/2309.04280v1
# On the lattice of fuzzy rough sets ###### Abstract By the means of lower and upper fuzzy approximations we define quasiorders. Their properties are used to prove our main results. First, we characterize those pairs of fuzzy sets which form fuzzy rough sets w.r.t. a t-similarity relation \(\theta\) on \(U\), for certain t-norms and implicators. Then we establish conditions under which fuzzy rough sets form lattices. We show that for the min t-norm and any S-implicator defined by the max co-norm with an involutive negator, the fuzzy rough sets form a complete lattice, whenever \(U\) is finite or the range of \(\theta\) and of the fuzzy sets is a fixed finite chain. keywords: Fuzzy rough sets; Fuzzy relations; Lower and upper approximation; Self-dual poset + Footnote †: journal: Computer Science ## 1 Introduction Rough sets were introduced by Zdzisław Pawlak [20], by defining the lower and upper approximations of a (crisp) set based on a so-called indiscernibility relation of the elements. Originally, Pawlak assumed that this relation is an equivalence, but later several other types of relations were also examined (see e.g. [13], [14], or [12], [28]). For a relation \(\varrho\subseteq U\times U\) and any element \(u\in U\), denote \(\varrho(u):=\{x\in U\mid(u,x)\in\varrho\}\). Now, for any subset \(A\subseteq U\), the _lower approximation_ of \(A\) is defined as \[A_{\varrho}:=\{x\in U\mid\varrho(x)\subseteq A\},\] and the _upper approximation_ of \(A\) is given by \[A^{\varrho}:=\{x\in U\mid\varrho(x)\cap A\neq\emptyset\}.\] If \(\varrho\) is reflexive and transitive, i.e. it is a _quasiorder_, then the properties \(A_{\varrho}\subseteq A\subseteq A^{\varrho}\) and \((A_{\varrho})_{\varrho}=A_{\varrho}\), \((A^{\varrho})^{\varrho}=A^{\varrho}\) hold for all \(A\subseteq U\). The rough sets induced by \(\varrho\) can be ordered w.r.t.
the component-wise inclusion, and for an equivalence, or more generally, for a quasiorder \(\varrho\), they form a complete distributive lattice with several particular properties, see e.g. [14] or [22]. The notion of a fuzzy set was introduced by Lotfi Zadeh [29]. A fuzzy set is defined by a mapping \(f:U\to[0,1]\). We say that \(f\) has a _finite range_, whenever the (crisp) set \(\{f(x)\mid x\in U\}\) is finite. The collection of all fuzzy sets on \(U\) is denoted by \(\mathcal{F}(U)\). Ordering any elements \(f,g\in\mathcal{F}(U)\) as follows \[f\leq g\Leftrightarrow f(x)\leq g(x),\,\text{for all }x\in U,\] we obtain a completely distributive (complete) lattice \(\mathcal{F}(U)\). For any system \(f_{i}\in\mathcal{F}(U)\), \(i\in I\), its infimum and supremum are given by the formulas \[\left(\bigwedge_{i\in I}f_{i}\right)(x)=\bigwedge_{i\in I}f_{i}(x);\,\,\left( \bigvee_{i\in I}f_{i}\right)(x)=\bigvee_{i\in I}f_{i}(x), \tag{1}\] where \(\bigwedge\) and \(\bigvee\) denote the infimum and the supremum, respectively, in the complete lattice \(([0,1],\leq)\). The first step to integrate the two main theories relates to the works of del Cerro and Prade [2], Nakamura [19] and Dubois and Prade [5]. In [5] the fuzzy rough sets are defined as pairs \((\underline{f},\overline{f})\in\mathcal{F}(U)\times\mathcal{F}(U)\) of lower and upper approximations of the fuzzy sets \(f\in\mathcal{F}(U)\). These fuzzy approximations were defined by using a similarity relation, the t-norm min and conorm max. Their approach was generalized in several papers, like [3, 8, 10, 11, 21, 23, 26, 27] and [4], where fuzzy rough sets are defined on the basis of different t-norms (or conjunctors) and related implicators. A detailed study of these approximation operators was developed in [4], [23] and in [16], [24], where the structure of the lower and upper approximations of \(L\)-fuzzy sets is also investigated. An axiomatic approach of these properties was elaborated e.g. 
in [1], [15], [17], and [18]. In [9] it was shown that for crisp reference sets (i.e. for \(f(x)\in\{0,1\}\), \(\forall x\in U\)) the fuzzy rough sets defined by a t-similarity relation \(\theta\) with a well-ordered spectrum form a completely distributive lattice. The goal of the present paper is to find conditions under which fuzzy rough sets form lattices. With this purpose, by the means of lower and upper fuzzy approximations we define (crisp) quasiorders on \(U\). The properties of these quasiorders and of the equivalences determined by them are discussed in Sections 3, 4 and 6. These properties will be used to prove our main results, Theorems 5.1 and 7.4. Section 2 contains the essential prerequisites of our study. In Section 5, by using singleton equivalence classes, we characterize those pairs of fuzzy sets which form a fuzzy rough set with respect to a t-similarity relation \(\theta\) for certain t-norms and related implicators. In Section 7, we establish conditions under which fuzzy rough sets with a finite range form lattices. For instance, we show that for the min t-norm and any S-implicator defined by the max co-norm with an involutive negator, the fuzzy rough sets form a complete lattice, whenever \(U\) is finite, or whenever the range of \(\theta\) and of the fuzzy reference sets is a fixed finite chain \(L\subseteq[0,1]\). ## 2 Preliminaries ### T-norms, implicators and fuzzy relations _A triangular norm \(\odot\)_ (_t-norm_ for short) is a commutative, associative and monotone increasing binary operation \(\odot\) defined on \([0,1]\) satisfying \(1\odot x=x\odot 1=x\), for all \(x\in[0,1]\). The t-norm \(\odot\) is called _(left) continuous_, if it is (left) continuous as a function \(\odot\colon[0,1]^{2}\to[0,1]\) in the usual interval topology on \([0,1]^{2}\). Every t-norm \(\odot\) satisfies \(x\odot 0=0\odot x=0\), for all \(x\in[0,1]\).
The best-known t-norms are:

- the _standard min operator_: \(x\odot y:=\min(x,y)\);
- the _arithmetical product_: \(x\odot y:=x\cdot y\);
- the _Łukasiewicz t-norm_: \(x\odot y:=\max(0,x+y-1)\).

A _negator_ is a decreasing map \(n\colon[0,1]\to[0,1]\) with \(n(0)=1\) and \(n(1)=0\). \(n\) is called _involutive_ if \(n(n(x))=x\), for all \(x\in[0,1]\) (see e.g. [6]). The so-called _standard negator_ \(n(x):=1-x\), \(x\in[0,1]\), is an involutive negator. _A triangular conorm \(\oplus\)_ (_t-conorm_ for short) is a commutative, associative and monotone increasing binary operation \(\oplus\) defined on \([0,1]\), that satisfies \(0\oplus x=x\oplus 0=x\), for all \(x\in[0,1]\). The t-conorm \(\oplus\) is _(left) continuous_, if it is (left) continuous as a function \(\oplus\colon[0,1]^{2}\to[0,1]\) in the usual topology. Given an involutive negator \(n\), a t-norm \(\odot\) and a t-conorm \(\oplus\), we say that \(\odot\) and \(\oplus\) form an \(n\)_-dual pair_ if for all \(x,y\in[0,1]\) \[n(x\oplus y)=n(x)\odot n(y).\] Clearly, this identity also implies the identity \[n(x\odot y)=n(x)\oplus n(y).\] For instance, \(\min(x,y)\), \(\max(x,y)\) form a well-known \(n\)-dual pair w.r.t. any involutive negator on \([0,1]\). An _implicator_ is a binary operation (mapping) \(\rhd\colon[0,1]^{2}\to[0,1]\) that is decreasing in the first and increasing in the second argument and that satisfies the boundary conditions \[0\rhd 0=0\rhd 1=1\rhd 1=1\text{ and }1\rhd 0=0.\] \(\rhd\) is called a _border implicator_ if \(1\rhd x=x\) holds for all \(x\in[0,1]\). There are two important classes of border implicators. The _R-implicator_ based on a t-norm \(\odot\) is defined by \[x\rhd y:=\bigvee\{z\in[0,1]\mid x\odot z\leq y\},\text{ for all }x,y\in[0,1].\] If \(\odot\) is a continuous t-norm, then the algebra \(([0,1],\vee,\wedge,\odot,\rhd,0,1)\) is a so-called _commutative (integral) residuated lattice_ (see [7]).
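The defining properties above are easy to check numerically. The sketch below implements the three t-norms, the standard negator and the Łukasiewicz R-implicator, and verifies the residuation law \(x\odot z\leq y\Leftrightarrow z\leq x\rhd y\) on a finite grid (an illustration only; in the paper these are statements about all of \([0,1]\)):

```python
def t_min(x, y): return min(x, y)              # standard min t-norm
def t_prod(x, y): return x * y                 # arithmetical product
def t_luk(x, y): return max(0.0, x + y - 1.0)  # Lukasiewicz t-norm

def n_std(x): return 1.0 - x                   # standard (involutive) negator

def r_luk(x, y): return min(1.0, 1.0 - x + y)  # R-implicator of t_luk

# residuation check for the Lukasiewicz pair on a 0.1-step grid
grid = [i / 10 for i in range(11)]
eps = 1e-9  # tolerance for float rounding
for x in grid:
    for y in grid:
        for z in grid:
            assert (t_luk(x, z) <= y + eps) == (z <= r_luk(x, y) + eps)
```

The same loop with `t_min` and its R-implicator (the Gödel implicator: \(x\rhd y=1\) if \(x\leq y\), else \(y\)) passes as well.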
If \(\oplus\) is a t-conorm and \(n\) a negator on \([0,1]\), then the _S-implicator_ based on them is defined by \[x\rhd y:=n(x)\oplus y.\] The _Łukasiewicz implicator_ \(\rhd_{L}\) is both an R-implicator and an S-implicator, defined by \(x\rhd_{L}y:=\min(1,1-x+y)\), \(\forall x,y\in[0,1]\). The _Kleene-Dienes (KD) implicator_ \(\rhd_{KD}\) is an S-implicator given by \(x\rhd_{KD}y:=\max(1-x,y)\), \(\forall x,y\in[0,1]\). If \(\rhd\) is an implicator, then a corresponding negator is defined by \(n(x)=x\rhd 0\). If \(n\) is an involutive negator and \(\rhd\) is an R-implicator defined by a left-continuous t-norm, then \(\rhd\) is called an _IMTL-implicator_. A _fuzzy binary relation_ on \(U\) is a fuzzy set \(\theta\colon U\times U\to[0,1]\). The pair \((U,\theta)\) is usually called a _fuzzy approximation space_. \(\theta\) is called _reflexive_ if \(\theta(x,x)=1\) for all \(x\in U\), and it is called _symmetric_ if for all \(x,y\in U\), \(\theta(x,y)=\theta(y,x)\). Given a t-norm \(\odot\), the relation \(\theta\) is called _\(\odot\)-transitive_ if \[\theta(x,y)\odot\theta(y,z)\leq\theta(x,z)\] holds for every \(x,y,z\in U\). If a relation \(\theta\) is reflexive and \(\odot\)-transitive, then it is called a (fuzzy) \(\odot\)_-quasiorder_. A symmetric \(\odot\)-quasiorder \(\theta\) is called a (fuzzy) \(\odot\)_-similarity relation_. When \(x\odot y=\min(x,y)\), then \(\theta\) is simply called a _similarity relation_. Since the minimum t-norm is the largest t-norm, a similarity relation is always \(\odot\)-transitive for any t-norm \(\odot\). We say that \(\theta\) is of a _finite range_, if the (crisp) set \(\{\theta(x,y)\mid x,y\in U\}\) of its values is finite. ### Fuzzy rough sets Let \((U,\theta)\) be a fuzzy approximation space with a relation \(\theta\colon U\times U\to[0,1]\). The precise notion of a fuzzy rough set was introduced by D. Dubois and H. Prade in [5].
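A small concrete instance of these notions (values chosen by hand for illustration): the matrix below is reflexive, symmetric and min-transitive, hence a similarity relation on a three-element universe, and the Kleene-Dienes implicator arises as the S-implicator of max and the standard negator:

```python
# a fuzzy similarity relation on U = {0, 1, 2}
theta = [
    [1.0, 0.8, 0.3],
    [0.8, 1.0, 0.3],
    [0.3, 0.3, 1.0],
]
U = range(3)

assert all(theta[x][x] == 1.0 for x in U)                      # reflexive
assert all(theta[x][y] == theta[y][x] for x in U for y in U)   # symmetric
assert all(min(theta[x][y], theta[y][z]) <= theta[x][z]        # min-transitive
           for x in U for y in U for z in U)

def kd(x, y):
    """Kleene-Dienes implicator: the S-implicator built from the
    max t-conorm and the standard negator n(x) = 1 - x."""
    return max(1.0 - x, y)
```

Since min is the largest t-norm, this \(\theta\) is automatically \(\odot\)-transitive for every t-norm \(\odot\), as noted above.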
They defined for any fuzzy set \(f\in\mathcal{F}(U)\) its _lower approximation_ \(\underline{\theta}(f)\) and its _upper approximation_ \(\overline{\theta}(f)\) _relative to_ \(\theta\) by the formulas \[\underline{\theta}(f)(x):=\bigwedge\{\max(1-\theta(x,y),f(y))\mid y\in U\},\,\text{for all }x\in U;\] \[\overline{\theta}(f)(x):=\bigvee\{\min(\theta(x,y),f(y))\mid y\in U\},\,\text{for all }x\in U.\] The _fuzzy rough set of_ \(f\) is identified by the pair \((\underline{\theta}(f),\overline{\theta}(f))\in\mathcal{F}(U)\times\mathcal{F}(U)\) (see [5]). This definition was generalized in several papers. Here we will use the approach based on implicators and t-norms from [4] and [23]. Hence, in what follows, let \(\odot\) be a t-norm and \(\rhd\) a border implicator on \([0,1]\). **Definition 2.1**.: _If \((U,\theta)\) is a fuzzy approximation space, then for any fuzzy set \(f\in\mathcal{F}(U)\) its fuzzy lower approximation \(\underline{\theta}(f)\) and its fuzzy upper approximation \(\overline{\theta}(f)\) are defined as follows:_ \[\underline{\theta}(f)(x):=\bigwedge\{\theta(x,y)\rhd f(y)\mid y\in U\}\text{, for all }x\in U\text{.} \tag{2}\] \[\overline{\theta}(f)(x):=\bigvee\{\theta(x,y)\odot f(y)\mid y\in U\}\text{, for all }x\in U\text{.}\] (2') _The pair \((\underline{\theta}(f),\overline{\theta}(f))\in\mathcal{F}(U)\times\mathcal{F}(U)\) is called a fuzzy rough set in \((U,\theta)\)._ This definition also includes the one of Dubois and Prade, where \(\odot\) is the min t-norm and \(\rhd\) is the Kleene-Dienes implicator \(x\rhd_{KD}y=\max(1-x,y)\). Notice that \(\underline{\theta}\) and \(\overline{\theta}\) are _order-preserving_ operators, i.e. \(f\leq g\) implies \(\underline{\theta}(f)\leq\underline{\theta}(g)\) and \(\overline{\theta}(f)\leq\overline{\theta}(g)\). In addition, if \(\theta\) is a reflexive fuzzy relation, then \(\underline{\theta}(f)\leq f\leq\overline{\theta}(f)\) holds for all \(f\in\mathcal{F}(U)\) (see e.g.
[4] or [23]).

The following properties will have a special importance in our proofs:

(D) Let \(\odot\) be a left-continuous t-norm such that its induced R-implicator \(\rhd\) is an ITML implicator, i.e. \(n(x):=x\rhd 0\), \(x\in[0,1]\), is an involutive negator, or let \(n\) be an involutive negator, \(\oplus\) a t-conorm \(n\)-dual to \(\odot\) and \(\rhd\) the S-implicator defined by them (i.e. \(x\rhd y=n(x)\oplus y\)). Then \(n(\overline{\theta}(f))=\underline{\theta}(n(f))\) and \(n(\underline{\theta}(f))=\overline{\theta}(n(f))\) (see e.g. [4], [16] or [23]).

(ID) Let \(\odot\) be a left-continuous t-norm and \(\rhd\) the R-implicator induced by it, or \(n\) an involutive negator, \(\oplus\) a t-conorm \(n\)-dual to \(\odot\) and \(\rhd\) the S-implicator corresponding to them. If \(\theta\) is \(\odot\)-transitive, then for any \(f,g\in\mathcal{F}(U)\) we have \(\overline{\theta}(\overline{\theta}(f))=\overline{\theta}(f)\) and \(\underline{\theta}(\underline{\theta}(g))=\underline{\theta}(g)\) (see [4], [16], [23]). In other words, for \(F=\overline{\theta}(f)\) and \(G=\underline{\theta}(g)\) we have \(F=\overline{\theta}(F)\) and \(G=\underline{\theta}(G)\).

**Lemma 2.2**.: _Let \((U,\theta)\) be a fuzzy approximation space such that the relation \(\theta\) is of a finite range. If \(f\in\mathcal{F}(U)\) has a finite range, then the fuzzy sets \(\underline{\theta}(f)\) and \(\overline{\theta}(f)\) are also of a finite range._

Proof.: Since \(\{\theta(x,y)\mid x,y\in U\}\) and \(\{f(y)\mid y\in U\}\) are finite sets, their Cartesian product \(\{(\theta(x,y),f(y))\mid x,y\in U\}\) is finite, hence the sets \(\mathcal{C}=\{\theta(x,y)\odot f(y)\mid x,y\in U\}\) and \(\mathcal{I}=\{\theta(x,y)\rhd f(y)\mid x,y\in U\}\) are also finite.
In particular, this means that the sets \(\mathcal{C}\) and \(\mathcal{I}\) have finitely many (different) subsets of the form \(\{\theta(x,y)\odot f(y)\mid y\in U\}\) and \(\{\theta(x,y)\rhd f(y)\mid y\in U\}\), respectively, and this immediately implies that both \(\underline{\theta}(f)\) and \(\overline{\theta}(f)\) have finitely many different values, i.e. they have finite ranges.

## 3 Quasiorders induced by lower and upper approximations

In what follows, suppose that the conditions in (ID) hold and \(n(x):=x\rhd 0\). For any \(f,g\in\mathcal{F}(U)\), denote \(F=\overline{\theta}(f)\) and \(G=\underline{\theta}(g)\). Using \(F\) and \(G\) we define two binary relations \(R(F)\) and \(\varrho(G)\) on \(U\) as follows:

**Definition 3.1**.: _Let \((U,\theta)\) be a fuzzy approximation space, \(a,b\in U\) and \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\). Then_ _(i)_ \((a,b)\in R(F)\Leftrightarrow F(a)=\theta(a,b)\odot F(b)\)_;_ _(ii)_ \((a,b)\in\varrho(G)\Leftrightarrow G(a)=\theta(a,b)\rhd G(b)\)_._

**Proposition 3.2**.: _(i) \((a,b)\in R(F)\) implies \(F(a)\leq\theta(a,b)\), and \((a,b)\in\varrho(G)\) implies \(G(a)\geq n(\theta(a,b))\)._ _(ii) If_ \(\theta\) _is reflexive, then for any_ \(f,g\in\mathcal{F}(U)\)_,_ \(R(F)\) _and_ \(\varrho(G)\) _are reflexive._ _(iii) If_ \(\theta\) _is a_ \(\odot\)_-quasiorder, then_ \(R(F)\)_,_ \(\varrho(G)\) _are crisp quasiorders, and_ \(F(a)\geq\theta(a,y)\odot F(y)\)_,_ \(G(a)\leq\theta(a,y)\rhd G(y)\)_, for any_ \(a,y\in U\)_._ _(iv) If_ \(n(x)\) _is involutive, then_ \(R(F)=\varrho(n(F))\)_, and_ \(\varrho(G)=R(n(G))\)_._ _(v) Let_ \(\odot\) _be the minimum t-norm,_ \(n\) _an involutive negator, and_ \(x\rhd y:=\max(n(x),y)\)_.
If_ \(\theta\) _is a similarity relation, then_ \((a,b)\in R(F)\Leftrightarrow F(a)\leq\theta(a,b)\) _and_ \((a,b)\in\varrho(G)\Leftrightarrow G(a)\geq n(\theta(a,b))\)_._

Proof.: (i) By definition \((a,b)\in R(F)\) implies \(F(a)=\theta(a,b)\odot F(b)\leq\theta(a,b)\) and \((a,b)\in\varrho(G)\) yields \(G(a)=\theta(a,b)\rhd G(b)\geq\theta(a,b)\rhd 0=n(\theta(a,b))\).

(ii) If \(\theta\) is reflexive, then \(\theta(a,a)=1\) implies \(F(a)=\theta(a,a)\odot F(a)\) and \(G(a)=\theta(a,a)\rhd G(a)\), i.e. \((a,a)\in R(F)\) and \((a,a)\in\varrho(G)\) hold for all \(a\in U\). Thus \(R(F)\) and \(\varrho(G)\) are reflexive.

(iii) Let \(\theta\) be a \(\odot\)-quasiorder. Then \(R(F)\), \(\varrho(G)\) are reflexive, property (ID) holds, and hence \(F(a)=\overline{\theta}(F)(a)\), \(G(a)=\underline{\theta}(G)(a)\) imply \(F(a)=\bigvee\{\theta(a,y)\odot F(y)\mid y\in U\}\geq\theta(a,y)\odot F(y)\) and \(G(a)=\bigwedge\{\theta(a,y)\rhd G(y)\mid y\in U\}\leq\theta(a,y)\rhd G(y)\), \(\forall y\in U\). Take \(a,b,c\in U\) with \((a,b),(b,c)\in R(F)\). Then \(F(a)=\theta(a,b)\odot F(b)\) and \(F(b)=\theta(b,c)\odot F(c)\) imply \(F(a)=(\theta(a,b)\odot\theta(b,c))\odot F(c)\leq\theta(a,c)\odot F(c)\), because \(\theta\) is \(\odot\)-transitive. Now \(F(a)\geq\theta(a,c)\odot F(c)\) yields \(F(a)=\theta(a,c)\odot F(c)\), i.e. \((a,c)\in R(F)\). Thus \(R(F)\) is also transitive, hence it is a quasiorder. Let \((a,b),(b,c)\in\varrho(G)\). Then \(G(a)=\theta(a,b)\rhd G(b)\) and \(G(b)=\theta(b,c)\rhd G(c)\) imply \(G(a)=\theta(a,b)\rhd(\theta(b,c)\rhd G(c))\). If \(\rhd\) is an R-implicator, then \(\theta(a,b)\rhd(\theta(b,c)\rhd G(c))=(\theta(a,b)\odot\theta(b,c))\rhd G(c)\). If \(\rhd\) is an S-implicator \(x\rhd y=n(x)\oplus y\), then \(\theta(a,b)\rhd(\theta(b,c)\rhd G(c))=n(\theta(a,b))\oplus(n(\theta(b,c))\oplus G(c))=(n(\theta(a,b))\oplus n(\theta(b,c)))\oplus G(c)=n(\theta(a,b)\odot\theta(b,c))\oplus G(c)=(\theta(a,b)\odot\theta(b,c))\rhd G(c)\).
Hence in both cases \(G(a)=(\theta(a,b)\odot\theta(b,c))\rhd G(c)\). Because \(\theta\) is \(\odot\)-transitive (and \(\rhd\) is decreasing in the first variable) we get \(G(a)\geq\theta(a,c)\rhd G(c)\). Then \(G(a)\leq\theta(a,c)\rhd G(c)\) yields \(G(a)=\theta(a,c)\rhd G(c)\), i.e. \((a,c)\in\varrho(G)\). Thus \(\varrho(G)\) is a \(\odot\)-quasiorder.

(iv) Observe that in this case property (D) holds, i.e., \(n(\overline{\theta}(f))=\underline{\theta}(n(f))\). This yields \(n(F)=\underline{\theta}(n(f))\), and \((a,b)\in R(F)\) means \(F(a)=\theta(a,b)\odot F(b)\). As \(n\) is involutive, this is equivalent to \(n(F(a))=n(\theta(a,b)\odot F(b))\). If \(\rhd\) is an ITML implicator, then \(n(\theta(a,b)\odot F(b))=(\theta(a,b)\odot F(b))\rhd 0=\theta(a,b)\rhd(F(b)\rhd 0)=\theta(a,b)\rhd n(F(b))\). If \(\rhd\) is an S-implicator, then \(n(\theta(a,b)\odot F(b))=n(\theta(a,b))\oplus n(F(b))=\theta(a,b)\rhd n(F(b))\). Hence in both cases \(n(F(a))=n(\theta(a,b)\odot F(b))\Leftrightarrow n(F)(a)=\theta(a,b)\rhd n(F)(b)\). The right side means \((a,b)\in\varrho(n(F))\). Thus we get \((a,b)\in R(F)\Leftrightarrow(a,b)\in\varrho(n(F))\), proving \(R(F)=\varrho(n(F))\). Let \(g=n(h)\) for some \(h\in{\cal F}(U)\). Then \(n(G)=n(\underline{\theta}(g))=\overline{\theta}(n(g))=\overline{\theta}(h)\), and \(G=n(\overline{\theta}(h))\). Hence \((a,b)\in\varrho(G)\Leftrightarrow(a,b)\in\varrho(n(\overline{\theta}(h)))\Leftrightarrow(a,b)\in R(\overline{\theta}(h))=R(n(G))\), and this proves \(\varrho(G)=R(n(G))\).

(v) In view of (i), \((a,b)\in R(F)\) yields \(F(a)\leq\theta(a,b)\) and \((a,b)\in\varrho(G)\) implies \(G(a)\geq n(\theta(a,b))\). We need only to prove the converse implications. Let \(F(a)\leq\theta(a,b)\). Since \(\theta\) is also a \(\odot\)-quasiorder, in view of (iii) we have \(F(b)\geq\)
\(\theta(b,a)\odot F(a)=\min(\theta(b,a),F(a))=\min(\theta(a,b),F(a))=F(a)\). Hence \(F(a)\leq\min(\theta(a,b),F(b))\). As \(F(a)\geq\theta(a,b)\odot F(b)=\min(\theta(a,b),F(b))\) also holds, we get \(F(a)=\min(\theta(a,b),F(b))\), i.e. \((a,b)\in R(F)\). Now let \(G(a)\geq n(\theta(a,b))\). As \(\theta(b,a)=\theta(a,b)\), and by (iii), \(G(b)\leq\theta(b,a)\rhd G(a)=\max(n(\theta(a,b)),G(a))=G(a)\), we get \(G(a)\geq\max(n(\theta(a,b)),G(b))\). Since \(G(a)\leq\theta(a,b)\rhd G(b)=\max(n(\theta(a,b)),G(b))\) also holds, we obtain \(G(a)=\max(n(\theta(a,b)),G(b))=\theta(a,b)\rhd G(b)\), i.e. \((a,b)\in\varrho(G)\).

**Corollary 3.3**.: _If the conditions in Proposition 3.2(v) are satisfied, then \((a,b)\notin R(F)\Leftrightarrow F(a)>\theta(a,b)\) and \((a,b)\notin\varrho(G)\Leftrightarrow G(a)<n(\theta(a,b))\)._

**Proposition 3.4**.: _Let \(\theta\) be a \(\odot\)-quasiorder and \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\), for some \(f,g\in\mathcal{F}(U)\), and \(a,b\in U\). The following hold true: (i) If \((a,b)\in R(F)\), then for any \(h\in\mathcal{F}(U)\) with \(h\leq F\), \(h(b)=F(b)\) implies \(\overline{\theta}(h)(a)=F(a)\). (ii) If \((a,b)\in\varrho(G)\), then for any \(h\in\mathcal{F}(U)\) with \(h\geq G\), \(h(b)=G(b)\) implies \(\underline{\theta}(h)(a)=G(a)\)._

Proof.: (i) By definition, we have \(F(a)=\theta(a,b)\odot F(b)\). Hence we get: \[\begin{array}{l}\overline{\theta}(h)(a)=\bigvee\{\theta(a,y)\odot h(y)\mid y\in U\}\geq\\ \theta(a,b)\odot h(b)=\theta(a,b)\odot F(b)=F(a).\end{array}\] On the other hand, \(\overline{\theta}(h)\leq\overline{\theta}(F)=F\) implies \(\overline{\theta}(h)(a)\leq F(a)\). Thus we obtain \(\overline{\theta}(h)(a)=F(a)\). (ii) Now, analogously we have \(G(a)=\theta(a,b)\rhd G(b)\).
Therefore, we get: \[\begin{array}{l}\underline{\theta}(h)(a)=\bigwedge\{\theta(a,y)\rhd h(y)\mid y\in U\}\leq\\ \theta(a,b)\rhd h(b)=\theta(a,b)\rhd G(b)=G(a).\end{array}\] Now \(\underline{\theta}(h)\geq\underline{\theta}(G)=G\) yields \(\underline{\theta}(h)(a)\geq G(a)\), whence \(\underline{\theta}(h)(a)=G(a)\).

## 4 The equivalences induced by the quasiorders \(R(F)\) and \(\varrho(G)\)

In this section we assume that the conditions in (ID) are satisfied, i.e. that \(\odot\) is a left-continuous t-norm, \(\rhd\) is the R-implicator induced by it and \(n(x)=x\rhd 0\), or \(n\) is an involutive negator, \(\oplus\) is the t-conorm \(n\)-dual to \(\odot\) and \(\rhd\) is the S-implicator defined by them. We also suppose that \((U,\theta)\) is an approximation space with a \(\odot\)-similarity relation \(\theta\) and \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\), for some \(f,g\in\mathcal{F}(U)\). Now, by Proposition 3.2(iii), \(R(F)\), \(\varrho(G)\subseteq U\times U\) are (crisp) quasiorders. It is known that for any quasiorder \(q\subseteq U\times U\) the relation \(\varepsilon_{q}:=q\cap q^{-1}\) is an equivalence and \(q\) induces a _natural partial order_\(\leq_{q}\) on the factor-set \(U/\varepsilon_{q}\) as follows: for any equivalence classes \(A,B\in U/\varepsilon_{q}\) we say that \(A\leq_{q}B\) whenever there exist \(a\in A\) and \(b\in B\) with \((a,b)\in q\). This is equivalent to the fact that \((x,y)\in q\) holds for all \(x\in A\) and \(y\in B\).
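Before factoring by these quasiorders, Definitions 2.1 and 3.1 can be sketched computationally. The fragment below is an illustration under the min t-norm and the Kleene-Dienes implicator (the Dubois-Prade case); the relation \(\theta\) (a min-similarity relation) and the fuzzy set \(f\) are invented data, not taken from the paper:

```python
# Sketch of Definitions 2.1 and 3.1 with the min t-norm and the
# Kleene-Dienes implicator.  theta and f are invented illustrative data.

def tnorm(x, y):                      # minimum t-norm
    return min(x, y)

def impl(x, y):                       # Kleene-Dienes implicator
    return max(1.0 - x, y)

def lower(theta, f, U):               # (2):  inf_y  theta(x,y) |> f(y)
    return {x: min(impl(theta[x][y], f[y]) for y in U) for x in U}

def upper(theta, f, U):               # (2'): sup_y  theta(x,y) (.) f(y)
    return {x: max(tnorm(theta[x][y], f[y]) for y in U) for x in U}

def R(theta, F, U):                   # Definition 3.1(i)
    return {(a, b) for a in U for b in U if F[a] == tnorm(theta[a][b], F[b])}

def rho(theta, G, U):                 # Definition 3.1(ii)
    return {(a, b) for a in U for b in U if G[a] == impl(theta[a][b], G[b])}

U = ["a", "b", "c"]
theta = {"a": {"a": 1.0, "b": 0.5, "c": 0.5},   # reflexive, symmetric,
         "b": {"a": 0.5, "b": 1.0, "c": 0.7},   # min-transitive
         "c": {"a": 0.5, "b": 0.7, "c": 1.0}}
f = {"a": 0.2, "b": 1.0, "c": 0.4}

F, G = upper(theta, f, U), lower(theta, f, U)
```

On these data one can check that \(G\leq f\leq F\), that both approximations are idempotent as stated in (ID), and that \(R(F)\) and \(\varrho(G)\) come out reflexive, in line with Proposition 3.2.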
Thus we can introduce two equivalence relations \(E(F)\) and \(\varepsilon(G)\) as follows: \[E(F):=R(F)\cap R(F)^{-1}\ \mbox{ and }\ \varepsilon(G):=\varrho(G)\cap\varrho(G)^{-1}.\] The corresponding (natural) partial orders on the factor-sets \(U/E(F)\) and \(U/\varepsilon(G)\) can be defined as follows: For any \(E_{1},E_{2}\in U/E(F)\), we have \(E_{1}\leq_{R(F)}E_{2}\Leftrightarrow(a_{1},a_{2})\in R(F)\) for some \(a_{1}\in E_{1}\) and \(a_{2}\in E_{2}\), and for any \({\cal E}_{1},{\cal E}_{2}\in U/\varepsilon(G)\) we have \({\cal E}_{1}\leq_{\varrho(G)}\mathcal{E}_{2}\Leftrightarrow(b_{1},b_{2})\in\varrho(G)\) for some \(b_{1}\in{\cal E}_{1}\) and \(b_{2}\in{\cal E}_{2}\). \(E\) is called a _maximal_\(E(F)\)_class_ if it is a maximal element of the poset \((U/E(F),\leq_{R(F)})\), and \(\mathcal{E}\) is a _maximal_\(\varepsilon(G)\)_class_ if it is maximal in \((U/\varepsilon(G),\leq_{\varrho(G)})\). The \(E(F)\) and \(\varepsilon(G)\) classes of an element \(a\in U\) are denoted by \([a]_{E(F)}\) and \([a]_{\varepsilon(G)}\), respectively. In this section we prove several properties of these classes used to characterize pairs of fuzzy sets which together form fuzzy rough sets.

**Lemma 4.1**.: _The following assertions hold true: (i) If \(E\subseteq U\) is an \(E(F)\) class, then \(F(a)=F(b)\leq\theta(a,b)\), for all \(a,b\in E\); (ii) If \({\cal E}\subseteq U\) is an \(\varepsilon(G)\) class, then \(G(a)=G(b)\geq n(\theta(a,b))\), for all \(a,b\in{\cal E}\); (iii) If \(E\subseteq U\) is a maximal \(E(F)\) class, then \(\theta(a,z)\odot F(z)<F(a)=F(b)\leq\theta(a,b)\) and \(\theta(a,z)<\theta(a,b)\), for all \(a,b\in E\) and \(z\notin E\); (iv) If \({\cal E}\subseteq U\) is a maximal \(\varepsilon(G)\) class, then \(n(\theta(a,b))\leq G(a)=G(b)<\theta(a,z)\rhd G(z)\) and \(\theta(a,z)<\theta(a,b)\), for all \(a,b\in{\cal E}\) and \(z\notin{\cal E}\); (v) Assume that \(n(x)=x\rhd 0\) is involutive.
Then the \(E(F)\) classes and the \(\varepsilon(n(F))\) classes are the same, and \(E\subseteq U\) is a maximal \(E(F)\) class if and only if it is also a maximal \(\varepsilon(n(F))\) class._

Proof.: (i) If \(E\subseteq U\) is an \(E(F)\) class, then \((a,b),(b,a)\in R(F)\) holds for any \(a,b\in E\). Therefore, \(F(a)=\theta(a,b)\odot F(b)\leq\min(\theta(a,b),F(b))\) and \(F(b)=\theta(b,a)\odot F(a)\leq F(a)\). Thus we obtain \(F(b)=F(a)\leq\theta(a,b)\).

(ii) If \({\cal E}\subseteq U\) is an \(\varepsilon(G)\) class, then \((a,b),(b,a)\in\varrho(G)\) imply \(G(a)=\theta(a,b)\rhd G(b)\geq 1\rhd G(b)=G(b)\) and \(G(b)\geq\theta(b,a)\rhd G(a)\geq 1\rhd G(a)=G(a)\), for any \(a,b\in{\cal E}\). Hence \(G(a)=G(b)\). By Proposition 3.2(i) we obtain \(G(a)=G(b)\geq n(\theta(a,b))\), for all \(a,b\in{\cal E}\).

(iii) Let \(E\) be a maximal \(E(F)\) class. Then \(F(a)=\theta(a,b)\odot F(b)\), \(F(b)=\theta(a,b)\odot F(a)\), and in view of (i), \(F(a)=F(b)\leq\theta(a,b)\). We also have \(F(a)\geq\theta(a,z)\odot F(z)\), according to Proposition 3.2(iii). As \(E\nleq[z]_{E(F)}\) implies \((a,z)\notin R(F)\) for each \(z\notin E\), we obtain \(F(a)>\theta(a,z)\odot F(z)\), for all \(z\notin E\). Now, suppose that \(\theta(a,c)\geq\theta(a,b)\), for some \(c\notin E\). Then \(F(c)\geq\theta(c,a)\odot F(a)=\theta(a,c)\odot F(b)\geq\theta(a,b)\odot F(b)=F(a)\). This further yields \(F(a)>\theta(a,c)\odot F(c)\geq\theta(a,b)\odot F(a)=F(b)\), a contradiction. Thus \(\theta(a,z)<\theta(a,b)\), for each \(z\notin E\).

(iv) If \(\mathcal{E}\subseteq U\) is a maximal \(\varepsilon(G)\) class, then \(G(a)=\theta(a,b)\rhd G(b)\), \(G(b)=\theta(a,b)\rhd G(a)\), and in view of (ii), \(n(\theta(a,b))\leq G(a)=G(b)\), for all \(a,b\in\mathcal{E}\) and \(\mathcal{E}\nleq[z]_{\varepsilon(G)}\), for any \(z\notin\mathcal{E}\). Now, by Proposition 3.2(iii), \(G(a)\leq\theta(a,z)\rhd G(z)\), hence \((a,z)\notin\varrho(G)\) yields \(G(a)<\theta(a,z)\rhd G(z)\).
By way of contradiction, assume \(\theta(a,c)\geq\theta(a,b)\), for some \(c\notin\mathcal{E}\). Then \(G(c)\leq\theta(a,c)\rhd G(a)\leq\theta(a,b)\rhd G(b)=G(a)\). This further yields \(G(a)<\theta(a,c)\rhd G(c)\leq\theta(a,b)\rhd G(a)=G(b)\), a contradiction again.

(v) If \(n(x)=x\rhd 0\) is involutive, then property (D) means that \(n(F)=n(\overline{\theta}(f))=\underline{\theta}(n(f))\). Hence, the relation \(\varrho(n(F))=R(F)\) is well defined, and \(E(F)=R(F)\cap R(F)^{-1}=\varrho(n(F))\cap\varrho(n(F))^{-1}=\varepsilon(n(F))\). Thus the equivalence classes of \(E(F)\) and \(\varepsilon(n(F))\) coincide. \(R(F)=\varrho(n(F))\) also yields \(\leq_{R(F)}=\leq_{\varrho(n(F))}\), i.e. the posets \((U/E(F),\leq_{R(F)})\) and \((U/\varepsilon(n(F)),\leq_{\varrho(n(F))})\) are the same. Therefore, the maximal \(E(F)\) and \(\varepsilon(n(F))\) classes coincide.

**Corollary 4.2**.: _Let \(E\) be an \(E(F)\) class and \(\mathcal{E}\) be an \(\varepsilon(G)\) class such that \(E\cap\mathcal{E}\neq\emptyset\). Then the following assertions hold: (i) If \(E\subseteq U\) is a maximal \(E(F)\) class and \(\mathcal{E}\subseteq U\) is a maximal \(\varepsilon(G)\) class, then \(E\subseteq\mathcal{E}\) or \(\mathcal{E}\subseteq E\) holds. (ii) If \(\theta\) is a similarity relation, then \((x,y)\in R(F)\) or \((y,x)\in\varrho(G)\) holds for all \(x\in E\) and \(y\in\mathcal{E}\)._

Proof.: Let \(a\in E\cap\mathcal{E}\). (i) Assume that neither \(E\subseteq\mathcal{E}\) nor \(\mathcal{E}\subseteq E\) holds. Then there exist elements \(b\in E\setminus\mathcal{E}\), \(c\in\mathcal{E}\setminus E\). As \(a,b\in E\) but \(c\notin E\), in view of Lemma 4.1(iii) we have \(\theta(a,c)<\theta(a,b)\). Similarly, \(a,c\in\mathcal{E}\) and \(b\notin\mathcal{E}\) imply \(\theta(a,b)<\theta(a,c)\), a contradiction to the previous result. (ii) If \(\mathcal{E}\subseteq E\) or \(E\subseteq\mathcal{E}\) then (ii) is clearly satisfied.
Hence we may assume \(\mathcal{E}\setminus E\neq\emptyset\) and \(E\setminus\mathcal{E}\neq\emptyset\). Suppose that there exist \(x\in E\) and \(y\in\mathcal{E}\) with \((x,y)\notin R(F)\). We claim that \((y,x)\in\varrho(G)\). Assume by contradiction \((y,x)\notin\varrho(G)\). Since \(x,a\in E\), \(y\notin E\) and \(y,a\in\mathcal{E}\), \(x\notin\mathcal{E}\), in view of Lemma 4.1(iii) and (iv) we get \(\theta(x,y)<\theta(x,a)\) and \(\theta(x,y)=\theta(y,x)<\theta(y,a)=\theta(a,y)\). Thus we obtain \(\theta(x,y)<\min(\theta(x,a),\theta(a,y))\leq\theta(x,y)\), a contradiction. This proves \((y,x)\in\varrho(G)\).

**Proposition 4.3**.: _(i) If \(E_{1},E_{2}\) are different \(E(F)\) classes with \(E_{1}\leq_{R(F)}E_{2}\), then for any \(a_{1}\in E_{1}\) and \(a_{2}\in E_{2}\) we have \(F(a_{1})<F(a_{2})\)._ _(ii) If \(\mathcal{E}_{1},\mathcal{E}_{2}\) are different \(\varepsilon(G)\) classes with \(\mathcal{E}_{1}\leq_{\varrho(G)}\mathcal{E}_{2}\), then for any \(b_{1}\in\mathcal{E}_{1}\) and \(b_{2}\in\mathcal{E}_{2}\) we have \(G(b_{1})>G(b_{2})\)._

Proof.: (i) Assume \(E_{1}\leq_{R(F)}E_{2}\). Then for any \(a_{1}\in E_{1}\) and \(a_{2}\in E_{2}\) we have \((a_{1},a_{2})\in R(F)\), i.e. \(F(a_{1})=\theta(a_{1},a_{2})\odot F(a_{2})\leq F(a_{2})\). Observe that \(F(a_{2})\neq F(a_{1})\). Indeed, \(F(a_{2})=F(a_{1})\) would imply \(F(a_{2})=\theta(a_{1},a_{2})\odot F(a_{1})=\theta(a_{2},a_{1})\odot F(a_{1})\), i.e. \((a_{2},a_{1})\in R(F)\), which means \(E_{2}\leq_{R(F)}E_{1}\). As \(\leq_{R(F)}\) is a partial order, this would yield \(E_{1}=E_{2}\), a contradiction. Thus we deduce \(F(a_{1})<F(a_{2})\). (ii) Let \(\mathcal{E}_{1}\leq_{\varrho(G)}\mathcal{E}_{2}\).
Then for any \(b_{1}\in\mathcal{E}_{1}\), \(b_{2}\in\mathcal{E}_{2}\) we have \((b_{1},b_{2})\in\varrho(G)\), which gives \(G(b_{1})=\theta(b_{1},b_{2})\vartriangleright G(b_{2})\geq G(b_{2}).\) We claim \(G(b_{1})>G(b_{2})\). Indeed, \(G(b_{2})=G(b_{1})\) would imply \(G(b_{2})=\theta(b_{1},b_{2})\vartriangleright G(b_{1})=\theta(b_{2},b_{1})\vartriangleright G (b_{1})\), i.e. \((b_{2},b_{1})\in\varrho(G)\), which would yield \(\mathcal{E}_{1}=\mathcal{E}_{2}\), a contradiction. Clearly, if each chain in the posets \((U/E(F),\leq_{R(F)})\) and \((U/\varepsilon(G),\leq_{\varrho(G)})\) is finite, then any element of them is less than or equal to a maximal element in the corresponding poset. By using this observation we deduce **Corollary 4.4**.: _Assume that the relation \(\theta\) and the fuzzy sets \(f,g\in\mathcal{F}(U)\) have a finite range, and let \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\). Then for any \(E(F)\) class \(E\), there exists a maximal \(E(F)\) class \(E_{M}\) such that \(E\leq_{R(F)}E_{M}\), and for any \(\varepsilon(G)\) class \(\mathcal{E}\), there is a maximal \(\varepsilon(G)\) class \(\mathcal{E}_{M}\) with \(\mathcal{E}\leq_{\varrho(G)}\mathcal{E}_{M}\)._ Proof.: If the above conditions hold, then the fuzzy sets \(F\) and \(G\) also have a finite range. Now let \(\{E_{i}\mid i\in I\}\) be an arbitrary (nonempty) chain of \(E(F)\) classes. In view of Proposition 4.3, for any \(a_{i}\in E_{i}\), \(i\in I\), the values \(\{F(a_{i})\mid i\in I\}\) also form a chain, and for \(E_{i}\leq_{R(F)}E_{j}\), \(E_{i}\neq E_{j}\) we have \(F(a_{i})<F(a_{j})\), and vice versa. This means that the chains \(\{E_{i}\mid i\in I\}\) and \(\{F(a_{i})\mid i\in I\}\) are order-isomorphic. Since \(F\) has a finite range, the chain \(\{F(a_{i})\mid i\in I\}\) has a finite length. Hence the chain \(\{E_{i}\mid i\in I\}\) is also finite. 
As every chain in the poset \((U/E(F),\leq_{R(F)})\) is finite, any element \(E\) of it is less than or equal to a maximal element \(E_{M}\) of it, i.e. \(E\leq_{R(F)}E_{M}\). The second statement is proved analogously. The importance of maximal classes in this case is shown by the following:

**Proposition 4.5**.: _Suppose that \(\theta\) and the fuzzy sets \(f,g\in\mathcal{F}(U)\) have a finite range, and let \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\). Then the following assertions hold: (i) If \(h\leq F\) for some \(h\in\mathcal{F}(U)\) and for any maximal \(E(F)\) class \(E_{M}\) there exists an element \(u\in E_{M}\) with \(h(u)=F(u)\), then \(\overline{\theta}(h)=F\). (ii) If \(h\geq G\) for some \(h\in\mathcal{F}(U)\) and for any maximal \(\varepsilon(G)\) class \(\mathcal{E}_{M}\) there exists an element \(v\in\mathcal{E}_{M}\) with \(h(v)=G(v)\), then \(\underline{\theta}(h)=G\)._

Proof.: (i) Let \(x\in U\) be arbitrary. As \(\theta\) and \(f\) have finite ranges, in view of Corollary 4.4, there exists a maximal \(E(F)\) class \(E_{M}\) such that \([x]_{E(F)}\leq_{R(F)}E_{M}\). Then \((x,y)\in R(F)\) for all \(y\in E_{M}\). By assumption, there exists an element \(u\in E_{M}\) with \(h(u)=F(u)\). Since \(h\leq F\) and \((x,u)\in R(F)\), in view of Proposition 3.4(i) we obtain \(\overline{\theta}(h)(x)=F(x)\). This proves \(\overline{\theta}(h)=F\). (ii) is proved dually, by using Corollary 4.4 and Proposition 3.4(ii).

**Proposition 4.6**.: _Suppose that the relation \(\theta\) and the fuzzy sets \(f,g\in\mathcal{F}(U)\) have a finite range, and let \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\). (i) If \(E\) is a maximal \(E(F)\) class, then for any \(a\in E\) we have \(F(a)=\max\{f(y)\mid y\in E\}\)._
_(ii) If \(\mathcal{E}\) is a maximal \(\varepsilon(G)\) class, then for any \(a\in\mathcal{E}\) we have \(G(a)=\min\{g(y)\mid y\in\mathcal{E}\}\)._

Proof.: (i) By definition, \(F(a)=\overline{\theta}(f)(a)=\bigvee\{\theta(a,y)\odot f(y)\mid y\in U\}\). If \(y\notin E\), then \((a,y)\notin R(F)\), because \(E\) is a maximal \(E(F)\) class. This means that \(F(a)=\theta(a,y)\odot F(y)\) is not possible, and hence \(F(a)>\theta(a,y)\odot F(y)\), according to Proposition 3.2(iii). Since \(f\leq\overline{\theta}(f)=F\), we obtain \(F(a)>\theta(a,y)\odot f(y)\), for all \(y\in U\setminus E\). As \(\theta\) and \(f\) are of a finite range, the set \(\{\theta(a,y)\odot f(y)\mid y\in U\setminus E\}\) has finitely many different elements, and hence \(\bigvee\{\theta(a,y)\odot f(y)\mid y\in U\setminus E\}<F(a)\). This implies \(F(a)=(\bigvee\{\theta(a,y)\odot f(y)\mid y\in E\})\vee(\bigvee\{\theta(a,y)\odot f(y)\mid y\in U\setminus E\})=\bigvee\{\theta(a,y)\odot f(y)\mid y\in E\}\). If \(y\in E\), then \(F(y)=F(a)\). As \(\theta(a,y)\odot f(y)\leq f(y)\leq F(y)=F(a)\), we obtain: \(F(a)=\bigvee\{\theta(a,y)\odot f(y)\mid y\in E\}\leq\bigvee\{f(y)\mid y\in E\}\leq F(a)\). This implies \(F(a)=\bigvee\{f(y)\mid y\in E\}\). Because \(f\) has a finite range, the set \(\{f(y)\mid y\in E\}\) is finite, and hence we can write \(F(a)=\max\{f(y)\mid y\in E\}\).

(ii) By definition \(G(a)=\underline{\theta}(g)(a)=\bigwedge\{\theta(a,y)\rhd g(y)\mid y\in U\}\). If \(y\notin\mathcal{E}\), then \((a,y)\notin\varrho(G)\), because \(\mathcal{E}\) is a maximal \(\varepsilon(G)\) class, and hence \(G(a)\neq\theta(a,y)\rhd G(y)\). Thus we have \(G(a)<\theta(a,y)\rhd G(y)\), according to Proposition 3.2(iii). Since \(G=\underline{\theta}(g)\leq g\), we obtain \(G(a)<\theta(a,y)\rhd g(y)\), for all \(y\in U\setminus\mathcal{E}\).
As \(\theta\) and \(g\) are of a finite range, the set \(\{\theta(a,y)\rhd g(y)\mid y\in U\setminus\mathcal{E}\}\) is finite, whence we get \(G(a)<\bigwedge\{\theta(a,y)\rhd g(y)\mid y\in U\setminus\mathcal{E}\}\). This yields \(G(a)=(\bigwedge\{\theta(a,y)\rhd g(y)\mid y\in\mathcal{E}\})\wedge(\bigwedge\{\theta(a,y)\rhd g(y)\mid y\in U\setminus\mathcal{E}\})=\bigwedge\{\theta(a,y)\rhd g(y)\mid y\in\mathcal{E}\}\). If \(y\in\mathcal{E}\), then \(G(y)=G(a)\). Since \(\theta(a,y)\rhd g(y)\geq 1\rhd g(y)=g(y)\geq G(y)=G(a)\), we obtain \(G(a)=\bigwedge\{\theta(a,y)\rhd g(y)\mid y\in\mathcal{E}\}\geq\bigwedge\{g(y)\mid y\in\mathcal{E}\}\geq G(a)\), and this implies \(G(a)=\bigwedge\{g(y)\mid y\in\mathcal{E}\}\). Since \(\{g(y)\mid y\in\mathcal{E}\}\) is a finite set, we can write \(G(a)=\min\{g(y)\mid y\in\mathcal{E}\}\).

The following corollary is immediate:

**Corollary 4.7**.: _Assume that \(\theta\) and \(f,g\in\mathcal{F}(U)\) are of a finite range, and let \(a\in U\) and \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\). (i) If \(\{a\}\) is a maximal \(E(F)\) class, then \(F(a)=f(a)\). (ii) If \(\{a\}\) is a maximal \(\varepsilon(G)\) class, then \(G(a)=g(a)\)._

**Corollary 4.8**.: _Assume that \(\theta\) and \(f\in\mathcal{F}(U)\) are of a finite range, \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(f)\), and let \(E\) be a maximal \(E(F)\) class and \(\mathcal{E}\) a maximal \(\varepsilon(G)\) class. (i) If every \(\{x\}\subseteq E\) is a maximal \(\varepsilon(G)\) class, then there exists a \(u\in E\) with \(F(u)=G(u)\). (ii) If every \(\{x\}\subseteq\mathcal{E}\) is a maximal \(E(F)\) class, then there exists a \(v\in\mathcal{E}\) with \(F(v)=G(v)\)._

Proof.: (i) In view of Proposition 4.6(i), for each \(x\in E\) we have \(F(x)=\max\{f(y)\mid y\in E\}\), i.e. \(F(x)=f(u)\), for some \(u\in E\).
As \(\{u\}\) is a maximal \(\varepsilon(G)\) class, by applying Corollary 4.7(ii) with \(g:=f\) we get \(G(u)=f(u)\). Hence \(F(u)=G(u)\). (ii) is proved dually.

**Example 4.9**.: Let us consider the similarity relation \(\theta\), a fuzzy set \(h\) and its approximations \(F=\overline{\theta}(h)\), \(G=\underline{\theta}(h)\) given in Figure 1 and Table 1.

Figure 1: The fuzzy similarity relation \(\theta\) of Example 4.9

The quasiorders \(R(F)\) and \(\varrho(G)\), their \(E(F)\) and \(\varepsilon(G)\) equivalence classes and the partial orders induced on the factor-sets are given in Figure 2. Loops are not drawn for any relation. As \(\theta\) is symmetric, its edges are undirected, and those with \(\theta(x,y)=0\) are not shown either. The maximal \(\varepsilon(G)\) classes are \(\mathcal{E}_{1}\) and \(\mathcal{E}_{4}\), and the maximal \(E(F)\) classes are \(E_{1}\) and \(E_{5}\). In all our examples the approximations are defined by the min t-norm and the KD-implicator \(\max(1-x,y)\). Clearly, all statements in 4.1, 4.2, 4.8 hold.

## 5 A characterization of fuzzy rough sets

In case of an equivalence \(\varrho\subseteq U\times U\), the sets \(\{X\subseteq U\mid X_{\varrho}=X\}\) and \(\{X\subseteq U\mid X^{\varrho}=X\}\) coincide and their members are called \(\varrho\)_-definable_ subsets of \(U\). They can be described as those subsets of \(U\) which are the union of some \(\varrho\)-equivalence classes, and their set is denoted by \(\mathrm{Def}(U,\varrho)\).
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(u\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h(u)\) & 0 & 1 & 0.25 & 0.5 & 0.5 & 0.75 \\ \hline \(F(u)\) & 1 & 1 & 0.75 & 0.5 & 0.5 & 0.75 \\ \hline \(G(u)\) & 0 & 0 & 0.25 & 0.5 & 0.5 & 0.5 \\ \hline \end{tabular} \end{table} Table 1: The fuzzy set \(h\) of Example 4.9

Figure 2: The quasiorders, the factor-sets and their Hasse-diagrams for Example 4.9

The rough sets induced by an equivalence relation \(\varrho\subseteq U\times U\) can be characterized by using the _set of_ its _singletons_\(S=\{s\in U\mid\varrho(s)=\{s\}\}\), as follows: \((A,B)\) is a rough set of \(\varrho\) if and only if \((A,B)\in\mbox{Def}(U,\varrho)\times\mbox{Def}(U,\varrho)\), \(A\subseteq B\) and \(A\cap S=B\cap S\) (see e.g. [12]). In this section we will derive an analogous characterization for the fuzzy rough sets with finite ranges satisfying the conditions in (ID). For a fuzzy approximation space \((U,\theta)\) we will introduce the notations: \[\mbox{Fix}(\underline{\theta})=\{f\in{\cal F}(U)\mid\underline{\theta}(f)=f\},\,\mbox{Fix}(\overline{\theta})=\{f\in{\cal F}(U)\mid\overline{\theta}(f)=f\}.\] Unfortunately, in case of a \(\odot\)-similarity relation \(\mbox{Fix}(\underline{\theta})\) and \(\mbox{Fix}(\overline{\theta})\) coincide only for a left-continuous t-norm \(\odot\) and the R-implicator \(\rhd\) induced by it.

**Theorem 5.1**.: _Assume that the conditions in (ID) are satisfied and let \((U,\theta)\) be a fuzzy approximation space with a \(\odot\)-similarity relation \(\theta\) of a finite range, and \(F,G\in{\cal F}(U)\).
Then \((F,G)\) is a fuzzy rough set induced by a fuzzy set with a finite range, if and only if the following conditions hold: (1) \(G\in\mbox{Fix}(\underline{\theta})\), \(F\in\mbox{Fix}(\overline{\theta})\), \(G\leq F\), and \(F\) and \(G\) have finite ranges; (2) If \({\cal E}\) is a maximal \(\varepsilon(G)\) class such that each \(\{a\}\subseteq{\cal E}\) is a maximal \(E(F)\) class, then there exists an element \(u\in{\cal E}\) such that \(G(u)=F(u)\); (3) If \(E\) is a maximal \(E(F)\) class such that each \(\{a\}\subseteq E\) is a maximal \(\varepsilon(G)\) class, then there exists an element \(v\in E\) such that \(G(v)=F(v)\)._ Proof.: By definition, \((F,G)\) is a fuzzy rough set if there exists a map \(f\in{\cal F}(U)\) such that \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(f)\). Suppose that \(f\) has a finite range. We prove that the conditions of Theorem 5.1 are satisfied. Indeed, (1) Property (ID) implies \(\overline{\theta}(F)=F\), \(\underline{\theta}(G)=G\), hence \(F\in\mbox{Fix}(\overline{\theta})\) and \(G\in\mbox{Fix}(\underline{\theta})\). Clearly, \(G=\underline{\theta}(f)\leq\overline{\theta}(f)=F\). Because \(f\) has a finite range, in view of Lemma 2.2, \(F\) and \(G\) also have finite ranges. In view of Corollary 4.8, conditions (2) and (3) are also satisfied. Conversely, suppose that conditions (1), (2) and (3) are satisfied by \(F\) and \(G\). In order to prove that \((F,G)\) is a fuzzy rough set, we will construct a fuzzy set \(f\in{\cal F}(U)\) with \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(f)\). Since \(F\) and \(G\) are of a finite range, in view of Corollary 4.4, for each \(E(F)\) class \(E\) there exists a maximal \(E(F)\) class \(E_{M}\) such that \(E\leq_{R(F)}E_{M}\), and for any \(\varepsilon(G)\) class \({\cal E}\) there is a maximal \(\varepsilon(G)\) class \({\cal E}_{M}\) with \({\cal E}\leq_{\varrho(G)}{\cal E}_{M}\). 
Denote the family of maximal \(\varepsilon(G)\) classes by \(\{{\cal E}_{t}\mid t\in T\}\). As a first step, from each class \({\cal E}_{t}\), \(t\in T\), we select exactly one element \(a_{t}\in{\cal E}_{t}\) as follows:

1) If \({\cal E}_{t}\) contains an element \(p_{t}\) which does not belong to any maximal \(E(F)\) class, then we select it and set \(a_{t}:=p_{t}\).

2) If in \({\cal E}_{t}\) there is no element of type 1), however there exists an element \(q_{t}\in{\cal E}_{t}\) with \(G(q_{t})=F(q_{t})\), then we select it and set \(a_{t}:=q_{t}\).

3) If there are no elements of type 1) or 2) in \({\cal E}_{t}\), then we select an element \(r_{t}\in{\cal E}_{t}\) such that \(\{r_{t}\}\) is not a maximal \(E(F)\) class, and we set \(a_{t}:=r_{t}\).

First, we show that we can always carry out such a selection: assume by contradiction that in some class \({\cal E}_{t}\) there are no elements of type 1), 2) or 3). This means that for each \(a_{t}\in{\cal E}_{t}\) the set \(\{a_{t}\}\) is a maximal \(E(F)\) class. Then by Corollary 4.8(ii) there exists an element \(v\in{\cal E}_{t}\) with \(F(v)=G(v)\). Since this means that \(v\in{\cal E}_{t}\) is of type 2), this is a contradiction. As the next step, we construct a fuzzy set \(f\in{\cal F}(U)\) as follows: \[f(x)=\left\{\begin{array}{l}G(x),\,\mbox{if}\,\,x\in\{a_{t}\mid t\in T\};\\ F(x),\,\mbox{if}\,\,x\in U\setminus\{a_{t}\mid t\in T\}.\end{array}\right. \tag{3}\] By its construction, \(f\) also has a finite range. Now, we prove that in any maximal \(E(F)\) class \(E\) there exists an element \(u\in E\) with \(f(u)=F(u)\). By our construction, it suffices to show that any such class \(E\) contains an element \(x_{0}\in U\setminus\{a_{t}\mid t\in T\}\) or an element \(a_{t}=q_{t}\in E\) of type 2) with \(f(q_{t})=G(q_{t})=F(q_{t})\). By way of contradiction, assume that there is a maximal \(E(F)\) class \(E_{M}\) with \(E_{M}\subseteq\{a_{t}\mid t\in T\}\) and \(F(x)\neq f(x)=G(x)\), for all \(x\in E_{M}\).
Then \(E_{M}=\{a_{s}\mid s\in S\}\), for some nonempty \(S\subseteq T\). Observe that in this case \(E_{M}\) cannot contain elements of type 1) or 2). Hence, by our construction, for any element \(a_{s}\), \(s\in S\), the set \(\{a_{s}\}\) is not a maximal \(E(F)\) class. Thus \(E_{M}\) is not a one-element set, i.e. \(|S|\geq 2\). Observe also that we can exclude the case when each element \(a_{s}\), \(s\in S\) belongs to an \({\cal E}_{t}\) class with a single element. Indeed, in such a case each \(\{a_{s}\}\subseteq E_{M}\) would be a maximal \(\varepsilon(G)\) class, and by Corollary 4.8(i) we would obtain \(G(a_{s_{0}})=F(a_{s_{0}})\), for some \(a_{s_{0}}\in E_{M}\), contrary to our assumption. Hence there exists an element \(a_{s^{*}}\in E_{M}\) which was chosen from a maximal \(\varepsilon(G)\) class \({\cal E}_{s^{*}}\) with \(|{\cal E}_{s^{*}}|\geq 2\). Since \(a_{s^{*}}\in E_{M}\cap{\cal E}_{s^{*}}\), in view of Corollary 4.2(i), we have \(E_{M}\subseteq{\cal E}_{s^{*}}\) or \({\cal E}_{s^{*}}\subseteq E_{M}\). Since both \(E_{M}\) and \({\cal E}_{s^{*}}\) have at least two elements, both cases would imply that from the class \({\cal E}_{s^{*}}\) at least two elements had been inserted into the set \(\{a_{t}\mid t\in T\}\), in contradiction to our construction of \(\{a_{t}\mid t\in T\}\). Thus we have proved that any maximal \(E(F)\) class \(E\) contains an element \(u\in E\) with \(f(u)=F(u)\). It is also clear that, by our construction, from each maximal \(\varepsilon(G)\) class \(\mathcal{E}_{t}\), \(t\in T\), an element \(v=a_{t}\in\mathcal{E}_{t}\) has been selected with \(f(v)=G(v)\). Since by definition \(G\leq f\leq F\), applying Proposition 4.5 we obtain \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(f)\), and our proof is completed.

## 6 Further properties of \(E(F)\) and \(\varepsilon(G)\) classes

In this section we deduce some additional properties of \(E(F)\) and \(\varepsilon(G)\) classes which will be used to prove our main Theorem 7.4.
Throughout this section we assume that for all \(x,y\in[0,1]\) the condition \[x\odot y=\min(x,y),\quad x\rhd y:=\max(n(x),y),\quad n\text{ is an involutive negator}\tag{C}\] holds, and that \(\theta\) is a similarity relation. Then \(\rhd\) is an S-implicator, and for \(n(x)=1-x\) we re-obtain the Kleene-Dienes implicator; hence our \(\rhd\) extends it. Clearly, if (C) holds, then (D) and (ID) are also satisfied. **Proposition 6.1**.: _(i) \(E\subseteq U\) is a maximal \(E(F)\) class if and only if \(\theta(a,z)<F(a)=F(b)\leq\theta(a,b)\), for all \(a,b\in E\) and \(z\notin E\); (ii) \(\mathcal{E}\subseteq U\) is a maximal \(\varepsilon(G)\) class if and only if \(n(\theta(a,b))\leq G(a)=G(b)<n(\theta(a,z))\), for all \(a,b\in\mathcal{E}\) and \(z\notin\mathcal{E}\)._ Proof.: (i) If \(E\subseteq U\) is a maximal \(E(F)\) class, then by Lemma 4.1(i) and (iii), we have \(F(a)=F(b)\leq\theta(a,b)\) and \(\theta(a,z)<F(a)\), for all \(a,b\in E\) and \(z\notin E\). Conversely, let \(E\subseteq U\) and assume that all the relations from (i) are satisfied. Then \(F(a)=\min(\theta(a,b),F(b))\), for all \(a,b\in E\), i.e. we get \((a,b)\in R(F)\) for all \(a,b\in E\), and in view of Corollary 3.3, we have \((a,z)\notin R(F)\) for all \(z\notin E\). Hence \((a,b)\in R(F)\cap R(F)^{-1}=E(F)\) holds for all \(a,b\in E\), and \((a,z)\notin E(F)\) for each \(z\notin E\). This means that \(E\) is an \(E(F)\) class. We also get \(E\nleq[z]_{E(F)}\) for all \(z\notin E\), because \((a,z)\notin R(F)\). Thus \(E\) is a maximal \(E(F)\) class. (ii) If \(\mathcal{E}\subseteq U\) is a maximal \(\varepsilon(G)\) class, then in view of Lemma 4.1(iv) \(n(\theta(a,b))\leq G(a)=G(b)\), for all \(a,b\in\mathcal{E}\), and we have \(\mathcal{E}\nleq[z]_{\varepsilon(G)}\), for any \(z\notin\mathcal{E}\). Then \((a,z)\notin\varrho(G)\) implies \(G(a)<n(\theta(a,z))\), according to Lemma 4.1(iv). The converse implication is proved analogously to (i).
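Under condition (C) both approximations are directly computable on a finite universe. The following sketch implements them with the standard negator \(n(x)=1-x\); the universe, the similarity relation and the fuzzy set below are illustrative assumptions, not data taken from the paper.

```python
def n(x):
    return 1 - x  # involutive negator; with this choice the S-implicator is Kleene-Dienes

def upper(theta, f, U):
    """Upper approximation under (C): (theta-upper f)(x) = sup_y min(theta(x,y), f(y))."""
    return {x: max(min(theta[x][y], f[y]) for y in U) for x in U}

def lower(theta, f, U):
    """Lower approximation under (C): (theta-lower f)(x) = inf_y max(n(theta(x,y)), f(y))."""
    return {x: min(max(n(theta[x][y]), f[y]) for y in U) for x in U}

U = ['a', 'b', 'c']
# Assumed similarity relation: reflexive, symmetric and min-transitive.
theta = {'a': {'a': 1.0, 'b': 0.75, 'c': 0.25},
         'b': {'a': 0.75, 'b': 1.0, 'c': 0.25},
         'c': {'a': 0.25, 'b': 0.25, 'c': 1.0}}
f = {'a': 1.0, 'b': 0.1, 'c': 0.5}

F, G = upper(theta, f, U), lower(theta, f, U)
assert all(G[x] <= f[x] <= F[x] for x in U)   # G <= f <= F
assert upper(theta, F, U) == F                # fixed points, as property (ID) requires
assert lower(theta, G, U) == G
```

With these assumed data, \(F=(1,0.75,0.5)\) and \(G=(0.25,0.1,0.5)\) on \((a,b,c)\).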
**Lemma 6.2**.: _Let \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\), for some \(f,g\in\mathcal{F}(U)\), and let \(E\) be an \(E(F)\) class and \(\mathcal{E}\) be an \(\varepsilon(G)\) class such that \(E\cap\mathcal{E}\neq\emptyset\). Then the following assertions hold: (i) If \((x,y)\notin R(F)\), for some \(x\in E\) and \(y\in\mathcal{E}\), then \((x,z)\in R(F)\) implies \((y,z)\in\varrho(G)\), for all \(z\notin E\cup\mathcal{E}\)._ _(ii) If \((x,y)\notin\varrho(G)\), for some \(x\in{\cal E}\) and \(y\in E\), then \((x,z)\in\varrho(G)\) implies \((y,z)\in R(F)\), for all \(z\notin E\cup{\cal E}\)._ Proof.: Let \(a\in E\cap{\cal E}\). (i) Observe that the relations \((x,y)\notin R(F)\) and \((x,a)\in R(F)\) exclude \((a,y)\in R(F)\). Thus \((a,y)\notin R(F)\) yields \(F(a)>\theta(a,y)\), by Corollary 3.3. Now, let \((x,z)\in R(F)\) and assume by contradiction \((y,z)\notin\varrho(G)\). Then \(a,y\in{\cal E}\) and \(z\notin{\cal E}\) imply \(\theta(a,z)<\theta(a,y)\). On the other hand, \((a,x)\in R(F)\) and \((x,z)\in R(F)\) imply \((a,z)\in R(F)\). Hence, Proposition 6.1(i) yields \(F(a)\leq\theta(a,z)<\theta(a,y)\), a contradiction to \(F(a)>\theta(a,y)\). This proves \((y,z)\in\varrho(G)\). (ii) By Proposition 3.2(iv) we have \(\varrho(G)=R(n(G))\), \(R(F)=\varrho(n(F))\), and by Lemma 4.1 (v), \({\cal E}\) is an \(E(n(G))\) class, and \(E\) is an \(\varepsilon(n(F))\) class. Hence \((x,y)\notin\varrho(G)\) for some \(x\in{\cal E}\) and \(y\in E\) and \((x,z)\in\varrho(G)\) is equivalent to \((x,y)\notin R(n(G))\) and \((x,z)\in R(n(G))\), therefore, \(n(G)=n(\underline{\theta}(g))=\overline{\theta}(n(g))\) and \(n(F)=n(\overline{\theta}(f))=\underline{\theta}(n(f))\) form a pair that replaces in the context of (ii) the pair \((F,G)\) from (i). Thus \((y,z)\in\varrho(n(F))=R(F)\), in view of (i). 
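The quasiorders \(R(F)\), \(\varrho(G)\) and their classes can be computed in the same setting. The sketch below uses the pointwise descriptions invoked in the proofs above (via Corollary 3.3 and Lemma 4.1(iv)) under condition (C) with \(n(x)=1-x\); the data \(\theta\), \(F\), \(G\) are illustrative assumptions with \(F\in\text{Fix}(\overline{\theta})\) and \(G\in\text{Fix}(\underline{\theta})\).

```python
U = ['a', 'b', 'c']
theta = {'a': {'a': 1.0, 'b': 0.75, 'c': 0.25},   # assumed similarity relation
         'b': {'a': 0.75, 'b': 1.0, 'c': 0.25},
         'c': {'a': 0.25, 'b': 0.25, 'c': 1.0}}
F = {'a': 1.0, 'b': 0.75, 'c': 0.5}   # an upper approximation for this theta
G = {'a': 0.25, 'b': 0.1, 'c': 0.5}   # the matching lower approximation

# (x,y) in R(F)   iff  F(x) <= theta(x,y);
# (x,y) in rho(G) iff  n(theta(x,y)) <= G(x).
R_F = {(x, y) for x in U for y in U if F[x] <= theta[x][y]}
rho_G = {(x, y) for x in U for y in U if 1 - theta[x][y] <= G[x]}

def all_classes(rel):
    """Classes of the equivalence rel intersected with its inverse (E(F), resp. eps(G))."""
    eq = {(x, y) for (x, y) in rel if (y, x) in rel}
    return {frozenset(y for y in U if (x, y) in eq) for x in U}

def maximal_classes(rel):
    """A class is maximal iff none of its elements is rel-below an outside point."""
    return {C for C in all_classes(rel)
            if not any((x, z) in rel for x in C for z in U if z not in C)}

# Here every class is a singleton; {a} and {c} are the maximal E(F) classes,
# while {b} and {c} are the maximal eps(G) classes.  Note that G(c) = F(c),
# in line with conditions (2)-(3) of the theorem in Section 5.
```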
**Corollary 6.3**.: _Let \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\), for some \(f,g\in{\cal F}(U)\) and let \(E\) be an \(E(F)\) class and \({\cal E}\) an \(\varepsilon(G)\) class such that \(E\cap{\cal E}\neq\emptyset\). (i) If \({\cal E}\) is a maximal \(\varepsilon(G)\) class, then either \((x,y)\in R(F)\) for all \(x\in E\) and \(y\in{\cal E}\), or \(E\subseteq{\cal E}\) and there is no \(t\in E\) and \(z\notin{\cal E}\) with \((t,z)\in R(F)\). (ii) If \({\cal E}\) is a maximal \(\varepsilon(G)\) class with \(E\subsetneq{\cal E}\) and there is no element \(x\in E\) and \(y\in{\cal E}\setminus E\) with \((x,y)\in R(F)\), then \(E\) is a maximal \(E(F)\) class. (iii) If \(E\) is a maximal \(E(F)\) class, then either \((x,y)\in\varrho(G)\) for all \(x\in{\cal E}\) and \(y\in E\), or \({\cal E}\subseteq E\) and there is no \(t\in{\cal E}\) and \(z\notin E\) with \((t,z)\in\varrho(G)\). (iv) If \(E\) is a maximal \(E(F)\) class such that \({\cal E}\subsetneq E\) and there is no element \(x\in{\cal E}\) and \(y\in E\setminus{\cal E}\) with \((x,y)\in\varrho(G)\), then \({\cal E}\) is a maximal \(\varepsilon(G)\) class._ Proof.: (i) Let \({\cal E}\) be a maximal \(\varepsilon(G)\) class and assume \(E\nsubseteq{\cal E}\). Then there exists \(a\in E\setminus{\cal E}\), and for all \(y\in{\cal E}\), \((y,a)\notin\varrho(G)\) by the maximality of \({\cal E}\). Hence, in view of Corollary 4.2(ii) we have \((a,y)\in R(F)\), and because \((x,a)\in R(F)\) for each \(x\in E\), we get \((x,y)\in R(F)\) for all \(x\in E\) and \(y\in{\cal E}\). Consider now the case when \(E\subseteq{\cal E}\) and there are \(x\in E\), \(y\in{\cal E}\) with \((x,y)\notin R(F)\). Then \(E\cup{\cal E}={\cal E}\). Assume that there exist some elements \(t\in E\) and \(z\notin{\cal E}\) with \((t,z)\in R(F)\). Then \((x,t)\in R(F)\) also yields \((x,z)\in R(F)\). Now, applying Lemma 6.2(i) we obtain \((y,z)\in\varrho(G)\).
Since \({\cal E}\) is a maximal \(\varepsilon(G)\) class and \(z\notin{\cal E}\), this is not possible, and this means that the second part of (i) holds. (ii) Suppose that for all \(x\in E\) and \(y\in{\cal E}\backslash E\) we have \((x,y)\notin R(F)\), and let \(z\notin{\cal E}\). In view of Lemma 6.2(i), \((x,z)\in R(F)\) for some \(x\in E\) would imply \((y,z)\in\varrho(G)\), for all \(y\in\mathcal{E}\), in contradiction to the fact that \(\mathcal{E}\) is a maximal \(\varepsilon(G)\) class. Thus we deduce \((x,z)\notin R(F)\), for all \(x\in E\) and \(z\notin E\). This means that \(E\) is a maximal \(E(F)\) class. The proofs of (iii) and (iv) are duals of the proofs of (i) and (ii). **Proposition 6.4**.: _Let \(\theta\) be a similarity relation with a finite range, and \(F=\overline{\theta}(f)\), \(G=\underline{\theta}(g)\), for some \(f,g\in\mathcal{F}(U)\). (i) If \(\{a\}\subseteq U\) is a maximal \(E(F)\) class, then for any \(h\in\mathcal{F}(U)\) with \(\overline{\theta}(h)(a)\geq F(a)\), we have \(\overline{\theta}(h)(a)=h(a)\). (ii) If \(\{b\}\subseteq U\) is a maximal \(\varepsilon(G)\) class, then for any \(h\in\mathcal{F}(U)\) with \(\underline{\theta}(h)(b)\leq G(b)\), we have \(\underline{\theta}(h)(b)=h(b)\)._ Proof.: (i) If \(\{a\}\subseteq U\) is a maximal \(E(F)\) class, then \(F(a)>\theta(a,y)\), for all \(y\in U\), \(y\neq a\), according to Corollary 3.3. Now, we can write: \(\overline{\theta}(h)(a)=\bigvee\{\min(\theta(a,y),h(y))\mid y\in U\}=\) \(h(a)\vee(\bigvee\{\min(\theta(a,y),h(y))\mid y\in U\setminus\{a\}\})\), and \(\bigvee\{\min(\theta(a,y),h(y))\mid y\in U\setminus\{a\}\}\leq\bigvee\{\theta( a,y)\mid y\in U\setminus\{a\}\}<F(a)\leq\overline{\theta}(h)(a)\), because \(\theta\) is of a finite range. This implies \(\overline{\theta}(h)(a)=h(a)\). (ii) If \(\{b\}\subseteq U\) is a maximal \(\varepsilon(G)\) class, then \(G(b)<n(\theta(b,y))\), for all \(y\in U\), \(y\neq b\), according to Corollary 3.3.
We can write: \(\underline{\theta}(h)(b)=\bigwedge\{\max(n(\theta(b,y)),h(y))\mid y\in U\}=\) \(h(b)\wedge(\bigwedge\{\max(n(\theta(b,y)),h(y))\mid y\in U\setminus\{b\}\})\), and \(\bigwedge\{\max(n(\theta(b,y)),h(y))\mid y\in U\setminus\{b\}\}\geq\bigwedge\{n (\theta(b,y))\mid y\in U\setminus\{b\}\}>G(b)\geq\underline{\theta}(h)(b)\), since \(\theta\) is of a finite range. This yields \(\underline{\theta}(h)(b)=h(b)\).

## 7 The lattice of fuzzy rough sets

Clearly, fuzzy rough sets corresponding to an approximation space \((U,\theta)\) can be ordered as follows: \[\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\leq\big{(}\underline {\theta}(g),\overline{\theta}(g)\big{)}\Leftrightarrow\underline{\theta}(f) \leq\underline{\theta}(g)\text{ and }\overline{\theta}(f)\leq\overline{\theta}(g), \tag{4}\] obtaining a poset \((\mathcal{F}\mathcal{R}(U,\theta),\leq)\). If \(\theta\) is reflexive, then \((\mathbf{0},\mathbf{0})\) and \((\mathbf{1},\mathbf{1})\) are its least and greatest elements. If the conditions in (D) hold, \(n(\overline{\theta}(f))=\underline{\theta}(n(f))\) and \(n(\underline{\theta}(f))=\overline{\theta}(n(f))\) imply \((n(\overline{\theta}(f)),n(\underline{\theta}(f)))\in\mathcal{F}\mathcal{R}(U,\theta)\), for all \(f\in\mathcal{F}(U)\). As \(n\) is an involutive negator, \(\Phi\colon\mathcal{F}\mathcal{R}(U,\theta)\to\mathcal{F}\mathcal{R}(U,\theta)\), \(\Phi(\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)})=(n(\overline{\theta}(f)),n(\underline{\theta}(f)))\) is an involution, i.e. \(\Phi(\Phi\left(\underline{\theta}(f),\overline{\theta}(f)\right))=\big{(} \underline{\theta}(f),\overline{\theta}(f)\big{)}\).
Since \(\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\leq\big{(}\underline {\theta}(g),\overline{\theta}(g)\big{)}\Leftrightarrow(n(\overline{\theta}(g)),n(\underline{\theta}(g)))\leq(n(\overline{\theta}(f)),n(\underline{\theta}(f)))\), we have \[\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\leq\big{(}\underline {\theta}(g),\overline{\theta}(g)\big{)}\Leftrightarrow\Phi\left(\underline{ \theta}(g),\overline{\theta}(g)\right)\leq\Phi\left(\underline{\theta}(f), \overline{\theta}(f)\right),\] meaning that \(\Phi\) is a dual order-isomorphism. Thus \((\mathcal{FR}(U,\theta),\leq)\) is a self-dual poset whenever the conditions in (D) hold. In this section we will deduce some conditions under which \((\mathcal{FR}(U,\theta),\leq)\) is a lattice. Now let \(L\) be a complete sublattice of \([0,1]\), and let \(\mathcal{F}(U,L)\) stand for the family of all fuzzy sets \(f\colon U\to L\). The system of all \(f\in\mathcal{F}(U,L)\) with a finite range is denoted by \(\mathcal{F}_{fr}(U,L)\). If \(L=[0,1]\), then we write simply \(\mathcal{F}_{fr}(U)\). As \(0,1\in L\), we have \(\mathbf{0},\mathbf{1}\in\mathcal{F}_{fr}(U,L)\). It is obvious that for any \(f_{1},f_{2}\in\mathcal{F}_{fr}(U,L)\), \(f_{1}\lor f_{2}=\max(f_{1},f_{2})\) and \(f_{1}\wedge f_{2}=\min(f_{1},f_{2})\) have finite ranges and their values are in \(L\), hence \((\mathcal{F}_{fr}(U,L),\leq)\) is a bounded distributive lattice. Clearly, for any \(f\in\mathcal{F}(U,L)\) with a finite range and any negator \(n\), the fuzzy set \(n(f)\) also has a finite range, i.e. \(n(f)\in\mathcal{F}_{fr}(U,L)\). Further, if the relation \(\theta\) has a finite range, then in view of Lemma 2.2, for any \(f\in\mathcal{F}_{fr}(U)\): \(\underline{\theta}(f),\overline{\theta}(f)\in\mathcal{F}_{fr}(U)\). In all that follows, suppose that condition (C) holds with \(n(L)\subseteq L\), and \(\theta\colon U\times U\to L\) is a similarity relation.
Then \[\overline{\theta}(f)(x) =\bigvee\{\min(\theta(x,y),f(y))\mid y\in U\}\text{ and }\] \[\underline{\theta}(f)(x) =\bigwedge\{\max(n(\theta(x,y)),f(y))\mid y\in U\},\] for all \(x\in U\). As \(L\) is closed w.r.t. arbitrary joins and meets, and \(n(L)\subseteq L\), we get that \(\underline{\theta}(f),\overline{\theta}(f)\in\mathcal{F}_{fr}(U,L)\). Now consider the poset defined on \[\mathcal{H}:=\{\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\mid f \in\mathcal{F}_{fr}(U,L)\}.\] We will prove that \((\mathcal{H},\leq)\) is a lattice, and moreover, if \(U\) or \(L\) is finite, then it is a complete lattice. This approach is motivated by the following examples: 1) If \(U\) is a finite set, then \(\theta\) and all \(f\in\mathcal{F}(U)\) have finite ranges. Hence for \(L=[0,1]\) we have \(\mathcal{F}_{fr}(U,L)=\mathcal{F}(U)\), and \((\mathcal{H},\leq)\) equals \((\mathcal{FR}(U,\theta),\leq)\). 2) If \(L\) is a finite chain with \(0,1\in L\), then any \(f\in\mathcal{F}(U,L)\) has a finite range, hence \(\mathcal{F}_{fr}(U,L)=\mathcal{F}(U,L)\), and \((\mathcal{H},\leq)\) is the same as \((\mathcal{FR}(U,L),\leq)\). **Remark 7.1**.: _(a) The relations \(\underline{\theta}(f_{1})\wedge\underline{\theta}(f_{2})=\underline{\theta} \left(f_{1}\wedge f_{2}\right)\) and \(\overline{\theta}(f_{1})\vee\overline{\theta}(f_{2})=\overline{\theta}(f_{1} \lor f_{2})\) always hold (see e.g. [4]) for any \(f_{1},f_{2}\in\mathcal{F}(U)\). Assume now that condition (C) holds, or that \(\odot\) is a left-continuous t-norm and \(\rhd\) is its \(R\)-implicator. It is known (see e.g. [16]) that in this case the equalities_ \(\bigwedge\{\,\,\underline{\theta}(f_{i})\mid i\in I\}=\,\,\underline{\theta} \left(\bigwedge\{f_{i}\mid i\in I\}\right)\)_, \(\bigvee\{\overline{\theta}(f_{i})\mid i\in I\}=\overline{\theta}\left( \bigvee\{f_{i}\mid i\in I\}\right)\) also hold for any (nonempty) system \(f_{i}\in\mathcal{F}(U)\), \(i\in I\).
(b) If now \(L\subseteq[0,1]\) is a complete lattice and \(\theta\colon U\times U\to L\), then clearly, for any \(f_{i}\in\mathcal{F}(U,L)\), \(i\in I\) we get \(\bigwedge\{f_{i}\mid i\in I\},\bigvee\{f_{i}\mid i\in I\}\in\mathcal{F}(U,L)\) and \(\bigwedge\{\,\underline{\theta}(f_{i})\mid i\in I\}=\underline{\theta}\left(\bigwedge \{f_{i}\mid i\in I\}\right)\in\mathcal{F}(U,L)\), \(\overline{\theta}\left(\bigvee\{f_{i}\mid i\in I\}\right)\in\mathcal{F}(U,L)\). (c) As in this case conditions from (ID) also hold, in view of [4], for a \(\odot\)-similarity relation \(\theta\), \(f\mapsto\underline{\theta}(f)\), \(f\in\mathcal{F}(U,L)\) is an interior operator, and the map \(f\mapsto\overline{\theta}(f)\), \(f\in\mathcal{F}(U,L)\) is a closure operator. Hence \(\left(\text{Fix}_{L}\left(\underline{\theta}\right),\leq\right)\) and \(\left(\text{Fix}_{L}\left(\overline{\theta}\right),\leq\right)\) are complete lattices, where \(\text{Fix}_{L}\left(\underline{\theta}\right):=\{f\in\mathcal{F}(U,L)\mid \underline{\theta}(f)=f\}\) and \(\text{Fix}_{L}\left(\overline{\theta}\right):=\{f\in\mathcal{F}(U,L)\mid \overline{\theta}(f)=f\}\)._ **Proposition 7.2**.: _Assume that conditions in (ID) are satisfied, and let \(L\subseteq[0,1]\) be a complete lattice and \(\theta\colon U\times U\to L\) be a \(\odot\)-similarity relation. Then \(\left(\text{Fix}_{L}\left(\overline{\theta}\right),\leq\right)\) and \(\left(\text{Fix}_{L}\left(\underline{\theta}\right),\leq\right)\) are complete sublattices of \(\mathcal{F}(U,L)\)._ Proof.: Let \(f_{i}\in\text{Fix}_{L}\left(\overline{\theta}\right)\), \(i\in I\) arbitrary. Then, in view of Remark 7.1, \(\bigvee\{f_{i}\mid i\in I\}\in\mathcal{F}(U,L)\), and \(\overline{\theta}\left(\bigvee\{f_{i}\mid i\in I\}\right)=\bigvee\{\overline {\theta}(f_{i})\mid i\in I\}=\bigvee\{f_{i}\mid i\in I\}\). Hence \(\bigvee\{f_{i}\mid i\in I\}\in\text{Fix}_{L}\left(\overline{\theta}\right)\). 
As \(\text{Fix}_{L}\left(\overline{\theta}\right)\) is the system of closed sets of the operator \(f\mapsto\overline{\theta}(f)\) and \(f_{i}\in\text{Fix}_{L}\left(\overline{\theta}\right)\), \(i\in I\), we also have \(\bigwedge\limits_{i\in I}f_{i}\in\text{Fix}_{L}\left(\overline{\theta}\right)\). Hence \(\left(\text{Fix}_{L}\left(\overline{\theta}\right),\leq\right)\) is a complete sublattice of \(\left(\mathcal{F}(U,L),\leq\right)\). The claim that \(\left(\text{Fix}_{L}\left(\underline{\theta}\right),\leq\right)\) is complete sublattice of \(\left(\mathcal{F}(U,L),\leq\right)\) is proved dually. **Corollary 7.3**.: _Let \(\theta\colon U\times U\to L\) be a similarity relation with a finite range on \(U\), \(f_{i}\in\mathcal{F}(U,L)\), \(i\in I\), \(F=\bigwedge\{\overline{\theta}(f_{i})\mid i\in I\}\) and let \(\{a\}\subseteq U\) be a maximal \(E(F)\) class. Then \(F(a)=\bigwedge\{f_{i}(a)\mid i\in I\}\)._ Proof.: In view of Proposition 7.2 we have \(F=\bigwedge\{\overline{\theta}(f_{i})\mid i\in I\}\in\text{Fix}_{L}(\overline{ \theta})\), i.e. \(F=\overline{\theta}(F)\). Since \(\overline{\theta}(f_{i})(a)\geq F(a)\), \(i\in I\), by using Proposition 6.4(i) we obtain \(\overline{\theta}(f_{i})(a)=f_{i}(a)\), for all \(i\in I\). This yields \(F(a)=\bigwedge\{f_{i}(a)\mid i\in I\}\). **Theorem 7.4**.: _Let \(\theta\colon U\times U\to L\) be a similarity relation of a finite range, and assume that condition (C) holds with a negator satisfying \(n(L)\subseteq L\). (i) If the fuzzy sets \(\bigwedge\limits_{i\in I}f_{i}\), \(\bigwedge\limits_{i\in I}\overline{\theta}(f_{i})\), \(f_{i}\in\mathcal{F}(U,L)\), \(i\in I\) have finite ranges, then the infimum of fuzzy rough sets \(\left(\underline{\theta}(f_{i}),\overline{\theta}(f_{i})\right)\), \(i\in I\) exists in \(\left(\mathcal{F}\mathcal{R}(U,L),\leq\right)\) and its components have finite ranges. 
(ii) \(\left(\mathcal{H},\leq\right)=\left(\{\left(\underline{\theta}(f),\overline{ \theta}(f)\right)\mid f\in\mathcal{F}_{fr}(U,L)\},\leq\right)\) is a lattice. (iii) If \(U\) or \(L\) is finite, then \(\left(\mathcal{F}\mathcal{R}(U,L),\leq\right)\) is a complete lattice._ Proof.: (i) Denote \(G=\underline{\theta}(\bigwedge\limits_{i\in I}f_{i})\) and \(F=\bigwedge\limits_{i\in I}\overline{\theta}(f_{i})\). Then \(G,F\in\mathcal{F}(U,L)\), by Remark 7.1(b), and we have \(\underline{\theta}(G)=G\) and \(\overline{\theta}(\overline{\theta}(f_{i}))=\overline{\theta}(f_{i})\), \(i\in I\), according to Remark 7.1(c). Thus \(G\in\text{Fix}_{L}(\underline{\theta})\). Since \(\theta\) and \(\bigwedge\limits_{i\in I}f_{i}\) have finite ranges, \(G\) also has a finite range. As \(\overline{\theta}(f_{i})\in\mbox{Fix}_{L}(\overline{\theta})\), Proposition 7.2 gives \(F\in\mbox{Fix}_{L}(\overline{\theta})\), and by assumption \(F\) has a finite range. Clearly, \(G=\underline{\theta}(\bigwedge\limits_{i\in I}f_{i})\leq\overline{\theta}(f_{i})\), for all \(i\in I\), whence \(G\leq F\). Using \(G\) and \(F\) we will construct a fuzzy set \(f\in\mathcal{F}(U,L)\) such that \(\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\) equals \(\inf\{(\underline{\theta}(f_{i}),\overline{\theta}(f_{i}))\mid i\in I\}\). First, from each maximal \(\varepsilon(G)\) class \(\mathcal{E}_{t}\), \(t\in T\), we select exactly one element \(b_{t}\in\mathcal{E}_{t}\) as follows: 1) If \(\mathcal{E}_{t}\) contains an element \(q_{t}\in\mathcal{E}_{t}\) with \(G(q_{t})=F(q_{t})\), then we set \(b_{t}:=q_{t}\). 2) If there are no such elements in \(\mathcal{E}_{t}\), but there exists an \(s_{t}\in\mathcal{E}_{t}\) such that \(\{s_{t}\}\) is not an \(E(F)\) class, then we choose it and set \(b_{t}:=s_{t}\).
3) If there are no elements of type 1) or 2) in \(\mathcal{E}_{t}\), then we select an element \(r_{t}\in\mathcal{E}_{t}\) such that \(\{r_{t}\}\) is not a maximal \(E(F)\) class, and we set \(b_{t}:=r_{t}\). Now we show that such a selection can always be carried out. Indeed, assume by contradiction that in some class \(\mathcal{E}_{z}\) there are no elements of type 1), 2) or 3). This means that for each \(x\in\mathcal{E}_{z}\) the set \(\{x\}\) is a maximal \(E(F)\) class. Then in view of Corollary 7.3, we have \(F(x)=\bigwedge\limits_{i\in I}f_{i}(x)\), for each \(x\in\mathcal{E}_{z}\). As \(\bigwedge\limits_{i\in I}f_{i}\) has a finite range, by Proposition 4.6(ii) we get \(G(y)=\min\{\bigwedge\limits_{i\in I}f_{i}(x)\mid x\in\mathcal{E}_{z}\}\), for all \(y\in\mathcal{E}_{z}\), because \(G=\underline{\theta}(\bigwedge\limits_{i\in I}f_{i})\). Hence, there exists an element \(v\in\mathcal{E}_{z}\) such that \(G(v)=\bigwedge\limits_{i\in I}f_{i}(v)=F(v)\). Since this means that \(v\) is an element of type 1) in \(\mathcal{E}_{z}\), we have a contradiction. As a next step, we construct a fuzzy set \(f\in\mathcal{F}(U,L)\) as follows: \[f(x)=\left\{\begin{array}{l}G(x),\,\mbox{if }x\in\{b_{t}\mid t\in T\};\\ F(x),\,\mbox{if }x\in U\setminus\{b_{t}\mid t\in T\}\end{array}\right. \tag{5}\] As \(G,F\in\mathcal{F}(U,L)\), we have \(f\in\mathcal{F}(U,L)\). Since \(F\) and \(G\) have finite ranges, \(f\) also has a finite range. As from each maximal \(\varepsilon(G)\) class \(\mathcal{E}_{t}\), \(t\in T\), an element \(b_{t}\in\mathcal{E}_{t}\) was selected with \(f(b_{t})=G(b_{t})\), and \(f\geq G\) holds, by Proposition 4.5(ii) we have \(\underline{\theta}(f)=G=\underline{\theta}(\bigwedge\limits_{i\in I}f_{i})\). We prove that \(\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\) is the infimum of the system \((\underline{\theta}(f_{i}),\overline{\theta}(f_{i})),i\in I\).
Thus we are going to show that \(\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\) is a lower bound of \((\underline{\theta}(f_{i}),\overline{\theta}(f_{i})),i\in I\) and for any \(h\in\mathcal{F}(U,L)\) with \((\underline{\theta}(h),\overline{\theta}(h))\leq(\underline{\theta}(f_{i}), \overline{\theta}(f_{i}))\), \(i\in I\) we have \((\underline{\theta}(h),\overline{\theta}(h))\leq\big{(}\underline{\theta}(f), \overline{\theta}(f)\big{)}\). As by definition \(f\leq F\), we also have \(\overline{\theta}(f)\leq\overline{\theta}(F)=F\leq\overline{\theta}(f_{i})\), \(i\in I\). Since \(\underline{\theta}(f)=\underline{\theta}(\bigwedge\limits_{i\in I}f_{i})\leq \underline{\theta}(f_{i})\), \(i\in I\), now \(\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\) is a lower bound of \((\underline{\theta}(f_{i}),\overline{\theta}(f_{i})),i\in I\) and condition \(\overline{\theta}(h)\leq\overline{\theta}(f_{i})\), \(i\in I\) is equivalent to \(\overline{\theta}(h)\leq\bigwedge\limits_{i\in I}\overline{\theta}(f_{i})=F\). Since \(\underline{\theta}(f)=\underline{\theta}(\bigwedge\limits_{i\in I}f_{i})= \bigwedge\limits_{i\in I}\underline{\theta}(f_{i})\), we also have \(\underline{\theta}(h)\leq\underline{\theta}(f_{i})\), \(i\in I\Longleftrightarrow\underline{\theta}(h)\leq\bigwedge\limits_{i\in I} \underline{\theta}(f_{i})=\underline{\theta}(f)=G\). Hence to prove \((\underline{\theta}(h),\overline{\theta}(h))\leq\big{(}\underline{\theta}(f), \overline{\theta}(f)\big{)}\), for all sets \(h\in\mathcal{F}(U,L)\) with \((\underline{\theta}(h),\overline{\theta}(h))\leq(\underline{\theta}(f_{i}), \overline{\theta}(f_{i}))\), \(i\in I\), it is enough to show that \(\overline{\theta}(h)\leq\overline{\theta}(f)\) holds for any \(h\in\mathcal{F}(U,L)\) with \(\underline{\theta}(h)\leq G\) and \(\overline{\theta}(h)\leq F\). Take any \(h\) with this property and any \(x\in U\). 
If \(x\in U\setminus\{b_{t}\mid t\in T\}\) or \(x=b_{t_{0}}\) for some \(t_{0}\in T\) with \(G(b_{t_{0}})=F(b_{t_{0}})\), then \(f(x)=F(x)\), hence \(h(x)\leq\overline{\theta}(h)(x)\leq F(x)=f(x)\leq\overline{\theta}(f)(x)\). Let \(x=b_{t_{0}}\), for some \(t_{0}\in T\) such that \(G(b_{t_{0}})\neq F(b_{t_{0}})\). Then \(f(b_{t_{0}})=G(b_{t_{0}})\), by our construction. If \(\{b_{t_{0}}\}\) is a maximal \(\varepsilon(G)\) class, then in view of Proposition 6.4(ii), \(\underline{\theta}(h)(b_{t_{0}})\leq G(b_{t_{0}})\) implies \(h(b_{t_{0}})=\underline{\theta}(h)(b_{t_{0}})\leq G(b_{t_{0}})\), i.e. we obtain \(h(x)\leq G(x)=f(x)\leq\overline{\theta}(f)(x)\). Assume now that \(\mathcal{E}_{t_{0}}\), the maximal \(\varepsilon(G)\) class containing \(b_{t_{0}}\), has at least two elements. Denote the \(E(F)\) class containing \(b_{t_{0}}\) by \(E_{0}\). If \(E_{0}\not\subseteq\mathcal{E}_{t_{0}}\), then there exists a \(z_{0}\in E_{0}\setminus\mathcal{E}_{t_{0}}\), and we have \((y,z_{0})\notin\varrho(G)\) for each \(y\in\mathcal{E}_{t_{0}}\), because \(\mathcal{E}_{t_{0}}\) is a maximal \(\varepsilon(G)\) class. Hence, by Corollary 4.2(ii), we get \((z,y)\in R(F)\) for all \(z\in E_{0}\) and \(y\in\mathcal{E}_{t_{0}}\). Thus \((b_{t_{0}},c)\in R(F)\) for any \(c\in\mathcal{E}_{t_{0}}\), \(c\neq b_{t_{0}}\). Clearly, \(c\notin\{b_{t}\mid t\in T\}\), because only a single element \(b_{t_{0}}\) was selected from \(\mathcal{E}_{t_{0}}\), and hence \(f(c)=F(c)\). Since \((b_{t_{0}},c)\in R(F)\) and \(f\leq F\), by applying Proposition 3.4(i) we get \(\overline{\theta}(f)(b_{t_{0}})=F(b_{t_{0}})\). Thus we obtain \(h(b_{t_{0}})\leq\overline{\theta}(h)(b_{t_{0}})\leq F(b_{t_{0}})=\overline{ \theta}(f)(b_{t_{0}})\), i.e. \(h(x)\leq\overline{\theta}(f)(x)\). If \(E_{0}\subseteq\mathcal{E}_{t_{0}}\), then we claim that \((b_{t_{0}},e)\in R(F)\) for some element \(e\in\mathcal{E}_{t_{0}}\setminus\{b_{t_{0}}\}\) (such an element exists, because \(|\mathcal{E}_{t_{0}}|\geq 2\)). 
Clearly, if \(E_{0}\) has at least two elements, then \(e\) can be chosen as any element from \(E_{0}\setminus\{b_{t_{0}}\}\). If \(E_{0}=\{b_{t_{0}}\}\), then in view of our construction, the element \(b_{t_{0}}\) is of type 3), i.e. \(\{b_{t_{0}}\}\) is an \(E(F)\) class which is not maximal. However, if \((b_{t_{0}},e)\notin R(F)\) held for each \(e\in\mathcal{E}_{t_{0}}\setminus\{b_{t_{0}}\}\), then in view of Corollary 6.3(ii), \(\{b_{t_{0}}\}\) would be a maximal \(E(F)\) class, contrary to our hypothesis. As no element different from \(b_{t_{0}}\) was selected from \(\mathcal{E}_{t_{0}}\), we have \(e\notin\{b_{t}\mid t\in T\}\), and hence \(f(e)=F(e)\). Since \((b_{t_{0}},e)\in R(F)\), repeating now the previous argument, we obtain again \(h(b_{t_{0}})\leq\overline{\theta}(f)(b_{t_{0}})\), i.e. \(h(x)\leq\overline{\theta}(f)(x)\). Hence for each \(x\in U\) we obtained \(h(x)\leq\overline{\theta}(f)(x)\). Thus \(h\leq\overline{\theta}(f)\). In view of [4], this implies \(\overline{\theta}(h)\leq\overline{\theta}\left(\overline{\theta}(f)\right)= \overline{\theta}(f)\). Thus \(\big{(}\underline{\theta}(f),\overline{\theta}(f)\big{)}\) is the infimum of \((\underline{\theta}(f_{i}),\overline{\theta}(f_{i}))\), \(i\in I\). Since \(f\in\mathcal{F}(U,L)\) has a finite range, \(\underline{\theta}(f),\overline{\theta}(f)\in\mathcal{F}(U,L)\) also have finite ranges. (ii) For any \(f_{1},f_{2}\in\mathcal{F}_{fr}(U,L)\), \(f_{1}\wedge f_{2}\) has a finite range. By Lemma 2.2, as \(\overline{\theta}(f_{1}),\overline{\theta}(f_{2})\in\mathcal{F}_{fr}(U,L)\), \(\overline{\theta}(f_{1})\wedge\overline{\theta}(f_{2})\) also has a finite range. Applying now (i) with \(I=\{1,2\}\), we get that \((\mathcal{H},\leq)\) is a \(\wedge\)-semilattice. Since condition (C) implies property (D), \((\mathcal{H},\leq)\) is self-dual, and hence it is a lattice. (iii) If \(U\) or \(L\) is finite, then \(\theta\) and each \(f\in\mathcal{F}(U,L)\) have finite ranges, i.e.
\(\mathcal{F}(U,L)=\mathcal{F}_{fr}(U,L)\). As for any \(f_{i}\in\mathcal{F}(U,L)\), \(i\in I\) we have \(\bigwedge\limits_{i\in I}f_{i}\), \(\bigwedge\limits_{i\in I}\overline{\theta}(f_{i})\in\mathcal{F}(U,L)\), the fuzzy sets \(\bigwedge\limits_{i\in I}f_{i}\) and \(\bigwedge\limits_{i\in I}\overline{\theta}(f_{i})\) also have finite ranges. Hence, in view of (i), \(\inf\{(\underline{\theta}(f_{i}),\overline{\theta}(f_{i}))\mid i\in I\}\) always exists, i.e. \((\mathcal{H},\leq)\) is a complete \(\wedge\)-semilattice. Since \((\mathcal{H},\leq)\) is self-dual, it is a complete lattice. **Remark 7.5**.: _If for a system \(f_{i}\in\mathcal{F}(U,L)\), \(i\in I\) we have \(\left(\underline{\theta}(f),\overline{\theta}(f)\right)=\left(\bigwedge\{ \underline{\theta}\left(f_{i}\right)\mid i\in I\},\bigwedge\{\overline{\theta} \left(f_{i}\right)\mid i\in I\}\right)\), for an \(f\in\mathcal{F}(U,L)\), then \(\left(\underline{\theta}(f),\overline{\theta}(f)\right)\) equals the infimum of \(\left(\underline{\theta}(f_{i}),\overline{\theta}(f_{i})\right),i\in I\). Indeed, for any \(h\in\mathcal{F}(U,L)\) with \(\left(\underline{\theta}(h),\overline{\theta}(h)\right)\leq\left(\underline{ \theta}(f_{i}),\overline{\theta}(f_{i})\right)\), \(i\in I\) we get \(\left(\underline{\theta}(h),\overline{\theta}(h)\right)\leq\left(\underline{ \theta}(f),\overline{\theta}(f)\right)\), meaning that \(\left(\underline{\theta}(f),\overline{\theta}(f)\right)\) is the infimum of \(\left(\underline{\theta}(f_{i}),\overline{\theta}(f_{i})\right),i\in I\).
Analogously, \(\left(\bigvee\{\underline{\theta}\left(f_{i}\right)\mid i\in I\},\bigvee\{ \overline{\theta}\left(f_{i}\right)\mid i\in I\}\right)\) is the supremum of \(\left(\underline{\theta}(f_{i}),\overline{\theta}(f_{i})\right),i\in I\), whenever \(\left(\bigvee\{\underline{\theta}\left(f_{i}\right)\mid i\in I\},\bigvee\{ \overline{\theta}\left(f_{i}\right)\mid i\in I\}\right)\in\mathcal{F}\mathcal{ R}(U,L)\)._

**Example 7.6**.: Here we show how a meet \(\left(\underline{\theta}(f_{1}),\overline{\theta}(f_{1})\right)\wedge\left( \underline{\theta}(f_{2}),\overline{\theta}(f_{2})\right)\) can be calculated by using construction (5) in the proof of Theorem 7.4. The similarity relation \(\theta\) is given in Figure 3, and \(L=\{0,0.1,0.25,0.5,0.75,1\}\). The fuzzy sets \(f_{1},f_{2}\) and their approximations are given in Table 2.

| \(u\) | \(a\) | \(b\) | \(c\) |
| --- | --- | --- | --- |
| \(f_{1}(u)\) | 1 | 0.1 | 0.5 |
| \(\overline{\theta}(f_{1})(u)\) | 1 | 0.75 | 0.5 |
| \(\underline{\theta}(f_{1})(u)\) | 0.25 | 0.1 | 0.5 |

| \(u\) | \(a\) | \(b\) | \(c\) |
| --- | --- | --- | --- |
| \(f_{2}(u)\) | 0.1 | 1 | 0.5 |
| \(\overline{\theta}(f_{2})(u)\) | 0.75 | 1 | 0.5 |
| \(\underline{\theta}(f_{2})(u)\) | 0.1 | 0.25 | 0.5 |

Table 2: The fuzzy sets \(f_{1}\) and \(f_{2}\) of Example 7.6 and their approximations

Figure 3: The fuzzy similarity relation \(\theta\) of Example 7.6

The corresponding fuzzy rough sets are represented in the form \(\alpha_{1}=\begin{pmatrix}1&0.75&0.5\\ 0.25&0.1&0.5\end{pmatrix}\) and \(\alpha_{2}=\begin{pmatrix}0.75&1&0.5\\ 0.1&0.25&0.5\end{pmatrix}\), where the first row stands for the upper approximations and the second row for the lower approximations.
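Example 7.6 can be checked numerically under the minimum t-norm and the Kleene-Dienes implicator. Since Figure 3 is not reproduced here, the relation \(\theta\) below is an assumed reconstruction chosen to be consistent with Table 2 (\(\theta(a,b)=0.75\), \(\theta(a,c)=\theta(b,c)=0.25\)); with it, the sketch recovers Table 2 and confirms by brute force over \(L^{3}\) that the componentwise meet of the approximations, computed next in the example, is not a fuzzy rough set.

```python
from itertools import product

U = ['a', 'b', 'c']
L_vals = [0, 0.1, 0.25, 0.5, 0.75, 1]
theta = {'a': {'a': 1.0, 'b': 0.75, 'c': 0.25},   # assumed reconstruction of Figure 3
         'b': {'a': 0.75, 'b': 1.0, 'c': 0.25},
         'c': {'a': 0.25, 'b': 0.25, 'c': 1.0}}

def upper(f):
    return {x: max(min(theta[x][y], f[y]) for y in U) for x in U}

def lower(f):
    return {x: min(max(1 - theta[x][y], f[y]) for y in U) for x in U}

f1 = {'a': 1.0, 'b': 0.1, 'c': 0.5}
f2 = {'a': 0.1, 'b': 1.0, 'c': 0.5}

# The eight approximation rows of Table 2:
assert upper(f1) == {'a': 1.0, 'b': 0.75, 'c': 0.5}
assert lower(f1) == {'a': 0.25, 'b': 0.1, 'c': 0.5}
assert upper(f2) == {'a': 0.75, 'b': 1.0, 'c': 0.5}
assert lower(f2) == {'a': 0.1, 'b': 0.25, 'c': 0.5}

# Componentwise meets F, G of the approximations: no h in F(U, L) realizes
# the pair (G, F), so it is not a fuzzy rough set.
F = {x: min(upper(f1)[x], upper(f2)[x]) for x in U}   # (0.75, 0.75, 0.5)
G = {x: min(lower(f1)[x], lower(f2)[x]) for x in U}   # (0.1, 0.1, 0.5)
assert not any(upper(dict(zip(U, h))) == F and lower(dict(zip(U, h))) == G
               for h in product(L_vals, repeat=len(U)))

# The repair of Theorem 7.4: every eps(G) class here is a maximal singleton,
# so construction (5) yields f := G, whose approximation pair has lower row
# (0.1, 0.1, 0.5) and upper row (0.25, 0.25, 0.5).
assert lower(G) == G
assert upper(G) == {'a': 0.25, 'b': 0.25, 'c': 0.5}
```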
Computing the meets \(F=\overline{\theta}(f_{1})\wedge\overline{\theta}(f_{2})\) and \(G=\underline{\theta}(f_{1})\wedge\underline{\theta}(f_{2})\), we obtain the pair \(\begin{pmatrix}F\\ G\end{pmatrix}=\begin{pmatrix}0.75&0.75&0.5\\ 0.1&0.1&0.5\end{pmatrix}\), which is not a fuzzy rough set. The quasiorders induced by \(F\) and \(G\) are given in Figure 4. Observe that each element is a maximal \(\varepsilon(G)\) class. Hence, applying formula (5) from the proof of Theorem 7.4, we obtain the reference set \(f:=G\), and as a corresponding fuzzy rough set \(\begin{pmatrix}0.25&0.25&0.5\\ 0.1&0.1&0.5\end{pmatrix}\).

**Example 7.7**.: Let us consider the similarity relation \(\theta\) in Figure 5, and set \(L=\{0,0.5,1\}\). The lattice \((\mathcal{H},\leq)\) of fuzzy rough sets is shown in Figure 6.

Figure 4: The quasiorders \(\varrho(G)\) and \(R(F)\).

Figure 5: The similarity relation \(\theta\) of Example 7.7

## Conclusions

The properties of a poset formed by fuzzy rough sets depend strongly both on the framework in which the approximations are defined (t-norm \(\odot\) - implicator \(\rhd\)), and on the properties of the approximation space \((U,\theta)\). The majority of our arguments work only under some finiteness conditions imposed on the domain or range of the fuzzy reference sets and of the relation \(\theta\). We hope that these conditions can be replaced with weaker ones (see e.g. [25]) or with conditions related to some topology defined on \(U\). In the case of a finite universe or range set \(L\), we were able to show that \((\mathcal{FR}(U,L),\leq)\) is a lattice only for a similarity relation \(\theta\) in a particular context (min t-norm and S-implicator), by using property (D). It would be interesting to check whether the proof can be extended to fuzzy quasiorders or other types of relations. 
Theorem 5.1 seems to suggest that such a result can be obtained even in a general context (of a t-norm and a related implicator) for a t-similarity relation \(\theta\) with some (strong) particular properties, even in the absence of the property (D). This can serve as a further research goal. Even under the conditions of Theorem 7.4, the lattices formed by fuzzy rough sets are not distributive in general; this is shown in Example 7.8 below. Hence an interesting question is whether these lattices share any characteristic properties. We can see that for some particular approximation spaces, as in Example 7.7, we even obtain a particular distributive lattice (a so-called double Stone lattice). Therefore, it makes sense to ask under what conditions on \((U,\theta)\) we obtain a distributive lattice \(\mathcal{FR}(U,L)\).

Figure 6: The lattice of fuzzy rough sets for Example 7.7

**Example 7.8**.: Let \(U\), \(L\) and the similarity relation \(\theta\) be as in Example 7.6, and let us consider the fuzzy rough sets \(\alpha_{1},\alpha_{2}\) from Example 7.6 and \(c=\begin{pmatrix}0.5&0.5&0.5\\ 0.5&0.5&0.5\end{pmatrix}\). We prove that \((\alpha_{1}\wedge\alpha_{2})\lor c\neq(\alpha_{1}\lor c)\wedge(\alpha_{2}\lor c)\). Indeed, by Example 7.6, \(\alpha_{1}\wedge\alpha_{2}=\begin{pmatrix}0.25&0.25&0.5\\ 0.1&0.1&0.5\end{pmatrix}<c\), and hence \((\alpha_{1}\wedge\alpha_{2})\lor c=c\). In view of Remark 7.5 we have \(\alpha_{1}\lor c=\begin{pmatrix}1&0.75&0.5\\ 0.5&0.5&0.5\end{pmatrix}\) and \(\alpha_{2}\lor c=\begin{pmatrix}0.75&1&0.5\\ 0.5&0.5&0.5\end{pmatrix}\), because \(\alpha_{1}\lor c=\begin{pmatrix}\overline{\theta}(h_{1})\\ \underline{\theta}(h_{1})\end{pmatrix}\) and \(\alpha_{2}\lor c=\begin{pmatrix}\overline{\theta}(h_{2})\\ \underline{\theta}(h_{2})\end{pmatrix}\), where \(h_{1}=1/a+0.5/b+0.5/c\) and \(h_{2}=0.5/a+1/b+0.5/c\). 
Now, observe that \(\begin{pmatrix}\overline{\theta}(h_{1})\wedge\overline{\theta}(h_{2})\\ \underline{\theta}(h_{1})\wedge\underline{\theta}(h_{2})\end{pmatrix}= \begin{pmatrix}0.75&0.75&0.5\\ 0.5&0.5&0.5\end{pmatrix}\) is a fuzzy rough set induced by the fuzzy set \(m=0.75/a+0.5/b+0.5/c\). In view of Remark 7.5 this means that \((\alpha_{1}\lor c)\wedge(\alpha_{2}\lor c)=\begin{pmatrix}0.75&0.75&0.5\\ 0.5&0.5&0.5\end{pmatrix}\neq c\).
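The non-distributivity computation in Example 7.8 is likewise a finite check. As before, the similarity values \(\theta(a,b)=0.75\), \(\theta(a,c)=\theta(b,c)=0.25\) are an assumption reconstructed from Table 2 (the figures are not reproduced here); under this assumption the sketch confirms that the componentwise meets of the approximations of \(h_{1}\) and \(h_{2}\) form exactly the fuzzy rough set induced by \(m\), which differs from \(c\).

```python
# Example 7.8, checked numerically with the ASSUMED similarity values
# reconstructed from Table 2: theta(a,b) = 0.75, theta(a,c) = theta(b,c) = 0.25.
U = ("a", "b", "c")
pairs = {("a", "b"): 0.75, ("a", "c"): 0.25, ("b", "c"): 0.25}

def sim(u, v):
    return 1.0 if u == v else pairs.get((u, v), pairs.get((v, u)))

def upper(f):  # min t-norm: sup_v min(theta(u, v), f(v))
    return {u: max(min(sim(u, v), f[v]) for v in U) for u in U}

def lower(f):  # Kleene-Dienes implicator: inf_v max(1 - theta(u, v), f(v))
    return {u: min(max(1 - sim(u, v), f[v]) for v in U) for u in U}

h1 = {"a": 1.0, "b": 0.5, "c": 0.5}   # induces alpha_1 v c
h2 = {"a": 0.5, "b": 1.0, "c": 0.5}   # induces alpha_2 v c
m = {"a": 0.75, "b": 0.5, "c": 0.5}
c = {u: 0.5 for u in U}

# componentwise meets of the approximations of h1 and h2 ...
F = {u: min(upper(h1)[u], upper(h2)[u]) for u in U}
G = {u: min(lower(h1)[u], lower(h2)[u]) for u in U}
# ... give exactly the fuzzy rough set induced by m:
print(F == upper(m), G == lower(m))  # True True
# hence (alpha_1 v c) ^ (alpha_2 v c) = (0.75, 0.75, 0.5; 0.5, 0.5, 0.5) != c:
print((F, G) == (c, c))  # False
```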
arXiv:2309.05820 (http://arxiv.org/abs/2309.05820v2), submitted 2023-09-11
Rita Jiménez Rolland, Israel Morales
# On the large scale geometry of big mapping class groups of surfaces with a unique maximal end

###### Abstract.

Building on the work of K. Mann and K. Rafi [13], we analyze the large scale geometry of big mapping class groups of surfaces with a unique maximal end. We obtain a complete characterization of those that are globally CB, which does not require the _tameness_ condition. We prove that, for surfaces with a unique maximal end, any locally CB big mapping class group is CB generated, and we give an explicit criterion for determining which big mapping class groups are CB generated. Finally, we answer [13, Problem 6.12] by giving an example of a non-tame surface whose mapping class group is CB generated but is not globally CB.

## 1. Introduction

Let \(\Sigma\) be an infinite-type surface and \(\operatorname{Map}(\Sigma)\) be the group of all isotopy classes of orientation-preserving self-homeomorphisms of \(\Sigma\). This group is called the _(big) mapping class group_ of \(\Sigma\). If we equip the homeomorphism group with the compact-open topology then \(\operatorname{Map}(\Sigma)\) is a Polish group with respect to the quotient topology. For an overview of big mapping class groups the reader may consult [1]. Recently, C. Rosendal [14] used the notion of _coarsely bounded_ sets in order to extend the framework of geometric group theory to the broader context of topological groups. **Definition 1**.: Let \(G\) be a topological group. A subset of \(G\) is _CB (coarsely bounded)_ if it has finite diameter for every compatible left-invariant metric on \(G\). The group \(G\) is _locally CB_ if it admits a CB neighborhood of the identity. If \(G\) is generated by a CB subset we say that \(G\) is _CB generated_. 
With this language, Rosendal extends Gromov's fundamental theorem of geometric group theory showing that if \(A_{1}\) and \(A_{2}\) are two CB generating sets of a Polish group \(G\) then the respective word metrics on \(G\) are quasi-isometric [14, Proposition 2.72]. In other words, a CB generated Polish group has a well-defined quasi-isometric type. On the other hand, it was proved that being locally CB is a necessary condition for a Polish group to be CB generated [14, Theorem 1.2]. Based on Rosendal's framework, K. Mann and K. Rafi started in [13] the study of the large scale geometry of big mapping class groups. They obtained a classification of locally CB big mapping class groups [13, Theorem 1.4] and, under the assumption of _tameness_, the classification of CB generated big mapping class groups [13, Theorem 1.6]. Studying the mapping class groups of an infinite-type surface \(\Sigma\) can be complicated since it involves understanding the homeomorphism group of the space of ends of \(\Sigma\), which is denoted by \(\operatorname{E}(\Sigma)\). In order to address this, Mann and Rafi introduced a preorder on the space of ends of a surface. We say that an open subset \(U\) of an infinite-type surface \(\Sigma\) is a _neighborhood_ of an end \(x\in\mathrm{E}(\Sigma)\) if its ends space contains \(x\), that is, \(x\in\mathrm{E}(U)\). A surface \(\Sigma\) has _a unique maximal end_ if there is only one end \(x\) of \(\Sigma\) such that any neighborhood of \(x\) contains a copy of some neighborhood of any other end \(y\neq x\) (see Section 2 for a precise definition). In this paper we focus our study on the large scale geometry of big mapping class groups of surfaces with a unique maximal end. The big mapping class groups of these surfaces have exhibited behavior different from that of other big mapping class groups. 
For instance, \(\mathrm{Map}(\Sigma)\) has a dense conjugacy class if and only if \(\Sigma\) has no nondisplaceable finite-type subsurfaces and has a unique maximal end, see [11, 12]. In addition, some examples of big mapping class groups that do not satisfy the _automatic continuity_ property are in this class [13, 14]. We start our study by giving an alternative characterization of locally CB big mapping class groups for surfaces with a unique maximal end. **Theorem 1.1**.: _Let \(\Sigma\) be an infinite-type surface with a unique maximal end \(x\). Then \(\mathrm{Map}(\Sigma)\) is locally CB if and only if there is a connected finite-type subsurface \(K\) of \(\Sigma\) with the following properties:_ 1. \(\Sigma\setminus K=\Sigma_{0}\sqcup\Sigma_{1}\sqcup\cdots\sqcup\Sigma_{n}\)_, where the closure of each_ \(\Sigma_{i}\) _in_ \(\Sigma\) _is of infinite-type with one boundary component,_ \(g(\Sigma_{i})\in\{0,\infty\}\) _and_ \(x\in E(\Sigma_{0})\)_._ 2. _For any subsurface_ \(U\subseteq\Sigma_{0}\) _neighborhood of_ \(x\) _there is_ \(f_{U}\in Homeo^{+}(\Sigma)\) _such that either_ \(f_{U}(\Sigma_{0})\subseteq U\) _or_ \(f_{U}(\Sigma\setminus U)\subseteq U\)_._ _Moreover, if \(\mathrm{Map}(\Sigma)\) is locally CB then \(\mathcal{V}_{K}:=\{f\in\mathrm{Map}(\Sigma):\,f|_{K}=Id_{K}\}\) is a CB neighborhood of the identity._ **Remark 1**.: Notice that our condition (1) in Theorem 1.1 is the same as item (1) in [13, Theorem 1.4]. The differences between the statements lie in item (2). Briefly, for surfaces with a unique maximal end, item (2) in [13, Theorem 1.4] consists of the following: _i)_ \(\mathrm{E}(\Sigma_{0})\) is self-similar, _ii)_ each complementary component of \(K\) different from \(\Sigma_{0}\) is mapped inside \(\Sigma_{0}\) by a homeomorphism and, _iii)_ if \(K\) is not empty, for each neighborhood \(U\subseteq\Sigma_{0}\) of the unique maximal end there is a homeomorphism \(f\) such that \(f(\Sigma_{0})\subset U\). 
In our result, we do not require conditions _i)_ and _ii)_. Instead, we consider condition _iii)_ and include the possibility that for some neighborhoods \(U\subseteq\Sigma_{0}\) of the unique maximal end there exists a homeomorphism \(f_{U}\) such that \(f_{U}(\Sigma\setminus U)\subseteq U\). This last consideration does not appear in the statement of item (2) of [13, Theorem 1.4]. We prove that this possibility can only appear if \(\mathrm{Map}(\Sigma)\) is globally CB and the subsurface \(K\) of \(\Sigma\) can be taken to be empty; see Corollary 5.1. We use Theorem 1.1 to give an alternative proof of the self-similarity of the space of ends of an infinite-type surface with a unique maximal end whose mapping class group is locally CB; see also [13, Proposition 5.4]. **Theorem 1.2**.: _Let \(\Sigma\) be an infinite-type surface with a unique maximal end and suppose \(\mathrm{Map}(\Sigma)\) is locally CB. Then, the space of ends \(\mathrm{E}(\Sigma)\) is self-similar._ **Definition 2**.: A neighborhood \(U\) of an end \(x\) is _stable_[14, Definition 4.14] if for every neighborhood \(U^{\prime}\subseteq U\) of \(x\) there is a homeomorphic copy of \(U\) inside \(U^{\prime}\). We say that a surface \(\Sigma\) is _tame_[14, Definition 6.14] if every end of \(\Sigma\) which is either of maximal type or any immediate predecessor of an end of maximal type has a stable neighborhood. In [14], Mann and Rafi give a characterization of globally CB big mapping class groups under the hypothesis of tameness [14, Theorem 1.5]. We prove that in the case of surfaces with a unique maximal end, the tameness hypothesis is not needed. **Theorem 1.3**.: _Let \(\Sigma\) be an infinite-type surface with a unique maximal end and suppose that \(\operatorname{Map}(\Sigma)\) is locally CB. 
Then \(\operatorname{Map}(\Sigma)\) is globally CB if and only if the genus of \(\Sigma\) is zero or infinite._ Thanks to Theorem 1.3 we have a better understanding of those surfaces with locally but not globally CB mapping class group. **Corollary 1.4**.: _Let \(\Sigma\) be an infinite-type surface with a unique maximal end and suppose that \(\operatorname{Map}(\Sigma)\) is locally CB but not globally CB. Then \(\Sigma\) has finite nonzero genus and the space of ends of \(\Sigma\) is uncountable._ Under the assumption of tameness, [14, Theorem 1.6] states that a locally CB big mapping class group is CB generated if and only if the space of ends of the surface is not of _limit type_ and is of _finite rank_, see [14, Definitions 6.2 & 6.5]. For surfaces \(\Sigma\) with a unique maximal end, it can be shown that \(\operatorname{E}(\Sigma)\) is always of finite rank and not of limit type. We see that the tameness hypothesis is actually not needed in order to be CB generated. **Theorem 1.5**.: _Let \(\Sigma\) be an infinite-type surface with a unique maximal end and suppose that \(\operatorname{Map}(\Sigma)\) is locally CB. Then \(\operatorname{Map}(\Sigma)\) is CB generated._ Recall that a necessary condition for a group to be CB generated is that the group is locally CB. Our Theorem 1.5 shows that it is also a sufficient condition for big mapping class groups of surfaces with a unique maximal end. Recently, T. Hill [15] observed the same phenomenon for pure mapping class groups. Additionally, it follows that our Theorem 1.1 and [14, Theorem 1.4] give explicit criteria for determining which big mapping class groups are CB generated. **Remark 2**.: It follows from Theorem 1.5 and the work of Horbez, Qing and Rafi [14, Theorem 1 and Corollary 2] that for CB generated big mapping class groups of surfaces \(\Sigma\) with a unique maximal end, either 1. 
\(\operatorname{Map}(\Sigma)\) is globally CB and therefore the quasi-isometric type of \(\operatorname{Map}(\Sigma)\) is trivial or, 2. \(\operatorname{Map}(\Sigma)\) admits a continuous and non-elementary action by isometries on a hyperbolic space; in this situation, the space of non-trivial quasi-morphisms of \(\operatorname{Map}(\Sigma)\) has infinite dimension. **Answering a question of Mann and Rafi**. Thanks to our Theorem 1.5 we get a positive answer to [14, Problem 6.12]. **Theorem 1.6**.: _There exists a non-tame infinite-type surface whose mapping class group is CB generated but is not globally CB._ **Remark 3**.: In [10, Example 6.12] Mann and Rafi constructed a non-tame surface with a unique maximal end whose mapping class group is globally CB. We note that in this example the set of immediate predecessors of the unique maximal end is countable. We can modify the Mann-Rafi example by adding finite nonzero genus to obtain a non-tame surface with a unique maximal end. However, by Proposition 5.2 below, this new surface is not locally CB because the unique maximal end has countably many immediate predecessors. In our example, the equivalence class of each immediate predecessor of the unique maximal end is uncountable, the unique maximal end has stable neighborhoods, and some of the immediate predecessors of the unique maximal end do not have stable neighborhoods. Proof of Theorem 1.6.: Our example is inspired by the constructions carried out in [10, Sections 2 & 3]. 
For each \(n\in\mathbb{N}\), let \(D_{n}\) be the surface of genus zero with one boundary component and such that: 1) \(\operatorname{E}(D_{n})=C_{n}\sqcup Q_{n}\) where \(C_{n}\) is homeomorphic to a Cantor set and \(Q_{n}\) is a countable set, 2) \(\operatorname{E}(D_{n})\) has Cantor-Bendixson rank \(n\) and, 3) for each derived set \(\Omega\) of \(\operatorname{E}(D_{n})\) that has isolated points, the accumulation set of the isolated points of \(\Omega\) contains \(C_{n}\); see Figure 1. From the construction it follows that the ends in \(C_{n}\) are not comparable with the ends in \(C_{m}\) if \(n\neq m\). Let \(T\) be the surface obtained from the Cantor tree surface by placing \(2^{n}\) copies of \(D_{n}\) at each \(n\)th-level of the tree, one for each bifurcation. Let \(C\) denote the set of ends of \(T\) coming from the ends of the Cantor tree surface. Observe that each end \(x\) in \(C\) has no stable neighborhoods. Indeed, take a neighborhood \(U\) of \(x\) and let \(n_{U}\) be the smallest natural number such that \(C_{n_{U}}\cap\operatorname{E}(U)\neq\emptyset\). By construction, we can find a neighborhood \(V\subseteq U\) of \(x\) with \(\operatorname{E}(V)\) not containing points of \(C_{n_{U}}\). Given that the ends in \(C_{n}\) are not comparable with the ends in \(C_{m}\) if \(n\neq m\), \(V\) cannot contain homeomorphic copies of \(U\).

Figure 1. Surface \(D_{n+1}\) with one boundary component and whose space of ends has Cantor-Bendixson rank \(n+1\), with \((n+1)\)th-derived set homeomorphic to a Cantor set \(C_{n+1}\); each point of \(C_{n+1}\) is accumulated by points homeomorphic to \(\omega^{n}+1\).

Let \(g\) be a nonzero natural number and \(F_{g}\) be the surface of genus \(g\) and space of ends homeomorphic to \(\omega+1\). Finally, we define \(\Sigma\) to be the surface obtained from \(F_{g}\) by replacing a neighborhood of each of its isolated ends with a copy of the surface \(T\); see Figure 2. 
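The Cantor-Bendixson bookkeeping behind this construction can be made concrete: the ordinal space \(\omega^{n}+1=[0,\omega^{n}]\) has derived set homeomorphic to \([0,\omega^{n-1}]\) (its limit ordinals, after division by \(\omega\)), and \([0,\omega^{0}]\) is finite with empty derived set, so iterating gives rank \(n+1\). The sketch below is purely illustrative; the encoding of \([0,\omega^{n}]\) by its exponent is our own device, not part of the construction.

```python
# Cantor-Bendixson rank of the ordinal space [0, w^n] (i.e. the space w^n + 1),
# encoded by its exponent n.  The derived set of [0, w^n] is the set of limit
# ordinals <= w^n, a homeomorphic copy of [0, w^(n-1)] (divide by w); for
# n = 0 the space is finite, so its derived set is empty.
def derived(n):
    return n - 1 if n >= 1 else None  # None encodes the empty space

def cb_rank(n):
    rank = 0
    while n is not None:  # take derived sets until the space is empty
        n = derived(n)
        rank += 1
    return rank

# Ends of type w^n + 1 therefore have rank n + 1, matching the rank of
# E(D_{n+1}), whose (n+1)th derived set is the Cantor set C_{n+1}.
print([cb_rank(n) for n in range(4)])  # [1, 2, 3, 4]
```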
Note that the space of ends of \(\Sigma\) is self-similar, \(\Sigma\) has a unique maximal end and therefore the unique maximal end of \(\Sigma\) has stable neighborhoods. Moreover, each maximal end of \(T\) is an immediate predecessor end of the unique maximal end of \(\Sigma\) and, more importantly, some of them have no stable neighborhoods as we explained before. Therefore the surface \(\Sigma\) is not tame. Since \(\Sigma\) has finite nonzero genus, \(\operatorname{Map}(\Sigma)\) is not globally CB. Let \(K\) be the compact subsurface of \(\Sigma\) with two boundary components, of genus \(g\) and complementary components \(\Sigma_{0}\) and \(\Sigma_{1}\), where \(\Sigma_{1}\) is exactly the first copy of \(T\) and \(\Sigma_{0}\) contains the remaining copies of \(T\) in the construction of \(\Sigma\). The reader can verify that \(K\) is the desired subsurface in Theorem 1.1 for which \(\mathcal{V}_{K}\) defines a CB neighborhood of the identity. So, \(\operatorname{Map}(\Sigma)\) is locally CB and therefore it is CB generated by Theorem 1.5. **Outline**. In Section 2 we present the preliminaries, Section 3 is devoted to the proof of Theorem 1.1, Section 4 to the proof of Theorem 1.2 and Theorem 1.3, Section 5 to the proof of Corollary 1.4. Finally, in Section 6 we give the proof of Theorem 1.5. **Acknowledgements**. We are grateful for the financial support of CONAHCYT grant CF 2019-217392. The first author acknowledges funding from a DGAPA-UNAM PASPA sabbatical fellowship and the second author was supported by a DGAPA-UNAM postdoctoral fellowship. We thank K. Mann for kindly answering all our questions and A. Randecker and J. Hernandez-Hernandez for helpful comments on an earlier draft of this paper.

Figure 2. Non-tame surface with a unique maximal end and whose mapping class group is CB generated but not globally CB.

## 2. Preliminaries

**Topological surfaces**. All our surfaces are assumed to be connected, orientable and possibly with non-empty boundary. 
The boundary of a surface \(\Sigma\) is denoted by \(\partial\Sigma\) and is always assumed to be compact. A surface is of _finite type_ if its fundamental group is finitely generated. Otherwise, we say that it is of _infinite type_. Unless otherwise specified, infinite-type surfaces will be assumed to have empty boundary. Finite-type surfaces are classified, up to homeomorphisms, by their genus, number of punctures and number of boundary components. An infinite-type surface \(\Sigma\) with empty boundary is classified, up to homeomorphisms, by its genus (which can be infinite) and a pair of nested topological spaces \(\mathrm{E}_{\infty}(\Sigma)\subseteq\mathrm{E}(\Sigma)\). The space \(\mathrm{E}(\Sigma)\) is called the _space of ends_ of \(\Sigma\) and it is homeomorphic to a closed subset of the Cantor space. The space \(\mathrm{E}_{\infty}(\Sigma)\) is a closed subspace of the space of ends and it encodes all ends of \(\Sigma\) which are accumulated by genus. Moreover, \(\hat{\Sigma}:=\Sigma\cup\mathrm{E}(\Sigma)\) is compact and it is called the _Freudenthal compactification_ of \(\Sigma\). We refer the reader to the work of Richards [14] and the book of Ahlfors and Sario [1] for a detailed discussion of the classification of surfaces. Any homeomorphism \(f:\Sigma\to\Sigma\) has a unique homeomorphism extension \(\hat{f}:\hat{\Sigma}\to\hat{\Sigma}\). In particular, the restriction of \(\hat{f}\) to \(\mathrm{E}(\Sigma)\) induces a homeomorphism of the nested pair \((\mathrm{E}(\Sigma),\mathrm{E}_{\infty}(\Sigma))\) to itself. From [14] we have that if the nested pair of spaces \((A,B)\subseteq(\mathrm{E}(\Sigma),\mathrm{E}_{\infty}(\Sigma))\) is homeomorphic to the nested pair \((A^{\prime},B^{\prime})\subseteq(\mathrm{E}(\Sigma),\mathrm{E}_{\infty}(\Sigma))\) then there is a homeomorphism \(f:\Sigma\to\Sigma\) such that its extension \(\hat{f}\) sends the nested pair \((A,B)\) into \((A^{\prime},B^{\prime})\). 
We assume that any homeomorphism between subsets of \(\mathrm{E}(\Sigma)\) is induced by a homeomorphism of the surface \(\Sigma\). Abusing notation, we will usually write \(f\) both for the homeomorphism of the surface \(\Sigma\) and for its extension to \(\hat{\Sigma}\). A _simple closed curve_ in \(\Sigma\) is an embedding of the circle into \(\Sigma\). A simple closed curve is _essential_ if it is not homotopically trivial. All curves we consider in this paper will be essential, so we refer to them simply as _curves_. We say that a curve \(\alpha\) in \(\Sigma\) is _separating_ if \(\Sigma\setminus\alpha\) is disconnected. By a subsurface of \(\Sigma\) we mean a subspace \(S\subseteq\Sigma\) that is a surface itself (possibly with non-empty boundary). Unless otherwise specified, we assume that all boundary curves of a subsurface are separating curves in \(\Sigma\). Furthermore, any subsurface of finite type is assumed to have non-empty boundary. **Big mapping class groups**. The _mapping class group_ of a surface (of finite or infinite type) \(\Sigma\), denoted by \(\mathrm{Map}(\Sigma)\), is the group of all isotopy classes of orientation-preserving self-homeomorphisms of \(\Sigma\); if \(\partial\Sigma\neq\emptyset\), then we require that all homeomorphisms and isotopies fix \(\partial\Sigma\) pointwise. If we equip the homeomorphism group of \(\Sigma\) with the compact-open topology then \(\mathrm{Map}(\Sigma)\) is a Polish group with respect to the quotient topology. In recent literature, the mapping class groups of infinite-type surfaces are often called _big mapping class groups_. For the rest of the paper, \(\Sigma\) denotes an infinite-type surface. Moreover, any homeomorphism \(f\) of \(\Sigma\) to itself is assumed to be orientation-preserving. 
Given a subsurface \(S\subseteq\Sigma\) we denote by \(\mathcal{V}_{S}\) the subgroup of \(\mathrm{Map}(\Sigma)\) defined by all the homeomorphisms \(f:\Sigma\to\Sigma\) such that \(f|_{S}=Id_{S}\) up to isotopy. By Alexander's method [12], if \(S\) is a finite-type subsurface of \(\Sigma\) then \(\mathcal{V}_{S}\) is an open subgroup. Moreover, the collection of open subgroups \(\mathcal{V}_{S}\), where \(S\) runs over all finite-type subsurfaces of \(\Sigma\), forms a base of neighborhoods of the identity. Therefore, \(\operatorname{Map}(\Sigma)\) is first countable and, in particular, \(\operatorname{Map}(\Sigma)\) is a non-Archimedean1 group. Throughout the article, we often use the following fact: for any neighborhood \(V\) of the identity in \(\operatorname{Map}(\Sigma)\) there is a finite-type subsurface \(S\subseteq\Sigma\) such that \(\mathcal{V}_{S}\subseteq V\). Footnote 1: A Polish group is _non-Archimedean_ if the identity has a basis of open subgroups, see [1]. **Large scale geometry of Polish groups**. We use the following characterization of coarsely bounded sets (CB sets). **Theorem 2.1** (Proposition 2.7, [12]).: _Let \(G\) be a Polish group and \(A\) be a subset of \(G\). The following are equivalent_ 1. \(A\) _is CB._ 2. _For every neighborhood_ \(V\) _of the identity in_ \(G\)_, there is a finite subset_ \(F\subseteq G\) _and some_ \(k\geq 1\) _such that_ \(A\subseteq(FV)^{k}\)_._ ### Partial order on the space of ends We recall the partial order on the space of ends \(\operatorname{E}(\Sigma)\) of a surface \(\Sigma\) introduced by Mann and Rafi in [13]. **Definition 3**.: Let \(x,y\in\operatorname{E}(\Sigma)\). We define the binary relation on \(\operatorname{E}(\Sigma)\) where \(y\preceq x\) if for any neighborhood \(U_{x}\) of \(x\) in \(\operatorname{E}(\Sigma)\) there is a neighborhood \(U_{y}\) of \(y\) and a homeomorphism \(f\) of the surface \(\Sigma\) such that \(f(U_{y})\subseteq U_{x}\). 
We obtain an equivalence relation on \(\operatorname{E}(\Sigma)\) declaring that two ends \(x,y\in\operatorname{E}(\Sigma)\) are of the _same type_ if \(y\preceq x\) and \(x\preceq y\). Equivalently, \(x\) and \(y\) are of the same type if and only if there exists a homeomorphism \(h\) of \(\Sigma\) such that \(h(x)=y\); see [13, Theorem 1.2]. Define \(y\prec x\) if \(y\preceq x\) but \(x\) and \(y\) are not of the same type. The relation \(\prec\) defines a partial order on the set of equivalence classes of ends. **Proposition 2.2** (Proposition 4.7, [13]).: _The partial order \(\prec\) has maximal elements. Moreover, the equivalence class of a maximal element is either finite or a Cantor set._ We denote by \(E(x)\) the equivalence class of \(x\in\operatorname{E}(\Sigma)\) and by \(\mathcal{M}(\Sigma)\) the set of all maximal ends for \(\prec\). **Definition 4** (Unique maximal end).: If \(|\mathcal{M}(\Sigma)|=1\) we say that \(\Sigma\) has a _unique maximal end_. Mann and Rafi also introduced the notion of self-similar space of ends, which we recall now along with some of their results that will be needed for our proofs below. **Definition 5**.: We say that the space of ends \((\operatorname{E}(\Sigma),\operatorname{E}_{\infty}(\Sigma))\) of an infinite-type surface \(\Sigma\) is _self-similar_ if for any decomposition of \(\operatorname{E}(\Sigma)\) into pairwise disjoint clopen sets \[\operatorname{E}(\Sigma)=E_{1}\sqcup E_{2}\sqcup\ldots\sqcup E_{n}\] there exists a clopen set \(D\) in some \(E_{i}\) such that \((D,D\cap\operatorname{E}_{\infty}(\Sigma))\) is homeomorphic to \((\operatorname{E}(\Sigma),\operatorname{E}_{\infty}(\Sigma))\). **Definition 6**.: Let \(\Sigma\) be an infinite-type surface. A finite-type subsurface \(K\) of \(\Sigma\), possibly disconnected, is _nondisplaceable_ if for each homeomorphism \(f\) of \(\Sigma\) we have that \(f(K)\cap K\neq\emptyset\). **Theorem 2.3** (Theorem 1.9, [14]).: _Let \(\Sigma\) be an infinite-type surface. 
If \(\Sigma\) contains a nondisplaceable finite-type subsurface then \(\operatorname{Map}(\Sigma)\) is not globally CB._ **Proposition 2.4** (Proposition 3.1, [14]).: _Let \(\Sigma\) be an infinite-type surface of infinite or zero genus. If \(E(\Sigma)\) is self-similar then \(\operatorname{Map}(\Sigma)\) is globally CB._ **Lemma 2.5** (Lemma 4.12, [14]).: _Suppose \(\Sigma\) is an infinite-type surface with a unique maximal end and such that it has no nondisplaceable finite-type subsurfaces. Then \(E(\Sigma)\) is self-similar._ **Theorem 2.6**.: _Let \(\Sigma\) be an infinite-type surface with zero or infinite genus and with one maximal end. Then \(\operatorname{Map}(\Sigma)\) is globally CB if and only if \(E(\Sigma)\) is self-similar._ Proof.: Suppose that \(\operatorname{Map}(\Sigma)\) is globally CB. By Theorem 2.3, \(\Sigma\) does not have nondisplaceable finite-type subsurfaces, and then, by Lemma 2.5, \(E(\Sigma)\) is self-similar. The sufficiency part is given by Proposition 2.4. **Definition 7**.: Suppose \(\Sigma\) has a unique maximal end \(x\) and \(\alpha\) is a separating curve in \(\Sigma\). * The _interior_ of \(\alpha\) is defined as the only connected component of \(\Sigma\setminus\alpha\) that is a neighborhood of the unique maximal end \(x\). We denote it by \(\operatorname{Int}(\alpha)\). * The complement of \(\operatorname{Int}(\alpha)\cup\alpha\) in \(\Sigma\) is called the _exterior_ of \(\alpha\) and it is denoted by \(\operatorname{Ext}(\alpha)\). Observe that for each homeomorphism \(f\) of \(\Sigma\) \[\operatorname{Int}(f(\alpha))=f(\operatorname{Int}(\alpha))\qquad\text{and}\qquad\operatorname{Ext}(f(\alpha))=f(\operatorname{Ext}(\alpha)).\] If \(\Sigma\) has a unique maximal end \(x\), then any neighborhood of \(x\) contains a subsurface with one boundary component whose interior is a neighborhood of \(x\). This fact is often used throughout the work. Recall that we are assuming that subsurfaces have separating boundary curves. ## 3. 
Proof of Theorem 1.1 First we prove the necessity condition of Theorem 1.1 and, after three preparatory lemmas, we give the proof of the sufficiency part. For the necessity part we use the following lemma, which is a consequence of [14, Lemma 5.2]. **Lemma 3.1**.: _Let \(\Sigma\) be an infinite-type surface and \(K\) be a finite-type subsurface of \(\Sigma\). If \(\mathcal{V}_{K}\) is CB then every finite-type subsurface \(S\) (possibly disconnected) contained in \(\Sigma\setminus K\) is \(\operatorname{Map}(\Sigma)\)-displaceable._ Proof of the necessity part of Theorem 1.1.: We assume that \(\operatorname{Map}(\Sigma)\) is locally CB. Let \(V\) be a CB neighborhood of the identity in \(\operatorname{Map}(\Sigma)\). Take a connected finite-type subsurface \(K\) of \(\Sigma\) such that \(\mathcal{V}_{K}\subseteq V\). We have that \(\mathcal{V}_{K}\) is CB. By enlarging \(K\) (and therefore, shrinking \(\mathcal{V}_{K}\)) if necessary, we can assume that \(K\) satisfies item (1), that is, the closure of each complementary component of \(K\) in \(\Sigma\) is of infinite-type with one boundary component, either with zero or infinite genus. Without loss of generality, we can assume that the unique maximal end \(x\) is an end of \(\Sigma_{0}\). Now we prove item (2). Let \(U\) be the interior of a connected subsurface of \(\Sigma_{0}\) with one boundary component that is a neighborhood of \(x\) in \(\Sigma\). If \(U\) is isotopic to \(\Sigma_{0}\), it is enough to take \(f_{U}\) isotopic to \(Id_{\Sigma}\) such that \(f_{U}(U)=\Sigma_{0}\). Now, suppose that \(U\subseteq\Sigma_{0}\) is not isotopic to \(\Sigma_{0}\). Then there is a pair of pants \(P\subset\Sigma_{0}\) such that \(\partial U\sqcup\partial\Sigma_{0}\subseteq\partial P\) and \(\Sigma\setminus P=(\Sigma\setminus\overline{\Sigma_{0}})\sqcup U\sqcup W\). As \(\mathcal{V}_{K}\) is CB, by Lemma 3.1 there is a homeomorphism \(f\) such that \(f(P)\cap P=\emptyset\). 
We claim that, up to replacing \(f\) by its inverse, we can assume that \(f(P)\subset U\). Indeed, observe that either \(f(P)\subset U\) or \(f(P)\subset\Sigma\setminus U\). If \(f(P)\subset\Sigma\setminus U\) then \(P\subset\mathrm{Int}(f(\partial U))\). Given that \(U=\mathrm{Int}(\partial U)\) and \(U\) is a neighborhood of the unique maximal end, then \(\mathrm{Int}(f(\partial U))=f(\mathrm{Int}(\partial U))=f(U)\); then \(f^{-1}(P)\subset U\). Assume \(f(P)\subseteq U\) and set \(f_{U}:=f\). Again, as \(U\) is a neighborhood of \(x\) and \(\partial U\) is a separating curve in \(\Sigma\), there are two possibilities for \(f_{U}(\partial U)\): either \(\mathrm{Int}(f_{U}(\partial U))\subseteq U\) or \(\mathrm{Ext}(f_{U}(\partial U))\subseteq U\). Since \(f_{U}(\Sigma_{0})\) is a neighborhood of the unique maximal end \(x\) and \(\mathrm{Int}(\partial U)\subseteq\mathrm{Int}(\partial\Sigma_{0})\), if \(\mathrm{Int}(f_{U}(\partial U))\subseteq U\) then necessarily \(\mathrm{Int}(f_{U}(\partial\Sigma_{0}))\subseteq U\). In this case we obtain that \(f_{U}(\Sigma_{0})\subseteq U\) because \(\Sigma_{0}=\mathrm{Int}(\partial\Sigma_{0})\), see Figure 3 a). Finally, since \(\mathrm{Ext}(\partial U)=\Sigma\setminus(U\cup\partial U)\) and \(\mathrm{Ext}(f_{U}(\partial U))=f_{U}(\mathrm{Ext}(\partial U))\), if \(\mathrm{Ext}(f_{U}(\partial U))\subseteq U\) then \(f_{U}(\Sigma\setminus U)\subseteq U\), see Figure 3 b). In conclusion, either \(f_{U}(\Sigma_{0})\subseteq U\) or \(f_{U}(\Sigma\setminus U)\subseteq U\). For the proof of the sufficiency part of Theorem 1.1 we use three lemmas. 
Assume that \(\Sigma\) has a unique maximal end \(x\) and let \(K\) be a connected finite-type subsurface of \(\Sigma\) with complementary subsurfaces \(\Sigma_{0},\Sigma_{1},\ldots,\Sigma_{n}\), i.e., \[\Sigma\setminus K=\Sigma_{0}\sqcup\Sigma_{1}\sqcup\cdots\sqcup\Sigma_{n}.\] Additionally, suppose that the closure in \(\Sigma\) of each \(\Sigma_{i}\) is an infinite-type surface of zero or infinite genus with one boundary component and that \(\Sigma_{0}\) is a neighborhood of the unique maximal end \(x\). **Lemma 3.2**.: _Suppose \(U\subseteq\Sigma_{0}\) is a neighborhood of \(x\) such that for each subsurface \(\widetilde{U}\subseteq U\) that is a neighborhood of \(x\) there is a homeomorphism \(f_{\widetilde{U}}\) such that \(f_{\widetilde{U}}(\Sigma_{0})\subseteq\widetilde{U}\). Then for each \(1\leq i\leq n\) there exists a homeomorphism \(f_{i}\) such that \(f_{i}(\Sigma_{i})\subseteq U\)._ **Remark 4**.: The hypothesis of Lemma 3.2 implies that \(U\) is a stable neighborhood of \(x\), see Definition 2. We point out that Lemma 3.2 can be obtained using [11, Lemma 4.18]; here we provide a self-contained proof. Proof.: Fix \(1\leq i\leq n\). Observe that \(\mathrm{E}(U)\) contains a homeomorphic copy of \(\mathrm{E}(\Sigma_{i})\). Indeed, given that \(U\) is a neighborhood of the unique maximal end \(x\) and \(\mathrm{E}(\Sigma_{i})\) is a compact subset of \(\mathrm{E}(\Sigma)\), there is a finite collection \(\{N_{j}\}_{j=1}^{m}\) of disjoint clopen subsets that covers \(\mathrm{E}(\Sigma_{i})\) and such that each \(N_{j}\) is mapped inside \(\mathrm{E}(U)\) by a homeomorphism \(h_{j}\). Now, we use the hypothesis of the lemma to make the collection \(\{h_{j}(N_{j})\}_{j=1}^{m}\) pairwise disjoint inside \(\mathrm{E}(U)\). Now, since \(\Sigma_{i}\) has zero or infinite genus, we can find a homeomorphic copy \(\Sigma_{i}^{\prime}\) of \(\Sigma_{i}\) contained in \(U\).
In order to obtain the desired homeomorphism \(f_{i}\), let \(P\subseteq\Sigma\) be a pair of pants such that \(\partial P\) contains the boundary curves of \(\Sigma_{i}\) and \(\Sigma_{i}^{\prime}\), see Figure 4. Then \(f_{i}\) is the homeomorphism supported on \(P\cup\Sigma_{i}\cup\Sigma_{i}^{\prime}\) that sends \(\Sigma_{i}\) onto \(\Sigma_{i}^{\prime}\). The support of a homeomorphism \(f:\Sigma\to\Sigma\), denoted by \(\mathrm{supp}(f)\), is defined as the closure in \(\Sigma\) of the set \(\{s\in\Sigma\mid f(s)\neq s\}\). **Lemma 3.3**.: _Let \(W\) be a subsurface of \(\Sigma\) that is a neighborhood of \(x\) (possibly equal to \(\Sigma\)) and suppose that for each subsurface \(U\subseteq W\) that is a neighborhood of \(x\) there is a homeomorphism \(f_{U}\) such that \(f_{U}(\Sigma\setminus U)\subseteq U\). Then \(\mathrm{Map}(\Sigma)\) is globally CB._ Proof.: We prove that any finite-type subsurface of \(\Sigma\) is displaceable; in particular, \(\Sigma\) has zero or infinite genus. Let \(S\) be a finite-type subsurface of \(\Sigma\). Then we can construct a connected subsurface \(U_{S}\subseteq W\) that is a neighborhood of \(x\) and such that \(U_{S}\cap S=\emptyset\). Applying the hypothesis of the lemma, there is a homeomorphism \(f\) such that \(f(\Sigma\setminus U_{S})\subseteq U_{S}\). As \(S\subseteq\Sigma\setminus U_{S}\) we obtain that \(f(S)\cap S=\emptyset\). Now, by Lemma 2.5, \(\mathrm{E}(\Sigma)\) is self-similar and, by Theorem 2.6, \(\mathrm{Map}(\Sigma)\) is globally CB. **Lemma 3.4**.: _Let \(T\) be a finite-type subsurface of \(\Sigma\) containing \(K\) such that if \(U_{T}\) is the only connected component of \(\Sigma\setminus T\) which is a neighborhood of \(x\) then_ 1. _there is a homeomorphism_ \(f_{0}\) _with_ \(f_{0}(\Sigma_{0})\subseteq U_{T}\) _and_ 2.
_for each_ \(1\leq i\leq n\) _there is a homeomorphism_ \(f_{i}\) _with_ \(f_{i}(\Sigma_{i})\subseteq\Sigma_{0}\)_._ _Then \(\mathcal{V}_{K}\subseteq(F\mathcal{V}_{T})^{4n+2}\) where \(F=\{f_{i}^{\pm 1}\}_{i=0}^{n}\)._ Proof.: Let \(g\in\mathcal{V}_{K}\). Then \(g=g_{0}g_{1}\cdots g_{n}\) where each \(g_{i}\) has its support in \(\Sigma_{i}\). Observe that \(f_{0}^{-1}(T)\subseteq\Sigma\setminus f_{0}^{-1}(U_{T})\subseteq\Sigma\setminus \Sigma_{0}\). Since \(g_{0}\) has its support on \(\Sigma_{0}\), we have \(f_{0}g_{0}f_{0}^{-1}\in\mathcal{V}_{T}\). Therefore \(g_{0}\in(F\mathcal{V}_{T})^{2}\). Now, let \(1\leq i\leq n\). We notice that \(f_{i}g_{i}f_{i}^{-1}\) has its support on \(\Sigma_{0}\). This is because \(g_{i}\) has its support on \(\Sigma_{i}\) and \(\operatorname{supp}(f_{i}g_{i}f_{i}^{-1})=f_{i}(\operatorname{supp}(g_{i}))\). As in the previous paragraph we have that \(f_{i}g_{i}f_{i}^{-1}\in(F\mathcal{V}_{T})^{2}\) and therefore \(g_{i}\in(F\mathcal{V}_{T})^{4}\). Putting everything together (explicitly, using \(Id\in\mathcal{V}_{T}\): \(g_{0}=(f_{0}^{-1}v_{0})(f_{0}\,Id)\) with \(v_{0}:=f_{0}g_{0}f_{0}^{-1}\in\mathcal{V}_{T}\), and \(g_{i}=(f_{i}^{-1}\,Id)(f_{0}^{-1}v_{i})(f_{0}\,Id)(f_{i}\,Id)\) with \(v_{i}:=f_{0}(f_{i}g_{i}f_{i}^{-1})f_{0}^{-1}\in\mathcal{V}_{T}\), so that \(g\in(F\mathcal{V}_{T})^{2+4n}\)), we obtain the desired result. Proof of the sufficiency part of Theorem 1.1.: Let \(K\) be a finite-type subsurface of \(\Sigma\) satisfying the requirements of the statement of Theorem 1.1. We prove that \(\mathcal{V}_{K}\) is CB. Let \(V\) be an arbitrary neighborhood of the identity in \(\operatorname{Map}(\Sigma)\). By Theorem 2.1, we need to show that there exists a finite set \(F\subset\operatorname{Map}(\Sigma)\) and \(m\geq 1\) such that \(\mathcal{V}_{K}\subseteq(FV)^{m}\). Take a finite-type subsurface \(T\) of \(\Sigma\), each of whose boundary components is a separating curve, with \(K\subseteq T\) and such that \(\mathcal{V}_{T}\subseteq V\cap\mathcal{V}_{K}\). Define \(U_{T}\) as the unique connected component of \(\Sigma\setminus T\) that is a neighborhood of the unique maximal end \(x\). There are two cases for \(U_{T}\): 1.
There is a connected subsurface \(U\subseteq U_{T}\) that is a neighborhood of \(x\) for which there exists a homeomorphism \(f_{U}\) such that \(f_{U}(\Sigma_{0})\subseteq U\). 2. If item 1) does not hold, then for every connected subsurface \(U\subseteq U_{T}\) with one boundary component and \(x\in E(U)\) there is a homeomorphism \(f_{U}\) with \(f_{U}(\Sigma\setminus U)\subseteq U\). Suppose we are in item _1)_. Applying again the hypothesis to \(U\) we have two possibilities: _i)_ there is a connected subsurface \(\widetilde{U}\subseteq U\) that is a neighborhood of \(x\) for which there exists a homeomorphism \(f_{\widetilde{U}}\) such that \(f_{\widetilde{U}}(\Sigma\setminus\widetilde{U})\subseteq\widetilde{U}\); _ii)_ item _i)_ does not hold, and then for every connected subsurface \(\widetilde{U}\subseteq U\) that is a neighborhood of \(x\) there is a homeomorphism \(f_{\widetilde{U}}\) such that \(f_{\widetilde{U}}(\Sigma_{0})\subseteq\widetilde{U}\). If item _i)_ holds, then letting \(f_{0}:=f_{U}\) and \(f_{i}:=f_{\widetilde{U}}\) for each \(1\leq i\leq n\) in Lemma 3.4 we have that \(\mathcal{V}_{K}\subseteq(F\mathcal{V}_{T})^{4n+2}\subseteq(FV)^{4n+2}\) where \(F:=\{f_{i}^{\pm 1}\}_{i=0}^{n}\). Now, suppose item _ii)_ holds. We set \(f_{0}:=f_{U}\) and we use Lemma 3.2 to obtain for each \(1\leq i\leq n\) a homeomorphism \(f_{i}\) such that \(f_{i}(\Sigma_{i})\subseteq U\subseteq\Sigma_{0}\). Applying Lemma 3.4 we conclude that \(\mathcal{V}_{K}\subseteq(F\mathcal{V}_{T})^{4n+2}\subseteq(FV)^{4n+2}\) where \(F:=\{f_{i}^{\pm 1}\}_{i=0}^{n}\). Finally, if item _2)_ holds, then by Lemma 3.3, \(\operatorname{Map}(\Sigma)\) is globally CB and, in particular, it is locally CB.

## 4. Proof of Theorems 1.2 and 1.3

We use the following result.
**Lemma 4.1** (Lemma 4.10, [10]).: \(\operatorname{E}(\Sigma)\) _is self-similar if and only if for any decomposition \(\operatorname{E}(\Sigma)=A_{1}\sqcup A_{2}\) into clopen subsets there is some \(A_{i}\) that contains a homeomorphic copy of \(\operatorname{E}(\Sigma)\)._ Proof of Theorem 1.2.: If \(\Sigma\) has no nondisplaceable subsurfaces of finite type then the result is given by Lemma 2.5. Suppose \(\Sigma\) has a nondisplaceable finite-type subsurface \(S\subseteq\Sigma\). We use Lemma 4.1 to prove that \(\operatorname{E}(\Sigma)\) is self-similar. Let \(\operatorname{E}(\Sigma)=A_{1}\sqcup A_{2}\) be a decomposition of \(\operatorname{E}(\Sigma)\) into clopen subsets. Let \(K\) be a finite-type subsurface as in Theorem 1.1. Since \(S\) is of finite type, there is \(U\subseteq\Sigma_{0}\), a neighborhood of the unique maximal end \(x\), such that \(U\cap S=\emptyset\). Given that \(S\) is a nondisplaceable subsurface of \(\Sigma\), the subset \(U\) satisfies that for any \(\widetilde{U}\subseteq U\) that is a neighborhood of \(x\) there is a homeomorphism \(f_{\widetilde{U}}\) such that \(f_{\widetilde{U}}(\Sigma_{0})\subseteq\widetilde{U}\). Hence, \(U\) is a stable neighborhood of \(x\). Without loss of generality we can suppose that \(A_{1}\) contains \(x\). If necessary, we can take \(U\) small enough such that \(\operatorname{E}(U)\subseteq A_{1}\). By the property satisfied by \(U\), there is a homeomorphic copy \(\Sigma_{0}^{\prime}\) of \(\Sigma_{0}\) contained in \(U\). On the other hand, by Lemma 3.2, for each \(i=1,\ldots,n\), there is \(\Sigma_{i}^{\prime}\subseteq U\) homeomorphic to \(\Sigma_{i}\). Finally, let \(P\) denote the set of ends of \(\Sigma\) contained in \(K\). Recall that \(P\) consists of a finite number of punctures. Given that \(\Sigma\) has a unique maximal end, there is a copy \(P^{\prime}\) of \(P\) inside \(\operatorname{E}(U)\).
Now, using again the property of \(U\), we can make the collection \(\{\Sigma_{i}^{\prime}\}_{i=0}^{n}\) pairwise disjoint and disjoint from \(P^{\prime}\). This implies that \(\operatorname{E}(U)\) (and therefore \(A_{1}\)) contains a homeomorphic copy of \(\operatorname{E}(\Sigma)\). Proof of Theorem 1.3.: If \(\Sigma\) has zero or infinite genus, combine Theorems 1.2 and 2.6 to conclude that \(\operatorname{Map}(\Sigma)\) is globally \(\operatorname{CB}\). If \(\Sigma\) has finite non-zero genus then \(\operatorname{Map}(\Sigma)\) is not globally \(\operatorname{CB}\) by Theorem 2.3.

## 5. Proof of Corollary 1.4

Suppose that \(\operatorname{Map}(\Sigma)\) is locally \(\operatorname{CB}\) and \(K\) is a finite-type subsurface as in Theorem 1.1. If additionally \(0<g(\Sigma)<\infty\), then all the genus of \(\Sigma\) is contained in \(K\) and therefore, for any subsurface \(U\subseteq\Sigma_{0}\) whose interior is a neighborhood of the unique maximal end \(x\), there does not exist a homeomorphism \(f\) satisfying \(f(\Sigma\setminus U)\subseteq U\). This proves the following: **Corollary 5.1**.: _Let \(\Sigma\) be an infinite-type surface with a unique maximal end \(x\) and \(0<g(\Sigma)<\infty\). Then \(\operatorname{Map}(\Sigma)\) is locally \(CB\) if and only if there is a connected finite-type subsurface \(K\) of \(\Sigma\) with the following properties:_ 1. \(\Sigma\setminus K=\Sigma_{0}\sqcup\Sigma_{1}\sqcup\cdots\sqcup\Sigma_{n}\) _where the closure of each_ \(\Sigma_{i}\) _is a surface of infinite-type with one boundary component,_ \(g(\Sigma_{i})\in\{0,\infty\}\) _and_ \(\Sigma_{0}\) _is a neighborhood of_ \(x\) _and,_ 2. _for any subsurface_ \(U\subseteq\Sigma_{0}\) _that is a neighborhood of_ \(x\) _there is a homeomorphism_ \(f_{U}\) _such that_ \(f_{U}(\Sigma_{0})\subseteq U\)_._ Proof of Corollary 1.4.: (By contradiction) Suppose that \(\operatorname{E}(\Sigma)\) is countable.
We prove that \(\operatorname{Map}(\Sigma)\) is not locally \(\operatorname{CB}\) by showing that item (2) in Corollary 5.1 does not occur for any finite-type subsurface \(K\) satisfying item (1). So, let \(K\) be a finite-type subsurface of \(\Sigma\) satisfying (1) of Corollary 5.1. As \(E(\Sigma)\) is homeomorphic to \(\omega^{\alpha}+1\) with \(\alpha\) a countable ordinal and \(\Sigma_{0}\) is a neighborhood of the unique maximal end \(x\), the set \(E(\Sigma\setminus\Sigma_{0})\) contains a finite number of immediate predecessors of \(x\). Taking \(U\subseteq\Sigma_{0}\) a neighborhood of \(x\) such that \(E(\Sigma\setminus U)\) contains more immediate predecessors of \(x\) than \(E(\Sigma\setminus\Sigma_{0})\), we have that for this \(U\) there does not exist a homeomorphism \(f_{U}\) such that \(f_{U}(\Sigma_{0})\subseteq U\). The ideas in the proof of Corollary 1.4 can be applied to obtain the following result: **Proposition 5.2**.: _Let \(\Sigma\) be an infinite-type surface with a unique maximal end \(x\) and \(0<g(\Sigma)<\infty\). If \(x\) has countably infinitely many immediate predecessors then \(\operatorname{Map}(\Sigma)\) is not locally CB._

## 6. Proof of Theorem 1.5

A globally CB group is in particular CB generated. Suppose that \(\operatorname{Map}(\Sigma)\) is locally CB but not globally CB. By Theorem 1.3, the surface \(\Sigma\) has finite nonzero genus. Let \(K\subseteq\Sigma\) be a finite-type subsurface as in Theorem 1.1, that is, \[\Sigma\setminus K=\Sigma_{0}\sqcup\Sigma_{1}\sqcup\cdots\sqcup\Sigma_{n},\] where each \(\Sigma_{i}\) is of infinite type and has genus zero, \(\Sigma_{0}\) is a neighborhood of the unique maximal end of \(\Sigma\) and \(\mathcal{V}_{K}\) is a CB neighborhood of the identity. Denote by \(P_{K}\) the union of all \(\Sigma_{i}\) for \(i=1,\ldots,n\). Observe that \(E(\Sigma)\) is self-similar (by Theorem 1.2), and that \(K\) is compact with \(g(K)=g(\Sigma)\) and \(n+1\) boundary components.
We use the following lemma, which appears in [10, Observation 6.9]. Abusing notation, given a finite-type subsurface \(S\) of \(\Sigma\) with compact boundary, we think of an element of \(\operatorname{Map}(S)\) as an element of \(\operatorname{Map}(\Sigma)\) by extending it by the identity on the complement of \(S\) in \(\Sigma\). **Lemma 6.1** (Observation 6.9, [10]).: _Let \(\Sigma\) be an infinite-type surface possibly with nonempty boundary and \(S\subseteq\Sigma\) be a finite-type subsurface. Then there is a finite set of Dehn twists \(D_{S}\subseteq\operatorname{Map}(\Sigma)\) such that for any finite-type subsurface \(S^{\prime}\subseteq\Sigma\)_ \[\operatorname{Map}(S^{\prime})\subseteq\langle D_{S}\cup\mathcal{V}_{S} \rangle\subseteq\operatorname{Map}(\Sigma).\] Proof of Theorem 1.5.: Applying Lemma 6.1 to the subsurface \(K\), let \(D_{K}\) be a finite set of Dehn twists such that for every finite-type subsurface \(S^{\prime}\) of \(\Sigma\), \(\operatorname{Map}(S^{\prime})\) is contained in the group generated by \(D_{K}\cup\mathcal{V}_{K}\). As \(\operatorname{E}(\Sigma)\) is self-similar, there is \(g_{K}\in\operatorname{Map}(\Sigma)\) such that \(g_{K}(P_{K})\subseteq\Sigma_{0}\). Let \(G\) be the subgroup of \(\operatorname{Map}(\Sigma)\) generated by the CB set \(\{g_{K}\}\cup D_{K}\cup\mathcal{V}_{K}\). We show that \(\operatorname{Map}(\Sigma)\) coincides with \(G\). Let \(f\in\operatorname{Map}(\Sigma)\). First we prove that there exist \(f^{\prime},f^{\prime\prime}\in G\) such that \(f^{\prime}f^{-1}f^{\prime\prime}|_{P_{K}}=Id_{P_{K}}\). Indeed, take \(U\subseteq\Sigma_{0}\) a neighborhood of the unique maximal end \(x\) of \(\Sigma\) such that \(f(U)\subseteq\Sigma_{0}\). As the space of ends of \(\Sigma\) is self-similar, \(\operatorname{E}(U)\) contains a homeomorphic copy of \(\operatorname{E}(\Sigma)\), and therefore there is \(P^{\prime}_{K}\subseteq U\) homeomorphic to \(P_{K}\). In particular, \(f(P^{\prime}_{K})\subseteq\Sigma_{0}\).
Now, as \(\Sigma_{0}\) has genus zero there is \(h\in\operatorname{Map}(\Sigma_{0})\) such that \(hf(P^{\prime}_{K})=g_{K}(P_{K})\). So, \(f^{-1}h^{-1}g_{K}(P_{K})\subseteq\Sigma_{0}\). As \(g_{K}(P_{K})\) is also contained in \(\Sigma_{0}\), there is \(h^{\prime}\in\operatorname{Map}(\Sigma_{0})\) such that \(h^{\prime}f^{-1}h^{-1}g_{K}(P_{K})=g_{K}(P_{K})\) and \(h^{\prime}f^{-1}h^{-1}g_{K}|_{\partial P_{K}}=g_{K}|_{\partial P_{K}}\); in other words, \(g_{K}^{-1}h^{\prime}f^{-1}h^{-1}g_{K}(P_{K})=P_{K}\) and \(g_{K}^{-1}h^{\prime}f^{-1}h^{-1}g_{K}|_{\partial P_{K}}=Id_{\partial P_{K}}\). Finally, we can find an element \(w\in\operatorname{Map}(P_{K})\subseteq\mathcal{V}_{K}\) such that the restriction of \(wg_{K}^{-1}h^{\prime}f^{-1}h^{-1}g_{K}\) to \(P_{K}\) is equal to \(Id_{P_{K}}\). Letting \(f^{\prime}:=wg_{K}^{-1}h^{\prime}\) and \(f^{\prime\prime}:=h^{-1}g_{K}\) we obtain the desired result. Let \(g:=f^{\prime}f^{-1}f^{\prime\prime}\) and put \(S^{\prime}:=K\cup g(K)\). Again, by Lemma 6.1, \(\operatorname{Map}(S^{\prime})\) is contained in the group generated by \(D_{K}\cup\mathcal{V}_{K}\) and hence in the group \(G\). Now, observe that \(\partial\Sigma_{0}\) and \(g(\partial\Sigma_{0})\) are essential separating curves of the same topological type in \(S^{\prime}\). Then there is \(g^{\prime}\in\operatorname{Map}(S^{\prime})\subseteq G\) such that \(g^{\prime}g\) is the identity on \(K\), that is, \(g^{\prime}g\in\mathcal{V}_{K}\). Therefore, \(f\in G\).
2309.12384
Probability of Default modelling with Lévy-driven Ornstein-Uhlenbeck processes and applications in credit risk under the IFRS 9
In this paper we develop a framework for estimating Probability of Default (PD) based on stochastic models governing an appropriate asset value process. In particular, we build upon a L\'evy-driven Ornstein-Uhlenbeck process and consider a generalized model that incorporates multiple latent variables affecting the evolution of the process. We obtain an Integral Equation (IE) formulation for the corresponding PD as a function of the initial position of the asset value process and the time until maturity, from which we then prove that the PD function satisfies an appropriate Partial Integro-Differential Equation (PIDE). These representations allow us to show that appropriate weak (viscosity) as well as strong solutions exist, and develop subsequent numerical schemes for the estimation of the PD function. Such a framework is necessary under the newly introduced International Financial Reporting Standards (IFRS) 9 regulation, which has imposed further requirements on the sophistication and rigor underlying credit modelling methodologies. We consider special cases of the generalized model that can be used for applications to credit risk modelling and provide examples specific to provisioning under IFRS 9, and more.
Kyriakos Georgiou, Athanasios N. Yannacopoulos
2023-09-21T16:54:05Z
http://arxiv.org/abs/2309.12384v1
Probability of Default modelling with Lévy-driven Ornstein-Uhlenbeck processes and applications in credit risk under the IFRS 9

###### Abstract

In this paper we develop a framework for estimating Probability of Default (PD) based on stochastic models governing an appropriate asset value process. In particular, we build upon a Lévy-driven Ornstein-Uhlenbeck process and consider a generalized model that incorporates multiple latent variables affecting the evolution of the process. We obtain an Integral Equation (IE) formulation for the corresponding PD as a function of the initial position of the asset value process and the time until maturity, from which we then prove that the PD function satisfies an appropriate Partial Integro-Differential Equation (PIDE). These representations allow us to show that appropriate weak (viscosity) as well as strong solutions exist, and develop subsequent numerical schemes for the estimation of the PD function. Such a framework is necessary under the newly-introduced International Financial Reporting Standards (IFRS) 9 regulation, which has imposed further requirements on the sophistication and rigor underlying credit modelling methodologies. We consider special cases of the generalized model that can be used for applications to credit risk modelling and provide examples specific to provisioning under IFRS 9, and more.

**Keywords**: stochastic modeling, probability, default, credit risk, numerical methods.

**Mathematics Subject Classification**: 60H30, 45K05 (Primary), 91G40, 91G60, 91-08 (Secondary).
###### Contents

* 1 Introduction
* 2 Aims and modelling framework
* 3 The generalized asset value model and PD function
  * 3.1 Regime switching and stochastic volatility models
  * 3.2 The generalized model
  * 3.3 The Probability of Default function
* 4 Integral characterization and properties of the PD function
  * 4.1 Required notation for the Integral Equations
  * 4.2 Integral Equation formulations of the PD functions
  * 4.3 Properties and existence of solutions
* 5 Partial Integro-Differential Equations for the PD function
  * 5.1 Viscosity solutions
  * 5.2 The survival probability as a classical solution of PIDEs derived from the IE formulations
  * 5.3 Regularity of the one-dimensional PD function
* 6 Numerical estimation of PD functions
  * 6.1 One dimensional model
  * 6.2 Regime switching
  * 6.3 Stochastic volatility model
* 7 Applications in credit risk
  * 7.1 IFRS 9 provision calculations
    * 7.1.1 Stage 1 provisions
    * 7.1.2 Stage 2 Provisions
    * 7.1.3 Provisions under the regime switching model
  * 7.2 Further Applications in credit risk modelling
    * 7.2.1 Pricing of Credit Default Swaps
    * 7.2.2 Credit Portfolio Optimization
* 8 Conclusion
* A Stochastic processes
  * A.1 The continuous Ornstein Uhlenbeck process
  * A.2 Continuous Time Markov Chain
  * A.3 Lévy processes
* Uhlenbeck models
  * B.1 Regularity of solutions to parabolic PDEs
* C Kolmogorov equations for regime switching and stochastic volatility models
* D PIDEs for the PD functions in Sobolev spaces
* E Existence and continuity of the PD function
* F Regularity of solutions to parabolic PDEs

## 1 Introduction

One of the main issues currently concerning financial institutions is the implementation of the new International Financial Reporting Standards (IFRS) 9. Due to the financial crisis, the purpose of the updated standards is to introduce a framework under which institutions forecast credit losses (for loan provisioning purposes).
Specifically, "under the impairment approach in IFRS 9 it is no longer necessary for a credit event to have occurred before credit losses are recognised. Instead, an entity always accounts for expected credit losses, and changes in those expected credit losses. The amount of expected credit losses is updated at each reporting date to reflect changes in credit risk since initial recognition and, consequently, more timely information is provided about expected credit losses". Furthermore, "the objective of the impairment requirements is to recognise lifetime expected credit losses for all financial instruments for which there have been significant increases in credit risk since initial recognition -- whether assessed on an individual or collective basis -- considering all reasonable and supportable information" (IFRS 9 Red Book). Further details and research are given in [11] and [56]. Hence, loan provisioning regulations under IFRS 9 require financial institutions to consider expected losses based on the current credit state of each loan and the possible future losses. This estimation requires knowledge of the lifetime Probability of Default (PD) for all loan exposures, as well as additional risk parameters such as the Loss Given Default (LGD) and the Exposure at Default (EAD), and finally being able to update these quantities dynamically under changing market conditions. The estimation of future losses (i.e., forward looking provisions) can therefore be tackled by employing the theory of stochastic processes and their dynamics. Under IFRS 9, it is now mandatory for financial institutions to classify loans into three distinct categories, known as the IFRS 9 Stages. Specifically, Stage 1 loans are considered performing, Stage 2 contains loans which have displayed a significant increase in credit risk and Stage 3 contains all Non-Performing loans (NPLs), considered to have defaulted.
As mentioned, the institutions must forecast future losses specifically for the Stage 2 loans, which are considered to be at risk. The resulting Stage 2 provisions are referred to as the Expected Lifetime Provisions (ECL). In the present work, we will focus on portfolios of corporate and small business loans, where it is common practice to consider the company's assets to be governed by a stochastic process (see e.g., [9] and [8]). Under this assumption, the PD associated with each loan depends on the underlying asset process and we can define the PD as the probability that the asset process falls below a fixed threshold. Perhaps one of the most influential changes due to the IFRS 9 is the requirement for financial institutions to consider Expected Lifetime Provisions (ECL), whereby future losses must be forecast using mathematically robust and rigorous methods, for all Stage 2 credit exposures, which are considered to have displayed a significant increase in risk. This estimation requires knowledge of the lifetime Probability of Default (PD) for all loan exposures, as well as additional risk parameters such as the Loss Given Default (LGD) and the Exposure at Default (EAD), and finally being able to update these quantities dynamically under changing market conditions. There exist recent papers detailing and studying the ECL calculation, such as [11] and [56]. We extend this modelling framework by employing the theory of stochastic processes and their dynamics for the estimation of lifetime PDs and future losses (i.e., forward looking provisions). In this paper, we aim to develop a stochastic modelling framework under which the PD process can be considered mathematically and practically. Using this approach we can address in a robust and efficient manner the challenging provisioning, forecasting and pricing tasks under IFRS 9.
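To make the Stage 2 lifetime-provision computation described above concrete, the following is a standard, deliberately simplified discrete-time formula (the notation is generic and not taken from this paper) combining the marginal default probability of each future period with the LGD and EAD:

\[\mathrm{ECL}=\sum_{t=1}^{T}\big{(}\mathrm{PD}(0,t)-\mathrm{PD}(0,t-1)\big{)}\cdot\mathrm{LGD}_{t}\cdot\mathrm{EAD}_{t}\cdot(1+r)^{-t},\]

where \(\mathrm{PD}(0,t)\) denotes the cumulative (lifetime) probability of default up to period \(t\), so that the bracketed difference is the probability of defaulting in period \(t\) itself, and \(r\) is a discount rate (under IFRS 9, typically the effective interest rate of the exposure).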
In general, calculating default probabilities both analytically and numerically is of paramount importance in risk management and a broad range of financial applications. However, particularly under IFRS 9, credit loss forecasting has introduced the need for robust structural models that can be used for pricing and provisioning purposes. To this end, we will consider stochastic models for the evolution of the underlying asset value process, whose default will be studied as an appropriate first hitting-time problem. Therefore, it can be assumed that we are working mainly within portfolios of corporate and small business loans, where it is common practice to consider the company's assets to be governed by a stochastic process (see e.g., [9] and [8]). Under this assumption, the PD associated with each loan depends on the underlying asset process and we can define the PD as the probability that the asset process falls below a fixed threshold. More specifically, we will assume that the asset process is governed by an Ornstein-Uhlenbeck (OU) process with a jump component, a member of the family of jump diffusion processes. We note that practitioners may consider the evolution of asset-dependent processes instead, e.g., returns; such processes can still be described by similar stochastic models, rendering the methods proposed in this paper applicable in these cases, as well. For brevity, hereinafter we will refer to this underlying process as the asset process, with the understanding that it can be replaced with any related dynamics considered appropriate by practitioners. Important theoretical background on such processes and their properties is given in [2] and [46]. The jump process will account for abrupt changes in the asset processes, which are very common in practice and are closely related to loan defaults.
Obtaining the evolution of the PD values, based on the stochastic asset model, will allow us to then tackle various modelling tasks which are currently open problems for financial institutions under the IFRS 9 framework. It is common in the literature to consider two separate cases for default probabilities:

* A variable starting time and constant time horizon, defined by the process.
* A variable time horizon and constant starting time.

Our results in this paper constitute a generalization that combines these two cases (as analyzed in [42] and [44]). Specifically, we consider a generalized "Probability of Default function" and prove that, under certain homogeneity assumptions, we can obtain Integral Equations (IEs) and Partial Integro-Differential Equations (PIDEs) for both aforementioned PD cases. Finally, using the IE formulation we will prove the existence of the PD values and the solvability of the PIDEs in the viscosity sense (details are given in the corresponding section), in order to obtain estimates that can be used for the aforementioned modelling tasks without always having to assume and/or prove strict regularity conditions, and we will further consider the conditions under which these solutions can be considered strong, with the required smoothness. The above methodology will be used to consider real-life examples of loan provisioning calculations, scenario analysis and pricing, exemplifying the wide range of credit risk modelling tasks the proposed methodology can address. Finally, we note that, even though motivated by credit risk, the approaches detailed in this paper can find applications in other areas of financial mathematics, such as derivatives pricing, where the use of stochastic modelling remains prevalent, e.g., in the pricing of barrier options. This paper is structured as follows.
In Section 2 we recall the background of similar stochastic processes in the literature and define the corresponding Probability of Default functions; in Section 3 we present the generalized model for the asset value process and discuss special cases which are applicable in the IFRS 9 framework. In Section 4 we obtain IEs for the PD functions under each model considered and use them to prove that the PD functions satisfy certain mathematical properties, and in Section 5 we obtain the PIDEs for the PD functions. The remainder of the paper is dedicated to numerical approximations and applications: in Section 6 we construct appropriate Finite Difference numerical schemes to approximate the PD functions and, finally, in Section 7, we present examples of the proposed methods applied to IFRS 9 provisioning, credit derivatives pricing and credit optimization problems.

## 2 Aims and modelling framework

We start by discussing the PD process which, in its most general form, can be written as a function of both the starting time and maturity, as well as of the initial position of the corresponding asset value process. Furthermore, to accurately model real-life dynamics, it is necessary to account for the dependence on latent variables which affect the PD. Incorporating such processes, which in practice are e.g., macroeconomic variables or different market regimes, is of paramount importance as it largely affects PD estimation and subsequent modelling results. Rigorously accounting for these exogenous variables is therefore necessary, and will allow us to consider a large family of stochastic processes that can be used in practice. To begin, consider compact and bounded sets \(\mathcal{D},\mathcal{D}_{i}\subset\mathbb{R}\), for \(i=1,\ldots,d\).
Then, define the PD function: **Definition 2.1**.: Consider \(x\in\mathcal{D}\) and the vector of (discrete or continuous) stochastic processes \((X^{i}_{t})_{t\geq 0}\) with corresponding state spaces \(\mathcal{D}_{i}\), for \(i=1,\ldots,d\). Furthermore, consider the stochastic asset value process \((G_{t})_{t\geq s}\), with initial value \(G_{s}=x\) and which depends on \((X^{i}_{t})_{t\geq 0}\), for \(i=1,\ldots,d\). Then, we define the Probability of Default function \(\Psi:\mathcal{D}\times\mathcal{D}_{1}\times\cdots\times\mathcal{D}_{d}\times [0,T]\times[0,T]\rightarrow[0,1]\), for some fixed \(T>0\), by: \[\Psi(x,x^{1}_{s},x^{2}_{s},\ldots,x^{d}_{s},s,t)=\mathbb{P}\Big{(}\inf_{s \leq r\leq t}G_{r}\leq 0|G_{s}=x,X^{1}_{s}=x^{1}_{s},X^{2}_{s}=x^{2}_{s}, \ldots,X^{d}_{s}=x^{d}_{s}\Big{)}, \tag{1}\] and the corresponding survival probability \(\Phi:\mathcal{D}\times\mathcal{D}_{1}\times\cdots\times\mathcal{D}_{d}\times [0,T]\times[0,T]\rightarrow[0,1]\) by: \[\Phi(x,x^{1}_{s},x^{2}_{s},\ldots,x^{d}_{s},s,t)=1-\Psi(x,x^{1}_{s},x^{2}_{s},\ldots,x^{d}_{s},s,t). \tag{2}\] To motivate this definition and its usefulness, notice that by fixing \(s\) we obtain the standard finite-horizon ruin probability (see e.g., [42]), whereas by fixing \(t\) we obtain the ruin probability with variable starting time, as defined in [44], which can be used to define a martingale. Finally, allowing \(t\rightarrow\infty\) we obtain the infinite-horizon ruin probability. Modelling the evolution of PD functions has become even more important under IFRS 9, due to the increased complexity of provision calculations and Staging criteria. In general, all aforementioned PD functions, corresponding to variable maturity or starting times find many applications and have been considered in the field of credit risk, such as in [6], [50] and [57]. 
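Spelling out the special cases mentioned above in the notation of Definition 2.1 (with the latent variables suppressed for readability; this is only a restatement of the cases listed in the text, not new material):

\[\Psi(x,0,t)=\mathbb{P}\Big{(}\inf_{0\leq r\leq t}G_{r}\leq 0\,\Big{|}\,G_{0}=x\Big{)}\quad\text{(fixed starting time, variable horizon: the lifetime PD),}\]

\[s\mapsto\Psi(x,s,T)\quad\text{(fixed horizon, variable starting time)},\qquad\lim_{t\rightarrow\infty}\Psi(x,0,t)\quad\text{(infinite-horizon ruin probability)}.\]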
For example, the case of a variable maturity is often referred to as the Lifetime Probability of Default and is used extensively for provisioning and pricing purposes. Particularly in the context of IFRS 9 modelling, the Lifetime Probability of Default is used to assess credit risk at origination, as well as for Expected Lifetime Provisions for Stage 2 loans. We give detailed examples of such calculations in Section 7. As mentioned, it is standard in the field of financial mathematics to consider the evolution of a debtor's assets to be governed by a stochastic process. A well-documented process that is used in various such applications is the Ornstein-Uhlenbeck (OU) process. In Appendix A.1 we recall important properties of the OU process, a generalized version of which we will consider in this paper. OU models have been considered in past research and many applications. For example, a well-known special case is the Vasicek model [53]. Further work has explored the Merton model for default with underlying dynamics given by the continuous OU process, and this has been extended to cases incorporating jumps. These find important applications particularly in credit risk modelling and pricing; see e.g., [36] and [9], respectively. By adding a jump component to the continuous OU asset process, we obtain the Levy-driven (jump) Ornstein-Uhlenbeck process, defined in (3). \[dG_{u}=k(\theta-G_{u})du+\sigma dB_{u}+\int_{\mathbb{R}}zN(du,dz),\ \ \ G_{0}=x. \tag{3}\] This is a natural generalization, as significant credit events are often abrupt and unpredictable (particularly a deterioration in creditworthiness), corresponding to a discontinuous component in the driving stochastic process. Indeed, the goal of credit risk requirements under IFRS 9 is to ensure that financial institutions and their customers are protected against such rare and unexpected events and the subsequent losses. 
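To make the jump OU dynamics (3) concrete, the finite-horizon PD \(\Psi(x,u)=\mathbb{P}(\inf_{r\leq u}G_{r}\leq 0)\) can be approximated by plain Monte Carlo. The sketch below uses an Euler-Maruyama discretization with compound Poisson jumps; the parameter values and the Gaussian jump-size distribution are illustrative assumptions only, and checking default on the time grid slightly underestimates the true first-passage probability:

```python
import numpy as np

def simulate_pd(x0, k, theta, sigma, lam, jump_mu, jump_sd, T,
                n_steps=500, n_paths=20000, seed=0):
    """Monte Carlo estimate of Psi(x0, T) = P(inf_{u<=T} G_u <= 0) for
    dG = k(theta - G)du + sigma dB + dJ, with J a compound Poisson process
    of rate lam and Normal(jump_mu, jump_sd) jump sizes (all illustrative)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    G = np.full(n_paths, float(x0))
    defaulted = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)          # jumps in this step
        # sum of n iid Normal(jump_mu, jump_sd) jump sizes, sampled exactly
        jumps = rng.normal(jump_mu * n_jumps, jump_sd * np.sqrt(n_jumps))
        G = G + k * (theta - G) * dt + sigma * dB + jumps
        defaulted |= (G <= 0.0)                           # default checked on the grid
    return defaulted.mean()

psi = simulate_pd(x0=1.0, k=0.5, theta=1.0, sigma=0.3,
                  lam=0.5, jump_mu=-0.3, jump_sd=0.1, T=5.0)
```

As expected, increasing the initial distance to the default barrier lowers the estimate; the same scheme extends directly to the regime switching and stochastic volatility variants discussed in the next section.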
It is therefore important to capture the effect of such events mathematically, which is why this model will form the basis of our analysis and will be used to construct more sophisticated models in the next section. We will employ the fact that the jump OU process is time homogeneous so that, rather than considering a starting time \(s\) and initial position \(G_{s}\), we can equivalently define the time until maturity by \(u:=t-s\) and consider \((G_{u})_{u\geq 0}\), as above. This is an important property that we will take advantage of to simplify the PD estimation. To conclude, we note that the use of Levy processes for financial modelling is well-documented and established. [50] gives an extensive analysis of Levy processes and their use for asset process modelling, credit derivatives pricing and more. In [41] and [47] the authors consider a Levy-driven OU process, and Levy multivariate models for asset processes. The former fits the model parameters to the General Motors stock price, while the latter considers many different indices, obtaining surprisingly accurate results. We also refer the interested reader to [5] for a detailed analysis of the properties of the multivariate model. Seminal work has also been done in the study of Levy-driven OU processes in [9]. Finally, well-documented numerical methods exist for the calibration of stochastic models with jumps, such as the Yuima framework for stochastic differential equations in the R statistical language [16].

## 3 The generalized asset value model and PD function

In this section we develop a stochastic model that incorporates the exogenous variables required when considering asset value processes. To incorporate such effects, we build upon the family of regime switching and stochastic volatility models, as described below. 
We combine these to produce a generalized model, which we will use to construct a framework that encapsulates a large family of stochastic processes that can be used for asset value modelling and subsequent credit risk calculations. In addition to the mathematical results presented in this paper, we highlight that the framework developed using the generalized model addresses the strict requirements under IFRS 9, whereby credit risk modelling is required to incorporate multiple appropriate latent variables, whilst adhering to mathematical rigor.

### Regime switching and stochastic volatility models

First recall that loan exposures under the IFRS 9 framework are now classified into three Stages. Each of these Stages corresponds to a given level of risk, with the most noteworthy change being the introduction of Stage 2 loans, i.e., credit exposures which have exhibited a significant increase in credit risk (a SICR event, which can be defined by the institution, e.g., as a statistically significant increase in PD, a delinquency warning flag etc.). By definition, changes in the risk profile of an exposure will correspond to changes in the dynamics of the underlying asset process. For example, a debtor may request restructuring, or may be 30 days delinquent. This will trigger a SICR event, which can then affect the underlying asset value process. To capture this dependency we consider a regime switching model for the asset process, whereby the parameters of the stochastic process vary according to the underlying rating (Stage) of the exposure. We can do this by considering the Continuous Time Markov Chain (CTMC) \((R_{t})_{t\geq 0}\) describing the rating at time \(t\), where the set of all loan ratings is denoted by \(\mathcal{R}\), with cardinality \(|\mathcal{R}|=R\). 
Therefore, we obtain the following jump diffusion with Markov switching model: \[dG_{u}=k(R_{u})\big{(}\theta(R_{u})-G_{u}\big{)}du+\sigma(R_{u})dB_{u}+\int_{\mathbb{R}}zN(du,dz),\ \ G_{0}=x,R_{0}=\rho, \tag{4}\] with \(\rho\in\mathcal{R}\). Note that in subsequent sections we adopt the notation \(k_{\rho},\theta_{\rho},\sigma_{\rho}\), for brevity. For a reminder of CTMC processes and their properties see Appendix A.2. In the sequel, to develop a realistic model we want to capture the effects of macroeconomic variables, which naturally affect the evolution of the asset process; this is necessary for the modelling tasks we will consider under IFRS 9, as previously discussed. Typically, such latent variables are incorporated by considering stochastic volatility models, whereby the diffusion term of the asset process also evolves according to a stochastic process, as described by the coupled process: \[\begin{cases}dG_{t}=\mu_{x}(G_{t},Y_{t})dt+\sigma_{x}(G_{t},Y_{t})dB_{t}+\int_{\mathbb{R}}zN(dt,dz),\ \ G_{s}=x,\\ dY_{t}=\mu_{y}(Y_{t})dt+\sigma_{y}(Y_{t})dW_{t},\ \ \ Y_{s}=y,\end{cases} \tag{5}\] for \(y\in\mathcal{V}\) and where \(B_{t}\) and \(W_{t}\) are independent Brownian motions. Standard cases are Bates' model, introduced in [10], as well as the Heston model (see [14]), a version of which we consider below. In particular, letting \(\mu_{x}(G_{t},Y_{t})=k(\theta-G_{t})\) and \(\sigma_{x}(G_{t},Y_{t})=\sqrt{Y_{t}}\), we obtain the asset process driven by a stochastic volatility process, which follows the well-established Cox-Ingersoll-Ross (CIR) model, developed in [23] (note that both processes are time-homogeneous): \[\begin{cases}dG_{u}=k(\theta-G_{u})du+\sqrt{Y_{u}}dB_{u}+\int_{\mathbb{R}}zN(du,dz),\ \ G_{0}=x,\\ dY_{u}=\kappa(\mu-Y_{u})du+\xi\sqrt{Y_{u}}dW_{u},\ \ \ Y_{0}=y.\end{cases} \tag{6}\] The above models are widely used in mathematical finance and stochastic modelling. 
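As an illustration of the Markov switching dynamics (4), a path can be simulated by sampling the CTMC one Euler step at a time and switching the OU parameters with the regime. The 3-regime generator and all parameter values below are hypothetical placeholders, not calibrated to any rating system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-regime setup (suggestive of IFRS 9 Stages 1-3); values illustrative.
Qgen  = np.array([[-0.4, 0.3, 0.1],   # CTMC generator: rows sum to zero
                  [0.2, -0.5, 0.3],
                  [0.05, 0.15, -0.2]])
k     = np.array([0.8, 0.5, 0.2])     # mean-reversion speed per regime
theta = np.array([2.0, 1.0, 0.3])     # long-run level per regime
sigma = np.array([0.2, 0.4, 0.6])     # diffusion coefficient per regime

def simulate_path(x0, rho0, T, n_steps=1000, lam=0.5, jump_mu=-0.2, jump_sd=0.1):
    """One Euler path of the Markov-switching jump OU model (4)."""
    dt = T / n_steps
    P = np.eye(3) + Qgen * dt         # one-step regime transition probabilities (small dt)
    P /= P.sum(axis=1, keepdims=True)
    G, r = float(x0), rho0
    path = [G]
    for _ in range(n_steps):
        r = rng.choice(3, p=P[r])     # next regime from the embedded chain
        n_j = rng.poisson(lam * dt)   # compound Poisson jumps in this step
        jump = rng.normal(jump_mu * n_j, jump_sd * np.sqrt(n_j)) if n_j else 0.0
        G += k[r] * (theta[r] - G) * dt + sigma[r] * np.sqrt(dt) * rng.normal() + jump
        path.append(G)
    return np.array(path)

path = simulate_path(x0=1.5, rho0=0, T=5.0)
```

An exact alternative is to simulate the CTMC by its exponential holding times and switch parameters only at regime changes; the per-step embedded chain above is simpler and adequate for small \(dt\).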
Regime switching is a well-documented approach in financial modelling (see [34]), with applications ranging from macroeconomics (e.g., [3]) to option pricing (e.g., [26], [32]) and interest rate modelling ([30]). In the case of credit risk, the underlying Markov chain is considered as an indication of the market conditions, which significantly impacts credit exposures and ratings. In subsequent sections we add to the multitude of applications by using the PD function that arises from the regime switching model to estimate Lifetime provisions and scenario analysis under IFRS 9. When considering regime switching in asset processes it is important to note that financial institutions may currently have various credit rating systems, which are not compatible with the IFRS 9 staging (which requires three distinct Stages for exposure ratings). However, recent work has shown that this is not restrictive and IFRS 9-compatible transition matrices can be obtained from the existing internal ratings (see [28]). Finally, we refer the reader to [58] for a detailed analysis of more general regime switching jump diffusion processes, where the authors also consider the dynamics of the underlying Markov process to be a function of the initial position of the jump diffusion. The stochastic volatility model is a natural extension, as it can be seen as the limit of the regime switching model when the state space grows to \(\mathcal{R}=\mathbb{R}_{+}\). Such models, in the case of both continuous and jump processes, have also been considered for numerous applications in mathematical finance, particularly in pricing and hedging, such as in [52] and [29].

### The generalized model

Both the aforementioned models have important applications in credit risk modelling under the IFRS 9 framework. Combining the two, we obtain a generalized model that captures all the observable or latent variables required to estimate the PD evolution and subsequently tackle the IFRS 9 modelling tasks. 
This generalized model is of the form: \[\begin{cases}dG_{u}=k(Y_{u},R_{u})\big{(}\theta(Y_{u},R_{u})-G_{u}\big{)}du+\sigma(Y_{u},R_{u})dB_{u}+\int_{\mathbb{R}}zN(du,dz),&G_{0}=x,\\ dY_{u}=\kappa(\mu-Y_{u})du+\xi\sqrt{Y_{u}}dW_{u}.\end{cases} \tag{7}\] Specifically, we will be considering a combination of (4) and (6), which gives rise to the following. **Definition 3.1**.: Under the generalized model, the asset value process is defined by the triple \((G_{t},R_{t},Y_{t})_{t\geq 0}\), capturing both the switching and volatility processes, and is given by: \[\begin{cases}dG_{u}=k(R_{u})\big{(}\theta(R_{u})-G_{u}\big{)}du+\sigma(R_{u})\sqrt{Y_{u}}dB_{u}+\int_{\mathbb{R}}zN(du,dz),&G_{0}=x,\\ dY_{u}=\kappa(\mu-Y_{u})du+\xi\sqrt{Y_{u}}dW_{u},\end{cases} \tag{8}\] with \(G_{0}=x,R_{0}=\rho\) and \(Y_{0}=y\). Before moving on to define the appropriate Probability of Default functions, it will be useful to define some notation. **Remark 3.2**.: An important note is that the transition density of the generalized OU process is also uniformly continuous. This follows from the fact that, for given \((R_{0},Y_{0})=(\rho,y)\), the transition density \(p(x^{\prime},x,t;\rho,y)\) is simply the coupling of the transition densities of the corresponding stochastic volatility models, which are continuous functions (see e.g., [1]), and is therefore uniformly continuous on all closed and bounded intervals \(\mathcal{D}\) we will consider. **Notation 3.3**.: 1. Throughout the remainder of this paper, we employ the notation \(Z_{u}^{x}\) to represent the stochastic process \((Z_{u})_{u\geq 0}\), with \(Z_{0}=x\), where appropriate. 
We also generalize this notation to incorporate cases with additional underlying variables \(X_{t}^{1},X_{t}^{2},\ldots,X_{t}^{n}\), by writing \(Z_{u}^{(x_{1},x_{2},\ldots,x_{n})}\) to represent \((Z_{u})_{u\geq 0}\) with \(X_{0}^{i}=x_{i}\) for \(i=1,2,\ldots,n\) (the superscripts are to be understood as indices, i.e., the \(i-\)th underlying variable is \((X_{t}^{i})_{t\geq 0}\)). 2. In the following sections, when referring to the transition densities of the regime switching, stochastic volatility and generalized OU models, we will omit the dependence on the latent variables for brevity, as it will be obvious from the context.

### The Probability of Default function

Following the definition of the PD function, as given in (1), under the generalized model (8) we will condition on the initial state of the regime switching and stochastic volatility processes, i.e., \(\rho\) and \(y\), to obtain: \[\Psi(x,\rho,y,s,t):=\mathbb{P}\Big{(}\inf_{s\leq r\leq t}G_{r}\leq 0|G_{s}=x,R_{s}=\rho,Y_{s}=y\Big{)}. \tag{9}\] Under this assumption, we can utilize the time homogeneity property to write the PD function more succinctly, whilst still being able to obtain the evolution of the PD, both in the case of a variable maturity and a variable starting time. We describe this in the lemma below. **Lemma 3.4**.: _Under the generalized model (8), the PD functions with variable maturity (\(s\) fixed) and with variable starting time (\(t\) fixed) can both be retrieved from the generalized function \(\Psi(x,\rho,y,u)\), where \(u=t-s\) represents the remaining time until maturity._ Proof.: As mentioned, this observation follows immediately from the time homogeneity of the asset process, since: \[\Psi(x,\rho,y,s,t) =\mathbb{P}\Big{(}\inf_{s\leq r\leq t}G_{r}\leq 0|G_{s}=x,R_{s}=\rho,Y_{s}=y\Big{)}\] \[=\mathbb{P}\Big{(}\inf_{0\leq r\leq t-s}G_{r}\leq 0|G_{0}=x,R_{0}=\rho,Y_{0}=y\Big{)}=\Psi(x,\rho,y,0,t-s). 
\tag{10}\] We can now write \(\Psi(x,\rho,y,u)\), with \(u:=t-s\) representing the remaining time until maturity. The above shows that, by fixing the appropriate time and with a simple change of variables, we can obtain the evolution of both PD processes. It easily follows that this approach can be generalized to any time homogeneous stochastic process. Throughout the remainder of this paper, we will therefore use the following formulation of the PD and corresponding survival process: \[\Psi(x,\rho,y,u):=\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}\leq 0|G_{0}=x,R_{0}=\rho,Y_{0}=y\Big{)}\equiv\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}\leq 0\Big{)}, \tag{11}\] \[\Phi(x,\rho,y,u):=1-\Psi(x,\rho,y,u)=\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}>0\Big{)}. \tag{12}\] **Remark 3.5**.: It is worth emphasizing that we use the general term "time until maturity" purposefully, as it captures both PD cases, under the homogeneity assumption. We will continue to use this term throughout this and subsequent papers, to underline the importance of this generalization. Furthermore, the homogeneity assumption is strong, yet fair. Particularly in the case of corporate and/or small business loans, it is natural to consider such asset processes, since credit risk modelling is often done across complete financial/business cycles (e.g., years or quarters), over which the evolution of the asset process (or related return processes) will have similar dynamics, regardless of the exact point in time. However, even without this assumption, the approaches developed in this paper can be used by fixing either the starting or the maturity time in order to obtain whichever case of the PD process the modeller requires. Hence, this framework is useful for PD modelling under any asset value process. 
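Under the generalized model (8), the time-homogeneous PD (11) can likewise be estimated by simulation, now conditioning on the initial regime \(\rho\) and variance \(y\). The sketch below couples the CTMC, a full-truncation Euler step for the CIR variance, and the jump OU asset; the generator and every parameter value are again illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
Qgen  = np.array([[-0.4, 0.3, 0.1],    # hypothetical CTMC generator (3 regimes)
                  [0.2, -0.5, 0.3],
                  [0.05, 0.15, -0.2]])
k     = np.array([0.8, 0.5, 0.2])      # regime-dependent OU parameters (illustrative)
theta = np.array([2.0, 1.0, 0.3])
sigma = np.array([0.6, 1.0, 1.4])
kappa, mu, xi = 1.0, 0.1, 0.3          # CIR variance parameters (illustrative)
lam, jmu, jsd = 0.5, -0.2, 0.1         # compound Poisson jump parameters

def psi_mc(x, rho, y, u, n_paths=20000, n_steps=250):
    """Monte Carlo estimate of Psi(x, rho, y, u) in (11) under model (8)."""
    dt = u / n_steps
    C = (np.eye(3) + Qgen * dt).cumsum(axis=1)   # per-row cumulative probabilities
    G = np.full(n_paths, float(x))
    Y = np.full(n_paths, float(y))
    r = np.full(n_paths, rho, dtype=int)
    default = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        v = rng.random(n_paths)
        r = np.minimum((v[:, None] > C[r]).sum(axis=1), 2)   # next regime per path
        Yp = np.maximum(Y, 0.0)                  # full truncation keeps the CIR variance usable
        n_j = rng.poisson(lam * dt, n_paths)
        jumps = rng.normal(jmu * n_j, jsd * np.sqrt(n_j))
        G = G + k[r] * (theta[r] - G) * dt \
              + sigma[r] * np.sqrt(Yp * dt) * rng.normal(size=n_paths) + jumps
        Y = Y + kappa * (mu - Y) * dt + xi * np.sqrt(Yp * dt) * rng.normal(size=n_paths)
        default |= (G <= 0.0)
    return default.mean()

psi = psi_mc(x=1.0, rho=0, y=0.1, u=2.0)
```

By the time homogeneity discussed above, only the remaining time until maturity \(u\) enters the estimate, which is exactly the simplification exploited by Lemma 3.4.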
Using the generalized model and the corresponding PD (or survival) process (11) (or (12)), we can prove certain mathematical properties that are required to ensure the existence of appropriate solutions for the Partial Integro-differential Equations (PIDEs) we will obtain for the PD functions. This creates a complete and robust mathematical framework which can be applied even without assuming or proving regularity, and is therefore applicable to a wide range of asset value models. At the same time, it is important to discuss the practical implications and applicability of the approaches described in this and subsequent sections. When considering real-life credit risk modelling tasks the state of the regime (e.g., the IFRS 9 Stage), and/or the value of any underlying macroeconomic factors may be observable and can therefore be inserted explicitly into the generalized model (8), thereby obtaining the regime switching or stochastic volatility model, with PD functions given by: \[\Psi(x,\rho,u)=\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{(x,\rho)}\leq 0\Big{)}, \tag{13}\] \[\Psi(x,y,u)=\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{(x,y)}\leq 0\Big{)}, \tag{14}\] respectively, with corresponding survival functions \(\Phi(x,\rho,u)\) and \(\Phi(x,y,u)\). These models can then be used for the tasks we consider under IFRS 9, and credit risk more generally. For this reason, in Section 6 we develop numerical schemes for such simplified versions of the generalized model, starting with the one-dimensional Levy-driven OU asset and its corresponding PD function \(\Psi(x,u)=\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{x}\leq 0\Big{)}\) (and survival \(\Phi(x,u)\)), and continuing with the regime switching and stochastic volatility models, which will be used for related applications. 
**Remark 3.6**.: We refer the reader to Appendix B for a brief overview of useful PDEs that the survival processes and transition densities of the non-jump versions of the above models satisfy (i.e., the versions which do not contain the Levy jump process). These, known as Kolmogorov backward equations, can be written in terms of the infinitesimal generators of the stochastic processes, and will be referred to in the subsequent sections, where we will obtain similar equations for the Levy-driven models. We also note that we adopt the notation for the generator operators used in these equations for the remainder of the paper, for convenience.

## 4 Integral characterization and properties of the PD function

As previously mentioned, our approach ultimately relies on deriving and solving PIDEs for the PD functions. To obtain these equations, we will first consider an integral equation (IE) characterization of the PD under the generalized model, from which we can obtain similar representations for the simplified models. We develop these IEs in this section, which will allow us to prove that the PD functions enjoy the properties required so as to be considered appropriate (either weak or strong) solutions to the PIDEs.

### Required notation for the Integral Equations

To ease the calculations presented in this section, we first introduce some notation, which will be used for the integral equations. **Definition 4.1**.: Consider \(x\in\mathcal{D}\) and the vector of stochastic processes \((X_{t}^{i})_{t\geq 0}\), with corresponding state spaces \(\mathcal{D}_{i}\), for \(i=1,\ldots,d\). For a fixed time \(u\in[0,T]\), we define the family of operators \(\big{(}\mathcal{T}_{s},s\in[0,u]\big{)}\), acting on a function \(\phi:\mathbb{R}^{d+1}\times[0,T]\to[0,1]\), by: \[\mathcal{T}_{s}\phi(x,x_{0}^{1},x_{0}^{2},\ldots,x_{0}^{d},u)=\mathbb{E}[\phi(x,X_{s}^{1},X_{s}^{2},\ldots,X_{s}^{d},u-s)|X_{0}^{1}=x_{0}^{1},X_{0}^{2}=x_{0}^{2},\ldots,X_{0}^{d}=x_{0}^{d}]. 
\tag{15}\] In our setting, \(x\) corresponds to the initial position of the asset value process, i.e., \(G_{0}=x\), and each \(X_{t}^{i}\) represents a latent variable, such as the CTMC in the regime switching model or the volatility in the stochastic volatility model. To give analytic forms that will be used in these two models, respectively, we specify the following cases: 1. Let \((X^{i}_{t})_{t\geq 0}\), for \(i=1,2,\ldots,d\), be discrete-state and independent stochastic processes, with state spaces \(\mathcal{X}^{i}\), such that \(|\mathcal{X}^{i}|<\infty\), and with transition probabilities \(\pi_{i}(k,x^{i}_{0},t):=\mathbb{P}(X^{i}_{t}=k|X^{i}_{0}=x^{i}_{0})\) for \(k\in\mathcal{X}^{i}\). Then: \[\mathcal{T}_{s}\phi(x, x^{1}_{0},x^{2}_{0},\ldots,x^{d}_{0},u)\] \[=\sum_{x^{1}_{s}\in\mathcal{X}^{1}}\cdots\sum_{x^{d}_{s}\in\mathcal{X}^{d}}\phi(x,x^{1}_{s},x^{2}_{s},\ldots,x^{d}_{s},u-s)\pi_{1}(x^{1}_{s},x^{1}_{0},s)\cdots\pi_{d}(x^{d}_{s},x^{d}_{0},s).\] (16) 2. Let \((X^{i}_{t})_{t\geq 0}\), for \(i=1,2,\ldots,d\), be continuous-state and independent stochastic processes, with supports \(\mathcal{D}^{i}\) and transition densities \(q_{i}(k,x^{i}_{0},t)\) for \(k\in\mathcal{D}^{i}\). Then: \[\mathcal{T}_{s}\phi(x,x^{1}_{0},x^{2}_{0},\ldots,x^{d}_{0},u)\] \[=\int_{x^{1}_{s}\in\mathcal{D}^{1}}\cdots\int_{x^{d}_{s}\in\mathcal{D}^{d}}\phi(x,x^{1}_{s},x^{2}_{s},\ldots,x^{d}_{s},u-s)q_{1}(x^{1}_{s},x^{1}_{0},s)\cdots q_{d}(x^{d}_{s},x^{d}_{0},s)dx^{1}_{s}\cdots dx^{d}_{s}.\] (17) It is easy to see that in the case where \((X^{i}_{t})_{t\geq 0}\) contains both discrete and continuous stochastic processes the analytical expression will contain both summation and integral terms. Hereinafter, we will refer to \(\mathcal{T}_{s}\), for a fixed \(s\in[0,u]\), as the \(s-\)_operator_. 
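For a single CTMC latent variable, the discrete case (16) is just an average of \(\phi\) against the chain's transition probabilities \(\pi(k,\rho,s)=[e^{Qs}]_{\rho,k}\). The sketch below computes \(e^{Qs}\) by uniformization (numpy only) and applies the \(s-\)operator to a function of the regime; the 3-state generator and the values of \(\phi\) are hypothetical:

```python
import numpy as np

def transition_matrix(Qgen, s, n_terms=60):
    """P(s) = exp(Qgen * s) via uniformization:
    exp(Qs) = sum_n e^{-cs} (cs)^n / n! * Ptil^n, with Ptil = I + Qgen/c."""
    c = float(np.max(-np.diag(Qgen))) * 1.05 + 1e-12   # c >= max exit rate
    Ptil = np.eye(len(Qgen)) + Qgen / c                # a stochastic matrix
    out = np.zeros_like(Qgen, dtype=float)
    term = np.eye(len(Qgen))                           # Ptil^0
    w = np.exp(-c * s)                                 # Poisson weight for n = 0
    for n in range(n_terms):
        out += w * term
        term = term @ Ptil
        w *= c * s / (n + 1)
    return out

# Hypothetical 3-regime generator and a function phi of the regime only.
Qgen = np.array([[-0.4, 0.3, 0.1], [0.2, -0.5, 0.3], [0.05, 0.15, -0.2]])
phi = np.array([0.9, 0.6, 0.1])                        # phi(x, rho_j, u - s) for fixed x, u
Ts_phi = transition_matrix(Qgen, s=1.0) @ phi          # (T_s phi)(rho_i) = sum_j pi(j, rho_i, s) phi_j
```

Since each row of \(e^{Qs}\) is a probability vector, \(\mathcal{T}_{s}\phi\) always stays between the smallest and largest values of \(\phi\), consistent with \(\|\mathcal{T}_{s}\|\leq 1\) used later.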
In the calculations that follow we prefer to express the relevant integral equations in terms of the \(s-\)operator notation for convenience and simplicity.

### Integral Equation formulations of the PD functions

The first step in our methodology entails deriving the IEs for the PD functions. As is conventional in most of the literature, we perform our calculations with the survival probability, and it is easy to see that the same steps can be used for the corresponding PDs. These equations prove to be very useful, as they will allow us to establish continuity and existence results for the PD function. We prove the result under the generalized model, from which analogous results for the regime switching and stochastic volatility cases are easily obtained. **Proposition 4.2**.: _Consider the asset value process under the generalized model (8) with jump rate \(\lambda=\nu(\mathbb{R})\) and jump size distribution \(F(z)\). Furthermore, let \(Q(x,\rho,y,u)\) and \(p(x^{\prime},x,s)\) be the survival probability and transition density, respectively, of the non-jump generalized OU process. Then, the survival probability \(\Phi(x,\rho,y,u)\) satisfies the integral equation:_ \[\Phi(x,\rho,y,u)=\int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)p(x^{\prime},x,s)dF(z)dx^{\prime}ds+e^{-\lambda u}Q(x,\rho,y,u). \tag{18}\] Proof.: Consider the natural filtration \(\mathcal{F}_{t}\) generated by the tri-variate process \((G_{t},R_{t},Y_{t})\). Recall that we use the notation \(G_{t}^{(x,\rho,y)}\) to represent the asset value process depending on the regime CTMC and volatility process \(Y_{t}\), with \(G_{0}=x\), \(R_{0}=\rho\) and \(Y_{0}=y\). By definition, \(\Phi(x,\rho,y,u)=\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}>0\Big{)}\) and it will therefore be useful to define the martingale: \[M_{s}=\mathbb{E}[\mathds{1}\big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}>0\big{)}|\mathcal{F}_{s}], \tag{19}\] for \(s<u\). 
Furthermore, let \(\tau\) be the time of the first jump, and define the stopping time: \[\tau^{*}=\inf\{t<u:\{\Delta G_{t}^{(x,\rho,y)}\neq 0\}\cap\{G_{s}^{(x,\rho,y)}>0,\ \forall s\in[0,t]\}\}, \tag{20}\] i.e., the time the process first jumps, having not yet defaulted. It is easy to check that \(\tau^{*}\) is indeed an \(\mathcal{F}_{t}\)-stopping time. On the event \(\{\tau>u\}\) no jumps occur within the examined time horizon and therefore \(\Phi(x,\rho,y,u)=Q(x,\rho,y,u)\), where recall that \(Q(x,\rho,y,u)\) is the survival probability of the non-jump generalized OU process. On \(\{\tau\leq u\}\), we have: \[\Phi(x,\rho,y,u)=\mathbb{E}[\mathds{1}\big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}>0\big{)}|\mathcal{F}_{0}]=M_{0}=\mathbb{E}[M_{\tau^{*}}], \tag{21}\] where the last step follows from the Optional Stopping Theorem. Notice that \(\mathbb{P}(\tau^{*}=\infty)>0\) and therefore the above is to be understood in an almost sure sense. Now, by the strong Markov property and the time homogeneity of the OU process, it follows that: \[\Phi(x,\rho,y,u)= \mathbb{E}[\Phi(G_{\tau^{*}}^{(x,\rho,y)},R_{\tau^{*}},Y_{\tau^{*}},u-\tau^{*})]\] \[= \mathbb{E}[\Phi(G_{\tau^{*}},R_{\tau^{*}},Y_{\tau^{*}},u-\tau^{*})|G_{0}=x,R_{0}=\rho,Y_{0}=y]. \tag{22}\] Concerning the two cases for the time of the first jump, we can write: \[\Phi(x,\rho,y,u)= \mathbb{E}[\mathds{1}\big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}>0\big{)}\mathds{1}(\tau\leq u)]+\mathbb{E}[\mathds{1}\big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}>0\big{)}\mathds{1}(\tau>u)]\] \[= \mathbb{E}[\mathds{1}\big{(}\inf_{r\leq u}G_{r}^{(x,\rho,y)}>0\big{)}|\tau\leq u]\mathbb{P}(\tau\leq u)+e^{-\lambda u}Q(x,\rho,y,u). 
\tag{23}\] Then, using (22), the law of total probability and the definition of the \(s-\)operator, the first term can be written as: \[\int_{0}^{u}\lambda e^{-\lambda s}\mathbb{E}[\Phi(G_{s},R_{s},Y_{s},u-s)|G_{0}=x,R_{0}=\rho,Y_{0}=y]ds\] \[= \int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{s},Y_{s},u-s)|R_{0}=\rho,Y_{0}=y]p(x^{\prime},x,s)dF(z)dx^{\prime}ds\] \[= \int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)p(x^{\prime},x,s)dF(z)dx^{\prime}ds, \tag{24}\] where we have conditioned on the pre-jump asset value and the subsequent jump size. Substituting back into (23), we get the required result. It is now straightforward to obtain the analogous integral equations for the PD functions under the regime switching and stochastic volatility models. We present both these results below, omitting the proofs for brevity, as they follow the proof of Proposition 4.2 almost identically. **Corollary 4.3**.: 1. _Consider the asset value process under the regime switching model (_4_) with jump rate and jump size distribution as in Proposition_ 4.2_. Let_ \(Q(x,\rho,u)\) _and_ \(p(x^{\prime},x,s)\) _be the survival probability and transition density, respectively, of the continuous regime switching OU process. Then, the survival probability_ \(\Phi(x,\rho,u)\) _satisfies the integral equation:_ \[\Phi(x,\rho,u)=\int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,u)p(x^{\prime},x,s)dF(z)dx^{\prime}ds+e^{-\lambda u}Q(x,\rho,u).\] (25) 2. _Consider the asset value process under the stochastic volatility model (_6_) with jump rate and jump size distribution as in Proposition_ 4.2_. Let_ \(Q(x,y,u)\) _and_ \(p(x^{\prime},x,s)\) _be the survival probability and transition density, respectively, of the continuous stochastic volatility OU process. 
Then, the survival probability_ \(\Phi(x,y,u)\) _satisfies the integral equation:_ \[\Phi(x,y,u)=\int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,y,u)p(x^{\prime},x,s)dF(z)dx^{\prime}ds+e^{-\lambda u}Q(x,y,u).\] (26) **Remark 4.4**.: The proof above is based on the approach considered in [42] and constitutes an extension to processes where the diffusion term is non-zero. This results in having to consider appropriate stopping times, as well as the transition density of the OU process, in the representation above. Furthermore, it is worth mentioning that in the simple case of the Levy-driven OU process (3), without latent variables, the survival probability \(\Phi(x,u)\) satisfies the integral equation: \[\Phi(x,u)=\int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{\mathbb{R}}\Phi(x^{\prime}+z,u-s)p(x^{\prime},x,s)dF(z)dx^{\prime}ds+e^{-\lambda u}Q(x,u), \tag{27}\] which can be easily derived from the regime switching model by considering a single regime. This can be directly compared to the corresponding integral equation in [42], where similar integral equations are used to study the ruin probability function for an analogous asset value process. In this work, on the other hand, the integral equations lead to the existence and continuity results that suffice to obtain approximations of the PD functions, as we will see below. **Remark 4.5**.: In the following sections we will often interchange between the \(s-\)operator formulation and more analytical expressions in terms of an appropriate expected value. 
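Equation (27) also suggests a direct numerical route: fix a grid, plug in the (Gaussian) transition density of the non-jump OU process, and iterate the right-hand side to a fixed point. The sketch below is a rough Picard iteration under strong simplifying assumptions (a deterministic jump size \(z_{0}\), i.e., \(F\) a point mass; a truncated state grid; and a crude Monte Carlo placeholder for \(Q(x,u)\)); it is meant only to illustrate the structure of (27), not to replace the finite difference schemes of Section 6:

```python
import numpy as np

rng = np.random.default_rng(2)
k, theta, sigma = 0.5, 1.0, 0.3          # non-jump OU parameters (illustrative)
lam, z0 = 0.5, -0.4                      # jump rate; deterministic jump size (F = delta_{z0})
xs = np.linspace(0.0, 3.0, 31)           # truncated state grid
us = np.linspace(0.0, 2.0, 21)           # time-until-maturity grid
dx, du = xs[1] - xs[0], us[1] - us[0]

def ou_density(xp, x, s):
    """Gaussian transition density p(x', x, s) of the non-jump OU process."""
    m = theta + (x - theta) * np.exp(-k * s)
    v = sigma**2 * (1.0 - np.exp(-2.0 * k * s)) / (2.0 * k)
    return np.exp(-(xp - m) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)

def mc_survival_Q(n_paths=3000, n_steps=200):
    """Crude Monte Carlo placeholder for Q(x, u) on the grid."""
    dt = us[-1] / n_steps
    Q = np.empty((len(xs), len(us)))
    for i, x in enumerate(xs):
        G = np.full(n_paths, float(x))
        tau = np.where(G <= 0.0, 0.0, np.inf)          # default time per path
        for n in range(1, n_steps + 1):
            G = G + k * (theta - G) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
            tau = np.where((G <= 0.0) & np.isinf(tau), n * dt, tau)
        Q[i] = [(tau > u).mean() for u in us]
    return Q

Q = mc_survival_Q()
Phi = Q.copy()
for _ in range(30):                                    # Picard iteration on (27)
    new = np.empty_like(Phi)
    for m, u in enumerate(us):
        acc = np.zeros(len(xs))
        for n in range(1, m + 1):
            s = n * du
            # survival after a jump of size z0 from x', interpolated on the grid
            shifted = np.interp(xs + z0, xs, Phi[:, m - n])
            dens = ou_density(xs[None, :], xs[:, None], s)   # dens[i, j] = p(x_j, x_i, s)
            acc += lam * np.exp(-lam * s) * (dens @ shifted) * dx * du
        new[:, m] = np.clip(acc + np.exp(-lam * u) * Q[:, m], 0.0, 1.0)
        new[0, m] = 0.0                                # absorbing boundary at x = 0
    Phi = new
```

The iteration converges quickly here because the jump term carries total weight \(1-e^{-\lambda u}<1\); the truncated grid and the Riemann quadrature over \(s\) are the main sources of bias in this sketch.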
In particular, applying the law of total probability, the integral equation for the stochastic volatility model can be written in more detail as: \[\Phi(x,y,u)=\int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{0}^{\infty}\int_{\mathbb{R}}\Phi(x^{\prime}+z,\nu,u-s)q(\nu,y,s)p(x^{\prime},x,s)dF(z)dx^{\prime}d\nu ds+e^{-\lambda u}Q(x,y,u), \tag{28}\] where \(q(\nu,y,s)\) is the transition density of the CIR volatility process and we have used that, by definition: \[\mathcal{T}_{s}\Phi(x^{\prime}+z,y,u)=\int_{0}^{\infty}\Phi(x^{\prime}+z,\nu,u-s)q(\nu,y,s)d\nu.\] Similarly, for the generalized model, we can write: \[\Phi(x, \rho,y,u)\] \[= \int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{0}^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{s}^{\rho},\nu,u-s)]q(\nu,y,s)p(x^{\prime},x,s)dF(z)dx^{\prime}d\nu ds+e^{-\lambda u}Q(x,\rho,y,u). \tag{29}\] These analytical versions will be useful when using the integral equations to derive PIDEs for the PD processes, whereas the more concise form will be used when proving the required continuity results that follow. **Remark 4.6**.: Finally, it is also worth noting that, in obtaining the integral equations in this section, we have included the initial values of the regime and volatility processes as additional variables on which the PD function depends. An alternative approach is writing these as a system of integral equations. 
For example, considering the regime switching model with states in accordance with the IFRS 9 framework (i.e., Stages 1, 2 and 3) we can write \(\Phi_{i}(x,u):=\Phi(x,\rho^{i},u)\) (and similarly \(Q_{i}(x,u):=Q(x,\rho^{i},u)\)), which results in the system of equations: \[\Phi_{i}(x,u)=\int_{0}^{u}\lambda e^{-\lambda s}\int_{0}^{\infty}\int_{\mathbb{R}}\sum_{j=1}^{3}\Phi_{j}(x^{\prime}+z,u-s)\pi(R_{j},\rho^{i},s)p(x^{\prime},x,s)dF(z)dx^{\prime}ds+e^{-\lambda u}Q_{i}(x,u),\] for \(i=1,2,3\), and where we have used that, in this case, \(\mathcal{T}_{s}\Phi(x,\rho,u)=\sum_{j=1}^{3}\Phi(x,R_{j},u-s)\pi(R_{j},\rho,s)\), by definition. In our approach, we prefer to account for these external parameters explicitly, and generalize this representation in the cases of the additional variables.

### Properties and existence of solutions

Using the IEs derived above, we can now prove that the PD functions enjoy certain mathematical properties required to obtain the viscosity solutions to the corresponding PIDEs. To consider such solutions, we require \(\Psi\) (equivalently \(\Phi\)) to be a continuous function of \((x,u)\). To this end, we first note that \(\Psi\) (\(\Phi\)) is a monotonically decreasing (increasing) function with respect to \(x\) and a monotonically increasing (decreasing) function with respect to the maturity \(u\). Moreover, as it is bounded, we can conclude that it is an integrable function. We can take advantage of the integral equation forms to prove that solutions for the survival probabilities do exist and, moreover, that they are continuous with respect to \(x\) and \(u\). We prove this result for the generalized model, from which the other cases follow easily. **Remark 4.7**.: As mentioned, in what follows we will focus on the survival probability as a function of \((x,u)\), as the additional complexity we are interested in arises from the jump component of the OU process. 
It is straightforward to reproduce the proofs for the stochastic volatility variable as well. Finally, the existence of the regime switching process simply creates a coupling that does not affect the results in this section. **Lemma 4.8**.: _The probability of survival function under the generalized model \(\Phi(x,\rho,y,u)\), as defined in (12), is uniformly continuous as a function of \(x\) and \(u\)._ Proof.: To prove this result we will use the integral formulation (18). Consider a fixed \(\epsilon>0\) and \(||(x,u)-(x_{0},u_{0})||<\delta\), for some \(\delta>0\) that will be specified. Then: \[|\Phi(x,\rho,y,u)-\Phi(x_{0},\rho,y,u_{0})|\leq|\Phi(x,\rho,y,u)-\Phi(x_{0},\rho,y,u)|+|\Phi(x_{0},\rho,y,u)-\Phi(x_{0},\rho,y,u_{0})|\] We will handle each of the terms above separately. We have: * Let \(|x-x_{0}|<\delta_{1}\). Since the transition density of the non-jump OU process is uniformly continuous, we can select \(\delta_{1}\) such that \(|p(x^{\prime},x,s)-p\left(x^{\prime},x_{0},s\right)|<\epsilon/2\) and, similarly, \(|Q(x,u)-Q(x_{0},u)|<\epsilon/2\), since the survival probability \(Q\) inherits this uniform continuity. Then: \[|\Phi(x,\rho,y,u)-\Phi(x_{0},\rho,y,u)|\] \[\leq \int_{0}^{u}\lambda e^{-\lambda s}\int_{\mathbb{R}}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)|p(x^{\prime},x,s)-p(x^{\prime},x_{0},s)|dF(z)dx^{\prime}ds+e^{-\lambda u}|Q(x,u)-Q(x_{0},u)|\] \[\leq \int_{0}^{u}\lambda e^{-\lambda s}\int_{\mathbb{R}}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)\frac{\epsilon}{2}dF(z)dx^{\prime}ds+e^{-\lambda u}\frac{\epsilon}{2}\leq\int_{0}^{u}\lambda e^{-\lambda s}\frac{\epsilon}{2}ds+e^{-\lambda u}\frac{\epsilon}{2}=\frac{\epsilon}{2}.\] * Similarly, let \(|u-u_{0}|<\delta_{2}\). 
Then: \[|\Phi(x_{0},\rho,y,u)-\Phi\left(x_{0},\rho,y,u_{0}\right)|\leq\] \[\leq \int_{u_{0}}^{u}\lambda e^{-\lambda s}\int_{\mathbb{R}}\int_{ \mathbb{R}}\left|\mathcal{T}_{s}\Phi\left(x^{\prime}+z,\rho,y,u\right)- \mathcal{T}_{s}\Phi\left(x^{\prime}+z,\rho,y,u_{0}\right)\right|p(x^{\prime},x,s)dF(z)dx^{\prime}ds.\] By definition, we have \(\|\mathcal{T}_{s}\|\leq 1\) and therefore: \(|\Phi(x,\rho,y,u)-\Phi\left(x,\rho,y,u_{0}\right)|\leq\int_{u_{0}}^{u}\lambda e ^{-\lambda s}ds\leq(u-u_{0})\max_{s\in[0,u]}\lambda e^{-\lambda s}=\lambda \left(u-u_{0}\right)<\lambda\delta_{2}\). By selecting \(\delta_{2}=\frac{\epsilon}{2\lambda}\) and \(\delta=\delta_{1}\wedge\delta_{2}\) we therefore obtain: \[|\Phi(x,\rho,y,u)-\Phi(x_{0},\rho,y,u_{0})|<\epsilon,\] as required. We can now consider an appropriate fixed point result which will allow us to prove the existence of a solution to the IE for the survival probability. For this result we will refer to the Arzela-Ascoli and Schauder's fixed point theorems, as stated in E.1 and E.2 of the Appendix, respectively. **Proposition 4.9**.: _The integral equation (18) admits a continuous solution._ Proof.: Consider the metric space of all continuous functions \(\Phi(\cdot,\rho,y,\cdot)\) on \(\mathcal{D}\times[0,T]\), denoted by \(X:=C(\mathcal{D}\times[0,T])\). Furthermore, define the functional operator \(\mathcal{A}:X\to X\) by: \[\mathcal{A}\Phi(x,\rho,y,u)=\int_{\mathbb{R}}\int_{\mathbb{R}}\mathcal{T}_{s} \Phi(x^{\prime}+z,\rho,y,u)p(x^{\prime},x,s)dF(z)dx^{\prime}. \tag{30}\] We begin by proving that the operator \(\mathcal{A}\) is: \((i)\) uniformly bounded, \((ii)\) equi-continuous and \((iii)\) compact. We separate these in the steps below: * Uniform boundedness follows easily from the definition of \(\Phi\) and the operator \(\mathcal{T}_{s}\). 
Specifically: \[|\mathcal{A}\Phi(x,\rho,y,u)|\leq 1.\] * For equi-continuity we must show that, given \(\epsilon>0\), there exists \(\delta>0\) such that if \(||(x,u)-(x_{0},u_{0})||<\delta\) then \(|\mathcal{A}\Phi(x,\rho,y,u)-\mathcal{A}\Phi(x_{0},\rho,y,u_{0})|<\epsilon\), for all \(\Phi\in X\). The proof follows Lemma 4.8 very closely, but we include the steps for completeness. To this end, we calculate: \[|\mathcal{A}\Phi(x,\rho,y,u)-\mathcal{A}\Phi(x_{0},\rho,y,u_{0})|\] \[\leq\int_{\mathbb{R}}\int_{\mathbb{R}}|\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)p(x^{\prime},x,s)-\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u_{0})p(x^{\prime},x_{0},s)|dF(z)dx^{\prime}.\] Consider the integrand. We can write: \[\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)\Big{(}p(x^{\prime},x,s)-p(x^{\prime},x_{0},s)\Big{)}+p(x^{\prime},x_{0},s)\Big{(}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)-\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u_{0})\Big{)},\] and therefore: \[|\mathcal{A}\Phi(x,\rho,y,u)-\mathcal{A}\Phi(x_{0},\rho,y,u_{0})|\leq \int_{\mathbb{R}}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)|p(x^{\prime},x,s)-p(x^{\prime},x_{0},s)|\] \[+p(x^{\prime},x_{0},s)|\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)-\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u_{0})|dF(z)dx^{\prime}.\] We know that \(p(x^{\prime},x,s)\) is uniformly continuous in \(x\) and \(\Phi\) is uniformly continuous in \(u\). Therefore, given \(\epsilon>0\) we can select \(\delta_{1}\) such that \[|p(x^{\prime},x,s)-p(x^{\prime},x_{0},s)|<\epsilon/2,\] for all \(x,x_{0}\) such that \(|x-x_{0}|<\delta_{1}\). Furthermore, from Lemma 4.8 recall that, for \(u,u_{0}\) such that \(|u-u_{0}|<\delta_{2}\) we have: \[|\Phi(x^{\prime}+z,\rho,y,u)-\Phi(x^{\prime}+z,\rho,y,u_{0})|<\lambda\delta_{2},\] for all \(\Phi\in X\). Select \(\delta_{2}=\frac{\epsilon}{2\lambda}\) and let \(\delta=\delta_{1}\wedge\delta_{2}\). 
Hence: \[|\mathcal{A}\Phi(x,\rho,y,u)-\mathcal{A}\Phi(x_{0},\rho,y,u_{0})|\] \[\leq \int_{\mathbb{R}}\int_{\mathbb{R}}\mathcal{T}_{s}\Phi(x^{\prime}+z,\rho,y,u)\frac{\epsilon}{2}+p(x^{\prime},x_{0},s)\lambda\delta dF(z)dx^{\prime}\leq\epsilon. \tag{31}\] Notice that the choice of \(\delta\) does not depend on \(\Phi\), only on the given \(\epsilon\). Hence, \(|\mathcal{A}\Phi(x,\rho,y,u)-\mathcal{A}\Phi(x_{0},\rho,y,u_{0})|<\epsilon\), for all \(\Phi\in X\) whenever \(||(x,u)-(x_{0},u_{0})||<\delta\), i.e., \(\mathcal{A}\) is equi-continuous. * We have that \(\mathcal{A}\Phi\) is uniformly bounded and equi-continuous and therefore compact by the Arzela-Ascoli theorem. With the above, we now turn to the main result. Consider now the Banach space \(\mathcal{C}=\{\phi\in X,||\phi||\leq 1\}\), where we have used the standard supremum norm \[||\phi||:=\sup_{x,u}|\phi|.\] With the operator \(\mathcal{A}\) defined as above, we have: \[\Phi(x,\rho,y,u)=\int_{0}^{u}\lambda e^{-\lambda s}\mathcal{A}\Phi(x,\rho,y,u)ds+g(x,u),\] with \(g(x,u):=e^{-\lambda u}Q(x,\rho,y,u)\) and it is natural to define the operator \(\mathcal{P}\Phi\), such that: \[\mathcal{P}\Phi(x,\rho,y,u)=\int_{0}^{u}\lambda e^{-\lambda s}\mathcal{A}\Phi(x,\rho,y,u)ds+g(x,u). \tag{32}\] Recall that \(\mathcal{A}\Phi\) is bounded by \(1\) and, furthermore, by the definition of the operator, we also have that: \[||\mathcal{A}\Phi||\leq||\Phi||.\] Hence: \[||\mathcal{P}\Phi|| \leq||\int_{0}^{u}\lambda e^{-\lambda s}\mathcal{A}\Phi(x,\rho,y,u)ds||+||g||\leq\sup_{x,u}\int_{0}^{u}\lambda e^{-\lambda s}|\mathcal{A}\Phi(x,\rho,y,u)|ds+||g||\] \[\leq\sup_{x,u}(1-e^{-\lambda u})|\Phi|+\sup_{x,u}e^{-\lambda u}|Q(x,\rho,y,u)|\leq\max(||\Phi||,||g||)\leq 1, \tag{33}\] concluding that \(\mathcal{P}\) maps functions in \(\mathcal{C}\) to \(\mathcal{C}\). Notice that we can write \(\mathcal{P}=\mathcal{J}\mathcal{A}\), with \(\mathcal{J}\phi=\int_{0}^{u}\lambda e^{-\lambda s}\phi(s)ds\). 
It is straightforward to see that the linear operator \(\mathcal{J}\) is compact, as is \(\mathcal{A}\), as shown above. Therefore, we can apply Schauder's fixed point theorem to conclude that \(\mathcal{P}\) has a fixed point in \(X\), which solves the integral equation (18). ## Partial Integro-Differential Equations for the PD function Generally, when considering various credit modelling tasks, such as forecasting probability of default and expected losses, it is common throughout the literature to use path simulation techniques, particularly for practical purposes. Specific examples and applications can be seen in e.g., [49] and [54]. On the other hand, we will see in this section that the integral equation representations (25), (26) and (29) lead to PIDEs for the PD functions, which belong to families of well-studied equations. Hence, our approach relies solely on the equations derived in this and the previous section, which can be solved to retrieve the corresponding values, thereby eliminating the need for simulations and the larger errors which accompany such methods. Natural questions arise related to the regularity conditions that the survival function (and hence the corresponding PD function) must satisfy; for example, in the one dimensional Levy-driven OU case, classical solutions of PIDEs would require that \(\Phi(x,u)\in\mathcal{C}^{2,1}\big{(}\mathcal{D}\times[0,T]\big{)}\), where \(\mathcal{D}:=[0,\infty)\), i.e., the survival probability function would have to be twice and once continuously differentiable in \(x\) and \(u\), respectively, on the corresponding domains. In many cases, the required differentiability conditions are often assumed. However, we can avoid making such assumptions by considering viscosity solutions of the PIDEs, a notion introduced in [24]. Viscosity solutions and their applications in finance have been studied in e.g., [22] and [21]. 
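For concreteness, the path-simulation approach mentioned above can be sketched as follows: Euler-Maruyama paths of the one-dimensional jump OU model (3), with the default barrier monitored along each path. This is a minimal illustration only; the parameter values and the Gaussian jump law are assumptions for the sketch, not the paper's calibrated model.

```python
import numpy as np

# Euler-Maruyama simulation of a jump OU process with a Monte Carlo PD
# estimate. All parameters below are illustrative placeholders.
rng = np.random.default_rng(0)
k, theta, sigma = 0.5, 1.0, 0.3   # OU mean reversion, level, volatility
lam, jump_std = 1.0, 0.2          # jump intensity; N(0, jump_std^2) jump sizes

def pd_monte_carlo(x0, u, n_paths=20000, n_steps=200):
    """Estimate P(inf_{t<=u} X_t <= 0 | X_0 = x0) by path simulation."""
    dt = u / n_steps
    x = np.full(n_paths, float(x0))
    defaulted = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        n_jumps = rng.poisson(lam * dt, n_paths)   # jumps in this time step
        x = (x + k * (theta - x) * dt
               + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
               + jump_std * np.sqrt(n_jumps) * rng.standard_normal(n_paths))
        defaulted |= x <= 0.0   # barrier monitored along the whole path
    return defaulted.mean()

pd_near = pd_monte_carlo(0.2, 1.0)   # start close to the barrier
pd_far = pd_monte_carlo(3.0, 1.0)    # start far from the barrier
```

Estimating the full PD surface this way requires one such run per initial position and per horizon, which is precisely the cost that solving the PIDE once avoids.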
We begin by showing that the generalized PD function is a viscosity solution of a PIDE that will be derived. Then, we continue by showing that this and the PIDEs that result from the regime switching and stochastic volatility models can be derived directly from the corresponding integral equations, if the required regularity conditions hold. In our setting, it is understood that the survival functions are in fact viscosity solutions to these PIDEs, yet these calculations are important for two reasons: firstly, they establish the connection between the integral equations and the corresponding PIDEs, and secondly, they constitute an efficient method of obtaining the form of the PIDEs, for which we can then show that the survival functions are indeed viscosity solutions. Finally, it is worth emphasizing the utility of the PIDEs obtained in this section. Specifically, the solutions to these equations are PD values across initial positions, time horizons and latent variables. Hence, we will see that, by considering numerical schemes for the PIDEs, we obtain the complete evolution of the PD process that is required for applications in credit risk modelling. This is clearly preferable to common methods such as Monte Carlo estimations of the PD, where one must perform simulations of the underlying asset process for many different initial positions and time horizons, separately. This process is extremely computationally costly, especially when taking into account the order of convergence of many stochastic simulation schemes. For example, the Euler scheme for the simulation of the asset process we consider has a strong convergence of order \(0.5\), which is required since the PD value depends on the whole path of the asset process. ### Viscosity solutions Viscosity solutions for non-local PDEs have also been studied in e.g., [7], [22] and [33] and references therein. 
We reiterate the importance of this approach: requiring only continuity of the underlying function, which has been proven for the survival function under the proposed models, we can define solutions of the equations in a weak sense and subsequently approximate them using numerical schemes. We begin by showing that the generalized survival probability is a viscosity solution of an appropriate PIDE (provided in equation (34) below). For completeness, we include the definition of a viscosity solution below (altered to reflect the arguments of the survival function we are studying). **Definition 5.1**.: Consider an integro-differential operator for the function with arguments as above, \(\mathcal{L}f(x,\rho,y,u)\) and a corresponding PIDE \(\mathcal{L}f(x,\rho,y,u)=0\). Then, a function \(\phi(x,\rho,y,u)\) is called a viscosity supersolution (subsolution) of the PIDE if, for any \(\rho\in\mathcal{R}\), for every \((x,y,u)\in\mathcal{D}\times\mathcal{V}\times[0,T]\), and every function \(f(\cdot,\rho,\cdot,\cdot)\in\mathcal{C}^{2,1}\big{(}\tilde{\mathcal{D}}\times[ 0,T]\big{)}\), where \(\tilde{D}:=\mathcal{D}\times\mathcal{V}\), such that \(\phi(x,\rho,y,u)=f(x,\rho,y,u)\) and \(\phi\geq f\) (\(\phi\leq f\)), the inequality \(\mathcal{L}f(x,\rho,y,u)\leq 0\) (\(\mathcal{L}f(x,\rho,y,u)\geq 0\)) holds. A function \(\phi(x,\rho,y,u)\) is a viscosity solution of the PIDE if \(\phi\) is simultaneously a viscosity supersolution and subsolution. In what follows we will show that the survival function is a viscosity solution of an appropriate PIDE given below. We must first show that viscosity solutions do exist for the PIDEs in question. The proof that follows is based on the corresponding result in [12], extended to match the models studied in this work. 
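One standard consistency check on Definition 5.1 is worth recording here (a classical fact, not restated from the source): a classical solution is automatically a viscosity solution. Indeed, if \(\phi\in\mathcal{C}^{2,1}\) solves \(\mathcal{L}\phi=0\) and \(f\) is an admissible test function for the supersolution property, then \(\phi-f\) attains a local minimum of \(0\) at the touching point, so there the first-order derivatives of \(\phi\) and \(f\) coincide, \(\frac{\partial^{2}f}{\partial x^{2}}\leq\frac{\partial^{2}\phi}{\partial x^{2}}\) (and similarly in \(y\)), while \(f(x+z,\rho,y,u)-f(x,\rho,y,u)\leq\phi(x+z,\rho,y,u)-\phi(x,\rho,y,u)\) and \(f(x,j,y,u)-f(x,\rho,y,u)\leq\phi(x,j,y,u)-\phi(x,\rho,y,u)\). Since each of these terms enters the operator with a nonnegative coefficient, \[\mathcal{L}f(x,\rho,y,u)\leq\mathcal{L}\phi(x,\rho,y,u)=0,\] which is exactly the supersolution inequality; the subsolution case is symmetric.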
**Proposition 5.2**.: _The survival probability function \(\Phi(x,\rho,y,u)\) is a viscosity solution of the PIDE \(\mathcal{L}f(x,\rho,y,u)=0\), where:_ \[\mathcal{L}f(x, \rho,y,u):=-\frac{\partial f}{\partial u}(x,\rho,y,u)+k_{\rho}(\theta_{\rho}-x)\frac{\partial f}{\partial x}(x,\rho,y,u)+\kappa(\mu-y)\frac{\partial f}{\partial y}(x,\rho,y,u)\] \[+\frac{1}{2}\sigma_{\rho}^{2}y\frac{\partial^{2}f}{\partial x^{2}}(x,\rho,y,u)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}f}{\partial y^{2}}(x,\rho,y,u)+\sum_{j\neq\rho}q_{\rho j}\Big{(}f(x,j,y,u)-f(x,\rho,y,u)\Big{)}\] \[+\int_{\mathbb{R}}\Big{(}f(x+z,\rho,y,u)-f(x,\rho,y,u)\Big{)}\nu(dz). \tag{34}\] Proof.: We must show that \(\Phi(x,\rho,y,u)\) is simultaneously a viscosity supersolution and subsolution of the PIDE. We begin with the supersolution case. For any \(\rho\in\mathcal{R}\), consider fixed \((x,y,u)\in\mathcal{D}\times\mathcal{V}\times[0,T]\), with \(\Phi(x,\rho,y,u)=0\) when \(x\leq 0\), by definition. Furthermore, consider a function \(f(\cdot,\rho,\cdot,\cdot)\in\mathcal{C}^{2,1}\big{(}\tilde{D}\times[0,T]\big{)}\) such that \(\Phi(x,\rho,y,u)=f(x,\rho,y,u)\) and \(\Phi(\cdot,\rho,\cdot,\cdot)\geq f(\cdot,\rho,\cdot,\cdot)\) on \(\mathcal{D}\times\mathcal{V}\times[0,T]\). Now, let \(h>0\) and let \(\epsilon_{x},\epsilon_{y}\) be small enough to ensure that \(f\in\mathcal{C}^{2,1}\Big{(}\tilde{d_{\epsilon}}\times[0,T]\Big{)}\), where \(\tilde{d_{\epsilon}}:=B_{\epsilon_{x}}(x)\times B_{\epsilon_{y}}(y)\), i.e., we are considering a neighborhood of the fixed \((x,y)\) where the functions will "touch". Finally, define the stopping time \(\tau_{h}:=\inf\{t\geq 0:(G_{t}^{x},Y_{t}^{y})\notin(\overline{B_{\epsilon_{x}}(x)}\times\overline{B_{\epsilon_{y}}(y)})\}\wedge h\), noting that we choose \(h<u\) to ensure that \(\tau_{h}<u\). 
Then, by the Ito formula, and writing \(\mathcal{A}f\) for the operator \(\mathcal{L}f\) in (34), we have: \[f(G_{\tau_{h}}^{x},R_{\tau_{h}}^{\rho}, Y_{\tau_{h}}^{y},u-\tau_{h})-f(G_{0}^{x},R_{0}^{\rho},Y_{0}^{y},u)=f(G_{\tau_{h}}^{x},R_{\tau_{h}}^{\rho},Y_{\tau_{h}}^{y},u-\tau_{h})-f(x,\rho,y,u)\] \[=\int_{0}^{\tau_{h}}\mathcal{A}f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dt+\int_{0}^{\tau_{h}}\sigma\frac{\partial f}{\partial x}(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dB_{t}\] \[+\int_{0}^{\tau_{h}}\int_{\mathbb{R}}\Big{(}f(G_{t}^{x}+z,R_{t}^{\rho},Y_{t}^{y},u-t)-f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)\Big{)}\tilde{N}(dt,dz)\] \[+\sum_{j\neq\rho}p_{\rho j}(\tau_{h})\Big{(}f(G_{\tau_{h}}^{x},j,Y_{\tau_{h}}^{y},u-\tau_{h})-f(G_{\tau_{h}}^{x},\rho,Y_{\tau_{h}}^{y},u-\tau_{h})\Big{)}\] \[=\int_{0}^{\tau_{h}}\mathcal{A}f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dt+m_{t}, \tag{35}\] where \[m_{t}:= \int_{0}^{\tau_{h}}\sigma\frac{\partial f}{\partial x}(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dB_{t}\] \[+\int_{0}^{\tau_{h}}\int_{\mathbb{R}}\Big{(}f(G_{t}^{x}+z,R_{t}^{\rho},Y_{t}^{y},u-t)-f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)\Big{)}\tilde{N}(dt,dz) \tag{36}\] is a martingale and therefore so is the stopped process \(m_{t\wedge\tau_{h}}\). We have that \(f(x,\rho,y,u)=\Phi(x,\rho,y,u)\) and recall that \(\Phi(x,\rho,y,u)=\mathbb{E}[\Phi(G_{\tau_{h}}^{x},R_{\tau_{h}}^{\rho},Y_{\tau_{h}}^{y},u-\tau_{h})]\) almost surely, from Proposition 4.2. 
Therefore: \[\Phi(G_{\tau_{h}}^{x},R_{\tau_{h}}^{\rho},Y_{\tau_{h}}^{y},u-\tau_{h})\geq\] \[f(G_{\tau_{h}}^{x},R_{\tau_{h}}^{\rho},Y_{\tau_{h}}^{y},u-\tau_{h})=f(x,\rho,y,u)+\int_{0}^{\tau_{h}}\mathcal{A}f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dt+m_{t}, \tag{37}\] and so: \[\mathbb{E}[\Phi(G_{\tau_{h}}^{x}, R_{\tau_{h}}^{\rho},Y_{\tau_{h}}^{y},u-\tau_{h})]\geq\mathbb{E}[\Phi(x,\rho,y,u)]+\mathbb{E}\Big{[}\int_{0}^{\tau_{h}}\mathcal{A}f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dt\Big{]}+\mathbb{E}[m_{t}]\] \[\Rightarrow\Phi(x,\rho,y,u)\geq\Phi(x,\rho,y,u)+\mathbb{E}\Big{[}\int_{0}^{\tau_{h}}\mathcal{A}f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dt\Big{]}, \tag{38}\] and hence \(\mathbb{E}\Big{[}\int_{0}^{\tau_{h}}\mathcal{A}f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dt\Big{]}\leq 0\). The final step is as in [12]; when \(h\) is sufficiently small we have \(\tau_{h}=h\) and therefore, by the Lebesgue dominated convergence theorem we have: \[\mathcal{A}f(x,\rho,y,u)=\lim_{h\downarrow 0}\frac{1}{h}\mathbb{E}\Big{[}\int_{0}^{\tau_{h}}\mathcal{A}f(G_{t}^{x},R_{t}^{\rho},Y_{t}^{y},u-t)dt\Big{]}\leq 0, \tag{39}\] showing that \(\Phi\) is indeed a viscosity supersolution. It follows directly, by switching the inequalities in the steps above, that \(\Phi\) is a viscosity subsolution and therefore the result is proven. **Remark 5.3**.: Viscosity theory is not the only prism under which we can consider weak solutions. We can also study solutions in appropriate Sobolev spaces, for which we will need to define a notion of weak differentiability, and we can then use standard martingale approaches to obtain the corresponding PIDEs. We include details of this approach in Appendix D. **Remark 5.4**.: We furthermore note that if additional conditions hold, then the strong solutions that occur are equal to the viscosity solutions, as expected. 
The formulations via viscosity solutions are useful to obtain the form of the PIDEs the PD functions satisfy, and we will see that we can then consider additional conditions that lead to regular solutions through these equations. ### The survival probability as a classical solution of PIDEs derived from the IE formulations To build a consistent framework we must ensure that the PIDEs obtained above (which are satisfied by the PD functions in the viscosity sense) are derivable from the integral equations formulation in section 4. In this section, we show that the integral equations indeed lead to the corresponding PIDEs. We will begin with the calculations under the regime switching and stochastic volatility models, upon which the corresponding result under the generalized model will be built. In all the results below, we consider a fixed time horizon \(T>0\). **Lemma 5.5**.: * _Under the regime-switching model (_4_), the survival probability_ \(\Phi(x,\rho,u)\) _satisfies the PIDE:_ \[\frac{\partial\Phi}{\partial u}(x,\rho,u)=k_{\rho}(\theta_{\rho}-x)\frac{\partial\Phi}{\partial x}(x,\rho,u)+\frac{1}{2}\sigma_{\rho}^{2}\frac{\partial^{2}\Phi}{\partial x^{2}}(x,\rho,u)+\sum_{j\neq\rho}q_{\rho j}\Big{(}\Phi(x,j,u)-\Phi(x,\rho,u)\Big{)}\] \[+\int_{\mathbb{R}}\Big{(}\Phi(x+z,\rho,u)-\Phi(x,\rho,u)\Big{)}\nu(dz),\ \ (x,\rho,u)\in\mathcal{D}\times\mathcal{R}\times[0,T],\] (40) _with initial and boundary conditions:_ \[\Phi(x,\rho,0)=\mathbbm{1}_{\{x>0\}},\ \ (x,\rho)\in\mathcal{D}\times\mathcal{R},\] \[\Phi(0,\rho,u)=0,\ \ (\rho,u)\in\mathcal{R}\times[0,T],\] \[\Phi(x,\rho,u)\to 1\ \text{as}\ x\to\infty,\ \ (\rho,u)\in\mathcal{R}\times[0,T],\] _where_ \(q_{ij}\) _are the elements of the generator_ \(Q\) _of the switching process_ \(R_{t}\)_, as defined in (_108_)._ * _Under the stochastic volatility model (_6_), the survival probability_ \(\Phi(x,y,u)\) _satisfies the PIDE:_ \[\frac{\partial\Phi}{\partial u}(x,y,u) =k(\theta-x)\frac{\partial\Phi}{\partial 
x}(x,y,u)+\kappa(\mu-y)\frac{\partial\Phi}{\partial y}(x,y,u)+\frac{1}{2}y\frac{\partial^{2}\Phi}{\partial x^{2}}(x,y,u)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}\Phi}{\partial y^{2}}(x,y,u)\] \[+\int_{\mathbb{R}}\Big{(}\Phi(x+z,y,u)-\Phi(x,y,u)\Big{)}\nu(dz),\ \ (x,y,u)\in\mathcal{D}\times\mathcal{V}\times[0,T],\] (41) _subject to the initial and boundary conditions:_ \[\Phi(x,y,0)=\mathbbm{1}_{\{x>0\}},\ \ (x,y)\in\mathcal{D}\times\mathcal{V},\] \[\Phi(0,y,u)=0,\ \ (y,u)\in\mathcal{V}\times[0,T],\] \[\Phi(x,y,u)\to 1\ \text{as}\ x\to\infty,\ \ (y,u)\in\mathcal{V}\times[0,T],\] \[\frac{\partial\Phi}{\partial y}(x,y,u)=0\ \text{as}\ y\to\infty\ \ (x,u)\in\mathcal{D}\times[0,T].\] (42) Proof.: * Our approach relies on taking advantage of the known fact that the transition densities satisfy the Kolmogorov equation. Specifically, we know that for \(p(\cdot,x,u)\) we have: \[\frac{\partial p}{\partial u}(\cdot,x,u)=k_{\rho}(\theta_{\rho}-x)\frac{\partial p}{\partial x}(\cdot,x,u)+\frac{1}{2}\sigma_{\rho}^{2}\frac{\partial^{2}p}{\partial x^{2}}(\cdot,x,u).\] (43) Recall that the same holds for the survival distribution of the continuous OU, \(Q(x,\rho,u)\): \[\frac{\partial Q}{\partial u}(x,\rho,u)=\mathcal{L}_{1}Q(x,\rho,u),\] (44) with the generator operator under the regime switching model \(\mathcal{L}_{1}\) given by: \[\mathcal{L}_{1}Q(x,\rho,t):=k_{\rho}(\theta_{\rho}-x)\frac{\partial Q}{\partial x}(x,\rho,t)+\frac{1}{2}\sigma_{\rho}^{2}\frac{\partial^{2}Q}{\partial x^{2}}(x,\rho,t)+\sum_{j\neq\rho}q_{\rho j}\Big{(}Q(x,j,t)-Q(x,\rho,t)\Big{)}.\] (45) Furthermore, for the function \(g(\rho,u):=\mathbb{E}[\Phi(\cdot,R_{u},\cdot)|R_{0}=\rho]\), we have that: \[\frac{\partial g}{\partial u}(\rho,u)=\sum_{j\neq\rho}q_{\rho j}\big{(}g(j,u)-g(\rho,u)\big{)}. 
\tag{46}\] We now begin the calculations for the PIDE by making the change of variables \(t:=u-s\): \[\Phi(x,\rho,u)=\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{ \mathbb{R}}\mathcal{T}_{u-t}\Phi(x^{\prime}+z,\rho,u)p(x^{\prime},x,u-t)dF(z) dx^{\prime}dt+e^{-\lambda u}Q(x,\rho,u).\] By the Leibniz rule, and substituting in the definition of the operator \(\mathcal{T}_{s}\), we then have: \[\frac{\partial\Phi}{\partial u}(x,\rho,u)=\lambda\int_{0}^{\infty }\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{0}^{\rho},u)]p(x^{\prime},x,0)dF(z)dx^{\prime}\] \[+\int_{0}^{u}-\lambda^{2}e^{-\lambda(u-t)}\int_{0}^{\infty}\int_ {\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]p(x^{\prime},x,u-t )dF(z)dx^{\prime}dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{ \mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]\frac{\partial p}{ \partial u}(x^{\prime},x,u-t)dF(z)dx^{\prime}dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{ \mathbb{R}}\frac{\partial}{\partial u}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{ \rho},t)]p(x^{\prime},x,u-t)dF(z)dx^{\prime}dt\] \[-\lambda e^{-\lambda u}Q(x,\rho,u)+e^{-\lambda u}\frac{\partial Q }{\partial u}(x,\rho,u). 
\tag{47}\] The first term in the expression above can be written as: \[\lambda\int_{0}^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,\rho,u)]\delta_{x}(x^{\prime})dF(z)dx^{\prime}=\lambda\int_{\mathbb{R}}\Phi(x+z,\rho,u)dF(z).\] On the other hand, using the integral equation for \(\Phi(x,\rho,u)\), the second term can be written as: \[-\lambda\Phi(x,\rho,u)+\lambda e^{-\lambda u}Q(x,\rho,u).\] Combining, we obtain: \[\frac{\partial\Phi}{\partial u}(x,\rho,u)=\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]\frac{\partial p}{\partial u}(x^{\prime},x,u-t)dF(z)dx^{\prime}dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{\mathbb{R}}\frac{\partial}{\partial u}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]p(x^{\prime},x,u-t)dF(z)dx^{\prime}dt\] \[+\lambda\int_{\mathbb{R}}\Phi(x+z,\rho,u)dF(z)-\lambda\Phi(x,\rho,u)+e^{-\lambda u}\frac{\partial Q}{\partial u}(x,\rho,u). \tag{48}\] We now consider the generator of \(\Phi(x,\rho,u)\). It is straightforward to separate the components of \(\mathcal{L}_{1}\Phi(x,\rho,u)\), since only the transition density \(p(x^{\prime},x,u-t)\) depends on \(x\) and only \(\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]\) depends on the regime. 
We then have: \[\mathcal{L}_{1}\Phi(x,\rho,u)\equiv k_{\rho}(\theta_{\rho}-x)\frac{\partial\Phi}{\partial x}(x,\rho,u)+\frac{1}{2}\sigma_{\rho}^{2}\frac{\partial^{2}\Phi}{\partial x^{2}}(x,\rho,u)+\sum_{j\neq\rho}q_{\rho j}\Big{(}\Phi(x,j,u)-\Phi(x,\rho,u)\Big{)}\] \[=e^{-\lambda u}\mathcal{L}_{1}Q(x,\rho,u)+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]\mathcal{L}p(x^{\prime},x,u-t)dF(z)dx^{\prime}dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{\mathbb{R}}\sum_{j\neq\rho}q_{\rho j}\Big{(}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{j},t)]-\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]\Big{)}p(x^{\prime},x,u-t)dF(z)dx^{\prime}dt\] \[=e^{-\lambda u}\frac{\partial Q}{\partial u}(x,\rho,u)+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]\frac{\partial p}{\partial u}(x^{\prime},x,u-t)dF(z)dx^{\prime}dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{\mathbb{R}}\frac{\partial}{\partial u}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},t)]p(x^{\prime},x,u-t)dF(z)dx^{\prime}dt, \tag{49}\] where we have used the fact that \(p(x^{\prime},x,u)\) and \(Q(x,\rho,u)\) satisfy (43) and (44), respectively. Using (48), we obtain: \[\mathcal{L}_{1}\Phi(x,\rho,u)=\frac{\partial\Phi}{\partial u}(x,\rho,u)-\lambda\Big{(}\int_{\mathbb{R}}\Phi(x+z,\rho,u)dF(z)-\Phi(x,\rho,u)\Big{)},\] which, upon rearranging, can be written as: \[\frac{\partial\Phi}{\partial u}(x,\rho,u)=k_{\rho}(\theta_{\rho}-x)\frac{\partial\Phi}{\partial x}(x,\rho,u)+\frac{1}{2}\sigma_{\rho}^{2}\frac{\partial^{2}\Phi}{\partial x^{2}}(x,\rho,u)\] \[\qquad+\sum_{j\neq\rho}q_{\rho j}\Big{(}\Phi(x,j,u)-\Phi(x,\rho,u)\Big{)}+\int_{\mathbb{R}}\Big{(}\Phi(x+z,\rho,u)-\Phi(x,\rho,u)\Big{)}\nu(dz). \tag{50}\] \((ii)\) The proof follows as above. 
Under model (6), for \(p(\cdot,x,u)\) we now have: \[\frac{\partial p}{\partial u}(\cdot,x,u)=k(\theta-x)\frac{\partial p}{\partial x}(\cdot,x,u)+\frac{1}{2}y\frac{\partial^{2}p}{\partial x^{2}}(\cdot,x,u). \tag{51}\] In this case, we also have to account for the transition density of the underlying volatility process, \(q(\cdot,y,u)\), which satisfies: \[\frac{\partial q}{\partial u}(\cdot,y,u)=\kappa(\mu-y)\frac{\partial q}{\partial y}(\cdot,y,u)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}q}{\partial y^{2}}(\cdot,y,u), \tag{52}\] and recall that for \(Q(x,y,u)\) we have: \[\frac{\partial Q}{\partial u}(x,y,u)=\mathcal{L}_{2}Q(x,y,u), \tag{53}\] with the generator under the stochastic volatility model \(\mathcal{L}_{2}\) given by: \[\mathcal{L}_{2}f(x,y,t):=k(\theta-x)\frac{\partial f}{\partial x}(x,y,t)+\kappa(\mu-y)\frac{\partial f}{\partial y}(x,y,t)+\frac{1}{2}y\frac{\partial^{2}f}{\partial x^{2}}(x,y,t)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}f}{\partial y^{2}}(x,y,t). \tag{54}\] Differentiating (28) with respect to \(x\) and \(u\), and comparing \(\mathcal{L}_{2}\Phi(x,y,u)\) with \(\frac{\partial\Phi}{\partial u}(x,y,u)\), we obtain the required PIDE, using the same steps as in \((i)\). We can now present the main result for the generalized model. Even though the steps are similar to the cases above, the dependence on both the regime and volatility processes creates additional terms and it is therefore worth outlining the proof in detail. 
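Before stating the main result, it may help to make the regime coupling concrete: the probabilities \(\pi(R_{j},\rho,s)\) appearing in the transfer operator \(\mathcal{T}_{s}\) are the transition probabilities of the switching chain, given by the matrix exponential of its generator, \(\pi(R_{j},\rho^{i},s)=[e^{Qs}]_{ij}\). A minimal sketch, assuming an illustrative 3-state (Stage 1/2/3) generator rather than calibrated intensities:

```python
import numpy as np

# Illustrative generator for a 3-state switching process; rows sum to zero
# and off-diagonal entries are the transition intensities q_{rho j}.
Q = np.array([[-0.30,  0.25,  0.05],
              [ 0.20, -0.50,  0.30],
              [ 0.05,  0.15, -0.20]])

def transition_matrix(Q, s, n_terms=40):
    """P(s) = expm(Q s) via a truncated power series; row i collects
    pi(R_j, rho^i, s) over j. Adequate here since ||Q s|| is small."""
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, n_terms):
        term = term @ (Q * s) / k
        P = P + term
    return P

P1 = transition_matrix(Q, 1.0)  # each row is a distribution over the stages
```

With `P1` in hand, \(\mathcal{T}_{s}\Phi(x,\rho^{i},\cdot)\) is simply the \(i\)-th row of `P1` applied as weights to \((\Phi(x,\rho^{1},\cdot),\Phi(x,\rho^{2},\cdot),\Phi(x,\rho^{3},\cdot))\).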
**Theorem 5.6**.: _Under the generalized asset process model (8) the survival probability \(\Phi(x,\rho,y,u)\) satisfies the PIDE:_ \[\frac{\partial\Phi}{\partial u}(x,\rho,y,u)\] \[=k_{\rho}(\theta_{\rho}-x)\frac{\partial\Phi}{\partial x}(x,\rho,y,u)+\kappa(\mu-y)\frac{\partial\Phi}{\partial y}(x,\rho,y,u)+\frac{1}{2}\sigma_{\rho}^{2}y\frac{\partial^{2}\Phi}{\partial x^{2}}(x,\rho,y,u)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}\Phi}{\partial y^{2}}(x,\rho,y,u)\] \[+\sum_{j\neq\rho}q_{\rho j}\Big{(}\Phi(x,j,y,u)-\Phi(x,\rho,y,u)\Big{)}+\int_{\mathbb{R}}\Big{(}\Phi(x+z,\rho,y,u)-\Phi(x,\rho,y,u)\Big{)}\nu(dz), \tag{55}\] _for \((x,\rho,y,u)\in\mathcal{D}\times\mathcal{R}\times\mathcal{V}\times[0,T]\), with initial and boundary conditions:_ \[\Phi(x,\rho,y,0)=\mathbbm{1}_{\{x>0\}},\ \ (x,\rho,y)\in\mathcal{D}\times\mathcal{R}\times\mathcal{V},\] \[\Phi(0,\rho,y,u)=0,\ \ (\rho,y,u)\in\mathcal{R}\times\mathcal{V}\times[0,T],\] \[\Phi(x,\rho,y,u)\to 1,\text{ as }x\rightarrow\infty,\ \ (\rho,y,u)\in\mathcal{R}\times\mathcal{V}\times[0,T],\] \[\frac{\partial\Phi}{\partial y}(x,\rho,y,u)=0,\text{ as }y\rightarrow\infty\ \ (x,\rho,u)\in\mathcal{D}\times\mathcal{R}\times[0,T]. \tag{56}\] Proof.: For the transition density \(p(\cdot,x,u)\) we have: \[\frac{\partial p}{\partial u}(\cdot,x,u)=k_{\rho}(\theta_{\rho}-x)\frac{\partial p}{\partial x}(\cdot,x,u)+\frac{1}{2}\sigma_{\rho}^{2}y\frac{\partial^{2}p}{\partial x^{2}}(\cdot,x,u), \tag{57}\] and for \(q(\cdot,y,u)\) and \(g(\rho,u):=\mathbb{E}[\Phi(\cdot,R_{u}^{\rho},\cdot,\cdot)]\) we know that (52) and (46) hold, respectively. 
In this case, for \(Q(x,\rho,y,u)\) we have: \[\frac{\partial Q}{\partial u}(x,\rho,y,u)=\mathcal{L}_{3}Q(x,\rho,y,u), \tag{58}\] with the generator under the generalized model, \(\mathcal{L}_{3}\), given by: \[\mathcal{L}_{3}f(x,\rho,y,u):=k_{\rho}(\theta_{\rho}-x)\frac{ \partial f}{\partial x}(x,\rho,y,t)+\kappa(\mu-y)\frac{\partial f}{\partial y }(x,\rho,y,t)\] \[+\frac{1}{2}\sigma_{\rho}^{2}y\frac{\partial^{2}f}{\partial x^{2} }(x,\rho,y,t)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}f}{\partial y^{2}}(x,\rho, y,t)+\sum_{j\neq\rho}q_{\rho j}\Big{(}f(x,j,y,t)-f(x,\rho,y,t)\Big{)}, \tag{59}\] As in the results above it will be useful to work with the definition of \(\mathcal{T}_{s}\), i.e., version (26) of the integral equation. With the change of variables \(t=u-s\) and applying the Leibniz rule we now get: \[\frac{\partial\Phi}{\partial u} (x,\rho,u)=\lambda\int_{0}^{\infty}\int_{0}^{\infty}\int_{\mathbb{ R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{0}^{\rho},\nu,u)]q(\nu,y,0)p(x^{\prime},x,0)dF(z)dx^{ \prime}d\nu\] \[+\int_{0}^{u}-\lambda^{2}e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{0 }^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},\nu,t)] q(\nu,y,u-t)p(x^{\prime},x,u-t)dF(z)dx^{\prime}d\nu dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{0}^{ \infty}\int_{\mathbb{R}}\frac{\partial}{\partial u}\mathbb{E}[\Phi(x^{\prime }+z,R_{u-t}^{\rho},\nu,t)]q(\nu,y,u-t)p(x^{\prime},x,u-t)dF(z)dx^{\prime}d\nu dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{0}^{ \infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},\nu,t)] \frac{\partial q}{\partial u}(\nu,y,u-t)p(x^{\prime},x,u-t)dF(z)dx^{\prime}d \nu dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{0}^{ \infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t}^{\rho},\nu,t)]q( \nu,y,u-t)\frac{\partial p}{\partial u}(x^{\prime},x,u-t)dF(z)dx^{\prime}d \nu dt\] \[-\lambda e^{-\lambda u}Q(x,\rho,y,u)+e^{-\lambda u}\frac{ \partial Q}{\partial 
u}(x,\rho,y,u). \tag{60}\] From the first term we have: \[\lambda\int_{0}^{\infty}\int_{0}^{\infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{ \prime}+z,\rho,\nu,u)]\delta_{y}(\nu)\delta_{x}(x^{\prime})dF(z)dx^{\prime}d \nu=\lambda\int_{\mathbb{R}}\Phi(x+z,\rho,y,u)dF(z),\] whereas, from (29), the second term can be written as \(-\lambda\Phi(x,\rho,y,u)+\lambda e^{-\lambda u}Q(x,\rho,y,u)\), and hence, we have: \[\frac{\partial\Phi}{\partial u}(x,\rho,y,u)=\lambda\int_{\mathbb{ R}}\Phi(x+z,\rho,y,u)dF(z)-\lambda\Phi(x,\rho,y,u)+e^{-\lambda u}\frac{ \partial Q}{\partial u}(x,\rho,y,u)\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{0}^{ \infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t},\nu,t)]\frac{ \partial q}{\partial u}(\nu,y,u-t)p(x^{\prime},x,u-t)dF(z)dx^{\prime}d\nu dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{0}^{ \infty}\int_{\mathbb{R}}\mathbb{E}[\Phi(x^{\prime}+z,R_{u-t},\nu,t)]q(\nu,y,u -t)\frac{\partial p}{\partial u}(x^{\prime},x,u-t)dF(z)dx^{\prime}d\nu dt\] \[+\int_{0}^{u}\lambda e^{-\lambda(u-t)}\int_{0}^{\infty}\int_{0}^{ \infty}\int_{\mathbb{R}}\frac{\partial}{\partial u}\mathbb{E}[\Phi(x^{\prime }+z,R_{u-t},\nu,t)]q(\nu,y,u-t)p(x^{\prime},x,u-t)dF(z)dx^{\prime}d\nu dt. \tag{61}\] Taking the derivatives of \(\Phi(x,\rho,y,u)\) with respect to \(x\) and \(y\) is straightforward. Therefore, using (61) with (57), (52) and (46), we obtain: \[\mathcal{L}_{3}\Phi(x,\rho,y,u)=\frac{\partial\Phi}{\partial u}(x,\rho,y,u)- \lambda\Big{(}\int_{\mathbb{R}}\Phi(x+z,\rho,y,u)dF(z)-\Phi(x,\rho,y,u)\Big{)},\] and the expected PIDE follows. It is easy to see that the same steps can be used to obtain an equivalent PIDE for the simple PD function (corresponding to a regime switching model with one regime), whose integral equation representation is given by (27). The PIDE is shown below; we omit the proof for brevity, as it follows directly as a special case of the results above. 
**Corollary 5.7**.: _Under model (3) the survival probability \(\Phi(x,u)\) satisfies the PIDE:_ \[\frac{\partial\Phi}{\partial u}=k(\theta-x)\frac{\partial\Phi}{\partial x}+\frac{1}{2}\sigma^{2}\frac{\partial^{2}\Phi}{\partial x^{2}}+\int_{\mathbb{R}}\Big{(}\Phi(x+z,u)-\Phi(x,u)\Big{)}\nu(dz),\ \ (x,u)\in\mathcal{D}\times[0,T], \tag{62}\] _with initial and boundary conditions:_ \[\Phi(x,0)=\mathbb{1}_{\{x>0\}},\ \ x\in\mathcal{D},\] \[\Phi(0,u)=0,\ \ u\in[0,T],\] \[\Phi(x,u)\to 1\text{ as }x\rightarrow\infty,\ \ u\in[0,T]. \tag{63}\] **Remark 5.8**.: It is worth noting that the approach we have developed results in PIDEs consistent with the ruin probability examined in [42], where the asset process is given by: \[X_{t}(x)=x+\int_{0}^{t}r(X_{s}(x)+c)ds+\sum_{i=1}^{N_{t}}(-Y_{i}),\] with jump distribution \(f(y)\) on \(\mathbb{R}_{+}\) and jump intensity \(\lambda\). It is shown that the survival probability \(\phi(x,t)=\mathbb{P}\Big{(}\inf_{r\leq t}X_{r}(x)>0\Big{)}\) satisfies the PIDE: \[\frac{\partial\phi}{\partial t}-(rx+c)\frac{\partial\phi}{\partial x}+\lambda\Big{(}\phi(x,t)-\int_{0}^{x}\phi(x-y,t)dF(y)\Big{)}=0.\] Noting that \(\phi(x,t)=0\) for \(x<0\), this can be rewritten as: \[\frac{\partial\phi}{\partial t}-(rx+c)\frac{\partial\phi}{\partial x}-\lambda\Big{(}\int_{\mathbb{R}}\big{(}\phi\big{(}x+(-y),t\big{)}-\phi(x,t)\big{)}dF(y)\Big{)}=0,\] and therefore, from the definition of the Levy measure \(\nu(\cdot)\), we obtain: \[\frac{\partial\phi}{\partial t}=(rx+c)\frac{\partial\phi}{\partial x}+\int_{\mathbb{R}}\Big{(}\phi\big{(}x+(-y),t\big{)}-\phi(x,t)\Big{)}\nu(dy).\] We can see that this PIDE is equivalent to that in (62), given that the diffusion term is zero and therefore the second derivative with respect to \(x\) is zero. 
This confirms that our results are consistent with existing definitions and models that have been studied in the literature, with the important difference that we take advantage of the integral equations and corresponding continuity results, which allow us to consider more complex models and to obtain appropriate solutions in all cases.

### Regularity of the one-dimensional PD function

Finally, we study the case of the one dimensional model to show that the resulting survival and PD functions enjoy the properties required to consider equation (62) in the strict sense. Specifically, this requires that \(\Phi(x,u)\) is (at least) twice differentiable with respect to the spatial variable and once differentiable with respect to the temporal variable. These results are based on the corresponding calculations for the general family of second order parabolic PIDEs, as studied in [27]. A reminder of the main results that we will use is given in Appendix F. We first rewrite (62) in a concise form for the calculations that follow. Note that we adopt the notation of the appropriate spaces used in [27], which is also used in Appendix F. It will be useful to set \(\mathcal{D}:=(0,\infty)\). This will allow us to consider a smooth initial condition, rather than the Heaviside function as in the formulation of the PIDE in Corollary 5.7, since \(\Phi(x,u)=0\), for \(x\leq 0\), by definition. Then, for a fixed time until maturity \(T>0\), we write: \[\begin{cases}L\Phi(x,u)=I\Phi(x,u)&\text{ for }(x,u)\in Q_{T}:=\mathcal{D}\times[0,T] \\ \Phi(x,0)=1&\text{ for }x\in\mathcal{D}\\ \Phi(x,t)=1_{x>0}&\text{ for }x\in\Sigma_{T}:=\partial\mathcal{D}\times[0,T], \end{cases} \tag{64}\] where we define the operator \(L\) by \(L\Phi(x,u):=\frac{\partial\Phi}{\partial u}-\mathcal{L}\Phi(x,u)\) and the integral operator \(I\) by \(I\Phi(x,u):=\int_{\mathbb{R}}\Big{(}\Phi(x+z,u)-\Phi(x,u)\Big{)}\nu(dz)\), and \(\partial\mathcal{D}\) is the standard notation for the boundary of domain \(\mathcal{D}\).
Our aim is to show that the above PIDE has a solution satisfying appropriate regularity conditions. To this end, we first define some relevant function spaces that will be required for the subsequent regularity results. **Definition 5.9**.: Let \(\Omega\subset\mathbb{R}^{n}\) be an open set, with closure \(\bar{\Omega}\). Furthermore, consider a fixed time horizon \(T>0\) and define \(Q_{T}=\Omega\times[0,T]\), with closure \(\bar{Q}_{T}\). We then define the following spaces, for \(0<\alpha<1\): * \(C^{0}(\bar{\Omega})\) is the Banach space of bounded continuous functions in \(\bar{\Omega}\), with the natural supremum norm: \[\|\cdot\|_{C^{0}(\bar{\Omega})}\equiv\|\cdot\|_{0,\bar{\Omega}}=\sup_{\Omega} |\cdot|\] * \(C^{2,1}\left(\bar{Q}_{T}\right)\) is the Banach space of functions \(f(x,t)\) belonging to \(C^{0}\left(\bar{Q}_{T}\right)\) together with their derivatives \(\frac{\partial f}{\partial x},\frac{\partial^{2}f}{\partial x^{2}},\frac{ \partial f}{\partial t}\) in \(\bar{Q}_{T}\), with the natural norm. * \(C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)\) is the Banach space of functions \(f\) in \(C^{0}\left(\bar{Q}_{T}\right)\) which are Hölder continuous in \(\bar{Q}_{T}\) with exponent \(\alpha\) in \(x\) and \(\frac{\alpha}{2}\) in \(t\), i.e.
having a finite value for the seminorm \[\langle f\rangle_{\bar{Q}_{T}}^{(\alpha)}\equiv\langle f\rangle_{x,\bar{Q}_{T} }^{(\alpha)}+\langle f\rangle_{t,\bar{Q}_{T}}^{\left(\frac{\alpha}{2}\right)}\] where \[\langle f\rangle_{x,\bar{Q}_{T}}^{(\alpha)}=\inf\left\{C\geq 0:\left|f(x,t)-f \left(x^{\prime},t\right)\right|\leq C\left|x-x^{\prime}\right|^{\alpha}, \forall x,x^{\prime},t\right\}\] \[\langle f\rangle_{t,\bar{Q}_{T}}^{\left(\frac{\alpha}{2}\right)}=\inf \left\{C\geq 0:\left|f(x,t)-f\left(x,t^{\prime}\right)\right|\leq C\left|t-t^{ \prime}\right|^{\frac{\alpha}{2}},\forall x,t,t^{\prime}\right\}\] The quantity \[\|f\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}\equiv\|f\|_{\alpha,\bar{Q}_{T}}=\|f\|_{0,\bar{Q}_{T}}+\langle f\rangle_{\bar{Q}_{T}}^{(\alpha)}\] defines a norm. * \(C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) is the Banach space of functions \(f(x,t)\) in \(C^{2,1}\left(\bar{Q}_{T}\right)\) having a finite value for the seminorm: \[\langle f\rangle_{\bar{Q}_{T}}^{(2+\alpha)}=\langle\partial_{t}f\rangle_{ \bar{Q}_{T}}^{(\alpha)}+\sum_{i,j=1}^{d}\,\langle\partial_{ij}f\rangle_{\bar {Q}_{T}}^{(\alpha)}+\sum_{i=1}^{d}\,\langle\partial_{i}f\rangle_{t,\bar{Q}_{T} }^{\frac{1+\alpha}{2}}\,.\] Then, the quantity \[\|f\|_{C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)}\equiv\|f\|_{2+ \alpha,\bar{Q}_{T}}=\sum_{2r+s\leq 2}\|\partial_{t}^{r}\partial_{x}^{s}f\|_{0,\bar{Q}_{T} }+\langle f\rangle_{\bar{Q}_{T}}^{(2+\alpha)}\] defines a norm. **Proposition 5.10**.: _Consider a fixed time horizon \(T>0\) and the space \(Q_{T}:=\mathcal{D}\times[0,T]\), with closure \(\bar{Q}_{T}\). Then, PIDE (64) has a solution \(\Phi(x,u)\), such that \(\Phi(x,u)\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\)._ Proof.: The proof relies on an appropriate fixed point argument. To this end, we first define the mapping \(\mathcal{T}v=\Phi\), such that \(\Phi\) is the solution of \(L\Phi(x,u)=Iv\).
Note that from Theorem F.1, there exists a unique \(\Phi\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) solving the local counterpart of (64), where the right-hand side of the PDE is zero. Consider now a function \(v\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\). It follows that \(Iv\in C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)\) and we also have that: \[\|Iv\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}\leq\varepsilon\| \nabla v\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}+C(\varepsilon )\|v\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)},\] from Theorem F.2. Furthermore, by definition of the mapping \(\mathcal{T}\) we have that if \(v\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) then \(\Phi=\mathcal{T}v\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\). Hence, \(\mathcal{T}\) is a map from \(C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) into itself and is also single-valued, by the uniqueness of the solution of the PDE. We will now show that \(\mathcal{T}\) is also a contraction in order to then apply Banach's fixed point argument. To this end, consider \(v,v^{\prime}\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\), with the corresponding mappings \(\mathcal{T}v,\mathcal{T}v^{\prime}\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\). By the definition of the mapping \(\mathcal{T}\) we have that: \[\begin{cases}L\Phi(x,u)=Iv&\text{ for }(x,u)\in Q_{T}:=\mathcal{D}\times[0,T]\\ L\Phi^{\prime}(x,u)=Iv^{\prime}&\text{ for }(x,u)\in Q_{T}:=\mathcal{D} \times[0,T],\end{cases} \tag{65}\] and therefore \(L\hat{\Phi}(x,u)=I\hat{v}\), with \(\hat{\Phi}:=\Phi-\Phi^{\prime}\) and \(\hat{v}\) defined analogously.
Hence: \[\|\hat{\Phi}\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T} \right)}= \|\mathcal{T}v-\mathcal{T}v^{\prime}\|_{C^{\alpha,\frac{\alpha}{2 }}\left(\bar{Q}_{T}\right)}\leq C\|I\hat{v}\|_{C^{\alpha,\frac{\alpha}{2}} \left(\bar{Q}_{T}\right)}\] \[\leq \varepsilon\|\nabla\hat{v}\|_{C^{\alpha,\frac{\alpha}{2}}\left( \bar{Q}_{T}\right)}+C(\varepsilon)\|\hat{v}\|_{C^{\alpha,\frac{\alpha}{2}} \left(\bar{Q}_{T}\right)}, \tag{66}\] where the last inequality follows from Theorem F.2. To show that \(\mathcal{T}\) is indeed a contraction, we first note that all terms in the final expression above are bounded by \(\|\hat{v}\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}\). We need \(\|\hat{\Phi}\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}\leq k\| \hat{v}\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}\), with \(k<1\). For this, notice that the first term in the final expression above can be made arbitrarily small, whereas the second depends on the constant \(C(\varepsilon)\), which in turn depends on the time horizon \(T\). We can therefore make \(C(\varepsilon)<1\) if we consider a small enough horizon, i.e., \(T=\delta\), yielding a solution \(\Phi(x,u)\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{\delta}\right)\). We can then apply the same approach to find an appropriate solution in \(\Omega\times[\delta,2\delta]\), continuing until the entire interval \([0,T]\) is covered. **Remark 5.11**.: Based on the results given by [27], the result above can be extended to higher dimensions with \(x\in\mathbb{R}^{d}\) and an analogous parabolic operator \(L\). Therefore, Proposition 5.10 holds for the stochastic volatility model, as well. Furthermore, in the case of the regime switching model we obtain a simple coupling of parabolic PIDEs (identical to that for the one dimensional model), and it is therefore expected that the regularity result holds for the PD function \(\Phi(x,\cdot,u)\).
With this result we have shown that we can go beyond the notion of viscosity solutions in the case of the PD functions and obtain solutions that satisfy all required regularity properties. In the next section we will consider numerical solutions to the PIDEs, some of which we can now interpret as strong solutions, under the conditions mentioned above.

## Numerical estimation of PD functions

In this section, we will develop numerical schemes to solve the PIDEs obtained in Section 5 and use the resulting PD values in specific examples of the aforementioned IFRS 9 modelling tasks. We choose to focus on the numerical solutions of the PIDEs, rather than the corresponding integral equations, as we will be able to employ standard finite difference schemes to estimate the solutions, as detailed below. For clarity and illustrative purposes, we will first consider the one dimensional OU model given by (3) (recall that we have shown that the PD function \(\Phi(x,u)\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) is a strong solution to the corresponding PIDE). From the resulting finite difference scheme we will then be able to build the solutions for the regime switching and stochastic volatility models. We will focus on these models for our numerical solutions, as they cover the applications in credit risk that we consider in this paper, noting that the case of the generalized model (8) can be developed by combining the methods that follow. However, due to the additional terms, the corresponding numerical scheme suffers from the well-known "curse of dimensionality" problem. Our approach in this Section follows the methodology developed in [21] and [25]. We extend the numerical schemes by considering variable coefficients and further develop the corresponding methods for the regime switching and stochastic volatility models.
Due to the additional variables, we will see that these require careful handling of the derivative discretizations to ensure the required stability and monotonicity properties hold. As mentioned, we will start with the one dimensional model, which produces the PIDE given in (62), the finite difference scheme for which is similar to that developed in [21]. However, this first step will allow us to explicitly account for the variable drift term and detail its effect on the finite difference scheme, and it is therefore worth presenting the analytical calculations.

### One dimensional model

Before implementing the numerical methods, it is important to discuss the spatial and temporal domains over which the schemes will be solved. We consider a spatial domain \(x\in\mathcal{D}\subset\mathbb{R}\). Therefore, for the construction of the numerical scheme one can consider the interval \(x\in[0,S]\) with non-trivial solutions \(\Phi(x,u)\in(0,1)\) (in practice, the value of \(S\) depends on the parameters of the underlying processes and its approximation may require Monte Carlo simulations). However, given that the PIDEs contain the non-local integral terms, and the OU process is defined on \(\mathbb{R}\), we will define \(\mathcal{D}=[-B,B]\), for some constant \(B>S\) and extend the boundary conditions \(\Phi(x,\cdot)=0\) for \(x\in[-B,0)\) and \(\Phi(x,\cdot)=1\) for \(x\in(S,B]\). This way, we will be able to calculate the integral term, as detailed below. For the temporal domain, we simply consider \(t\in[0,1]\) (we rewrite \(u\) as \(t\) as there is no risk of confusion in what follows). Given the added complexity from the non-local term, we give a detailed explanation of each of the three aforementioned schemes in this section, along with examples of specific asset value processes and, subsequently, examples of the modelling tasks pertaining to credit risk under the IFRS 9 framework we previously discussed.
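As an illustration of the domain construction described above, the following minimal Python sketch (the values of \(B\), \(S\) and \(\Delta x\) are hypothetical, not taken from the paper) builds the grid on \([-B,B]\), counts the points in \([-B,0)\), \([0,S]\) and \((S,B]\), and encodes the extended boundary values together with the initial condition \(\mathbb{1}_{\{x>0\}}\):

```python
import numpy as np

# Hypothetical parameter choices for illustration only.
B, S, dx = 4.0, 2.0, 0.05
n_half = int(round(B / dx))
x = dx * np.arange(-n_half, n_half + 1)   # grid on [-B, B], exact zero at the centre

# Counts of grid points in [-B, 0), [0, S] and (S, B], so that N = L + D + U.
L = int(np.sum(x < 0))
D = int(np.sum((x >= 0) & (x <= S)))
U = int(np.sum(x > S))

# Extended boundary values Phi = 0 on [-B, 0) and Phi = 1 on (S, B],
# consistent with the initial condition 1_{x > 0}.
phi0 = (x > 0).astype(float)
```

The exact grid (built from integer multiples of \(\Delta x\)) avoids floating-point drift at the origin, which matters because the indicator \(\mathbb{1}_{\{x>0\}}\) is evaluated there.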
We write (62) as follows: \[\frac{\partial\Phi}{\partial t}=k(\theta-x)\frac{\partial\Phi}{ \partial x}+\frac{1}{2}\sigma^{2}\frac{\partial^{2}\Phi}{\partial x^{2}}+\int _{\mathbb{R}}\Phi(x+z,t)\nu(dz)-\Phi(x,t)\int_{\mathbb{R}}\nu(dz). \tag{67}\] Note that in the discretized version of this PIDE we will also have to approximate the integral with respect to the Levy measure. We employ an implicit scheme leading to a backward time centered space (BTCS) method, and handle the non-local term explicitly, as in [21]. Consider space and time grids, with step sizes \(\Delta x\) and \(\Delta t\), and with \(N\) and \(T\) total points, respectively. Therefore, we have that \(\Phi_{p}^{q}\) represents the survival probability at the grid point \(t=t_{0}+q\Delta t\), \(x=x_{0}+p\Delta x\), i.e. \(\Phi_{p}^{q}=\Phi(x_{0}+p\Delta x,t_{0}+q\Delta t)\). Furthermore, let \(L\), \(D\) and \(U\) be number of grid points in the intervals \([-B,0)\), \([0,S]\) and \((S,B]\), respectively, so that \(N=L+D+U\). For the integral terms, we first must approximate the jump density by considering a ball around the \(x\)-value of the grid: \[\bar{f}_{i}=\frac{1}{\Delta x}\int_{x_{i}-\frac{\Delta x}{2}}^{x_{i}+\frac{ \Delta x}{2}}f(x)dx. \tag{68}\] Then, noting that \(\nu(dz)=\lambda F(dz)\), we can approximate the first and second integral terms in (67) using: \[\mathcal{I}\Phi_{p}^{q}:=\sum_{i=-J/2}^{J/2}\Phi_{p+i}^{q}\bar{f }_{i}\Delta z, \tag{69}\] \[\hat{I}=\sum_{i=-J/2}^{J/2}\bar{f}_{i}\Delta z, \tag{70}\] for some \(J\in\mathbb{Z}_{+}\) large enough to ensure that \(\hat{I}\) is sufficiently close to \(1\). In the above, we have defined the operator \(\mathcal{I}:\mathcal{C}\rightarrow\mathcal{C}\), where \(\mathcal{C}\) is the Banach space as defined in Proposition 4.9. We will refer to this as the integral operator. For simplicity, we will be taking \(\Delta x=\Delta z\) in the calculations and numerical results below. 
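The cell averaging (68) and the discrete operators (69)–(70) can be sketched as follows; we assume a hypothetical Gaussian jump distribution \(F=N(0,0.25^{2})\) purely for illustration, and compute the cell averages from its distribution function:

```python
import numpy as np
from math import erf, sqrt

dz, J = 0.05, 200
s_J = 0.25                                               # hypothetical jump std. dev.
F = lambda t: 0.5 * (1.0 + erf(t / (s_J * sqrt(2.0))))   # jump distribution function

i = np.arange(-J // 2, J // 2 + 1)
z = i * dz
# (68): f_bar_i = (1/dz) * integral of f over [z_i - dz/2, z_i + dz/2], via F.
f_bar = np.array([(F(zi + dz / 2) - F(zi - dz / 2)) / dz for zi in z])

I_hat = f_bar.sum() * dz          # (70): close to 1 for J large enough

# (69): (I Phi)_p = sum_i Phi_{p+i} f_bar_i dz, a discrete correlation with dx = dz.
phi = np.ones(400)                # e.g. a constant test vector
I_phi = np.correlate(phi, f_bar, mode="valid") * dz
```

For a constant vector the discrete operator returns \(\hat{I}\) at every point, which gives a quick consistency check on the truncation level \(J\).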
The resulting implicit scheme for PIDE (67) is given by: \[\frac{\Phi_{p}^{q+1}-\Phi_{p}^{q}}{\Delta t}=k(\theta-x_{p})\frac{\Phi_{p+1}^{ q+1}-\Phi_{p-1}^{q+1}}{2\Delta x}+\frac{1}{2}\sigma^{2}\frac{\Phi_{p+1}^{q+1}-2 \Phi_{p}^{q+1}+\Phi_{p-1}^{q+1}}{\Delta x^{2}}+\lambda\mathcal{I}\Phi_{p}^{q} -\lambda\hat{I}\Phi_{p}^{q}, \tag{71}\] which, upon rearranging, can be written as: \[-\Phi_{p-1}^{q+1}c_{p}\Delta t+\Phi_{p}^{q+1}\big{(}1+a_{p}\Delta t\big{)}- \Phi_{p+1}^{q+1}b_{p}\Delta t=(1-\lambda\Delta t\hat{I})\Phi_{p}^{q}+\lambda \Delta t\mathcal{I}\Phi_{p}^{q}, \tag{72}\] for \(q=0,1,\ldots,T-1\) and with coefficients \(a_{p},b_{p}\) and \(c_{p}\), for \(p=0,1,\ldots,N-1\), given by: \[c_{p}=\frac{\sigma^{2}}{2\Delta x^{2}}-\frac{k(\theta-x_{p})}{2 \Delta x}\] \[a_{p}=\frac{\sigma^{2}}{\Delta x^{2}}\] \[b_{p}=\frac{\sigma^{2}}{2\Delta x^{2}}+\frac{k(\theta-x_{p})}{2 \Delta x} \tag{73}\] Hence, system (71) can be written in the matrix form below: \[M\Phi^{q+1}=\Lambda\Phi^{q}+b,\text{ for }q=0,1,\ldots,T-1,\] where \(\Phi^{q},b\in\mathbb{R}^{N}\) and \(M\in\mathbb{R}^{N\times N}\) are given by: \[\Phi^{q}=\left(\begin{array}{c}\Phi_{0}^{q}\\ \Phi_{1}^{q}\\ \vdots\\ \Phi_{N-1}^{q}\end{array}\right),\ \ b=\left(\begin{array}{c}0\\ \vdots\\ 0\\ b_{U}\end{array}\right),\ \ M=\left[\begin{array}{ccc}I_{L}&0_{D}&0_{U}\\ 0_{L}&\mathcal{M}&0_{U}\\ 0_{L}&0_{D}&I_{U}\end{array}\right], \tag{74}\] with \(b_{U}=(1,\cdots,1)^{T}\in\mathbb{R}^{U}\), \(I_{n},0_{n}\) being the \(n\times n\)-dimensional identity and zero matrices, respectively, \(\mathcal{M}\in\mathbb{R}^{D\times D}\) given by: \[\mathcal{M}=\left(\begin{array}{cccccccc}1&0&0&0&\cdots&0&0&0\\ -c_{1}\Delta t&1+a_{1}\Delta t&-b_{1}\Delta t&0&\cdots&0&0&\\ 0&-c_{2}\Delta t&1+a_{2}\Delta t&-b_{2}\Delta t&\cdots&0&0&0\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots\\ 0&0&0&0&\cdots&-c_{N-1}\Delta t&1+a_{N-1}\Delta t&-b_{N-1}\Delta t\\ 0&0&0&0&\cdots&0&0&1\end{array}\right), \tag{75}\] and
\(\Lambda\in\mathbb{R}^{N\times N}\): \[\Lambda=\left(\begin{array}{cccccccc}\lambda\Delta t\bar{f}_{-1}&\lambda \Delta t\bar{f}_{0}+\hat{F}&\lambda\Delta t\bar{f}_{1}&\cdots&\lambda\Delta t \bar{f}_{J/2}&0&\cdots&0\\ \lambda\Delta t\bar{f}_{-2}&\lambda\Delta t\bar{f}_{-1}&\lambda\Delta t\bar {f}_{0}+\hat{F}&\cdots&\lambda\Delta t\bar{f}_{J/2-1}&\lambda\Delta t\bar{f}_ {J/2}&\cdots&0\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots\\ 0&0&0&\cdots&\lambda\Delta t\bar{f}_{-J/2+1}&\lambda\Delta t\bar{f}_{-J/2}& \cdots&\lambda\Delta t\bar{f}_{J/2}\end{array}\right), \tag{76}\] where \(\hat{F}:=1-\lambda\Delta t\hat{I}\). At each time step we can then calculate \(\Phi^{q+1}=M^{-1}(\Lambda\Phi^{q}+b)\), to obtain the solution at time \(t=q+1\). For the implementation of the numerical scheme we must analyze the necessary properties pertaining to its stability and monotonicity, for which we use the same definitions and approach as in [21]. Specifically, we define these conditions as follows. **Definition 6.1**.: 1. Scheme (71) is stable if and only if, for a bounded initial condition, there exist \(\Delta x\) and \(\Delta t\) such that the solution exists and is bounded, i.e. \(|\Phi^{q}_{p}|\leq C,\) for all \(p,q\) and some \(C>0\). 2. Scheme (71) is monotone, i.e. for two initial conditions \(\Phi^{0}\) and \(\tilde{\Phi}^{0}\): \[\Phi^{0}>\tilde{\Phi}^{0}\Rightarrow\Phi^{q}>\tilde{\Phi}^{q},\] for all \(q\). Note that the comparison of the two vectors is to be understood in the element-by-element sense. This condition is often referred to as the discrete comparison principle. These conditions must hold in order to avoid spurious oscillations in the numerical solutions and nonsensical values. We will show that the numerical scheme for the PIDE we have obtained is conditionally stable and monotone. 
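The full time-stepping procedure can be sketched end-to-end in Python. This is a minimal illustration, not the authors' implementation: the parameters and the Gaussian jump distribution are hypothetical, the boundary rows of \(M\) are kept as identity rows with the extended boundary values imposed directly on the right hand side (a simplification of (74)), and the grid sizes are checked against the stability constraints \(\Delta x\leq\sigma^{2}/(k\theta)\) and \(\Delta t\leq 1/(\lambda\hat{I})\):

```python
import numpy as np
from math import erf, sqrt

# Hypothetical OU-with-jumps parameters (illustrative only).
k, theta, sigma, lam = 0.5, 1.0, 0.4, 0.3
B, S, horizon = 4.0, 2.0, 1.0

dx = 0.05
assert dx <= sigma**2 / (k * theta)            # stability condition on dx
n_half = int(round(B / dx))
x = dx * np.arange(-n_half, n_half + 1)
N = len(x)

# Cell-averaged Gaussian jump density as in (68), with dz = dx.
s_J = 0.25
F = lambda t: 0.5 * (1.0 + erf(t / (s_J * sqrt(2.0))))
half_J = N // 4
f_bar = np.array([(F((i + 0.5) * dx) - F((i - 0.5) * dx)) / dx
                  for i in range(-half_J, half_J + 1)])
I_hat = f_bar.sum() * dx                       # (70)

dt = 0.01
assert dt <= 1.0 / (lam * I_hat)               # stability condition on dt

# Coefficients (73); M has tridiagonal rows on (0, S), identity rows elsewhere.
c = sigma**2 / (2 * dx**2) - k * (theta - x) / (2 * dx)
a = np.full(N, sigma**2 / dx**2)
b = sigma**2 / (2 * dx**2) + k * (theta - x) / (2 * dx)
M = np.eye(N)
idx = np.where((x > 0) & (x < S))[0]
M[idx, idx - 1] = -c[idx] * dt
M[idx, idx] = 1.0 + a[idx] * dt
M[idx, idx + 1] = -b[idx] * dt

phi = (x > 0).astype(float)                    # initial condition 1_{x > 0}
pad = half_J
for _ in range(int(horizon / dt)):
    # Explicit jump term (1 - lam*dt*I_hat)*phi + lam*dt*(I phi), as in (72).
    conv = np.correlate(np.pad(phi, pad, mode="edge"), f_bar, mode="valid") * dx
    rhs = (1.0 - lam * dt * I_hat) * phi + lam * dt * conv
    rhs[x <= 0] = 0.0                          # extended boundary: Phi = 0 below
    rhs[x >= S] = 1.0                          # and Phi = 1 above S
    phi = np.linalg.solve(M, rhs)
```

Because the scheme is monotone with the constants \(0\) and \(1\) as fixed points, the computed survival probabilities remain in \([0,1]\) up to rounding error, which serves as a cheap sanity check of the implementation.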
As we will see from the result below, the conditions are not restrictive and can easily be satisfied when selecting the parameters of the scheme, without significant computational cost. **Proposition 6.2**.: _Scheme (71) is stable and monotone if_ \[\Delta x\leq\frac{\sigma^{2}}{k\theta}\text{ and }\Delta t\leq\frac{1}{\lambda \hat{I}}. \tag{77}\] Proof.: We prove the two results separately, starting with stability. Let \(\Phi^{0}\) be a bounded initial condition, i.e. \(||\Phi^{0}||_{\infty}<\infty\). Following [21], we will proceed by induction and contradiction. Let \(||\Phi^{q}||_{\infty}\leq||\Phi^{0}||_{\infty}\) and suppose \(||\Phi^{q+1}||_{\infty}>||\Phi^{0}||_{\infty}\). Therefore, there exists \(p_{0}\in\{1,\ldots,N-1\}\) such that \(|\Phi_{p_{0}}^{q+1}|=||\Phi^{q+1}||_{\infty}\) and \(|\Phi_{p_{0}}^{q+1}|\geq|\Phi_{p}^{q+1}|\), for all \(p\in\{1,\ldots,N-1\}\). We will prove that this leads to a contradiction. Observing that \(a_{p}=b_{p}+c_{p}\), we can write: \[||\Phi^{q+1}||_{\infty}=|\Phi_{p_{0}}^{q+1}|=\big{[}-c_{p_{0}}\Delta t+(1+a_{p_{ 0}}\Delta t)-b_{p_{0}}\Delta t\big{]}|\Phi_{p_{0}}^{q+1}|. \tag{78}\] To proceed, we will need the coefficients \(a_{p},b_{p},c_{p}\) to be non-negative. This is true for \(a_{p}\), for all \(p\). From the remaining coefficients, we obtain the condition: \[\frac{\sigma^{2}}{2\Delta x^{2}}\geq\frac{|k(\theta-x_{p})|}{2\Delta x},\] and hence by requiring: \(\sigma^{2}\geq k||\theta-x||_{\infty}\Delta x\), we can ensure that the condition is satisfied at all points on the \(x\)-grid. Given that \(\Phi\) is identically zero for \(x<0\), this can be succinctly written as: \[\Delta x\leq\frac{\sigma^{2}}{k\theta}.
\tag{79}\] Continuing from (78), and noting that all coefficients are non-negative, we now have: \[||\Phi^{q+1}||_{\infty} \leq-c_{p_{0}}\Delta t|\Phi_{p_{0}-1}^{q+1}|+(1+a_{p_{0}}\Delta t )|\Phi_{p_{0}}^{q+1}|-b_{p_{0}}\Delta t|\Phi_{p_{0}+1}^{q+1}|\] \[\leq|-c_{p_{0}}\Delta t\Phi_{p_{0}-1}^{q+1}+(1+a_{p_{0}}\Delta t) \Phi_{p_{0}}^{q+1}-b_{p_{0}}\Delta t\Phi_{p_{0}+1}^{q+1}|, \tag{80}\] and from (72), in combination with the induction hypothesis, it follows that: \[||\Phi^{q+1}||_{\infty}\leq|(1-\lambda\Delta t\hat{I})\Phi_{p_{0}}^{q}+\lambda \Delta t\mathcal{I}\Phi_{p_{0}}^{q}|\leq||\Phi^{0}||_{\infty}, \tag{81}\] where the last steps hold if \(1-\lambda\Delta t\hat{I}\geq 0\), leading to the second condition. Then, since \(\Phi_{p_{0}}^{q}\leq||\Phi^{q}||_{\infty}\), and \(\mathcal{I}\Phi_{p_{0}}^{q}\leq\hat{I}||\Phi^{q}||_{\infty}\), the above contradicts the assumption that \(||\Phi^{q+1}||_{\infty}>||\Phi^{0}||_{\infty}\). Hence, the scheme is stable, provided (79) is satisfied. Monotonicity is proven in a similar way. Specifically, let \(\Phi^{q},\tilde{\Phi}^{q}\) be two solutions corresponding to initial conditions \(\Phi^{0},\tilde{\Phi}^{0}\), respectively, with \(\Phi^{0}\geq\tilde{\Phi}^{0}\) and define \(d^{q}:=\Phi^{q}-\tilde{\Phi}^{q}\). We will proceed by induction and contradiction, as above. We have that \(d^{0}\geq 0\) and also assume \(d^{q}\geq 0\). We suppose that \(d^{q+1}<0\), i.e., there exists \(p_{0}\) such that \(\inf_{p}d_{p}^{q+1}=d_{p_{0}}^{q+1}<0\). Then: \[\inf_{p}d_{p}^{q+1}=d_{p_{0}}^{q+1}=-c_{p_{0}}\Delta td_{p_{0}}^{ q+1}+(1+a_{p_{0}}\Delta t)d_{p_{0}}^{q+1}-b_{p_{0}}\Delta td_{p_{0}}^{q+1}\] \[\geq-c_{p_{0}}\Delta td_{p_{0}-1}^{q+1}+(1+a_{p_{0}}\Delta t)d_{p _{0}}^{q+1}-b_{p_{0}}\Delta td_{p_{0}+1}^{q+1}=(1-\lambda\Delta t\hat{I})d_{p_{ 0}}^{q}+\lambda\Delta t\mathcal{I}d_{p_{0}}^{q}\geq 0, \tag{82}\] where the last step follows from the induction hypothesis and we have supposed that condition (79) is satisfied.
By contradiction, we therefore conclude that \(d^{q+1}\geq 0\), as required.

### Regime switching

We now turn to the regime switching model, with regimes \(r\in\mathcal{R}\) and a total of \(R\) regimes. In the BTCS discretized version of (40) we let \(\Phi_{p,r}^{q}\) represent the survival probability at the grid point \(t_{q}=t_{0}+q\Delta t\), \(x_{p}=x_{0}+p\Delta x\), when the underlying Markov process is originally in state \(r\in\mathcal{R}\), i.e., \(\Phi^{q}_{p,r}=\Phi(x_{p},r,t_{q})\), with \(p=0,1,\ldots N-1,q=0,1,\ldots T\), \(r\in\mathcal{R}\). The discretized PIDE can be written as: \[-\Phi^{q+1}_{p-1,r}c_{p,r}\Delta t+\Phi^{q+1}_{p,r}\big{(}1+a_{p,r }\Delta t\big{)}-\Phi^{q+1}_{p+1,r}b_{p,r} \Delta t-\sum_{j\neq r}q_{rj}\Phi^{q+1}_{p,j}\Delta t\] \[=(1-\lambda\Delta t\hat{I})\Phi^{q}_{p,r}+\lambda\Delta t\mathcal{ I}\Phi^{q}_{p,r} \tag{83}\] where \(\mathcal{I}\Phi^{q}_{p,r}\) and \(\hat{I}\) are as in (69) and (70), respectively. The coefficients of this scheme are then given by: \[c_{p,r}=\frac{\sigma^{2}_{r}}{2\Delta x^{2}}-\frac{k_{r}(\theta _{r}-x_{p})}{2\Delta x}\] \[a_{p,r}=\frac{\sigma^{2}_{r}}{\Delta x^{2}}+\sum_{j\neq r}q_{rj}\] \[b_{p,r}=\frac{\sigma^{2}_{r}}{2\Delta x^{2}}+\frac{k_{r}(\theta _{r}-x_{p})}{2\Delta x}.
\tag{84}\] In matrix notation, the regime switching PIDE can be written as: \[M^{RS}\Phi^{q+1}=\Lambda^{RS}\Phi^{q}+b^{RS}, \tag{85}\] where the block-form matrices \(M^{RS},\Lambda^{RS}\in\mathbb{R}^{NR\times NR}\) and vectors \(\Phi^{q},b^{RS}\in\mathbb{R}^{NR}\) are given by: \[\Phi^{q}=\left[\begin{array}{c}\Phi^{q}_{\cdot,1}\\ \Phi^{q}_{\cdot,2}\\ \vdots\\ \Phi^{q}_{\cdot,R}\end{array}\right],\;\;b^{RS}=\left[\begin{array}{c}b\\ b\\ \vdots\\ b\end{array}\right],\;\;\Lambda^{RS}=\left[\begin{array}{cccc}\Lambda&0_{N}& \cdots&0_{N}\\ 0_{N}&\Lambda&\cdots&0_{N}\\ \vdots&\vdots&\vdots&\vdots\\ 0_{N}&0_{N}&\cdots&\Lambda\end{array}\right],\] \[M^{RS}=\left[\begin{array}{cccc}M_{r_{1}}&-\Delta tq_{r_{1}r_{2}}I_{N}& \cdots&-\Delta tq_{r_{1}r_{R}}I_{N}\\ -\Delta tq_{r_{2}r_{1}}I_{N}&M_{r_{2}}&\cdots&-\Delta tq_{r_{2}r_{R}}I_{N}\\ \vdots&\vdots&\vdots&\vdots\\ -\Delta tq_{r_{R}r_{1}}I_{N}&-\Delta tq_{r_{R}r_{2}}I_{N}&\cdots&M_{r_{R}}\end{array} \right], \tag{86}\] with \(\Phi^{q}_{\cdot,r}=(\Phi^{q}_{0,r},\ldots,\Phi^{q}_{N-1,r})^{T}\), \(b,\Lambda\), as in (74) and (76) and the regime-specific matrices \(M_{r_{i}}\in\mathbb{R}^{N\times N}\) for \(r_{i}\in\mathcal{R},i=1,\ldots,R\), are obtained from (74) by replacing \(a_{p},b_{p},c_{p}\) with \(a_{p,r},b_{p,r},c_{p,r}\). As in the discretization of the PIDE for the one dimensional OU model, we will have to prove the appropriate stability and monotonicity results for the regime switching model. **Lemma 6.3**.: _Scheme (83) is stable and monotone if_ \[\Delta x\leq\frac{\sigma^{2}_{r}}{k_{r}\theta_{r}}\mbox{ and }\Delta t\leq \frac{1}{\lambda\hat{I}} \tag{87}\] _for all \(r\in\mathcal{R}\)._ Proof.: Let \(\Phi^{0}\) be a bounded initial condition for the survival probability. Note that this initial condition accounts for all regimes. As above, we will proceed by induction and contradiction. Let \(||\Phi^{q}||_{\infty}\leq||\Phi^{0}||_{\infty}\) and suppose \(||\Phi^{q+1}||_{\infty}>||\Phi^{0}||_{\infty}\).
In this case, this means that there exists \((p_{0},r_{0})\in\{0,1,\ldots,N-1\}\times\mathcal{R}\) such that \(|\Phi^{q+1}_{p_{0},r_{0}}|=||\Phi^{q+1}||_{\infty}\), with \(|\Phi^{q+1}_{p_{0},r_{0}}|\geq|\Phi^{q+1}_{p,r}|\), for all \((p,r)\in\{0,1,\ldots,N-1\}\times\mathcal{R}\). Hence: \[||\Phi^{q+1}||_{\infty}=|\Phi^{q+1}_{p_{0},r_{0}}|=\big{[}-c_{p_{0},r_{0}}\Delta t+(1+a_{p_{0},r_{0}}\Delta t)-b_{p_{0},r_{0}}\Delta t-\sum_{j\neq r _{0}}q_{r_{0}j}\Delta t\big{]}|\Phi^{q+1}_{p_{0},r_{0}}|\] \[\leq-c_{p_{0},r_{0}}|\Phi^{q+1}_{p_{0}-1,r_{0}}|\Delta t+(1+a_{p_ {0},r_{0}}\Delta t)|\Phi^{q+1}_{p_{0},r_{0}}|-b_{p_{0},r_{0}}|\Phi^{q+1}_{p_{0 }+1,r_{0}}|\Delta t-\sum_{j\neq r_{0}}q_{r_{0}j}|\Phi^{q+1}_{p_{0},j}|\Delta t\] \[\leq|-c_{p_{0},r_{0}}\Phi^{q+1}_{p_{0}-1,r_{0}}\Delta t+(1+a_{p_{0 },r_{0}}\Delta t)\Phi^{q+1}_{p_{0},r_{0}}-b_{p_{0},r_{0}}\Phi^{q+1}_{p_{0}+1,r _{0}}\Delta t-\sum_{j\neq r}q_{r_{0}j}\Phi^{q+1}_{p_{0},j}\Delta t|\] \[\leq|(1-\lambda\Delta t\hat{I})\Phi^{q}_{p_{0},r_{0}}+\lambda \Delta t\mathcal{I}\Phi^{q}_{p_{0},r_{0}}|\leq||\Phi^{0}||_{\infty},\] where the last inequality follows from the same calculations as in Proposition 6.2. In the above, we must have \(a_{p,r},b_{p,r},c_{p,r}>0\) for each regime \(r\in\mathcal{R}\), leading to the first condition in (87). For monotonicity, again let \(\Phi^{q},\tilde{\Phi}^{q}\) be two solutions corresponding to \(\Phi^{0},\tilde{\Phi}^{0}\), respectively, with \(\Phi^{0}\geq\tilde{\Phi}^{0}\). Assume \(d^{q}:=\Phi^{q}-\tilde{\Phi}^{q}>0\) and suppose \(d^{q+1}\leq 0\). Hence, there exists \(p_{0},r_{0}\) such that \(\inf_{p,r}d^{q+1}_{p,r}=d^{q+1}_{p_{0},r_{0}}<0\). 
Proceeding as in Proposition 6.2: \[\inf_{p,r}d^{q+1}_{p,r}=d^{q+1}_{p_{0},r_{0}}=\big{[}-c_{p_{0},r_{ 0}}\Delta t+(1+a_{p_{0},r_{0}}\Delta t)-\sum_{j\neq r_{0}}q_{r_{0}j}\Delta t-b _{p_{0},r_{0}}\Delta t\big{]}d^{q+1}_{p_{0},r_{0}}\] \[\geq-c_{p_{0},r_{0}}d^{q+1}_{p_{0}-1,r_{0}}\Delta t+(1+a_{p_{0}, r_{0}}\Delta t)d^{q+1}_{p_{0},r_{0}}-\sum_{j\neq r_{0}}q_{r_{0}j}d^{q+1}_{p_{0 },j}\Delta t-b_{p_{0},r_{0}}d^{q+1}_{p_{0}+1,r_{0}}\Delta t\] \[=(1-\lambda\Delta t\hat{I})d^{q}_{p_{0},r_{0}}+\lambda\Delta t \mathcal{I}d^{q}_{p_{0},r_{0}}\geq 0.\] Stability and monotonicity for scheme (83) thus follow by contradiction.

### Stochastic volatility model

Finally, we present the numerical scheme for model (41). For this case, we must consider a discretization of the volatility process \(y\in\mathcal{V}\) of size \(V\). For the numerical implementation we use \(y\in[0,Y_{\max}]\) for some appropriate value \(Y_{\max}\). As above, we adopt the notation \(\Phi^{q}_{p,j}\) for the survival probability at the grid point \(t_{q}=t_{0}+q\Delta t\), \(x_{p}=x_{0}+p\Delta x\), and \(y_{j}=y_{0}+j\Delta y\), i.e. \(\Phi^{q}_{p,j}=\Phi(x_{p},y_{j},t_{q})\), with \(p=0,1,\ldots N-1,q=0,1,\ldots T\) and \(j=0,1,\ldots,V-1\). In this case the discretization scheme requires an alternative approach. Specifically, when the coefficient of the diffusion term vanishes, i.e. as \(y\to 0\), the analogue of the previous stability and monotonicity conditions fails.
The solution to this is to consider an Alternating Direction Implicit (ADI) approximation to the first derivative terms corresponding to both the asset and CIR volatility processes, as shown below: \[\frac{\partial\Phi}{\partial x}\approx\begin{cases}\frac{\Phi^{q+1}_{p+1,j}- \Phi^{q+1}_{p,j}}{\Delta x},&\text{if }k(\theta-x_{p})\geq 0\\ \frac{\Phi^{q+1}_{p,j}-\Phi^{q+1}_{p-1,j}}{\Delta x},&\text{if }k(\theta-x_{p})<0 \end{cases} \tag{88}\] \[\frac{\partial\Phi}{\partial y}\approx\begin{cases}\frac{\Phi_{p,j+1}^{q+1}-\Phi_{p,j}^{q+1}}{\Delta y},&\text{if }\kappa(\mu-y_{j})\geq 0\\ \frac{\Phi_{p,j}^{q+1}-\Phi_{p,j-1}^{q+1}}{\Delta y},&\text{if }\kappa(\mu-y_{j})<0 \end{cases} \tag{89}\] Furthermore, recall that at the boundary \(y=Y_{\max}\) we have: \[\frac{\Phi_{p,V}^{q+1}-\Phi_{p,V-2}^{q+1}}{2\Delta y}\approx\frac{\partial\Phi }{\partial y}(x,Y_{\max},u)=0, \tag{90}\] and therefore \(\Phi_{p,V}\approx\Phi_{p,V-2}\). This allows us to approximate the second derivative at the boundary by: \[\frac{\Phi_{p,V}^{q+1}-2\Phi_{p,V-1}^{q+1}+\Phi_{p,V-2}^{q+1}}{\Delta y^{2}}= \frac{2\big{(}\Phi_{p,V-2}^{q+1}-\Phi_{p,V-1}^{q+1}\big{)}}{\Delta y^{2}} \tag{91}\] We can now write the implicit scheme for the PIDE corresponding to the stochastic volatility model: \[-\Phi_{p-1,j}^{q+1}c_{p,j}\Delta t+\Phi_{p,j}^{q+1}\big{(}1+a_{p, j}\Delta t\big{)}-\Phi_{p+1,j}^{q+1}b_{p,j}\Delta t-\Phi_{p,j-1}^{q+1}e_{p,j} \Delta t-\Phi_{p,j+1}^{q+1}f_{p,j}\Delta t\] \[=(1-\lambda\Delta t\hat{I})\Phi_{p,j}^{q}+\lambda\Delta t \mathcal{I}\Phi_{p,j}^{q}, \tag{92}\] where the coefficients are given by: \[c_{p,j} =\frac{y_{j}}{2\Delta x^{2}}-\frac{k(\theta-x_{p})}{\Delta x} \mathds{1}_{\{k(\theta-x_{p})<0\}},\] \[a_{p,j} =\frac{y_{j}}{\Delta x^{2}}+\frac{\xi^{2}y_{j}}{\Delta y^{2}}+ \bigg{|}\frac{k(\theta-x_{p})}{\Delta x}\bigg{|}+\bigg{|}\frac{\kappa(\mu-y_{ j})}{\Delta y}\bigg{|},\] \[b_{p,j} =\frac{y_{j}}{2\Delta x^{2}}+\frac{k(\theta-x_{p})}{\Delta x}
\mathds{1}_{\{k(\theta-x_{p})>0\}},\] \[e_{p,j} =\frac{\xi^{2}y_{j}}{2\Delta y^{2}}\mathds{1}_{\{y\neq Y_{\max} \}}+\frac{\xi^{2}y_{j}}{\Delta y^{2}}\mathds{1}_{\{y=Y_{\max}\}}-\frac{\kappa (\mu-y_{j})}{\Delta y}\mathds{1}_{\{\kappa(\mu-y_{j})<0\;\cap\;y\neq 0\;\cap\;y \neq Y_{\max}\}},\] \[f_{p,j} =\frac{\xi^{2}y_{j}}{2\Delta y^{2}}\mathds{1}_{\{y\neq Y_{\max} \}}+\frac{\kappa(\mu-y_{j})}{\Delta y}\mathds{1}_{\{\kappa(\mu-y_{j})>0\;\cap \;y\neq Y_{\max}\}}. \tag{93}\] The solution of the resulting scheme: \[M^{SV}\Phi^{q+1}=\Lambda^{SV}\Phi^{q}+b^{SV}\] will result in estimations of the survival probability at each state of the underlying stochastic volatility process. Therefore, as in the regime switching model, we obtain the vectors \(\Phi^{q}=[\Phi_{\cdot,0}^{q}\;\;\Phi_{\cdot,1}^{q}\cdots\Phi_{\cdot,V-1}^{q}]^ {T}\in\mathbb{R}^{NV}\), \(b^{SV}=[b\;\;b\cdots b]^{T}\in\mathbb{R}^{NV}\) and matrix \(M^{SV}\in\mathbb{R}^{NV\times NV}\) as given below, in block form: \[M^{SV}=\left[\begin{array}{cccc}M_{0}&-\Delta tf_{p,0}I_{N}&0_{N}&\cdots&0_{ N}\\ -\Delta te_{p,1}I_{N}&M_{1}&-\Delta tf_{p,1}I_{N}&\cdots&0_{N}\\ \vdots&\vdots&\vdots&\vdots&\\ 0_{N}&\cdots&-\Delta te_{p,V-2}I_{N}&M_{V-2}&-\Delta tf_{p,V-2}I_{N}\\ 0_{N}&\cdots&0_{N}&-\Delta te_{p,V-1}I_{N}&M_{V-1}\end{array}\right],\] where \(M_{j}\in\mathbb{R}^{N\times N}\) for \(j=0,1,\ldots,V-1\) is given by (74) by replacing \(a_{p},b_{p},c_{p}\) with \(a_{p,j},b_{p,j},c_{p,j}\) and \(\Lambda^{SV}\!\in\mathbb{R}^{NV\times NV}\) is in the same form as \(\Lambda^{RS}\) in (86). We now prove the required stability and monotonicity results for the stochastic volatility case. **Lemma 6.4**.: _Scheme (92) is unconditionally stable and monotone._ Proof.: The proof follows almost identically to the regime switching case.
Again, let \(\Phi^{0}\) be a bounded initial condition for the survival probability, with \(||\Phi^{q}||_{\infty}\leq||\Phi^{0}||_{\infty}\), and suppose \(||\Phi^{q+1}||_{\infty}>||\Phi^{0}||_{\infty}\). Hence, there exists \((p_{0},j_{0})\in\{0,1,\ldots,N-1\}\times\{0,1,\ldots,V-1\}\) such that \(|\Phi^{q+1}_{p_{0},j_{0}}|=||\Phi^{q+1}||_{\infty}\), with \(|\Phi^{q+1}_{p_{0},j_{0}}|\geq|\Phi^{q+1}_{p,j}|\) for all \((p,j)\in\{0,1,\ldots,N-1\}\times\{0,1,\ldots,V-1\}\). Hence, noting that no conditions on \(\Delta x,\Delta t\) need be imposed since all the coefficients are positive by construction, we have: \[||\Phi^{q+1}||_{\infty}=|\Phi^{q+1}_{p_{0},j_{0}}|=\big{[}-c_{p_{0},j_{0}}\Delta t+(1+a_{p_{0},j_{0}}\Delta t)-b_{p_{0},j_{0}}\Delta t-e_{p_{0},j_{0}}\Delta t-f_{p_{0},j_{0}}\Delta t\big{]}|\Phi^{q+1}_{p_{0},j_{0}}|\] \[\leq-c_{p_{0},j_{0}}|\Phi^{q+1}_{p_{0}-1,j_{0}}|\Delta t+\big{(}1+a_{p_{0},j_{0}}\Delta t\big{)}|\Phi^{q+1}_{p_{0},j_{0}}|-b_{p_{0},j_{0}}|\Phi^{q+1}_{p_{0}+1,j_{0}}|\Delta t\] \[-e_{p_{0},j_{0}}|\Phi^{q+1}_{p_{0},j_{0}-1}|\Delta t-f_{p_{0},j_{0}}|\Phi^{q+1}_{p_{0},j_{0}+1}|\Delta t\leq|(1-\lambda\Delta t\hat{I})\Phi^{q}_{p_{0},j_{0}}+\lambda\Delta t\mathcal{I}\Phi^{q}_{p_{0},j_{0}}|\leq||\Phi^{0}||_{\infty},\] a contradiction. To conclude, we prove the scheme is monotone. Consider initial conditions \(\Phi^{0},\tilde{\Phi}^{0}\), respectively, with \(\Phi^{0}\geq\tilde{\Phi}^{0}\).
With \(d^{q}:=\Phi^{q}-\tilde{\Phi}^{q}\geq 0\), suppose for contradiction that \(d^{q+1}<0\) somewhere, i.e., there exists a pair \((p_{0},j_{0})\) such that \(\inf_{p,j}d^{q+1}_{p,j}=d^{q+1}_{p_{0},j_{0}}<0\), and calculate: \[\inf_{p,j}d^{q+1}_{p,j} =d^{q+1}_{p_{0},j_{0}}=\big{[}-c_{p_{0},j_{0}}\Delta t+(1+a_{p_{0},j_{0}}\Delta t)-b_{p_{0},j_{0}}\Delta t-e_{p_{0},j_{0}}\Delta t-f_{p_{0},j_{0}}\Delta t\big{]}d^{q+1}_{p_{0},j_{0}}\] \[\geq-c_{p_{0},j_{0}}d^{q+1}_{p_{0}-1,j_{0}}\Delta t+(1+a_{p_{0},j_{0}}\Delta t)d^{q+1}_{p_{0},j_{0}}-b_{p_{0},j_{0}}d^{q+1}_{p_{0}+1,j_{0}}\Delta t\] \[-e_{p_{0},j_{0}}d^{q+1}_{p_{0},j_{0}-1}\Delta t-f_{p_{0},j_{0}}d^{q+1}_{p_{0},j_{0}+1}\Delta t=(1-\lambda\Delta t\hat{I})d^{q}_{p_{0},j_{0}}+\lambda\Delta t\mathcal{I}d^{q}_{p_{0},j_{0}}\geq 0,\] a contradiction. **Remark 6.5**.: It is worth noting that the resulting system for the PD function is dense due to the jump integral term, adding to the computational complexity of the scheme. Hence, additional methods such as implicit handling of the jump term and/or Crank-Nicolson schemes can be useful. We omit these methods from the present work, as they are not our main focus; we refer the interested reader to relevant research, such as [25], [37] and [19]. **Remark 6.6**.: As previously mentioned, for the credit risk modelling tasks we will consider, using either the regime switching or the stochastic volatility model suffices. We will see multiple such examples in Section 7. Calculations similar to those above can be applied to PIDE (55) for the estimation of the survival probability under the generalized model. Stability and monotonicity follow from combining Lemmata 6.3 and 6.4. However, as mentioned, the combination of the regime switching and stochastic volatility variables leads to an intractable numerical scheme, plagued by the "curse of dimensionality".
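The positivity of the coefficients in (93), which underlies both the stability and monotonicity arguments above, can be verified numerically. The sketch below (grid sizes and model parameters are illustrative assumptions, not the paper's calibration) assembles the interior coefficients and checks the sign pattern that makes the implicit system an M-matrix:

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's calibration).
k, theta, kappa, mu, xi = 0.5, 3.5, 0.8, 0.2, 0.3
dx, dy, dt = 0.1, 0.05, 0.01
x = np.arange(-2.0, 2.0 + dx, dx)   # asset grid
y = np.arange(dy, 1.0, dy)          # interior volatility grid (y > 0)

X, Y = np.meshgrid(x, y, indexing="ij")

# Interior coefficients of scheme (92)-(93), with upwinded drift terms.
c = Y / (2 * dx**2) - (k * (theta - X) / dx) * (k * (theta - X) < 0)
b = Y / (2 * dx**2) + (k * (theta - X) / dx) * (k * (theta - X) > 0)
e = xi**2 * Y / (2 * dy**2) - (kappa * (mu - Y) / dy) * (kappa * (mu - Y) < 0)
f = xi**2 * Y / (2 * dy**2) + (kappa * (mu - Y) / dy) * (kappa * (mu - Y) > 0)
a = Y / dx**2 + xi**2 * Y / dy**2 \
    + np.abs(k * (theta - X) / dx) + np.abs(kappa * (mu - Y) / dy)

# All off-diagonal weights are non-negative and the diagonal dominates:
assert (c >= 0).all() and (b >= 0).all() and (e >= 0).all() and (f >= 0).all()
assert np.allclose(a, b + c + e + f)   # interior row-sum identity used in the proof
assert (1 + a * dt > 0).all()          # strictly positive diagonal
print("coefficient sign pattern verified")
```

The same check extends to the boundary-adjusted coefficients, where the indicator functions in (93) simply redistribute the non-negative weights.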
## 7 Applications in credit risk

### IFRS 9 provision calculations

As discussed, the IFRS 9 framework requires practitioners to take into consideration multiple risk factors and their evolution for provision calculations and other modelling tasks. Naturally, the evolution of the PD is of paramount importance in these credit risk problems. Specifically for provisioning, using the PD function we can now estimate provisions for Stage 1 and Stage 2 exposures. Recall that financial institutions must account for additional provisions for exposures which display a significant increase in credit risk. These forward-looking lifetime provisions must be calculated per exposure, with some minor differences depending on the type of portfolio (e.g., for corporate loan portfolios many consider contamination effects). In this section, we display how the framework outlined above can be used to calculate provisions under IFRS 9. The main contribution is the calculation of Expected Lifetime provisions for Stage 2 exposures, which, as previously stated, is a novel requirement introduced by these regulatory standards. We provide specific examples of provision calculations for each case below. These calculations depend on multiple risk parameters corresponding to the credit exposure: the PD and Loss Given Default (LGD), as well as the amortization schedule, which affects the Exposure at Default (EAD), i.e., the remaining value of the loan which is not repaid in the case of default. Naturally, these risk parameters may vary according to each application and case. For example, many consider the LGD to evolve according to some stochastic process correlated with the PD (see e.g., [43], [55]). As our main focus is the PD function, we will consider a constant LGD and a typical amortization schedule under the assumption of a zero interest rate in the examples that follow.
The methodology, however, can be generalized to also consider an appropriate LGD function (or stochastic process) and any type of amortization.

#### 7.1.1 Stage 1 provisions

For Stage 1 loans standard regulations apply, and we need only consider provisions as the Expected Losses (EL) that can be incurred on the current exposure. This calculation is given by the simple formula: \[EL:=\mathbb{E}[L_{t}]=EAD_{t}LGD_{t}PD_{t}. \tag{94}\] Using the implicit numerical schemes, we can calculate the PD value representing the probability that the loan defaults within some fixed time \(t\), represented straightforwardly by \[PD_{t}=\Psi(x,t).\] For example, the probability of a default event occurring within the current unit of time (typically a year) is \(PD_{1}=\Psi(x,1).\) An example of the provision calculation for varying initial positions is given below. **Example 7.1**.: Consider a Stage 1 loan, with asset process given by: \[dG_{t}=k(\theta-G_{t})dt+\sigma dB_{t}+\int_{\mathbb{R}}zN(dt,dz),\,\,\,G_{0}=x, \tag{95}\] where \((k,\theta,\sigma)=(0.5,3.5,2.0)\), and the Compound Poisson Process has normally distributed jumps, with size \(Z\sim N(0.0,0.2)\) and rate \(\lambda=1.0\). We select \(\mathcal{D}=[-10,10]\), with \(S=8.0\), as estimated by Monte Carlo experiments, \(N=1001\) and \(T=101\). The resulting survival probability is graphed in Fig. 1. Then, depending on the initial position of the asset at the time of loan origination, the provisions are calculated as \(EL=100\cdot PD\cdot 75\%\). In Table 1 we present the results for some initial positions \(x\in[0,1]\). Hence, if, at the time of calculation, the asset process is estimated to start at \(x=0.6\), the provisions are \(19.62\%\) of the current exposure. \(\triangleleft\) As expected, the provisions are a decreasing function of the initial position. A similar table can be produced at any point during the lifetime of the loan, by estimating the corresponding PD values.
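The Stage 1 calculation in (94) is a pointwise product; a minimal sketch reproducing the provisions column of Table 1 from the quoted PD values (the PD figures are taken from the table, not recomputed from the PIDE):

```python
# Stage 1 provisions: EL = EAD * LGD * PD, as in (94).
# PD_1 values (%) for initial positions x = 0.0, 0.1, ..., 1.0, from Table 1.
pd_1 = [100.00, 83.24, 68.22, 55.01, 43.64, 34.06, 26.16, 19.78, 14.73, 10.82, 7.83]
ead, lgd = 100.0, 0.75

provisions = [round(ead * lgd * pd / 100.0, 2) for pd in pd_1]
print(provisions)   # e.g. x = 0.6 gives 19.62% of the exposure
```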
\begin{table} \begin{tabular}{c c c c} Initial Position & \(PD_{1}(\%)\) & \(LGD(\%)\) & Provisions (\(\%\)) \\ \hline 0.0 & 100.00 & 75 & 75.00 \\ 0.1 & 83.24 & 75 & 62.43 \\ 0.2 & 68.22 & 75 & 51.17 \\ 0.3 & 55.01 & 75 & 41.26 \\ 0.4 & 43.64 & 75 & 32.73 \\ 0.5 & 34.06 & 75 & 25.55 \\ 0.6 & 26.16 & 75 & 19.62 \\ 0.7 & 19.78 & 75 & 14.84 \\ 0.8 & 14.73 & 75 & 11.05 \\ 0.9 & 10.82 & 75 & 8.11 \\ 1.0 & 7.83 & 75 & 5.87 \\ \end{tabular} \end{table} Table 1: Provision calculations for Stage 1 loan, given initial position of the asset process.

Figure 1: Survival probability for asset process with \((k,\theta,\sigma)=(0.5,3.5,2.0)\), using the BTCS scheme with \(J=150\).

#### 7.1.2 Stage 2 Provisions

We now turn to loan provision calculations for Stage 2 loans. Under IFRS 9, if and when loans transition to Stage 2, the lender is obligated to consider all future losses for provisioning purposes. Hence, at any time \(t\), and assuming discrete amortization payments, the following formula for the expected losses occurring at some time \(i>t\) applies: \[\mathbb{E}[L_{i}]=\mathbb{E}\Big{[}\frac{1}{(1+r_{i})^{i-t}}LGD_{i}PD_{i}^{PiT}EAD_{i}|\mathcal{F}_{t}\Big{]}. \tag{96}\] The corresponding formula for the Lifetime Expected Credit Losses (ECL - also referred to as Expected Lifetime Provisions) at time \(t\) is given by: \[ECL_{t}=\mathbb{E}\Big{[}\sum_{i=t+1}^{T}\frac{1}{(1+r_{i})^{i-t}}LGD_{i}PD_{i}^{PiT}EAD_{i}|\mathcal{F}_{t}\Big{]}, \tag{97}\] where \(r_{i}\) is the interest rate at time \(i\) and \(T\) the maturity. In the above, \(PD_{i}^{PiT}\) represents the conditional Point-in-Time PD, which is the probability of default occurring at a given future time period. Specifically, we define: \[PD_{u}^{PiT}=\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{x}\leq 0,\inf_{r\leq u-1}G_{r}^{x}>0\Big{)}.
\tag{98}\] In order to calculate the \(PD^{PiT}\) in terms of the PD function resulting from the solution of the PIDE, we note that: \[\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{x}\leq 0\Big{)}=\mathbb{P}\Big{(}\inf_{r\leq u-1}G_{r}^{x}\leq 0\Big{)}+\mathbb{P}\Big{(}\inf_{r\leq u}G_{r}^{x}\leq 0,\inf_{r\leq u-1}G_{r}^{x}>0\Big{)},\] and hence: \[PD_{u}^{PiT}=\Psi(x,u)-\Psi(x,u-1)=\Phi(x,u-1)-\Phi(x,u).\] Therefore, (97) can now be written as: \[ECL_{t}=\sum_{i=t+1}^{T}\big{(}\Phi(x,i-1)-\Phi(x,i)\big{)}\mathbb{E}\Big{[}\frac{LGD_{i}EAD_{i}}{(1+r_{i})^{i-t}}|\mathcal{F}_{t}\Big{]}. \tag{99}\] **Example 7.2**.: Consider a credit exposure with asset process as in Example 7.1, but now suppose the exposure has been transferred to Stage 2, with remaining maturity \(T=10\) years. We also consider that the asset process of the borrower is currently estimated at \(x=1.80\). To estimate the Stage 2 provisions we require the PD function and use (99) (as mentioned, we consider \(r=0\) for simplicity): \begin{tabular}{c c c c c c} Time until maturity (\(u\)) & \(EAD_{u}\) & \(PD_{u}(\%)\) & \(PD_{u}^{PiT}(\%)\) & \(LGD(\%)\) & \(EL_{u}\) \\ \hline 10.0 & 100 & 21.69 & 1.59 & 75 & 1.19 \\ 9.0 & 90 & 20.10 & 1.76 & 75 & 1.19 \\ 8.0 & 80 & 18.34 & 1.97 & 75 & 1.18 \\ 7.0 & 70 & 16.37 & 2.20 & 75 & 1.16 \\ 6.0 & 60 & 14.17 & 2.49 & 75 & 1.12 \\ 5.0 & 50 & 11.68 & 2.81 & 75 & 1.05 \\ 4.0 & 40 & 8.87 & 3.10 & 75 & 0.93 \\ 3.0 & 30 & 5.77 & 3.14 & 75 & 0.71 \\ 2.0 & 20 & 2.63 & 2.24 & 75 & 0.34 \\ 1.0 & 10 & 0.39 & 0.39 & 75 & 0.03 \\ \hline \(ECL\) & & & & & 8.89 \\ \end{tabular} The exposure and expected losses are in percentages of the remaining exposure. As shown, the current Lifetime provisions are given by the sum of the final column: \(ECL=8.89\%\). \(\triangleleft\) In practice, we expect that when an exposure is classified as Stage 2, the parameters of the underlying asset process may differ from the Stage 1 counterpart.
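Equation (99) is a discounted sum of expected losses; a short sketch reproducing the Example 7.2 total from the quoted Point-in-Time PDs (values taken from the table above; \(r=0\) and \(LGD=75\%\) as in the example):

```python
# Lifetime ECL per (99): sum of EAD_u * LGD * PD_u^PiT over remaining periods.
# (EAD_u, PD_u^PiT in %) pairs, from the Example 7.2 table (r = 0 assumed).
rows = [(100, 1.59), (90, 1.76), (80, 1.97), (70, 2.20), (60, 2.49),
        (50, 2.81), (40, 3.10), (30, 3.14), (20, 2.24), (10, 0.39)]
lgd = 0.75

el = [ead * lgd * pd_pit / 100.0 for ead, pd_pit in rows]
ecl = sum(el)
print(f"ECL = {ecl:.2f}% of the remaining exposure")  # matches the 8.89% in the text
```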
In the example above, we purposely consider the same asset process so as to highlight the differences in the final provision estimations.

#### 7.1.3 Provisions under the regime switching model

As discussed, the new regulatory framework aims to ensure that all financial institutions have accounted for future losses and abrupt changes in credit risk parameters, which can create severe losses and subsequent liquidity and solvency issues, both for institutions and their customers. As risk classification is widely considered a Markov process, both in theory and by practitioners, considering transition probabilities for loans allows us to forecast PD values and estimate worst-case scenario provisions for loan exposures. We note that, in practice, estimating the parameters of the asset prices under each regime may be difficult. However, many financial institutions consider such models, and the corresponding mathematical framework is well established; see e.g., [20] and [17]. Another approach is to use historical parameters from Stage 1 and Stage 2 loans to estimate the changes that occur when a loan transitions between Stages. For this example, we will be estimating the provisions under the regime switching model developed above. To this end, we consider an IFRS 9 compliant transition matrix: \[P=\left(\begin{array}{c|ccc}\text{IFRS 9 Rating}&\text{Stage 1}&\text{Stage 2}&\text{Stage 3}\\ \hline\text{Stage 1}&p_{11}&p_{12}&p_{13}\\ \text{Stage 2}&p_{21}&p_{22}&p_{23}\\ \text{Stage 3}&0&0&1\\ \end{array}\right).\] For a loan originating in Stage 1, we can now forecast credit losses by taking into account the probability of a SICR (significant increase in credit risk) event. Under the regime switching model, in the case of a transition to another Stage, we will need to estimate the PD values for the asset process governed by the new parameters.
Using the straightforward notation \(\Phi^{i}\) or \(PD^{i}\) to emphasize the Stage (regime) under which the specific PD value is estimated, we can then define the "_Stage-weighted provisions_", given by: \[WP_{t}:=p_{11}EAD_{t}LGD_{t}PD^{1}+p_{12}\sum_{i=t}^{T}\big{(}\Phi^{2}(x,i-1)-\Phi^{2}(x,i)\big{)}\mathbb{E}\Big{[}\frac{LGD_{i}EAD_{i}}{(1+r_{i})^{i-t}}|\mathcal{F}_{t}\Big{]}+p_{13}EAD_{t}LGD_{t}, \tag{100}\] where the third term corresponds to the case of default (i.e., transition to Stage 3), in which \(PD=100\%\). This calculation holds for the case where the transition to Stage 2 occurs one period (e.g., one year) later. However, we can also consider the cases where the deterioration occurs at any point \(k>t\). For this calculation we require the \(k\)-step transition matrix of the underlying rating process, which is known to be \(P^{k}\), whose elements will be symbolized as below: \[P^{k}=\left(\begin{array}{c|ccc}\text{IFRS 9 Rating}&\text{Stage 1}&\text{Stage 2}&\text{Stage 3}\\ \hline\text{Stage 1}&p_{11}^{k}&p_{12}^{k}&p_{13}^{k}\\ \text{Stage 2}&p_{21}^{k}&p_{22}^{k}&p_{23}^{k}\\ \text{Stage 3}&0&0&1\end{array}\right),\] with the understanding that \(p_{ij}^{k}\) represents the \(k\)-th step transition probability.

Table 2: Provision calculations for Stage 2 loan, given an asset process with initial position \(x=1.80\).

We then have: \[\mathbb{E}[WP_{k}|\mathcal{F}_{t}]=p_{11}^{k}EAD_{k}LGD_{k}PD_{k}^{1}+p_{12}^{k}\sum_{i=k}^{T}\big{(}\Phi^{2}(x,i-1)-\Phi^{2}(x,i)\big{)}\mathbb{E}\Big{[}\frac{LGD_{i}EAD_{i}}{(1+r_{i})^{i-t}}|\mathcal{F}_{t}\Big{]}+p_{13}^{k}EAD_{t}LGD_{t}. \tag{101}\] At any point, with the dynamics of the underlying Markov process, we can obtain the corresponding \(WP_{t}\) values and calculate the above _Expected Stage-weighted provisions_.
This calculation takes the future evolution of the loan, as well as the regime, into consideration to provide an estimation that incorporates all scenarios. For illustrative purposes, we consider the example below. **Example 7.3**.: Consider an asset process governed by the regime-switching model below: \[dG_{t}=\begin{cases}k_{1}(\theta_{1}-G_{t})dt+\sigma_{1}dB_{t}+\int_{\mathbb{R}}zN(dt,dz),&G_{0}=x,\text{ if }R_{t}=\text{Stage 1},\\ k_{2}(\theta_{2}-G_{t})dt+\sigma_{2}dB_{t}+\int_{\mathbb{R}}zN(dt,dz),&G_{0}=x,\text{ if }R_{t}=\text{Stage 2},\\ k_{3}(\theta_{3}-G_{t})dt+\sigma_{3}dB_{t}+\int_{\mathbb{R}}zN(dt,dz),&G_{0}=x,\text{ if }R_{t}=\text{Stage 3},\end{cases} \tag{102}\] with regime-specific parameters given in Fig. 2 and \(\mathcal{D}=[-6.0,6.0]\), with \(S=4.0\) (a different limit value \(S\) could be used for each regime; in this example, however, the Monte Carlo estimates indicate that the same value suffices). We consider normally distributed jumps with \(Z\sim N(0.0,0.5)\) and rate \(\lambda=1.0\). We have set \(N=1001,T=1001\) for the space and time grids, respectively. Furthermore, the generator matrix of the underlying Markov process is given by: \[Q=\left(\begin{array}{ccc}-0.5&0.3&0.2\\ 0.3&-0.6&0.3\\ 0.0&0.0&0.0\end{array}\right).\] The graphs in Fig. 2 display the estimated survival probability in each regime (Stage), resulting from solving scheme (85). We consider an initial position of \(x=0.3\) and maturity \(T=10\). The transition matrix of the underlying Markov process is obtained by calculating \(P=\exp(Q)\): \[P=\left(\begin{array}{c|ccc}\text{IFRS 9 Rating}&\text{Stage 1}&\text{Stage 2}&\text{Stage 3}\\ \hline\text{Stage 1}&0.63&0.18&0.19\\ \text{Stage 2}&0.18&0.57&0.25\\ \text{Stage 3}&0.00&0.00&1.00\end{array}\right). \tag{103}\] We will perform the provisioning scenario analysis by forecasting the Stage-weighted provisions, given by (101), for the next four years.
We first calculate the \(k\)-step transition matrices: \[P^{2}=\left(\begin{array}{ccc}0.43&0.21&0.36\\ 0.21&0.36&0.43\\ 0.00&0.00&1.00\end{array}\right),\ \ P^{3}=\left(\begin{array}{ccc}0.31&0.20&0.50\\ 0.20&0.24&0.56\\ 0.00&0.00&1.00\end{array}\right),\ \ P^{4}=\left(\begin{array}{ccc}0.23&0.17&0.60\\ 0.17&0.18&0.66\\ 0.00&0.00&1.00\end{array}\right).\]

Figure 2: Survival probability for (102) with \((k_{1},k_{2},k_{3})=(0.3,0.2,0.0),(\theta_{1},\theta_{2},\theta_{3})=(0.8,0.5,0.0),(\sigma_{1},\sigma_{2},\sigma_{3})=(0.3,0.5,0.0)\), with \(R_{0}=\) Stage 1 (top left) and \(R_{0}=\) Stage 2 (top right). The average survival probability across all regimes is also shown (bottom), which accounts for the survival probability when \(R_{0}=\) Stage 3, for which \(\Phi(x,u)\equiv 0\).

At time \(t=0\) we consider the forward-looking scenarios and can calculate the Stage 1 and Stage 2 provisions. Recall that Stage 1 provisions are given by (94). Stage 2 (expected lifetime) provisions are given in column \(ECL_{t}\) of Table 3 below, which also contains the Point-in-Time Stage 1 and Stage 2 PDs required to calculate the provisions. As shown in Example 7.2, the Lifetime Provisions can be calculated as the sum of the expected losses column, \(EL_{u}\). We can now calculate the Stage 1 and Stage 2 provisions at each subsequent time period, which we use to obtain the Stage-weighted provisions (101) for the next four years. The results are shown in Table 4, whose final column contains the Stage-weighted provisions. The large difference observed between the Lifetime and the Stage-weighted provisions is evidence of the importance of such scenario analysis in provision calculations. Particularly in cases similar to this example, where the probability of transitioning to the default state is quite high, the effect can be extremely large, and risk managers must account for it in risk and provisioning policies.
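The transition matrices above follow mechanically from the generator \(Q\) via \(P=e^{Q}\) and matrix powers; a quick sketch reproducing (103) and the \(k\)-step matrices:

```python
import numpy as np
from scipy.linalg import expm

# Generator matrix of the IFRS 9 stage process, from Example 7.3.
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.3, -0.6,  0.3],
              [ 0.0,  0.0,  0.0]])

P = expm(Q)                      # one-period transition matrix, P = e^Q
print(np.round(P, 2))            # matches (103)

# k-step transition matrices are matrix powers of P.
P2 = P @ P
P3 = np.linalg.matrix_power(P, 3)
P4 = np.linalg.matrix_power(P, 4)
assert np.allclose(P.sum(axis=1), 1.0)      # rows of a stochastic matrix sum to 1
assert np.allclose(P4[2], [0.0, 0.0, 1.0])  # Stage 3 (default) is absorbing
```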
\(\triangleleft\)

\begin{table} \begin{tabular}{c c c c c c c c} Time until maturity (\(u\)) & \(EAD_{u}\) & \(\Psi^{1}(x,u)(\%)\) & \(\Psi^{2}(x,u)(\%)\) & Stage 1 \(PD_{u}^{PiT}(\%)\) & Stage 2 \(PD_{u}^{PiT}(\%)\) & \(LGD(\%)\) & \(EL_{u}\) \\ \hline 10.0 & 100 & 4.76 & 10.55 & 0.52 & 1.53 & 75 & 1.147 \\ 9.0 & 90 & 4.24 & 9.02 & 0.50 & 1.50 & 75 & 1.012 \\ 8.0 & 80 & 3.74 & 7.52 & 0.48 & 1.44 & 75 & 0.864 \\ 7.0 & 70 & 3.26 & 6.08 & 0.48 & 1.33 & 75 & 0.698 \\ 6.0 & 60 & 2.78 & 4.75 & 0.47 & 1.17 & 75 & 0.527 \\ 5.0 & 50 & 2.31 & 3.58 & 0.47 & 0.99 & 75 & 0.371 \\ 4.0 & 40 & 1.84 & 2.59 & 0.46 & 0.80 & 75 & 0.240 \\ 3.0 & 30 & 1.38 & 1.79 & 0.46 & 0.65 & 75 & 0.146 \\ 2.0 & 20 & 0.92 & 1.14 & 0.47 & 0.58 & 75 & 0.087 \\ 1.0 & 10 & 0.45 & 0.56 & 0.45 & 0.56 & 75 & 0.042 \\ \end{tabular} \end{table} Table 3: Stage 1 and 2 PDs and expected losses.

\begin{table} \begin{tabular}{c c c c c c} Time until maturity (\(u\)) & \(EAD_{u}\) & Stage 1 Provisions & \(ECL\) & Stage 3 Provisions & \(WP_{u}\) \\ \hline 10.0 & 100 & 0.390 & 5.13 & 75.00 & 15.42 \\ 9.0 & 90 & 0.338 & 3.99 & 67.50 & 25.28 \\ 8.0 & 80 & 0.288 & 2.98 & 60.00 & 30.69 \\ 7.0 & 70 & 0.247 & 2.11 & 52.50 & 31.92 \\ \end{tabular} \end{table} Table 4: Stage-weighted provision calculations for the next 4 years.

### Further Applications in credit risk modelling

#### 7.2.1 Pricing of Credit Default Swaps

Another financial field in which the PD function plays a paramount role is credit derivatives pricing. In particular, we consider the fair price of a Credit Default Swap (CDS). A default swap is a contract that protects the holder of an underlying obligation from the losses caused by the default of the obligation's issuer. Therefore, the evolution of the PD values can be used for the pricing, hedging and managing of such contracts. Extensive work has been done on modeling and pricing CDSs, such as in [18] and
Specifically, it can be shown that the price of the CDS is given by: \[CDS=(1-R)\left(-\int_{0}^{T}e^{-rs}d\Phi(x,s)\right)-c\int_{0}^{T}e^{-rs}\Phi(x,s )ds,\] and the corresponding par spread: \[c^{*}=\frac{(1-R)\Big{(}-\int_{0}^{T}e^{-rs}d\Phi(x,s)\Big{)}}{ \int_{0}^{T}e^{-rs}\Phi(x,s)ds},\] where \(R\) is the specific recovery rate and \(r\) is the risk-free rate. The above expression can be discretized as follows: \[c^{*}=\frac{(1-R)\sum_{i=1}^{n}e^{-rt_{i}}(\Phi(x,t_{i-1})-\Phi(x, t_{i}))}{\frac{1}{2}\sum_{i=1}^{n}e^{-rt_{i}}(\Phi(x,t_{i-1})+\Phi(x,t_{i})) \Delta t_{i}}, \tag{104}\] where the Trapezoidal rule has been used for the discretization of the denominator. Estimating the price and par rate of CDS therefore requires the term structure of the underlying risk-free and survival probability processes. We present a simplified example, whereby the interest rate is again considered to be zero. **Example 7.4**.: Consider a CDS with maturity \(T=10\) years and recovery rate \(R=0.5\), where the asset process evolves according to the following stochastic volatility model: \[\begin{cases}dG_{t}=k(\theta-G_{t})dt+\sigma(Y_{t})dB_{t}+\int_{ \mathbb{R}}zN(dt,dz),&G_{0}=x\\ dY_{t}=\kappa(\mu-Y_{t})dt+\xi\sqrt{Y_{t}}dW_{t},&Y_{0}=y.\end{cases}\] The parameters of the stochastic model are given in Fig. 3, we let \(\mathcal{D}=[-5.0,75.0]\) and consider a spatial and temporal discretization with \(200\) and \(1000\) steps respectively. Jumps are again normally distributed, with size \(Z\sim N(0.3,0.5)\), rate \(\lambda=1.0\) and \(J=90\). Furthermore, we set a grid with \(200\) steps for the volatility \(\mathcal{V}=[0.0,200.0]\). The parameters and resulting graph of the PD function can be seen in Figure 3. We assume an initial position of \(x=3.0\) and plot the evolution of the average survival probability in Fig. 4. The resulting par spread is calculated using (104) to obtain \(c^{*}=0.33\). 
#### 7.2.2 Credit Portfolio Optimization

For many financial institutions, one of the most important tasks is the securitization of credit exposures. Ultimately, this can be formulated as an optimization problem. The PD function affects the risk of each exposure and, by extension, the corresponding return as well. To this end, we present a simple example to show how such an optimization problem can be solved under the stochastic volatility PD model. **Example 7.5**.: Suppose a securitization agency creates a portfolio consisting of loans (or credit derivatives), each with a different underlying asset process. The resulting PD functions will differ depending on each loan's (or derivative's) characteristics and asset value process. Slightly abusing notation, we suppose that, for a portfolio of three loans, the corresponding PD functions are given by \(PD_{i},i=1,2,3\), estimated using the methodology developed above. The agency aims to select the investment allocated to each of the credit exposures. Specifically, it poses the following portfolio optimization problem: suppose \(w_{i},i=1,2,3\) and \(r_{i},i=1,2,3\) represent the weight of total investment allocated to each institution's set of loans and their average return, respectively. Consider, furthermore, that the required portfolio rate of return is set to be \(R\). For the credit exposure \(i\), at time \(t\), the expected loss is given by \(EL_{t}^{i}=EAD_{t}^{i}LGD_{t}^{i}PD_{t}^{i}\), and we can then define the total loss function for the agency by \(L(t)=\sum_{i=1}^{3}w_{i}EL_{t}^{i}\). In order to rebalance the portfolio at each period, the securitization agency is then interested in solving a portfolio optimization problem (we present a very simple such problem, which can be solved analytically, to illustrate the use of the method).
The optimization we consider is the following: \[\min_{\mathbf{w}}\mathbb{E}[U(L(t))],\text{ subject to}\] \[w_{1}r_{1}+w_{2}r_{2}+w_{3}r_{3}=R,\] \[w_{1}+w_{2}+w_{3}=1,\] for an appropriate loss function \(U\). While the following analysis can be extended to any convex loss function, for the sake of simplicity, we illustrate the calculation by selecting the quadratic loss function \(U(L)=bL^{2}-L\) (in the sense of a negative utility function). To standardize the optimization problem, we consider that \(EAD_{i}\) is given as a percentage of the original loan value and, for simplicity, we consider a constant \(LGD_{t}^{i}=1\) for \(i=1,2,3\) and all \(t\). At any point in time \(t\), the expected loss utility is then: \[\sum_{i=1}^{3}U(w_{i}EAD_{i})PD_{i}^{PiT},\] where \(PD_{i}^{PiT}\) is the current Point-in-Time default probability corresponding to exposure \(i\). The agency must optimize the portfolio by solving: \[\text{minimize }f(w_{1},w_{2},w_{3}):=\sum_{i=1}^{3}PD_{i}^{PiT}\big{(}b(w_{i}EAD_{i})^{2}-w_{i}EAD_{i}\big{)},\text{ subject to}\] \[w_{1}r_{1}+w_{2}r_{2}+w_{3}r_{3}=R,\] \[w_{1}+w_{2}+w_{3}=1.\] This simple quadratic optimization problem can now be solved either analytically or numerically. It is straightforward to calculate: \[w_{3}^{*}=\frac{PD_{1}^{PiT}(2bEAD_{1}^{2}\delta\epsilon-\epsilon)+PD_{2}^{PiT}(\gamma-2bEAD_{2}^{2}\beta\gamma)-PD_{3}^{PiT}}{2(bPD_{1}^{PiT}EAD_{1}^{2}\epsilon^{2}+bPD_{2}^{PiT}EAD_{2}^{2}\gamma^{2}+bPD_{3}^{PiT}EAD_{3}^{2})},\] where \(\beta=\frac{R-r_{1}}{r_{2}-r_{1}},\gamma=\frac{r_{3}-r_{1}}{r_{2}-r_{1}},\delta=\frac{r_{2}-R}{r_{2}-r_{1}}\) and \(\epsilon=\frac{r_{3}-r_{2}}{r_{2}-r_{1}}\).

Figure 3: (Left) Evolution of the PD under the stochastic volatility model with \((k,\theta,\kappa,\mu,\xi)=(2.0,2.0,0.05,0.1,0.07)\), for various values of the starting volatility \(Y_{0}\). (Right) The average survival probability across all volatility values.
A straightforward substitution using the two conditions will result in the corresponding values \(w_{1}^{*}\) and \(w_{2}^{*}\). We consider the above setting with average returns from each institution's instruments \(r=(r_{1},r_{2},r_{3})^{T}\) and current exposures \(EAD=(EAD_{1},EAD_{2},EAD_{3})^{T}\) given by: \[r=(0.1\;\;0.3\;\;0.1)^{T},\;\;EAD=(0.9\;\;0.8\;\;0.7)^{T}.\] In order to obtain the vector containing the PD values, we consider the three asset classes described by the processes below: \[dG_{t}^{1}=k_{1}(\theta_{1}-G_{t}^{1})dt+\sigma_{1}dB_{t}+\int_{\mathbb{R}}zN(dt,dz),\;\;x_{0}=1.00\] \[dG_{t}^{2}=k_{2}(\theta_{2}-G_{t}^{2})dt+\sigma_{2}dB_{t}+\int_{\mathbb{R}}zN(dt,dz),\;\;x_{0}=0.20\] \[dG_{t}^{3}=k_{3}(\theta_{3}-G_{t}^{3})dt+\sigma_{3}dB_{t}+\int_{\mathbb{R}}zN(dt,dz),\;\;x_{0}=0.50,\] with \((k_{1},k_{2},k_{3})=(0.5,0.8,0.5),(\theta_{1},\theta_{2},\theta_{3})=(3.5,3.0,2.5),(\sigma_{1},\sigma_{2},\sigma_{3})=(2.0,1.5,2.5)\), \(J=150,\lambda=1.0\) and jump distributions \(Z\sim N(0.0,0.2)\) for all three. Solving PIDE (62), we obtain the \(PD^{PiT}\) values: \[PD^{PiT}=(0.0783\;\;0.1447\;\;0.0447)^{T},\] and fixing the expected total return to be \(R=25.00\%\), the resulting optimal weights \(w^{*}=(w^{*}_{1}\;w^{*}_{2}\;w^{*}_{3})^{T}\) are: \[w^{*}=(0.163\;\;0.750\;\;0.087)^{T}.\] \(\triangleleft\)

Figure 4: Evolution of the average survival probability under the stochastic volatility model described in Example 7.4, with starting point \(x=0.3\).

For extensive work on portfolio optimization problems with defaultable assets, we refer the interested reader to e.g., [4]. Furthermore, empirical studies of the applicability of standard ruin probabilities in practice can be found in [15]. In the example above, we focus on a case where an agency must assess and optimize a portfolio of loan exposures with varying characteristics.
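The equality-constrained quadratic programme of Example 7.5 can also be solved numerically through its KKT system, which is linear. The sketch below uses the example's data; since the risk-aversion parameter \(b\) is not specified in the text, \(b=1\) is an assumption here, so only the constraint-determined quantities (e.g. \(w_{2}^{*}=0.75\), fixed by the two linear constraints because \(r_{1}=r_{3}\)) are checked against the example:

```python
import numpy as np

# Data from Example 7.5.
r   = np.array([0.1, 0.3, 0.1])          # average returns
ead = np.array([0.9, 0.8, 0.7])          # exposures at default
pd  = np.array([0.0783, 0.1447, 0.0447]) # Point-in-Time PDs
R, b = 0.25, 1.0                          # target return; b = 1 is an assumption

# minimize sum_i pd_i*(b*(w_i*ead_i)^2 - w_i*ead_i)  s.t.  r.w = R, sum(w) = 1.
# KKT system: [H A^T; A 0][w; lam] = [pd*ead; (R, 1)], H = diag(2*b*pd*ead^2).
H = np.diag(2.0 * b * pd * ead**2)
A = np.vstack([r, np.ones(3)])
kkt = np.block([[H, A.T], [A, np.zeros((2, 2))]])
rhs = np.concatenate([pd * ead, [R, 1.0]])
w = np.linalg.solve(kkt, rhs)[:3]

assert abs(r @ w - R) < 1e-9 and abs(w.sum() - 1.0) < 1e-9
print(np.round(w, 3))   # the middle component is 0.75, as in the example
```

For other values of \(b\) the split between \(w_{1}^{*}\) and \(w_{3}^{*}\) changes, while \(w_{2}^{*}\) remains pinned by the constraints.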
Such cases could be loans originating in different sectors; e.g., in [48], the authors consider a portfolio of risky bonds originating from the Industry and Service sector.

## 8 Conclusion

In this paper we have focused on a generalized approach to estimating PD values, accounting for both the case of variable starting times and that of variable maturities. We show that, under certain conditions imposed on the models representing the asset processes, these two cases can be dealt with equivalently and lead to important novel representations of the PD function. Specifically, with the integral equation approach, we can construct a robust mathematical framework that allows us to develop both theoretical and numerical tools for the calculation of PD values. This methodology has important advantages over standard Monte Carlo methods, as well as over existing approaches using PIDEs, as we are able to consider sophisticated models that incorporate multiple latent variables without sacrificing mathematical rigor or imposing restrictive regularity assumptions. In terms of practicality and applications, the proposed framework addresses many of the difficulties financial institutions face due to the new regulatory requirements for provision calculations, as well as continuous credit risk monitoring for SICR events. We hypothesize that this approach could be useful for practitioners, given that it constitutes a complete and efficient modelling framework with which one can calculate Point-in-Time and Lifetime PD values, each of which is used extensively in credit risk management. Specifically, this framework is motivated by the needs created by the IFRS 9 regulations, under which forecasting credit losses accurately and efficiently is of paramount importance. We show how the PD estimations can be used to calculate Stage 2 provisions, as well as more advanced, scenario-based provisions, and we outline extensive further applications in credit risk modelling.
Finally, we note that this approach is most likely best suited to corporate and small business loans, where the estimation of asset processes has been documented in well-established work. Of course, it is possible that, with new developments in payment services and Open-Banking solutions (in accordance with the Payment Services Directive 2), such methods could be applied to individual consumers, given sufficient historical data. An example of recent work in this direction is [51]. To conclude, it is important to mention that the LGD parameter is also of great importance for provision calculations; in this work we considered a constant LGD, but in practice LGD values require separate model development, often related to current macroeconomic variables, as shown in [13]. Future research could focus on considering appropriate models for the evolution of the LGD, in combination with the PD function.

## Appendix A Stochastic processes

### The continuous Ornstein-Uhlenbeck process

In its simplest form, the OU process \(X_{t}\) is defined as the stochastic process satisfying the SDE: \[dX_{t}=k(\theta-X_{t})dt+\sigma dB_{t},\ X_{s}=x, \tag{105}\] for some known \(x\), where, as above, \(B_{t}\) represents the standard Brownian motion and \(k,\theta\) and \(\sigma\) are positive real constants. The OU process is a mean-reverting, Gaussian and Markov process, which is also temporally homogeneous. We can therefore equivalently write (105) as: \[dX_{u}=k(\theta-X_{u})du+\sigma dB_{u},\ X_{0}=x, \tag{106}\] where \(u=t-s\). For simplicity, we write \(X_{t}^{x}\) to indicate the OU process with \(X_{0}=x\). We adopt this convention for all stochastic processes in the remainder of this work.
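As a quick numerical illustration of (106), the sketch below simulates the OU process with an Euler scheme and checks the well-known conditional mean \(\mathbb{E}[X_{t}]=\theta+(x-\theta)e^{-kt}\), which follows from the explicit solution derived next; all parameter values are illustrative assumptions:

```python
import numpy as np

# Illustrative OU parameters (assumptions).
k, theta, sigma, x0, t = 0.5, 3.5, 2.0, 1.0, 2.0
n_steps, n_paths = 400, 20000
dt = t / n_steps

rng = np.random.default_rng(0)
X = np.full(n_paths, x0)
for _ in range(n_steps):
    # Euler-Maruyama step for dX = k(theta - X)dt + sigma dB
    X = X + k * (theta - X) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

mean_exact = theta + (x0 - theta) * np.exp(-k * t)   # OU conditional mean
print(abs(X.mean() - mean_exact) < 0.1)              # sample mean agrees
```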
Employing Ito's formula we can obtain the solution to the above SDE: \[X_{t}=xe^{-kt}+\theta(1-e^{-kt})+\sigma\int_{0}^{t}e^{-k(t-u)}dB_{u},\] from which it is easy to see that \(X_{t}\sim N(\theta+(x-\theta)e^{-kt},\sigma^{2}(1-e^{-2kt})/2k).\) These properties are what make this particular family of processes widely used in many applications. We will also need the following results regarding the transition density and hitting time of the OU process. **Theorem A.1**.: _The transition density of the OU process, with initial condition \(X_{0}=x\), is given by:_ \[p(y,x,t)\equiv\mathbb{P}\big{(}X_{t}=y|X_{0}=x\big{)}=\sqrt{\frac{k}{\pi\sigma^{2}(1-e^{-2kt})}}\exp\Big{(}-\frac{k(y-\theta-(x-\theta)e^{-kt})^{2}}{\sigma^{2}(1-e^{-2kt})}\Big{)}. \tag{107}\] Furthermore, for the OU process as defined in (105), we define the corresponding survival probability distribution, given by \(Q(x,t):=\mathbb{P}\Big{(}\inf_{r\leq t}X_{r}^{x}>0\Big{)}\). The distribution can be obtained via appropriate Volterra equations, for which we refer the reader to [40]. We will not employ this representation explicitly; however, we will use the fact that both \(p(y,x,t)\) and \(Q(x,t)\) are continuous in \((x,t)\). More details can be found in Appendix B.

### Continuous Time Markov Chain

In this section we outline, for the reader's convenience, the background and important results pertaining to Continuous Time Markov Chains (CTMC), which are used for the regime-switching models. **Definition A.2**.: A continuous time Markov chain is a continuous-time stochastic process \(X_{t},t\geq 0\), with a discrete state space \(\mathcal{R}\), of cardinality \(|\mathcal{R}|<\infty\), satisfying the Markov property and such that: \[\mathbb{P}(X_{t+\delta}=j|X_{t}=i)=\begin{cases}q_{ij}\delta+o(\delta),\,\,\,i\neq j\\ 1+q_{ii}\delta+o(\delta),\,\,\,i=j,\end{cases} \tag{108}\] as \(\delta\downarrow 0\).
In the above \(q_{ij}\) are known as the transition rates, for which we have \(\sum_{j\in\mathcal{R}}q_{ij}=0\), \(q_{ij}\geq 0\) for \(i\neq j\). The matrix \(Q\) with entries \((Q)_{ij}=q_{ij}\), for \(i,j\in\mathcal{R}\) is known as the generator matrix of the Markov process (also referred to as the transition rate matrix). Similar to discrete time Markov chains, we can define the transition matrix for a CTMC, \(P(t),t\geq 0\), with entries: \[p_{ij}(t)=\mathbb{P}(X_{t}=j|X_{0}=i), \tag{109}\] for \(i,j\in\mathcal{R}\). The following result holds for the transition matrix, from which we are also able to obtain a connection between the transition and generator matrices. **Theorem A.3**.: _The transition matrix \(P(t)\) satisfies the Kolmogorov forward equation:_ \[P^{\prime}(t)=P(t)Q,\] _and hence:_ \[P(t)=e^{tQ}. \tag{110}\] Finally, we remind the reader that we say that a state \(i\) is transient if, given that the chain starts at \(i\), it is possible, but not certain, that the chain will return to \(i\). Equivalently, there exists a non-zero probability that the chain will never return to \(i\). On the other hand, a state \(i\) is defined as absorbing if \(\mathbb{P}(X_{t}=i|X_{0}=i)=1\), for all \(t\geq 0\), i.e. the probability of transitioning from \(i\) to any other state is zero. ### Levy processes Throughout this paper, we have adopted the notation used in [46]. We start by defining a Levy process: **Definition A.4** (Levy process).: A Levy process \(\{L_{t}\}_{t\geq 0}\) is a stochastic process for which the following conditions hold: * \(L_{0}=0\). * \(L\) has independent and stationary increments, i.e., if \(t>s\) then \(L_{t}-L_{s}\) is independent from \(L_{s}\) and \(L_{t}-L_{s}\stackrel{{ D}}{{=}}L_{t-s}\). * \(L\) is stochastically continuous, i.e for all \(\epsilon>0\) and all \(s>0\) we have \[\lim_{t\to s}\mathbb{P}(|L_{t}-L_{s}|>\epsilon)=0.\] A consequence of the above definition is the celebrated Ito-Levy decomposition. 
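Stepping back to the CTMC relation (110) above, the identity \(P(t)=e^{tQ}\) is easy to check numerically (this sketch is ours, not from the text): for a two-state generator with illustrative rates \(a,b\), a plain truncated power series for the matrix exponential can be compared against the known closed form \(p_{00}(t)=\big(b+a\,e^{-(a+b)t}\big)/(a+b)\).

```python
import math

def expm(Q, t, terms=60):
    """Truncated power series for e^{tQ}; adequate for small matrices and times."""
    n = len(Q)
    A = [[t * Q[i][j] for j in range(n)] for i in range(n)]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for m in range(1, terms):
        # term <- term @ A / m, accumulating A^m / m!
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / m for j in range(n)]
                for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

a, b, t = 0.7, 0.4, 1.3            # illustrative transition rates and horizon
Q = [[-a, a], [b, -b]]             # rows sum to zero, off-diagonals nonnegative
P = expm(Q, t)
# Closed form for the two-state chain.
p00 = (b + a * math.exp(-(a + b) * t)) / (a + b)
```

Each row of \(P(t)\) sums to one, as a stochastic matrix must, which gives a second sanity check alongside the closed form.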
First, we define the following required quantities: **Definition A.5**.: Let \(L_{t}\) be a Levy process, whose jump is defined as \(\Delta L_{t}=L_{t}-L_{t_{-}}\). Furthermore, let \(\mathbf{B}_{0}\) be the family of Borel sets \(U\subset\mathbb{R}\), whose closure does not contain \(0\). Then, for \(U\in\mathbf{B}_{0}\), define the Poisson random measure of the Levy process \(L_{t}\) by: \[N(t,U)=\sum_{0<s\leq t}\mathds{1}_{U}(\Delta L_{s}).\] The Poisson random measure represents the number of jumps of size \(\Delta L_{s}\in U\), which occur up to time \(t\). We can therefore define the intensity of the jumps as follows: **Definition A.6**.: The intensity of a Levy jump process \(L_{t}\), known as the Levy measure of \(L_{t}\), is defined as: \[\nu(U)=\mathbb{E}[N(1,U)],\] where, as above, \(U\in\mathbf{B}_{0}\). A useful consequence of the above definitions is that if \(\nu\) is the Levy measure of a simple Compound Poisson Process with rate \(\lambda\) and jump size density \(f(z)\), then we have that \[\nu(U)=\lambda\int_{U}f(z)dz.\] We will employ the following result, due to [39], in the subsequent sections: **Theorem A.7**.: _Consider the Poisson random measure \(N(t,U)\), with \(U\in\textbf{B}_{0}\), and corresponding Levy measure \(\nu(U)\). Then the process:_ \[X_{t}=\int_{0}^{t}\int_{B}zN(ds,dz),\] _where \(B\in\mathcal{B}(\mathbb{R})\), is a Compound Poisson Process with rate \(\nu(B)\) and jump distribution \(\frac{\nu(dz)|_{B}}{\nu(B)}\)._ In this paper, we will focus on jump terms in the form above. We can now present the following theorem: **Theorem A.8** (Ito-Levy decomposition).: _Let \(\{L_{t}\}_{t\geq 0}\) be a Levy process. 
Then, we have_ \[L_{t}=bt+\sigma B_{t}+\int_{|z|<1}z\tilde{N}(t,dz)+\int_{|z|\geq 1}zN(t,dz), \tag{111}\] _for \(t\geq 0\), where \(b,\sigma\in\mathbb{R}\), \(B_{t}\) is a Brownian motion and \(\tilde{N}(t,dz):=N(t,dz)-\nu(dz)t\) is the compensated Poisson measure._ More generally, we can define the stochastic process \(X_{t}\) as: \[dX_{t}=a(t)dt+\sigma(t)dB(t)+\int_{|z|<1}H(t,z)\tilde{N}(dt,dz)+\int_{|z|\geq 1 }H(t,z)N(dt,dz), \tag{112}\] known as a Levy-Ito process. Moreover, by combining the compensator with the drift term the above can be written as: \[dX_{t}=a(t)dt+\sigma(t)dB(t)+\int_{z\in\mathbb{R}}H(t,z)N(dt,dz). \tag{113}\] We will adopt this formulation throughout the remainder of this work. For such processes, we have the following results, which are extensions of the standard Ito and generator formulas. **Theorem A.9** (Ito formula).: _Let \(X_{t}\in\mathbb{R}\) be an Ito-Levy process and consider a function \(f(x,t)\), with \(f\in C^{2}(\mathbb{R}\times[0,T])\). Then, the dynamics of the process \(f(X_{t},t)\) are given by the following version of the Ito formula:_ \[df(X_{t},t)=\frac{\partial f}{\partial t}(X_{t},t)dt+\frac{ \partial f}{\partial x}(X_{t},t)\big{(}a(t)dt+\sigma(t)dB_{t}\big{)}+\frac{1}{ 2}\frac{\partial^{2}f}{\partial x^{2}}(X_{t},t)\sigma^{2}(t)dt\] \[+\int_{\mathbb{R}}\big{(}f\big{(}X_{t-}+H(t,z),t\big{)}-f(X_{t-},t)\big{)}N(dt,dz)\] **Definition A.10** (Generator).: For a Levy-Ito process, given by (113), and function \(f:\mathbb{R}\times[0,T]\rightarrow\mathbb{R}\) we define the generator \(\mathcal{L}\) by: \[\mathcal{L}f(x,t)=\lim_{h\downarrow 0}\frac{\mathbb{E}^{x}[f(X_{h},t+h)]-f(x,t)}{h}, \tag{114}\] where \(\mathbb{E}^{x}[f(X_{h},t+h)]=\mathbb{E}[f(X_{h},t+h)|X_{0}=x]\). 
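Before moving on, here is a small simulation illustrating Theorem A.7 (ours, not from the paper): a Compound Poisson Process can be generated from exponential interarrival times and i.i.d. jumps. The rate \(\lambda\), horizon \(T\) and Gaussian jump law below are illustrative assumptions; the empirical mean of \(X_{T}\) is compared with the theoretical value \(\lambda T\,\mathbb{E}[Z]\).

```python
import math
import random

def compound_poisson(lam, T, jump_sampler, rng):
    """One realization of X_T: jumps arrive at rate lam on [0, T]."""
    t, total = 0.0, 0.0
    while True:
        t += rng.expovariate(lam)      # exponential interarrival times
        if t > T:
            return total
        total += jump_sampler(rng)

rng = random.Random(7)
lam, T = 2.0, 3.0
jump = lambda r: r.gauss(1.0, 0.5)     # illustrative jump-size density f = N(1, 0.25)
vals = [compound_poisson(lam, T, jump, rng) for _ in range(4000)]
emp_mean = sum(vals) / len(vals)       # theory: lam * T * E[Z] = 2 * 3 * 1 = 6
```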
In particular, it can be shown that the generator admits the following form: \[\mathcal{L}f(x,t)=\frac{\partial f}{\partial t}+a(t)\frac{\partial f}{ \partial x}+\frac{1}{2}\sigma(t)^{2}\frac{\partial^{2}f}{\partial x^{2}}+\int _{\mathbb{R}}\big{(}f(x+z,t)-f(x,t)\big{)}\nu(dz), \tag{115}\] ## Appendix B Infinitesimal generators and PDEs for the continuous Ornstein-Uhlenbeck models In this Appendix we detail the generators for the continuous counterparts of the processes considered in this paper, built upon the continuous OU process given by (106) in Appendix A.1. Firstly, we recall that the survival probability \(Q(x,t):=\mathbb{P}\Big{(}\inf_{r\leq t}X_{r}^{x}>0\Big{)}\), as well as the corresponding transition density \(p(\cdot,x,t)\) satisfy the equation: \[\frac{\partial f}{\partial t}(x,t)=\mathcal{L}f(x,t), \tag{116}\] where \(\mathcal{L}\) represents the operator: \[\mathcal{L}f(x,t)=k(\theta-x)\frac{\partial f}{\partial x}(x,t)+\frac{1}{2} \sigma^{2}\frac{\partial^{2}f}{\partial x^{2}}(x,t). \tag{117}\] This is known as the Kolmogorov backward equation. In general, when considering the survival probabilities and transition densities under the continuous versions of the regime switching, stochastic volatility and generalized models, analogous equations to (116) are produced. These depend on the generators of the processes, which now include terms to capture the evolution of the regime and/or volatility processes. For more details on the generators of regime switching and stochastic volatility models see e.g., [31, 58]. 
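As a numerical illustration (ours, not the paper's), the backward equation (116)-(117) for the OU survival probability \(Q(x,t)\) with absorption at \(0\) can be integrated with a simple explicit finite-difference scheme; all parameter and grid choices below are illustrative assumptions, with the time step chosen to respect the stability limit \(\Delta t\,\sigma^{2}/\Delta x^{2}\leq 1\).

```python
# Explicit scheme for dQ/dt = k(theta - x) dQ/dx + 0.5 sigma^2 d2Q/dx2 on [0, x_max],
# with absorption at 0: Q(0, t) = 0, Q(x, 0) = 1 for x > 0, and Q(x_max, t) ~ 1.
k, theta, sigma = 1.0, 1.0, 0.3       # illustrative OU parameters
x_max, nx = 3.0, 61
dx = x_max / (nx - 1)
dt, n_steps = 0.01, 100               # dt * sigma^2 / dx^2 = 0.36 <= 1
xs = [i * dx for i in range(nx)]
Q = [0.0] + [1.0] * (nx - 1)          # initial condition

for _ in range(n_steps):
    new = Q[:]
    for i in range(1, nx - 1):
        drift = k * (theta - xs[i]) * (Q[i + 1] - Q[i - 1]) / (2 * dx)
        diff = 0.5 * sigma**2 * (Q[i + 1] - 2 * Q[i] + Q[i - 1]) / dx**2
        new[i] = Q[i] + dt * (drift + diff)
    new[0], new[-1] = 0.0, 1.0        # boundary conditions
    Q = new
```

The solution should stay in \([0,1]\) and increase in \(x\): starting further from the absorbing boundary makes survival more likely.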
Below we display the Kolmogorov backward equations under the continuous regime switching, stochastic volatility and generalized models, respectively: \[\frac{\partial f}{\partial t}(x,\rho,t) =\mathcal{L}_{1}f(x,\rho,t)\] \[:=k_{\rho}(\theta_{\rho}-x)\frac{\partial f}{\partial x}(x,\rho,t)+ \frac{1}{2}\sigma_{\rho}^{2}\frac{\partial^{2}f}{\partial x^{2}}(x,\rho,t)+ \sum_{j\neq\rho}q_{\rho j}\Big{(}f(x,j,t)-f(x,\rho,t)\Big{)} \tag{118}\] \[\frac{\partial f}{\partial t}(x,y,t) =\mathcal{L}_{2}f(x,y,t)\] \[:=k(\theta-x)\frac{\partial f}{\partial x}(x,y,t)+\kappa(\mu-y) \frac{\partial f}{\partial y}(x,y,t)+\frac{1}{2}y\frac{\partial^{2}f}{ \partial x^{2}}(x,y,t)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}f}{\partial y^{2} }(x,y,t)\] (119) \[\frac{\partial f}{\partial t}(x,\rho,y,t) =\mathcal{L}_{3}f(x,\rho,y,t):=k_{\rho}(\theta_{\rho}-x)\frac{ \partial f}{\partial x}(x,\rho,y,t)+\kappa(\mu-y)\frac{\partial f}{\partial y }(x,\rho,y,t)\] \[+\frac{1}{2}\sigma_{\rho}^{2}y\frac{\partial^{2}f}{\partial x^{2} }(x,\rho,y,t)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}f}{\partial y^{2}}(x,\rho, y,t)+\sum_{j\neq\rho}q_{\rho j}\Big{(}f(x,j,y,t)-f(x,\rho,y,t)\Big{)}, \tag{120}\] where we define separately the operators \(\mathcal{L}_{1},\mathcal{L}_{2}\) and \(\mathcal{L}_{3}\) for notational convenience. Note that we write the dependency on the regime as a subscript on the right hand side of the equations above. ### Regularity of solutions to parabolic PDEs We will require the following results pertaining to the regularity of solutions of the second order parabolic PDE (116). The following are due to [27]. We first define some relevant function spaces that will be required for the subsequent regularity results. **Definition B.1**.: Consider \(\Omega\subset\mathbb{R}^{n}\) an open set, with closure \(\bar{\Omega}\). Furthermore, consider a fixed time horizon \(T>0\) and define \(Q_{T}=\Omega\times[0,T]\), with closure \(\bar{Q}_{T}\). 
We then define the following spaces, for \(0<\alpha<1\): * \(C^{0}(\bar{\Omega})\) is the Banach space of bounded continuous functions in \(\bar{\Omega}\), with the natural supremum norm: \[\|\cdot\|_{C^{0}(\bar{\Omega})}\equiv\|\cdot\|_{0,\bar{\Omega}}=\sup_{\Omega} |\cdot|\] * \(C^{2,1}\left(\bar{Q}_{T}\right)\) is the Banach space of functions \(f(x,t)\) belonging to \(C^{0}\left(\bar{Q}_{T}\right)\) together with their derivatives \(\frac{\partial f}{\partial x},\frac{\partial^{2}f}{\partial x^{2}},\frac{ \partial f}{\partial t}\) in \(\bar{Q}_{T}\), with the natural norm. * \(C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)\) is the Banach space of functions \(f\) in \(C^{0}\left(\bar{Q}_{T}\right)\) which are Holder continuous in \(\bar{Q}_{T}\) with exponent \(\alpha\) in \(x\) and \(\frac{\alpha}{2}\) in \(t\), i.e. having a finite value for the seminorm \[\langle f\rangle_{\bar{Q}_{T}}^{(\alpha)}\equiv\langle f\rangle_{x,\bar{Q}_{T }}^{(\alpha)}+\langle f\rangle_{t,\bar{Q}_{T}}^{\left(\frac{\alpha}{2}\right)}\] where \[\langle f\rangle_{x,\bar{Q}_{T}}^{(\alpha)}=\inf\left\{C\geq 0:\left|f(x,t)-f \left(x^{\prime},t\right)\right|\leq C\left|x-x^{\prime}\right|^{\alpha}, \forall x,x^{\prime},t\right\}\] \[\langle f\rangle_{t,\bar{Q}_{T}}^{\left(\frac{\alpha}{2}\right)}= \inf\left\{C\geq 0:\left|f(x,t)-f\left(x,t^{\prime}\right)\right|\leq C\left|t-t^{ \prime}\right|^{\frac{\alpha}{2}},\forall x,t,t^{\prime}\right\}\] The quantity \[\left\|f\right\|_{C^{\alpha,\frac{\alpha}{2}}(\bar{Q}_{T})}\equiv\left\|f\right\|_ {\alpha,\bar{Q}_{T}}=\left\|f\right\|_{0,\bar{Q}_{T}}+\left\langle f\right\rangle _{\bar{Q}_{T}}^{(\alpha)}\] defines a norm. 
* \(C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) is the Banach space of functions \(f(x,t)\) in \(C^{2,1}\left(\bar{Q}_{T}\right)\) having a finite value for the seminorm: \[\left\langle f\right\rangle_{\bar{Q}_{T}}^{(2+\alpha)}=\left\langle\partial_{t }f\right\rangle_{\bar{Q}_{T}}^{(\alpha)}+\sum_{i,j=1}^{d}\left\langle\partial_ {ij}f\right\rangle_{\bar{Q}_{T}}^{(\alpha)}+\sum_{i=1}^{d}\left\langle \partial_{i}f\right\rangle_{t,\bar{Q}_{T}}^{\frac{1+\alpha}{2}}.\] Then, the quantity \[\left\|f\right\|_{C^{2+\alpha,\frac{2+\alpha}{2}}(\bar{Q}_{T})}\equiv\left\| f\right\|_{2+\alpha,\bar{Q}_{T}}=\sum_{2r+s\leq 2}\left\|\partial_{t}^{r} \partial_{x}^{s}f\right\|_{0,\bar{Q}_{T}}+\left\langle f\right\rangle_{\bar {Q}_{T}}^{(2+\alpha)}\] defines a norm. 
## Appendix C Kolmogorov equations for regime switching and stochastic volatility models Below we recall the Kolmogorov backward equations under the continuous regime switching, stochastic volatility and generalized models, respectively. These depend on the generators of the processes, which now include terms to capture the evolution of the regime and/or volatility processes. \[\frac{\partial f}{\partial t}(x,\rho,t)=\mathcal{L}_{1}f(x,\rho,t):=\] \[k_{\rho}(\theta_{\rho}-x)\frac{\partial f}{\partial x}(x,\rho,t) +\frac{1}{2}\sigma_{\rho}^{2}\frac{\partial^{2}f}{\partial x^{2}}(x,\rho,t)+ \sum_{j\neq\rho}q_{\rho j}\Big{(}f(x,j,t)-f(x,\rho,t)\Big{)} \tag{122}\] \[\frac{\partial f}{\partial t}(x,y,t)=\mathcal{L}_{2}f(x,y,t):=\] \[k(\theta-x)\frac{\partial f}{\partial x}(x,y,t)+\kappa(\mu-y) \frac{\partial f}{\partial y}(x,y,t)+\frac{1}{2}y\frac{\partial^{2}f}{\partial x ^{2}}(x,y,t)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}f}{\partial y^{2}}(x,y,t)\] (123) \[\frac{\partial f}{\partial t}(x,\rho,y,t)=\mathcal{L}_{3}f(x,\rho,y,t):=k_{\rho}(\theta_{\rho}-x)\frac{\partial f}{\partial x}(x,\rho,y,t)+\kappa(\mu-y)\frac{\partial f}{\partial y}(x,\rho,y,t)\] \[+\frac{1}{2}\sigma_{\rho}^{2}y\frac{\partial^{2}f}{\partial x^{ 2}}(x,\rho,y,t)+\frac{1}{2}\xi^{2}y\frac{\partial^{2}f}{\partial y^{2}}(x, \rho,y,t)+\sum_{j\neq\rho}q_{\rho j}\Big{(}f(x,j,y,t)-f(x,\rho,y,t)\Big{)}, \tag{124}\] where we define separately the generator operators \(\mathcal{L}_{1},\mathcal{L}_{2}\) and \(\mathcal{L}_{3}\), for notational convenience. For more details on the generators of regime switching and stochastic volatility models see e.g., [31, 58]. 
## Appendix D PIDEs for the PD functions in Sobolev spaces For completeness, we first recall some basic definitions pertaining to the theory of weak derivatives in Sobolev spaces. **Definition D.1**.: (**Weak derivative**) Consider an open subset \(\Omega\subset\mathbb{R}^{n}\) and the space of continuous functions which are \(k\) times continuously differentiable, for \(k=1,2,\dots\), denoted by \(C^{k}(\Omega)\) and \(L^{1}_{loc}(\Omega)\) the space of locally integrable functions. Furthermore, let \(\alpha=(\alpha_{1},\dots,\alpha_{n})\) be a multi-index, with order \(|\alpha|:=\sum_{i}\alpha_{i}\), and denote \(D^{\alpha}u\) by: \[D^{\alpha}u=\frac{\partial^{|\alpha|}u}{\partial x_{1}^{\alpha_{1}}\dots \partial x_{n}^{\alpha_{n}}}=\frac{\partial^{\alpha_{1}}}{\partial x_{1}^{ \alpha_{1}}}\dots\frac{\partial^{\alpha_{n}}}{\partial x_{n}^{\alpha_{n}}}u.\] Then, for \(f\in L^{1}_{\rm loc}(\Omega)\) we define \(u\in L^{1}_{\rm loc}(\Omega)\) to be the \(\alpha\)th weak derivative of \(f\), \(D^{\alpha}f=u\), if: \[\int_{\Omega}fD^{\alpha}\varphi dx=(-1)^{|\alpha|}\int_{\Omega}u\varphi dx,\] for every smooth test function \(\varphi\) with compact support. With \(\mathcal{Q}:=\mathcal{D}\times\mathcal{V}\times[0,T]\), we are interested in functions which are twice weakly differentiable with respect to the initial condition and once with respect to the time until maturity. Hence, we can therefore work in the Sobolev space containing all such functions \(W^{2,1}(\mathcal{Q})=\big{\{}f\in L^{1}(\mathcal{Q}):D^{\alpha}f\in L^{1}( \mathcal{Q}),|\alpha|\leqslant 2\big{\}}\). To work in this space, we also need an appropriate weak version of the Ito formula, pertaining to cases where the underlying function may not enjoy the required regularity properties; these results are given by Theorems D.2 and D.3, due to [38] and [45], respectively. 
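As a standard textbook illustration of Definition D.1 (not taken from this paper), consider \(f(x)=|x|\) on \(\Omega=(-1,1)\), which is not classically differentiable at \(0\). Its weak derivative is \(u(x)=\operatorname{sign}(x)\): for any smooth test function \(\varphi\) with compact support, integrating by parts on \((-1,0)\) and \((0,1)\) separately (the boundary terms vanish) gives \[\int_{-1}^{1}|x|\varphi^{\prime}(x)dx=\int_{-1}^{0}\varphi(x)dx-\int_{0}^{1}\varphi(x)dx=(-1)^{1}\int_{-1}^{1}\operatorname{sign}(x)\varphi(x)dx,\] which is exactly the defining identity with \(|\alpha|=1\).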
We include the results below, for completeness: **Theorem D.2**.: _Consider the stochastic process_ \[dX_{t}=a(x,t)dt+\sigma(x,t)dB_{t}\] _on a region \(\mathcal{Q}\), where \(B_{t}\) is a standard Brownian motion, and a function \(f\in W^{2,1}(\mathcal{Q})\). Moreover, let \(\tau\) be some Markov time such that \(\tau<\tau_{\mathcal{Q}}\), where \(\tau_{\mathcal{Q}}\) is the exit time of the process from the region \(\mathcal{Q}\). Then, if there exists some constant \(K\) such that \(|\sigma(x,t)|+|a(x,t)|\leq K\), for some fixed time \(s\) we have that:_ \[f(X_{\tau},s+\tau)-f(X_{t},s+t)=\int_{t}^{\tau}\frac{\partial f}{ \partial u}(X_{u},s+u)du\] \[+\int_{t}^{\tau}\frac{\partial f}{\partial x}(X_{u},s+u)\sigma(X _{u},u)dB_{u}+\frac{1}{2}\int_{t}^{\tau}\frac{\partial^{2}f(X_{u},s+u)}{ \partial x^{2}}\sigma^{2}(X_{u},u)du, \tag{125}\] _almost surely._ **Theorem D.3**.: _Consider the stochastic process with representation_ \[X_{t}=\gamma t+\int_{0}^{t}\int_{\mathbb{R}}zN(dz,du),\] _where \(\gamma\in\mathbb{R}\). Assume \(f:\mathcal{Q}\to\mathbb{R}\) is a continuous function on \(\bar{\mathcal{Q}}\) such that \(f\in L^{1}_{loc}(\mathcal{Q})\), i.e., \(f\) is locally integrable. Furthermore, assume the existence of locally bounded weak first order derivatives, as defined in Definition D.1. Then:_ \[f(X_{t},t)=f(X_{0},0)+\int_{0}^{t}\frac{\partial f}{\partial s}(X_{u},u)du+ \gamma\int_{0}^{t}\frac{\partial f}{\partial x}(X_{u},u)du+\] \[+\int_{0}^{t}\int_{\mathbb{R}}f(X_{u-}+z,u)-f(X_{u-},u)N(du,dz), \tag{126}\] _where all derivatives are understood in the weak sense._ In the Sobolev setting, we can now derive a PIDE using the common approach of appropriate martingale arguments (similar analysis has been given in e.g., [44]). 
**Lemma D.4**.: _The survival probability with a variable starting time and fixed maturity \(T\), \(\Phi(x,\rho,y,s;T)\) satisfies the partial integro-differential equation, almost surely:_ \[\frac{\partial\Phi}{\partial s}(x,\rho,y,s;T)+\mathcal{L}_{3}\Phi(x,\rho,y,s; T)+\int_{\mathbb{R}}\Big{(}\Phi(x+z,\rho,y,s;T)-\Phi(x,\rho,y,s;T)\Big{)}\nu(dz)=0, \tag{127}\] _for \((x,\rho,y,s)\in\mathcal{D}\times\mathcal{R}\times\mathcal{V}\times[0,T]\), with initial and boundary conditions:_ \[\Phi(x,\rho,y,T;T)=\mathds{1}_{\{x>0\}},\ \ (x,\rho,y)\in\mathcal{D} \times\mathcal{R}\times\mathcal{V},\] \[\Phi(0,\rho,y,s;T)=0,\ \ (\rho,y,s)\in\mathcal{R}\times\mathcal{V}\times[0,T],\] \[\Phi(x,\rho,y,s;T)\to 1,\ \text{as}\ x\to\infty,\ \ (\rho,y,s)\in\mathcal{R}\times \mathcal{V}\times[0,T],\] \[\frac{\partial\Phi}{\partial y}(x,\rho,y,s;T)=0,\ \text{as}\ y\to\infty\ \ (x,\rho,s)\in\mathcal{D}\times\mathcal{R}\times[0,T], \tag{128}\] _with the generator operator \(\mathcal{L}_{3}\) as given in (124)._ Proof.: We begin by considering the dynamics of the survival probability. As \(\Phi\) belongs to \(W^{2,1}(\mathcal{Q})\), we will employ Theorems D.2 and D.3 above. We then obtain: \[\Phi(G_{w},R_{w}, Y_{w},w)-\Phi(G_{s},\rho,y,s)=\int_{s}^{w}\Big{(}\frac{\partial\Phi}{ \partial r}(G_{r},R_{r},Y_{r},r)+\mathcal{L}_{3}\Phi(G_{r},R_{r},Y_{r},r)\Big{)}dr\] \[+\int_{s}^{w}\sigma(R_{r})\sqrt{Y_{r}}\frac{\partial\Phi}{ \partial x}(G_{r},R_{r},Y_{r},r)dB_{r}+\int_{s}^{w}\xi\sqrt{Y_{r}}\frac{ \partial\Phi}{\partial y}(G_{r},R_{r},Y_{r},r)dW_{r}\] \[+\int_{s}^{w}\int_{\mathbb{R}}\Big{(}\Phi(G_{r}+z,R_{r},Y_{r},r)- \Phi(G_{r},R_{r},Y_{r},r)\Big{)}N(dr,dz), \tag{129}\] where the derivatives are understood in the weak sense in accordance with Definition D.1. Also note that we omit the dependence on the \(t\) parameter, for brevity. 
We write the dynamics above in terms of the compensated Poisson measure \(\tilde{N}(dt,dz)=N(dt,dz)-\nu(dz)dt\). The last term then becomes: \[\int_{s}^{w}\int_{\mathbb{R}}\Big{(}\Phi(G_{r}+z,R_{r},Y_{r},r)-\Phi(G_{r},R_ {r},Y_{r},r)\Big{)}(\tilde{N}(dr,dz)+\nu(dz)dr).\] Combining with the dynamics of \(\Phi\) above and using the fact that the sum of the non-martingale quantities must be identically zero, we obtain PIDE (127), as required. The boundary conditions follow by definition of the survival probability. ## Appendix E Existence and continuity of the PD function **Theorem E.1**.: **Arzela-Ascoli**_. Let \((X,d)\) be a compact metric space and \(C(X)\) the space of continuous functions on \(X\). Then, if a sequence of continuous functions \(\{f_{n}\}_{n=1}^{\infty}\) in \(C(X)\) is bounded and equicontinuous, it has a uniformly convergent subsequence._ **Theorem E.2**.: **Schauder Fixed Point**_. Let \((X,\|\cdot\|)\) be a Banach space and let \(S\subset X\) be compact, convex, and nonempty. Any continuous operator \(A:S\to S\) has at least one fixed point._ ## Appendix F Regularity of solutions to parabolic PDEs We will require the following results pertaining to the regularity of solutions of the second order parabolic PDE (116). The following are due to [27]. 
**Theorem F.1**.: _Consider a bounded domain \(\Omega\), the operator \(L:=\frac{\partial f}{\partial t}(x,t)-\mathcal{L}f(x,t)\), where \(\mathcal{L}\) is the generator operator, and the PDE:_ \[\begin{cases}Lf=g(x,t)&\text{ for }(x,t)\in Q_{T}\\ f(x,0)=\varphi(x)&\text{ for }x\in\Omega\\ f(x,t)=\psi(x,t)&\text{ for }x\in\Sigma_{T}:=\partial\Omega\times[0,T].\end{cases} \tag{130}\] _Then, for any \(g\in C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right),\varphi\in C^{2+ \alpha}(\bar{\Omega}),\psi\in C^{2+\alpha,\frac{2+\alpha}{2}}\left(\Sigma_{T}\right)\), with \(0<\alpha<1\), (130) has a unique solution from the class \(C^{2+\alpha,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) and satisfies the inequality:_ \[\|f\|_{2+\alpha,\bar{Q}_{T}}\leq C\left(\|g\|_{\alpha,\bar{Q}_{T}}+\|\varphi \|_{2+\alpha,\bar{\Omega}}+\|\psi\|_{2+\alpha,\Sigma_{T}}\right),\] _with the constant \(C\) independent of \(f,\varphi\) and \(\psi\)._ When extending to Levy models and the corresponding integro-differential equations, we will need the following result. **Lemma F.2**.: _Consider \(f\in C^{\alpha+2,\frac{2+\alpha}{2}}\left(\bar{Q}_{T}\right)\) and the integral operator:_ \[If(x,t)=\int_{\Omega}[f(x+z,t)-f(x,t)]\nu(dz).\] _Then, for \(0<\alpha<1\), we have that:_ \[\|If\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}\leq\varepsilon\| \nabla f\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}+C(\varepsilon )\|f\|_{C^{\alpha,\frac{\alpha}{2}}\left(\bar{Q}_{T}\right)}.\] Note that Lemma F.2 is a simplified version of the corresponding results in [27], where the authors consider additional integral operators of higher orders.
2305.00455
Causalainer: Causal Explainer for Automatic Video Summarization
The goal of video summarization is to automatically shorten videos such that it conveys the overall story without losing relevant information. In many application scenarios, improper video summarization can have a large impact. For example in forensics, the quality of the generated video summary will affect an investigator's judgment while in journalism it might yield undesired bias. Because of this, modeling explainability is a key concern. One of the best ways to address the explainability challenge is to uncover the causal relations that steer the process and lead to the result. Current machine learning-based video summarization algorithms learn optimal parameters but do not uncover causal relationships. Hence, they suffer from a relative lack of explainability. In this work, a Causal Explainer, dubbed Causalainer, is proposed to address this issue. Multiple meaningful random variables and their joint distributions are introduced to characterize the behaviors of key components in the problem of video summarization. In addition, helper distributions are introduced to enhance the effectiveness of model training. In visual-textual input scenarios, the extra input can decrease the model performance. A causal semantics extractor is designed to tackle this issue by effectively distilling the mutual information from the visual and textual inputs. Experimental results on commonly used benchmarks demonstrate that the proposed method achieves state-of-the-art performance while being more explainable.
Jia-Hong Huang, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hung Chen, Marcel Worring
2023-04-30T11:42:06Z
http://arxiv.org/abs/2305.00455v1
# Causalainer: Causal Explainer for Automatic Video Summarization ###### Abstract The goal of video summarization is to automatically shorten videos such that it conveys the overall story without losing relevant information. In many application scenarios, improper video summarization can have a large impact. For example in forensics, the quality of the generated video summary will affect an investigator's judgment while in journalism it might yield undesired bias. Because of this, modeling explainability is a key concern. One of the best ways to address the explainability challenge is to uncover the causal relations that steer the process and lead to the result. Current machine learning-based video summarization algorithms learn optimal parameters but do not uncover causal relationships. Hence, they suffer from a relative lack of explainability. In this work, a Causal Explainer, dubbed Causalainer, is proposed to address this issue. Multiple meaningful random variables and their joint distributions are introduced to characterize the behaviors of key components in the problem of video summarization. In addition, helper distributions are introduced to enhance the effectiveness of model training. In visual-textual input scenarios, the extra input can decrease the model performance. A causal semantics extractor is designed to tackle this issue by effectively distilling the mutual information from the visual and textual inputs. Experimental results on commonly used benchmarks demonstrate that the proposed method achieves state-of-the-art performance while being more explainable. ## 1 Introduction Video summarization is the process of automatically generating a concise video clip that conveys the primary message or story in the original video. Various automatic video summarization algorithms have been proposed in recent years to tackle this task using different supervision schemes. 
These include fully-supervised methods that utilize visual input alone [10, 13, 35, 68, 69, 70, 71] or multi-modal input [26, 27, 42, 43, 46, 51, 54, 56, 58, 65, 72], as well as weakly-supervised methods [4, 60, 17, 13, 14, 6]. According to [10, 13, 55, 56, 27], when human experts perform the task of video summary generation, they will not only consider concrete/visual factors, e.g., visual consecutiveness and visual diversity, but also abstract/non-visual factors, such as interestingness, representativeness, and storyline smoothness. Hence, a human-generated video summary is based on many confounding factors. These factors/causes result in the video summary. Existing works do not, or in a very limited way, consider abstract factors and mainly focus on proposing various methods to exploit concrete visual cues to perform video summarization. See the illustration in Figure 1. This leads to limited modeling explainability of automatic video summarization [27, 37]. Machine learning (ML) models can be made more explainable through causation modeling based on Bayesian probability [67, 49, 48, 66, 39]. In this work, we propose a novel method for improving the inherent explainability of video summarization models called Causalainer, which is based on causation modeling. Our approach aims to address the challenge of model explainability in video summarization by leveraging the insights gained from Bayesian probability and causation modeling. See Figure 2 for the method flowchart of the proposed Causalainer. 

Figure 1: Visualization of human-annotated and machine-predicted frame-level scores for creating a video summary. Comparing the human-annotated video summary score pattern to the one generated by existing state-of-the-art video summarization methods, e.g., [26, 27, 56], we observe that these methods are capable of learning visual consecutiveness and diversity, which are key factors considered by humans for creating a good video summary. These methods mainly focus on capturing the visual cues to achieve such a purpose. Red bars denote discarded frames and grey bars indicate selected frames used to form a summary. The video has \(199\) frames and the numbers, except for \(199\), denote the indices of frames. 

To model the problem of video summarization and increase its explainability, four meaningful random variables are introduced to characterize the behaviors of the data intervention, the model's prediction, observed potential confounders, and unobserved confounders, respectively. Note that data intervention is a way to help a model learn the causal relations that lead to the result [5, 53, 11, 12, 45, 5]. A prior joint distribution and its posterior approximation can be built on top of those four random variables. The proposed method is trained based on minimizing the distance between the prior distribution and the posterior approximation. We identify that predicting the behaviors of the data intervention and the model's outcome can be challenging in practice due to various factors, e.g., video noise, lens or motion blur. We address this issue by introducing helper distributions for them. The helper distributions form a new loss term to guide the model learning. Furthermore, when multi-modal inputs are available, we identify that the extra input can sometimes harm the model performance, most likely due to the interactions between different modalities being ineffective. We address this challenge by introducing a causal semantics extractor to effectively distill the mutual information between multi-modal inputs. These novel design choices have been instrumental in improving the explainability and performance of video summarization models. 
The extensive experimentation on commonly used video summarization datasets verifies that the proposed method outperforms existing state-of-the-art while also providing greater explainability. By leveraging causal learning techniques, our approach represents a promising attempt to reinforce the causal inference ability and explainability of an ML-based video summarization model. ## 3 Methodology We now present the details of the proposed Causal Explainer method for automatic video summarization, dubbed Causalainer. First, the assumptions of causal modeling are described in detail. Secondly, we introduce four random variables \(\mathbf{y}\), \(\mathbf{t}\), \(\mathbf{X}\), and \(\mathbf{Z}\) to characterize the behaviors of the model's prediction, the data intervention, observed potential confounders, and unobserved confounders, respectively. Finally, the derivation of our training objective with helper distributions and the proposed causal semantics extractor are presented. Causalainer consists of prior and posterior probabilistic networks. See Figure 2 for an overview. **3.1 Assumptions** In general, causal learning for real-world observational studies is complicated [1, 2, 7, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 44, 45, 59, 61, 62, 63]. Building on established efforts [45, 49, 66] on causal learning under noisy interventions, two assumptions are imposed when modeling the problem of video summarization. First, the information of having visual/textual intervention \(\mathbf{t}\) or not is binary. Second, the observations \((\mathbf{X},\mathbf{t},\mathbf{y})\) from a deep neural network (DNN) are sufficient to approximately recover the joint distribution \(p(\mathbf{Z},\mathbf{X},\mathbf{t},\mathbf{y})\) of the unobserved/latent confounding variable \(\mathbf{Z}\), the observed confounding variable \(\mathbf{X}\), the intervention \(\mathbf{t}\), and the outcome \(\mathbf{y}\). 
The proposed Causalainer method is built on top of multiple probability distributions, as described in the following subsections.

### 2.2 Causal Explainer for Video Summarization

In the proposed Causalainer, \(\mathbf{x}_{i}\) denotes an input video and an optional text-based query indexed by \(i\), \(\mathbf{z}_{i}\) indicates the latent confounder, \(t_{i}\in\{0,1\}\) denotes the intervention assignment, and \(y_{i}\) indicates the outcome. **Prior Probability Distributions.** The prior network is conditioned on the latent variable \(\mathbf{z}_{i}\) and mainly consists of the following components: **(i)** The latent confounder distribution: \(p(\mathbf{z}_{i})=\prod_{z\in\mathbf{z}_{i}}\mathcal{N}(z|\mu=0,\sigma^{2}=1),\) where \(\mathcal{N}(z|\mu,\sigma^{2})\) denotes a Gaussian distribution with a random variable \(z\), \(z\) is an element of \(\mathbf{z}_{i}\), and the mean \(\mu\) and variance \(\sigma^{2}\) follow the settings in [41], i.e., \(\mu=0\), \(\sigma^{2}=1\). **(ii)** The conditional data distribution: \(p(\mathbf{x}_{i}|\mathbf{z}_{i})=\prod_{x\in\mathbf{x}_{i}}p(x|\mathbf{z}_{i}),\) where \(p(x|\mathbf{z}_{i})\) is an appropriate probability distribution with a random variable \(x\), the distribution is conditioned on \(\mathbf{z}_{i}\), and \(x\) is an element of \(\mathbf{x}_{i}\). **(iii)** The conditional intervention distribution: \(p(t_{i}|\mathbf{z}_{i})=\text{Bernoulli}(\sigma(f_{\theta_{1}}(\mathbf{z}_{i}))),\) where \(\sigma(\cdot)\) is a logistic function, \(\text{Bernoulli}(\cdot)\) indicates a Bernoulli distribution for a discrete outcome, and \(f_{\theta_{1}}(\cdot)\) denotes a neural network parameterized by the parameter \(\theta_{1}\).
**(iv)** The conditional outcome distribution: \(p(y_{i}|\mathbf{z}_{i},t_{i})=\sigma(t_{i}f_{\theta_{2}}(\mathbf{z}_{i})+(1-t _{i})f_{\theta_{3}}(\mathbf{z}_{i})),\) where \(f_{\theta_{2}}(\cdot)\) and \(f_{\theta_{3}}(\cdot)\) are neural networks parameterized by the parameters \(\theta_{2}\) and \(\theta_{3}\), respectively. In this work, \(y_{i}\) is tailored for a categorical classification problem, i.e., frame-based importance score classification in video summarization. **Posterior Probability Distribution.** Since a priori knowledge on the latent confounder does not exist, we have to marginalize over it in order to learn the model parameters \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) in **(iii)** and **(iv)**. The non-linear neural network functions make inference intractable. Hence, variational inference [41] along with the posterior network is employed. These neural networks output the parameters of a fixed-form posterior approximation over the latent variable \(\mathbf{z}\), given the observed variables. Similar to [45, 50], in this work, the proposed posterior network is conditioned on observations. Also, the true posterior over \(\mathbf{Z}\) depends on \(\mathbf{X}\), \(\mathbf{t}\) and \(\mathbf{y}\). Hence, the posterior approximation defined below is employed to build the posterior network.
\(q(\mathbf{z}_{i}|\mathbf{x}_{i},y_{i},t_{i})=\prod_{z\in\mathbf{z}_{i}} \mathcal{N}(z|\boldsymbol{\mu}_{i},\boldsymbol{\sigma}_{i}^{2}),\) where \(\boldsymbol{\mu}_{i}=t_{i}\boldsymbol{\mu}_{t=1,i}+(1-t_{i})\boldsymbol{\mu}_{t=0,i}\), \(\boldsymbol{\sigma}_{i}^{2}=t_{i}\boldsymbol{\sigma}_{t=1,i}^{2}+(1-t_{i}) \boldsymbol{\sigma}_{t=0,i}^{2}\), \(\boldsymbol{\mu}_{t=0,i}=g_{\phi_{1}}\circ g_{\phi_{0}}(\mathbf{x}_{i},y_{i})\), \(\boldsymbol{\sigma}_{t=0,i}^{2}=\sigma(g_{\phi_{2}}\circ g_{\phi_{0}}(\mathbf{x}_{i},y_{i}))\), \(\boldsymbol{\mu}_{t=1,i}=g_{\phi_{3}}\circ g_{\phi_{0}}(\mathbf{x}_{i},y_{i})\), \(\boldsymbol{\sigma}_{t=1,i}^{2}=\sigma(g_{\phi_{4}}\circ g_{\phi_{0}}(\mathbf{x}_{i},y_{i}))\), \(g_{\phi_{k}}(\cdot)\) denotes a neural network with variational parameters \(\phi_{k}\) for \(k=0,1,2,3,4\), and \(g_{\phi_{0}}(\mathbf{x}_{i},y_{i})\) is a shared representation. Note that a feature map is multiplied with the approximated posterior \(q(y_{i}|\mathbf{x}_{i},t_{i})\) without the logistic function \(\sigma(\cdot)\) to get \(g_{\phi_{0}}(\mathbf{x}_{i},y_{i})\).

Figure 2: Flowchart of the proposed Causal Explainer (Causalainer) method for video summarization. The proposed method is mainly composed of a prior network, a posterior network, helper distributions, and a causal semantics extractor. \(\otimes\) denotes element-wise multiplication and \(\times\) indicates matrix multiplication. “Token + PE” denotes the operations of token embedding and positional encoding.

### 2.3 Training Objective with Helper Distributions

In practice, various factors, e.g., video noise, motion blur, or lens blur, make the prediction of the behaviors of the data intervention and the model's outcome challenging. Therefore, two helper distributions are introduced to alleviate this issue. We have to know the intervention assignment \(\mathbf{t}\) along with its outcome \(\mathbf{y}\) before inferring the distribution over \(\mathbf{Z}\).
Hence, the helper distribution \(q(t_{i}|\mathbf{x}_{i})=\text{Bernoulli}(\sigma(g_{\phi_{5}}(\mathbf{x}_{i})))\) is introduced for the intervention assignment \(t_{i}\), and the other helper distribution \(q(y_{i}|\mathbf{x}_{i},t_{i})=\sigma(t_{i}g_{\phi_{6}}(\mathbf{x}_{i})+(1-t_ {i})g_{\phi_{7}}(\mathbf{x}_{i}))\) is introduced for the outcome \(y_{i}\), where \(g_{\phi_{k}}(\cdot)\) indicates a neural network with variational parameters \(\phi_{k}\) for \(k=5,6,7\). The introduced helper distributions benefit the prediction of \(t_{i}\) and \(y_{i}\) for new samples. To estimate the variational parameters of the distributions \(q(t_{i}|\mathbf{x}_{i})\) and \(q(y_{i}|\mathbf{x}_{i},t_{i})\), a helper objective function \(\mathcal{L}_{\text{helper}}=\sum_{i=1}^{N}[\log q(t_{i}=t_{i}^{*}|\mathbf{x}_{i}^{*})+\log q(y_{i}=y_{i}^{*}|\mathbf{x}_{i}^{*},t_{i}^{*})]\) is added to the final training objective over \(N\) data samples, where \(\mathbf{x}_{i}^{*}\), \(t_{i}^{*}\) and \(y_{i}^{*}\) are the observed values in the training set. The overall training objective \(\mathcal{L}_{\text{causal}}\) for the proposed method is defined below. \(\mathcal{L}_{\text{causal}}=\mathcal{L}_{\text{helper}}+\sum_{i=1}^{N} \mathbb{E}_{q(\mathbf{z}_{i}|\mathbf{x}_{i},t_{i},y_{i})}[\log p(\mathbf{x}_{i },t_{i}|\mathbf{z}_{i})+\log p(y_{i}|t_{i},\mathbf{z}_{i})+\log p(\mathbf{z}_{ i})-\log q(\mathbf{z}_{i}|\mathbf{x}_{i},t_{i},y_{i})]\).

### 2.4 Causal Semantics Extractor

Existing commonly used video summarization datasets, e.g., TVSum [55] and QueryVS [27], provide visual and textual inputs. Since the textual input cannot always help the model performance, owing to ineffective extraction of the mutual information between the visual and textual inputs, a causal semantics extractor is introduced to alleviate this issue. The proposed extractor is built on top of transformer blocks [57]. Vanilla transformers exploit all of the tokens in each layer for attention computation.
However, the design philosophy of the proposed causal semantics extractor, dubbed causal attention, is to use fewer but relatively informative tokens to compute attention maps, instead of the total number of tokens. According to [57], the computation of the vanilla attention matrix \(\mathscr{A}\in\mathbb{R}^{n\times n}\) is based on the dot-product. It is defined as \(\mathscr{A}=\text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}} \right);\mathbf{Q}=\mathbf{T}\mathbf{W}_{q},\mathbf{K}=\mathbf{T}\mathbf{W}_{k}\), where the query matrix \(\mathbf{Q}\in\mathbb{R}^{n\times d}\) and key matrix \(\mathbf{K}\in\mathbb{R}^{n\times d}\) are generated by the linear projection of the input token matrix \(\mathbf{T}\in\mathbb{R}^{n\times d_{m}}\), based on the learnable weight matrices \(\mathbf{W}_{q}\in\mathbb{R}^{d_{m}\times d}\) and \(\mathbf{W}_{k}\in\mathbb{R}^{d_{m}\times d}\). \(n\) indicates the total number of input tokens, \(d\) represents the embedding dimension, and \(d_{m}\) denotes the dimension of an input token. The new value matrix \(\mathbf{V}_{\text{new}}\in\mathbb{R}^{n\times d}\) can be obtained via \(\mathbf{V}_{\text{new}}=\mathscr{A}\mathbf{V};\mathbf{V}=\mathbf{T}\mathbf{W}_ {v}\), where the value matrix \(\mathbf{V}\in\mathbb{R}^{n\times d}\) and \(\mathbf{W}_{v}\in\mathbb{R}^{d_{m}\times d}\). In [57], the vanilla attention matrix is based on the calculation of all the query-key pairs. However, in the proposed Causal Semantics Extractor, only the top \(\kappa\) most similar keys and values for each query are used to compute the causal attention matrix. Similar to [57], all the queries and keys are calculated by the dot-product. Then, the row-wise top \(\kappa\) elements are used for the softmax calculation.
In the proposed Causal Semantics Extractor, the value matrix \(\mathbf{V}_{\kappa}\in\mathbb{R}^{n\times d}\) is defined as \(\mathbf{V}_{\kappa}=\text{softmax}\left(\tau_{\kappa}(\mathscr{A})\right) \mathbf{V}_{\text{new}}=\text{softmax}\left(\tau_{\kappa}\left(\frac{\mathbf{Q} \mathbf{K}^{\top}}{\sqrt{d}}\right)\right)\mathbf{V}_{\text{new}},\) where \(\tau_{\kappa}(\cdot)\) denotes an operator for the row-wise top \(\kappa\) elements selection. \(\tau_{\kappa}(\cdot)\) is defined as \([\tau_{\kappa}(\mathscr{A})]_{ij}=\begin{cases}\mathscr{A}_{ij}&,\mathscr{A}_{ ij}\in\text{top-}\kappa\text{ elements at row }i\\ -\infty&,\text{otherwise}.\end{cases}\) Then, \(\mathbf{V}_{\kappa}\) can be further used to generate \(\mathbf{X}_{\text{mul}}\), i.e., an output of the proposed Causal Semantics Extractor. The procedure for calculating \(\mathbf{X}_{\text{mul}}\) is defined below. \(Z_{\text{ta}}=\text{TextAtten}(\text{FFN}(\text{LayerNorm}(\mathbf{V}_{\kappa}))),\) where \(\text{LayerNorm}(\cdot)\) denotes a layer normalization, \(\text{FFN}(\cdot)\) indicates a feed-forward network, and \(\text{TextAtten}(\cdot)\) denotes an element-wise multiplication-based textual attention mechanism. \(Z_{\text{va}}=\text{VisualAtten}(\text{C3D}(\mathbf{I}))\), where \(\mathbf{I}\) denotes an input video, \(\text{C3D}(\cdot)\) indicates an operation of spatial-temporal feature extraction, e.g., a 3D version of ResNet-34 [15, 16], for the input video, and \(\text{VisualAtten}(\cdot)\) indicates a visual attention mechanism based on element-wise multiplication. \(\mathbf{X}_{\text{mul}}=\text{FC}(Z_{\text{ta}}\odot Z_{\text{va}}),\) where \(\odot\) denotes the operation of feature concatenation and \(\text{FC}(\cdot)\) indicates a fully connected layer. Note that the Causal Semantics Extractor's output \(\mathbf{X}_{\text{mul}}\) is an input of the proposed posterior network based on the scheme of using multi-modal inputs.
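The row-wise top-\(\kappa\) attention described above can be illustrated with a small NumPy sketch. This is a simplification under stated assumptions: the mask \(\tau_{\kappa}\) is applied directly to the raw dot-product scores before a single softmax, and the projection matrices are random stand-ins rather than learned weights.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - np.max(a, axis=axis, keepdims=True)  # numerical stability
    e = np.exp(a)
    return e / np.sum(e, axis=axis, keepdims=True)

def top_k_mask(scores, k):
    """Row-wise top-k operator tau_k: keep the k largest scores per row,
    set the rest to -inf so they vanish under the softmax."""
    masked = np.full_like(scores, -np.inf)
    idx = np.argsort(scores, axis=1)[:, -k:]      # columns of the k largest per row
    rows = np.arange(scores.shape[0])[:, None]
    masked[rows, idx] = scores[rows, idx]
    return masked

def causal_attention(T, Wq, Wk, Wv, k):
    """Sparse dot-product attention over the k most similar keys per query."""
    Q, K, V = T @ Wq, T @ Wk, T @ Wv
    d = Q.shape[1]
    scores = Q @ K.T / np.sqrt(d)                 # n x n similarity scores
    return softmax(top_k_mask(scores, k)) @ V

# toy input: n = 5 tokens of dimension d_m = 3, projected to d = 3
rng = np.random.default_rng(1)
T = rng.normal(size=(5, 3))
Wq, Wk, Wv = (rng.normal(size=(3, 3)) for _ in range(3))
```

With \(k=n\) the mask keeps every entry and the sketch reduces to vanilla attention; smaller \(k\) zeroes out \(n-k\) attention weights per row.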
Similar to the final step of video summary generation in [27], after the end-to-end training of the proposed causal video summarization model is complete, the trained model can be used for video summary generation. Finally, based on the generated score labels, a set of video frames is selected from the original input video to form a final video summary. Note that the summary budget is considered a user-defined hyper-parameter in multi-modal video summarization [27].

## 3 Experiments

### 3.1 Experimental Setup and Datasets Preparation

**Experimental Setup.** We consider three scenarios: 1) fully-supervised training with human-defined frame-level labels, 2) fully-supervised training with multi-modal input including a text-based query, and 3) weakly-supervised learning with two-second segment-level scores, which can be considered a form of weak label [3, 4, 6]. Note that [55] empirically finds that a two-second segment length is appropriate for capturing video local context with good visual coherence. Hence, in this work, a video segment-level score is produced per two seconds based on the given frame-level scores. **Video Summarization Datasets.** In the experiments, three commonly used video summarization datasets, i.e., TVSum [55], QueryVS [27], and SumMe [13], are exploited to evaluate the proposed method. The TVSum dataset contains \(50\) videos, with lengths ranging from \(2\) to \(10\) minutes and human expert frame-level importance score labels ranging from \(1\) to \(5\). The QueryVS dataset contains \(190\) videos, with lengths ranging from \(2\) to \(3\) minutes and frame-level importance score labels ranging from \(0\) to \(3\); every video is retrieved based on a given text-based query. The SumMe dataset contains \(25\) videos, with durations ranging from \(1\) to \(6\) minutes and human expert importance scores ranging from \(0\) to \(1\).
Note that SumMe is not used for multi-modal video summarization; hence, we do not have textual input when a model is evaluated on this dataset. Videos from these datasets are sampled at \(1\) frame per second (fps). The input image size is \(224\) by \(224\) with RGB channels. Every channel is normalized by standard deviation \((0.2737,0.2631,0.2601)\) and mean \((0.4280,0.4106,0.3589)\). PyTorch and an NVIDIA TITAN Xp GPU are used for the implementation and to train models for \(60\) epochs with a learning rate of \(10^{-6}\). The Adam optimizer is used [40], with hyper-parameters set as \(\epsilon=10^{-8}\), \(\beta_{1}=0.9\), and \(\beta_{2}=0.99\). **Causal Learning Dataset.** When we observe people's writing behaviors, we notice that some of them happen very often, such as synonym replacement or accidentally missing some words in a sentence. Motivated by the above, we randomly pick one of these behaviors, e.g., accidentally missing some words in a sentence, and write a textual intervention function to simulate it. Similarly, we know that when people make videos in their daily life, some visual disturbances may exist, e.g., salt-and-pepper noise, image masking, or blurring. We also randomly pick some of them, e.g., blur and salt-and-pepper noise, and build a visual intervention function to simulate them. Based on the visual and textual simulation functions, we can build our causal video summarization dataset with visual and textual interventions. The dataset is made based on the following steps. First, \(50\)% of the _(video, query)_ data pairs are randomly selected from the original training, validation, and testing sets. Secondly, for each selected video, intervention labels of \(0\) or \(1\) are randomly assigned to \(30\)% of the video frames and the corresponding queries. Note that in real-world scenarios, there are various disturbances beyond the previously mentioned visual and textual interventions that could be utilized in the proposed method.
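Intervention functions of the kind described above can be sketched as follows; the function names, the word-drop probability, and the noise fraction are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def textual_intervention(query, drop_prob=0.3):
    """Simulate 'accidentally missing some words': drop each word independently."""
    words = query.split()
    kept = [w for w in words if rng.random() >= drop_prob]
    return " ".join(kept) if kept else words[0]   # keep at least one word

def visual_intervention(frame, noise_frac=0.05):
    """Simulate salt-and-pepper noise on an HxWx3 uint8 video frame."""
    noisy = frame.copy()
    h, w = frame.shape[:2]
    n = int(noise_frac * h * w)
    ys = rng.integers(0, h, size=n)
    xs = rng.integers(0, w, size=n)
    noisy[ys, xs] = rng.choice([0, 255], size=(n, 1))  # salt or pepper per pixel
    return noisy
```

An intervention label \(t_i=1\) would then be recorded for the frames and queries these functions were applied to.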
### 3.2 Evaluation and Analysis

**Evaluation protocol.** Following existing works [13, 26, 27, 55], we evaluate the proposed method under the same setting. The TVSum, QueryVS, and SumMe datasets are each randomly divided into five splits. For each split, \(80\)% of the dataset is used for training and the remainder for evaluation. The \(F_{1}\)-score [13, 18, 37, 55] is adopted to measure the matching degree of the generated video summaries \(\mathbb{S}_{i}\) and the ground-truth video summaries \(\hat{\mathbb{S}}_{i}\) for video \(i\). **State-of-the-art comparisons.** The proposed method outperforms existing state-of-the-art (SOTA) models based on different supervision schemes, as shown in Table 1, Table 2, and Table 3. This is because the introduced causal modeling strengthens the causal inference ability of a video summarization model by uncovering the causal relations that guide the process and result. **Effectiveness analysis of the proposed causal modeling.** The proposed approach differs from existing methods by introducing causal modeling. Hence, the results in Tables 1, 2, and 3 demonstrate the effectiveness of this approach and serve as an ablation study of causal learning. An auxiliary task/distribution is a key component of the proposed approach, helping the model learn to diagnose the input to make correct inferences for the main task, i.e., video summary inference. During training, a binary causation label is provided to teach the model to perform well regardless of intervention. This implies the model has the ability to analyze the input and perform well in the main task, making it more robust. **Explainability improvement analysis.** The Causalainer method benefits modeling explainability with its associated causal graph of video summarization. Latent factors affecting video summary generation are treated as the causal effect in the proposed causal modeling.
A causal graphical model is used to approach the video summarization problem, and the modeling explainability is illustrated in Figure 3.
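The \(F_{1}\)-score used in the evaluation protocol above can be sketched as follows, under the simplifying assumption that a summary is a binary frame-selection vector (the benchmark protocols additionally involve shot-level matching).

```python
import numpy as np

def summary_f1(pred, gt):
    """F1 between a generated and a ground-truth summary, both given as
    binary vectors with 1 marking frames selected for the summary."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    overlap = np.sum(pred * gt)          # frames selected by both
    if overlap == 0:
        return 0.0
    precision = overlap / np.sum(pred)
    recall = overlap / np.sum(gt)
    return 2 * precision * recall / (precision + recall)
```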
2309.10593
Variational method for learning Quantum Channels via Stinespring Dilation on neutral atom systems
The state $|\psi(t)\rangle$ of a closed quantum system evolves under the Schr\"{o}dinger equation, where the reversible evolution of the state is described by the action of a unitary operator $U(t)$ on the initial state $|\psi_0\rangle$, i.e.\ $|\psi(t)\rangle=U(t)|\psi_0\rangle$. However, realistic quantum systems interact with their environment, resulting in non-reversible evolutions, described by Lindblad equations. The solution of these equations give rise to quantum channels $\Phi_t$ that describe the evolution of density matrices according to $\rho(t)=\Phi_t(\rho_0)$, which often results in decoherence and dephasing of the state. For many quantum experiments, the time until which measurements can be done might be limited, e.g. by experimental instability or technological constraints. However, further evolution of the state may be of interest. For instance, to determine the source of the decoherence and dephasing, or to identify the steady state of the evolution. In this work, we introduce a method to approximate a given target quantum channel by means of variationally approximating equivalent unitaries on an extended system, invoking the Stinespring dilation theorem. We report on an experimentally feasible method to extrapolate the quantum channel on discrete time steps using only data on the first time steps. Our approach heavily relies on the ability to spatially transport entangled qubits, which is unique to the neutral atom quantum computing architecture. Furthermore, the method shows promising predictive power for various non-trivial quantum channels. Lastly, a quantitative analysis is performed between gate-based and pulse-based variational quantum algorithms.
L. Y. Visser, R. J. P. T. de Keijzer, O. Tse, S. J. J. M. F. Kokkelmans
2023-09-19T13:06:44Z
http://arxiv.org/abs/2309.10593v1
# Variational method for learning Quantum Channels via Stinespring Dilation on neutral atom systems

## I Introduction

In the current noisy intermediate-scale quantum (NISQ) era [1], quantum computers suffer from noise and are limited in their computational power.
Nevertheless, these NISQ systems are useful for specific, well-designed tasks [2]. The currently predominant algorithms are variational quantum algorithms (VQAs) that aim to construct a unitary operation that minimizes a prescribed loss function. Certain VQAs have shown proof of concept for small-dimensional problems with various hardware choices of qubits [3; 4; 5; 6; 7]. Another application of NISQ-era quantum computers is quantum simulation. Here, the goal is to simulate the behavior of a closed quantum system evolving under the Schrödinger equation, by mimicking the Hamiltonian of the target system. This results in a unitary evolution that fully describes the dynamics of the system. On the other hand, many quantum systems of interest consist of a subsystem interacting with an environment, a so-called _open quantum system_, e.g. photonic devices, fermionic lattices [8], ion collisions, quark-gluon plasmas [9] or even a NISQ quantum computer [10]. This interaction leads to non-unitary evolutions described by the Lindblad equation [11] (see Eq. (1)), generally portraying dissipation in the form of dephasing and decoherence. Evolution operators associated to this equation are called quantum channels--the open system equivalent of unitary operators. In this paper, we propose a novel quantum channel VQA to approximate the evolution of these open systems, which we call the _target system_. _Approach._ Our method first considers a number of computational qubits on the quantum computer, depending on the dimensionality of the Hilbert space in which the target quantum channel is described. In order to capture the behavior of the environment, this Hilbert space is extended by introducing a number of ancilla qubits based on the Stinespring dilation theorem (see Sec. II) [12].
The method then variationally learns a unitary operator on this extended system (computational \(+\) ancilla qubits)--henceforth called the _Stinespring unitary_--based on input measurement data on the computational qubits. The Stinespring unitary is traced out over the ancilla qubits (environment) to provide an approximation of the target quantum channel, see Fig. 1.

Figure 1: Schematic representation of a quantum channel VQA using ancilla qubits. Both gate- and pulse-based methods can be used for the Stinespring unitary approximation.

By repeatedly moving away the old ancilla qubits and applying the Stinespring unitary on the computational system and new ancillas, the quantum channel can be repeated, resulting in approximations of the target quantum channel at discrete time steps. Because of its ability to coherently move around qubits [13], a neutral atom quantum computing system is especially well-suited for initializing a new set of ancilla qubits. _Relation to previous work._ Several other works have been previously performed on quantum channel approximations using quantum computers and the Stinespring dilation theorem [9; 10; 12; 14; 15; 16; 17]. Our method differs from these previous approaches in the sense that no prior knowledge is required on the Lindblad equation jump operators (see Eq. (1)) and the method fully relies on measurement data on the target system, such as can be provided by an experiment. Moreover, we introduce the inclusion of input data at multiple discrete time steps and the steady state. Lastly, ours is the first to consider a pulse-based method for quantum channel approximation. Furthermore, we also note parallels between our approach and other ancilla qubit methods for quantum channel simulation including quantum Zeno dynamics [18], unitary decomposition [19], and imaginary time evolution [8; 20]. The layout of this paper is as follows. Sec. II describes quantum channels and the Stinespring dilation theorem. Sec.
III details the newly developed quantum channel VQA. Sec. IV describes how the algorithm is readily tailored towards execution on a NISQ neutral atom quantum computing system. In Sec. V, we show initial results of our quantum channel approximation method and compare gate- and pulse-based methods.

## II Quantum channels & Stinespring dilation

A system of interest will always, albeit in a limited way, interact with its environment. This results in an open quantum system undergoing decoherence and dephasing [21] (see Fig. 2), which can no longer be represented using pure states \(|\psi\rangle\), but requires the use of density matrices \(\rho\), henceforth called states. The evolution of such an open quantum system is described by the Lindblad equation [22; 11] \[\partial_{t}\rho=-i\left[H(t),\rho\right]+\sum_{k}\gamma_{k}\left(A_{k}\rho A_{k}^{\dagger}-\frac{1}{2}\left\{A_{k}^{\dagger}A_{k},\rho\right\}\right), \tag{1}\] with \(H(t)\) the original Hamiltonian acting on the system of interest, \(A_{k}\) the jump operators with corresponding decay rates \(\gamma_{k}\), that characterize the interactions with the environment, and \([\cdot,\cdot]\) and \(\{\cdot,\cdot\}\) respectively the commutator and anti-commutator. Solutions to the Lindblad equation are described by a quantum channel \(\Phi_{t}\) acting on states as \[\rho(t)=\Phi_{t}(\rho_{0}).\] A quantum channel has the following properties [23]: * \(\Phi_{t}\) is linear; * \(\Phi_{t}\) is completely positive (i.e. \(I_{n}\otimes\Phi_{t}\) is positive for every \(n\)); * \(\Phi_{t}\) is trace preserving. Because of the high dimensionality of the Hilbert spaces on which quantum channels act, it is not computationally feasible to simulate these channels on classical computers. Therefore, we propose a method which utilizes the highly dimensional computational space inherent to quantum computers. The qubits of an ideal quantum computer form a closed system and evolve only under unitary transformations.
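As a numerical illustration of Eq. (1) and the trace-preservation property listed above, the following sketch integrates the Lindblad equation for the Rabi-plus-decay example of Fig. 2 with a simple explicit Euler scheme; the step count and the convention \(H=\tfrac{\Omega}{2}\sigma_x\) are our assumptions for the illustration.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)       # Pauli X
A = np.array([[0, 0], [1, 0]], dtype=complex)        # jump operator |1><0|

def lindblad_rhs(rho, H, jumps):
    """Right-hand side of Eq. (1); jumps is a list of (gamma_k, A_k) pairs."""
    out = -1j * (H @ rho - rho @ H)
    for gamma, Ak in jumps:
        AdA = Ak.conj().T @ Ak
        out += gamma * (Ak @ rho @ Ak.conj().T - 0.5 * (AdA @ rho + rho @ AdA))
    return out

def evolve(rho0, H, jumps, t, steps=4000):
    """Explicit Euler integration of the Lindblad equation up to time t."""
    dt = t / steps
    rho = rho0.astype(complex)
    for _ in range(steps):
        rho = rho + dt * lindblad_rhs(rho, H, jumps)
    return rho

# parameters as in Fig. 2: Omega = 0.4*pi Hz drive, gamma = 0.15 Hz via |1><0|
H = 0.5 * 0.4 * np.pi * sx
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)     # start in |0><0|
rho = evolve(rho0, H, [(0.15, A)], t=1.0)
```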
As a result, it is impossible to mimic a non-unitary quantum channel acting on a \(2^{m}\) dimensional space by using a quantum computer with only \(m\) qubits. In dilation theory, operators on a Hilbert space \(A\) are extended as projections of operators that act on a larger Hilbert space \(A\otimes B\). The new operator is then called a dilation and can have more favorable properties than the original operator [24]. For this work, we employ the Stinespring dilation theorem [25] applied to quantum channels. **Theorem** (Stinespring dilation theorem).: _Let \(A\) be a Hilbert space and \(\Phi:A\to A\) be a quantum channel. Then there exists a Hilbert space \(B\) and a unitary \(U:A\otimes B\to A\otimes B\) such that_ \[\Phi(\rho)=\mathrm{Tr}_{B}[U\cdot\rho\otimes|0\rangle_{B}\langle 0|_{B}\cdot U^{ \dagger}],\] _where \(|0\rangle_{B}\) is the pure zero state on \(B\). Furthermore, \(\dim(B)\leq\dim(A)^{2}\)._

Figure 2: Example of quantum channel behavior showing populations for single qubit Rabi oscillation with \(\Omega=0.4\pi\) Hz, \(A=|1\rangle\langle 0|\) and \(\gamma=0.15\) Hz decay.

The Hilbert space of a quantum computer has a dimensionality of \(2^{m}\), with \(m\) the number of qubits. As a result, at most \(3m\) qubits are needed to simulate a quantum channel on a \(2^{m}\) dimensional system. Moreover, it has been shown that \(\dim(B)=k\), with \(k\) the number of jump operators present in the Lindblad equation of Eq. (1) [15]. Generally, we call the qubits that live in the space \(A\) the computational qubits and the qubits living in space \(B\) the ancilla qubits. Knowledge of the unitary dilation of a quantum channel at the initial sample time \(\Delta t\) opens up the possibility of extrapolation if all underlying processes are time independent (i.e. \(H(t)=H\)).
Indeed, if the quantum channel is given by \(\Phi_{\Delta t}(\rho_{0})=\rho(\Delta t)\), then \(\Phi_{\Delta t}(\rho(t))=\rho(t+\Delta t)\) and iteratively applying \(\Phi_{\Delta t}\) gives \[\rho(n\Delta t)=\underbrace{\Phi_{\Delta t}(\Phi_{\Delta t}(\cdots\Phi_{ \Delta t}(\rho_{0})\cdots))}_{n\text{ repeated applications}}=\Phi_{n\Delta t}( \rho_{0}). \tag{2}\] Moreover, the Stinespring dilation theorem provides the existence of a unitary transformation \(U\) satisfying \[\rho(\Delta t)=\Phi_{\Delta t}(\rho_{0})=\mathrm{Tr}_{B}[U\cdot\rho_{0} \otimes|0\rangle_{B}\langle 0|_{B}\cdot U^{\dagger}].\] Consequently, \[\rho((n+1)\Delta t)=\mathrm{Tr}_{B}[U\cdot\rho(n\Delta t)\otimes|0\rangle_{B} \langle 0|_{B}\cdot U^{\dagger}],\] for all \(n\in\mathbb{N}\). Thus, if an approximation for the behavior of a quantum channel is known at a fixed time \(\Delta t\), then the approximation can be reapplied \(n\) times to get predictions on the behavior at \(n\Delta t\). When reapplying the Stinespring unitary, one cannot use the same set of ancillas, as the old ancillas have become entangled with the computational qubits by the application of the first \(U\). This means that any disturbance on the state of the old ancillas also disturbs the state of the computational qubits. This calls for a new set of ancillas to be brought in from a reservoir array, while the old set of ancillas has to be coherently moved to a storage array. In Sec. IV we detail how a neutral atom quantum computing system's ability to coherently move around entangled qubits is perfectly suited for this purpose [13].
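The dilate-and-iterate procedure can be demonstrated classically for a channel whose Stinespring unitary is known in closed form. Below, an amplitude-damping channel with decay probability \(p\) is realized exactly as in the theorem: a two-qubit unitary acts on the system plus a fresh \(|0\rangle_{B}\) ancilla, the ancilla is traced out, and the channel is re-applied as in Eq. (2). The specific unitary is a standard textbook dilation, not one taken from this paper.

```python
import numpy as np

def amplitude_damping_unitary(p):
    """Stinespring unitary of an amplitude-damping channel (decay prob p),
    acting on basis |s b> with the system qubit first, ancilla second."""
    c, s = np.sqrt(1 - p), np.sqrt(p)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]], dtype=complex)

def partial_trace_B(M):
    """Trace out the second qubit of a two-qubit operator."""
    return np.einsum('sbtb->st', M.reshape(2, 2, 2, 2))

def channel(rho, U):
    """Phi(rho) = Tr_B[ U (rho x |0><0|_B) U^dagger ]."""
    anc0 = np.array([[1, 0], [0, 0]], dtype=complex)
    return partial_trace_B(U @ np.kron(rho, anc0) @ U.conj().T)

def iterate_channel(rho0, U, n):
    """Eq. (2): apply Phi_dt n times, each time with a fresh ancilla."""
    rho = rho0
    for _ in range(n):
        rho = channel(rho, U)
    return rho
```

Starting from \(\rho_{0}=|1\rangle\langle 1|\), each application multiplies the excited-state population by \(1-p\), so \(n\) steps give \((1-p)^{n}\), mirroring \(\Phi_{n\Delta t}=\Phi_{\Delta t}^{\,n}\).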
## III Variational quantum algorithms ### Input data and loss function The goal of our method is to approximate the quantum channel \(\Phi_{\Delta t}\) by a parametrized quantum channel \(\Phi^{\prime}_{\Delta t}[\theta]\) taking the form \[\Phi^{\prime}_{\Delta t}[\theta](\rho_{0})=\mathrm{Tr}_{B}[U[\theta]\cdot\rho _{0}\otimes|0\rangle_{B}\langle 0|_{B}\cdot U^{\dagger}[\theta]],\] where the Stinespring unitary \(U[\theta]\) is constructed on a quantum computer. Variationally training for the parameters \(\theta\) requires input data and a loss function. As input data, we take \(L\in\mathbb{N}\) pairs of initial states and their \(\Delta t\)-time evolutions \(\{\rho_{l,0},\rho_{l,1}\coloneqq\Phi_{\Delta t}(\rho_{l,0})\}_{l=1}^{L}\). However, in many experiments \(\rho_{l,1}\) will not be known in full, but instead information on \(\rho_{l,1}\) is only known through measurements against an observable \(O_{l}\), giving \(\mathrm{Tr}[O_{l}\rho_{l,1}]\). Thus, the input data is defined as \(\{\rho_{l,0},\mathrm{Tr}[O_{l}\rho_{l,1}]\}_{l=1}^{L}\). Note that \(\rho_{l,0}\) can be taken identical for different observables \(O_{l}\). Based on this knowledge, the loss function can be set as \[J_{1}(U)=\sum_{l=1}^{L}\left(\mathrm{Tr}_{A}[O_{l}\rho^{\prime}_{l,1}]-\mathrm{ Tr}_{A}[O_{l}\rho_{l,1}]\right)^{2}, \tag{3}\] where \(\rho^{\prime}_{l,1}\coloneqq\Phi^{\prime}_{\Delta t}(\rho_{l,0})\). The Pauli strings are a good choice for the observables, as the set of Pauli strings forms a basis of all Hermitian operators. Thus, if \(\{O_{l}\}_{l=1}^{L}\) contains all Pauli strings for every unique \(\rho_{l,1}\), a loss of zero in Eq. (3) corresponds with identical \(\rho^{\prime}_{l,1}\) and \(\rho_{l,1}\) for all \(l\). However, if not all Pauli strings are taken into account, the states do not necessarily have to match on the missing Pauli strings. 
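The loss of Eq. (3) with Pauli-string observables can be sketched directly; the density matrices below are arbitrary single-qubit examples, not data from the paper.

```python
import numpy as np

# single-qubit Pauli observables (the m-qubit case uses tensor products of these)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def loss_J1(data):
    """J_1(U) = sum_l (Tr[O_l rho'_l] - Tr[O_l rho_l])^2, where rho'_l is the
    model's prediction Phi'_dt(rho_{l,0}) and Tr[O_l rho_l] is the measured value."""
    return sum((np.trace(O @ rho_pred).real - np.trace(O @ rho_meas).real) ** 2
               for O, rho_pred, rho_meas in data)

rho_up = np.array([[1, 0], [0, 0]], dtype=complex)    # |0><0|
rho_down = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|
```

Matching every Pauli-string expectation forces \(\rho^{\prime}_{l,1}=\rho_{l,1}\), since the Pauli strings form a basis of the Hermitian operators.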
Appendix A details how to extend our methods to input data on multiples of the time step \(\Delta t\), \(\{\rho_{l,0},\mathrm{Tr}[O_{l}\rho_{l,1}],\mathrm{Tr}[O_{l}\rho_{l,2}],...\}_{l=1}^ {L}\), i.e. repeated applications of the quantum channel \(\Phi_{\Delta t}\) based on Eq. (2). The last time on which input data is used is defined as \(t_{\mathrm{train}}\). If the steady state \(\rho_{\infty}=\Phi_{t}(\rho_{\infty}),t>0\) is known, this can also be included as input data. In the rest of this section, we describe our choice and optimization of \(\theta\) based on a gate- and pulse-based quantum state evolution method.

### Gate-based optimization

One way to train for the Stinespring unitary \(U\) is using a parametrized gate sequence. One commonly used template for such a gate sequence is the hardware-efficient ansatz [3; 26]. This sequence alternates between blocks of parametrized single qubit gates \(U_{q,j}\) executed in time \(\tau_{g}\), with \(q\) indicating the qubit and \(j\) the block, and an unparametrized entangling gate \(U_{ent}\). The easiest way to implement such an entangling gate is letting the system evolve for a time \(\tau_{V}\) under its drift Hamiltonian \(H_{d}\) (the passive evolution of the system, see App. B) such that \(U_{ent}=\exp(-iH_{d}\tau_{V})\). The final unitary \(U[\theta]\) takes the form \[U[\theta]=\left[U_{ent}\prod_{q=1}^{m}U_{q,d}(\theta)\right]\cdots\left[U_{ent }\prod_{q=1}^{m}U_{q,1}(\theta)\right],\] where the depth \(d\) of a state preparation is defined as the number of blocks in the gate sequence. The total execution time is thus \(\tau_{f,\mathrm{gate}}=d(\tau_{g}+\tau_{V})\) with \(d\) the depth of the hardware-efficient ansatz. This method is especially relevant in NISQ machines, where single qubit gates \(U_{q,j}\) can be implemented with high fidelity [27].
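A classical sketch of the hardware-efficient ansatz and its finite-difference gradient follows; the per-qubit parametrization \(\exp(-i(aX+bY+cZ))\) and the central-difference scheme are our assumptions, and the entangler is passed in as a fixed matrix rather than derived from a drift Hamiltonian.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def single_qubit_gate(a, b, c):
    """Parametrized rotation exp(-i (a X + b Y + c Z))."""
    w, V = np.linalg.eigh(a * X + b * Y + c * Z)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def ansatz(theta, U_ent):
    """Hardware-efficient ansatz: d blocks, each a layer of per-qubit
    rotations followed by the fixed entangler. theta has shape (d, m, 3)."""
    d, m, _ = theta.shape
    U = np.eye(2 ** m, dtype=complex)
    for block in theta:
        layer = np.array([[1.0]], dtype=complex)
        for q in range(m):
            layer = np.kron(layer, single_qubit_gate(*block[q]))
        U = U_ent @ layer @ U
    return U

def fd_gradient(f, theta, eps=1e-6):
    """Central finite differences; two loss evaluations per parameter
    over all 3*d*m parameters."""
    g = np.zeros(theta.shape)
    for idx in np.ndindex(*theta.shape):
        tp, tm = theta.copy(), theta.copy()
        tp[idx] += eps
        tm[idx] -= eps
        g[idx] = (f(tp) - f(tm)) / (2 * eps)
    return g
```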
If some form of control over entanglement operations is present, there is more freedom in choosing what \(U_{ent}\) should look like, and this choice influences the performance of the algorithm [28]. The parameters are optimized by gradient descent using finite differences. More sophisticated gate-based optimization methods using analytic gradients exist [29] and may be included in future work. The number of parameters for a hardware-efficient ansatz is \(3\cdot d\cdot m\). Thus, the gate-based algorithm needs \(\#\text{QE}=3\cdot d\cdot m\) evaluations of quantum states to find the gradient using finite differences. To speed up convergence, one can perform stochastic gradient descent, where only a small random subset of parameters is updated per iteration. ### Pulse-based optimization Another way of optimizing for the Stinespring unitaries is through pulse-based optimization [30; 31; 32; 33]. This method takes a more analog approach to state preparation and has the advantages of faster state preparation and higher expressibility of the Hilbert space, which are especially important factors for decoherence mitigation in the NISQ era [33]. The goal of pulse-based optimization is to solve the minimization problem \[\min_{U,\{z_{r}\}}J_{1}(U)+\frac{\lambda}{2}\sum_{r=1}^{R}\int_{0}^{\tau_{f}}|z_{r}(\tau)|^{2}d\tau,\] for a unitary \(U\) and pulses \(z_{r}\in L^{2}([0,\tau_{f}],\mathbb{C})\), which are coupled by the Schrödinger equation as \[i\partial_{\tau}U(\tau)=[H_{d}+H_{c}[z(\tau)]]U(\tau),\quad U(0)=I. \tag{4}\] Here \(\tau_{f}>0\) is the pulse end time, \(\lambda>0\) is a regularization parameter ensuring the pulses do not become unphysical or unimplementable in energy, and the Hamiltonian is split into an uncontrolled part \(H_{d}\) (the drift Hamiltonian) and a controlled part \(H_{c}[z(\tau)]\) (the control Hamiltonian), see App. B.
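Returning briefly to the gate-based variant: the finite-difference gradient, including the stochastic-subset option mentioned above, might look like the following sketch (the loss function is left abstract, and the helper name is an illustrative assumption):

```python
import numpy as np

def fd_gradient(loss, theta, eps=1e-6, subset=None, rng=None):
    """Forward finite-difference gradient of loss(theta).
    If subset is given, only that many randomly chosen parameters are
    perturbed (the stochastic variant); the rest of the gradient stays 0."""
    flat = theta.ravel()
    grad = np.zeros_like(flat)
    idx = np.arange(flat.size)
    if subset is not None:
        if rng is None:
            rng = np.random.default_rng()
        idx = rng.choice(flat.size, size=subset, replace=False)
    base = loss(theta)
    for i in idx:
        pert = flat.copy()
        pert[i] += eps
        grad[i] = (loss(pert.reshape(theta.shape)) - base) / eps
    return grad.reshape(theta.shape)
```

Each full gradient costs one loss evaluation per parameter, matching the \(\#\text{QE}=3\cdot d\cdot m\) count; the subset option trades accuracy per iteration for fewer quantum evaluations.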
We generally assume \[H_{c}[z(\tau)]=\sum_{r=1}^{R}z_{r}(\tau)Q_{r}+\overline{z_{r}(\tau)}Q_{r}^{ \dagger}, \tag{5}\] for a set of control operators \(Q_{r}\). These minimization problems are solved using the optimal control framework, as in [33], leading to the following KKT optimality conditions \[i\partial_{\tau}U(\tau)-\left(H_{d}+H_{c}[z(\tau)]\right)U(\tau) =0, \tag{6}\] \[\lambda z_{r}(\tau)+\operatorname{Tr}\left[Q_{r}^{\dagger}\left( P(\tau)U(\tau)^{\dagger}+U(\tau)P(\tau)^{\dagger}\right)\right] =0,\] \[i\partial_{\tau}P(\tau)-\left(H_{d}^{\dagger}+H_{c}[z(\tau)]^{ \dagger}\right)P(\tau) =0,\] where \(P(\tau)\) is the _adjoint process_ with boundary condition for \(P(\tau_{f})\) given by \[P(\tau_{f})= -4i\sum_{l}\operatorname{Tr}_{A}[O_{l}\left(\rho_{l,1}^{\prime}- \rho_{l,1}\right)] \tag{7}\] \[\times\left(O_{l}\otimes I_{B}\cdot U(\tau_{f})\cdot\rho_{l,0} \otimes|0\rangle\langle 0|_{B}\right).\] It is easily shown that \[P(\tau)=U(\tau)U(\tau_{f})^{\dagger}P(\tau_{f}),\] satisfies the requirements on \(P(\tau)\) and \(P(\tau_{f})\). The trace term in Eq. 
(6) can be written as \[\eta_{r}(\tau): =\operatorname{Tr}_{A\otimes B}\Bigl{[}Q_{r}^{\dagger}\Bigl{(}P( \tau)U^{\dagger}(\tau)+U(\tau)P^{\dagger}(\tau)\Bigr{)}\Bigr{]}\] \[=\sum_{l}\sum_{k=1}^{K}4i\operatorname{Tr}_{A}[O_{l,1}(\rho_{l,1 }^{\prime}-\rho_{l,1})]\] \[\times\operatorname{Tr}_{A\otimes B}\left[\tilde{\rho}(\tau) \Bigl{[}V_{k,r}^{\dagger},\,\Gamma^{\dagger}(\tau_{f},\tau)(O_{l}\otimes I_{B })\Gamma(\tau_{f},\tau)\Bigr{]}\right],\] where \(\tilde{\rho}(\tau):=U(\tau)(\rho_{l,0}\otimes|0\rangle\langle 0|_{B})U(\tau)^{\dagger}\) is the evolved state of the extended system up to time \(\tau\), \(\Gamma(\tau_{f},\tau):=U(\tau_{f})U^{\dagger}(\tau)\) describes the evolution by the pulses from \(\tau\) to \(\tau_{f}\), \([\cdot,\cdot]\) is the usual commutator, and \(V_{k,r}\) are unitaries decomposing \(Q_{r}\) as \[Q_{r}=\sum_{k=1}^{K}V_{k,r},\qquad r=1,...,R,\] where \(K\in\mathbb{N}\) is the necessary number of unitaries. For practical purposes, where the control terms work on only 1 (or occasionally 2) qubits \(K=O(1)\). These terms can be efficiently determined on a quantum computer, by first applying the pulse until time \(\tau\), performing a single gate operation \(V_{k,r}\), then applying the rest of the pulse up to \(\tau_{f}\) and finally measuring the expectation of \(O_{l}\) (cf. [33]). The pulses are iteratively updated as \[z_{r,k+1}(\tau)=z_{r,k}(\tau)-\alpha_{k}(\lambda z_{r,k}(\tau)+\eta_{r}(\tau)),\] where \(\alpha_{k}\) is a step size, in this work determined using the Armijo condition [34] and \(z_{r,0}(\tau)\) are constant zero value pulses. The pulses are discretized as equidistant piecewise constant functions with \(N\in\mathbb{N}\) steps. This results in \(\#\text{QE}=N\cdot K\cdot R\) quantum evaluations per iteration. For further details on the pulse-based methods used, we refer to [33]. 
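One iteration of the pulse update \(z_{r,k+1}=z_{r,k}-\alpha_{k}(\lambda z_{r,k}+\eta_{r})\) with Armijo backtracking for \(\alpha_{k}\) can be sketched as follows for pulses discretized on a grid. The objective `J` here is an abstract stand-in for the regularized quantum loss, and the helper name and default constants are illustrative assumptions:

```python
import numpy as np

def armijo_step(J, z, eta, lam, alpha0=1.0, c=1e-4, shrink=0.5, max_iter=20):
    """One pulse update z_{k+1} = z_k - alpha*(lam*z_k + eta), with the
    step size alpha chosen by Armijo backtracking."""
    g = lam * z + eta                      # gradient of the regularized loss
    J0 = J(z)
    slope = np.sum(np.abs(g) ** 2)
    alpha = alpha0
    for _ in range(max_iter):
        z_new = z - alpha * g
        if J(z_new) <= J0 - c * alpha * slope:   # Armijo sufficient decrease
            return z_new, alpha
        alpha *= shrink
    return z, 0.0                           # no acceptable step found
```

In the actual algorithm, \(\eta_{r}(\tau)\) is estimated on the quantum computer as described above, and `z` holds the \(R\) pulses at \(N\) grid points.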
## IV Execution on neutral atom systems Neutral atom quantum computing architectures have come up as a promising candidate for NISQ computing, reaching single qubit gate fidelities of 0.9996 [35] and two qubit gate fidelities of 0.995 [36]. In this system, the qubits are individual atoms trapped in laser optical tweezers. These tweezer sites can be moved around and rearranged using AOM techniques [37], resulting in adaptable qubit geometries. Entanglement in neutral atom systems is supplied by excitations to high-lying Rydberg states, which interact using Van der Waals interactions [38]. Electronic states of the atoms function as the qubit states. One possibility is the implementation of gg qubits, in which two (meta-)stable states are chosen as the \(\{|0\rangle,|1\rangle\}\) qubit manifold and a Rydberg state \(|r\rangle\) is seen as an auxiliary state used for entanglement. On the other hand, the (often less stable [39]) Rydberg state can take the role of the \(|1\rangle\) qubit state, which results in gr qubits [40; 41]. As mentioned in Sec. II, before preparing \(\rho(2\Delta t)\) from \(\rho(\Delta t)\), the old set of ancillas needs to be stored away such that it is isolated from the rest of the system and a new set of ancillas is required for the evolution from \(\Delta t\) to \(2\Delta t\). A neutral atom quantum computer is well suited for this quantum channel extrapolation method compared to other architectures, for three main reasons: 1. With gg qubits, after application of \(U[\theta]\) the ancilla qubits are back in the \(\{|0\rangle,|1\rangle\}\) manifold which is well isolated from its surroundings and can be kept stable for large periods of time (up to minutes for \({}^{88}\)Sr [42]). This gives rise to a viable means of creating storage and reservoir arrays. 2. Using moveable tweezers, qubits can be coherently moved around on \(\mu\)s timescales, as achieved experimentally in [13]. 3. 
The system is highly scalable in the number of qubits, as the number of tweezers scales linearly with laser power. Per evolution step \(n\Delta t\rightarrow(n+1)\Delta t\), only the computational qubits and one set of ancilla qubits need to be controlled in the quantum processing unit (QPU) when applying the Stinespring unitary, so no extra control is required when extrapolating to further time steps. An illustration of a practical implementation of the quantum channel extrapolation algorithm using a neutral atom quantum computer can be seen in Fig. 3.

Figure 3: A neutral atom implementation of a two time step, four qubit quantum channel approximation using five ancilla qubits, iteratively applying the Stinespring unitary to the computational qubits and a new set of ancilla qubits every iteration. 1) In the quantum processing unit (QPU) the Stinespring unitary is performed to evolve \(\rho_{0}\) to \(\Phi_{\Delta t}(\rho_{0})\). 2) Using movable tweezers, the entangled ancillas are deposited in the storage and new ancillas are brought in from the reservoir. 3,4) The processes of 1) and 2) are repeated to evolve \(\Phi_{\Delta t}(\rho_{0})\) to \(\Phi_{2\Delta t}(\rho_{0})\).

## V Results We apply our quantum channel VQA to several distinct target quantum channels. We explore the construction of the Stinespring unitary \(U[\theta]\) and the convergence of its partial trace towards the target quantum channel, as well as the quantum channel extrapolation for longer-time behavior. To quantitatively show the quality of this convergence, we use the Bures distance between the exact state \(\rho\) and the approximation \(\rho^{\prime}\), given by \[d_{\rm Bures}(\rho,\rho^{\prime})^{2}=2\left(1-{\rm Tr}\left[(\sqrt{\rho}\rho^{\prime}\sqrt{\rho})^{\frac{1}{2}}\right]\right)\leq 2,\] and take the average over the evolution of 10 different initial states, which are not in the training set.
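The Bures distance above can be computed directly from matrix square roots; a minimal sketch:

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_distance(rho, rho_p):
    """Bures distance; d^2 = 2*(1 - Tr[(sqrt(rho) rho' sqrt(rho))^{1/2}]) <= 2."""
    s = sqrtm(rho)
    fid = np.real(np.trace(sqrtm(s @ rho_p @ s)))
    return np.sqrt(max(2.0 * (1.0 - fid), 0.0))
```

The distance vanishes for identical states and approaches \(\sqrt{2}\) for orthogonal pure states.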
The arbitrary timescale of the target quantum channel (tqc) is \(T_{\rm tqc}\) and is independent of the state preparation timescale \(\tau_{f}\). In the results in this section, we will train on all Pauli strings. Thus, every initial state \(\rho_{0,l}\) is measured against every Pauli string. Furthermore, we will always use Van der Waals (VdW) interactions between the qubits with \(R=0.85\,\mu\)m between nearest neighbor qubits and \(C_{6}=0.037\,{\rm kHz}/\mu{\rm m}^{6}\) such that \(V=0.1\,{\rm kHz}\) between nearest neighbors (cf. Appendix B). For all considered target quantum channels except the 1 qubit case, no analytic solution is known. Thus, we approximate the exact solution by a high-accuracy numerical solver for the error analysis. The pulses are discretized as equidistant step functions with \(N=100\) steps. ### 1 Qubit decay The evolution of a single qubit decaying from \(\ket{1}\) to \(\ket{0}\) with rate \(\gamma_{\rm tqc}\) and undergoing Rabi oscillations with frequency \(\Omega_{\rm tqc}\) is given by the Lindbladian, as in Eq. (1), with \[H=\frac{\Omega}{2}\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\ A_{0}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix},\ \rho(0)=\begin{pmatrix}\rho_{00}&\rho_{01}\\ \overline{\rho_{01}}&1-\rho_{00}\end{pmatrix}.\] The dynamics of this system allow for the state to be described analytically (cf. App. C), allowing for exact calculations of the errors of the quantum channel approximation. By the Stinespring dilation theorem (cf. Sec. II), at most two ancilla qubits are necessary to approximate this target quantum channel. We want to approximate single qubit decay with \(\gamma_{\rm tqc}=\Omega_{\rm tqc}=0.5\), in units of \(1/T_{\rm tqc}\). From Fig. 4, we see that after minimizing the loss function (cf. Eq. (3)), the quantum channel approximation becomes extremely accurate. Furthermore, the pulses retrieved are smooth and implementable on an actual quantum computing system. 
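The single-qubit decay dynamics above can be integrated numerically for reference. This is a sketch of the Lindblad form of Eq. (1) with the single jump operator \(A_{0}\); it stands in for the high-accuracy reference solver, not the VQA itself:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lindblad_rhs(t, rho_vec, H, A, gamma):
    """RHS of drho/dt = -i[H, rho] + gamma*(A rho A^† - {A^†A, rho}/2)."""
    rho = rho_vec.reshape(2, 2)
    comm = -1j * (H @ rho - rho @ H)
    AdA = A.conj().T @ A
    diss = gamma * (A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA))
    return (comm + diss).ravel()

# Single-qubit decay with Rabi drive, gamma = Omega = 0.5 (units of 1/T_tqc)
Omega, gamma = 0.5, 0.5
H = 0.5 * Omega * np.array([[0, 1], [1, 0]], dtype=complex)
A = np.array([[0, 1], [0, 0]], dtype=complex)      # |1> -> |0> jump operator
rho0 = np.array([[0, 0], [0, 1]], dtype=complex)   # start in |1>
sol = solve_ivp(lindblad_rhs, (0.0, 20.0), rho0.ravel(),
                args=(H, A, gamma), rtol=1e-8, atol=1e-10)
rho_T = sol.y[:, -1].reshape(2, 2)
```

The integration preserves the trace and Hermiticity of \(\rho\), and the excited-state population relaxes toward its driven steady-state value below \(1/2\).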
Note that because of spatial symmetry between the two ancilla qubits and the computational qubit, the optimal pulses on the ancilla qubits are equal. The Bures error averaged over 10 new initial states on the first time step is \(8.3\cdot 10^{-7}\) and rises up to \(5.1\cdot 10^{-6}\) after 9 reapplications of the unitary circuit. From the figure and the evolution of the average Bures error, we see that the behavior of the quantum channel can be accurately predicted for longer times without the error increasing significantly. This is especially interesting as no knowledge on the system after \(t_{\rm train}\) is assumed. ### 2 Qubit decay As a more complicated case, we consider a quantum channel describing two decaying, and interacting qubits. By the Stinespring dilation theorem (cf. Sec. II), we know that we need a maximum of four ancilla qubits. However, as we only have two decay terms, we only need two additional ancilla qubits. Nevertheless, to open up the search space, we choose to implement three ancilla qubits. For the target quantum channel, we take \(\gamma_{1,\rm tqc}=0.5\), \(\Omega_{1,\rm tqc}=0.3\), \(\gamma_{2,\rm tqc}=0.3\), \(\Omega_{2,\rm tqc}=0.2\), all in units of \(1/T_{\rm tqc}\), to ensure that there is no symmetry between the qubits. Furthermore, their interaction strength is \(V_{\rm tqc}=0.1\left[1/T_{\rm tqc}\right]\). All of this leads to non-trivial behavior over the populations (see solid lines in Fig. 5a). Figure 4: Quantum channel approximations for a target quantum channel describing single qubit decay with \(\gamma_{\rm tqc}=0.5\) and \(\Omega_{\rm tqc}=0.5\), both in units of \(1/T_{\rm tqc}\). The algorithm is trained on 10 randomly sampled initial states combined with all 4 Pauli strings to give \(L=40\). Two time steps of training are used, such that we have \(t_{\rm train}=2\Delta t\). Two ancilla qubits are taken, with all qubits positioned in an equilateral triangle. 
(a) Exact, approximated and steady state populations (with errors) for a single state not in the training data set. (b) Convergence of error \(J_{1}(U)\) together with the final pulses on all qubits. From Fig. 5, we see that learning the target quantum channel becomes harder, as the Pauli trace errors remain larger than in the single qubit results of Sec. V. One can clearly see this in the population figure, as the populations are no longer precisely predicted. Furthermore, the average Bures error on the first time step is \(9.0\cdot 10^{-4}\), which is significantly higher than for the single qubit target quantum channel. Despite this, it is remarkable to see that the qualitative behavior of the evolution is well predicted, even far beyond the training time \(t_{\text{train}}\). ### Transverse Field Ising model We analyze the transverse field Ising model (TFIM) with decay as a use case of our method outside quantum computing [43]. In this model, spins interact with a transverse magnetic field as well as with their nearest neighbors. For spins aligned in a straight line, the TFIM Hamiltonian takes the form \[H=-B_{\text{tqc}}\sum_{i}X_{i}+J_{\text{tqc}}\sum_{i}Z_{i}Z_{i+1},\] with \(Z_{i}\) and \(X_{i}\) respectively the Pauli \(Z\)-operator and Pauli \(X\)-operator on qubit \(i\), \(J_{\text{tqc}}\) the coupling strength and \(B_{\text{tqc}}\) the transverse external field strength, both in units of \(1/T_{\text{tqc}}\). Fig. 6 shows results for a 2 spin TFIM. The average Bures error on the first time step is \(2.3\cdot 10^{-3}\). Similar to the 2 qubit case of Fig. 5, we see increasing errors for higher extrapolation times, but good prediction of the general behavioral trend of the evolution. This shows that the algorithm can properly handle target quantum channels which are not related to the system they are approximated on. Figure 5: Quantum channel approximations for a quantum channel describing two decaying qubits with interaction.
The parameters of the target quantum channel are \(\gamma_{1,tqc}=0.5\) and \(\Omega_{1,tqc}=0.3\) for qubit 1, \(\gamma_{2,tqc}=0.3\) and \(\Omega_{2,tqc}=0.2\) for qubit 2 and an interaction strength of \(V_{tqc}=0.1\), all in units of \(1/T_{\text{tqc}}\). The algorithm is trained on 20 randomly sampled initial states combined with all 16 Pauli strings to give \(L=320\). Three ancilla qubits are taken so that all qubits are positioned as two equilateral triangles in an “M” shape. (a) Exact, approximated and steady state populations (with errors) for a single state not included in the training data. (b) Convergence of error \(J_{1}(U)\) together with the final pulses on all qubits. Figure 6: Quantum channel approximations for a 2 qubit TFIM model with \(B_{\text{tqc}}=0.5\), \(J_{\text{tqc}}=0.4\), \(\gamma_{0,\text{tqc}}=0.5\), and \(\gamma_{1,\text{tqc}}=0.3\), in units of \(1/T_{\text{tqc}}\). For the approximating qubit system, we take the same parameters and training setup as in Fig. 5. (a) Exact, approximated and steady state populations (with errors) for a single state not included in the training data set. (b) Convergence of error \(J_{1}(U)\) together with the final pulses on all qubits. ### Pulse-based vs. gate-based In the current NISQ-era, gate-based optimization methods are most widely implemented. In recent works, it has been suggested that stochastic and pulse-based methods can lead to higher expressibility and faster convergence of the optimization [32, 33, 44, 45]. For our algorithm, we compare gate-, stochastic gate-, and pulse-based methods using the notion of equivalent evolution processes [33], which considers that for all methods, the system evolves under the most similar circumstances. Concretely, this means all methods are run on a quantum computation system with similar controls, entanglement operations and evolution times. 
We consider control over the coupling strength \(\Omega\) for the pulse-based method, and a hardware-efficient ansatz with \(ZXZ\) parametrized gates for the gate-based method, both resulting in full rotational control of the Bloch sphere [33]. Furthermore, we consider a VdW drift Hamiltonian \(H_{d}\) and entanglement gates \(U_{ent}=\exp(-iH_{d}\tau_{V})\) for the gate-based method. We assume \(V=0.07\) kHz with \(R=0.9\,\mu\)m and \(C_{6}=0.037\) kHz/\(\mu\)m\({}^{6}\). In order to supply enough time for entanglement, we take \(\tau_{V}=1/V=10\) ms and take Rabi frequencies on the order of 1 kHz to get \(\tau_{g}=1\) ms. To get similar evolutions, we take \(\tau_{f,\text{pulse}}=\tau_{f,\text{gate}}\), which completes the construction of the equivalent evolution processes. To add stochasticity to the gate-based optimization, we uniformly select a random subset of gates whose parameters are updated in each iteration, instead of optimizing the entire gate set. In Fig. 7 we compare the average Bures distances over time for the unitary approximations based on a gate-based, a stochastic gate-based and a pulse-based method, each trained for the same number of #QE. The target quantum channel is again a decay model on two qubits with \(\gamma_{1,tqc}=0.3\) and \(\Omega_{1,tqc}=0.5\) for qubit 1, \(\gamma_{2,tqc}=0.2\) and \(\Omega_{2,tqc}=0.35\) for qubit 2 and an interaction strength of \(V_{tqc}=0.2\), all in units of \(1/T_{\text{tqc}}\). The pulse-based method and the stochastic gate-based method perform very similarly, while both outperform the gate-based method. We hypothesize that the pulse-based method can be improved by adding stochasticity as well. Note that this comparison was done on a data set where all three methods went through a gradient descent. In our simulations, we find that for many training data sets, the gate-based methods had difficulty finding good local minima of the loss function.
## VI Conclusion In this work, we introduce an algorithm for quantum channel approximation and extrapolation. This method differentiates itself from previous work by being able to approximate the quantum channel based purely on measurement data, and the inclusion of multiple time steps plus steady state behavior. The method variationally learns the Stinespring unitary describing the quantum channel at a fixed time \(\Delta t\), either through a gate- or pulse-based method. This approximation can later be extrapolated to make future time predictions for the quantum channel behavior at discrete multiples of \(\Delta t\). The method has, through simulations, shown proof of concept for non-trivial target quantum channels based on NISQ qubits and Ising models. An analysis between gate- and pulse-based implementations of the algorithm, using equivalent evolution processes, has shown that adding stochasticity or switching to a pulse-based method is beneficial for approximating quantum channels given small state preparation times, which is especially important for mitigating decoherence in the NISQ era. Furthermore, it is reasoned that a neutral atom quantum computing system is very well-suited for executing this algorithm. The algorithm requires numerous ancilla qubits that need to be coherently moved and stored without being controlled, which neutral atom systems are adapted for given their high scalability, long coherence times, and modifiable qubit topologies. In future work, it would be of interest to split the approximation of the quantum channel into a part that is unitary on the computational qubits and a part that is a dilation, as in [9]. By separating these two contributions, the model could potentially become less complex and better suited for cases where the unitary evolution and decoherence act on different timescales. Another direction of improvement would be in using prior information on the quantum channel to improve convergence. 
We hypothesize that by knowing the Kraus decomposition [11] or steady state, it could be possible to construct a set of observables and initial states that provides more information on the channel's behavior. Figure 7: Comparison of the evolution over time of the Bures error for a gate-based, a stochastic gate-based, and a pulse-based approach. The target quantum channel is a decay model on two qubits with \(\gamma_{1,tqc}=0.3\) and \(\Omega_{1,tqc}=0.5\) for qubit 1, \(\gamma_{2,tqc}=0.2\) and \(\Omega_{2,tqc}=0.35\) for qubit 2 and an interaction strength of \(V_{tqc}=0.2\), all in units of \(1/T_{\text{tqc}}\). The Bures error is the average of the predictions on 10 new states. Training done using the notion of equivalent evolutions and for the same number of quantum evaluations #QE. ## Acknowledgements We thank Jasper Postema and Jurgen Snijders for fruitful discussions. This research is financially supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL program, and by the Netherlands Organisation for Scientific Research (NWO) under Grant No. 680.92.18.05. ## Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.11855
Quantum Locking of Intrinsic Spin Squeezed State in Earth-field-range Magnetometry
In the Earth-field range, the nonlinear Zeeman (NLZ) effect has been a bottleneck limiting the sensitivity and accuracy of atomic magnetometry from physical mechanism. To break this bottleneck, various techniques are introduced to suppress the NLZ effect. Here we revisit the spin dynamics in the Earth-field-range magnetometry and identify the existence of the intrinsic spin squeezed state (SSS) generated from the geomagnetically induced NLZ effect with the oscillating squeezing degree and squeezing axis. Such oscillating features of the SSS prevent its direct observation and as well, accessibility to magnetic sensing. To exploit quantum advantage of the intrinsic SSS in the Earth-field-range magnetometry, it's essential to lock the oscillating SSS to a persistent one. Hence, we develop a quantum locking technique to achieve a persistent SSS, benefiting from which the sensitivity of the Earth-field-range magnetometer is quantum-enhanced. This work presents an innovative way turning the drawback of NLZ effect into the quantum advantage and opens a new access to quantum-enhanced magnetometry in the Earth-field range.
Peiyu Yang, Guzhi Bao, Jun Chen, Wei Du, Jinxian Guo, Weiping Zhang
2023-09-21T07:57:07Z
http://arxiv.org/abs/2309.11855v1
# Quantum Locking of Intrinsic Spin Squeezed State in Earth-field-range Magnetometry ###### Abstract In the Earth-field range, the nonlinear Zeeman (NLZ) effect has been a bottleneck limiting the sensitivity and accuracy of atomic magnetometry from physical mechanism. To break this bottleneck, various techniques are introduced to suppress the NLZ effect. Here we revisit the spin dynamics in the Earth-field-range magnetometry and identify the existence of the intrinsic spin squeezed state (SSS) generated from the geomagnetically induced NLZ effect with the oscillating squeezing degree and squeezing axis. Such oscillating features of the SSS prevent its direct observation and as well, accessibility to magnetic sensing. To exploit quantum advantage of the intrinsic SSS in the Earth-field-range magnetometry, it's essential to lock the oscillating SSS to a persistent one. Hence we develop a quantum locking technique to achieve a persistent SSS, benefiting from which the sensitivity of the Earth-field-range magnetometer is quantum-enhanced. This work presents an innovative way turning the drawback of NLZ effect into the quantum advantage and opens a new access to quantum-enhanced magnetometry in the Earth-field range. _Introduction_.--Sensitive measurements of magnetic fields in the Earth-field range is crucial to various real-world applications, including geological survey [1, 2, 3, 4, 5], biomedicine [6, 7, 8, 9, 10], fundamental physics experiments [11, 12, 13, 14, 15], and magnetic navigation [16, 17, 18, 19]. Alkali-metal atomic magnetometers are the outstanding candidates for such missions because of their high sensitivity, reaching the \(\mathrm{fT}/\sqrt{\mathrm{Hz}}\)[20, 21, 22]. 
However, in the Earth-field range (\(50\,\mu\mathrm{T}\)), the NLZ effect produces splittings and asymmetries of magnetic-resonance lines, leading to signal reduction and heading errors, and has been a well-known bottleneck limiting the sensitivity and accuracy of atomic magnetometry from the physical mechanism [6; 23]. To tackle this bottleneck, the intuitive approach is to eliminate the NLZ effect while maintaining the spin coherence. Such an operation renders an Earth-field-range magnetometer with a sensitivity at the standard quantum limit (SQL) [23]. Here \(\omega=(\mu_{B}B)^{2}/(4\hbar\Delta)\), where \(\mu_{B}\) is the Bohr magneton, \(B\) is the magnetic field intensity, and \(\Delta\) is the hyperfine-structure energy splitting (more detail is discussed in [39], Appendix C). The squeezing angle \(\nu\), defined as the angle between the optimal squeezing axis and the \(\hat{x}\) axis, is calculated as \(\nu=\pi/2-1/2\arctan(Y/X)\), where \(X=1-\cos^{2F-2}(2\omega t)\) and \(Y=4\sin(\omega t)\cos^{2F-2}(\omega t)\). The optimal squeezing axis of the SSS varies at different times. The green line in Fig.
1(a) shows the minimum noise along the optimal squeezing axis at different times, illustrating that the squeezing degree changes over time as well. To employ this SSS in quantum metrology, it is essential to match the angular momentum component with minimum noise to the measurement quantity. In an atomic magnetometer, the measurement quantity is the Larmor precession frequency, which is typically measured through finding the time between two zero crossings of a collective of oscillating spin component [40]. The signal and noise of the corresponding magnetic measurement are scaled as \(\langle\hat{\vec{F}}_{y}\rangle^{2}\) and \(\text{Var}(\hat{\vec{F}}_{x})\), respectively. Here \(\hat{\vec{F}}_{x}=\sum\hat{\vec{f}}_{x}\) and \(\vec{F}_{y}=\sum\hat{\vec{f}}_{y}\) are collective angular momentum components of the atomic ensemble. The red line in Fig. 1(a) shows the \(\text{Var}(\hat{\vec{F}}_{x})\) with NLZ effect, which always exceeds the fluctuation of CSS. That is because the optimal squeezing axis of this type of SSS is not along the observable \(\hat{\vec{F}}_{x}\). We can rotate the SSS along the -\(\hat{y}\) axis by \(\nu\) to align the optimal squeezing axis with the axis of \(\hat{\vec{F}}_{x}\), thereby ensuring that \(\hat{\vec{F}}_{x}\) has the minimum noise. After the rotation operation, we calculate the signal-to-noise-ratio (SNR) of the magnetic measurement with \(N\) atoms \[\begin{split}\text{SNR}_{\text{S}}&=\frac{\langle \hat{\vec{F}}_{y}\rangle^{2}}{\text{Var}(\hat{\vec{F}}_{x})}\\ &=\frac{2NF(\cos\omega t)^{2(2F-1)}}{1+\frac{1}{2}(F-\frac{1}{2}) (X-\sqrt{X^{2}+Y^{2}})},\end{split} \tag{2}\] where \(N\) is the atom number. The SNR for the CSS injection is \(\text{SNR}_{\text{C}}=2NF\) for the spin-\(F\) system. 
In order to intuitively represent the advantage of the SSS injection, we define the ratio of the SNR with the SSS to that with the CSS as the metrological gain \(g=\text{SNR}_{\text{S}}/\text{SNR}_{\text{C}}=(\cos\omega t)^{2(2F-1)}/[1+(1/2)(F-1/2)(X-\sqrt{X^{2}+Y^{2}})]\)[41]. Since the spin squeezing arises from a single-atom effect, the metrological gain is independent of \(N\). For an \(F=2\) system, the metrological gain is shown in Fig. 1(b). The maximum metrological gain, reached at the optimal squeezing point, approaches \(2.9\,\text{dB}\) at \(\omega t=0.34\). Note that the maximum gain [Fig. 1(b)] does not occur at the minimum noise [Fig. 1(a)], because the signal and the noise have different time dependencies. _The scalability of spin squeezing.--_ The maximum metrological gain \(g_{m}\) with respect to \(\omega t\) scales with the quantum number of total angular momentum \(F\) for a single atom. Figure 2 plots \(g_{m}\) as a function of \(F\). Relatively large values of \(F\) available are \(F=4\) in the ground state of Cs and \(F=12.5\) in a metastable state of Dy [42], corresponding to maximum metrological gains of \(4.5\,\text{dB}\) and \(7.7\,\text{dB}\), respectively, which are comparable to the reported ones with multi-atom collective spin squeezing [43; 44]. The maximum metrological gain can scale up even higher with larger \(F\), such as in Rydberg atoms and in molecules [5]. _Quantum spin locking.--_ The SSS produced by the NLZ effect could help improve the sensitivity of atomic magnetometry. However, because the NLZ effect always exists, the SSS keeps evolving. To lock the SSS at the optimal squeezing point [the star in Fig. 1(b)], we develop the quantum locking method, which is a machine-learning-assisted DD technique. We set the wavefunction of the SSS to be locked as \(|\Psi(0)\rangle\).
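The metrological gain defined above is a closed-form function of \(F\) and \(\omega t\); a minimal numerical sketch (valid while \(\cos\omega t\) and \(\cos 2\omega t\) stay non-negative), which reproduces the quoted \(2.9\,\text{dB}\) maximum for \(F=2\) near \(\omega t=0.34\):

```python
import numpy as np

def metrological_gain(F, wt):
    """g = SNR_SSS / SNR_CSS from Eq. (2), with X and Y as defined earlier."""
    X = 1.0 - np.cos(2.0 * wt) ** (2 * F - 2)
    Y = 4.0 * np.sin(wt) * np.cos(wt) ** (2 * F - 2)
    num = np.cos(wt) ** (2 * (2 * F - 1))
    den = 1.0 + 0.5 * (F - 0.5) * (X - np.sqrt(X ** 2 + Y ** 2))
    return num / den
```

At \(\omega t=0\) the gain is exactly 1 (no squeezing yet), and scanning \(\omega t\) locates the optimal squeezing point.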
Figure 1: (a) The noise of the spin projections along the optimal squeezing axis (green line) and the noise of \(\hat{F}_{x}\) (red line) at different times. The dashed line is the noise of the CSS, \(F/2\) for a single atom. As the spin squeezing arises from the single-atom effect, the atom number \(N\) is set to \(1\) in the simulation. (b) The metrological gain along the optimal squeezing axis at different times. The red star denotes the optimal squeezing point to be locked.

Figure 2: The maximum metrological gain \(g_{m}\) varies with \(F\).

Figure 3 shows the principle of the quantum spin locking scheme in the form of the real-time fidelity \(\mathcal{F}_{r}=\left(\mathrm{Tr}\sqrt{\sqrt{\rho_{0}}\rho_{t}\sqrt{\rho_{0}}}\right)^{2}\) between the SSS and the state at time \(t\). Here \(\rho_{0}=|\Psi(0)\rangle\langle\Psi(0)|\) is the density matrix of the SSS, and \(\rho_{t}\) is the density matrix of the spin state at time \(t\). If the SSS evolves freely with the NLZ effect, the SSS collapses and revives periodically, as the blue line in Fig. 3 shows. In contrast, in the quantum locking scheme, utilizing the periodic property of the spin evolution under the NLZ effect, the SSS can be locked by taking the shortcut from \(\Psi(t_{1})\) to \(\Psi(2\pi-t_{1})\), which is achieved by the spin evolution under the DD sequence \(\hat{U}_{DD}(T)\) (the red line in Fig. 3), where \(T\) is the period of one DD cycle. More detail is discussed in [39], Appendix H. The DD sequence consists of \(p\) instantaneous rotation pulses \(\hat{\Pi}^{d}(\chi_{i})\,(i=1...p)\), each following a free evolution \(\hat{U}_{F}(T/p)\), where \(\hat{\Pi}^{d}(\chi_{i})=e^{-i\chi_{i}\hat{F}_{d}}\) is the pulsed rotation evolution operator and \(\hat{U}_{F}(T/p)=e^{-i\hat{H}_{B}T/(p\hbar)}\) is the free evolution operator. The pulse is applied by an RF magnetic field in the laboratory frame (a static magnetic field in the rotating frame).
\(d=\pm\hat{x},\pm\hat{y},\pm\hat{z}\) is the rotation direction in the rotating frame, which is decided by the phase of the RF field, and \(\chi_{i}\) is the rotation angle of the \(i\)-th pulse. To ensure that the atomic spin remains stretched along the \(\hat{y}\) direction and that as many atomic spins as possible can be used to sense the magnetic field, the pulses in the DD sequence are all along the \(-\hat{y}\) axis. Hence, the DD sequence takes the form \[\begin{split}\hat{U}_{DD}(T)=&\hat{\Pi}^{-\hat{y}}(\chi_{p})\hat{U}_{F}(\frac{T}{p})\hat{\Pi}^{-\hat{y}}(\chi_{p-1})\hat{U}_{F}(\frac{T}{p})\\ &\cdots\hat{\Pi}^{-\hat{y}}(\chi_{2})\hat{U}_{F}(\frac{T}{p})\hat{\Pi}^{-\hat{y}}(\chi_{1})\hat{U}_{F}(\frac{T}{p}).\end{split} \tag{3}\] For a specific angular momentum \(F\), the DD performance is a function of the parameters \(T\), \(p\), and \(\chi_{i}\) (\(i=1...p\)). In the experiment, there are some restrictive conditions on the values of these parameters. The DD period \(T\) should be substantially shorter than the revival period \(2\pi/\omega\), to ensure that the SSS experiences enough DD cycles and stays squeezed most of the time. Theoretically, the rotation angle \(\chi_{i}\) is arbitrary. Considering the running speed of the algorithm and the convenience of pulse design in the experiment, \(\chi_{i}\) is set to \(n\pi/36\) (a step length of 5 degrees), where \(n\) is an integer. The relationship between the rotation angle \(\chi_{i}\) and the pulse length \(t_{i}\) is \(\chi_{i}=\int_{0}^{t_{i}}\Omega_{i}(t)dt\), where \(\Omega_{i}(t)\) denotes the envelope (Rabi frequency) of the \(i\)-th pulse. The pulse length \(t_{i}\) should be considerably shorter than the free evolution time \(T/p\), to satisfy the instantaneous-rotation condition, and substantially longer than the Larmor period, to satisfy the rotating wave approximation condition [45].
Considering the Larmor frequency, the quantum-beat revival frequency of \({}^{87}\)Rb in the Earth-field range, and the restrictive conditions mentioned above, we choose \(\Omega_{L}/\omega=20000\), \(T\Omega_{L}=2\pi\times 2000\), and \(p=10\) for the DD sequence optimization. We introduce the final-state fidelity \(\mathcal{F}_{f}=\left(\mathrm{Tr}\sqrt{\sqrt{\rho_{0}}\rho_{T}\sqrt{\rho_{0}}}\right)^{2}\) to evaluate the DD performance [46; 47; 48], where \(\rho_{T}=\hat{U}_{DD}(T)|\Psi(0)\rangle\langle\Psi(0)|\hat{U}_{DD}^{\dagger}(T)\) is the density matrix of the spin state after one DD sequence. For the locking of the SSS, our goal is to determine the optimal \(\chi_{i}\) in the DD sequence so that the fidelity \(\mathcal{F}_{f}\) is as close to 1 as possible. The differential evolution (DE) algorithm is used to obtain the DD sequence. The target function is the infidelity \(1-\mathcal{F}_{f}\). The spin state is measured by the Faraday interaction between the probe light and the atomic system (see Appendix D of [39] for details). The fidelity can be obtained by state tomography, varying the propagation direction of the probe light [49; 50; 51]. Our pulse sequence design task can be seen as a multi-parameter optimization with discrete variables. As a gradient-free algorithm, the DE algorithm is a powerful choice that is not easily trapped in local optima [31]. The framework of the proposed approach is shown in Fig. 4. For given \(T\) and \(p\), we generate an initial population containing a series of DD sequences with random rotation angles. The DD sequences are applied to the atomic system by the coils in the form of RF field pulses. The main operations are mutation, crossover, and selection. In the mutation operation, three DD sequences are randomly selected from the population as the mutation sources and combined to reproduce the mutation sequence (see Eq. (26) in Appendix E of [39]).
The rotation angles in the original DD sequence are partially replaced by the ones in the mutation sequence to derive the test sequence (see Eq. (27) in Appendix E of [39]), which is the crossover operation. In the selection process, the target function values of the test sequence and the original sequence are compared, and the sequence with the better value is retained for the next optimization loop. The three operations mentioned above are performed once on each DD sequence in every loop to update the population. After a number of iterations, the algorithm gradually converges to the optimal sequence \[\begin{split}\hat{U}_{DD}(T)&=\hat{\Pi}^{-\hat{y}}(\frac{\pi}{36})\hat{U}_{F}(\frac{T}{10})\hat{\Pi}^{-\hat{y}}(\frac{4\pi}{9})\hat{U}_{F}(\frac{T}{10})\\ &\times\hat{\Pi}^{-\hat{y}}(\frac{\pi}{4})\hat{U}_{F}(\frac{T}{10})\hat{\Pi}^{-\hat{y}}(\frac{4\pi}{9})\hat{U}_{F}(\frac{T}{5})\\ &\times\hat{\Pi}^{-\hat{y}}(\frac{\pi}{36})\hat{U}_{F}(\frac{2T}{5})\hat{\Pi}^{-\hat{y}}(\frac{\pi}{6})\hat{U}_{F}(\frac{T}{10}),\end{split} \tag{4}\] in which \(T\) is set as \(1/10\) of the revival period.

Figure 3: The principle of the quantum spin locking scheme. The red star at the starting point is the SSS \(|\Psi(0)\rangle\) to be locked. The blue line represents the real-time fidelity of the SSS oscillating with period \(2\pi\) under the NLZ effect. The red line is the path of the state evolution in the DD process. The SSS first evolves to \(|\Psi(t_{1})\rangle\), then slips to \(|\Psi(2\pi-t_{1})\rangle\) and finally returns to the state \(|\Psi_{DD}(T)\rangle\). \(T\) is the period of the DD sequence and is set substantially shorter than the revival period \(2\pi/\omega\). \(\Omega_{L}=2\pi\), \(\Omega_{L}/\omega=20000\), \(T\Omega_{L}=2\pi\times 2000\).
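The mutation-crossover-selection loop can be sketched in a few dozen lines. The toy below (not the authors' implementation) locks a spin-\(2\) state under a bare NLZ Hamiltonian \(\omega F_{z}^{2}\), encodes each DD sequence as \(p=10\) integers \(n_{i}\) with \(\chi_{i}=n_{i}\pi/36\), and minimizes the infidelity \(1-\mathcal{F}_{f}\) with DE/rand/1 mutation and binomial crossover; the population size, iteration count, and DE constants are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

# Spin-2 operators in the z basis, m = 2, 1, 0, -1, -2
F = 2
m = np.arange(F, -F - 1, -1)
cp = np.sqrt(F * (F + 1) - m[1:] * (m[1:] + 1))
Fp = np.diag(cp, 1)
Fx, Fy = (Fp + Fp.T) / 2, (Fp - Fp.T) / 2j

# State to lock: coherent state along x evolved to the squeezing point wt = 0.34
psi0 = np.linalg.eigh(Fx)[1][:, -1].astype(complex)
psi_sss = np.exp(-1j * 0.34 * m**2) * psi0

# DD building blocks: free NLZ evolution over T/p and rotations about -y
p, wT = 10, 2 * np.pi / 10                 # T = 1/10 of the revival period
UF = np.diag(np.exp(-1j * (wT / p) * m**2))
ey, Vy = np.linalg.eigh(Fy)
pulses = [Vy @ np.diag(np.exp(1j * n * np.pi / 36 * ey)) @ Vy.conj().T
          for n in range(72)]              # Pi^{-y}(n*pi/36) = exp(+i chi Fy)

def infidelity(seq):
    """Target function 1 - F_f for one DD cycle with angles chi_i = seq_i * pi/36."""
    psi = psi_sss
    for n in seq:
        psi = pulses[n] @ (UF @ psi)
    return 1 - abs(np.vdot(psi_sss, psi)) ** 2

# Differential evolution on the discrete 5-degree angle grid
NP, Fde, CR, iters = 30, 0.8, 0.9, 150     # illustrative DE hyperparameters
pop = rng.integers(0, 72, size=(NP, p))
pop[0] = 0                                 # seed the "no pulses" baseline
cost = np.array([infidelity(x) for x in pop])
for _ in range(iters):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = (a + np.rint(Fde * (b - c)).astype(int)) % 72   # mutation
        mask = rng.random(p) < CR
        mask[rng.integers(p)] = True                             # keep >= 1 mutant gene
        trial = np.where(mask, mutant, pop[i])                   # crossover
        tc = infidelity(trial)
        if tc <= cost[i]:                                        # selection
            pop[i], cost[i] = trial, tc

best = pop[np.argmin(cost)]
print(cost.min(), infidelity(np.zeros(p, dtype=int)))
```

Because the zero-pulse baseline is seeded into the population and selection is elitist, the optimized infidelity can only improve on free evolution over one cycle.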
_Magnetic field measurement with the SSS._--To measure the magnetic field, the SSS can be locked during an interval of darkness, permitting the spin state to precess in the presence of the static magnetic field and the DD pulses. A probe pulse is then applied for readout, subjecting the atoms to various systematics in the process [52], as shown in Fig. 5(a). To guarantee sufficient time to perform the measurement, an effective DD sequence is also essential to ensure high fidelity over a long duration. Figure 5(b) shows the long-time stability of the SSS. Without the DD sequence, the fidelity oscillates periodically due to the NLZ effect; with the DD sequence, the fidelity remains in the range \([99.97\%,99.99\%]\), even over several hundred cycles. Note that the quantum locking scheme can be flexibly adapted to different measurement protocols, as we can choose an appropriate target function depending on the requirements for the SSS in the measurement process. More detail is discussed in Appendix I of [39]. _Conclusion and discussion_.--To conclude, in this paper we have proposed an intelligent Earth-field-range magnetometer that automatically utilizes the intrinsic SSS to achieve measurement sensitivity beyond the SQL. We turn the NLZ effect, which limits the sensitivity of the Earth-field-range magnetometer, into a valuable technique for quantum enhancement of magnetic sensing. Due to the continuous collapse and revival of the atomic state induced by the NLZ effect, the SSS oscillates over time and is not typically used in experiments. This issue can be solved by locking the SSS with periodic pulsed modulations. The pulse sequence is calculated by the DE algorithm. The quantum locking technique can, without loss of generality, be extended to lock the large-\(F\) SSS with high metrological gain. There are three steps to build such a quantum-enhanced magnetometer in the Earth-field range.
First, the state evolves freely to generate the SSS; then, at the time of the optimal squeezing point, a rotation operation is performed to align the optimal squeezing axis with the measurement axis; and finally, a DD sequence is applied to lock the SSS, and the magnetic measurement is implemented afterwards. With the help of the SSS injection, the noise of the magnetic resonance can be reduced; in addition, the application of the quantum locking technique cancels the extra NLZ effect, which increases the magnetic resonance's amplitude and narrows its linewidth. Such quantum-enhanced magnetometry is promising for practical applications in the Earth-field range.

Figure 4: System of the quantum locking scheme. With the leading magnetic field \(B\) along the \(\hat{z}\) axis, the atomic spins oscillate in the \(\hat{x}-\hat{y}\) plane at the Larmor frequency in the laboratory frame and are oriented along the \(\hat{y}\) axis in the rotating frame. With the NLZ effect, the spins are squeezed along the optimal squeezing axis. After the rotation operation with angle \(\nu\), \(\hat{F}_{x}\) is squeezed, as represented on the Bloch sphere. A sequence of pulses optimized by the DE algorithm is applied in the form of RF field pulses along the \(\hat{y}\) axis to lock the SSS. PD is the photo-detector used to measure the spin projection of the atomic state for the target function calculation. The end condition is reached at the maximum generation. More detail is discussed in Appendix E of [39].

Figure 5: (a) The sequence for magnetic field measurement. \(t_{p}\) is the total time of the probe light. \(T\) is set as \(1/10\) of the revival period. (b) The final fidelity \(\mathcal{F}_{f}\) during \(200\) DD cycles with (blue line) and without (red line) the DD sequence. \(n_{DD}\) denotes the number of DD cycles.

###### Acknowledgements.
We would like to thank D.
Budker from Johannes Gutenberg University, Keye Zhang and Lu Zhou from East China Normal University for useful discussions. This work is supported by the Innovation Program for Quantum Science and Technology (2021ZD0303200); the National Natural Science Foundation of China (12234014, 11654005, 12204304, 11904227); the Fundamental Research Funds for the Central Universities; the Shanghai Municipal Science and Technology Major Project (2019SHZDZX01); the National Key Research and Development Program of China (Grant No. 2016YFA0302001); and the Fellowship of China Postdoctoral Science Foundation (Grant No. 2020TQ0193, 2021M702146, 2021M702150, 2021M702147, 2022T150413); W. Z. also acknowledges additional support from the Shanghai talent program.
2302.14493
Ankit Kumar, Amruta Mishra
2023-02-28T11:20:05Z
http://arxiv.org/abs/2302.14493v2
# Open Strange Mesons in (magnetized) nuclear matter

###### Abstract

We investigate the mass modifications of open strange mesons (vector \(K^{*}\) and axial vector \(K_{1}\)) in (magnetized) isospin asymmetric nuclear matter using the Quantum Chromodynamics sum rule (QCDSR) approach. The in-medium decay widths of \(K^{*}\to K\pi\) and \(K_{1}\to K^{*}\pi\) are studied from the mass modifications of the \(K_{1}\), \(K^{*}\) and \(K\) mesons, using a light quark-antiquark pair creation model, namely the \({}^{3}P_{0}\) model. The in-medium decay width for \(K_{1}\to K^{*}\pi\) is compared with the decay widths calculated using a phenomenological Lagrangian, derived from a chiral SU(3) model. The effects of magnetic fields are also studied on the mass and the partial decay width of the vector \(K^{*}\) meson decaying to \(K\pi\). Within the QCD sum rule approach, the medium effects on the masses of the open strange mesons are calculated through the light quark condensates and the gluon condensates in the hadronic medium. The quark condensates are calculated from the medium modifications of the scalar fields (\(\sigma\), \(\zeta\), and \(\delta\)) in the mean field approximation within a chiral SU(3) model, while the scalar gluon condensate is obtained from the medium modification of a scalar dilaton field (\(\chi\)), which is introduced within the model to imitate the scale invariance breaking of QCD.

## I Introduction

The investigation of various properties of hadrons [1] has become an important and emerging topic of research in high energy physics due to its relevance to relativistic heavy ion collision (HIC) experiments. The medium produced in these heavy ion colliders has high density and/or high temperature, which can affect the experimental observables through the medium modifications of the produced hadrons.
The study of the effects of isospin asymmetry is also important, as the initially colliding heavy ions have more neutrons than protons. The study of light vector mesons is important due to its relevance to observables, e.g., the dilepton spectra in the HIC experiments. Dileptons are promising observables for studying the properties of hadrons in dense nuclear matter, as their interaction with the hadronic environment is negligible and they give information about all stages of the evolution of the strongly interacting matter created in heavy ion collision experiments. The in-medium masses of the light vector mesons (\(\rho\), \(\omega\), and \(\phi\)) have been studied in strange hadronic matter [2] and in the isospin asymmetric magnetized nuclear medium [3] using the QCD sum rule approach, with the quark and gluon condensates calculated within a chiral SU(3) model. Strange mesons have been the center of attention due to their significance in studying the yield and spectra of these mesons produced in HIC experiments [4], as well as in the study of certain astronomical bodies where strange matter is believed to exist in the core [5; 6]. The in-medium masses of pseudoscalar kaons and antikaons have been studied in strange hadronic matter using a chiral effective model [7], and the effects of temperature have also been incorporated. The effects from the baryonic Dirac sea are also investigated in [7], where the results for the in-medium masses are compared to those obtained from Chiral Perturbation Theory (ChPT) and, in addition, the dependence of the collective flow on the potential strength of kaons and antikaons has been calculated for Ni+Ni reactions at 1.93\(A\) GeV energies. In reference [8], the energies and optical potentials for kaons and antikaons have been studied in isospin asymmetric nuclear matter within the chiral effective model.
The optical potentials for kaons and antikaons have also been studied in isospin asymmetric nuclear matter [9] and in dense hyperonic matter [10] within the chiral SU(3) model, and the effects of isospin asymmetry are found to be quite important. These results can be particularly important in studying heavy-ion beam collisions at the Compressed Baryonic Matter (CBM) experiment at the future Facility for Antiproton and Ion Research (FAIR). Moreover, the low energy scattering of the kaon (antikaon) with nucleons (\(N\)) has been studied in the framework of the coupled channel approach [11; 12]. The \(\bar{K}N\) channel couples strongly to the \(\pi\Sigma\) channel, and the \(\Lambda(1405)\) is reproduced dynamically just below the \(\bar{K}N\) threshold [13]. In the present work, we study the in-medium masses as well as the decay widths of the vector \(K^{*}\) (decaying to \(K\pi\)) and axial vector \(K_{1}\) (to \(K^{*}\pi\)) mesons in the (magnetized) isospin asymmetric dense nuclear medium. These strange vector and axial-vector mesons are chiral partners of each other and are considered natural systems in which to study the effects of chiral symmetry breaking and its restoration in the medium. In reference [14], the properties of the \(f_{1}(1285)\) and \(\omega\) mesons are studied in the QCDSR approach to probe chiral symmetry restoration in the nuclear environment. The in-medium masses of the light vector mesons (\(\rho\), \(\omega\), and \(\phi\)) have been calculated within the QCD sum rule approach [15] from the medium modifications of the quark condensates and scalar gluon condensates. The lowest charmonium states \(J/\psi\) and \(\eta_{c}\) have been studied in isospin asymmetric hot nuclear matter [16] within the QCD sum rule approach, and the effects of medium density are found to be the dominant effects.
Moreover, within the QCD sum rule approach, the masses of the \(1S\) and \(1P\) states of heavy quarkonium (charmonium and bottomonium) have been studied in isospin asymmetric nuclear matter, including the effects of strong magnetic fields [17]. The mass modification of the heavy quarkonium states, within the QCD sum rule approach, arises from the medium modifications of the scalar gluon condensates and the twist-2 gluon condensates. This work includes the effects of the Dirac sea by summing over the nucleonic tadpole diagrams and leads to a decrease in the values of the light quark condensates with the magnetic field, an effect known as inverse magnetic catalysis, when the anomalous magnetic moments (AMMs) of the nucleons are taken into account. The decay width of the channel \(K^{*}\to K\pi\) is studied using the \({}^{3}P_{0}\) model, from the in-medium masses of the vector \(K^{*}\) and pseudoscalar \(K\) mesons, which are calculated within the QCD sum rule approach and the chiral SU(3) model, respectively. The open flavor mesons decay through the creation of a \(\bar{q}q\) pair which is produced with the vacuum quantum numbers (\(J^{PC}=0^{++}\)), corresponding to a \({}^{3}P_{0}\) state [18; 19]. The \({}^{3}P_{0}\) model has been used extensively in the literature to study the decays of various mesons [20; 21; 22]. This model indicates the importance of taking the internal structure of hadrons into account, as it has explained the experimentally observed suppression of the \(\psi(4040)\) decay modes to \(\bar{D}D\) and (\(\bar{D^{*}}D\) + \(\bar{D}D^{*}\)) in comparison with the \(\bar{D^{*}}D^{*}\) decay mode [23]. The masses of the strange mesons (\(K\), \(K^{*}\), and \(\phi\)) have been investigated in the presence of strong magnetic fields in Ref. [24] due to \(\phi-\eta^{\prime}\) and \(K^{*}-K\) mixing, and the decay widths for \(\phi\to K\bar{K}\) and \(K^{*}\to K\pi\) are also studied in a field theoretic model for composite hadrons from the mass modifications of the mesons.
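For orientation, the pair-creation vertex of the \({}^{3}P_{0}\) model is commonly written (in one widely used convention; normalization and phase conventions differ between references, so this schematic form is not necessarily the one adopted in [18; 19]) as \[T=-3\gamma\sum_{m}\langle 1\,m;1\,-m|0\,0\rangle\int d^{3}p\,d^{3}p^{\prime}\,\delta^{3}\big{(}\vec{p}+\vec{p}^{\,\prime}\big{)}\,\mathcal{Y}_{1}^{m}\Big{(}\frac{\vec{p}-\vec{p}^{\,\prime}}{2}\Big{)}\,\chi_{1,-m}\,\phi_{0}\,\omega_{0}\,b^{\dagger}(\vec{p}\,)\,d^{\dagger}(\vec{p}^{\,\prime}),\] where \(\gamma\) is the dimensionless pair-creation strength, \(\mathcal{Y}_{1}^{m}\) is the solid harmonic carrying the \(P\)-wave orbital part, and \(\chi_{1,-m}\), \(\phi_{0}\), and \(\omega_{0}\) are the spin-triplet, flavor-singlet, and color-singlet wavefunctions of the created pair; coupling \(L=1\) and \(S=1\) to \(J=0\) reproduces the vacuum quantum numbers \(J^{PC}=0^{++}\).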
The in-medium spectral functions and production cross-sections for the strange mesons (\(K^{*},\bar{K}^{*}\), and \(\phi\)) in the strange hadronic medium have also been studied from the in-medium masses and decay widths of these mesons [25]. The effects of medium density as well as strangeness on the production cross-sections of \(K^{*},\bar{K}^{*}\), and \(\phi\) from \(K\pi\), \(\bar{K}\pi\), and \(\bar{K}K\) scattering, respectively, have been found to be quite appreciable when compared to the vacuum conditions. The properties of the vector \(\bar{K^{*}}\) meson in nuclear matter are investigated in Ref. [26] using a unitary approach in coupled channels. The strange vector \(K^{*}(\bar{K^{*}})\) mesons are produced mainly at later stages from \(K\pi(\bar{K}\pi)\) scattering, and the contribution from direct hadronization from the quark gluon plasma (QGP) state is quite small, as calculated within the Parton-Hadron-String-Dynamics (PHSD) transport model [27; 28]. The study of medium density effects might find relevance in future experiments at the GSI Facility for Antiproton and Ion Research (FAIR) and the Nuclotron-based Ion Collider facility (NICA) [29; 30], where matter with large baryon density will be produced. The axial vector \(K_{1}\) meson masses have been analyzed in a QCD sum rule analysis [31] in nuclear matter, and the decay widths for the \(K_{1}\to K\pi\pi\) channel have been studied using a \({}^{3}P_{0}\) model [32]. In the QCD sum rule approach [2; 15; 33], we expand the current-current correlation function for the corresponding meson using the operator product expansion (OPE) in terms of local operators and their coefficients. The central idea of this approach is to relate the spectral density of this correlation function with the OPE expression via a dispersion relation, for large space-like regions.
The medium modifications of the masses arise due to the medium modifications of the light quark condensates and gluon condensates within the QCD sum rule approach. These light quark condensates are related to the scalar fields (\(\sigma\), \(\zeta\), and \(\delta\)) of the medium by comparing the explicit chiral symmetry breaking term of the QCD Lagrangian with the corresponding Lagrangian term in the chiral SU(3) model [34; 35], while the gluon condensates are related to the scalar dilaton field (\(\chi\)) of the medium. The chiral effective Lagrangian is written such that it includes the various symmetries of low energy QCD and the symmetry breaking effects. The coupled equations of motion for the scalar fields (\(\sigma\), \(\zeta\), \(\delta\), and \(\chi\)) are solved within the chiral SU(3) model including various medium effects. The estimates of strong magnetic field production in peripheral HIC experiments [36; 37] have generated immense interest in the study of the magnetic field effects on the produced medium. The estimated magnetic field strength at the Relativistic Heavy Ion Collider (RHIC) is \(\sim 2m_{\pi}^{2}\) and at the Large Hadron Collider (LHC) is \(\sim 15m_{\pi}^{2}\), calculated considering the Liénard-Wiechert potential in numerical simulations within a microscopic transport model [37]. The study of strong magnetic field effects on the produced medium is also important due to novel quantum effects like the chiral magnetic effect [38], magnetic catalysis [39; 40] and inverse magnetic catalysis [39; 41], as well as in the study of neutron stars and magnetars, where large magnetic fields are estimated to exist. Charged mesons have contributions due to Landau level quantization in the presence of a magnetic field, and the effects of PV mixing [42; 43; 44; 45] and the spin-magnetic field interaction [46; 47; 48] are also investigated in this study. In ref.
[44], the mixing between vector and pseudoscalar charmonium states is considered through the pseudoscalar-vector (PV) mixing term in the chiral Lagrangian in the presence of strong magnetic fields, and the decay width of the vector charmonium state \(\psi(3770)\) to \(\bar{D}D\) is also studied within a field theoretic model of composite hadrons. Furthermore, the PV mixing between the open charm mesons (\(D^{*}\) and \(D\)) is also studied in ref. [45], along with the contribution due to Landau levels for the charged mesons, and the decay widths for \(D^{*}\to D\pi\) and \(\psi(3770)\rightarrow\bar{D}D\) are also studied using a field theoretic model. In the field theoretic model of composite hadrons, the hadronic states, like the charmonium (\(\psi(3770)\)) state, the open charm mesons (\(D^{*},\bar{D},D\)), and the pion (\(\pi\)), are constructed explicitly from the constituent quark fields, assuming harmonic oscillator wave functions for these states, and the matrix element for the decay is then calculated from the light quark-antiquark pair creation term of the free Dirac Hamiltonian density. However, the produced medium density will be very small in these peripheral collision experiments. Therefore, to understand the behavior under these conditions of high magnetic field and low medium density, we also study the effects of high magnetic fields on the properties of the vector \(K^{*}\) meson. Hence, the present work might have relevance in lower energy central collisions, where the produced medium has high density, as well as in high energy peripheral collision experiments, where large magnetic fields are produced but the produced medium has low density. The present work is organized in the following manner: In section II, we briefly describe the chiral SU(3) model used to compute the medium modifications of the quark condensates and scalar gluon condensates from the medium modifications of the scalar fields (\(\sigma\), \(\zeta\), \(\delta\), and \(\chi\)).
In section III, we discuss the QCD sum rule approach, which is used to study the in-medium masses of these open strange mesons in the isospin asymmetric nuclear medium. We also discuss the effects of a strong magnetic field on the masses (and hence the decay widths) of the open strange mesons. In section IV, we discuss the in-medium \(K^{*}\) meson self-energy at the one loop level from the mass modification of the pseudoscalar kaon (\(K\)). In section V, we briefly discuss the \({}^{3}P_{0}\) model, which is used to calculate the in-medium decay widths of the vector \(K^{*}\) and axial vector \(K_{1}\) mesons. The decay width of \(K_{1}\to K^{*}\pi\) is also studied within a phenomenological Lagrangian approach. In section VI, we discuss and analyze the results obtained and compare them with earlier work to emphasize the relevance of this work. In section VII, we summarize the results obtained in the present work.

## II The Hadronic Chiral SU(3) Model

We make use of an effective chiral SU(3) model [49; 50; 51; 52; 34; 53]. On the other hand, the energy momentum tensor in QCD [54; 55], accounting for the current quark masses, is written as \[T_{\mu\nu}=-ST\big{(}G_{\mu\sigma}^{a}G_{\nu}^{a\sigma}\big{)}+\frac{g_{\mu\nu}}{4}\Big{(}\sum_{i}m_{i}\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}+\Big{\langle}\frac{\beta_{QCD}}{2g}G_{\sigma\kappa}^{a}G^{a\sigma\kappa}\Big{\rangle}\Big{)} \tag{4}\] where the first term represents the symmetric traceless part and the second term contains the trace part. After multiplying eqs. (3) and (4) by \(g^{\mu\nu}\), we get the trace (\(T_{\mu}^{\mu}\)) of the energy momentum tensor in the chiral model and in QCD respectively, and the expression for the scalar gluon condensate is then given as \[\Big{\langle}\frac{\alpha_{s}}{\pi}G_{\mu\nu}^{a}G^{a\mu\nu}\Big{\rangle}=\frac{8}{9}\Big{[}\big{(}1-d\big{)}\chi^{4}+\sum_{i}m_{i}\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}\Big{]} \tag{5}\] where the one-loop QCD \(\beta\)-function is given by \[\beta_{QCD}(g)=-\frac{11N_{c}g^{3}}{48\pi^{2}}\Big{(}1-\frac{2N_{f}}{11N_{c}}\Big{)}+\mathcal{O}\big{(}g^{5}\big{)} \tag{6}\] with \(N_{c}=3\) and \(N_{f}=3\) the number of colors and quark flavors respectively, and the strong coupling constant of QCD is \(\alpha_{s}=g^{2}/4\pi\). Thus, the scalar gluon condensate is introduced in the chiral SU(3) model through a scalar dilaton field (\(\chi\)). The scalar gluon condensate has an additional contribution due to the finite non-zero quark masses \(m_{i}\) (\(i=u,d,s\)). The non-zero quark condensates \(\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}\) are introduced in the QCD vacuum by the spontaneous breaking of chiral symmetry by the ground state.
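The step from Eq. (4) to Eq. (5) can be spelled out explicitly. Contracting Eq. (4) with \(g^{\mu\nu}\) removes the traceless part, and inserting the one-loop \(\beta\)-function of Eq. (6) with \(N_{c}=N_{f}=3\), for which \(\beta_{QCD}/2g=-(9/8)(\alpha_{s}/\pi)\), gives \[T^{\mu}_{\ \mu}=\sum_{i}m_{i}\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}+\Big{\langle}\frac{\beta_{QCD}}{2g}G^{a}_{\sigma\kappa}G^{a\sigma\kappa}\Big{\rangle}=\sum_{i}m_{i}\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}-\frac{9}{8}\Big{\langle}\frac{\alpha_{s}}{\pi}G^{a}_{\mu\nu}G^{a\mu\nu}\Big{\rangle}.\] Equating this with the trace of the energy momentum tensor of the chiral model, whose dilaton contribution is \(-(1-d)\chi^{4}\), and solving for the gluon condensate reproduces Eq. (5).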
The finite quark mass term \(\Big{(}m_{i}\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}\Big{)}\) of equation (5) is given in terms of the scalar fields \(\big{(}\sigma,\,\zeta,\) and \(\delta\big{)}\) by comparing the explicit chiral symmetry breaking term of the chiral model, in the mean field approximation, \[\mathcal{L}_{SB}=Tr\left[diag\,\Big{(}-\frac{1}{2}m_{\pi}^{2}f_{\pi}\big{(}\sigma+\delta\big{)},-\frac{1}{2}m_{\pi}^{2}f_{\pi}\big{(}\sigma-\delta\big{)},\Big{(}\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}}m_{\pi}^{2}f_{\pi}\Big{)}\zeta\Big{)}\right] \tag{7}\] with the corresponding Lagrangian density term of QCD, which is written as \[\mathcal{L}_{SB}^{QCD}=-Tr\left[diag\,\big{(}m_{u}\bar{u}u,m_{d}\bar{d}d,m_{s}\bar{s}s\big{)}\right] \tag{8}\] Within the chiral model, the coupled equations of motion for the scalar fields \(\big{(}\sigma,\,\zeta,\,\delta\) and \(\chi\)), derived from the chiral Lagrangian density, are given as \[k_{0}\chi^{2}\sigma-4k_{1}\big{(}\sigma^{2}+\zeta^{2}+\delta^{2}\big{)}\sigma-2k_{2}\big{(}\sigma^{3}+3\sigma\delta^{2}\big{)}-2k_{3}\chi\sigma\zeta-\frac{d}{3}\chi^{4}\left(\frac{2\sigma}{\sigma^{2}-\delta^{2}}\right)+\left(\frac{\chi}{\chi_{0}}\right)^{2}m_{\pi}^{2}f_{\pi}-\sum_{i}g_{\sigma i}\rho_{i}^{s}=0 \tag{9}\] \[k_{0}\chi^{2}\zeta-4k_{1}\big{(}\sigma^{2}+\zeta^{2}+\delta^{2}\big{)}\zeta-4k_{2}\zeta^{3}-k_{3}\chi\big{(}\sigma^{2}-\delta^{2}\big{)}-\frac{d}{3}\frac{\chi^{4}}{\zeta}+\left(\frac{\chi}{\chi_{0}}\right)^{2}\left[\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}}m_{\pi}^{2}f_{\pi}\right]-\sum_{i}g_{\zeta i}\rho_{i}^{s}=0 \tag{10}\] \[k_{0}\chi^{2}\delta-4k_{1}\big{(}\sigma^{2}+\zeta^{2}+\delta^{2}\big{)}\delta-2k_{2}\big{(}\delta^{3}+3\sigma^{2}\delta\big{)}+2k_{3}\chi\delta\zeta+\frac{2}{3}d\chi^{4}\left(\frac{\delta}{\sigma^{2}-\delta^{2}}\right)-\sum_{i}g_{\delta i}\rho_{i}^{s}=0 \tag{11}\] \[k_{0}\chi\big{(}\sigma^{2}+\zeta^{2}+\delta^{2}\big{)}-k_{3}
\big{(}\sigma^{2}-\delta^{2}\big{)}\zeta+\chi^{3}\left[1+ln\Big{(}\frac{\chi^{4}}{\chi_{0}^{4}}\Big{)}\right]+\] \[\left[\Big{(}m_{\pi}^{2}f_{\pi}\sigma+\big{(}\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}}m_{\pi}^{2}f_{\pi}\big{)}\zeta\Big{)}\right]=0 \tag{12}\] The medium effects due to the baryon density, isospin asymmetry, and magnetic field are incorporated into the model through the scalar fields, which depend on the scalar densities (\(\rho_{i}^{s}\)) of the baryons. The coupled equations of motion, given by equations (9)-(12), are solved to find the medium dependent values of the scalar fields, from which we obtain the scalar gluon condensate \(\big{\langle}\frac{\alpha_{s}}{\pi}G_{\mu\nu}^{a}G^{a\mu\nu}\big{\rangle}\) and the quark condensates \(\Big{(}\big{\langle}\bar{u}u\big{\rangle},\,\big{\langle}\bar{d}d\big{\rangle},\,\)and \(\big{\langle}\bar{s}s\big{\rangle}\Big{)}\) in the nuclear medium. In the next section, we discuss the QCD sum rule approach used to calculate the in-medium masses of the vector \(K^{*}\) and axial vector \(K_{1}\) mesons. 

## III QCD sum rule approach 

In this section, we discuss the QCD sum rule method [2; 31; 56], which is used to calculate the in-medium masses through the medium modifications of the quark and gluon condensates, calculated within the chiral model. The current-current correlation function for the meson \(V\), written in terms of the time-ordered product of two currents, is given by \[\Pi_{\mu\nu}^{V}(q)=i\int d^{4}x\,e^{iq.x}\big{\langle}0\big{|}T\big{[}j_{\mu}^{V}(x),j_{\nu}^{V}(0)\big{]}\big{|}0\big{\rangle} \tag{13}\] where the currents for the vector \(K^{*}\) meson are \(j^{K^{*+}}_{\mu}=\bar{s}\gamma_{\mu}u\) and \(j^{K^{*0}}_{\mu}=\bar{s}\gamma_{\mu}d\), while the currents for the axial vector \(K_{1}\) meson are \(j^{K^{+}_{1}}_{\mu}=\bar{s}\gamma_{\mu}\gamma_{5}u\) and \(j^{K^{0}_{1}}_{\mu}=\bar{s}\gamma_{\mu}\gamma_{5}d\). 
We write the transverse tensor structure of the correlation function as a sum of two independent invariant functions [31; 57] as \[\Pi^{V}_{\mu\nu}(q)=-g_{\mu\nu}\Pi_{1}(q^{2})+q_{\mu}q_{\nu}\Pi_{2}(q^{2}) \tag{14}\] For the conserved vector current \(j^{K^{*}}_{\mu}\), the two functions are related as \(\Pi_{1}(q^{2})=q^{2}\Pi_{2}(q^{2})\); since the axial current \(j^{K_{1}}_{\mu}\) is not conserved, this relation does not hold in the axial channel. Either \(\Pi_{1}(q^{2})\) or \(\Pi_{2}(q^{2})\) can be used to carry out the QCDSR analysis, but \(\Pi_{2}(q^{2})\) receives contributions from the pseudoscalar mesons, which would require a further investigation of the in-medium properties of the pseudoscalar mesons. Therefore, we use \(\Pi_{1}(q^{2})\) throughout this work. The main idea of QCDSR is to relate, via a dispersion relation, the spectral density of the correlator function \(\Pi^{V}(q^{2}=-Q^{2})\) on the phenomenological side to its QCD operator product expansion (OPE). On the phenomenological side, the correlator function can be written as \[12\pi^{2}\widetilde{\Pi}^{V}_{phen}(Q^{2})=\int ds\frac{R^{V}_{phen}(s)}{s+Q^{2}} \tag{15}\] where \(\widetilde{\Pi}^{V}(Q^{2})=\Pi^{V}(Q^{2})/Q^{2}\) and the spectral density \(R^{V}_{phen}(s)\) is related to the imaginary part of the correlator as \(R^{V}_{phen}(s)=12\pi Im(\Pi^{V}_{phen}(s))\). 
To enhance the contribution of the pole, we apply a Borel transformation [58; 59] and obtain \[12\pi^{2}\,\widetilde{\Pi}^{V}(M^{2})=\int ds\,e^{-\frac{s}{M^{2}}}R^{V}_{phen}(s) \tag{16}\] For large space-like regions, \(Q^{2}=-q^{2}>>1\) GeV\({}^{2}\), the asymptotic freedom of QCD allows a series expansion of the correlation function in terms of the operator product expansion (OPE) [56; 60] as \[12\pi^{2}\,\widetilde{\Pi}^{V}(q^{2}=-Q^{2})=d_{V}\bigg{[}-c_{0}^{V}ln\frac{Q^{2}}{\mu^{2}}+\frac{c_{1}^{V}}{Q^{2}}+\frac{c_{2}^{V}}{Q^{4}}+\frac{c_{3}^{V}}{Q^{6}}+...\bigg{]} \tag{17}\] where \(d_{K^{*},K_{1}}=3\) and the scale \(\mu\) is taken to be 1 GeV here. The first term is the leading perturbative QCD term; the subsequent higher order terms, containing the non-perturbative effects of QCD, are suppressed by powers of \(1/Q^{2}\). The coefficients \(c_{i}\) (\(i=2,3,...\)) are related to the quark condensates and the scalar gluon condensate. For the \(K^{*+}\) meson [31; 58; 61], these coefficients are given by \[c_{0}^{K^{*}}=1+\frac{\alpha_{s}(Q^{2})}{\pi},\ c_{1}^{K^{*}}=-3(m_{u}^{2}+m_{s}^{2}) \tag{18}\] \[c_{2}^{K^{*}}=\frac{\pi^{2}}{3}\Big{\langle}\frac{\alpha_{s}}{\pi}G^{\mu\nu}G_{\mu\nu}\Big{\rangle}+\frac{16\pi^{2}}{d_{K^{*}}}\big{\langle}m_{u}\bar{s}s+m_{s}\bar{u}u\big{\rangle}-\frac{4\pi^{2}}{d_{K^{*}}}\big{\langle}m_{u}\bar{u}u+m_{s}\bar{s}s\big{\rangle} \tag{19}\] \[c_{3}^{K^{*}}=-8\pi^{3}\Bigg{[}2\big{\langle}\alpha_{s}(\bar{u}\gamma_{\mu}\gamma_{5}\lambda^{a}s)(\bar{s}\gamma^{\mu}\gamma^{5}\lambda^{a}u)\big{\rangle}+\frac{2}{9}\Big{\langle}(\bar{u}\gamma_{\mu}\lambda^{a}u+\bar{s}\gamma_{\mu}\lambda^{a}s)\times\] \[\Big{(}\sum_{q=u,d,s}\bar{q}\gamma^{\mu}\lambda^{a}q\Big{)}\Big{\rangle}\Bigg{]}=-8\pi^{3}\alpha_{s}\kappa_{n}\Bigg{[}\frac{32}{9}\big{\langle}\bar{u}u\big{\rangle}\big{\langle}\bar{s}s\big{\rangle}+\frac{32}{81}\big{(}\big{\langle}\bar{u}u\big{\rangle}^{2}+\big{\langle}\bar{s}s\big{\rangle}^{2}\big{)}\Bigg{]} \tag{20}\] where 
\(\alpha_{s}(Q^{2})=4\pi/\big{[}b\,ln(Q^{2}/\Lambda_{QCD}^{2})\big{]}\) is the running coupling constant of QCD and \((m_{u},m_{s})\) are the current quark masses of the up and strange quarks. The QCD scale is taken to be \(\Lambda_{QCD}=140\) MeV, where \(b=11-(2/3)N_{f}\), with \(N_{f}=3\) the number of quark flavors. To evaluate the four quark operators of the coefficient \(c_{3}^{K^{*}}\), we have used the factorization method [61], \[\big{\langle}(\bar{q}_{i}\gamma_{\mu}\gamma_{5}\lambda^{a}q_{j})^{2}\big{\rangle}=-\big{\langle}(\bar{q}_{i}\gamma_{\mu}\lambda^{a}q_{j})^{2}\big{\rangle}=\delta_{ij}\frac{16}{9}\kappa_{n}\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}^{2} \tag{21}\] where \(q_{i}=(u,d,s)\) and the parameter \(\kappa_{n}\) is introduced to parameterize the deviation from exact factorization, with '\(n\)' corresponding to the different mesons. For the neutral \(K^{*0}\) meson, the up (u) quark flavor is replaced by the down (d) quark flavor. The coefficients for the strange axial vector meson \(K_{1}^{+}\) are given by [31] \[c_{0}^{K_{1}}=1+\frac{\alpha_{s}(Q^{2})}{\pi},\ c_{1}^{K_{1}}=-3(m_{u}^{2}+m_{s}^{2}) \tag{22}\] \[c_{2}^{K_{1}}=\frac{\pi^{2}}{3}\Big{\langle}\frac{\alpha_{s}}{\pi}G^{\mu\nu}G_{\mu\nu}\Big{\rangle}-\frac{16\pi^{2}}{d_{K_{1}}}\big{\langle}m_{u}\bar{s}s+m_{s}\bar{u}u\big{\rangle}+\frac{4\pi^{2}}{d_{K_{1}}}\big{\langle}m_{u}\bar{u}u+m_{s}\bar{s}s\big{\rangle} \tag{23}\] \[c_{3}^{K_{1}}=-8\pi^{3}\Bigg{[}2\big{\langle}\alpha_{s}(\bar{u}\gamma_{\mu}\lambda^{a}s)(\bar{s}\gamma^{\mu}\lambda^{a}u)\big{\rangle}+\frac{2}{9}\Big{\langle}(\bar{u}\gamma_{\mu}\lambda^{a}u+\bar{s}\gamma_{\mu}\lambda^{a}s)\times\] \[\Big{(}\sum_{q=u,d,s}\bar{q}\gamma^{\mu}\lambda^{a}q\Big{)}\Big{\rangle}\Bigg{]}=-8\pi^{3}\alpha_{s}\kappa_{n}\Bigg{[}-\frac{32}{9}\big{\langle}\bar{u}u\big{\rangle}\big{\langle}\bar{s}s\big{\rangle}+\frac{32}{81}\big{(}\big{\langle}\bar{u}u\big{\rangle}^{2}+\big{\langle}\bar{s}s\big{\rangle}^{2}\big{)}\Bigg{]} \tag{24}\] Thus the 
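As a consistency check, the one-loop running coupling given above, with \(\Lambda_{QCD}=140\) MeV and \(N_{f}=3\), reproduces the value \(\alpha_{s}(Q^{2}=1\) GeV\({}^{2})=0.3551\) quoted in Section VI; a minimal sketch:

```python
import math

# One-loop running coupling:
#   alpha_s(Q^2) = 4*pi / (b * ln(Q^2 / Lambda_QCD^2)),  b = 11 - (2/3) * N_f
LAMBDA_QCD = 0.140   # GeV, value used in the text
N_F = 3
b = 11 - (2.0 / 3.0) * N_F   # = 9 for three flavors

def alpha_s(Q2):
    """Running coupling at scale Q^2 (in GeV^2)."""
    return 4.0 * math.pi / (b * math.log(Q2 / LAMBDA_QCD**2))
```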
difference between the vector and axial vector correlator functions is proportional to the chiral symmetry breaking dimension-4 \(\big{(}\big{\langle}m_{q_{i}}\bar{q}_{i}q_{i}\big{\rangle}\big{)}\) and dimension-6 \(\big{(}\big{\langle}\bar{q}_{i}q_{i}\big{\rangle}\big{\langle}\bar{q}_{j}q_{j}\big{\rangle}\big{)}\) operators. After Borel transformation of equation (17), for improved convergence, we get \[12\pi^{2}\,\widetilde{\Pi}^{V}(M^{2})=d_{V}\Bigg{[}c_{0}^{V}M^{2}+c_{1}^{V}+\frac{c_{2}^{V}}{M^{2}}+\frac{c_{3}^{V}}{2M^{4}}+...\Bigg{]} \tag{25}\] We assume that the spectral density has a resonance part \(R_{phen}^{V(res)}(s)\) and a perturbative continuum that contains all higher energy states, separated by an energy scale \(\sqrt{s_{0}^{V}}\), as [56; 60] \[R_{phen}^{V}(s)=R_{phen}^{V(res)}(s)\,\Theta(s_{0}^{V}-s)+d_{V}c_{0}^{V}\,\Theta(s-s_{0}^{V}) \tag{26}\] As a rapid crossover between the resonance and continuum parts is not realistic, the scale \(s_{0}^{V}\) is taken as the average scale characterizing the smooth transition from the low-lying resonance region to the high-energy continuum. Owing to the strong exponential suppression of the correlator function provided by the Borel transform, a more detailed description of the crossover and continuum region becomes insignificant [57]. Matching the correlator function between equations (16) and (25) gives \[\int ds\,e^{-\frac{s}{M^{2}}}R_{phen}^{V}(s)=d_{V}\bigg{[}c_{0}^{V}M^{2}+c_{1}^{V}+\frac{c_{2}^{V}}{M^{2}}+\frac{c_{3}^{V}}{2M^{4}}\bigg{]} \tag{27}\] For \(M>\sqrt{s_{0}^{V}}\), we expand the exponential in equation (27) in powers of \(\frac{s}{M^{2}}\) for \(s<s_{0}^{V}\) and, by comparing the coefficients of the various powers of \(1/M^{2}\), obtain the finite energy sum rules (FESRs) [2; 56] in vacuum. 
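The term-by-term correspondence between the OPE (17) and its Borel transform (25) follows from the standard Borel transformation rules (with the normalization implicit in the text), \[\hat{\mathcal{B}}\Big{[}ln\frac{Q^{2}}{\mu^{2}}\Big{]}=-M^{2},\qquad\hat{\mathcal{B}}\Big{[}\frac{1}{Q^{2n}}\Big{]}=\frac{1}{(n-1)!\,M^{2(n-1)}},\quad n\geq 1\] so that the \(c_{1}^{V}/Q^{2}\), \(c_{2}^{V}/Q^{4}\), and \(c_{3}^{V}/Q^{6}\) terms map to \(c_{1}^{V}\), \(c_{2}^{V}/M^{2}\), and \(c_{3}^{V}/(2!\,M^{4})\) respectively; the factorial \(2!=2\) is the origin of the factor \(1/2\) multiplying \(c_{3}^{V}/M^{4}\) in equation (25).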
Using a simple ansatz for the vector \(K^{*}\) meson spectral function \(R_{phen}^{K^{*}}(s)\) [2; 56; 60], \[R_{phen}^{K^{*}}(s)=F_{K^{*}}\,\delta(s-m_{K^{*}}^{2})+d_{K^{*}}c_{0}^{K^{*}}\,\Theta(s-s_{0}^{K^{*}}) \tag{28}\] the finite energy sum rules (FESRs) for the vector \(K^{*}\) meson in vacuum are given as \[F_{K^{*}}=d_{K^{*}}(c_{0}^{K^{*}}s_{0}^{K^{*}}+c_{1}^{K^{*}}) \tag{29}\] \[F_{K^{*}}\,m_{K^{*}}^{2}=d_{K^{*}}\Big{(}\frac{(s_{0}^{K^{*}})^{2}c_{0}^{K^{*}}}{2}-c_{2}^{K^{*}}\Big{)} \tag{30}\] \[F_{K^{*}}\,m_{K^{*}}^{4}=d_{K^{*}}\Big{(}\frac{(s_{0}^{K^{*}})^{3}c_{0}^{K^{*}}}{3}+c_{3}^{K^{*}}\Big{)} \tag{31}\] These equations are solved, with the vacuum masses and the vacuum values of the condensates as input, to find the delineation scale \(s_{0}^{K^{*}}\), the overlap strength \(F_{K^{*}}\) between the current and the lowest-lying resonance, and the coefficient \(c_{3}^{K^{*}}\), separately for the vector mesons \(K^{*+}\) and \(K^{*0}\). The parameter \(\kappa_{n}\) can then be fixed from the value of the coefficient \(c_{3}^{K^{*}}\). In the nuclear medium, however, the quark and gluon condensates are modified, and these medium effects are incorporated in the FESRs through the medium-modified coefficients \(c_{2}^{*}\) and \(c_{3}^{*}\). The finite energy sum rules for the \(K^{*}\) meson in the nuclear medium are then given by \[F_{K^{*}}^{*}=d_{K^{*}}(c_{0}^{K^{*}}s_{0}^{*K^{*}}+c_{1}^{K^{*}}) \tag{32}\] \[F^{*}_{K^{*}}\,m^{*2}_{K^{*}} = d_{K^{*}}\Big{(}\frac{(s^{*K^{*}}_{0})^{2}c^{K^{*}}_{0}}{2}-c^{*K^{*}}_{2}\Big{)} \tag{33}\] \[F^{*}_{K^{*}}\,m^{*4}_{K^{*}} = d_{K^{*}}\Big{(}\frac{(s^{*K^{*}}_{0})^{3}c^{K^{*}}_{0}}{3}+c^{*K^{*}}_{3}\Big{)} \tag{34}\] These FESRs are solved to find the in-medium scale \(s^{*K^{*}}_{0}\), overlap strength \(F^{*}_{K^{*}}\), and mass \(m^{*}_{K^{*}}\) of the vector meson. 
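The vacuum FESR system closes: given the mass and the coefficients \(c_0\), \(c_1\), \(c_2\), the first two relations determine \(s_0\) and \(F\), and the third then yields \(c_3\). A minimal numerical sketch, using the \(s_0^2/2\) coefficient that appears in the in-medium relation (33) and illustrative \(K^{*}\)-like inputs (not the paper's fitted values):

```python
import math

# Vacuum FESR system (schematically):
#   F          = d * (c0 * s0 + c1)
#   F * m**2   = d * (c0 * s0**2 / 2 - c2)   # s0^2/2 as in the in-medium Eq. (33)
#   F * m**4   = d * (c0 * s0**3 / 3 + c3)
# Eliminating F from the first two gives a quadratic in s0.
def solve_fesr(m, d, c0, c1, c2):
    # c0/2 * s0^2 - c0*m^2 * s0 - (c2 + c1*m^2) = 0
    a, b, c = c0 / 2.0, -c0 * m**2, -(c2 + c1 * m**2)
    s0 = (-b + math.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)  # physical (larger) root
    F = d * (c0 * s0 + c1)
    c3 = F * m**4 / d - c0 * s0**3 / 3.0
    return s0, F, c3

# Illustrative, roughly K*-like numbers in GeV units (not the paper's fit):
s0, F, c3 = solve_fesr(m=0.892, d=3.0, c0=1.113, c1=-0.0676, c2=-0.02)
```

The same elimination strategy applies to the in-medium system (32)-(34), with the medium-modified coefficients as input.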
However, for the strange axial vector meson channel, the spectral density \(R^{K_{1}}_{phen}(s)\) receives a contribution from the pseudoscalar \(K\) meson as well as from the axial vector \(K_{1}\) meson resonance, and we parameterize the spectral density for the strange axial vector \(K_{1}\) meson as \[R^{K_{1}}_{phen}(s)=f^{2}_{K}\,\delta(s-m^{2}_{K})+F_{K_{1}}\,\delta(s-m^{2}_{K_{1}})+d_{K_{1}}c^{K_{1}}_{0}\,\Theta(s-s^{K_{1}}_{0}) \tag{35}\] with \(f_{K}\) and \(m_{K}\) being the kaon decay constant and kaon mass respectively. A similar parameterization scheme has been used for the non-strange axial vector \(A_{1}\) meson, which gets an additional contribution to the spectral density from the pseudoscalar pion (\(\pi\)) [57; 61; 62]. The FESRs for the strange axial vector meson in the nuclear medium are given as \[F^{*}_{K_{1}} = d_{K_{1}}(c^{K_{1}}_{0}s^{*K_{1}}_{0}+c^{K_{1}}_{1})-f^{2}_{K} \tag{36}\] \[F^{*}_{K_{1}}\,m^{*2}_{K_{1}} = d_{K_{1}}\Big{(}\frac{(s^{*K_{1}}_{0})^{2}c^{K_{1}}_{0}}{2}-c^{*K_{1}}_{2}\Big{)}-f^{2}_{K}m^{2}_{K} \tag{37}\] \[F^{*}_{K_{1}}\,m^{*4}_{K_{1}} = d_{K_{1}}\Big{(}\frac{(s^{*K_{1}}_{0})^{3}c^{K_{1}}_{0}}{3}+c^{*K_{1}}_{3}\Big{)}-f^{2}_{K}m^{4}_{K} \tag{38}\] The above FESRs are solved simultaneously to find the in-medium masses of the axial vector \(K_{1}\) mesons. Next, we study the effects of strong magnetic fields on the masses of the vector \(K^{*}\) meson: (1) Whenever a charged particle moves in an external magnetic field, the Lorentz force comes into play and the particle's momentum components perpendicular to the direction of the magnetic field are quantized into discrete levels, characterized by an integer label (\(n\)) called the Landau levels, while the particle's momentum along the direction of the magnetic field remains unaffected. 
Therefore the energy levels of charged pseudoscalar (spin-0) and vector (spin-1) mesons in the presence of a magnetic field are discretized into different Landau levels [42] and are given by \[m_{P}(eB)=\sqrt{m^{2}_{P}+(2n+1)|eB|+p^{2}_{z}} \tag{39}\] \[m_{V}(eB)=\sqrt{m^{2}_{V}+(2n+1)|eB|+p^{2}_{z}+gS_{z}|eB|} \tag{40}\] where \(m_{P}\) and \(m_{V}\) are the masses in the absence of the magnetic field. The integer '\(n\)' labels the Landau levels, \(p_{z}\) is the continuous momentum in the z-direction, g is the Landé g-factor, and \(S_{z}\) is the spin quantum number along the direction of the magnetic field. The internal structure of the mesons is not considered in writing the above expressions [63]. In the present discussion, we will consider only the lowest Landau level (LLL), \(n=0\), contribution at zero momentum in the z-direction (\(p_{z}=0\)). The contribution from the higher Landau levels (\(n\geq 1\)) is more important in the weak magnetic field case, as these levels lie very close to the lowest Landau level and hence cannot be treated as part of the continuum. The three polarization states (\(S_{z}=+1,0,-1\)) of the charged vector \(K^{*+}\) meson receive different Landau contributions to their masses, and the pseudoscalar \(K\) meson mass also gets modified; these are given by \[m_{P}(eB)=\sqrt{m_{P}^{2}+eB} \tag{41}\] \[m_{V^{\perp}_{+1}}(eB)=\sqrt{m_{V}^{2}+3\,eB},\ m_{V^{||}}(eB)=\sqrt{m_{V}^{2}+eB},\ m_{V^{\perp}_{-1}}(eB)=\sqrt{m_{V}^{2}-eB} \tag{42}\] (2) The presence of an external magnetic field can induce mixing among certain spin states, since part of the spatial rotation symmetry is broken. Because of this, only the spin projection along the direction of the magnetic field persists as a good quantum number, and there is a mixing between the pseudoscalar meson \(\left((S,S_{z})=(0,0)\right)\) and the longitudinal part \(\left((S,S_{z})=(1,0)\right)\) of the vector meson. 
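The lowest-Landau-level masses of equations (41)-(42) can be sketched directly; the field strength \(eB\) below is an illustrative value in GeV\({}^{2}\), with vacuum \(K\) and \(K^{*+}\) masses from Section VI:

```python
import math

# Lowest-Landau-level (n = 0, p_z = 0) masses from Eqs. (41)-(42).
# Inputs in GeV; eB in GeV^2 (illustrative strong-field value).
def landau_masses(mP, mV, eB):
    """Pseudoscalar mass and the three vector polarization masses."""
    return {
        "P":      math.sqrt(mP**2 + eB),
        "V_Sz+1": math.sqrt(mV**2 + 3.0 * eB),
        "V_para": math.sqrt(mV**2 + eB),       # longitudinal, S_z = 0
        "V_Sz-1": math.sqrt(mV**2 - eB),
    }

m = landau_masses(mP=0.498, mV=0.892, eB=0.05)
```

Only the \(S_{z}=-1\) polarization is pushed below the vacuum mass, which is visible in the ordering of the returned values.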
This mixing is incorporated through an effective interaction vertex in the Lagrangian, ensuring Lorentz invariance, as [43; 44; 45] \[\mathcal{L}_{\gamma PV}=\frac{g_{PV}}{m_{avg}}e\tilde{F}_{\mu\nu}(\partial^{\mu}P)V^{\nu} \tag{43}\] where \(m_{avg}\) is the average of the masses of the pseudoscalar meson (\(m_{P}\)) and the longitudinal part of the vector meson (\(m_{V^{||}}\)). Here \(P\) and \(V^{\mu}\) are the pseudoscalar and vector meson fields and \(\tilde{F}_{\mu\nu}\) is the dual of the electromagnetic field strength tensor. We consider the magnetic field along the z-direction, so that the only non-zero components of the dual field strength tensor are \(\tilde{F}_{03}=-\tilde{F}_{30}=B\), and the mesons are taken to be at rest. We fit the dimensionless coupling parameter \(g_{PV}\) from the experimentally observed radiative decay width \(\Gamma(V\to P\gamma)\) [42; 44; 45]. The parameter \(g_{PV}\) depends strongly on the value of the magnetic field through the medium modifications of the masses of these mesons due to the Landau level contributions. From the equations of motion, obtained from the effective Lagrangian containing the kinetic and interaction terms, we find that only the longitudinal part \(V^{||}\) mixes with the pseudoscalar \(P\); there is no mixing for the transverse parts \(V^{\perp}\) of the vector meson. The energy eigenvalues for the physical states are given by [42; 43; 44; 45] \[m_{V^{||},P}^{PV}=\sqrt{\frac{1}{2}\Bigg{(}M_{+}^{2}+\frac{c_{PV}^{2}}{m_{avg}^{2}}\pm\sqrt{M_{-}^{4}+\frac{2c_{PV}^{2}M_{+}^{2}}{m_{avg}^{2}}+\frac{c_{PV}^{4}}{m_{avg}^{4}}}\Bigg{)}} \tag{44}\] where \(M_{+}^{2}=m_{V}^{2}+m_{P}^{2}\), \(M_{-}^{2}=m_{V}^{2}-m_{P}^{2}\), and \(c_{PV}=g_{PV}\,eB\). The effects of PV mixing on the charged meson masses can be studied separately with and without the Landau contributions to the masses, while the neutral mesons receive no contribution from Landau quantization. 
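Equation (44) can be checked numerically; the sketch below uses an illustrative coupling \(g_{PV}\) (in the text it is fitted to the radiative decay width) and verifies the expected level repulsion, with the eigenvalues reducing to \((m_{P}, m_{V})\) as \(eB\to 0\):

```python
import math

# PV mixing mass eigenvalues, Eq. (44):
#   m^2 = (1/2) [ M+^2 + c^2/m_avg^2 -/+ sqrt( M-^4 + 2 c^2 M+^2/m_avg^2 + c^4/m_avg^4 ) ]
# with M+^2 = mV^2 + mP^2, M-^2 = mV^2 - mP^2, c = g_PV * eB.
# g_PV and eB below are illustrative, not fitted values.
def pv_mixing(mP, mV, gPV, eB):
    m_avg = 0.5 * (mP + mV)
    c = gPV * eB
    Mp2, Mm2 = mV**2 + mP**2, mV**2 - mP**2
    root = math.sqrt(Mm2**2 + 2.0 * c**2 * Mp2 / m_avg**2 + c**4 / m_avg**4)
    m_dn = math.sqrt(0.5 * (Mp2 + c**2 / m_avg**2 - root))  # P-like branch
    m_up = math.sqrt(0.5 * (Mp2 + c**2 / m_avg**2 + root))  # V||-like branch
    return m_dn, m_up

mP_mix, mV_mix = pv_mixing(mP=0.498, mV=0.892, gPV=2.0, eB=0.05)
```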
(3) Also, due to the spin-magnetic-field interaction, there is mixing between different spin states through the \((-\mu_{i}.B)\) term. The external magnetic field, with quantum numbers \(J^{P}=1^{+}\), can induce mixing between a pseudoscalar (\(J^{P}=0^{-}\)) and a vector meson (\(J^{P}=1^{-}\)). The spin singlet state \(\big{|}00\big{\rangle}=\Big{(}\big{|}\uparrow\downarrow\big{\rangle}-\big{|}\downarrow\uparrow\big{\rangle}\Big{)}/\sqrt{2}\) is mixed with the longitudinal component \(\big{|}10\big{\rangle}=\Big{(}\big{|}\uparrow\downarrow\big{\rangle}+\big{|}\downarrow\uparrow\big{\rangle}\Big{)}/\sqrt{2}\) of the spin triplet state: \[\big{(}-\mu_{i}.B\big{)}\big{|}00\big{\rangle}=-\frac{gB}{4}\Big{(}\frac{q_{1}}{m_{1}}-\frac{q_{2}}{m_{2}}\Big{)}\big{|}10\big{\rangle} \tag{45}\] where the magnetic moment of the \(i\)th particle (with \(i=1,2\) corresponding to the quark and antiquark in this work) is \(\mu_{i}=gq_{i}\sigma_{i}/4m_{i}\), with \(\sigma_{i}\) the Pauli spin matrices. The Landé g-factor is taken to be 2, and \((q_{i},m_{i})\) are the charges and constituent masses of the quark/antiquark respectively. Since the \(\big{|}10\big{\rangle}\) and \(\big{|}00\big{\rangle}\) states are not pure eigenstates of the interaction Hamiltonian term \(\big{(}\mathcal{H}_{s}=-\mu_{i}.B\big{)}\), we consider a two dimensional eigensystem for the \(\big{|}10\big{\rangle}\) and \(\big{|}00\big{\rangle}\) states to determine the spin mixing effect [46]. The effective physical mass eigenvalues are then given by \[m_{V^{||},P}^{eff}=m_{V^{||},P}\pm\frac{\Delta E}{2}\big{(}\sqrt{1+\chi_{sB}^{2}}-1\big{)} \tag{46}\] where \(\Delta E=m_{V^{||}}-m_{P}\), and \(\chi_{sB}=\big{(}\frac{2}{\Delta E}\big{)}\Big{(}-\frac{gq_{1}B}{4m_{1}}+\frac{gq_{2}B}{4m_{2}}\Big{)}\). For the charged meson, the effects of the Landau levels are also included here in \(m_{V^{||},P}\). 
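A small sketch of equation (46); note that the two effective masses are pushed apart symmetrically, so their sum is unchanged. The constituent quark masses and field value below are illustrative assumptions:

```python
import math

# Spin-magnetic-field mixing, Eq. (46):
#   m_eff = m_{V||,P} +/- (dE/2) * (sqrt(1 + chi_sB^2) - 1)
# with dE = m_V|| - m_P and chi_sB = (2/dE)*(-g q1 B/(4 m1) + g q2 B/(4 m2)).
# Constituent masses (m1, m2) and the field strength are illustrative.
def spin_mixing(mP, mV, q1, m1, q2, m2, B, g=2.0):
    dE = mV - mP
    chi = (2.0 / dE) * (-g * q1 * B / (4.0 * m1) + g * q2 * B / (4.0 * m2))
    shift = 0.5 * dE * (math.sqrt(1.0 + chi**2) - 1.0)
    return mP - shift, mV + shift   # lower (P-like), upper (V||-like)

# K*+(s-bar u): q1 = +2/3 (u), q2 = +1/3 (s-bar), in units of e:
mP_eff, mV_eff = spin_mixing(mP=0.498, mV=0.892,
                             q1=2.0/3.0, m1=0.33, q2=1.0/3.0, m2=0.50, B=0.05)
```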
The transverse polarized states, \(\big{|}1+1\big{\rangle}=\big{|}\uparrow\uparrow\big{\rangle}\) and \(\big{|}1-1\big{\rangle}=\big{|}\downarrow\downarrow\big{\rangle}\), do not mix with any other states [47; 48; 64]. The energy eigenvalue equation for the transverse polarized states is given by \[\big{(}-\mu_{i}.B\big{)}\big{|}1\pm 1\big{\rangle}=\mp\frac{gB}{4}\Big{(}\frac{q_{1}}{m_{1}}+\frac{q_{2}}{m_{2}}\Big{)}\big{|}1\pm 1\big{\rangle} \tag{47}\] This contribution vanishes for meson states, such as charmonium and bottomonium, which have the same quark and antiquark flavor as constituents, since \(q_{1}=-q_{2}\) and \(m_{1}=m_{2}\). For the charged \(K^{*+}(\bar{s}u)\) meson, the charges are \(q_{1}=+2e/3\) and \(q_{2}=+e/3\), and for the neutral \(K^{*0}(\bar{s}d)\) meson, the charges are \(q_{1}=-e/3\) and \(q_{2}=+e/3\). The quark masses taken here are the constituent quark masses. 

## IV \(K^{*}\) self energy 

We now discuss the medium modifications of the mass and decay width of the \(K^{*}\) meson from its self energy at the one-loop level. The interaction term for a vector meson V decaying to two pseudoscalar mesons \(P_{1}\) and \(P_{2}\) can be written as [65] \[{\cal L}_{V\bar{P}_{1}P_{2}}=igV^{\mu}[\bar{P}_{1}(\partial_{\mu}P_{2})-(\partial_{\mu}\bar{P}_{1})P_{2}] \tag{48}\] The parameter '\(g\)' is known as the coupling strength of the decay channel. This interaction term of the effective Lagrangian generates a hadronic current, which couples with the \(K^{*}\) meson field to produce the self energy, given by \[\Pi^{\mu\nu}(p)=-ig_{K^{*}}^{2}\int\frac{d^{4}q}{(2\pi)^{4}}\frac{1}{q^{2}-m_{K}^{2}+i\epsilon}\Big{[}\frac{q^{\mu}q^{\nu}}{(q-p)^{2}-m_{\pi}^{2}+i\epsilon}\Big{]} \tag{49}\] where \(p\) is the 4-momentum carried by the vector \(K^{*}\) meson, the 4-momenta of the intermediate \(K\) and \(\pi\) being \(q\) and \((q-p)\) respectively, and we have integrated over the internal loop momentum. 
Figure 1: \(K^{*}\) meson self energy diagram at one loop level 

Writing \(\Pi_{K^{*}}(p^{2})=\frac{1}{3}g_{\mu\nu}\Pi^{\mu\nu}(p)\) and \(\mathring{m}_{K^{*}}\) as the bare \(K^{*}\) meson mass, the physical mass can be written as[66] \[m_{K^{*}}^{2}=\mathring{m}_{K^{*}}^{2}+Re\,\Pi_{K^{*}}(p^{2}=m_{K^{*}}^{2}) \tag{50}\] The decay width, at resonance, is related to the imaginary part of the propagator as \[\Gamma(K^{*}\to K\pi)=-Im\,\Pi_{K^{*}}(p^{2}=m_{K^{*}}^{2})/m_{K^{*}} \tag{51}\] The imaginary part of the propagator is calculated by the standard Cutkosky rule [67] and is given by \[Im\,\Pi_{K^{*}}(p^{2})=-\frac{g_{K^{*}}^{2}m_{K^{*}}^{2}}{64\pi}\Big{[}1-\frac{(m_{K}-m_{\pi})^{2}}{m_{K^{*}}^{2}}\Big{]}^{3/2}\Big{[}1-\frac{(m_{K}+m_{\pi})^{2}}{m_{K^{*}}^{2}}\Big{]}^{3/2} \tag{52}\] The scalar part of the in-medium self-energy, in the rest frame of \(K^{*}\), can be written as \[i\Pi_{K^{*}}(p)=-\frac{1}{3}g_{K^{*}}^{2}\int\frac{d^{4}q}{(2\pi)^{4}}q^{2}D_{K}(q)D_{\pi}(q-p) \tag{53}\] where \(D_{K}(q)=(q^{2}-m_{K}^{2}+i\epsilon)^{-1}\) and \(D_{\pi}(q)=\big{(}(q-p)^{2}-m_{\pi}^{2}+i\epsilon\big{)}^{-1}\) are the kaon and pion propagators, respectively. The real part of the self energy is written as \[Re\,\Pi_{K^{*}}(p)=-\frac{1}{3}g_{K^{*}}^{2}{\cal P}\int\frac{d^{3}q}{(2\pi)^{3}}q^{2}\frac{(E_{K}+E_{\pi})}{(2E_{K}E_{\pi})\big{(}(E_{K}+E_{\pi})^{2}-m_{K^{*}}^{2}\big{)}} \tag{54}\] Here \({\cal P}\) denotes the principal value of the integral and the energies are given by \(E_{K}=\sqrt{q^{2}+m_{K}^{2}}\) and \(E_{\pi}=\sqrt{q^{2}+m_{\pi}^{2}}\). The integral (54) is divergent and it needs to be regularized to avoid singularities. 
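Equations (51)-(52) let one fit the coupling \(g_{K^{*}}\) from the empirical vacuum width; the sketch below assumes a vacuum width of 47.3 MeV (an illustrative input close to the PDG \(K^{*}\) width):

```python
import math

# Vacuum decay width from Eqs. (51)-(52):
#   Gamma = (g^2 m / 64 pi) * [1 - (mK - mpi)^2/m^2]^{3/2}
#                           * [1 - (mK + mpi)^2/m^2]^{3/2}
def width(g, m, mK, mpi):
    f1 = (1.0 - (mK - mpi)**2 / m**2) ** 1.5
    f2 = (1.0 - (mK + mpi)**2 / m**2) ** 1.5
    return g**2 * m / (64.0 * math.pi) * f1 * f2

GAMMA_VAC = 0.0473   # GeV, assumed vacuum width (illustrative)
# Since Gamma scales as g^2, the coupling follows directly:
g_fit = math.sqrt(GAMMA_VAC / width(1.0, 0.892, 0.498, 0.139))
```

The fitted coupling comes out close to the value \(g_{K^{*}K\pi}\approx 6.5\) commonly quoted in the literature.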
We use a phenomenological form factor approach with a cutoff parameter \(\Lambda_{c}\) [68; 69], and the integral then becomes \[Re\,\Pi_{K^{*}}(p)=-\frac{1}{3}g_{K^{*}}^{2}{\cal P}\int_{0}^{\Lambda_{c}}\frac{d^{3}q}{(2\pi)^{3}}q^{2}\Big{(}\frac{\Lambda_{c}^{2}+m_{K^{*}}^{2}}{\Lambda_{c}^{2}+4E_{K}^{2}}\Big{)}^{2}\Big{(}\frac{\Lambda_{c}^{2}+m_{K^{*}}^{2}}{\Lambda_{c}^{2}+4E_{\pi}^{2}}\Big{)}^{2}\frac{(E_{K}+E_{\pi})}{(2E_{K}E_{\pi})\big{(}(E_{K}+E_{\pi})^{2}-m_{K^{*}}^{2}\big{)}} \tag{55}\] where the quantities \[u_{K,\pi}(q^{2})=\left(\frac{\Lambda_{c}^{2}+m_{K^{*}}^{2}}{\Lambda_{c}^{2}+4E_{K,\pi}^{2}}\right)^{2} \tag{56}\] are called the vertex form factors. In a simple picture, the cutoff parameter is related to the overlap region of the parent and daughter particles at the vertex and depends on the size of their wave functions, as in [69; 70]. We therefore also calculate the vertex form factors using a \({}^{3}P_{0}\) model for quark-antiquark pair creation, with Gaussian wave functions for the mesons. The root mean square radii from the two form factors are then compared to get an estimate of the cutoff mass \(\Lambda_{c}\), as done in [69] for the \(J/\psi\) meson; a similar analysis has been carried out for the \(\phi\) meson in [66]. To include the uncertainty in the estimated value, we take the range of \(\Lambda_{c}\) from 1 to 4 GeV. The coupling parameter \(g_{K^{*}}\) is determined separately, from the empirical decay width of the \(K^{*}\) meson in vacuum, for each particular value of the cutoff parameter \(\Lambda_{c}\). We then fix the bare mass of the \(K^{*}\) meson through relation (50), from the vacuum mass of the \(K^{*}\) meson. 

## V Decay width using the \({}^{3}P_{0}\) model 

(1) First, we study the decay width of the vector \(K^{*}\) meson to two pseudoscalar mesons, namely the kaon and the pion, using a \({}^{3}P_{0}\) model [21; 22]. This model was first introduced by L. 
Micu to calculate the decay rates of various meson resonances [19] and then extended further by A. Le Yaouanc and collaborators to explain the strong decay amplitudes of mesons and baryons [20]. In this model, a light quark-antiquark pair is assumed to be produced in the \({}^{3}P_{0}\) state, having the vacuum quantum numbers \(J^{PC}=0^{++}\). The quark (antiquark) of this produced pair combines with the antiquark (quark) of the parent meson, which is assumed to be initially at rest, to give the final state mesons. The matrix element for the general decay \(A\to BC\) in the \({}^{3}P_{0}\) model is given as \[{\cal M}_{A\to BC}=\left\langle A\big{|}\gamma\left[\bar{q}_{s}q_{s}\right]^{{}^{3}P_{0}}\big{|}BC\right\rangle \tag{57}\] where \(\gamma\) is the coupling strength, related to the probability of producing a quark-antiquark pair in the \({}^{3}P_{0}\) state. The wave functions of the produced \(\bar{q}q\) pair are chosen to be simple harmonic oscillator (SHO) wave functions in calculating the strong decay amplitude. In the present case of a vector \(K^{*}\) meson decaying to two pseudoscalar mesons, the matrix element for the \(1(^{3}S_{1})\to 1(^{1}S_{0})+1(^{1}S_{0})\) channel, with the proper flavor factor \(I_{f}\), is given by \[{\cal M}_{K^{*}\to K\pi}=\frac{\gamma_{K^{*}}}{\pi^{1/4}\beta_{avg}^{1/2}}\bigg{[}\frac{-2^{4}}{3^{1/2}}\frac{r^{3/2}(1+r^{2})x}{(1+2r^{2})^{5/2}}\bigg{]}exp\Big{(}\frac{-x^{2}}{4(1+2r^{2})}\Big{)}I_{f} \tag{58}\] where the ratio \(r=\frac{\beta_{K^{*}}}{\beta_{avg}}\), with \(\beta_{K^{*}}\) the harmonic oscillator potential strength for the \(K^{*}\) meson and \(\beta_{avg}\) the average of the harmonic oscillator potential strengths of the two daughter mesons. Taking \(r\neq 1\), i.e. \(\beta_{K^{*}}\neq\beta_{avg}\), allows one to account for the different sizes of the daughter and parent mesons. 
The factor \(I_{f}\) gives the flavor weight factor contributions from the two Feynman decay diagrams, called \(d_{1}\) and \(d_{2}\) diagrams, of the parent meson [21] and is taken to be \(1/(2\sqrt{2})\) for the above decay. The quantity '\(x\)' is the scaled momentum, given by \(p_{K}/\beta_{avg}\), carried by the daughter meson. The decay width is then given by \[\Gamma(K^{*}\to K\pi)=2\pi\frac{p_{K}E_{K}E_{\pi}}{M_{K^{*}}}\sum_{LS}\left|{\cal M }_{LS}\right|^{2} \tag{59}\] where \(p_{K}\) is the 3-momentum, given by \[p_{K}=\sqrt{\frac{M_{K^{*}}^{2}}{4}-\frac{M_{K}^{2}+M_{\pi}^{2}}{2}+\left( \frac{M_{K}^{2}-M_{\pi}^{2}}{2M_{K^{*}}}\right)^{2}} \tag{60}\] and the energies are given by \(E_{K}=\sqrt{p_{K}^{2}+m_{K}^{2}}\) and \(E_{\pi}=\sqrt{p_{\pi}^{2}+m_{\pi}^{2}}\). As the decaying meson is assumed to be at rest, momentum conservation gives \(p_{K}=p_{\pi}\). The masses \(M_{K^{*}}\) and \(M_{K}\) are the in-medium masses, from which we calculate the in-medium decay width. In this work, we do not take into account the medium modification of the pion mass. (2) The axial vector \(K_{1}(1270)\) meson is not a pure \(1^{3}P_{1}\) (\(J^{PC}=1^{++}\)) or \(1^{1}P_{1}\) (\(J^{PC}=1^{+-}\)) state, but the physically observed \(K_{1}(1270)\) and \(K_{1}(1400)\) are a mixture of the two non mass eigenstates, \(\left|K_{1A}\right\rangle\) and \(\left|K_{1B}\right\rangle\), of the two strange axial vector nonets \(1^{3}P_{1}\) and \(1^{1}P_{1}\) respectively[32; 71], \[\left|K_{1}(1270)\right\rangle =sin\,\theta_{K_{1}}\big{|}K_{1A}\big{\rangle}+cos\,\theta_{K_{1} }\big{|}K_{1B}\big{\rangle} \tag{61}\] \[\left|K_{1}(1400)\right\rangle =cos\,\theta_{K_{1}}\big{|}K_{1A}\big{\rangle}-sin\,\theta_{K_{1} }\big{|}K_{1B}\big{\rangle} \tag{62}\] where \(\theta_{K_{1}}\) is the mixing angle. 
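Two quick numerical checks on this section: the daughter momentum of equation (60), which is equivalent to the standard Källén-function form, and the orthonormality of the mixing in equations (61)-(62), here for an illustrative \(\theta_{K_{1}}=62^{\circ}\):

```python
import math

# Daughter momentum in the parent rest frame, Eq. (60):
def p_decay(M, m1, m2):
    return math.sqrt(M**2 / 4.0 - (m1**2 + m2**2) / 2.0
                     + ((m1**2 - m2**2) / (2.0 * M))**2)

pK = p_decay(0.892, 0.498, 0.139)        # K* -> K pi, vacuum masses (GeV)

# Mixing of Eqs. (61)-(62): coefficients of (|K1A>, |K1B>) in each state.
th = math.radians(62.0)                  # illustrative mixing angle
K1270 = ( math.sin(th),  math.cos(th))
K1400 = ( math.cos(th), -math.sin(th))
```

The rotation structure guarantees that the two physical states stay normalized and orthogonal for any \(\theta_{K_{1}}\).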
The matrix element for the axial vector meson decay channel (\(K_{1}\to K^{*}\pi\)) is given as \[{\cal M}_{LS}=\frac{\gamma_{K_{1}}}{\pi^{1/4}\beta_{avg}^{1/2}}{\cal P}_{LS}(x,r)\,exp\Big{(}\frac{-x^{2}}{4(1+2r^{2})}\Big{)}I_{f} \tag{63}\] where the polynomials for various decays are given as[21; 22] \[{\cal P}_{01}^{\big{(}1^{3}P_{1}\to 1^{3}S_{1}+1^{1}S_{0}\big{)}}=\Bigg{(}2 ^{5}\Big{(}\frac{r}{1+2r^{2}}\Big{)}^{5/2}\Big{(}1-\frac{1+r^{2}}{3(1+2r^{2})} x^{2}\Big{)}\Bigg{)} \tag{64}\] \[{\cal P}_{21}^{\big{(}1^{3}P_{1}\to 1^{3}S_{1}+1^{1}S_{0}\big{)}}=-\sqrt{ \frac{5}{6}}\Bigg{(}\frac{1}{\sqrt{15}}\frac{r^{5/2}2^{5}(1+r^{2})}{(1+2r^{2}) ^{7/2}}x^{2}\Bigg{)} \tag{65}\] \[{\cal P}_{01}^{\big{(}1^{1}P_{1}\to 1^{3}S_{1}+1^{1}S_{0}\big{)}}=-\frac{1}{ \sqrt{2}}\Bigg{(}2^{5}\Big{(}\frac{r}{1+2r^{2}}\Big{)}^{5/2}\Big{(}1-\frac{1+r ^{2}}{3(1+2r^{2})}x^{2}\Big{)}\Bigg{)} \tag{66}\] \[{\cal P}_{21}^{\big{(}1^{1}P_{1}\to 1^{3}S_{1}+1^{1}S_{0}\big{)}}=-\sqrt{ \frac{5}{3}}\Bigg{(}\frac{1}{\sqrt{15}}\frac{r^{5/2}2^{5}(1+r^{2})}{(1+2r^{2} )^{7/2}}x^{2}\Bigg{)} \tag{67}\] with \(r=\frac{\beta_{K_{1}}}{\beta_{avg}}\), and \(\beta_{avg}\) being the average of harmonic oscillator potential strength of the daughter \(K^{*}\) and \(\pi\) mesons. The flavor factor \(I_{f}\) is scaled together with the coupling constant \(\gamma_{K_{1}}\) in this work. The decay width for \((K_{1}\to K^{*}\pi)\) decay is then evaluated from equation (59) after changing the corresponding variables for this decay. (3) Further, we also use a phenomenological approach for the study of \(K_{1}\to K^{*}\pi\) decay. The interaction Lagrangian is constructed from the anti-symmetric tensor fields for the axial-vector and vector meson. 
The \(SU(3)\) matrices associated with the tensor fields for the two axial vector nonets \(a_{1}\) and \(b_{1}\), and for the vector meson nonet, are respectively given as \[A_{\mu\nu} = \begin{pmatrix}\frac{1}{\sqrt{2}}a_{1}^{0}+\frac{1}{\sqrt{2}}f_{1}(1285)&a_{1}^{+}&K_{1A}^{+}\\ a_{1}^{-}&-\frac{1}{\sqrt{2}}a_{1}^{0}+\frac{1}{\sqrt{2}}f_{1}(1285)&K_{1A}^{0}\\ K_{1A}^{-}&\bar{K}_{1A}^{0}&f_{1}(1420)\end{pmatrix}_{\mu\nu}\] \[B_{\mu\nu} = \begin{pmatrix}\frac{1}{\sqrt{2}}b_{1}^{0}+\frac{1}{\sqrt{2}}h_{1}(1170)&b_{1}^{+}&K_{1B}^{+}\\ b_{1}^{-}&-\frac{1}{\sqrt{2}}b_{1}^{0}+\frac{1}{\sqrt{2}}h_{1}(1170)&K_{1B}^{0}\\ K_{1B}^{-}&\bar{K}_{1B}^{0}&h_{1}(1380)\end{pmatrix}_{\mu\nu}\] \[V_{\mu\nu} = \begin{pmatrix}\frac{1}{\sqrt{2}}\rho^{0}+\frac{1}{\sqrt{2}}\omega^{0}&\rho^{+}&K^{*+}\\ \rho^{-}&-\frac{1}{\sqrt{2}}\rho^{0}+\frac{1}{\sqrt{2}}\omega^{0}&K^{*0}\\ K^{*-}&\bar{K}^{*0}&\phi\end{pmatrix}_{\mu\nu}\] The mesons of the two nonets, \(a_{1}\) and \(b_{1}\) above, correspond to the \(J^{PC}=1^{++}\) and \(1^{+-}\) states respectively. The interaction vertex for the decay of an axial vector meson A (or B) to a vector meson V and a pseudoscalar meson P, ensuring SU(3) invariance and charge conjugation (C) and parity (P) conservation, is written as [72] \[\mathcal{L}_{AVP} = i\tilde{F}\big{\langle}A_{\mu\nu}[V^{\mu\nu},P]\big{\rangle} \tag{68}\] \[\mathcal{L}_{BVP} = \tilde{D}\big{\langle}B_{\mu\nu}\{V^{\mu\nu},P\}\big{\rangle} \tag{69}\] where \(P\) is the usual \(SU(3)\) matrix for the pseudoscalar meson nonet, considering the standard \(\eta-\eta^{{}^{\prime}}\) mixing. The free parameters, \(\tilde{F}\) and \(\tilde{D}\), are fitted globally from the available data on various decays and branching ratios of the members of the \(a_{1}\) and \(b_{1}\) nonets in reference [72], and the mixing angle is taken to be \(\theta_{K_{1}}\sim 62^{\circ}\). The symbol \(\big{\langle}...\big{\rangle}\) represents the SU(3) trace and the factor '\(i\)' ensures that the Lagrangian is Hermitian. 
The decay width for the \(K_{1}\to K^{*}\pi\) decay calculated within the phenomenological approach is given as \[\Gamma_{K_{1}\to K^{*}\pi}=\frac{q}{8\pi M_{K_{1}}^{2}}{|\mathcal{M}|}^{2} \tag{70}\] where the momentum \(q\) of the final state particles, in the rest frame of the parent particle \(K_{1}\), is given by \[q(M_{K_{1}},M_{K^{*}},M_{\pi})=\frac{1}{2M_{K_{1}}}\sqrt{\big{(}M_{K_{1}}^{2}-(M_{K^{*}}+M_{\pi})^{2}\big{)}\big{(}M_{K_{1}}^{2}-(M_{K^{*}}-M_{\pi})^{2}\big{)}} \tag{71}\] and the matrix element is given by \[\mathcal{M}=\frac{-2\lambda_{AVP}}{M_{K_{1}}M_{K^{*}}}(p^{\prime}.p\,\epsilon^{\prime}.\epsilon-\epsilon^{\prime}.p\,\epsilon.p^{\prime}) \tag{72}\] with \(p^{\prime},\epsilon^{\prime}\) and \(p,\epsilon\) being the momentum and polarization vector of the axial-vector \(K_{1}\) and vector \(K^{*}\) meson respectively. The decay width then becomes \[\Gamma_{K_{1}\to K^{*}\pi}=\frac{|\lambda_{AVP}|^{2}}{2\pi M_{K_{1}}^{2}}q\Big{(}1+\frac{2}{3}\frac{q^{2}}{M_{K^{*}}^{2}}\Big{)} \tag{73}\] The coefficients \(\lambda_{AVP}\) of the interaction vertex are given as \(\pm\frac{1}{\sqrt{2}}(cos\,\theta\,\tilde{D}+sin\,\theta\,\tilde{F})\), \(\pm(cos\,\theta\,\tilde{D}+sin\,\theta\,\tilde{F})\), \(\pm(cos\,\theta\,\tilde{D}+sin\,\theta\,\tilde{F})\), and \(+\frac{1}{\sqrt{2}}(cos\,\theta\,\tilde{D}+sin\,\theta\,\tilde{F})\) for the \(K_{1}^{\pm}\to K^{*\pm}\pi^{0}\), \(K_{1}^{\pm}\to K^{*0}\pi^{\pm}\), \(K_{1}^{0}\to K^{*\pm}\pi^{\mp}\), and \(K_{1}^{0}\to K^{*0}\pi^{0}\) decays respectively, with \(\theta\equiv\theta_{K_{1}}\). 

## VI Results and Discussion 

Now we discuss the medium modifications of the \(K^{*}\) and \(K_{1}\) meson masses and decay widths in nuclear matter including the effects of isospin asymmetry, and the effects of the magnetic field on the vector \(K^{*}\) meson will also be discussed. 
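Before turning to the results, the kinematics of equations (71) and (73) can be sketched numerically; \(\lambda_{AVP}\) below is an illustrative coupling (in the text it is built from the fitted \(\tilde{F}\), \(\tilde{D}\) and the mixing angle):

```python
import math

# Final-state momentum in the parent rest frame, Eq. (71):
def q_cm(M, m1, m2):
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)

# Phenomenological K1 -> K* pi width, Eq. (73); lam is illustrative.
def gamma_K1(M_K1, M_Kstar, M_pi, lam):
    q = q_cm(M_K1, M_Kstar, M_pi)
    return abs(lam)**2 / (2.0 * math.pi * M_K1**2) * q * (1.0 + (2.0 / 3.0) * q**2 / M_Kstar**2)

q = q_cm(1.253, 0.892, 0.139)   # vacuum masses (GeV)
```

Note that the width scales exactly as \(|\lambda_{AVP}|^{2}\), so the in-medium modification enters only through the masses in \(q\) and the kinematic factor.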
Isospin asymmetry reflects the fact that, in heavy-ion collision experiments, the colliding nuclei contain more neutrons than protons. The isospin asymmetry factor (\(\eta\)) quantifies the amount of isospin asymmetry in the medium; for example, \(\eta\) = 0.5 corresponds to a medium consisting of neutrons only. We investigate the in-medium masses and decay widths at different values (\(\rho_{0}\), 2\(\rho_{0}\), and 4\(\rho_{0}\)) of medium density, where the nuclear matter saturation density is \(\rho_{0}\) = 0.15 \(fm^{-3}\). First, the medium modifications of the scalar fields (\(\sigma\), \(\zeta\), \(\delta\), and \(\chi\)) are calculated using a chiral effective model by solving the coupled equations of motion for these fields. These modifications are then included in the scalar gluon condensates, \(\big{<}\frac{\alpha_{s}}{\pi}G_{\mu\nu}^{a}G^{a\mu\nu}\big{>}\), and light quark condensates \(\left(\left<\bar{u}u\right>,\left<\bar{d}d\right>,\left<\bar{s}s\right>\right)\). Finally, we calculate the in-medium masses using the QCD sum rule (QCDSR) approach from the medium modified scalar quark and gluon condensates, and then the in-medium decay widths are calculated from the in-medium masses using the \({}^{3}P_{0}\) model. The in-medium masses and decay widths for the \(K^{*}\) meson are also calculated from the in-medium self energy of the \(K^{*}\) meson at the one-loop level, as discussed in section IV. The values of the constant parameters in sections II and III, taken in this work, are given as \(\alpha_{s}(Q^{2}=1\) GeV\({}^{2})=0.3551\), \(d=0.064018\), \(m_{\pi}\)=139 MeV, pion decay constant \(f_{\pi}=93.3\) MeV, \(m_{K}\)= 498 MeV, kaon decay constant \(f_{K}=122.143\) MeV. The vacuum values of the scalar fields are \(\sigma_{0}=-93.3\) MeV, \(\zeta_{0}=-106.6\) MeV, and \(\chi_{0}\)= 409.77 MeV. The vacuum mass values of \(K^{*+}\), \(K^{*0}\), and \(K_{1}^{\pm}\) mesons are taken to be 891.67, 895.55, and 1253 MeV respectively [24; 73]. 
The current quark masses, used in QCD sum rule approach, for the light quarks are taken as \(m_{u}=4\) MeV, \(m_{d}=7\) MeV, and \(m_{s}=150\) MeV. The isospin asymmetry parameter, in the medium consisting of nucleons only, is given by \(\eta=\frac{\rho_{n}-\rho_{p}}{2\rho_{B}}\), where \((\rho_{n},\rho_{p})\) are the vector number density of (neutrons, protons) respectively with \(\rho_{B}\) as total baryon density. The coefficient \(\kappa_{n}\) for \(K^{*}\) and \(K_{1}\) mesons is found from the vacuum FESRs with the values of vacuum masses and values of the scalar quark and gluon condensates. The obtained values are 0.73586, 2.30955, 22.53633, and 17.77003 for the \(K^{*+}\), \(K^{*0}\), \(K_{1}^{+}\), and \(K_{1}^{0}\) mesons respectively. ### In-medium masses using QCD sum rule approach In figure (2), the masses of \(K^{*}\) and \(K_{1}\) mesons are plotted as a function of nuclear medium density in terms of nuclear matter saturation density (\(\rho_{0}\)) for various isospin asymmetry parameters (\(\eta\)), calculated within the QCD sum rule approach. The mass values are observed to decrease monotonically as the medium density increases. This is because of the fact that the chiral condensates \(\left<\bar{q}q\right>\) and the gluon condensates \(\left<\frac{\alpha_{s}}{\pi}G_{\mu\nu}^{a}G^{a\mu\nu}\right>\) decrease in magnitude as a function of medium density. The non-strange quark condensates \(\left(\left<\bar{u}u\right>,\,\left<\bar{d}d\right>\right)\) decrease more in nuclear matter as compared to strange quark condensate \(\left<\bar{s}s\right>\) and gluon condensates[3]. There is observed to be a smaller mass drop in isospin asymmetric nuclear matter (\(\eta\neq 0\)) as compared to isospin symmetric matter (\(\eta=0\)) for the charged \(K^{*+}\) and \(K_{1}^{+}\) meson. 
The mass modifications for the neutral \(K^{*0}\) and \(K_{1}^{0}\) mesons are observed to be smaller as compared to the corresponding charged mesons, and the mass drop is slightly higher in \(\eta\neq 0\) matter, for the neutral mesons, as compared to isospin symmetric matter. Also, there is observed to be a sharper mass drop for smaller values of nuclear medium density up to around \(2\rho_{0}\). As nuclear medium density increases, the drop in strange quark condensates decreases even more, which results in smaller changes in \(K^{*}\) and \(K_{1}\) meson mass at higher densities. However, the isospin asymmetry effects are observed to be larger at higher medium density. The masses for these open strange mesons are given in tables 1 and 2 as calculated within the framework of the QCD sum rule approach. Figure 2: Masses of vector \(K^{*}\) meson and axial vector \(K_{1}\) meson plotted as a function of nuclear medium density (\(\rho_{B}\)) in terms of nuclear matter saturation density \(\rho_{0}\) for various values of isospin asymmetry parameter \(\eta\), within QCD sum rule approach at zero magnetic field. ### Effects of magnetic field on \(K^{*}\) masses We now discuss the effects of the magnetic field on the vector (\(K^{*}\)) and pseudoscalar (\(K\)) meson masses. First, due to the Lowest Landau Level (LLL) contribution, the masses of the electrically charged mesons will be modified as given by equations (41) and (42). The charge neutral mesons will not undergo Landau quantization. We study the effects of the magnetic field at zero medium density as well as at medium density \(\rho_{B}=\rho_{0}\). Furthermore, due to PV mixing effects incorporated through an effective interaction term, the masses of the longitudinal part of vector meson \(K^{*||}\) and of the pseudoscalar meson, both charged and neutral, are further modified as given by equation (44). We conclude that the effects of PV mixing alone are small, as can be seen from figures (3), (4), and (5). 
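For orientation, the Lowest Landau Level shift mentioned above can be illustrated with the generic form \(m_{\rm eff}=\sqrt{m^{2}+|eB|}\) for a charged spin-0 meson; the paper's equations (41) and (42) are not reproduced in this excerpt, so this form is an assumption used for illustration only (Python):

```python
import math

M_PI = 139.0  # MeV, pion mass as used in the text

def m_eff_LLL(m, eB_in_mpi2):
    """Lowest-Landau-level effective mass of a charged spin-0 meson,
    m_eff = sqrt(m^2 + |eB|).  Assumed generic form; the paper's
    eqs. (41)-(42) are not reproduced in this excerpt."""
    eB = eB_in_mpi2 * M_PI**2  # magnetic field in MeV^2
    return math.sqrt(m**2 + eB)

# Charged kaon at eB = 5 m_pi^2: the LLL contribution pushes the mass up.
print(m_eff_LLL(493.677, 5.0))
```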
Due to PV mixing, there is a level repulsion between vector \((K^{*||})\) and pseudoscalar \((K)\) meson, and the mass of vector increases, while the mass of pseudoscalar \((K)\) decreases as a function of magnetic field. The coupling parameter \(g_{PV}=\sqrt{12\pi\,m_{avg}^{2}\big{(}\Gamma(V\to P\gamma)\big{)} \big{/}e^{2}\,p_{cm}^{3}}\) is a magnetic field dependent quantity due to the Landau-level contribution to the masses and its value increases for charged vector meson \((K^{*+})\) as a function of magnetic field \((eB)\). So the effect of PV mixing increases even more at higher magnetic field values after including the contributions from Lowest Landau Level. The effects of PV mixing are also studied on the masses of open bottom mesons and Upsilon states in magnetized nuclear matter which, in turn, affects the decay width of \(\Upsilon(4S)\) to \(\bar{B}B\) pair appreciably[74]. The effects of spin-magnetic field interaction are also shown in figures (3), (4), and (5) for medium density \(\rho_{B}=0\), \(\rho_{0}(\eta=0)\), and \(\rho_{0}(\eta=0.5)\) respectively. The values are plotted after including the combined effects of Lowest Landau Level contribution and spin-magnetic field interaction (shown as label 'LLL + spin') for charged pseudoscalar \(K\) meson and longitudinal polarization \(K^{*||}\) of vector meson. The transverse polarized states are also modified, due to spin-magnetic field interaction, in the presence of magnetic field due to unequal quark masses and/or charges for the \(K^{*}\) meson. This is known as anomalous Zeeman splitting, shown as 'Z', which is due to the non-vanishing intrinsic magnetic moment of the meson. This splitting contributes to both charged as well as neutral \(K^{*}\) meson masses. The constituent quark masses are taken as \(m_{u,d}=330\) MeV and \(m_{s}=480\) MeV. The mass of vector \(K^{*||}\) meson increases and that of pseudoscalar \((K)\) meson decreases when we consider only the spin mixing effects. 
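The \(g_{PV}\) expression quoted above can be evaluated numerically at the vacuum point; note that the radiative width \(\Gamma(K^{*+}\to K^{+}\gamma)\approx 50\) keV used below is an assumed, PDG-like input rather than a value taken from this text (Python):

```python
import math

ALPHA = 1.0 / 137.036          # fine-structure constant
E2 = 4.0 * math.pi * ALPHA     # e^2 in natural (Heaviside-Lorentz) units

def g_PV(m_V, m_P, Gamma_VPgamma):
    """g_PV = sqrt(12*pi*m_avg^2*Gamma(V -> P gamma) / (e^2 * p_cm^3)),
    as quoted in the text; p_cm is the photon momentum in the V rest frame."""
    m_avg = 0.5 * (m_V + m_P)
    p_cm = (m_V**2 - m_P**2) / (2.0 * m_V)
    return math.sqrt(12.0 * math.pi * m_avg**2 * Gamma_VPgamma / (E2 * p_cm**3))

# Assumed radiative width ~0.050 MeV for K*+ -> K+ gamma (illustrative input).
print(g_PV(891.67, 493.677, 0.050))
```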
For the charged vector \(\big{|}1+1\big{>}\) state, there is a decrease in mass due to the spin interaction term, while the \(\big{|}1-1\big{>}\) state has a positive mass shift. However, for the neutral vector \(\big{|}1+1\big{>}\)\(\Big{(}\big{|}1-1\big{>}\Big{)}\) state, there is observed to be an increase (decrease) in the masses due to the spin interaction term, which is due to the polarity of constituent quark/antiquark charges. The effects of medium density and isospin asymmetry on the pseudoscalar \(K\) meson mass are calculated within a chiral SU(3) model[75; 76]. The effects of magnetic field are observed to be quite appreciable and should be visible in the mass spectra of these particles and other observables in heavy ion collision experiments. Figure 3: Masses of vector (\(K^{*}\)) and pseudoscalar (K) meson plotted as a function of magnetic field eB in terms of \(m_{\pi}^{2}\) at zero medium density. Here ‘LLL’ represents the Lowest Landau Level contribution, ‘PV’ represents the pseudoscalar-vector meson mixing, ‘spin’ represents the spin mixing, and ‘Z’ represents the Zeeman splitting. The three polarization states of vector \(K^{*}\) meson \(\left(\left|1+1\right>,\left|10\right>,\left|1-1\right>\right)\) are written as \(K_{+1}^{\ast\perp},K^{\ast||},K_{-1}^{\ast\perp}\). ### In-medium masses and decay widths from self-energy The in-medium mass of \(K^{*}\) meson is calculated from the medium modified self-energy at the one loop level. The effects of the medium are incorporated through in-medium \(K\) meson masses, which are calculated in the chiral SU(3) model[75; 76]. The coupling parameter \(g_{K^{*}}\) is fixed for various decay channels from the vacuum decay width given by equation (51). Then we fix the bare mass of \(K^{*}\) meson using equation (50) for a particular value of cut-off parameter \(\Lambda_{c}\) by taking vacuum mass values for mesons. 
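The two-step procedure described above (fix the bare mass from the vacuum mass, then re-solve with in-medium inputs) can be sketched schematically. Since equation (50) is not reproduced in this excerpt, the self-energy below is a toy stand-in; only the fixed-point logic mirrors the text (Python):

```python
def solve_mass(m_bare2, Pi, tol=1e-10):
    """Solve m^2 = m_bare^2 + Pi(m^2) by fixed-point iteration
    (schematic stand-in for eq. (50))."""
    m2 = m_bare2
    for _ in range(1000):
        new = m_bare2 + Pi(m2)
        if abs(new - m2) < tol:
            break
        m2 = new
    return m2 ** 0.5

# Toy self-energies: purely illustrative coefficients, not the paper's loop.
def Pi_vacuum(m2):  return 0.02 * m2
def Pi_medium(m2):  return 0.05 * m2

# Step 1: fix the bare mass so the vacuum equation reproduces m_K* = 891.67 MeV.
m_vac = 891.67
m_bare2 = m_vac**2 * (1 - 0.02)   # inverts m^2 = m_bare^2 + 0.02 m^2

# Step 2: re-solve with the in-medium self-energy; the mass shifts accordingly.
m_med = solve_mass(m_bare2, Pi_medium)
print(m_med)
```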
After fixing the bare mass \(\dot{m}_{K^{*}}\) and coupling parameter \(g_{K^{*}}\), we can calculate the in-medium masses for \(K^{*}\) meson from equation (50) by inserting in-medium \(K\) meson masses. The mass shifts for \(K^{*+}\) (from \(K^{+}\pi^{0}\) and \(K^{0}\pi^{+}\) loop) and \(K^{*0}\) (from \(K^{0}\pi^{0}\) and \(K^{+}\pi^{-}\) loop) mesons in the nuclear medium at zero magnetic field, for various values of cut-off parameter, are given in tables 3(a) and 3(b) respectively. We observe a positive mass shift of 27.8 MeV for the charged \(K^{*+}\) meson for isospin symmetric nuclear matter at density \(\rho=\rho_{0}\) for cut-off parameter \(\Lambda_{c}=1000\) MeV. A positive self energy indicates that the nuclear mean field provides repulsion to the \(K^{*+}\) meson. This mass shift can be compared with a mass shift of around +40 MeV in self-consistent scattering amplitude calculations and a mass modification of +50 MeV, at \(\rho_{0}\), in the low density T\(\rho\) approximation [77]. Figure 4: Masses of vector (\(K^{*}\)) and pseudoscalar (K) meson plotted as a function of magnetic field eB in terms of \(m_{\pi}^{2}\) at medium density \(\rho_{B}\)= \(\rho_{0}\) in isospin symmetric (\(\eta=0\)) nuclear matter. Here ‘LLL’ represents the Lowest Landau Level contribution, ‘PV’ represents the pseudoscalar-vector meson mixing, ‘spin’ represents the spin mixing, and ‘Z’ represents the Zeeman splitting. The three polarization states of vector \(K^{*}\) meson \(\left(\left|1+1\right\rangle,\left|10\right\rangle,\left|1-1\right\rangle\right)\) are written as \(K^{*\perp}_{+1},K^{*\parallel},K^{*\perp}_{-1}\). Figure 5: Masses of vector (\(K^{*}\)) and pseudoscalar (K) meson plotted as a function of magnetic field eB in terms of \(m_{\pi}^{2}\) at medium density \(\rho_{B}\)= \(\rho_{0}\) in isospin asymmetric (\(\eta=0.5\)) nuclear matter. Here ‘LLL’ represents the Lowest Landau Level contribution, ‘PV’ represents the pseudoscalar-vector meson mixing, ‘spin’ represents the spin mixing, and ‘Z’ represents the Zeeman splitting. The three polarization states of vector \(K^{*}\) meson \(\left(\left|1+1\right\rangle,\left|10\right\rangle,\left|1-1\right\rangle\right)\) are written as \(K^{*\perp}_{+1},K^{*\parallel},K^{*\perp}_{-1}\). **Table III(a): Vector \(K^{*+}\) meson mass shifts (MeV) from \(K\pi\) loop for \(eB\) = 0** \begin{tabular}{|c||c|c|c||c|c||c|c||c|c|c|c|c|} \hline & \multicolumn{6}{c||}{\(\rho=\rho_{0}\)} & \multicolumn{6}{c|}{\(\rho=4\rho_{0}\)} \\ \hline & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c||}{\(\eta=0.5\)} & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c|}{\(\eta=0.5\)} \\ \hline \(\Lambda_{c}\) & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Delta\) m & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Delta\) m & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Delta\) m & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Delta\) m \\ \hline 1000 & 18.0 & 9.8 & 27.8 & 11.9 & 11.8 & 23.7 & 5.3 & 6.0 & 11.3 & 3.6 & 8.7 & 12.3 \\ \hline 2000 & 10.3 & 4.2 & 14.5 & 5.9 & 6.1 & 12.0 & 4.3 & 12.0 & 16.3 & 1.6 & 7.6 & 9.2 \\ \hline 3000 & 5.5 & 2.6 & 8.1 & 6.0 & 5.0 & 11.0 & 9.2 & 2.2 & 11.4 & 3.9 & 10.9 & 14.8 \\ \hline 4000 & 7.1 & 8.1 & 15.2 & 7.1 & 8.0 & 15.1 & 11.8 & 8.9 & 20.7 & 6.3 & 11.0 & 17.3 \\ \hline \end{tabular} **Table III(b): Vector \(K^{*0}\) meson mass shifts (MeV) from \(K\pi\) loop for \(eB\) = 0** \begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c|c|c|} \hline & \multicolumn{6}{c||}{\(\rho=\rho_{0}\)} & \multicolumn{6}{c|}{\(\rho=4\rho_{0}\)} \\ \hline & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c||}{\(\eta=0.5\)} & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c|}{\(\eta=0.5\)} \\ \hline \(\Lambda_{c}\) & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Delta\) m & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Delta\) m & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Delta\) m & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Delta\) m \\ \hline 
1000 & 1.5 & 15.6 & 17.1 & 3.4 & 16.9 & 20.3 & 1.8 & 19.5 & 21.3 & 0.7 & 7.7 & 8.4 \\ \hline 2000 & 0.8 & 7.7 & 8.5 & 3.8 & 0.6 & 4.4 & 2.1 & 3.1 & 5.2 & 0.6 & 3.7 & 4.3 \\ \hline 3000 & 2.2 & 0.4 & 2.6 & 1.8 & 1.7 & 3.5 & 6.1 & 2.3 & 8.4 & 3.6 & 5.4 & 9.0 \\ \hline 4000 & 6.5 & 2.0 & 8.5 & 4.6 & 12.7 & 17.3 & 4.0 & 5.4 & 9.4 & 5.6 & 4.5 & 10.1 \\ \hline \end{tabular} The partial decay widths of various decay channels of vector \(K^{*+}\) and \(K^{*0}\) mesons to pseudoscalar kaon and pion are also given in Tables III(c) and III(d), where the effects of medium density and isospin asymmetry are studied. These are calculated using equation (51) by considering the in-medium masses of \(K^{*}\) as calculated from the self-energy loop and \(K\) meson masses calculated within the chiral model[76]. As the increase in \(K\) meson masses is larger than the change in the vector \(K^{*}\) meson masses, the in-medium decay width is seen to decrease when compared to its vacuum value. We also observe that the decay widths have a very small dependence on the cut-off parameter \(\Lambda_{c}\), which is an encouraging result. The wide range of cut-off parameter values was taken to account for the rough estimate made by comparing the vertex form factor \(u(q^{2})\) of equation (56) with the vertex form factor calculated in the \({}^{3}P_{0}\) model. 
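The weak cutoff dependence can be quantified directly from Table III(c): the total \(K^{*+}\) width at \(\rho=\rho_{0}\), \(\eta=0\) varies by only about 5% across \(\Lambda_{c}=1000\)-4000 MeV (Python, values transcribed from the table):

```python
# Total K*+ -> K pi widths (MeV) at rho = rho_0, eta = 0,
# transcribed from Table III(c) for Lambda_c = 1000..4000 MeV.
widths = {1000: 45.2, 2000: 43.7, 3000: 43.0, 4000: 44.0}

# Relative spread of the total width across the cutoff range.
spread = (max(widths.values()) - min(widths.values())) / min(widths.values())
print(f"relative spread across cutoffs: {spread:.1%}")
```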
**Table III(c): Vector \(K^{*+}\) meson partial decay widths from \(K\pi\) loop for \(eB\) = 0** \begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c|c|} \hline & \multicolumn{6}{c||}{\(\rho\) = \(\rho_{0}\)} & \multicolumn{6}{c|}{\(\rho\) = \(4\rho_{0}\)} \\ \hline & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c||}{\(\eta=0.5\)} & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c|}{\(\eta=0.5\)} \\ \hline \(\Lambda_{c}\) & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Gamma\) & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Gamma\) & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Gamma\) & \(K^{+}\pi^{0}\) & \(K^{0}\pi^{+}\) & \(\Gamma\) \\ \hline 1000 & 15.6 & 29.6 & 45.2 & 15.4 & 29.1 & 44.5 & 12.2 & 24.4 & 36.6 & 13.6 & 16.7 & 30.3 \\ \hline 2000 & 15.0 & 28.7 & 43.7 & 14.9 & 28.1 & 43.0 & 12.1 & 25.4 & 37.5 & 13.5 & 16.5 & 30.0 \\ \hline 3000 & 14.6 & 28.4 & 43.0 & 14.9 & 27.9 & 42.8 & 12.5 & 23.8 & 36.3 & 13.7 & 17.0 & 30.7 \\ \hline 4000 & 14.7 & 29.3 & 44.0 & 15.0 & 28.4 & 43.4 & 12.7 & 24.9 & 37.6 & 13.8 & 17.0 & 30.8 \\ \hline \end{tabular} **Table III(d): Vector \(K^{*0}\) meson partial decay widths from \(K\pi\) loop for \(eB\) = 0** \begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c|c|} \hline & \multicolumn{6}{c||}{\(\rho=\rho_{0}\)} & \multicolumn{6}{c|}{\(\rho=4\rho_{0}\)} \\ \hline & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c||}{\(\eta=0.5\)} & \multicolumn{3}{c||}{\(\eta=0\)} & \multicolumn{3}{c|}{\(\eta=0.5\)} \\ \hline \(\Lambda_{c}\) & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Gamma\) & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Gamma\) & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Gamma\) & \(K^{0}\pi^{0}\) & \(K^{+}\pi^{-}\) & \(\Gamma\) \\ \hline 1000 & 13.3 & 28.5 & 41.8 & 13.1 & 29.2 & 42.3 & 11.3 & 24.7 & 35.9 & 7.5 & 25.8 & 33.3 \\ \hline 2000 & 13.3 & 27.3 & 40.6 & 13.1 & 26.7 & 39.8 & 11.3 & 22.2 & 33.5 & 7.5 & 25.2 & 32.6 \\ \hline 3000 & 13.4 & 26.1 & 39.5 & 12.9 & 26.9 & 39.8 & 11.6 & 22.1 & 33.7 & 7.7 & 25.4 & 33.1 \\ \hline 4000 & 13.7 & 26.4 & 40.1 
& 13.2 & 28.6 & 41.7 & 11.4 & 22.6 & 34.0 & 7.8 & 25.3 & 33.1 \\ \hline \end{tabular} We also compute the effects of a strong magnetic field on vector \(K^{*}\) masses, through the self energy loop, from the pseudoscalar \(K\) meson masses calculated after including the effects of Landau quantization (for charged mesons only) and spin mixing (through the \(-\mu_{i}.B\) term) in the presence of magnetic field. These are tabulated in tables III(e) and III(f) at various values of cut-off parameter \(\Lambda_{c}\). We observe that the modifications in \(K^{*}\) meson masses due to the self energy loop are small when compared with the direct effects of magnetic field like Landau quantization and spin-magnetic field interaction. **Table III(e): Vector \(K^{*+}\) meson masses from \(K\pi\) loop at \(\rho_{B}=0\)** \begin{tabular}{|c||c|c|c|c|c||c|c|c|c|} \hline & \multicolumn{4}{|c||}{\(K^{+}\pi^{0}\)} & \multicolumn{4}{|c|}{\(K^{0}\pi^{+}\)} \\ \hline \(eB/m_{\pi}^{2}\)\(\Lambda_{c}\) & 1000 & 2000 & 3000 & 4000 & 1000 & 2000 & 3000 & 4000 \\ \hline 0 & 891.67 & 891.67 & 891.67 & 891.67 & 891.67 & 891.67 & 891.67 & 891.67 \\ \hline 2 & 887.81 & 894.24 & 894.72 & 892.58 & 889.75 & 904.00 & 894.77 & 922.53 \\ \hline 4 & 893.73 & 889.92 & 894.03 & 899.75 & 900.90 & 886.30 & 894.24 & 893.08 \\ \hline 6 & 891.73 & 891.70 & 889.53 & 890.44 & 897.61 & 894.75 & 895.77 & 893.40 \\ \hline 8 & 894.41 & 894.14 & 899.05 & 895.98 & 890.769 & 892.27 & 891.2 & 893.14 \\ \hline 10 & 895.75 & 895.60 & 898.77 & 895.94 & 895.51 & 898.78 & 888.47 & 880.38 \\ \hline \end{tabular} **Table III(f): Vector \(K^{*0}\) meson masses from \(K\pi\) loop at \(\rho_{B}=0\)** \begin{tabular}{|c||c|c|c|c|c||c|c|c|} \hline & \multicolumn{4}{|c||}{\(K^{0}\pi^{0}\)} & \multicolumn{4}{|c|}{\(K^{+}\pi^{-}\)} \\ \hline \(eB/m_{\pi}^{2}\)\(\Lambda_{c}\) & 1000 & 2000 & 3000 & 4000 & 1000 & 2000 & 3000 & 4000 \\ \hline 0 & 895.55 & 895.55 & 895.55 & 895.55 & 895.55 & 895.55 & 895.55 & 895.55 \\ \hline 2 & 883.93 & 
890.73 & 896.40 & 899.95 & 897.67 & 902.91 & 898.48 & 901.17 \\ \hline 4 & 893.47 & 896.23 & 896.75 & 905.33 & 903.26 & 905.13 & 894.39 & 895.69 \\ \hline 6 & 890.78 & 891.09 & 901.23 & 903.09 & 899.86 & 894.35 & 897.74 & 896.41 \\ \hline 8 & 890.47 & 888.51 & 897.80 & 898.71 & 904.42 & 901.33 & 902.63 & 904.85 \\ \hline 10 & 893.10 & 891.30 & 893.76 & 899.33 & 905.43 & 904.15 & 905.23 & 903.89 \\ \hline \end{tabular} ### In-medium decay width of vector \(K^{*}\) meson First, we study the decay width of vector \(K^{*}\) meson to two pseudoscalar mesons (kaon and pion), using a \({}^{3}P_{0}\) model, from the mass modifications of vector \(K^{*}\) meson and pseudoscalar kaon (\(K\)). The mass modifications for the pseudoscalar kaon have been studied using a chiral SU(3) model[76]. The medium modifications for the pion are not considered in this work so as to focus more on the strangeness degrees of freedom. The vacuum masses for pseudoscalar mesons are taken to be \(m_{K^{+}}=493.677\) MeV, \(m_{K^{0}}=497.611\) MeV, \(m_{\pi^{\pm}}=139.57039\) MeV, \(m_{\pi^{0}}=134.9768\) MeV [24; 73]. By putting in the vacuum values of decay widths \(\Gamma(K^{*}\to K\pi)\) and vacuum masses for various decay channels, we find the coupling strength \(\gamma\), related to the strength of the \({}^{3}P_{0}\) vertex, of each channel individually. The coupling strength parameter \(\gamma\) for the decays (\(K^{*+}\to K^{+}\pi^{0}\)), (\(K^{*+}\to K^{0}\pi^{+}\)), (\(K^{*0}\to K^{0}\pi^{0}\)), (\(K^{*0}\to K^{+}\pi^{-}\)) comes out to be 0.1242963, 0.1781849, 0.1198312, 0.1673111 MeV respectively. We assume a spherical harmonic oscillator potential for the wave functions of vector \(K^{*}\) as well as for pseudoscalar \(K\) and \(\pi\) mesons. Figure 6: Decay width of vector \(K^{*}\) meson plotted as a function of nuclear medium density in terms of nuclear matter saturation density \(\rho_{0}\) for various values of isospin asymmetry parameter \(\eta\) for the zero magnetic field case. 
The harmonic oscillator strength parameter for pion, fitted from its charge radius squared value (0.4 \(fm^{2}\)), is (211 MeV)\({}^{-1}\)[78; 79]. Then we find the harmonic oscillator strength parameter for the \(K\) meson by assuming the ratio \(\beta_{K}/\beta_{\pi}\) to be the same as the ratio of their charge radii, \((r_{ch})_{\pi}/(r_{ch})_{K}\). The charge radius for \(K\) and \(K^{*}\) mesons are taken to be \(\left((r_{ch})_{K}=0.56fm\right)\) and \(\left((r_{ch})_{K^{*}}=0.74fm\right)\), which gives \(\beta_{K}=(238.3\) MeV\()^{-1}\) and \(\beta_{K^{*}}\)= (184.84 MeV)\({}^{-1}\)[25]. By taking into account the mass modifications of vector \(K^{*}\) meson calculated within the QCD sum rule approach and of pseudoscalar \(K\) meson using the chiral SU(3) model, we calculate the in-medium decay width for the decay (\(K^{*}\to K\pi\)) for various sub-channels by using the \({}^{3}P_{0}\) model. The decay width for the \(K^{*}\) meson for various decay channels is plotted in figure (6) as a function of relative medium density (\(\rho_{B}/\rho_{0}\)) for different isospin asymmetry parameter (\(\eta=0,0.5\)) and is also given in table 4. We observed that the decay width for each decay channel decreases as the medium density is increased. This is because the mass for \(K^{*}\) meson decreases as a function of density, while the mass of pseudoscalar Kaon (both \(K^{+}\) and \(K^{0}\)) increases with density as calculated within the chiral SU(3) model. The effect of isospin asymmetry on the decay width is also more pronounced at higher density. The drop in decay width for the channels (\(K^{*+}\to K^{+}\pi^{0}\)) and (\(K^{*+}\to K^{0}\pi^{+}\)) is observed to be more for isospin symmetric case (\(\eta=0\)) as compared to isospin asymmetric case (\(\eta\neq 0\)), while for the (\(K^{*0}\to K^{+}\pi^{-}\)) and (\(K^{*0}\to K^{0}\pi^{0}\)) decay channels, the drop is observed to be more for isospin asymmetric case (\(\eta\neq 0\)). 
This is due to the fact that as the medium density is increased, the mass of charged \(K^{+}\) meson is modified less for isospin asymmetric matter as compared to isospin symmetric matter, while neutral \(K^{0}\) meson mass is modified greatly for isospin asymmetric matter[76]. **Table IV(a): Decay width of vector \(K^{*+}\) meson** \begin{tabular}{|l||l|l|l|l|} \hline & \(\Gamma(K^{*+}\to K^{+}\pi^{0})\) (MeV) & \(\Gamma(K^{*+}\to K^{0}\pi^{+})\) (MeV) \\ \hline \hline density & \(\eta=0\) & \(\eta=0.5\) & \(\eta=0\) & \(\eta=0.5\) \\ \hline \hline \(\rho_{B}=0\) & 16.98 & 16.98 & 33.77 & 33.77 \\ \hline \(\rho_{0}\) & 2.98 & 4.11 & 5.48 & 6.90 \\ \hline \(2\rho_{0}\) & 0.10 & 0.73 & 0.05 & 0.23 \\ \hline \end{tabular} **Table IV(b): Decay width of vector \(K^{*0}\) meson** \begin{tabular}{|l||l|l|l|l|} \hline & \(\Gamma(K^{*0}\to K^{0}\pi^{0})\) (MeV) & \(\Gamma(K^{*0}\to K^{+}\pi^{-})\)(MeV) \\ \hline \hline density & \(\eta=0\) & \(\eta=0.5\) & \(\eta=0\) & \(\eta=0.5\) \\ \hline \hline \(\rho_{B}=0\) & 15.87 & 15.87 & 31.31 & 31.31 \\ \hline \(\rho_{0}\) & 4.25 & 3.63 & 8.45 & 7.99 \\ \hline \(2\rho_{0}\) & 0.91 & 0.30 & 1.76 & 1.69 \\ \hline \end{tabular} Furthermore, we have studied the effects of strong magnetic field on the decay widths of each decay channel by considering the effects from each polarization state individually. The coupling parameter \(\gamma\) of the \({}^{3}P_{0}\) model is fitted individually for each polarization state. In figure (7), the partial decay widths of vector \(K^{*}\) meson are plotted as a function of magnetic field (eB) after including the effects of Landau quantization and spin mixing (\(-\mu_{i}.B\) term). The qualitative behavior of decay width, as a function of magnetic field, reflects the medium dependence of the masses of mesons involved in that decay channel. 
As the neutral \(K^{*0}\) vector meson does not undergo Landau quantization, the decay width modifications for transverse polarized states are due to medium modifications of vector mesons due to Zeeman splitting and pseudoscalar meson mass due to spin mixing, while for the longitudinal component, it is due to medium modifications of both longitudinal part of vector meson and pseudoscalar particle due to spin mixing. In figures (8) and (9), the effects of magnetic field are shown after calculating the masses from QCDSR at nuclear medium density equal to nuclear matter saturation density (\(\rho_{0}\)), in isospin symmetric (\(\eta=0\)) and asymmetric (\(\eta=0.5\)) matter respectively. ### In-medium decay width of Axial vector \(K_{1}\) meson The two physical strange axial vector mesons \(K_{1}(1270)\) and \(K_{1}(1400)\) are the admixture of two strange members, \(K_{1A}\) and \(K_{1B}\), of two axial vector nonets. The larger branching ratio of \(K_{1}(1400)\) to \(K^{*}\pi\) as compared to \(\rho K\) decay channel, and of \(K_{1}(1270)\) to \(\rho K\) as compared to \(K^{*}\pi\) decay mode, indicates that there is a large mixing between \(K_{1A}\) and \(K_{1B}\) to give the physical mesons[72]. We have studied the in-medium \(K_{1}\to K^{*}\pi\) decay width from the medium modifications of the \(K_{1}\) and \(K^{*}\) meson masses, calculated within the QCD sum rule approach. In a phenomenological approach, the decay width is given by equation (73). The parameters \(\tilde{F}\) and \(\tilde{D}\) are fitted in [72] from the various observed decay channels of \(K_{1}(1270)\) and \(K_{1}(1400)\) mesons as 1400 MeV and \(-1250\) MeV respectively. This fitting leads to the vacuum partial decay widths for the \(K_{1}^{+}\to K^{*+}\pi^{0}\), \(K_{1}^{+}\to K^{*0}\pi^{+}\), \(K_{1}^{0}\to K^{*+}\pi^{-}\), and \(K_{1}^{0}\to K^{*0}\pi^{0}\) decays as \(6.2789,12.3173,12.4761\), and \(6.1998\) MeV respectively. 
The in-medium decay width is observed to increase as the medium density is increased, as shown in figure (10). This is due to a larger relative decrease in the \(K^{*}\) meson mass as compared to the relative mass decrease for the \(K_{1}\) meson within the QCDSR approach. As the density of the nuclear medium increases, we observe a significant increase in the effects of isospin asymmetry of the matter. Moreover, the \(K_{1}\) meson decay width is also studied using the \({}^{3}P_{0}\) model. The properties of strange axial vector mesons are still not very clear[80], and we take the vacuum partial decay widths to be the same as calculated within the phenomenological approach. The decay widths of an axial vector (\(A_{1}\rightarrow\rho\pi\)) are also studied in a phenomenological approach in reference [62]. The main idea here is to analyze the qualitative behavior of the strange axial vector mesons. Figure 8: Partial decay widths of vector \(K^{*}\) meson plotted as a function of magnetic field (eB) at medium density equal to nuclear matter saturation density (\(\rho_{0}\)) with \(\eta=0\). The effects of Landau quantization and spin mixing are included. The decay widths for different polarization states \(\left(\left|1+1\right\rangle,\left|10\right\rangle,\left|1-1\right\rangle\right)\) are written as \(\perp(+1)\), \(\left|\,\right|\), and \(\perp(-1)\) respectively. Figure 9: Partial decay width of vector \(K^{*}\) meson plotted as a function of magnetic field (eB) at medium density equal to nuclear matter saturation density (\(\rho_{0}\)) with \(\eta=0.5\). The effects of Landau quantization and spin-magnetic field interaction are included. 
The decay widths for different polarization states \(\left(\left|1+1\right\rangle,\left|10\right\rangle,\left|1-1\right\rangle\right)\) are written as \(\perp\) (\(+1\)), \(\left|\ \right|\), and \(\perp\) (\(-1\)) respectively. In figure (11), we have plotted the partial decay widths in the nuclear medium calculated in the \({}^{3}P_{0}\) model. The variation trends in the decay width are caused by the interplay between the mass modifications of the involved \(K^{*}(K^{*+}\) or \(K^{*0})\) and \(K_{1}(K_{1}^{+}\) or \(K_{1}^{0})\) mesons. There is observed to be a smaller variation as a function of medium density when compared with the results of the phenomenological approach. Figure 10: Decay width of axial vector \(K_{1}\) meson, plotted as a function of nuclear medium density (\(\rho_{B}\)) in terms of nuclear matter saturation density (\(\rho_{0}\)), calculated within a phenomenological Lagrangian approach. Figure 11: Decay width of axial vector \(K_{1}\) meson, plotted as a function of nuclear medium density (\(\rho_{B}\)) in terms of nuclear matter saturation density (\(\rho_{0}\)), calculated within a \({}^{3}P_{0}\) model. ## VII Summary: In the present work, we have studied the in-medium masses of vector \(K^{*}\) meson and axial vector \(K_{1}\) meson in isospin asymmetric nuclear matter, using the QCD sum rule approach. The self-energy loop of \(K^{*}\) meson is evaluated at one loop level from the medium modifications of the kaon masses. In the presence of a strong magnetic field, the effects of Landau quantization, PV mixing, and spin-magnetic field interactions are also included for the \(K^{*}\) meson and are found to be quite significant. The decay widths of various decay channels for the \(K^{*}\to K\pi\) decay are also studied by using the \({}^{3}P_{0}\) model. Also, the partial decay widths for the \(K_{1}\to K^{*}\pi\) decay are analyzed within a phenomenological approach as well as within the \({}^{3}P_{0}\) model. 
In QCDSR, the mass modifications in the medium occur due to medium modifications of the light quark condensates and scalar gluon condensates, which are calculated within a chiral effective model. The effects of density are found to be more pronounced than the effects of isospin asymmetry of the nuclear medium, although the isospin asymmetry effects become more pronounced at higher medium density. The effects of PV mixing and spin mixing are observed to be larger at larger magnetic fields. The effects of medium density and/or magnetic field on the decay widths are a reflection of the medium modifications of the masses of the mesons involved. The present analysis of the in-medium properties of open strange particles might find relevance in heavy-ion collision experiments in the Relativistic Heavy Ion Collider (RHIC) low-energy scan programme and the High Acceptance DiElectron Spectrometer (HADES) Collaboration at GSI, Darmstadt.
2310.05957
Double Fountain Effect in Superfluid Helium
A double fountain pressure model is used to analyze the recent measurements of Yu and Luo (arXiv2211.02236v4) of superfluid $^4$He flow between two chambers held at different temperatures via two superleaks and an intervening third chamber that spontaneously achieves a temperature higher than both fixed temperatures. The physical origin of the increased temperature in the intervening chamber is attributed to the balance between the rate of mechanical energy deposited by superfluid transport and the rate of convection back to the lower temperature chambers. An equation is given for the pressure of the third chamber, the measurement of which would confirm or refute the theory.
Phil Attard
2023-09-12T03:37:46Z
http://arxiv.org/abs/2310.05957v1
# Double Fountain Effect in Superfluid Helium ###### Abstract A double fountain pressure model is used to analyze the recent measurements of Yu and Luo (arXiv2211.02236v4) of superfluid \({}^{4}\)He flow between two chambers held at different temperatures via two superleaks and an intervening third chamber that spontaneously achieves a temperature higher than both fixed temperatures. The physical origin of the increased temperature in the intervening chamber is attributed to the balance between the rate of mechanical energy deposited by superfluid transport and the rate of convection back to the lower temperature chambers. An equation is given for the pressure of the third chamber, the measurement of which would confirm or refute the theory. Recently Yu and Luo (2022) carried out measurements on superfluid \({}^{4}\)He below the \(\lambda\) transition temperature \(T_{\lambda}\) using the experimental arrangement depicted in figure 1. Starting empty, the high temperature chamber B gradually fills at a steady rate over many hours. After about 2 hours chamber C attains a steady temperature \(T_{C}\) that is higher than the fixed temperatures of either of the two other chambers, but still below the \(\lambda\)-transition temperature, \(T_{\lambda}>T_{C}>T_{B}>T_{A}\). The high temperature induced in chamber C by the superfluid flow is at first sight surprising. Yu and Luo (2022) conclude: "The two-fluid model proposes that a super flow of \({}^{4}\)He carries no thermal energy...This experimental result directly contradicts the pivotal hypothesis of the two-fluid model" (Yu and Luo 2022 page 3). And also "The two-fluid model postulates the existence of a super fluid component that possesses an exotic characteristic of zero entropy...the zero entropy assumption requires this temperature to be absolute zero" (Yu and Luo 2022 page 3). 
To resolve these problems Yu and Luo (2022) propose a replacement for the two-fluid model, namely that low-lying energy levels occur in particular groups that are thermally populated by \({}^{4}\)He atoms. The claim that the measurements refute the accepted model of superfluidity merits close scrutiny. The interpretation of the conventional two-fluid model by Yu and Luo (2022) is not without foundation. F. London (1938) explained superfluidity and the \(\lambda\)-transition as Bose-Einstein condensation into the ground energy state, as Einstein (1925) had explicitly proposed (Balibar 2014). Tisza (1938) explained superfluid hydrodynamics by postulating that helium II had zero entropy. Landau's (1941) phonon-roton theory focusses on the ground state for helium II (and solely the first excited state for helium I). H. London (1939) derived the fountain pressure equation, for which there is overwhelming quantitative experimental evidence (Donnelly and Barenghi 1998), by asserting that condensed bosons have zero entropy. In contrast, I believe that the derivation of the fountain pressure equation by H. London (1939) is flawed, although the equation itself is correct, and that in fact the equation implies that superfluid flow is at constant entropy, not zero entropy (Attard 2022, 2023a, 2023b). Also I believe that Bose-Einstein condensation is into multiple low-lying momentum states, not the ground state (Attard 2023a, 2023c). In this paper I show how the Bose-Einstein condensation, two-fluid model of superfluidity, modified with these two ideas, explains the temperature increase observed by Yu and Luo (2022). The experimental arrangement (figure 1) is modeled as two fountain pressure systems with a common high temperature, high pressure chamber C. As mentioned, the original fountain pressure equation (H. London 1939) has stood the test of time, although I have a different derivation and interpretation (Attard 2022, 2023a).
I find that permutation entropy is what drives Bose-Einstein condensation, and that the fountain pressure equations imply that condensed bosons are driven to minimize the energy at constant entropy, which leads to equality of chemical potential (Attard 2022, 2023a). This is thermodynamically equivalent to the H. London (1939) formula for the fountain pressure (Attard 2022, 2023a).

Figure 1: Two chambers containing saturated \({}^{4}\)He at fixed temperatures \(T_{A}<T_{B}<T_{\lambda}\) and connected by superleaks to a closed container C.

The fact that the present system is in a steady state with continuous flow from A through C to B does not fundamentally affect the analysis since for slow changes one can invoke local thermodynamic equilibrium. It is standard for the fountain pressure to be measured with steady flow in the superleak (Keller and Hammel, 1960). What drives the fountain pressure, and superfluid flow more generally, is the minimization of the energy at constant entropy, which is equivalent to chemical potential equality between connected superfluid regions (Attard, 2022, 2023a). In the present steady state case with chambers A and B held at different temperatures, \(T_{B}>T_{A}\), equality is not possible because the chemical potential decreases with increasing temperature along the saturation curve, \(\mu_{B}^{\rm sat}<\mu_{A}^{\rm sat}\) (Donnelly and Barenghi, 1998). Instead the most that can be achieved in the steady state is for the chemical potential in chamber C to be the average of that in the fixed temperature chambers, \[\mu_{C}=\frac{1}{2}[\mu_{A}^{\rm sat}+\mu_{B}^{\rm sat}]. \tag{1}\] With this, the difference in chemical potential across each superleak is \(\Delta_{\mu}=\mu_{A}^{\rm sat}-\mu_{C}=\mu_{C}-\mu_{B}^{\rm sat}>0\). Below I show that if the superfluid number flux is proportional to the difference in chemical potentials, then for identical superleaks this result ensures mass conservation.
For an incompressible liquid, the change in pressure equals the number density times the change in chemical potential (Attard, 2002). Hence the preceding result gives the pressure in chamber C as \[p_{C} = p_{C}^{\rm sat}+\rho_{C}^{\rm sat}[\mu_{C}-\mu_{C}^{\rm sat}] \tag{2}\] \[= p_{C}^{\rm sat}+\frac{\rho_{C}^{\rm sat}}{2}[\mu_{A}^{\rm sat}+\mu _{B}^{\rm sat}-2\mu_{C}^{\rm sat}].\] Measured values as a function of temperature for the various quantities on the right hand side have been tabulated for \({}^{4}\)He (Donnelly and Barenghi, 1998). If one measures the temperature \(T_{C}\) then this gives the pressure \(p_{C}\). Measuring both \(T_{C}\) and \(p_{C}\) would confirm or refute the present double fountain pressure model and analysis of the experimental arrangement. A second equation explains the elevated temperature of the closed intermediate chamber. Since superfluid flow is driven to equalize the chemical potential (Attard, 2022, 2023a), the simplest assumption is that in the steady state the number flux in each superleak is linearly proportional to the chemical potential difference across it, \[J_{N,AC}=K_{AC}[\mu_{A}^{\rm sat}-\mu_{C}]=K_{AC}\Delta_{\mu}. \tag{3}\] The superfluid transport coefficient depends on the material properties of the superleak, likely scaling with the cross-sectional area while being independent of the length. The validity of this assumed linear form should be checked by actual measurement. Similarly \[J_{N,CB}=K_{CB}[\mu_{C}-\mu_{B}^{\rm sat}]=K_{CB}\Delta_{\mu}. \tag{4}\] In the steady state, mass conservation gives \(J_{N,AC}=J_{N,CB}\). For identical superleaks, \(K_{AC}=K_{CB}\), and so these equations confirm that the chemical potential difference must be the same across the two superleaks. The superfluid flows at constant entropy (Attard, 2022, 2023a, 2023b). The fountain pressure equation (H. London, 1939) minimizes the energy at constant entropy (Attard, 2022, 2023a). 
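Equations (1) and (2) can be combined into a short numerical sketch. The function names are illustrative, and the numbers below are arbitrary placeholders rather than the Donnelly and Barenghi (1998) tabulated data; real saturated values must be substituted before drawing any physical conclusions.

```python
# Minimal sketch of Eqs. (1)-(2): chemical potential and pressure of chamber C.
# All input values here are placeholders, NOT tabulated 4He data.

def mu_C(mu_A_sat, mu_B_sat):
    """Eq. (1): steady-state chemical potential of chamber C (average)."""
    return 0.5 * (mu_A_sat + mu_B_sat)

def p_C(p_C_sat, rho_C_sat, mu_A_sat, mu_B_sat, mu_C_sat):
    """Eq. (2): chamber-C pressure for an incompressible liquid."""
    return p_C_sat + 0.5 * rho_C_sat * (mu_A_sat + mu_B_sat - 2.0 * mu_C_sat)

# Exercise the formulas with arbitrary units:
print(mu_C(-1.0, -3.0))                       # midpoint of the two saturated values
print(p_C(1.0, 2.0, -1.0, -3.0, -2.5))        # rises above p_C_sat when mu_C > mu_C_sat
```

Evaluating \(p_{C}\) this way at the measured \(T_{C}\) is what would allow the predicted pressure to be compared against a direct measurement.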
Since \(\partial E(S,V,N)/\partial N=\mu\) (Attard, 2002), the rate of energy transport by superfluid flow is just the chemical potential times the number flux. Hence the rate of energy change of chamber \(C\) due to superfluid flow through it is \[\dot{E}_{C}^{\rm sf} = [\mu_{A}^{\rm sat}J_{N,AC}-\mu_{C}J_{N,CB}] \tag{5}\] \[= [\mu_{A}^{\rm sat}-\mu_{C}]J_{N}\] \[= K\Delta_{\mu}^{2}.\] This is positive irrespective of which chamber has the higher temperature. This assumes that there is no gradient in chemical potential within the superleaks, so that there is a step change in chemical potential at their exits. This result shows that superfluid flow carries energy, and it explains how chamber C is heated by that flow. In the steady state this superfluid energy flux into the chamber must be equal and opposite to the heat flow from the chamber to the two fixed temperature chambers A and B. The heat flux in helium II has been measured (F. London and Zilsel, 1948, Keller and Hammel, 1960), including in powdered superleaks (Schmidt and Wiechert, 1979). The rate of change of the energy in chamber \(C\) due to conduction via the walls, powder, and liquid of the superleaks is proportional to the temperature gradients, \[\dot{E}_{C}^{\rm cond} = \Lambda_{AC}L_{AC}^{-1}[T_{C}^{-1}-T_{A}^{-1}]+\Lambda_{CB}L_{CB}^{-1}[T_{C}^{-1}-T_{B}^{-1}] \tag{6}\] \[= \Lambda L^{-1}[2T_{C}^{-1}-T_{A}^{-1}-T_{B}^{-1}]\] \[\equiv \Lambda L^{-1}\Delta_{T}^{\rm tot}.\] This is just Fourier's law (in inverse temperature), with \(\Lambda\) being the effective thermal conductivity, and \(L\) the length of the superleak. If the temperature of C is greater than the fixed temperatures, \(T_{C}>T_{B}>T_{A}\), then \(\Delta_{T}^{\rm tot}<0\), and energy is conducted out of chamber C. Evidently, the larger \(T_{C}\), the greater the rate of energy loss by conduction.
If conduction is the dominant mechanism for the heat back-flow, then the chamber temperature \(T_{C}\) is determined by the steady state condition, \(\dot{E}_{C}^{\rm cond}+\dot{E}_{C}^{\rm sf}=0\). In fountain pressure measurements there is viscous flow from the high pressure chamber through the connecting capillary, frit, or superleak (Keller and Hammel, 1960). The viscous number flux from chamber C should be linearly proportional to the sum of the pressure gradients, \(\Delta_{p}^{\rm tot}/L\equiv[2p_{C}-p_{A}^{\rm sat}-p_{B}^{\rm sat}]/L\). Hence the convective rate of energy change scales as \[\dot{E}_{C}^{\rm conv}\propto-\Delta_{p}^{\rm tot}h_{C}\approx-\Delta_{p}^{\rm tot}h_{C}^{\rm sat}, \tag{7}\] where \(h\) is the enthalpy per particle, which is taken at saturation to make use of readily available data. If convective heat flow dominates the heat back-flow, then \(T_{C}\) is determined by the steady state condition, \(\dot{E}_{C}^{\rm conv}+\dot{E}_{C}^{\rm sf}=0\). Presumably radiation losses are negligible. In table 1 the measured temperatures (Yu and Luo 2022) are used to test these results. The saturated chemical potentials and enthalpies are derived from data given by Donnelly and Barenghi (1998), corrected as explained by Attard (2022, 2023a). The predicted pressure \(p_{C}\) is substantially higher than the saturated vapor pressures (e.g. \(p^{\rm sat}(1.5\,K)=0.47\,\)kPa and \(p^{\rm sat}(2.0\,K)=3.13\,\)kPa) (Donnelly and Barenghi 1998). As mentioned, comparison of the calculated and measured pressure would test the present theory. According to the present theory, if conduction dominates, the ratio \(-\Delta_{\mu}^{2}/\Delta_{T}^{\rm tot}\) should be positive and constant in any one series of measurements. If convection dominates, \(\Delta_{\mu}^{2}/(h_{C}^{\rm sat}\Delta_{p}^{\rm tot})\) should be positive and constant. In both cases in table 1 the energy flux ratio is positive.
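In the conduction-dominated case the steady state condition has a closed-form solution for \(T_{C}\). A minimal sketch, using Eq. (6) in the identical-superleak form together with \(\dot{E}_{C}^{\rm sf}=K\Delta_{\mu}^{2}\) from Eq. (5); the values of \(K\), \(\Delta_{\mu}\), and \(\Lambda/L\) below are purely illustrative placeholders:

```python
# Conduction-dominated steady state: Lambda/L * (2/T_C - 1/T_A - 1/T_B) + K*dmu**2 = 0.
# Solving for 1/T_C gives T_C explicitly. Parameter values are placeholders.

def T_C_conduction(T_A, T_B, K, dmu, Lambda_over_L):
    """Temperature of chamber C when conductive back-flow balances superfluid heating."""
    inv_TC = 0.5 * (1.0 / T_A + 1.0 / T_B - K * dmu**2 / Lambda_over_L)
    return 1.0 / inv_TC

# With no superfluid heating (K = 0), T_C reduces to the harmonic mean of T_A and T_B;
# any K > 0 raises T_C above that, consistent with the elevated temperature of chamber C.
print(T_C_conduction(T_A=1.5, T_B=2.0, K=0.0, dmu=1.0, Lambda_over_L=1.0))
print(T_C_conduction(T_A=1.5, T_B=2.0, K=0.1, dmu=1.0, Lambda_over_L=1.0))
```

The same balancing logic applies in the convection-dominated case, with \(\dot{E}_{C}^{\rm conv}\) of Eq. (7) replacing \(\dot{E}_{C}^{\rm cond}\), though there \(T_{C}\) enters implicitly through \(p_{C}\) and \(h_{C}^{\rm sat}\).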
Over the series of measurements it varies by about a factor of seven for conductive heat flow, and by about a factor of two for convective heat flow. These results suggest that it is mainly heat transported by viscous flow that counters the superfluid energy flux and stabilizes the steady state temperature \(T_{C}\). Further measurements are required to quantitatively clarify the situation. In conclusion, the double fountain pressure model provides a basis to analyze the experimental arrangement of Yu and Luo (2022). The results confirm that superfluid flow carries both entropy and energy and that it is driven to minimize energy at constant entropy (Attard 2022, 2023a, 2023b). This gives a physical mechanism for the temperature increase in the closed chamber, and it reconciles the experimental measurements with the (modified) Bose-Einstein condensation, two-fluid model of superfluidity. The experimental design of Yu and Luo (2022) might provide a quantitative measurement method for the superfluid transport coefficient.
2306.00111
Broadband VLA Spectral Line Survey of a Sample of Ionized Jet Candidates
The study of the interaction between ionized jets, molecular outflows and their environments is critical to understanding high-mass star formation, especially because jets and outflows are thought to be key in the transfer of angular momentum outwards from accretion disks. We report a low-spectral resolution VLA survey for hydrogen radio recombination lines, OH, NH$_3$, and CH$_3$OH lines toward a sample of 58 high-mass star forming regions that contain numerous ionized jet candidates. The observations are from a survey designed to detect radio continuum; the novel aspect of this work is to search for spectral lines in broadband VLA data (we provide the script developed in this work to facilitate exploration of other datasets). We report detection of 25$\,$GHz CH$_3$OH transitions toward ten sources; five of them also show NH$_3$ emission. We found that most of the sources detected in CH$_3$OH and NH$_3$ have been classified as ionized jets or jet candidates and that the emission lines are coincident with, or very near ($\lesssim 0.1$ pc) these sources, hence, these molecular lines could be used as probes of the environment near the launching site of jets/outflows. No radio recombination lines were detected, but we found that the RMS noise of stacked spectra decreases following the radiometer equation. Therefore, detecting radio recombination lines in a sample of brighter free-free continuum sources should be possible. This work demonstrates the potential of broadband VLA continuum observations as low-resolution spectral line scans.
E. Sanchez-Tovar, E. D. Araya, V. Rosero, P. Hofner, S. Kurtz
2023-05-31T18:35:24Z
http://arxiv.org/abs/2306.00111v1
# Broadband VLA spectral line survey of a sample of ionized jet candidates ###### Abstract The study of the interaction between ionized jets, molecular outflows and their environments is critical to understanding high-mass star formation, especially because jets and outflows are thought to be key in the transfer of angular momentum outwards from accretion disks. We report a low-spectral resolution VLA survey for hydrogen radio recombination lines, OH, NH\({}_{3}\), and CH\({}_{3}\)OH lines toward a sample of 58 high-mass star forming regions that contain numerous ionized jet candidates. The observations are from a survey designed to detect radio continuum; the novel aspect of this work is to search for spectral lines in broadband VLA data (we provide the script developed in this work to facilitate exploration of other datasets). We report detection of 25 GHz CH\({}_{3}\)OH transitions toward ten sources; five of them also show NH\({}_{3}\) emission. We found that most of the sources detected in CH\({}_{3}\)OH and NH\({}_{3}\) have been classified as ionized jets or jet candidates and that the emission lines are coincident with, or very near (\(\lesssim 0.1\) pc) these sources, hence, these molecular lines could be used as probes of the environment near the launching site of jets/outflows. No radio recombination lines were detected, but we found that the RMS noise of stacked spectra decreases following the radiometer equation. Therefore, detecting radio recombination lines in a sample of brighter free-free continuum sources should be possible. This work demonstrates the potential of broadband VLA continuum observations as low-resolution spectral line scans. 
ISM: jets and outflows \(-\) stars: formation \(-\) ISM: molecules and recombination lines \(-\) radio lines: ISM \(-\) techniques: interferometric Footnote †: journal: ApJS ## 1 Introduction High-mass stars (M \(\gtrsim\) 8 M\({}_{\odot}\)) have widespread effects in the medium where they form and are likely a key factor that regulates further star formation in their natal clouds by feedback mechanisms, from the development of HII regions to supernovae (e.g., Zinnecker & Yorke, 2007). During the earliest phases of formation, high-mass protostars can drive large (\(\gtrapprox 1\) pc) and massive (\(\gtrapprox 10\) M\({}_{\odot}\)) molecular outflows that inject mechanical energy into the natal environment (e.g., Rodriguez et al., 2021; Torii et al., 2017; Bally, 2016; Arce et al., 2007). At smaller scales (\(\sim 10^{4}\) a.u.), high-sensitivity radio continuum observations find evidence that young high-mass protostars also drive ionized jets (e.g., Purser et al., 2021; Purser et al., 2016; Rosero et al., 2019). In many cases, ionized jets are found to be approximately co-linear with large-scale molecular outflows (e.g., Araya et al., 2007). Therefore, a connection between molecular outflows and ionized jets is expected, including the possibility that ionized jets can drive outflows. This idea is supported by observations that suggest a correlation between the momentum rates of molecular outflows and partially optically thick ionized jets (e.g., Rosero et al., 2019). Spectral line studies are needed to detect and characterize molecular gas to explore the connection between ionized jets and the origin of outflows, particularly at the interface of ionized jets. Traditionally, studies of radio continuum and spectral lines involve different observing settings (or independent runs), one to measure radio continuum and one to observe specific spectral lines. However, with the WIDAR correlator of the NSF's Karl G.
Jansky Very Large Array (VLA), continuum mode observations are effectively _simultaneous_ spectral line scans, albeit with broad channel widths. Spectral line detections from continuum mode observations have the additional advantage of having nearly identical baseline coverage, i.e., distribution of complex visibilities in the \((u,v)\)-plane. Therefore, images of radio continuum emission (including ionized jets) can be directly compared to spectral line detections at similar angular resolution. In this work we explore if narrow and bright maser lines or broad and weaker thermal lines1 from hydroxyl (OH), ammonia (NH\({}_{3}\)) and methanol (CH\({}_{3}\)OH) are detectable in low spectral resolution surveys obtained as a byproduct of continuum mode observations, as transitions from these species have been detected in high-mass star forming regions (e.g., Tan et al., 2020; Henkel et al., 2013). Transitions of CH\({}_{3}\)OH and NH\({}_{3}\) are particularly interesting as masers have been detected together with thermal components of the same transitions, which is uncommon in other widespread maser species such as H\({}_{2}\)O (e.g., Towner et al., 2017; Goddi et al., 2015; Zhang & Ho, 1995). Low spectral resolution surveys could also result in the detection of radio recombination lines (RRLs), particularly using stacking techniques (e.g., Beuther et al., 2016; Liu et al., 2013). Footnote 1: We refer to as _thermal_ lines those that originate from gas in local thermodynamic equilibrium (LTE) conditions as well as from gas close to LTE (often referred to as _quasi-thermal_ lines in the literature, e.g., Araya et al., 2008). We evaluate the idea of VLA continuum-mode observations as broadband spectral line scans by exploring the continuum observations of high-mass star-forming regions conducted by our group with the VLA (Rosero et al., 2016). In Rosero et al.
(2019), we characterized the sample of radio continuum sources detected in our main survey, in which we found a high fraction of ionized jet sources (including candidates). This paper explores whether transitions of NH\({}_{3}\), CH\({}_{3}\)OH, and OH, as well as RRLs can be detected as by-products of the continuum observations. In Section 2 we describe the data used in this project as well as the algorithm developed to search for spectral lines from specific species and to stack spectral windows (SPWs) at RRL frequencies. In Section 3 we discuss our detections of NH\({}_{3}\) and CH\({}_{3}\)OH lines, as well as the non-detections of excited OH and RRLs. A summary is presented in Section 4. The novel idea presented in this paper, i.e., the use of observations designed for radio-continuum measurements as low resolution spectral line scans, is applicable to any VLA broadband dataset. We provide a general purpose script that can be used to search for spectral lines from other VLA observations. ## 2 Dataset and script development Between 2010 and 2014, our group conducted an extensive survey for weak radio continuum sources toward 58 high-mass star-forming regions with the Karl G. Jansky Very Large Array (VLA) (Rosero et al., 2016). The sources in Rosero et al. (2016) were selected based on no previous radio continuum detection (or very weak detection at the \(\sim 1\) mJy level), i.e., a sample consistent with early evolutionary stages (see Rosero et al., 2016 and Rosero et al., 2019 for a detailed discussion of the sample). The main goal of the project was to investigate the nature of the weak continuum sources, which resulted in the detection of 70 sources associated with 1.2 mm dust clumps (Beuther et al., 2002, Rathborne et al., 2006; see details in Rosero et al., 2016). We found that a significant fraction of them (\(\sim 30\%\)) may be tracing ionized jets (Rosero et al., 2019). 
The observations were conducted at C and K bands, using scaled arrays to achieve similar angular resolutions, i.e., C-band observations were conducted in A configuration and the K-band observations were obtained in B configuration. In each band, two sets of 8 spectral windows (128 MHz bandwidth in most cases) were simultaneously observed; each set covered a bandwidth of 1024 MHz centered at 4.9 GHz (C-band), 7.4 GHz (C-band), 20.9 GHz (K-band) and 25.5 GHz (K-band). Each spectral window comprised 64 channels. During the several years of the project, some variations were adopted in the spectral setups, and thus not all frequencies were always observed in the same broadband continuum setup. Further details of the observation setup, including the results of the continuum observations, are reported in Rosero et al. (2016). Even though we are interested in investigating spectral lines in our specific continuum dataset, our approach is applicable to any pipeline-calibrated visibility file (also known as a Measurement Set, MS) from the VLA archive. Hence, we implemented the method of searching and imaging spectral windows with potentially interesting spectral lines in a Python script accessible through GitHub2 (see installation and usage details in Appendix A). The script uses tasks and functions from the NRAO software package CASA3. Given that stacking of spectral lines may be needed for specific applications, e.g., a search for RRLs, our script has two main components: 1) identification/imaging of individual spectral windows, and 2) stacking of spectral line cubes. Specifically:

- Step 1 (identification/imaging): The script generates a report of metadata of the MS file, including the frequency range associated with each spectral window. The script then compares the frequency ranges with a file of transitions of interest and runs the task tclean to create cubes of the spectral windows in the LSRK reference frame using the rest frequencies. Given the large channel width of the observations, spectral line shifts due to Earth's rotation during an observing run are a small fraction of the channel width, and therefore do not significantly contribute to spectral line dilution.4

- Step 2 (stacking): Continuum subtraction is done in the image plane by fitting a zero order polynomial baseline to a subset of channels of each spectral cube. The script regrids each cube to have the effective channel width (in velocity) of the lowest frequency spectral window to be stacked, i.e., regrids to the lowest velocity resolution. Finally, the script generates the stacked cube using the RMS to weight the relative contribution of each cube (further details are included in Appendix A).

Footnote 2: [https://github.com/AstroLab-WIV/Spectral_Line_Search_and_Stack](https://github.com/AstroLab-WIV/Spectral_Line_Search_and_Stack) Footnote 3: The code was tested in CASA versions 5.1, 5.6 and 6.2. Footnote 4: See discussion of velocity reference frames and Doppler correction in [https://science.nrao.edu/facilities/vla/docs/manuals/obguide/modes/line](https://science.nrao.edu/facilities/vla/docs/manuals/obguide/modes/line).

In some cases, e.g., a search for specific CH\({}_{3}\)OH transitions, only Step 1 is needed, while in others, e.g., a search for thermal lines from the same species, both steps are necessary. As a positive control test to check the procedure developed in this project (Appendix A), we applied the script to continuum observations of the ultra-compact (UC) HII region W3(OH) (project number 14A-014; PI: L. F. Rodriguez). The observations were conducted in March 2014 with the VLA in A-configuration. The scheduling block contained spectral windows between 18 and 37 GHz; each spectral window had 64 channels, a bandwidth of 128 MHz and a channel width of 2 MHz. To make the test as automatic as possible, the data were only calibrated with the VLA pipeline, without any additional flagging or self-calibration.
The script selected spectral windows covering hydrogen RRL frequencies and created cubes using these spectral windows. Some spectral cubes had imaging artifacts and were subsequently excluded. The script then subtracted the radio-continuum in the image plane of the remaining cubes, re-sampled the spectra to a common channel width and synthesized beam, and stacked the cubes. Figure 1 presents the result of the W3(OH) data; the left panel shows the radio continuum (in contours) and the peak channel of the stacked cube in colors. As expected, the peak emission of the stacked RRL data is coincident with the peak radio continuum of the UCHII region. The right panel shows the spectra of all RRL transitions that were stacked to generate the blue spectrum. As is evident from the figure, the RRL signal is clearly visible in the stacked spectrum. Even though the W3(OH) observations significantly filtered out extended emission due to poorly sampled short spacing visibilities, particularly at the highest frequencies, and the peak RRL flux densities are underestimated due to the large channel width of the continuum mode observations, Figure 1 demonstrates that RRLs can be detected from stacking of continuum-mode VLA observations. The potential of stacking spectral lines has been demonstrated in many previous studies (e.g., Beuther et al., 2016, Jolly et al., 2020, and references therein). In contrast, the novel aspect presented in this article is stacking of spectra from broadband data, thus opening a new discovery space of low-velocity resolution spectral-line scans as a by-product of VLA continuum observations. The stacking approach used in this paper is similar to that of LINESTACKER (Jolly et al., 2020), i.e., image-plane stacking based on CASA tasks.
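The weighted image-plane stack of Step 2 can be sketched as follows. The inverse-variance weighting (\(w_{i}=1/\mathrm{rms}_{i}^{2}\)) assumed here is one plausible reading of "using the RMS to weight the relative contribution of each cube", not necessarily the script's exact scheme, and the spectra are assumed already continuum-subtracted and regridded to a common velocity axis. The synthetic-noise check at the end illustrates the radiometer-equation behavior: for \(N\) equal-noise spectra the stacked RMS falls as \(1/\sqrt{N}\).

```python
import numpy as np

def stack_spectra(spectra, rms_values):
    """Inverse-variance weighted average of regridded spectra (rows of `spectra`)."""
    spectra = np.asarray(spectra, dtype=float)
    w = 1.0 / np.asarray(rms_values, dtype=float) ** 2
    return (w[:, None] * spectra).sum(axis=0) / w.sum()

# Radiometer-equation check on pure synthetic noise:
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(16, 64))   # 16 spectra of 64 channels, unit RMS
stacked = stack_spectra(noise, np.ones(16))
print(stacked.std())                          # ~ 1/sqrt(16) = 0.25
```

Scaling this test to the number of stacked RRL transitions shows why a sample of brighter free-free continuum sources should push the stacked noise below the line flux densities.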
We note that stacking in the \((u,v)\)-plane is likely to provide better results than stacking in the image-plane, however, developing such a strategy in our code is beyond the objective of this work (see Jolly et al., 2020 and references therein for a discussion of stacking in the \((u,v)\)-plane). A practical difference between the tool developed in this project and LINESTACKER is that the script provided here requires calibrated MS files as inputs (not cubes, which are generated automatically by the script), while LINESTACKER requires a list of pre-existing data cubes to be stacked. The strategy of using the calibrated MS file instead of pre-existing cubes facilitates the discovery process, as _a priori_ knowledge of whether the rest frequency of a spectral line is in one of the SPWs of the MS file is not needed (the script searches for SPWs where spectral lines may be located). Moreover, creating spectral cubes of 'continuum-mode' broadband SPWs is usually not done when creating continuum images, and therefore, cubes are not readily available to use LINESTACKER in typical reduction of continuum datasets. However, a caveat in the use of the script provided in this work is that the VLA calibration pipeline can flag (mask) very bright _bona-fide_ spectral lines as radio frequency interference (RFI). Nevertheless, the automatic flagging is unlikely to remove most lines from continuum spectral windows given frequency dilution of narrow bright lines into broad channels. In case of doubt, users can remove the flags applied to the science sources of a calibrated dataset before running the script developed in this project. The script was applied to the 58 young high-mass star forming regions of the high-sensitivity VLA radio continuum survey carried out by Rosero et al. (2016).
We used the script to search for H\(\alpha\) RRLs, as well as all OH, CH\({}_{3}\)OH and NH\({}_{3}\) transitions listed in Splatalogue5 up to energy levels between 500 K and 10,000 K depending on the molecular species and source (see below). Footnote 5: Spectroscopy information of all transitions in this article is from Splatalogue ([https://splatalogue.online/](https://splatalogue.online/)). ## 3 Results and Discussion We report upper limits of excited OH transitions, detections of NH\({}_{3}\) and CH\({}_{3}\)OH, and upper limits of RRLs. We cannot rule out that lines reported in this work are caused by transitions from other species (coincidental detections), however, given that the spectral density of bright molecular lines is low at cm wavelengths with respect to mm and sub-mm bands (e.g., ALMA) and the previous detections of these transitions toward other sources as reported in the literature (see discussion below), we consider that the possibility of misidentification of spectral lines is low. Moreover, several lines from similar excitation energies are detected for the CH\({}_{3}\)OH and NH\({}_{3}\) species, which suggests a correct identification of the lines. Nevertheless, high-spectral resolution observations are required to confirm the detections reported in this work and rule out misidentifications. Table 1 lists the OH transitions in the broadband spectral windows of the Rosero et al. (2016) sample, channel width and typical RMS. In the following sections, we discuss the NH\({}_{3}\), CH\({}_{3}\)OH, and RRL results in more detail. ### NH\({}_{3}\) Lines The script was used to find the NH\({}_{3}\) frequencies listed in Splatalogue that were serendipitously included (whether detected or not) in the SPWs of the Rosero et al. (2016) observations.
We found that between 11 to 16 transitions were included in the SPWs when we searched for lines up to \(E_{l}/k_{B}=10,000\) K (minimum energy level corresponded to the transition (4,1) at \(E_{l}/k_{B}=279\) K and the maximum energy level corresponded to the transition (28,24) at \(E_{l}/k_{B}=8359\) K). Different sources have different number of transitions due to different tuning of the SPWs. We detected NH\({}_{3}\) emission lines toward five sources (Figures 2 to 6). Of them, NH\({}_{3}\) masers have been reported only toward IRAS 20126+4104 but at other transitions [(3,3), (4,4); Zhang et al. 1999]. As shown in Table 2, the detections include para (\(K\neq 3n\)) and ortho (\(K=3n\)) states, from both metastable [(6,6), (7,7)] and non-metastable [(4,1), (5,3), (6,4), (8,6), (7,5), (9,7), (10,8)] transitions. For the five sources with NH\({}_{3}\) detections, 3 to 6 different NH\({}_{3}\) transitions were detected per source; in all cases the emission mostly coincides with the peak continuum in a single locus. The RMS of the spectra were typically smaller than 1 mJy; the maximum peak flux density of 17.5 mJy was found towards G34.43+00.24mm1A at 68 km s\({}^{-1}\) (in a 24 km s\({}^{-1}\) channel width). Given the large channel width of our data (23 to 29 km s\({}^{-1}\) depending on the transition), we cannot use linewidths as proxy to determine whether the lines are due to a maser or thermal processes. For example, Figure 7 shows the (6,6) NH\({}_{3}\) spectrum toward G34.43+00.24mm1A (blue lines in all panels). The left panels show that a bright (300 mJy) and narrow (2 km s\({}^{-1}\) FWHM; red-dashed line) hypothetical maser can reproduce the observed spectrum when smoothed to our large channel width (green-dashed line). 
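The degeneracy illustrated in Figure 7 follows from simple channel dilution: in a channel much wider than the line, the observed flux density is approximately the integrated line flux divided by the channel width. A minimal sketch using the hypothetical maser and thermal-line parameters quoted in the text; the boxcar average is a simplification of the actual spectral regridding, so the numbers are only indicative.

```python
import numpy as np

def diluted_peak(peak_mJy, fwhm_kms, channel_kms):
    """Approximate peak in a wide channel: Gaussian integrated flux / channel width."""
    sigma = fwhm_kms / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    integrated = peak_mJy * sigma * np.sqrt(2.0 * np.pi)  # area under the Gaussian
    return integrated / channel_kms

maser = diluted_peak(300.0, 2.0, 24.0)    # hypothetical narrow maser (Fig. 7, left)
thermal = diluted_peak(40.0, 15.0, 24.0)  # hypothetical broad thermal line (right)
print(maser, thermal)  # both ~ 27 mJy: indistinguishable at this channel width
```

Because \(300\times 2 = 40\times 15\) mJy km s\({}^{-1}\), the two hypothetical lines carry the same integrated flux, so the diluted spectra are identical; only higher spectral resolution can separate them.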
However, as shown in the right panels of Figure 7, a hypothetical thermal line (40 mJy peak and 15 km s\({}^{-1}\) FWHM; red-dashed line) can also reproduce the observed data when smoothed to the channel width of the spectrum (green-dashed line). We note that, as observed in the W51 region, thermal and maser lines of NH\({}_{3}\) can have values of peak flux density and FWHM similar to those used in Figure 7 (e.g., Goddi et al. 2015, Henkel et al. 2013). We note that because of the low spectral resolution, our brightness temperature lower limits (\(T_{b}>10\) K) are too low to disentangle the maser or thermal nature of the lines, i.e., all NH\({}_{3}\) transitions reported in this work could be thermal. Nevertheless, a maser interpretation could be supported based on the angular size of the emission. For example, as shown in Figures 2 to 6, the NH\({}_{3}\) emission is mostly compact (\(<1^{\prime\prime}\)) with emission cores smaller than 0.5\({}^{\prime\prime}\) in at least four of the sources. This is in contrast to more extended NH\({}_{3}\) thermal emission detected toward other sources, e.g., the \(\sim 4^{\prime\prime}\) angular size of the (6,6) NH\({}_{3}\) emission in W51 IRS2 (Goddi et al. 2015; see also Kraemer and Jackson 1995). Based on the spatial distribution of our different detections (Figures 2 to 6), we find that the peaks of the NH\({}_{3}\) emission lines of metastable and non-metastable transitions are coincident with the peak radio continuum in most sources, although some positional offsets are observed, e.g., for several transitions toward IRAS 18566+0408 (Figure 5). This behavior is similar to the (9,6) NH\({}_{3}\) masers toward Cepheus A and G34.26+0.15 (Yan et al. 2022). Further high-spectral and angular resolution observations are needed to investigate the position and velocity offsets between different transitions, as, for example, Goddi et al.
(2015) found that ortho [(6,6), (9,9)] and para [(7,7)] NH\({}_{3}\) masers in W51 show different spatial and velocity distributions. High-excitation NH\({}_{3}\) masers have been reported toward tens of high-mass star forming regions (see Wilson et al. 1982; Hofner et al. 1994; Zhang et al. 1999; Walsh et al. 2011; Hoffman 2012; Henkel et al. 2013; Mills et al. 2018; Mei et al. 2020; Yan et al. 2022, and references therein), including non-metastable masers toward W51, W49, DR21, NGC 7538, Cepheus A, G34.26+0.15, Sgr B2(N) and NGC 6334 (with possible detection of the (11,9) and (8,6) transitions toward G19.61\(-\)0.23, Walsh et al. 2011) and metastable masers toward IRAS 20126+4104, W33, Sgr B2 Main, the DR21 region (DR21(OH), DR21 HII), W51, G5.89\(-\)0.39, G9.62+0.19, NGC 6334, in addition to possible detections toward Sgr B2 and G23.33\(-\)0.30, and a few active/starburst galaxies (IC 342, Gorski et al. 2018, Lebron et al. 2011; NGC 253, Gorski et al. 2017; NGC 3079, Miyamoto et al. 2015). If high spectral resolution observations toward our detections confirm the maser interpretation, we will have added three new sources to the sample of high-mass star-forming regions with known NH\({}_{3}\) masers. Also, the (4,1) line would be the first detection of this maser transition in the ISM. NH\({}_{3}\) masers are weaker than other well-known maser transitions, such as those of H\({}_{2}\)O or CH\({}_{3}\)OH. For example, the low-sensitivity (\(\sim 2\) Jy detection limit) HOPS survey resulted in only two new possible NH\({}_{3}\) masers out of a sample of hundreds of sources with H\({}_{2}\)O masers (Walsh et al. 2011). As pointed out in the literature (e.g., Goddi et al. 2015; Hoffman and Joyce 2014), many questions on the nature of these masers remain unanswered, including whether the masers are tracers of outflows (Zhang et al. 1999, Yan et al.
2022), disks/tori or quiescent material (e.g., see Hoffman and Joyce 2014 for modeling of a (9,3) maser velocity gradient in terms of a Keplerian disk or a rotating torus). Interferometric observations of NH\({}_{3}\) masers have been reported in the literature (metastable transitions: Hunter et al., 2008; Beuther et al., 2007; Hofner et al., 1994; Zhang & Ho, 1995; non-metastable transitions: Yan et al., 2022; Hoffman & Joyce, 2014; Hoffman, 2012; Hoffman & Seojin Kim, 2011; Walsh et al., 2007; Gaume et al., 1993; Pratap et al., 1991; Wilson et al., 1990). The importance of interferometric observations of these maser transitions is exemplified by W51-IRS2, where Goddi et al. (2015) found that the NH\({}_{3}\) masers do not originate from the prominent molecular outflow traced by SiO and H\({}_{2}\)O masers but could be related to the outflow from a binary companion (at least the (7,7) transition; see also a similar interpretation of the (6,6) NH\({}_{3}\) maser in NGC 6334I, Beuther et al., 2007). In contrast, the NH\({}_{3}\) masers in NGC 7538 IRS1 could be associated with a rotating torus, where variability of the (3,3) \({}^{15}\)NH\({}_{3}\) maser may be due to entrainment in the interface between the torus and the outflow (e.g., Hoffman & Joyce, 2014, and references therein; see also interferometric observations of NH\({}_{3}\) (9,6) masers in Cep A and G34.26+0.15 reported by Yan et al., 2022). W51-IRS2 is one of the richest sources of NH\({}_{3}\) masers known (19 maser transitions have been reported; Henkel et al., 2013). The high detection rate of emission lines toward the five sources reported in this work (despite our low spectral resolution) indicates that these sources may also be rich in NH\({}_{3}\) masers. Goddi et al. (2015) reported the first imaging study of high-excitation NH\({}_{3}\) masers up to 850 K in a high-mass star-forming region and Yan et al.
(2022) recently reported interferometric observations of the (9,6) line (1090 K); if confirmed, we will have extended the imaging study of high-excitation NH\({}_{3}\) masers up to energies of 1220 K above ground. A common characteristic of the possible masers reported here and the ensemble of maser transitions in W51-IRS2 (Henkel et al., 2013) is that the strongest line detected in each source corresponds to the only ortho-NH\({}_{3}\) metastable transition observed (Table 2). As proposed by Henkel et al. (2013), such a strong deviation from LTE could be due to the greater statistical weights of ortho-states (resulting in greater column densities) and/or deviations caused by connections of the high-excitation ortho-states to the \(K=0\) level. Hence, future high angular and spectral resolution observations of the (3,3) NH\({}_{3}\) transition toward our sample could result in the detection of masers (e.g., Zhang & Ho, 1995), as is the case with IRAS 20126+4104 (Zhang et al., 1999). A particularly interesting aspect of the detections presented in this work is the lack of strong (greater than a few mJy at cm wavelengths, Rosero et al., 2016) radio continuum (in contrast to regions such as W51, Sgr B2 and W49). If confirmed, these NH\({}_{3}\) masers would be tracing very young phases of high-mass star formation before the development of ultracompact HII regions. Gorski et al. (2017) mention that NH\({}_{3}\) masers in starburst/active galaxies may originate from the interface of ionized outflows with surrounding molecular gas. Arguments supporting the association between NH\({}_{3}\) masers and the interface between ionized and molecular gas have been presented in the case of high-mass star-forming regions in the Galaxy (e.g., Walsh et al., 2011, Hunter et al., 2008, Beuther et al., 2007, Kraemer & Jackson, 1995), including the surfaces of hot expanding molecular shells (Martin-Pintado et al., 1999).
If such an environment is also responsible for the putative NH\({}_{3}\) masers reported in this work, these masers may offer a window into the interaction between the expanding ionized jets and molecular envelopes, leading to momentum transfer into molecular outflows. The pumping of NH\({}_{3}\) masers is still unclear, as multiple mechanisms have been suggested depending on whether the masers are from metastable or non-metastable states (e.g., see Mills et al., 2018). Proposed mechanisms include radiative excitation from a chance line overlap, collisional excitation of the ortho-states (e.g., Yan et al., 2022; Goddi et al., 2015; Zhang et al., 1999; Mangum & Wootten, 1994; Flower et al., 1990; see also laboratory masers created via collisional excitation reported by Willey et al., 1995), and infrared pumping (Madden et al., 1986). For example, Goddi et al. (2015) mention that, although collisional excitation is a viable mechanism for metastable ortho-NH\({}_{3}\) transitions, very high densities would be needed for the mechanism to operate for non-metastable states (see also Henkel et al., 2013). We note that our observations resulted in the detection of transitions with the same \(K\) value [(6,6) and (8,6); (7,7) and (9,7)], which suggests that multiple transitions sharing the same \(K\) state may be inverted; this should help constrain the physical conditions leading to the pumping (see Brown & Cragg, 1991).

### CH\({}_{3}\)OH Lines

As done with the NH\({}_{3}\) transitions, the script was used to find the CH\({}_{3}\)OH frequencies listed in Splatalogue that were serendipitously included in the SPWs of the Rosero et al. (2016) observations (whether detected or not).
We found that between 8 and 10 transitions were included in the SPWs up to energy levels of 500 K (the transition with the minimum lower energy level was the 6(2)-6(1) E1 vt=0 line at \(E_{l}/k_{B}=70\) K, and the maximum lower energy level of a transition was the 8(2)-7(3) E1 vt=1 line at \(E_{l}/k_{B}=482\) K). Different sources have different numbers of transitions due to different tuning of the SPWs. For sources with detections (see below), we explored whether other detections occurred at higher energy levels (we checked up to 5,000 K) but none were found6. Out of the CH\({}_{3}\)OH transitions serendipitously included in the SPWs, we report detection of CH\({}_{3}\)OH emission at 25 GHz toward ten sources. The detections correspond to the E-type \(\Delta J=0\), \(\Delta K=2-1\) series for \(J\) values between 6 and 10 (see energy level diagram in Leurini et al., 2016); these transitions have been detected before in the interstellar medium (Ladeyschikov et al., 2019, Voronkov et al., 2006, Towner et al., 2017). Table 3 presents the line parameters for sources with detection in at least one transition. The RMS of the spectra was smaller than \(\sim\)2 mJy; the maximum peak flux density of 19 mJy was found towards G34.43+00.24mm1A at 61 km s\({}^{-1}\). The mean RMS of the sources with detection in at least one transition is 0.54 mJy, and the mean flux density is 5.6 mJy (24 km s\({}^{-1}\) channel width). Figures 8 to 17 show the peak channel image and corresponding spectra of all sources with detection of at least one transition. In total, we detected emission of five different CH\({}_{3}\)OH transitions, e.g., see the detections toward IRAS 19012+0536A (Table 3). For some sources, some transitions were not included in the frequency band coverage of the observations and, therefore, are not listed in Table 3; e.g., the 10(2)-10(1) transition was not observed toward IRAS 18089\(-\)1732 A.
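For reference, brightness temperatures such as the \(\sim 50\) K average quoted in the following discussion follow from the standard Rayleigh-Jeans relation \(T_{b}\simeq 1222\,S_{\nu}[\mathrm{mJy}]/(\nu^{2}[\mathrm{GHz}^{2}]\,\theta_{maj}\theta_{min}[\mathrm{arcsec}^{2}])\). A quick check with the mean flux density quoted above (5.6 mJy), a 25 GHz line, and an assumed 0.4\({}^{\prime\prime}\) circular beam (an illustrative calculation, not from the paper):

```python
def brightness_temp_k(flux_mjy, freq_ghz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature for emission filling the
    synthesized beam: T_b = 1.222e3 * S[mJy] / (nu[GHz]^2 * bmaj * bmin),
    with beam axes in arcsec."""
    return 1.222e3 * flux_mjy / (freq_ghz**2 * bmaj_arcsec * bmin_arcsec)

tb = brightness_temp_k(5.6, 25.0, 0.4, 0.4)  # assumed 0.4" circular beam
print(f"T_b ~ {tb:.0f} K")  # ~68 K
```

The result, roughly 68 K, is of the same order as the \(\sim 50\) K average in the text (the actual synthesized beams vary from source to source).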
In cases where the five transitions were observed toward a source, but not all were detected, we include detection limits in the table (e.g., IRAS 18553+0414 A).

Footnote 6: Given the richness of CH\({}_{3}\)OH spectra listed in Splatalogue, we found it impractical to search for spectral lines above 500 K toward the complete sample. Cubes are generated for every transition, which requires significant computing time and auxiliary storage that could not be justified for sources with no detection of low-energy transitions. This was particularly evident when sources with detection of low-energy transitions resulted in non-detections above 500 K.

The 25 GHz CH\({}_{3}\)OH transitions detected in this work correspond to Class I masers when inverted (e.g., Towner et al., 2017). However, as in the case of the NH\({}_{3}\) lines (Section 3.1), the standard methods to establish the nature of the emission (thermal vs. maser) using brightness temperature, linewidth, line ratios or spatial extent (e.g., Towner et al., 2017) are not conclusive in our case because the peak flux density of the lines is underestimated due to low spectral resolution, particularly in the case of narrow lines (see Figure 7). Despite our high angular resolution (\(\sim\)0.4\({}^{\prime\prime}\)), the average brightness temperature of CH\({}_{3}\)OH lines in our sample is \(\sim 50\) K, which is too low to distinguish between maser and thermal emission, and our channel width is too broad to determine linewidths. Extended CH\({}_{3}\)OH emission (e.g., see the detections toward IRAS 20126+4104, Figure 17) could be due to the superposition of different maser spots, and a line ratio analysis is premature due to the underestimated flux density values as a result of spectral dilution. Nevertheless, we find it likely that our CH\({}_{3}\)OH detections encompass both thermal and maser lines because:

- _Thermal:_
  1. In some sources the CH\({}_{3}\)OH detections are seen in all observed transitions toward the peak continuum, which is consistent with thermal emission (the detected transitions have similar, i.e., within a factor of \(\sim 2\), energy above ground).
  2. The energy levels of the transitions reported here (Table 3) are between 70 K and 150 K, which could easily be achieved in hot molecular core environments of high-mass star forming regions (e.g., Araya et al., 2005 and references therein).
- _Maser:_
  1. While the CH\({}_{3}\)OH lines in some sources trace the peak radio continuum (e.g., Figure 8), in at least four cases (Figures 9, 13, 15, 16) the CH\({}_{3}\)OH emission is offset from the peak radio continuum as expected from Class I masers (e.g., Kurtz et al., 2004; Cyganowski et al., 2009; Araya et al., 2009).
  2. Some transitions are found toward the same lines-of-sight (presumably the same physical locations) while other transitions are not. For example, the 9(2)-9(1) regions B and C in IRAS 18182\(-\)1433 are not detected in the lower \(J\) transitions (Figure 9), and the distribution of CH\({}_{3}\)OH emission in IRAS 20126+4104 differs between transitions (Figure 17), which is also expected in the case of masers (e.g., Towner et al., 2017).

To our knowledge, we report first detections in nine of the ten sources. The exception is IRAS 18182\(-\)1433 (G16.59\(-\)0.05), which was also observed by Towner et al. (2017) with the VLA. They conducted higher spectral resolution (0.4 km s\({}^{-1}\) channel width) but lower angular resolution (\(\theta_{syn}\sim 1^{\prime\prime}\)) observations compared to our data. They reported two 25 GHz CH\({}_{3}\)OH emission regions detected in the 3\({}_{2}\), 5\({}_{2}\), 8\({}_{2}\), and 10\({}_{2}\) transitions (see their Figure 1)7.
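The thermal/maser degeneracy illustrated in Figure 7, which recurs in the comparison that follows, can be checked numerically: for a Gaussian line narrower than (or comparable to) the channel width, the flux density recorded in a single channel is approximately the integrated line flux, \(\approx 1.064\times\)peak\(\times\)FWHM, divided by the channel width. Using the two hypothetical lines of Figure 7 (a sketch of the argument, not the authors' code):

```python
import math

def channel_flux_mjy(peak_mjy, fwhm_kms, chan_kms):
    """Approximate flux density seen in one broad channel when a Gaussian
    line centered in the channel is averaged over it: integrated flux /
    channel width, with the Gaussian integral
    peak * FWHM * sqrt(pi / (4 ln 2)) ~= 1.064 * peak * FWHM.
    (Rough when the FWHM is comparable to the channel width.)"""
    integrated = peak_mjy * fwhm_kms * math.sqrt(math.pi / (4.0 * math.log(2.0)))
    return integrated / chan_kms

maser = channel_flux_mjy(300.0, 2.0, 24.0)    # bright, narrow hypothetical maser
thermal = channel_flux_mjy(40.0, 15.0, 24.0)  # weaker, broad hypothetical thermal line
print(f"maser: {maser:.1f} mJy, thermal: {thermal:.1f} mJy")  # both ~26.6 mJy
```

Both hypothetical lines deposit essentially the same \(\sim 27\) mJy into a 24 km s\({}^{-1}\) channel, which is why the peak flux density alone cannot discriminate between them at this spectral resolution.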
The southern source, which they named G16.59\(-\)0.05_b is coincident with our source A (Figure 9), and was reported to have a narrow linewidth (0.8 km s\({}^{-1}\)) and peak flux density of the 5\({}_{2}\) line of 320 mJy, which implied a brightness temperature of 7200 K, and therefore, was classified as a maser (the 5\({}_{2}\) transition was not observed in our work). We note that such linewidth and flux density are very similar to those we used in Figure 7 to demonstrate the dichotomy between thermal and maser lines in our data. The other CH\({}_{3}\)OH emission region reported by Towner et al. (2017) (G16.59\(-\)0.05_a, which is coincident with our source B in Figure 9) was classified as thermal because of its broader linewidth (2.8 km s\({}^{-1}\)) and low brightness temperature of the 5\({}_{2}\) line (44 K for a 36 mJy line; see their Table 5); such a weak, narrow line would have been undetectable in our data. Therefore, our detection toward the same position (B), and our additional detection toward position C in a different transition (9(2)-9(1); Figure 9) imply that the CH\({}_{3}\)OH lines reported in this work toward B and C are also masers. The only transition we have in common with Towner et al. (2017) toward IRAS 18182\(-\)1433 is the 8\({}_{2}\) line, for which we detect emission toward the region A (Figure 9; G16.59\(-\)0.05\(\_\)b in Towner et al., 2017). They reported a peak flux density of 205 mJy for this line; smoothing their line as illustrated in Figure 7 to a 24 km s\({}^{-1}\) channel width results in a line that is consistent with our measurement of 4.3 mJy within the RMS of our data. We note that the LSR velocities reported by Towner et al. 
(2017) toward the two CH\({}_{3}\)OH emission regions in IRAS 18182\(-\)1433 (i.e., 58.6 and 61.4 km s\({}^{-1}\)) correspond very well with the LSR velocities we list in Table 3 (i.e., \(\sim 60\) km s\({}^{-1}\)), which exemplifies the reliability of our method to find spectral lines from broadband VLA continuum observations (note that the channel width of our observations is 24 km s\({}^{-1}\)). Of the ten sources with 25 GHz CH\({}_{3}\)OH detections, six were previously observed by our group in Gomez-Ruiz et al. (2016) and Rodriguez-Garza et al. (2017) with the VLA in the 44 GHz CH\({}_{3}\)OH Class I maser transition (G23.01\(-\)0.41A, G34.43\(+\)00.24mm1A, IRAS 19012\(+\)0536A, and G53.25\(+\)00.04mm4A were not observed). All six sources harbor 44 GHz CH\({}_{3}\)OH masers, but as commonly observed with CH\({}_{3}\)OH Class I transitions, the masers were mostly offset with respect to the central radio continuum source, in some cases by as much as \(\sim 15\arcsec\) along aligned structures suggestive of shocks in outflows. Out of this sample of six sources, we found 44 GHz CH\({}_{3}\)OH masers coincident (within less than 0.5\(\arcsec\) angular offset) with 25 GHz CH\({}_{3}\)OH detections toward IRAS 18182\(-\)1433 (region A in Figure 9) and IRAS 18553\(+\)0414 A (Figure 13). The spatial coincidence between the 25 GHz CH\({}_{3}\)OH detections reported in this work and the known 44 GHz CH\({}_{3}\)OH masers suggests that these specific detections correspond to Class I masers. In the case of the 25 GHz CH\({}_{3}\)OH detections found toward the radio continuum sources, the lines may correspond to thermal emission (e.g., see section 4.3 of Towner et al., 2017). However, in the two sources where the 44 GHz and 25 GHz CH\({}_{3}\)OH components are coincident, the angular separation between the masers and the radio continuum is less than \(\sim 1\arcsec\), i.e., these are Class I masers found very close to young massive stellar objects.
It is therefore possible that some of our other 25 GHz CH\({}_{3}\)OH detections may also be masers, but that the excitation of the 25 GHz CH\({}_{3}\)OH lines requires conditions found nearer to massive YSOs (\(\lesssim 0.1\) pc), while 44 GHz masers are often found farther away from the continuum source (e.g., Kurtz et al., 2004, Voronkov et al., 2014). If this were the case, 25 GHz CH\({}_{3}\)OH Class I masers could preferentially be tracing shocked gas closer to the massive YSOs, and therefore, could trace the expansion of ionized jets, similar to the H\({}_{2}\)O masers detected in W75N(B)-VLA2 (Carrasco-Gonzalez et al., 2015). As pointed out by the numerical models of Leurini et al. (2016), the masers from the 25 GHz ladder are inverted at higher densities than other Class I masers (\(10^{6}\) vs \(10^{4}\) cm\({}^{-3}\)) and, therefore, may trace denser gas in shocks near massive YSOs. Further high-spectral resolution observations are needed to explore this possibility and to contrast the spatial distribution of possible masers in our sample with respect to the spatial distribution of 25 GHz CH\({}_{3}\)OH masers in other regions as reported in the literature (e.g., Voronkov et al., 2006, Towner et al., 2017).

### Association with Ionized Jets

We found that both NH\({}_{3}\) and CH\({}_{3}\)OH detections are located very near the continuum sources (coincident in most cases, or within 2\(\arcsec\), which corresponds to a physical separation between \(\sim 0.1\) and 0.01 pc given the distances listed in Rosero et al., 2016). Out of the ten regions with CH\({}_{3}\)OH detection, five have been classified as ionized jets and three as ionized jet candidates, i.e., eight regions belong to the group of 25 sources listed as ionized jets or candidates in Tables 2 and 3 of Rosero et al. (2019). The two exceptions are G34.43\(+\)00.24mm1A and G53.25\(+\)00.04mm4A; however, association with an ionized jet is also possible in at least one of them.
In the case of G34.43\(+\)00.24mm1A (Figures 4 and 11), the radio continuum is elongated and is characterized by a rising spectral index \(\alpha=+0.7\pm 0.1\), which is consistent with dense ionized gas in a jet (e.g., see Reynolds, 1986). Moreover, Isequilla et al. (2021) reported the presence of multiple molecular outflows in this source, the main outflow being in the NE-SW direction, which is approximately colinear with the radio continuum elongation of the source (Figure 4 contours; Rosero et al., 2016). The other source not classified as a jet or jet candidate by Rosero et al. (2019) is G53.25\(+\)00.04mm4A, which is unresolved (\(\theta_{syn}\sim\)0.4\(\arcsec\), K-band, Rosero et al., 2016) and has a flat spectral index \(\alpha=0.1\pm 0.1\). We note that multiple cm-continuum sources with different spectral indices can coexist within small physical areas in star forming regions (e.g., Towner et al., 2021, Sanna et al., 2019); therefore, higher sensitivity and angular resolution observations should be conducted to further investigate the nature of G53.25\(+\)00.04mm4A. We highlight that all five sources with NH\({}_{3}\) detection (Section 3.1) were also detected in CH\({}_{3}\)OH; it is likely that other regions with CH\({}_{3}\)OH detection may also have NH\({}_{3}\) emission, albeit below our sensitivity levels. Our data therefore suggest that high-excitation NH\({}_{3}\) and CH\({}_{3}\)OH lines can trace very young high-mass star forming regions harboring ionized jets.

### Radio Recombination Lines and Stacking

VLA radio continuum datasets of regions characterized by thermal ionized gas are ideal to search for RRLs as a by-product of broadband observations. However, our target sample consists of very young high-mass star forming regions and ionized jet candidates prior to the development of bright HII regions; therefore, any RRL would be very weak, as expected from ionized jets (Anglada et al., 2018).
We searched for \(\Delta n=1\) (\(\alpha\)) hydrogen transitions in the observed continuum frequencies, as they are expected to be the brightest RRLs (e.g., Liu et al., 2013)9. A total of 8 RRL frequencies are within the observed spectral windows; Table 4 lists the specific transitions, channel widths and range of RMS values from the spectra of all sources.

Footnote 9: Some carbon RRLs could be brighter than hydrogen RRLs, e.g., see the spectrum of Mol 160 in Araya et al. (2007a); however, hydrogen transitions would be brighter when smoothed to the large channel width of our data.

We found no clear RRL detection in the stacked spectra. Figure 18 shows examples of RRL spectra toward four sources. The left column shows C-band continuum detections in contours (Rosero et al., 2016) superimposed on a channel image from the stacked C-band RRL cubes. The center and right columns of Figure 18 show the C-band and K-band RRL spectra, respectively. The thin color lines in each panel show individual RRL transitions as identified in the insets, while the thick blue line shows the stacked spectrum per band. The RMS values of the stacked RRLs at C-band and K-band are listed in Table 5. As expected, the RMS of the stacked spectra is smaller than the RMS values of individual transitions. For regions with multiple continuum sources in the field (e.g., IRAS 18566+0408, Figure 18), we show spectra of only one continuum source (or the combined spectra from a subset of sources in a region), but we searched for RRLs toward all continuum sources (not all spectra are shown). As explained in Section 2, the individual RRL spectra were regridded to the same channel width, i.e., 133 km s\({}^{-1}\) at C-band and 38 km s\({}^{-1}\) at K-band, before stacking. The RMS values per source reported here are similar to state-of-the-art RRL surveys, albeit with broader channel width due to the nature of our VLA continuum data set.
For example, one of the most sensitive large-scale RRL surveys to date is that of Liu et al. (2019), which reported RMS values of \(\sim 0.65\) mJy with a 5.1 km s\({}^{-1}\) channel width based on stacking of twelve hydrogen RRL transitions from H163\(\alpha\) to H174\(\alpha\) (see Tables 4 and 5 for comparison). We can obtain a statistical upper limit of RRL detectability in our sample by combining (stacking) all RRL cubes of all sources per band. To accomplish this, each cube was smoothed to the same spatial and spectral resolution, then sub-regions were obtained centered at the location of each known continuum source in all fields. Finally, all sub-region cubes were stacked. Such a stacking method assumes that the RRLs are aligned in velocity, which is a reasonable assumption given the large channel width of our data, especially at C-band (see above). Using this method, we obtained RMS values of 7.0 \(\mu\)Jy beam\({}^{-1}\) at C-band and 17 \(\mu\)Jy beam\({}^{-1}\) at K-band (same channel widths as above). As is well known from the radiometer equation, stacking \(N\) different lines would decrease the RMS by \(N^{-1/2}\), assuming the same sensitivity in all input spectra (e.g., Anglada et al., 2018, Jolly et al., 2020). Figure 19 shows the results of this stacking exercise. The figure shows the RMS values as a function of the number of stacked RRLs for the C-band and K-band data. At an abscissa value of \(10^{0}\) (one RRL transition), Figure 19 shows the dispersion of RMS values for all RRLs (sources and transitions) per band. The RMS of the sample decreased when all RRL spectra per source were stacked (2 to 5 RRLs were included in the broadband continuum observations per band per source). When all spectra per band were stacked (all transitions and sources per band), we obtained the right-most data point in both panels of Figure 19.
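The \(N^{-1/2}\) behavior expected from the radiometer equation can be reproduced with a short Monte Carlo experiment: average \(N\) independent Gaussian-noise spectra of equal RMS and measure the RMS of the stack. This is an illustrative simulation, not the stacking pipeline used in the paper:

```python
import random
import statistics

def stacked_rms(n_spectra, n_chan=4096, sigma=1.0, seed=1):
    """RMS of the average of n_spectra independent Gaussian-noise
    spectra (each with per-channel RMS = sigma); the radiometer
    equation predicts sigma / sqrt(n_spectra)."""
    rng = random.Random(seed)
    stack = [0.0] * n_chan
    for _ in range(n_spectra):
        for i in range(n_chan):
            stack[i] += rng.gauss(0.0, sigma)
    mean_spectrum = [s / n_spectra for s in stack]
    return statistics.pstdev(mean_spectrum)

for n in (1, 4, 16, 64):
    print(n, round(stacked_rms(n), 3))  # RMS falls roughly as n**-0.5
```

With enough channels the measured RMS tracks \(\sigma N^{-1/2}\) to within a few percent, consistent with the blue line in Figure 19.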
We conclude that the RMS decreases with the number of stacked RRLs as expected, i.e., \(RMS\propto N^{-1/2}\) (blue line, Figure 19). Although this trend is not expected to continue indefinitely, our results show that for VLA imaging the relation is valid at least through moderate samples of \(N\sim 100\). Despite the very low RMS obtained using this stacking method, we are still above the expected minimum flux density value for the combined RRL signal in our sample assuming optically thin emission (\(S_{\nu,min}\)). We can estimate this limit as follows: the weakest continuum source in our sample at C-band is UYSO1 B that has a peak continuum intensity of 15 \(\mu\) Jy beam\({}^{-1}\) at 7.4 GHz. Assuming optically thin free-free continuum as above, the RRL from this source would be \(\sim 0.4\,\mu\) Jy beam\({}^{-1}\). We can take this value as \(S_{\nu,min}\) because all other continuum sources in the sample are brighter than UYSO1 B, and thus, the \(\sim 0.4\,\mu\) Jy beam\({}^{-1}\) value would be a lower limit of the expected optically thin emission in the stacked sample. This value is below the 7.0 \(\mu\) Jy beam\({}^{-1}\) RMS at C-band of our stacked data, and thus, we cannot rule out optically thin free-free RRLs from the overall sample. Moreover, the continuum emission is likely not optically thin at C-band (see SEDs in Rosero et al., 2016), and the lines may be narrower than our channel width, which would result in an even lower value of the stacked RRL signal. The same argument applies to the K-band data based on the weakest continuum source in our sample (IRAS 19266+1745C, 17 \(\mu\) Jy beam\({}^{-1}\) at 20.9 GHz; assuming a line-to-continuum ratio of 32%), which results in an expected peak flux density of \(\sim 5\,\mu\) Jy beam\({}^{-1}\), which is smaller than the RMS of the stacked spectrum (17 \(\mu\) Jy beam\({}^{-1}\)). 
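The detectability estimate above reduces to simple arithmetic on the line-to-continuum ratio. For instance, the K-band case (numbers taken from the text; the ratio itself depends on frequency, electron temperature, and channel width, and is simply assumed here):

```python
def expected_rrl_peak_ujy(continuum_ujy, line_to_cont_ratio):
    """Expected optically thin RRL peak flux density given the continuum
    flux density and an assumed line-to-continuum ratio (the ratio is an
    input assumption, not derived in this sketch)."""
    return continuum_ujy * line_to_cont_ratio

# K-band numbers from the text: 17 uJy/beam continuum, 32% ratio.
kband_peak = expected_rrl_peak_ujy(17.0, 0.32)
print(f"expected K-band RRL peak: {kband_peak:.1f} uJy/beam")  # ~5.4
```

The resulting \(\sim 5.4\,\mu\)Jy beam\({}^{-1}\) reproduces the \(\sim 5\,\mu\)Jy beam\({}^{-1}\) quoted in the text, below the 17 \(\mu\)Jy beam\({}^{-1}\) RMS of the stacked K-band spectrum.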
We conclude that the stacking procedure works as expected (the RMS decreases as \(\sim N^{-1/2}\)), and that we can rule out strong maser-like emission of RRLs in our overall sample; however, as expected, thermal RRL emission is too weak in our sample for a statistical detection. The stacking method used in this project (instructions for installation and use of the script developed here are given in Appendix A) can be used for statistical detections toward a sample of brighter radio-continuum sources than those discussed here, as well as toward samples of weak continuum sources with the next generation of high-sensitivity interferometers such as the ngVLA.

## 4 Summary

The broadband and multi-channel continuum observing mode of the VLA results in datasets that are effectively spectral scans of weak+broad or bright+narrow lines. We developed a script to search for spectral lines in calibrated continuum visibility files, and applied it to our radio continuum survey toward 58 young high-mass star forming regions (Rosero et al., 2016), which consisted of observations at C and K bands. We focused on the search for excited OH, NH\({}_{3}\), CH\({}_{3}\)OH, and RRLs. In the case of the RRLs, the script stacked the different transitions per source to improve sensitivity. Given the low radio continuum intensity in our sample, our search for RRLs was done as a proof of concept to investigate whether the RMS decreases as expected from the radiometer equation and to search for unusually strong lines that could be indicative of non-thermal emission. We found that the stacking method decreases the RMS noise as expected, and report no detection of RRLs (3\(\sigma\) detection limit of \(\sim\)0.2 mJy at C-band with a 133 km s\({}^{-1}\) channel width, and \(\sim\)0.4 mJy at K-band with a channel width of 29 km s\({}^{-1}\)), within an angular size smaller than \(\sim\)1\(\arcsec\).
We detected no excited OH lines in the sample (3\(\sigma\) upper limit of \(\sim\)0.6 mJy within an angular size smaller than 0.8\(\arcsec\) and a channel width between 70 and 130 km s\({}^{-1}\)), but found multiple CH\({}_{3}\)OH and NH\({}_{3}\) lines. Specifically, we report the first detection of several 25 GHz CH\({}_{3}\)OH transitions toward nine of the 58 regions in our sample; five sources have NH\({}_{3}\) detections. Due to the large channel width of our data (\(\sim 70\) to 130 km s\({}^{-1}\) at C-band; \(\sim 20\) to 30 km s\({}^{-1}\) at K-band) and the low flux density of the detections (\(<19\) mJy), it is unclear whether the lines are due to a maser or thermal mechanism. High spectral resolution follow-up observations are required to investigate the nature of these detections. We found that both NH\({}_{3}\) and CH\({}_{3}\)OH are located very near ionized jets or jet candidates (coincident in most cases, or within 2\(\arcsec\), \(\lesssim 0.1\) pc). Therefore, these transitions could be useful tracers of young massive stellar objects during the development of ionized jets. Particularly interesting is the case of the CH\({}_{3}\)OH detections that, if maser in origin, would imply Class I transitions tracing material very near ionized jets, which could reveal shocks due to the interaction of the jets with the molecular medium near the YSOs from which the outflows originate. The script provided in this article to search for spectral lines and stack RRLs from continuum VLA datasets should be especially applicable to continuum surveys toward extragalactic objects, in which low velocity resolution spectra would be appropriate for detection of broad lines (e.g., Araya et al., 2004, Eisner et al., 2019). This is a new discovery tool to explore the rich VLA archive of continuum observations and complement high-\(z\) stacking efforts (e.g., see Jolly et al., 2020 and references therein).
We would like to thank an anonymous referee for a detailed review that helped us to significantly improve this manuscript. E.D.A. acknowledges support of WIU Distinguished Alumnus Frank Rodeffer to the WIU Physics Department and the WIU Astrophysics Research Laboratory, in particular student scholarships and computational resources. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. E.D.A. acknowledges partial support from NSF grant AST-1814063. P.H. acknowledges partial support from NSF grant AST-1814011. E.D.A. would like to thank WIU student Mr. Gbenga Miracle Adebisi for participating in the test of the code developed in this project. E.S.T. would like to thank the AAS for a FAMOUS grant to support participation in an AAS meeting. This research has made use of NASA's Astrophysics Data System.

Software: CASA 5.1, 5.6, 6.2
2309.05834
SCD-Net: Spatiotemporal Clues Disentanglement Network for Self-supervised Skeleton-based Action Recognition
Contrastive learning has achieved great success in skeleton-based action recognition. However, most existing approaches encode the skeleton sequences as entangled spatiotemporal representations and confine the contrasts to the same level of representation. Instead, this paper introduces a novel contrastive learning framework, namely Spatiotemporal Clues Disentanglement Network (SCD-Net). Specifically, we integrate the decoupling module with a feature extractor to derive explicit clues from spatial and temporal domains respectively. As for the training of SCD-Net, with a constructed global anchor, we encourage the interaction between the anchor and extracted clues. Further, we propose a new masking strategy with structural constraints to strengthen the contextual associations, leveraging the latest development from masked image modelling into the proposed SCD-Net. We conduct extensive evaluations on the NTU-RGB+D (60&120) and PKU-MMD (I&II) datasets, covering various downstream tasks such as action recognition, action retrieval, transfer learning, and semi-supervised learning. The experimental results demonstrate the effectiveness of our method, which outperforms the existing state-of-the-art (SOTA) approaches significantly.
Cong Wu, Xiao-Jun Wu, Josef Kittler, Tianyang Xu, Sara Atito, Muhammad Awais, Zhenhua Feng
2023-09-11T21:32:13Z
http://arxiv.org/abs/2309.05834v1
SCD-Net: Spatiotemporal Clues Disentanglement Network for Self-supervised Skeleton-based Action Recognition ###### Abstract Contrastive learning has achieved great success in skeleton-based action recognition. However, most existing approaches encode the skeleton sequences as entangled spatiotemporal representations and confine the contrasts to the same level of representation. Instead, this paper introduces a novel contrastive learning framework, namely Spatiotemporal Clues Disentanglement Network (SCD-Net). Specifically, we integrate the decoupling module with a feature extractor to derive explicit clues from spatial and temporal domains respectively. As for the training of SCD-Net, with a constructed global anchor, we encourage the interaction between the anchor and extracted clues. Further, we propose a new masking strategy with structural constraints to strengthen the contextual associations, leveraging the latest development from masked image modelling into the proposed SCD-Net. We conduct extensive evaluations on the NTU-RGB+D (60&120) and PKU-MMD (I&II) datasets, covering various downstream tasks such as action recognition, action retrieval, transfer learning, and semi-supervised learning. The experimental results demonstrate the effectiveness of our method, which outperforms the existing state-of-the-art (SOTA) approaches significantly. ## Introduction Skeleton-based action recognition focuses on identifying human actions via skeleton sequences, which has witnessed significant advancements in recent years. On one hand, deep networks, such as the Graph Convolutional Network (GCN) [13], have been investigated and successfully applied for the task at hand. On the other hand, several large-scale datasets, _e.g._, NTU-RGB+D [2], have been proposed, providing an experimental foundation for further development of the area. However, like most visual tasks, the training of a high-performance model typically requires a massive amount of high-quality labelled data. 
This requirement poses a significant challenge in data collection and annotation. Fortunately, self-supervised learning has emerged as a solution to address this challenge by leveraging inherent associations instead of relying on annotations. In particular, recent investigations [11] have demonstrated that contrastive learning, owing to its interpretability and transferability, has emerged as a front-runner in self-supervised skeleton-based action recognition. However, several crucial aspects are disregarded by existing approaches. First, the encoder is responsible for mapping the input into a latent space where the contrast can be conducted. Most previous methods [15, 16] concentrate on obtaining unified information through commonly used spatiotemporal modelling networks; their designs result in the complete entanglement of information, failing to provide clear indications for subsequent contrastive measures. There have been sporadic attempts [11] aiming to extract absolutely isolated spatial or temporal information, but repeated evidence has shown that complete isolation of spatiotemporal information is suboptimal for action recognition [17, 12]. More importantly, most approaches focus on constructing contrast pairs at the same level of representation [10] during optimisation, or attempt to force the interaction between information flows while overlooking the gap between domains [11]. In addition, existing techniques [13] often limit themselves to scale transformation, which results in not fully capitalising on the potential of data augmentation. Here we introduce a novel contrastive learning framework that focuses on disentangling spatiotemporal clues, and exploits masking in data augmentation to provide more discriminative inputs, thereby prompting the model to learn more robust interactions. 
To leverage the intricate features present in skeleton sequences, we propose a dual-path decoupling encoder to generate explicit representations from the spatial and temporal domains. Our encoder comprises two main subsystems: a feature extractor and a decoupling module. The role of the feature extractor is to extract fundamental spatiotemporal features from skeleton sequences as the intermediate representations. Without an overall grasp of the skeleton sequence, it is difficult to obtain a complete picture of the features by modelling from a single perspective. Next, we generate token embeddings by projection and refine the sequence features with a transformer-based module. The decoupling modules are instrumental in deriving disentangled joint-based and frame-based representations, leading to enhanced interpretability of the learned representations. The principle underlying contrastive learning is that the encoded query should be similar to its corresponding key while dissimilar to the other keys in the backup queue [14]. Here we extend the contrastive loss to measure discrimination among representations of multiple spatiotemporal granularities. We strategically incorporate a global view of the spatiotemporal representation as an anchor and evaluate its correlation with other representations obtained from an alternate encoder. To elaborate further, we fuse and project the cues derived from the encoder into the contrastive space to create a global representation. Our aim is to establish a bridge that facilitates the interaction of information across different domains through the utilisation of this anchor. Furthermore, to prompt the model to learn more robust interactions, we propose an innovative mask-based data augmentation in a structurally-constrained manner. Specifically, we mask the adjacent area of the randomly selected region in the spatial domain and construct cube-based random masking in the temporal domain. 
This structured masking strategy serves to significantly increase the variety of training data. Moreover, it enables the model to implicitly capture spatiotemporal contextual relationships within the skeleton sequence. We perform extensive experiments to demonstrate the effectiveness of the proposed method. As shown in Figure 1, the results indicate that our approach surpasses the mainstream methods in all downstream tasks, demonstrating its superior capabilities in skeleton-based action understanding. In summary, the key innovations of SCD-Net include: * A novel contrastive learning framework that is instrumental in decoupling spatiotemporal clues and enhancing the discriminatory capacity of action representations. * The novel decoupling encoders that are designed to extract clean spatial and temporal representations from complexly entangled information. * A new contrastive loss that encourages the network to capture meaningful interactions across different domains. * A structurally-constrained masking, which reflects the inherent properties of skeleton sequences in a conceptually unique way, is proposed for data augmentation. * SCD-Net sets a new SOTA across all downstream tasks. ## Related Work ### Skeleton-based Action Recognition Skeleton-based action recognition has garnered significant attention from the research community [13, 14, 15]. In earlier approaches [16, 17], customised techniques were employed to classify skeletons via traditional feature extraction methods. Recently, GCN-based approaches [15, 18, 19, 20] have gained prominence in the field. The general paradigm initially models the skeleton sequence as a spatiotemporal graph and subsequently employs information aggregation and updating techniques. Inspired by the notable achievements of the transformer [11, 20], some recent methods [17, 18] have explored its powerful sequence modelling capability for skeleton-based tasks. 
### Contrastive Learning Contrastive learning is a typical solution for self-supervised learning. Unlike generative learning [17, 16], contrastive learning does not involve explicit generation or reconstruction of the input. Instead, it focuses on learning discriminative representations through a contrastive loss. Most contrastive learning methods [15, 16] operate on the principle of pulling positive pairs closer to each other, while simultaneously pushing dissimilar pairs farther apart within a projection space. By exploring the internal properties within the data, contrastive learning enables learning more generalised and robust representations, resulting in remarkable performance on downstream tasks [16]. ### Contrastive Learning for Skeleton-based Action Recognition Contrastive learning has also been successfully employed in skeleton-based action recognition. Thoker _et al._[18] proposed the intra-skeleton and inter-skeleton contrastive loss, achieving promising results in several downstream tasks. Dong _et al._[14] utilised down-sampling operations at different stages of the encoder to obtain multi-scale features for constructing a hierarchical contrastive learning framework. Franco _et al._[17] proposed a novel approach that involves projecting the encoded features into a hyperbolic space, which is a non-Euclidean space that allows more efficient modelling of complex associations. Despite these advances, most existing studies overlook the crucial step of extracting and disentangling spatial and temporal clues from skeleton sequences, let alone the interactions among representations of different domains. For contrastive learning, data augmentation processes the training sample to obtain positive input pairs with certain differences. Thoker _et al._[18] used various spatiotemporal augmentation techniques, including pose augmentation, joint jittering, and temporal crop-resize, to generate different inputs for the query and key encoders.

Figure 1: A comparison of the proposed method with HiCo-Transformer [14], using multiple evaluation metrics. (Better view in colour.)

While most methods follow similar scale transformation paradigms, Zhou _et al._[13] proposed a strategy of masking selected nodes and frames, which greatly extends the augmentation to "destroy" the data structure. However, unlike image data, skeleton sequences have strong physical associations, meaning that even if a certain node or frame is corrupted, it can easily be corrected using the information from adjacent areas [1]. Incorporating the structural constraints, we expand the point-based masking approach to area-based masking. This extension aims to prevent potential data leakage and enhance the learning capabilities of SCD-Net. ## The Proposed SCD-Net In this section, we will initially present the overall framework of SCD-Net, followed by a detailed introduction to each of its components in the subsequent sections. ### The Overall Framework The overall pipeline of the proposed method consists of two branches, as shown in Figure 2 (b). Each branch has the same components, including data augmentation and encoder. For any input data, we link the outputs obtained by the encoder and momentum encoder to form contrast pairs. To elaborate further, the input of the network is defined as a sequence of human body key points, denoted as \(\mathcal{X}\in\mathbb{R}^{C\times T\times V}\), where \(T\) is the length of the sequence, \(C\) is the dimension of the physical coordinates (2D/3D), and \(V\) is the number of key points. In SCD-Net, we first apply data augmentation to generate the augmented views for the encoders. Second, for each encoder, we deploy feature extraction and (spatial/temporal) decoupling operations to generate the spatial feature \(\mathbf{z}_{s}\in\mathbb{R}^{C_{2}}\) and temporal feature \(\mathbf{z}_{t}\in\mathbb{R}^{C_{2}}\) from the entangled information. 
Third, we project these clues into the same semantic space to obtain the final representations. The loss function, \(\mathcal{L}_{\theta,\xi}\), is defined as a measure of interactions of these representations. The parameters \(\theta\) and \(\xi\) parameterise the encoder and the momentum encoder, respectively. During the optimisation, the loss is backpropagated only through the encoder, while the parameters of the momentum encoder are updated using a momentum update strategy. The final update is: \[\boldsymbol{\theta}\leftarrow\mathrm{optimizer}(\boldsymbol{\theta},\nabla_{ \boldsymbol{\theta}}\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\xi}},r), \quad\boldsymbol{\xi}=\boldsymbol{\xi}*m+\boldsymbol{\theta}*(1-m), \tag{1}\] where \(r\) and \(m\) are the learning rate and decay rate. ### The Dual-path Decoupling Encoder In general, the features extracted from a skeleton sequence are characterised as complex spatiotemporal associations describing an action. However, we argue that this paradigm is not suitable for contrastive learning. As the information is greatly entangled, it is difficult to provide clear guidance for the subsequent comparison. In SCD-Net, we advocate a dual-path decoupling encoder to extract multiple clear and discriminative cues from the complex sequence information. Such clues provide clear guidance for subsequent contrast quantification. More importantly, a reliable assessment of the contrast between different domains is likely to provide stronger discrimination. For brevity, we generally denote the augmented input for the encoder as \(\mathcal{X}\). As demonstrated by the existing studies, completely isolating the information flow is sub-optimal [12, 13, 14]. Therefore, we apply a spatiotemporal modelling network to extract the intermediate features. 
Inspired by the excellent performance in modelling skeleton sequences [15], we use an \(l_{g}\)-layer GCN, consisting of spatial-GCN (S-GCN) and temporal GCN (T-GCN), to obtain unified representations \(\mathcal{Y}\in\mathbb{R}^{C_{1}\times T\times V}\). This can be expressed as a process of aggregation and updating of adjacent features. Specifically, for any \(\mathcal{X}_{ti}\in\mathbb{R}^{C}\), where \(t\) and \(i\) are the frame and joint index, the newly generated features \(\mathcal{Y}_{ti}\in\mathbb{R}^{C_{1}}\) can be expressed as: \[\mathcal{Y}_{ti}=\sum_{\mathcal{X}_{uj}\in\mathrm{B}(\mathcal{X}_{ti})}\frac{1 }{\mathrm{Z}_{ti}(\mathcal{X}_{uj})}\cdot\mathcal{X}_{uj}\cdot\mathrm{w}( \mathrm{l}_{ti}(\mathcal{X}_{uj})), \tag{2}\] where \(\mathrm{B}(\mathcal{X}_{ti})\) denotes the kernel of the graph convolution operation on \(\mathcal{X}_{ti}\), \(\mathrm{Z}(\cdot)\) represents normalisation, \(\mathrm{w}(\cdot)\) is the weight function, and \(\mathrm{l}(\cdot)\) maps adjacent nodes to the corresponding subset index. Given the intermediate spatiotemporal representation, \(\mathcal{Y}\), the following step is the decoupling operation, which involves projection and refinement, as shown in Figure 3. Specifically, we perform a dimension transformation on \(\mathcal{Y}\) to derive \(\mathcal{Y}_{rs}\in\mathbb{R}^{V\times(C_{1}T)}\) and \(\mathcal{Y}_{rt}\in\mathbb{R}^{T\times(C_{1}V)}\). These transformed representations are then projected to a higher semantic space to obtain the corresponding spatial and temporal embeddings. 
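The dimension transformation above regroups the unified feature \(\mathcal{Y}\) of shape \((C_{1},T,V)\) into a per-joint view of shape \((V,C_{1}T)\) and a per-frame view of shape \((T,C_{1}V)\). A minimal sketch with toy dimensions and plain Python lists (not the paper's implementation, which would be a single permute-and-reshape in a tensor framework):

```python
# Sketch of the spatial/temporal decoupling reshapes: Y (C1, T, V) is
# regrouped so each joint (row of Y_rs) or each frame (row of Y_rt)
# carries all of its channel/time or channel/joint values. Toy dims.

def decouple(Y, C1, T, V):
    # spatial view: one row per joint, concatenating channels and frames
    Y_rs = [[Y[c][t][v] for c in range(C1) for t in range(T)]
            for v in range(V)]
    # temporal view: one row per frame, concatenating channels and joints
    Y_rt = [[Y[c][t][v] for c in range(C1) for v in range(V)]
            for t in range(T)]
    return Y_rs, Y_rt

C1, T, V = 2, 3, 4
# encode (c, t, v) into the value so the regrouping is easy to inspect
Y = [[[100 * c + 10 * t + v for v in range(V)] for t in range(T)]
     for c in range(C1)]
Y_rs, Y_rt = decouple(Y, C1, T, V)
assert len(Y_rs) == V and len(Y_rs[0]) == C1 * T
assert len(Y_rt) == T and len(Y_rt[0]) == C1 * V
```

In PyTorch-style code the same views would be `Y.permute(2, 0, 1).reshape(V, C1 * T)` and `Y.permute(1, 0, 2).reshape(T, C1 * V)`.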
For instance, the spatial embedding operation is defined as: \[\mathcal{Y}_{s}=\mathcal{W}_{s2}*\mathrm{ReLU}(\mathcal{W}_{s1}*\mathcal{Y}_ {rs}+\mathcal{B}_{s1})+\mathcal{B}_{s2}, \tag{3}\] where \(\mathcal{W}\) and \(\mathcal{B}\) are the trainable weights and bias, \(\mathcal{Y}_{s}\in\mathbb{R}^{V\times C_{2}}\). However, the current embedding is still a rough representation, as the features lack explicit interactions within points or frames. While the feature extraction operation incorporates significant spatiotemporal interactions, these interactions often become intertwined. Hence, it remains crucial to address the interaction of individual spatial and temporal embeddings. Here we use an \(l_{t}\)-layer self-attention network to construct the self-correlation information extraction process that refines the spatial and temporal representations, as shown in Figure 3. The transformer architecture used in this method has two main components: self-attention and feed-forward modules.

Figure 2: Our model benefits from three innovations: a dual-path encoder for distinct spatiotemporal information decoupling; a bespoke cross-domain contrastive loss promoting the information interaction; a structurally-constrained masking strategy for efficient data augmentation. 
For instance, we obtain \(\mathbf{z_{s}}\in\mathbb{R}^{C_{2}}\) as follows: \[\hat{\mathcal{Z}}_{si}=\mathrm{SoftMax}\left[\frac{\left(\mathrm{F}_{q}( \mathcal{Y}_{s})\right)_{i}\cdot\left(\mathrm{F}_{k}(\mathcal{Y}_{s})\right)_{ i}^{T}}{\sqrt{d_{i}}}\right]\cdot\left(\mathrm{F}_{v}(\mathcal{Y}_{s})\right)_{i},\] \[\hat{\mathcal{Z}}_{s}=\mathrm{WM}[\hat{\mathcal{Z}}_{s1},..,\hat{\mathcal{Z}}_{ si},...,\hat{\mathcal{Z}}_{sh}]+\mathcal{Y}_{s},\quad i\in[0,h],\] \[\mathbf{z}_{s}=\mathrm{MaxPooling}[\mathrm{F}_{c}^{u}(\mathrm{FFN}(\mathrm{ LN}(\hat{\mathcal{Z}}_{s}))+\hat{\mathcal{Z}}_{s})], \tag{4}\] where \(\mathrm{F}\) represents feature projection, implemented by a fully connected layer, with \([\dots]\) denoting the concatenation operation, and \(h\) signifying the number of heads. \(\mathbf{z_{t}}\in\mathbb{R}^{C_{2}}\) can also be obtained by similar operations. ### Cross-domain Contrastive Loss With the decoupled spatial and temporal representations, as shown in Figure 2, we first obtain the final representations by: \[\mathbf{q}_{s}=\mathrm{F}_{s}(\mathbf{z}_{s}),\quad\mathbf{q}_{t}=\mathrm{F}_{ t}(\mathbf{z}_{t}), \tag{5}\] where \(\mathrm{F}_{s}\), \(\mathrm{F}_{t}\) are the corresponding projection functions, which can be defined by two fully connected layers, similar to Eq. (3). As we discussed earlier, there is an obvious gap between the spatial and temporal domains, for which we introduce a global perspective \(\mathbf{q}_{g}\) compatible with both as an intermediary for contrasts: \[\mathbf{q}_{g}=\mathrm{F}_{g}[\mathbf{z}_{t},\mathbf{z}_{s}], \tag{6}\] where \(\mathrm{F}_{g}\) is the corresponding projection function. The outputs (\(\mathbf{k}_{s}\), \(\mathbf{k}_{t}\), \(\mathbf{k}_{g}\)) of the corresponding key encoder can also be obtained by a similar process. Based on these candidate features, we define a new cross-domain loss. 
The core of our design lies in anchoring the global representation and building its association with other representations obtained by another encoder. The loss function is defined as: \[\mathcal{L}_{\theta,\xi}\triangleq\lambda_{1}\cdot\mathcal{L}( \mathbf{q}_{g},\mathbf{k}_{s})+\lambda_{2}\cdot\mathcal{L}(\mathbf{q}_{g}, \mathbf{k}_{t})+ \tag{7}\] \[\lambda_{3}\cdot\mathcal{L}(\mathbf{q}_{s},\mathbf{k}_{g})+\lambda _{4}\cdot\mathcal{L}(\mathbf{q}_{t},\mathbf{k}_{g}),\] where the \(\lambda_{i}\) are the mixing weights of the averaging operation. Specifically, for any given contrast pair \(\mathbf{u}\) and \(\mathbf{v}\), \(\mathcal{L}(\mathbf{u},\mathbf{v})\) evaluates the correlation between \(\mathbf{u}\) and \(\mathbf{v}\). The objective is to minimise the distance between positive pairs from the query and key encoders, while maximising the distance from the other features. To achieve this, we employ the contrastive loss based on InfoNCE [1] as follows: \[\mathcal{L}(\mathbf{u},\mathbf{v})=-\log\frac{\mathrm{h}(\mathbf{u},\mathbf{v })}{\mathrm{h}(\mathbf{u},\mathbf{v})+\sum_{\mathbf{m}\in M}\mathrm{h}( \mathbf{u},\mathbf{m})}, \tag{8}\] where \(\mathrm{h}(\mathbf{u},\mathbf{v})=\exp(\mathbf{u}\cdot\mathbf{v}/\tau)\) is the exponential similarity measurement. We denote the first-in-first-out queue of the previously extracted features, containing \(l_{m}\) negative samples, by \(M\). ### Data Augmentation By imposing structural constraints, our approach applies the masking operation within a local region around the randomly selected joints or frames, instead of relying only on isolated points or frames. In this way, we substantially eliminate explicit local contextual associations, and force the encoders to model robust contextual relationships through interactive contrastive learning. 
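Before turning to the augmentation details, the loss of Eqs. (7) and (8) and the momentum update of Eq. (1) can be sketched in plain Python; the helper names, toy vectors, and equal weights (0.25 each) below are illustrative assumptions, not the paper's code:

```python
import math

# Sketch of the InfoNCE term h(u, v) = exp(u.v / tau) over a queue M of
# negatives (Eq. 8), its weighted cross-domain combination (Eq. 7), and
# the per-parameter momentum update of the key encoder (Eq. 1).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def h(u, v, tau=0.2):
    return math.exp(dot(u, v) / tau)

def info_nce(u, v, negatives, tau=0.2):
    pos = h(u, v, tau)
    return -math.log(pos / (pos + sum(h(u, m, tau) for m in negatives)))

def cross_domain_loss(q_g, q_s, q_t, k_g, k_s, k_t, M, lams=(0.25,) * 4):
    # global anchor vs. spatial/temporal keys, and vice versa
    terms = [info_nce(q_g, k_s, M), info_nce(q_g, k_t, M),
             info_nce(q_s, k_g, M), info_nce(q_t, k_g, M)]
    return sum(l * t for l, t in zip(lams, terms))

def momentum_update(xi, theta, m=0.999):
    # Eq. (1): key-encoder parameters track the encoder by EMA
    return [x * m + t * (1 - m) for x, t in zip(xi, theta)]

q = [1.0, 0.0]          # toy query/key feature
negs = [[0.0, 1.0]]     # toy negative queue
print(round(cross_domain_loss(q, q, q, q, q, q, negs), 4))
```

With an identical positive pair and an orthogonal negative, the loss is small but non-zero, which matches the intuition that the anchor is pulled toward its own keys and pushed away from the queue.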
Structurally Guided Spatial Masking: Considering the physical structure of the skeleton, when a certain joint is selected for masking, we simultaneously mask the points in its adjacent area. Let us represent the adjacency relationship using the matrix \(\mathbf{P}\): \(\mathbf{P}_{ij}=1\) if joints \(i\) and \(j\) are connected, otherwise \(\mathbf{P}_{ij}=0\). We denote \(\mathbf{D}=\mathbf{P}^{n}\), where \(n\) is the exponent. The element \(\mathbf{D}_{ij}\) in \(\mathbf{D}\) represents the number of paths that can be taken to reach node \(j\) from node \(i\) by walking \(n\) steps. Note that reversal and looping are allowed. To impose a structural constraint, when node \(i\) is selected, we perform the same augmentation operation on all nodes \(j\) for which \(\mathbf{D}_{ij}\neq 0\). The only undesirable artefact of this operation is that it may give rise to a variable number of candidate joints. To avoid this, for several randomly selected nodes, the actual augmentation is applied only to a fixed number (\(k\)) of points exhibiting the highest overall response on \(\mathbf{D}\). Cube-based Temporal Masking: The sequence follows a linear relationship in time. To avoid information leakage between adjacent frames [12], we construct a cube, defined by a selected segment and its adjacent frames. Specifically, we start by dividing the input sequence into \(s\) cubes of equal length. Next, we randomly select \(r\) cubes as the candidates for masking. We denote the data augmentation candidates as \(\mathcal{T}\).

Figure 3: The dual-path decoupling encoder that provides clean spatial and temporal representations of a skeleton sequence. 
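The two masking schemes can be sketched as follows, under toy assumptions (a four-joint chain skeleton, hypothetical helper names); this illustrates only the selection logic, not the authors' implementation:

```python
import random

# Sketch of the structural masking: D = P^n counts n-step walks, so the
# joints with D[seed][j] > 0 form the seed's maskable neighbourhood, and
# the sequence is split into s equal "cubes" of which r are masked.

def mat_power(P, n):
    size = len(P)
    D = [row[:] for row in P]
    for _ in range(n - 1):
        D = [[sum(D[i][k] * P[k][j] for k in range(size))
              for j in range(size)] for i in range(size)]
    return D

def spatial_mask_joints(P, seed_joint, n=2, k=3):
    D = mat_power(P, n)
    # joints reachable from the seed in n steps, strongest response first
    reachable = [(D[seed_joint][j], j)
                 for j in range(len(P)) if D[seed_joint][j] > 0]
    reachable.sort(reverse=True)
    return sorted(j for _, j in reachable[:k])

def temporal_mask_frames(T, s, r, rng):
    cube = T // s
    starts = rng.sample(range(s), r)  # pick r of the s cubes
    return sorted(f for c in starts for f in range(c * cube, (c + 1) * cube))

# 4-joint chain 0-1-2-3; from joint 0, two-step walks reach joints 0 and 2
P = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(spatial_mask_joints(P, 0, n=2, k=3))  # prints [0, 2]
```

Note that the seed joint itself stays in the masked set because an \(n\)-step walk may return to its start, which is consistent with the "reversal and looping are allowed" remark above.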
Given a skeleton sequence \(\mathcal{X}\), the augmented view is obtained by: \[\mathcal{X}^{a}\triangleq\mathrm{t}_{n}(\mathcal{X}^{a}_{n},p_{n}),\;\dots,\;\mathcal{X}^{a}_{2}\triangleq\mathrm{t}_{1}(\mathcal{X}^{a}_{1},p_{1}), \tag{9}\] where \(\mathcal{X}^{a}_{1}=\mathcal{X}\), \(t_{1},\dots,t_{n}\sim\mathcal{T}\), and if \(p=\mathrm{False}\), \(\mathrm{t}\) degenerates into an identity map. ## Experiments ### Experimental Settings Datasets: We evaluate the proposed method on four benchmarking datasets, NTU-RGB+D (60&120) [2] and PKU-MMD (I&II) [13]. Implementation Details: For the input data, 64 frames are randomly selected for training and evaluation. We perform data augmentation operations, including rotate, flip and shear, as well as the proposed structural spatial masking and temporal masking, on the selected sequence. Each operation has a \(50\%\) chance of being executed. For masking, we set \(n=2\), \(k=8\), \(s=16\), \(r=6\). For the encoder, we refer to MoCo [1] and build a query encoder and the corresponding key encoder. The two encoders have exactly the same structure, as shown in Figure 3. For the feature extractor, we borrow the structure from CTR-GCN [1] as the basic operation. For network optimisation, we set the queue length of \(M\) to 8192 (except 2048 for PKU-MMD I), the MoCo momentum to 0.999, and the softmax temperature to 0.2. More details of the datasets and implementation are presented in the supplementary materials. ### Comparison with The SOTA Methods To evaluate the merits of the proposed SCD-Net, we construct multiple downstream tasks, including action recognition, action retrieval, transfer learning and semi-supervised learning. Action Recognition: Here we adopt the linear evaluation method, which involves fixing the pre-trained parameters and training only a fully connected layer for label prediction. Table 1 presents a comparison of our approach with other SOTA methods on several popular datasets. 
The results demonstrate that our method outperforms all the existing approaches by a large margin. Specifically, we achieve 5.5% and 3.1% improvements over the previous best method on NTU-60 x-sub and x-view, respectively. On NTU-120, our approach surpasses the previous SOTA by 4.1% and 5.6% on x-sub and x-set, respectively. Again, for the PKU-MMD dataset, SCD-Net achieves 91.9% on V1 and 54.0% on V2, which are much higher than the existing SOTA results. Action Retrieval: Referring to [14], we use the KNeighbors classifier [10] for action retrieval while keeping all the pre-trained parameters fixed. As reported in Table 2, our SCD-Net achieves promising results on the NTU-60's x-sub and x-view datasets, with the accuracy of 76.2% and 86.8%, respectively. Additionally, on the NTU-120's x-sub and x-set datasets, our method attains \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Encoder**} & \multicolumn{2}{c}{**NTU-60**} & \multicolumn{2}{c}{**NTU-120**} & PKU-MMD I & PKU-MMD II \\ & & x-sub & x-view & x-sub & x-setup & x-sub & x-sub \\ \hline \multicolumn{7}{c}{_Encoder-decoder_} \\ LongT GAN [1] (AAAI18) & GRU & 52.1 & 56.4 & - & - & 67.7 & 26.5 \\ EnGAN-PoseRNN [1] (WACV19) & LSTM & 68.6 & 77.8 & - & - & - & - \\ H-Transformer [1] (ICME21) & Transformer & 69.3 & 72.8 & - & - & - & - \\ SeBiReNet [13] (ECCV20) & GRU & - & 79.7 & - & - & - & - \\ Colorization [12] (ICCV21) & GCN & 75.2 & 83.1 & - & - & - & - \\ GL-Transformer [1] (ECCV22) & Transformer & 76.3 & 83.8 & 66.0 & 68.7 & - & - \\ \hline \multicolumn{7}{c}{_Hybrid learning_} \\ MS\({}^{2}\)L [13] (ACMMM21) & GRU & 52.6 & - & - & - & 64.9 & 27.6 \\ PCRP [1] (TMM21) & GRU & 54.9 & 63.4 & 43.0 & 44.6 & - & - \\ \hline \multicolumn{7}{c}{_Contrastive-learning_} \\ CrosSCLR [1] (CVPR21) & GCN & 72.9 & 79.9 & - & - & 84.9\({}^{*}\) & 27.6\({}^{*}\) \\ AimCLR [1] (AAAI22) & GCN & 74.3 & 79.7 & 63.4 & 63.4 & 87.8\({}^{*}\) & 38.5\({}^{*}\) \\ ISC [14] 
(ACMMM21) & GRU\&CNN\&GCN & 76.3 & 85.2 & 67.1 & 67.9 & 80.9 & 36.0 \\ HYSP [15] (Franco et al. 2023) (ICLR23) & GCN & 78.2 & 82.6 & 61.8 & 64.6 & 83.8 & - \\ SkeAttnCLR [13] (IJCAI23) & GCN & 80.3 & 86.1 & 66.3 & 74.5 & 87.3 & 52.9 \\ ActCLR [13] (CVPR23) & GCN & 80.9 & 86.7 & 69.0 & 70.5 & - & - \\ HiCo-Transformer [1] (AAAI23) & Transformer & 81.1 & 88.6 & 72.8 & 74.1 & 89.3 & 49.4 \\ \hline SCD-Net (Ours) & GCN\&Transformer & **86.6** & **91.7** & **76.9** & **80.1** & **91.9** & **54.0** \\ \hline \hline \end{tabular} \end{table} Table 1: A comparison of the proposed method with the mainstream methods in action recognition. The blue font indicates the previous SOTA. the accuracy of 59.8% and 65.7%, surpassing all the existing methods by a significant margin. Transfer Learning: For transfer learning, following Dong et al. (2023), we apply the knowledge representation learned from one domain to another domain. Specifically, we load the pre-trained parameters from the PKU-MMD I and NTU-60 datasets respectively, and fine-tune the model on the PKU-MMD II dataset, following the cross-subject evaluation protocol. The results presented in Table 3 demonstrate that our SCD-Net brings a performance improvement of 9.3% and 11.2%, as compared with the current SOTA results. Semi-supervised Learning: For semi-supervised learning, we first load the pre-trained parameters and then fine-tune the entire network on a partially labelled training set. In our experiment, we randomly select a limited number of labelled samples from the NTU-60 dataset for further training. The results in Table 4 show that even when only 1% of the labels are available, our method achieves the accuracy of 69.1% and 66.8% on x-sub and x-view, respectively. With 10% of the labelled data available, the performance of our model is further improved to 82.2% and 85.8%. ### Ablation Study In this part, we verify all the innovative components of the proposed SCD-Net. 
All the experimental results are focused on the action recognition task using cross-subject evaluation on the NTU-60 dataset. The Decoupling Encoder: The primary role of our novel encoder is to extract crucial spatial and temporal representations. In Table 5, when we discard the feature extractor, the performance drops substantially. This shows that extracting a completely isolated information flow is not feasible for the current task, which is in line with our expectation. It is worth noting that using non-shared feature extractors for the two branches leads to better performance than using a shared one. When we attempt to discard the decoupling module, compared with the default setting, the accuracy decreases from 86.6% to 63.7%, as the output is impacted by the spatiotemporal entanglement. This situation improves after converting the spatiotemporal representations into temporal and spatial domain-specific embeddings, resulting in an accuracy of 84.0%. However, it is still inferior to the design with the refinement module. This is because refinement provides powerful sequence modelling capabilities, thereby refining the current rough representations. Encoder Parameters: In Table 6, we investigate the impact of the parameter settings on the model performance. Overall, the optimal performance is achieved when we use a 3-layer GCN block, 64 as the number of output channels, and set the transformer with 1 layer, 8 heads, and 2048 output channels. The results also demonstrate that changing the parameters does not significantly affect the model's performance, indicating the stability of our approach. Additionally, increasing the network size does not necessarily improve the performance, suggesting that the performance is not dependent on the network size. Loss Function: We report the results of different configurations of the loss function in Table 7. We can see that the interactive loss performs better than the traditional instance loss, leading to performance boosts of 0.7% and 1.6%. 
When using all three granularities jointly, the model achieves optimal performance. This is because there is a significant gap between the nature of the information conveyed by the spatial and temporal features, although they describe the same action. The spatiotemporal information provides more comprehensive representations, which bridges this gap and enhances the discriminative ability. It is worth noting that the \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**x-sub**} & \multicolumn{2}{c}{**x-view**} \\ & 1\% & 10\% & 1\% & 10\% \\ \hline LongT GAN Zheng et al. (2018) & 35.2 & 62.0 & - & - \\ MS2L Lin et al. (2020) & 33.1 & 65.2 & - & - \\ ASSL Si et al. (2020) & - & 64.3 & - & 69.8 \\ ISC Thoker et al. (2021) & 35.7 & 65.9 & 38.1 & 72.5 \\ MCC Su et al. (2021) & - & 60.8 & - & 65.8 \\ Colorization Yang et al. (2021) & 48.3 & 71.7 & 52.5 & 78.9 \\ CrosSCLR Li et al. (2021) & - & 67.6 & - & 73.5 \\ HI-TRs Li et al. (2021) & - & 70.7 & - & 74.8 \\ GL-Transformer Kim et al. (2022) & - & 68.6 & - & 74.9 \\ HiCo-Transformer Dong et al. (2023) & 54.4 & 73.0 & 54.8 & 78.3 \\ \hline SCD-Net (Ours) & **69.1** & **82.2** & **66.8** & **85.8** \\ \hline \hline \end{tabular} \end{table} Table 4: A comparison with the mainstream methods in semi-supervised learning. The blue font indicates the previous SOTA. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**NTU-60**} & \multicolumn{2}{c}{**NTU-120**} \\ & x-sub & x-view & x-sub & x-setup \\ \hline LongT GAN Zheng et al. (2018) & 52.1 & 56.4 & - & - \\ P\&C Su et al. (2020) & 50.7 & 76.3 & 39.5 & 41.8 \\ AimCLR Guo et al. (2022) & 62.0 & - & - & - \\ ISC Thoker et al. (2021) & 62.5 & 82.6 & 50.6 & 52.3 \\ SkeAttnCLR Hua et al. (2023) & 69.4 & 76.8 & 46.7 & 58.0 \\ HiCo-Transformer Dong et al. 
(2023) & 68.3 & 84.8 & 56.6 & 59.1 \\ \hline SCD-Net (Ours) & **76.2** & **86.8** & **59.8** & **65.7** \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison with the mainstream methods in action retrieval. The blue font indicates the previous SOTA. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Transfer to PKU-MMD II**} \\ & PKU-MMD I & NTU-60 \\ \hline LongT GAN Zheng et al. (2018) & 43.6 & 44.8 \\ MS2L Lin et al. (2020) & 44.1 & 45.8 \\ ISC Thoker et al. (2021) & 45.1 & 45.9 \\ HiCo-Transformer Dong et al. (2023) & 53.4 & 56.3 \\ \hline SCD-Net (Ours) & **62.7** & **67.5** \\ \hline \hline \end{tabular} \end{table} Table 3: A comparison with the mainstream methods in transfer learning. The blue font indicates the previous SOTA. use of both loss functions jointly does not improve the performance further. This could be attributed to the fact that the supervisory information across the information flows already provides adequate guidance, so that further guidance mechanisms are unnecessary. **Data Augmentation** Here we investigate the impact of different data augmentation strategies on the model performance. The results are reported in Table 8. Without any augmentation, the performance drops by more than 16% compared to the default setting. When using only the conventional augmentation methods, including rotation, flipping, and shearing, the model achieves an accuracy of 85.4%. After introducing the proposed structurally guided spatiotemporal augmentation, the performance improves by a further 1.2%. Replacing it with random masking still leaves the performance below the default setting. It is worth noting that discarding either the spatial or the temporal masking leads to a performance degradation. Also, when only masking is used, the performance of the model is mediocre, far worse than using only the conventional data augmentation methods. 
This is because our masking acts as a compensation for, rather than a replacement of, the conventional augmentations. Proper masking further improves the diversity of the input data and encourages the model to learn more robust spatiotemporal context associations. When combining all these techniques, the performance of the model is the best. **Visualisation of the Decoupled Clues** As shown in Figure 4, we use t-SNE [22] to analyse the decoupled clues from SCD-Net. We select three groups of data with different emphases for comparison. The first row represents the spatial clue and the second one the temporal clue. From (a) and (d), 'throw' and 'clapping' are highly separable in both the spatial and the temporal domain. From (b) and (e), 'brush teeth' vs 'brush hair' are more separable in the spatial domain, because the most significant difference lies in the object involved. According to (c) and (f), 'drop' and 'pick up' are more separable in the temporal domain, while showing a certain entanglement in the spatial domain, since one action is the temporal reverse of the other. More importantly, the results demonstrate that our encoder successfully decouples the corresponding features, so that the different cues extracted from the same samples retain their own specificity. ## Conclusion In this paper, we presented a new contrastive learning framework for unsupervised skeleton-based action recognition. The key innovation is the design of a spatiotemporal clue extraction mechanism. In the proposed method, we first used a spatiotemporal modelling network to encode an action sequence, followed by a decoupling module that yields pure spatial and temporal representations. A cross-domain loss was proposed to guide the learning of the discriminative information conveyed by the different representations. The training of the system was facilitated by a novel data augmentation method tailored for the proposed unsupervised learning framework. 
This method imposes structural constraints on action data perturbations to enhance the efficacy of contextual modelling and to increase the diversity of the data. Extensive experimental results obtained on widely used benchmarking datasets demonstrated the merits of the proposed method, which defines a new SOTA in this area. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{2}{c}{**GCN**} & \multicolumn{3}{c}{**Trans**} & \multicolumn{2}{c}{**Accuracy**} \\ Layer & Channel & Layer & Head & Channel & Top-1 & Top-5 \\ \hline 2 & 64 & 1 & 8 & 2048 & 85.4 & 97.2 \\ **3** & **64** & **1** & **8** & **2048** & **86.6** & **97.6** \\ 4 & 64 & 1 & 8 & 2048 & 86.6 & 97.5 \\ 3 & 32 & 1 & 8 & 2048 & 86.4 & 97.4 \\ 3 & 128 & 1 & 8 & 2048 & 85.8 & 97.5 \\ \hline 3 & 64 & 2 & 8 & 2048 & 86.4 & 97.7 \\ 3 & 64 & 1 & 4 & 2048 & 86.2 & 97.4 \\ 3 & 64 & 1 & 16 & 2048 & 85.7 & 97.3 \\ 3 & 64 & 1 & 8 & 1024 & 85.7 & 97.5 \\ 3 & 64 & 1 & 8 & 4096 & 85.1 & 97.2 \\ \hline \hline \end{tabular} \end{table} Table 6: The details of the encoder. The bold part in black indicates the best performance. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Granularity**} & \multirow{2}{*}{**Loss Type**} & \multicolumn{2}{c}{**Accuracy**} \\ & & Top-1 & Top-5 \\ \hline S-T & Instance Loss & 84.6 & 97.1 \\ S-T & Interactive Loss & 85.3 & 97.2 \\ S-T-G & Instance Loss & 85.0 & 97.5 \\ S-T-G & Interactive Loss & **86.6** & **97.6** \\ S-T-G & Interactive Loss \& Instance Loss & 86.5 & 97.6 \\ \hline \hline \end{tabular} \end{table} Table 7: A comparison of different loss functions. ’S’, ’T’, ’G’ represent Spatial, Temporal and Global representations. Figure 4: Visualisation of decoupled clues. Spatial clue: (a) ’throw’ vs ’clapping’; (b) ’brush teeth’ vs ’brush hair’; (c) ’drop’ vs ’pick up’. 
Temporal clue: (d) ’throw’ vs ’clapping’; (e) ’brush teeth’ vs ’brush hair’; (f) ’drop’ vs ’pick up’. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Data Augmentation Strategy**} & \multicolumn{2}{c}{**Accuracy**} \\ & Top-1 & Top-5 \\ \hline None & 70.1 & 91.1 \\ Conventional & 85.4 & 97.4 \\ Conventional, Spatial Mask, Temporal Mask & **86.6** & **97.6** \\ Conventional, Random SpatioTemporal Mask & 85.6 & 97.4 \\ Conventional, Spatial Mask & 86.2 & 97.5 \\ Conventional, Temporal Mask & 83.7 & 96.7 \\ Spatial Mask, Temporal Mask & 76.0 & 93.6 \\ \hline \hline \end{tabular} \end{table} Table 8: A comparison of different data augmentation strategies. ## Acknowledgements This work was supported in part by the National Natural Science Foundation of China (Grant No. U1836218, 62020106012, 62106089, 61672265), the 111 Project of Ministry of Education of China (Grant No. B12018), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant No. KYCX22_2299), and the EPSRC Grants (EP/V002856/1, EP/T022205/1).
2309.14939
Factorization of covariant Feynman graphs for the effective action
We prove a neat factorization property of Feynman graphs in covariant perturbation theory. The contribution of the graph to the effective action is written as a product of a massless scalar momentum integral that only depends on the basic graph topology, and a background-field dependent piece that contains all the information of spin, gauge representations, masses etc. We give a closed expression for the momentum integral in terms of four graph polynomials whose properties we derive in some detail. Our results can also be useful for standard (non-covariant) perturbation theory.
Gero von Gersdorff
2023-09-26T13:52:26Z
http://arxiv.org/abs/2309.14939v1
# Factorization of covariant Feynman graphs for the effective action ###### Abstract We prove a neat factorization property of Feynman graphs in covariant perturbation theory. The contribution of the graph to the effective action is written as a product of a massless scalar momentum integral that only depends on the basic graph topology, and a background-field dependent piece that contains all the information of spin, gauge representations, masses etc. We give a closed expression for the momentum integral in terms of four graph polynomials whose properties we derive in some detail. Our results can also be useful for standard (non-covariant) perturbation theory. ###### Contents * 1 Introduction * 2 The gauge-invariant function \(\Gamma\) * 3 Factorization of Feynman graphs * 4 The universal momentum integral * 4.1 The fundamental spaces of the incidence matrix * 4.2 Loop integrations * 4.3 Simple examples at tree-level and one-loop * 5 Characterization of \(\Delta\), \(\mathbb{Q}\), \(\mathbb{R}\) and \(\mathbb{U}\) * 5.1 Graph theory basics * 5.2 Explicit expressions in terms of subgraphs * 5.2.1 The Kirchhoff polynomial \(\Delta\) * 5.2.2 The matrix \(\mathbb{Q}\) * 5.2.3 The matrix \(\mathbb{R}\) * 5.2.4 The matrix \(\mathbb{U}\) * 6 Symmetries, symmetry factors, and complex conjugation * 7 Conclusions * A Cycle spaces * B Moore-Penrose pseudo inverses ## 1 Introduction Covariant perturbation theory for the effective action has a long history [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. In recent years there has been some revived interest in the subject [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31], mostly due to its significance within the Standard Model effective theory. With some exceptions [32; 33; 34; 35], covariant perturbation theory has been limited to the one-loop case, where the results can be written as functional traces. 
Recently, we have proposed a new covariant formalism based on the heat kernel [36] which is applicable at any loop order [37]. In this approach, the effective action is calculated in the background field (BF) method in a manifestly covariant way. Recall that in the BF approach, one separates all fields into backgrounds and fluctuations. One then writes down all Feynman graphs to the desired loop order, where the edges of the graphs correspond to BF dependent propagators of fluctuations, and the vertices to BF dependent couplings. Notice that these graphs are connected, one particle irreducible, and have no external lines. In this work we will further develop this idea and show that the contribution to the effective Lagrangian from a given Feynman graph \(G\) can be factorized as follows \[{\cal L}^{G}_{\rm eff}(x)=(-1)^{N_{\rm CFL}}\,{\rm SF}\int d^{P}\tau\,\left[I_ {G_{0}}(\tau_{i};i\partial_{x_{n}};z_{i})\,\Gamma_{G}(\tau_{i};x_{n};-i\vec{ \partial}_{z_{i}})\right]_{x_{n}=x,\ z_{i}=0}. \tag{1}\] We will always use an index \(i\) to number the edges of the graph (\(1\leq i\leq P\)), and the index \(n\) over its vertices (\(1\leq n\leq V\)). The \(\tau_{i}\) are Schwinger parameters, and the \(x_{n}\) spacetime points associated to the vertices of the graph. The \(z_{i}\) are position variables dual to propagator momenta (in analogy to the vertex positions \(x_{n}\) being dual to vertex momenta). Working with the \(z_{i}\) variables greatly simplifies the treatment of fermions and derivatives of fields. SF denotes the symmetry factor of the graph, and \(N_{\rm CFL}\) is the number of closed fermion loops. The first factor appearing in eq. (1), the function \(I_{G_{0}}(\tau_{i};p_{n};z_{i})\), is essentially the result of the momentum integral of the graph which we will evaluate in closed form. 
The second one, a gauge-invariant \(V\)-point function \(\Gamma_{G}(\tau_{i};x_{n};k_{i})\), depends on the background fields and can be readily computed in terms of heat kernel coefficients and field-dependent couplings. We will see that the momentum integral \(I_{G_{0}}\) only depends on the topology of the graph \(G\). Feynman graphs constitute so-called _labeled_ graphs, which are graphs that carry an identity (or label) at each edge and each vertex. The label of an edge is the type of fluctuation field propagating through it, and the label of a vertex is the corresponding coupling.1 Notice that several edges (vertices) can carry the same label. While these labels do enter the calculation of \(\Gamma_{G}\), the calculation of \(I_{G_{0}}\) only depends on the reduced graph \(G_{0}\), which is obtained from the full graph by removing all the labels. In particular, all labeling information (mass, spin, and all other quantum numbers such as gauge representations, flavor, extra derivatives, etc.) can be completely ignored. An example is given in figure 1. Footnote 1: For Feynman graphs the vertex labels are uniquely fixed by the labels of the connecting edges and vice versa. This paper is organized as follows. We will give in sec. 2 a brief review of the construction of \(\Gamma\), and also describe some of its properties glossed over in [37]. Section 3 is devoted to a formal derivation of eq. (1) and some comments on the relevance of our findings to standard (non-covariant) perturbation theory. We devote section 4 to the explicit calculation of \(I\) (that is, we will perform the momentum integration in closed form), expressing the result entirely in terms of graph-theoretic quantities in sec. 5. This is the most technical section, and we have moved some of the graph theory and linear algebra background into two appendices. In sec. 6 we discuss how symmetries of the graph act on the factors \(I\) and \(\Gamma\), and in sec. 7 we give our conclusions. 
## 2 The gauge-invariant function \(\Gamma\) We define the \(V\)-point function \(\Gamma_{G}\) for a given graph \(G\) as follows. * For each edge of \(G\), write the expressions given in tables 1 and 2. We refer to these expressions somewhat loosely as _propagators_, even though they are only part of the full propagators. The basic propagators (without derivatives) are given by the expressions in table 1. [Table 1: basic propagators. Scalar edge: \(e^{-\tau_{i}m_{i}^{2}}\left\{B(\tau_{i},X_{s};x_{n},x_{m})\right\}^{\alpha}_{\ \beta}\); fermion edge: \(e^{-\tau_{i}m_{i}^{2}}(i\not{D}_{n}+\not{k}_{i}+m)\left\{B(\tau_{i};X_{f};x_{n},x_{m})\right\}^{\alpha}_{\ \beta}\); vector edge: \(\left\{B(\tau_{i};X_{v};x_{n},x_{m})\right\}^{\mu,\,a}_{\ \nu,b}\).] Covariant derivatives of fields are indicated by a star and are subject to the rules given in table 2. Two or more derivatives (on either end of the propagator) are included in an obvious way, and so are derivatives on fermion propagators (the latter do not appear in renormalizable theories). * For each vertex, write the appropriate field-dependent coupling \(C_{n}(x_{n})\), obtained from differentiation of the interaction \({\cal L}_{\rm int}\) with respect to the fluctuations.2 We stress that derivatives only appear in the propagators, never in the couplings. Footnote 2: Notice that we define the contribution of a vertex to \(\Gamma\) as \(C\) and not \(iC\). The remaining factor of \(i^{V}\) will be included in the definition of \(I\) instead, see eq. (10). Hence, \(\Gamma\) is the product of all of these factors, \[\Gamma_{G}(\tau_{i};x_{n};k_{i})=\prod_{i=1}^{P}A_{i}\prod_{n=1}^{V}C_{n}\,, \tag{1}\] where \(A_{i}\) are the expressions in tables 1 and 2. Gauge and Lorentz indices have to be contracted as dictated by the graph; the result is a total gauge and Lorentz singlet. Notice that there is no integration hidden in eq. (1). A comment on the arrows is in order. 
Edges corresponding to propagators involving the momentum \(k\) need an orientation. For complex fields, we conventionally take this to be the direction of particle number flow. For real fields, this direction is arbitrary but needs to be used consistently in the calculation of \(\Gamma\) and \(I\). Reversing the direction of edge \(i\) in effect corresponds to \(z_{i}\to-z_{i}\) in \(I\) and \(k_{i}\to-k_{i}\) in \(\Gamma\), yielding the same result in eq. (1). The reversal of the arrow of real edges thus constitutes an isomorphism of graphs (see section 6). \begin{table} \begin{tabular}{c c c} \(\alpha\) & \(\beta\) & \(x_{m}\)\(=e^{-\tau_{i}m_{i}^{2}}(D_{n,\rho}-ik_{i,\rho})\left\{B(\tau_{i};X_{s};x_{n},x_{m})\right\}^{\alpha}_{\beta}\) \\ \(\rho\) & \(k_{i}\) & \\ \(\alpha\) & \(\beta\) & \(x_{m}\)\(=e^{-\tau_{i}m_{i}^{2}}(D_{m,\rho}+ik_{i,\rho})\left\{B(\tau_{i};X_{s};x_{n},x_{m})\right\}^{\alpha}_{\beta}\) \\ \(k_{i}\) & \(\rho\) & \\ \(\mu\), \(a\) & \(\nu\), \(b\) & \(x_{m}\)\(=-(D_{n,\rho}-ik_{i,\rho})\left\{B(\tau_{i};X_{v};x_{n},x_{m})\right\}^{\mu,\,a}_{\nu,b}\) \\ \(\rho\) & \(k_{i}\) & \\ \(\mu\), \(a\) & \(\nu\), \(b\) & \(x_{m}\)\(=-(D_{m,\rho}+ik_{i,\rho})\left\{B(\tau_{i};X_{v};x_{n},x_{m})\right\}^{\mu,\,a}_{\nu,b}\) \\ \end{tabular} \end{table} Table 2: Propagators of derivatives of fields. See eq. (2) for the formal definition of \(B\). The \(B\) function appearing in the Feynman rules is formally defined as the ratio of two functions3 Footnote 3: We use a different convention for \(B\) as opposed to ref. [37]; the correspondence is \(B(it;x,y)=B_{[37]}(t;x,y)\). \[B(it,X;x,y)\equiv\frac{\langle x|e^{-it(D^{2}+X)}|y\rangle}{\langle x|e^{-it\partial^{2}}|y\rangle}\,. \tag{2}\] It is a matrix-valued function analytic in \(t\) near \(t=0\) (the non-analyticities cancel between numerator and denominator). 
Here, \(X\) is a BF-dependent mass that always features a universal piece \(X_{\rm spin}=-S^{\mu\nu}F^{a}_{\mu\nu}\mathfrak{t}^{a}\), where \(S^{\mu\nu}\) are the Lorentz generators in the appropriate representation, but may include other contributions. In this work we will not be concerned with the evaluation of \(B\) (the heat kernel expansion), which has been extensively studied elsewhere [2; 3; 4; 7; 36; 38], and a Mathematica notebook has been included in the arXiv version of ref. [37] for an automated calculation of the expansion. In the following we give some useful identities that were omitted in ref. [37]. From the definition eq. (2) we have for real \(\tau\) \[B(\tau,X;x,y)^{\dagger}=B(\tau,X^{\dagger};y,x)\,. \tag{3}\] For scalar, fermion and vector fields one has, respectively, \[X_{s}^{\dagger}=X_{s}\,, \tag{4}\] \[X_{f}^{\dagger}=\gamma_{0}X_{f}\gamma_{0}\,, \tag{5}\] \[X_{v}^{\dagger}=g^{-1}X_{v}g\,, \tag{6}\] and therefore \[B(\tau,X_{s};x,y)^{\dagger}=B(\tau,X_{s};y,x)\,, \tag{7}\] \[B(\tau,X_{f};x,y)^{\dagger}=\gamma_{0}B(\tau,X_{f};y,x)\gamma_{0}\,, \tag{8}\] \[B(\tau,X_{v};x,y)^{\dagger}=g^{-1}B(\tau,X_{v};y,x)g\,. \tag{9}\] For real fields we in addition have that \(X\) and \(B\) are purely real. Further useful identities can be derived for the fermion propagator. Since \[\frac{i}{i\not{D}-m}=(i\not{D}+m)\frac{-i}{\not{D}^{2}-m^{2}}=\frac{-i}{\not{D}^{2}-m^{2}}(i\not{D}+m)\,, \tag{10}\] the fermion propagator has an alternative form due to the identity \[(i\not{D}_{x}+\not{k}+m)B(\tau;X_{f};x,y)=B(\tau;X_{f};x,y)(-i\not{\bar{D}}_{y}+\not{k}+m)\,. \tag{11}\] This immediately gives the Hermiticity property \[\left\{(i\not{D}_{x}+\not{k}+m)B(\tau;X_{f};x,y)\right\}^{\dagger}=\gamma^{0}\left(i\not{D}_{y}+\not{k}+m\right)B(\tau;X_{f};y,x)\,\gamma^{0}\,. \tag{12}\] Eqs. (11) and (12) are valid only under the \(k\) and \(\tau\) integrations. 
## 3 Factorization of Feynman graphs In terms of the \(V\)-point function \(\Gamma_{G}\) defined in the previous section, we get the contribution of the graph \(G\) to the effective action [37] \[S^{G}_{\rm eff}=(-1)^{N_{\rm CFL}}\,{\rm SF}\int d^{P}\!\tau\int\frac{d^{dP}\!k }{(2\pi)^{dP}}\int d^{d}\!x\,\,\hat{I}_{G_{0}}(\tau_{i};x_{n};k_{i})\Gamma_{G} (\tau_{i};x_{n};k_{i})\,, \tag{13}\] Notice the double integration over both propagator momenta and vertex positions. We have defined \[\hat{I}_{G_{0}}(\tau_{i};x_{n};k_{i})\equiv i^{V-1-P}\exp\left(k^{\intercal} \mathbb{T}_{0}k-ix^{\intercal}\mathbb{B}_{G_{0}}k\right)\,. \tag{14}\] The factor of \((-i)^{P}\) originates from the Wick rotation of the Schwinger parameters \(t_{i}=-i\tau_{i}\), the factor of \(i^{V}\) from the vertices (see footnote 2), and the factor of \(i^{-1}\) is due to the fact that a Feynman graph \(G\) actually computes \(iS^{G}_{\rm eff}\). We have furthermore defined \[(\mathbb{T}_{0})_{ij}\equiv\tau_{i}\delta_{ij}\,, \tag{15}\] as well as the so called incidence matrix of the reduced graph \(G_{0}\), \[(\mathbb{B}_{G_{0}})_{ni}=\begin{cases}+1&\text{if edge $i$ enters vertex $n$}\\ -1&\text{if edge $i$ leaves vertex $n$}\\ 0&\text{else}\end{cases} \tag{16}\] which is a \(V\times P\) matrix. We adopt the convention that edges which start and end at the same vertex (self-loops) have a columns of zeros. An example of a graph and its incidence matrix is provided in fig. 2. For the purpose of the following discussion, it is convenient to rewrite eq. (13) in a Dirac notation as \[S^{G}_{\rm eff}=(-1)^{N_{\rm CFL}}\,{\rm SF}\int d^{P}\!\tau\,\left<\hat{I}_{G _{0}}(\tau_{i})|\Gamma_{G}(\tau_{i})\right>\,. \tag{17}\] that is, \[\hat{I}_{G_{0}}(\tau_{i};x_{n};k_{n}) = \langle\hat{I}_{G_{0}}(\tau_{i})|x_{n};k_{i}\rangle\, \tag{11}\] \[\Gamma_{G}(\tau_{i};x_{n};k_{n}) = \langle x_{n};k_{i}|\Gamma_{G}(\tau_{i})\rangle. \tag{12}\] In ref [37] the interior product in eq. 
(10) was computed in the \(|x_{n};k_{i}\rangle\) representation. One of the main results of the present paper is that it is much more advantageous to compute it in the \(|x_{n},z_{i}\rangle\) representation, as the \(z_{i}\) integration turns into a simple derivative, analogous to the \(x_{n}\) integration. This works because \(\Gamma_{G}(\tau_{i};x_{n};k_{i})\) is polynomial in \(k_{i}\). Therefore, Fourier-transforming the latter to the dual \(z_{i}\) variables gives a distribution \[\langle x_{n};z_{i}|\Gamma_{G}(\tau_{i})\rangle=\Gamma_{G}(\tau_{i};x_{n};i\partial_{i})\prod_{i}\delta(z_{i})\,. \tag{13}\] The momentum-space loop integration is now entirely contained in \(\langle\hat{I}_{G_{0}}|x_{n};z_{i}\rangle\), which only depends on the reduced graph \(G_{0}\) and is therefore very universal. We will derive a closed expression for this quantity in sec. 4. To prove eq. (1), consider \(\langle\hat{I}_{G_{0}}(\tau_{i})|p_{n};z_{i}\rangle\), which will contain an overall momentum-conserving delta function, so we parametrize it in terms of a new function \(I_{G_{0}}\) as \[\langle\hat{I}_{G_{0}}(\tau_{i})|p_{n};z_{i}\rangle=(2\pi)^{d}\delta(\sum_{n}p_{n})I_{G_{0}}(\tau_{i};p_{n};z_{i})\,. \tag{14}\] Since we are after the local part of the effective action we expand in the vertex momenta \(p_{n}\). Hence, switching from the \(p_{n}\) to the \(x_{n}\) variables is equally simple: \[\langle\hat{I}_{G_{0}}(\tau_{i})|x_{n};z_{i}\rangle=\int d^{d}x\,I_{G_{0}}(\tau_{i};-i\partial_{n};z_{i})\prod_{n}\delta(x_{n}-x)\,. \tag{15}\] In summary, both the \(x_{n}\) and \(z_{i}\) integrations have turned into simple derivatives, and in particular eq. (10) implies eq. (1). It remains to compute the function \(I_{G_{0}}\); this is done in the next section, and the result can be found in eq. (14). Figure 2: An example of a three-loop graph and its incidence matrix. 
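The mechanism at work here—a polynomial in the momentum under a Gaussian-weighted integral turning into derivatives with respect to the dual variable at \(z=0\)—can be checked numerically in one dimension. The following sketch is purely illustrative and not part of the derivation; the weight \(e^{-\tau k^{2}}\), the test polynomial \(\Gamma(k)=k^{2}\), and the value \(\tau=0.7\) are arbitrary choices:

```python
import numpy as np

tau = 0.7  # arbitrary Schwinger parameter for the check

# Direct side: integrate the polynomial Gamma(k) = k^2 against exp(-tau k^2).
k = np.linspace(-30.0, 30.0, 200001)
dk = k[1] - k[0]
lhs = np.sum(k**2 * np.exp(-tau * k**2)) * dk / (2 * np.pi)

# Dual side: I(z) = int dk/(2 pi) exp(-tau k^2 + i z k) is the Gaussian
# exp(-z^2/(4 tau)) / sqrt(4 pi tau); apply Gamma(-i d/dz) = -(d/dz)^2 at z = 0.
def I(z):
    return np.exp(-z**2 / (4 * tau)) / np.sqrt(4 * np.pi * tau)

h = 1e-4
rhs = -(I(h) - 2 * I(0.0) + I(-h)) / h**2  # central second difference

# Both routes agree: the k-integration of a polynomial factor is equivalent
# to differentiating the universal Gaussian with respect to z at z = 0.
assert abs(lhs - rhs) < 1e-6
```

The same bookkeeping, edge by edge, is what turns the \(z_{i}\) "integrations" of the full graph into the derivative prescription of eq. (1).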
The presented method can be modified to work for standard QFT perturbation theory (without the covariant background field method and the local derivative expansion). A standard multi-loop Feynman graph gives rise to \[i\mathcal{M}(p_{n})=i^{V}\!\int\frac{d^{dP}\!k}{(2\pi)^{dP}}(2\pi)^{dV}\delta(p+\mathbb{B}k)\left[\prod_{i}\frac{i}{k_{i}^{2}-m_{i}^{2}}\right]T_{G}(p_{n};k_{i})\,, \tag{21}\] where \(T_{G}\) is a tensor (polynomial in the propagator momenta \(k_{i}\)), consisting essentially of the nontrivial numerators of the propagators as well as polarization vectors and coupling constants, and \(p_{n}\) are the ingoing momenta from external lines.4 After introduction of Schwinger parameters, we recognize this as Footnote 4: The momenta of internal vertices (not connected to external lines) are simply zero here. \[\mathcal{M}(p_{n})=\int d^{P}\!\tau\!\int\frac{d^{dP}\!k}{(2\pi)^{dP}}\,\langle\hat{I}_{G_{0}}(\tau_{i})|p_{n};k_{i}\rangle\,T_{G}(p_{n};k_{i})e^{-\sum_{i}\tau_{i}m_{i}^{2}}\,. \tag{22}\] Repeating the same reasoning as above gives \[\mathcal{M}(p_{n})=(2\pi)^{d}\delta(\sum_{n}p_{n})\int d^{P}\tau\,\left[I_{G_{0}}(\tau_{i};p_{n};z_{i})T_{G}(p_{n};-i\vec{\partial}_{z_{i}})e^{-\sum_{i}\tau_{i}m_{i}^{2}}\right]_{z_{i}=0}\,, \tag{23}\] where \(I_{G_{0}}\) is given by eq. (16). The formalism thus completely bypasses the usual tensor reduction (e.g. à la Passarino-Veltman). At this point, let us summarize the main differences to the standard approach: * Using Schwinger instead of Feynman parameters renders the momentum integration in \(I_{G_{0}}(\tau_{i};p_{n};z_{i})\) mass-independent. It is noteworthy that the integration over the variable \(\sum_{i}\tau_{i}\) is always trivially possible at the end of the calculation, effectively returning to Feynman parametrization _after_ the loop momentum integration. * Switching from the \(k_{i}\) to the \(z_{i}\) integration in the convolution in eq. (22) results in the explicit factorization eq. 
(23) and renders the loop momentum integration independent of \(T_{G}\). It also sidesteps the parametrization of the propagator momenta \(k_{i}\) in \(T_{G}\) in terms of loop and external momenta. The remaining integration over the Feynman parameters is more difficult compared to the effective action (as one does not expand in external momenta). Powerful mathematical methods have been developed over the years to deal with this problem, see [39] and references therein. ## 4 The universal momentum integral In this section we will perform the loop integrations exactly; the result will be precisely the function \(I_{G_{0}}\) of eq. (1). As shown in sec. 3, this function only depends on the reduced graph \(G_{0}\) (whose information is entirely contained in its incidence matrix). In this section and section 5, "graph" will always refer to the reduced graph, and the subscript \(G_{0}\) is generally omitted. ### 4.1 The fundamental spaces of the incidence matrix We can write momentum conservation of the propagator momenta \(k_{i}\) and incoming vertex momenta \(p_{n}\) as \[p+\mathbb{B}k=0\,. \tag{13}\] The rank of \(\mathbb{B}\) for a connected graph is \(V-1\) (in general \(V-C\) for graphs with \(C\) connected components). Each column has one \(+1\) and one \(-1\),5 which implies that \(\sum_{n}\mathbb{B}_{ni}=0\); this is just overall momentum conservation. Another way of saying this is that \((1,1,\ldots 1)^{\intercal}\) is in the kernel of \(\mathbb{B}^{\intercal}\) (a.k.a. the cokernel of \(\mathbb{B}\)). If the graph is connected, this is the whole cokernel. The kernel of \(\mathbb{B}\) gives the sets of edges whose momenta sum to zero for zero external momenta, that is, precisely the loops. The kernel of \(\mathbb{B}\) is referred to as the _cycle space_ (loop space would be a more physics-based terminology). Some properties of this space are reviewed in app. A. 
The matrix \(\mathbb{B}\) has nullity \(L\) (the number of loops) and the matrix \(\mathbb{B}^{\intercal}\) has nullity \(C\) (the number of connected components). The equality of row and column rank gives Footnote 5: The only exception to this are self-loops, which have a column of all zeroes. \[P-L=V-C \tag{14}\] For a connected graph (\(C=1\)) this is simply \(P-L=V-1\). ### 4.2 Loop integrations It remains to calculate the function \(\hat{I}_{G_{0}}(\tau_{i};p_{n};z_{i})=\langle\hat{I}_{G_{0}}(\tau_{i})|p_{n};z_{i}\rangle\). In the remainder of this section, we suppress the subscript \(G_{0}\). Inserting a complete set of \(|x_{n};k_{i}\rangle\) states, using eq. (12) and performing the trivial \(x_{n}\) integrations one gets \[\hat{I}(\tau_{i};p_{n};z_{i})=i^{V-1-P}\int\frac{d^{dP}\!k}{(2\pi)^{dP}}(2\pi)^{dV}\delta(p+\mathbb{B}k)\exp\left\{k^{\intercal}\mathbb{T}_{0}k+iz^{\intercal}k\right\}\,, \tag{15}\] which features the usual momentum-conserving delta functions for each vertex. The map \(\mathbb{B}\) is not surjective, and, unless \(L=0\) (tree graphs), neither is it injective; hence the relation eq. (13) cannot be inverted (the \(k_{i}\) cannot be written in terms of the \(p_{n}\)). This motivates the introduction of loop momenta \(q_{1}\dots q_{L}\) and the restriction to some set of independent external momenta such as \(p_{1}\dots p_{V-1}\) (i.e., one parametrizes the kernel and image of \(\mathbb{B}\)). This is of course the usual procedure of loop integrations. While we certainly could proceed this way, we opt for a different approach which does not single out any edges or vertices and hence is more symmetric. Besides being more transparent, it also simplifies the treatment of graph isomorphisms and automorphisms, which typically permute vertices and edges. Keeping symmetries manifest in the resulting expressions can be very helpful in many contexts (see e.g. sec. 6). 
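The statements above about the fundamental spaces of \(\mathbb{B}\) are easy to verify numerically. The sketch below uses a hypothetical two-vertex graph with three parallel edges (\(V=2\), \(P=3\), \(C=1\), \(L=2\)); it is a test example of our own choosing, not one of the paper's figures:

```python
import numpy as np

# Incidence matrix of two vertices joined by three parallel edges, all
# oriented from vertex 1 to vertex 2: (B)_{ni} = +1 if edge i enters
# vertex n and -1 if it leaves it.
B = np.array([[-1, -1, -1],
              [+1, +1, +1]])
V, P = B.shape
C = 1          # number of connected components
L = P - V + C  # number of loops, from P - L = V - C

# Each column has one +1 and one -1, so (1, 1, ..., 1) lies in the
# cokernel of B (overall momentum conservation).
assert np.all(B.sum(axis=0) == 0)

# rank B = V - C, hence the nullity (dimension of the cycle space) is L.
rank = np.linalg.matrix_rank(B)
assert rank == V - C
assert P - rank == L

# Explicit cycle-space vectors: edge-momentum routings that close into
# loops are annihilated by B.
for cycle in ([1, -1, 0], [1, 0, -1]):
    assert np.all(B @ np.array(cycle) == 0)
```

Replacing `B` by the incidence matrix of any other connected graph leaves all the assertions intact, which is exactly the content of \(P-L=V-C\).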
The key technical trick that facilitates this manifestly symmetric treatment of vertices and edges is a Gaussian regularization of the delta functions \[\delta(p+\mathbb{B}k)=i^{-V}\left(\epsilon\pi\right)^{-\frac{dV}{2}}\exp\{\epsilon^{-1}(p+\mathbb{B}k)^{\intercal}(p+\mathbb{B}k)\}\,, \tag{4.4}\] which allows us to directly perform the Gaussian integration over the (Wick-rotated) \(k\) momenta without the need for a parametrization in terms of loop momenta. Defining the symmetric matrix \[\mathbb{T}\equiv\mathbb{T}_{0}+\epsilon^{-1}\mathbb{B}^{\intercal}\mathbb{B}\,, \tag{4.5}\] we can express the result as \[\hat{I}(\tau_{i};p_{n};z_{i})=i^{-1}(4\pi)^{-\frac{d(P-V)}{2}}\epsilon^{-\frac{dV}{2}}(\det\mathbb{T})^{-\frac{d}{2}}\\ \exp\left\{\epsilon^{-1}p^{\intercal}(\mathbb{1}-\epsilon^{-1}\mathbb{B}\mathbb{T}^{-1}\mathbb{B}^{\intercal})p-i\epsilon^{-1}z^{\intercal}\mathbb{T}^{-1}\mathbb{B}^{\intercal}p+\tfrac{1}{4}z^{\intercal}\mathbb{T}^{-1}z\right\}\,. \tag{4.6}\] We now proceed to carefully take the limit \(\epsilon\to 0\). We expect the following behavior:6 Footnote 6: One way to quickly prove these facts is to write \(\mathbb{T}=\mathbb{T}_{0}^{\frac{1}{2}}(\mathbb{1}+\epsilon^{-1}\mathbb{T}_{0}^{-\frac{1}{2}}\mathbb{B}^{\intercal}\mathbb{B}\mathbb{T}_{0}^{-\frac{1}{2}})\mathbb{T}_{0}^{\frac{1}{2}}\) and diagonalize the \(P\times P\) symmetric matrix \(\epsilon^{-1}\mathbb{T}_{0}^{-\frac{1}{2}}\mathbb{B}^{\intercal}\mathbb{B}\mathbb{T}_{0}^{-\frac{1}{2}}\), which is of the form \(\mathrm{diag}(\frac{\lambda_{1}}{\epsilon},\dots,\frac{\lambda_{P-L}}{\epsilon},0,\dots,0)\), with \(\lambda_{i}\neq 0\), from which the small-\(\epsilon\) behaviors of \(\det\mathbb{T}\) and \(\mathbb{T}^{-1}\) follow immediately. 
\[\det\mathbb{T}\sim\mathcal{O}(\epsilon^{L-P})\,,\qquad\mathbb{T}^{-1}\sim\begin{cases}\mathcal{O}(1)&L>0\\ \mathcal{O}(\epsilon)&L=0\end{cases} \tag{4.7}\] We thus expand \(\mathbb{T}^{-1}\) in powers of \(\epsilon\): \[\mathbb{T}^{-1}=\mathbb{Q}+\epsilon\,\mathbb{Q}^{\prime}+\epsilon^{2}\mathbb{Q}^{\prime\prime}+\dots \tag{4.8}\] Comparing coefficients of \(\epsilon^{-1}\) in the equation \(\mathbb{T}\mathbb{T}^{-1}=\mathbb{1}\) one gets \(\mathbb{B}^{\intercal}\mathbb{B}\mathbb{Q}=0\), and hence, since \(\mathbb{B}\) and \(\mathbb{B}^{\intercal}\mathbb{B}\) have the same kernel, \[\mathbb{B}\mathbb{Q}=0\,. \tag{4.9}\] At order \(\epsilon^{0}\), we have \(\mathbb{B}^{\intercal}\mathbb{B}\mathbb{Q}^{\prime}+\mathbb{T}_{0}\mathbb{Q}=\mathbb{1}\) and hence, by use of eq. (4.9), \(\mathbb{B}^{\intercal}\mathbb{B}\mathbb{Q}^{\prime}\mathbb{B}^{\intercal}=\mathbb{B}^{\intercal}\), which implies that \(\mathbb{B}\mathbb{Q}^{\prime}\mathbb{B}^{\intercal}\) acts as the identity on \(\mathrm{im}\,\mathbb{B}\). Since it is also zero on \(\ker\mathbb{B}^{\intercal}\), we have \[\mathbb{B}\mathbb{Q}^{\prime}\mathbb{B}^{\intercal}=\mathbb{P}_{\mathrm{im}\,\mathbb{B}}\,, \tag{4.10}\] where \(\mathbb{P}_{\mathrm{im}\,\mathbb{B}}\) is the projector onto the image of \(\mathbb{B}\). Eqs. (4.9) and (4.10) are important identities that we will use repeatedly in what follows. Using eqs. (4.9) and (4.10) in eq. 
(4.6), we see that the terms quadratic in \(p\) give \[\epsilon^{-1}p^{\intercal}(1-\epsilon^{-1}\mathbb{B}\mathbb{T}^{ -1}\mathbb{B}^{\intercal})p = \epsilon^{-1}p^{\intercal}\mathbb{P}_{\ker\mathbb{B}^{ \intercal}}p-p^{\intercal}\mathbb{B}\mathbb{Q}^{\prime\prime}\mathbb{B}^{ \intercal}p+\mathcal{O}(\epsilon) \tag{4.11}\] \[= \left(V\epsilon\right)^{-1}\left(\sum_{n}p_{n}\right)^{2}-p^{ \intercal}\mathbb{B}\mathbb{Q}^{\prime\prime}\mathbb{B}^{\intercal}p+ \mathcal{O}(\epsilon)\,.\] In the second line we have used that \(\ker\mathbb{B}^{\intercal}\) for a connected graph is one-dimensional and spanned by \((\frac{1}{\sqrt{V}},\frac{1}{\sqrt{V}},\dots)\). We can now safely take the limit \(\epsilon\to 0\) in eq. (4.6), producing the expected delta function as well as some extra finite terms \[\hat{I}(\tau_{i};p_{n};z_{i})=(4\pi)^{-\frac{dL}{2}}(2\pi)^{d}\delta\left(\sum _{n}p_{n}\right)\Delta^{-\frac{d}{2}}\exp\left\{\tfrac{1}{4}z^{\intercal} \mathbb{Q}\,z-iz^{\intercal}\mathbb{R}\,p-p^{\intercal}\mathbb{U}p\right\}\,. \tag{4.12}\] Here we defined the shorthands \[\Delta \equiv V^{-1}\lim_{\epsilon\to 0}\epsilon^{P-L}\det\mathbb{T}\,, \tag{4.13}\] \[\mathbb{R} \equiv \mathbb{Q}^{\prime}\mathbb{B}^{\intercal}\,,\] (4.14) \[\mathbb{U} \equiv \mathbb{B}\mathbb{Q}^{\prime\prime}\mathbb{B}^{\intercal}\,, \tag{4.15}\] and we recall that \(\mathbb{Q}\), \(\mathbb{Q}^{\prime}\) and \(\mathbb{Q}^{\prime\prime}\) were defined in eq. (4.8). In the second term in the exponential in eq. (4.6) we have used eq. (4.9). From this, we extract the function \(I\) from eq. (3.9) by dropping the delta function \[I(\tau_{i};p_{n};z_{i})=(4\pi)^{-\frac{dL}{2}}\Delta^{-\frac{d}{2}}\exp\left\{ \tfrac{1}{4}z^{\intercal}\mathbb{Q}\,z-iz^{\intercal}\mathbb{R}\,p-p^{\intercal }\mathbb{U}p\right\}\,. 
\tag{4.16}\] Finally, we can derive two more interesting relations: \[\mathrm{tr}\,\mathbb{T}_{0}\mathbb{Q} = L\,, \tag{4.17}\] \[\mathbb{R}^{\intercal}\mathbb{T}_{0}\mathbb{R} = -\mathbb{U}\,, \tag{4.18}\] which easily follow from the \(\epsilon\) expansion of \(\mathbb{T}\mathbb{T}^{-1}=\mathbb{1}\) and the relations (4.9) and (4.10). Eq. (4.18) shows that the matrix \(\mathbb{U}\) is actually not independent and that the vertex momenta \(p\) enter in \(I\) only in the combination \(\mathbb{R}p\).

### Simple examples at tree-level and one-loop

Although certainly not the main objective of the formalism, it can be applied to tree-level graphs, for instance for integrating out heavy fields diagrammatically (as opposed to simply applying the equations of motion). Since \(\mathbb{B}\) has trivial kernel, \(\mathbb{B}^{\intercal}\mathbb{B}\) has full rank and is invertible, and one has \[\Delta=1\qquad\mathbb{Q}=0\qquad\mathbb{R}=\mathbb{B}^{+}\qquad\mathbb{U}=-(\mathbb{B}^{+})^{\intercal}\mathbb{T}_{0}\mathbb{B}^{+}\,, \tag{4.19}\] where \(\mathbb{B}^{+}=(\mathbb{B}^{\intercal}\mathbb{B})^{-1}\mathbb{B}^{\intercal}\). At one-loop order, consider the polygon graph from figure 3. One easily finds \[\Delta=\sum_{i}\tau_{i}\qquad\mathbb{Q}_{ij}=\frac{1}{\Delta} \tag{4.20}\] from the definition and the explicit form of the incidence matrix. We give the expressions for \(\mathbb{R}\) and \(\mathbb{U}\) in terms of their polynomials: \[z^{\intercal}\mathbb{R}p=-\sum_{i,j}\frac{\tau_{i}z_{j}}{\Delta}k_{ij}\,,\qquad p^{\intercal}\mathbb{U}p=-\sum_{i<j}\frac{\tau_{i}\tau_{j}}{\Delta}k_{ij}^{2}\,, \tag{4.21}\] where \[k_{ij}\equiv\frac{i-j}{V}\sum_{n}p_{n}+\mathrm{sgn}(j-i)\sum_{n=\min(i,j)+1}^{\max(i,j)}p_{n}\,, \tag{4.22}\] (the first term only contributes a total derivative to the effective Lagrangian). In the next section we will analyze these quantities in the general case.
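The one-loop results above lend themselves to a symbolic cross-check. The following sketch (Python with sympy; the triangle's edge orientations are an assumption of the example, not fixed by the text) computes \(\Delta\) and \(\mathbb{Q}\) for the triangle directly from the \(\epsilon\) expansion of \(\det\mathbb{T}\) and \(\mathbb{T}^{-1}\), reproducing eq. (4.20):

```python
import sympy as sp

# Triangle (one-loop polygon, V = P = 3): incidence matrix with rows = vertices,
# columns = edges e1: 1->2, e2: 2->3, e3: 3->1 (entries: -1 = tail, +1 = head).
B = sp.Matrix([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])
t1, t2, t3, eps = sp.symbols('tau1 tau2 tau3 epsilon', positive=True)
T0 = sp.diag(t1, t2, t3)
Lap = B.T * B                                 # edge-based Laplacian, eq. (5.1)

V = 3
# Delta = V^{-1} lim_{eps->0} eps^{-L} det(Lap + eps*T0), cf. eqs. (4.13), (5.2)
Delta = sp.limit(sp.det(Lap + eps * T0) / eps, eps, 0) / V

# Q is the eps->0 limit of T^{-1} = eps*(Lap + eps*T0)^{-1}, cf. eq. (4.8)
Q = (eps * (Lap + eps * T0).inv()).applyfunc(lambda e: sp.limit(e, eps, 0))

print(sp.expand(Delta))                       # tau1 + tau2 + tau3
print(sp.simplify(Q[0, 1] * Delta))           # 1, i.e. Q_ij = 1/Delta
```

The same two limits work for any graph once its incidence matrix is written down, which is what makes the \(\epsilon\) expansion attractive for automation.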
Figure 3: The one-loop polygon graph.

## 5 Characterization of \(\Delta\), \(\mathbb{Q}\), \(\mathbb{R}\) and \(\mathbb{U}\)

The quantities \(\Delta\), \(\mathbb{Q}\), \(\mathbb{R}\) and \(\mathbb{U}\) appearing in \(I\), eq. (4.16), are entirely encoded in the \(\epsilon\) expansion of \(\det\mathbb{T}\) and \(\mathbb{T}^{-1}\), which provides a simple and efficient way of calculating them directly; this is likely the preferred method for a possible future automation of the formalism. In this section we collect some of their properties. The quantities \(\Delta\) and \(p^{T}\mathbb{U}p\) are in fact well known. \(\Delta\) is called the first Symanzik polynomial, and \(\Delta(p^{T}\mathbb{U}p+\sum_{i}m_{i}^{2}\tau_{i})\) the second Symanzik polynomial. Some of their graph theoretic properties were studied in [40]; see also ref. [39]. The polynomial \(\Delta z^{T}\mathbb{Q}z\) was studied in [41] with the objective of simplifying calculations in quantum electrodynamics.

### Graph theory basics

Let us first define a few more graph theoretic terms. A _spanning tree_ of a connected graph (\(C=1\)) is a connected tree subgraph that contains all vertices. It is easily shown that a spanning tree always has \(L\) edges fewer than the original graph. Removing further edges from a spanning tree disconnects the graph (\(C^{\prime}>1\)); in this case one obtains so-called _spanning \(C^{\prime}\)-forests_ (spanning 2-forests, spanning 3-forests, etc.). Examples of a spanning tree and a spanning forest for the graph of figure 2 are shown in figure 4. A _cut-vertex_ is a vertex that, if removed, splits the graph into two disconnected subgraphs with non-empty edge sets.7 A _cut-edge_ or _bridge_ is an edge that, if removed, splits the graph into two disconnected subgraphs. All edges of a tree graph are bridges. Notice that the two vertices connected to a bridge are also cut-vertices (unless they are not connected to any other edge).
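Bridge and cut-vertex tests are easy to mechanize; a minimal self-contained sketch (the graph, two triangles joined by a single edge, is an ad-hoc example and not one of the figures):

```python
# Example graph: two triangles joined by the single edge (3, 4).
edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4)]
nodes = {v for e in edges for v in e}

def n_components(vs, es):
    """Number of connected components, via union-find."""
    parent = {v: v for v in vs}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in es:
        parent[find(a)] = find(b)
    return len({find(v) for v in vs})

# A cut-edge (bridge) disconnects the graph when removed ...
bridges = [e for e in edges
           if n_components(nodes, [f for f in edges if f != e]) > 1]
# ... and a cut-vertex disconnects it when deleted together with its edges.
cut_vertices = sorted(v for v in nodes
                      if n_components(nodes - {v},
                                      [e for e in edges if v not in e]) > 1)

print(bridges)       # [(3, 4)]: the joining edge is the only bridge
print(cut_vertices)  # [3, 4]: its endpoints are the cut-vertices
```

Removing the edge (3, 4) splits the graph into the two triangles, and deleting either of its endpoints does the same, matching the observation that the two vertices of a bridge are themselves cut-vertices.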
A graph containing cut-edges and cut-vertices is shown in figure 5. Footnote 7: The cut vertex belongs to both subgraphs.

Figure 4: A spanning tree and a spanning 2-forest for the graph of figure 2.

Moreover, _edge cuts_ are sets of edges that, if removed, disconnect the graph. Bridges are edge cuts with just one element. A minimal edge cut (i.e. none of its proper subsets is an edge cut) is called a _bond_. For instance, the edges \(\{e_{1},e_{2},e_{3}\}\) and \(\{e_{3},e_{4}\}\) in figure 2 are bonds, and so are any two edges of the polygon graph in figure 3. Finally, define an equivalence relation on edges as follows: For each edge, let \(\mathcal{C}(e)\) be the set of all cycles \(c\) for which \(e\in c\). Define \(e_{1}\sim e_{2}\) iff \(e_{1}=e_{2}\) or \(\mathcal{C}(e_{1})=\mathcal{C}(e_{2})\neq\emptyset\). Loosely speaking, such edges always occur together in cycles. We show in appendix A that the following definition is equivalent: \(e_{1}\sim e_{2}\) iff \(e_{1}=e_{2}\) or \(\{e_{1},e_{2}\}\) form a bond. We call two equivalent edges _allies_ and the equivalence classes _alliances_.8 The edges 3 and 4 in the graph in figure 2 form an alliance, and so do all edges of the polygon graph in figure 3. Footnote 8: These classes do not appear to be a common topic in graph theory and we have not found any standard terminology for them. In ref. [39] alliances were called chains; this term, however, has a different meaning in the mathematics literature.

### Explicit expressions in terms of subgraphs

Let us define the so-called _edge-based Laplacian_ \[\mathbb{L}\equiv\mathbb{B}^{\intercal}\mathbb{B}. \tag{5.1}\] Our approach has to be contrasted with the usual one in terms of the (vertex-based) Laplacian \(\mathbb{B}\mathbb{B}^{\intercal}\). Since we will not be making use of the latter in this work we will refer to \(\mathbb{L}\) simply as the Laplacian in what follows.
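As a quick numerical illustration of the edge-based Laplacian (the two-loop graph below is an ad-hoc example, not figure 2): its nullity equals the loop number \(L=P-V+1\) of the graph.

```python
import numpy as np

# Incidence matrix (rows = vertices, columns = edges) of an ad-hoc two-loop
# graph on 3 vertices: e1: 1->2, e2: 2->3, e3: 3->1, e4: 1->3 (doubled edge).
B = np.array([[-1,  0,  1, -1],
              [ 1, -1,  0,  0],
              [ 0,  1, -1,  1]])

L_edge = B.T @ B                          # edge-based Laplacian, eq. (5.1)
V, P = B.shape
nullity = P - np.linalg.matrix_rank(L_edge)

print(nullity)                            # 2 = L = P - V + 1
```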
Figure 5: A graph with cut-edges (in green) and cut-vertices (in red).

Moreover denote by \(\mathbb{L}_{(\ell_{1}\dots\ell_{K})}\), with \(\ell_{1}<\ell_{2}<\dots<\ell_{K}\), the Laplacians with rows and columns \(\ell_{1}\dots\ell_{K}\) deleted. These matrices are themselves the Laplacians of the subgraphs obtained by removing the indexed edges from the original graph. Directly from the definition eq. (4.5), we can then write the explicit but slightly cumbersome expression for \(\mathbb{T}^{-1}\): \[\mathbb{T}^{-1}=\frac{\epsilon(\mathbb{L}+\epsilon\mathbb{T}_{0})^{\mathrm{adj}}}{\det(\mathbb{L}+\epsilon\mathbb{T}_{0})}=\frac{\sum_{K=0}^{P}\epsilon^{K+1}\sum_{\ell_{1}<\ell_{2}<\dots<\ell_{K}}\tau_{\ell_{1}}\cdots\tau_{\ell_{K}}\overline{(\mathbb{L}_{(\ell_{1}\dots\ell_{K})})^{\mathrm{adj}}}}{\sum_{K=0}^{P}\epsilon^{K}\sum_{\ell_{1}<\ell_{2}<\dots<\ell_{K}}\tau_{\ell_{1}}\cdots\tau_{\ell_{K}}\det\mathbb{L}_{(\ell_{1}\dots\ell_{K})}}\,. \tag{5.2}\] Here, adj stands for the adjugate matrix (the transposed cofactor matrix), and the bar means to restore the deleted rows and columns and fill them with zeroes. As stated above, the matrices \(\mathbb{L}_{(\ell_{1}\dots\ell_{K})}\) are Laplacians of subgraphs, and hence the sums over the \(\ell_{i}\) can be interpreted as sums over subgraphs. Let us characterize the type of subgraphs that contribute at a given \(K\). The nullity of \(\mathbb{L}\) (the same as the one of \(\mathbb{B}\)) is \(L\); let \(L^{\prime}\) denote the nullity of \(\mathbb{L}_{(\ell_{1}\ldots\ell_{K})}\). Note that \(L^{\prime}\leq L\) since we are removing edges and potentially opening loops. Another thing that can happen is that the graph becomes disconnected; let us denote by \(C^{\prime}\) the number of connected components. Using the standard relation eq. (4.2) as well as \(V=V^{\prime}\) then gives \[L^{\prime}+K=L+(C^{\prime}-1)\,.
\tag{5.3}\] By standard linear algebra results, the rank of the adjugate matrix of an \(N\)-dimensional square matrix \(\mathbb{M}\) is \[\operatorname{rank}\mathbb{M}^{\operatorname{adj}}=\begin{cases}N&\operatorname{rank}\mathbb{M}=N\\ 1&\operatorname{rank}\mathbb{M}=N-1\\ 0&\operatorname{rank}\mathbb{M}<N-1\end{cases} \tag{5.4}\] The first nonzero term in the numerator occurs when \(L^{\prime}=1\), or \(K=L-1\), \(C^{\prime}=1\). In the denominator, the first nonzero term occurs for \(L^{\prime}=0\) or \(K=L\), \(C^{\prime}=1\). Hence, \(\Delta\), \(\mathbb{Q}\), \(\mathbb{R}\) and \(\mathbb{U}\) get contributions from the subgraphs listed in table 3. In particular: \[\Delta = \sum_{S}\omega_{S}\frac{\det\mathbb{L}_{S}}{V}\,, \tag{5.5}\] \[\mathbb{Q} = \sum_{C}\frac{\omega_{C}}{\Delta}\frac{\overline{\mathbb{L}_{C}^{\operatorname{adj}}}}{V}\,, \tag{5.6}\] \[\mathbb{R} = \sum_{S}\frac{\omega_{S}}{\Delta}\frac{\overline{\mathbb{L}_{S}^{\operatorname{adj}}}\mathbb{B}^{\intercal}}{V}\,, \tag{5.7}\] \[\mathbb{U} = \sum_{F}\frac{\omega_{F}}{\Delta}\frac{\{\mathbb{B}\overline{\mathbb{L}_{F}^{\operatorname{adj}}}\mathbb{B}^{\intercal}-\mathbb{P}_{\operatorname{im}\mathbb{B}}\det\mathbb{L}_{F}\}}{V}\,, \tag{5.8}\] where \[\omega_{X}\equiv\prod_{i\notin X}\tau_{i} \tag{5.9}\] is the monomial containing the \(\tau_{i}\) parameters corresponding to the deleted edges.

#### 5.2.1 The Kirchhoff polynomial \(\Delta\)

The determinant \[\Delta=\frac{1}{V}\sum_{\begin{subarray}{c}\text{spanning}\\ \text{trees }S\end{subarray}}\omega_{S}\det\mathbb{L}_{S} \tag{5.10}\] is equal to the so-called Kirchhoff polynomial of the graph.9 Footnote 9: Sometimes the Kirchhoff polynomial is defined as \(\Delta^{\prime}=\prod_{i}\tau_{i}\sum_{S}\omega_{S}^{-1}\). The latter may be given as the determinant of any cofactor of the matrix \(\mathbb{B}\mathbb{T}_{0}\mathbb{B}^{\intercal}\). In this work we will refer to \(\Delta\) as the Kirchhoff polynomial.
\[\Delta=\sum_{\begin{subarray}{c}\text{spanning}\\ \text{trees }S\end{subarray}}\omega_{S}\,. \tag{5.11}\] In order to show the equivalence of the two we can use the Cauchy-Binet formula to compute the determinant of \(\mathbb{L}_{S}\) \[\det\mathbb{L}_{S}=\sum_{n=1}^{V}(\det\mathbb{B}_{S}^{(n)})^{2} \tag{5.12}\] where \(\mathbb{B}_{S}^{(n)}\) is the square matrix obtained from \(\mathbb{B}_{S}\) by removing the \(n\)th row. One can show by induction that \(\det\mathbb{B}_{S}^{(n)}=\pm 1\) [42] and hence by eq. (5.12), \[\det\mathbb{L}_{S}=V\,. \tag{5.13}\] As a side remark, the proof implies that it does not matter whether one uses the directed or undirected graph's incidence matrix. The Kirchhoff polynomial has the following properties: 1. It is a homogeneous polynomial of degree \(L\) in the \(\tau_{i}\) parameters. 2. All the coefficients of the monomials appearing in \(\Delta\) are equal to one. 3. It is at most linear in the parameters \(\tau_{i}\). 4. The \(\tau\) parameter of a cut-edge does not appear in \(\Delta\). \begin{table} \begin{tabular}{c c c c c} & \(\Delta\) & \(\mathbb{Q}\) & \(\mathbb{Q}^{\prime},\ \mathbb{R}\) & \(\mathbb{Q}^{\prime\prime},\ \mathbb{U}\) \\ \hline edges & \(P-L\) & \(P-L+1\) & \(P-L\) & \(P-L-1\) \\ loops & \(0\) & \(1\) & \(0\) & \(0\) \\ connected comp. & \(1\) & \(1\) & \(1\) & \(2\) \\ \hline & spanning tree & one-loop connected & spanning tree & spanning 2-forest \\ & \(S\) & \(C\) & \(S\) & \(F=T_{1}\oplus T_{2}\) \\ \end{tabular} \end{table} Table 3: The types of subgraphs contributing to the various \(\Delta\), \(\mathbb{Q}\), \(\mathbb{R}\) and \(\mathbb{U}\). They all have the same number of vertices as the original graph, only fewer edges. 5. If the graph contains a cut vertex, the corresponding \(\Delta\) factorizes, \(\Delta=\Delta_{1}\Delta_{2}\), where \(\Delta_{1}\) and \(\Delta_{2}\) are the respective polynomials of the separated graphs.
Footnote 10: This includes the special case of a line starting and ending at the same vertex (i.e., a self-loop). 6. The \(\tau_{i}\) parameters of an alliance only enter as their sum. 7. It is an invariant polynomial with respect to the automorphism group of the reduced graph. Properties 1-4 are elementary consequences of eq. (5.11). To see property 5, notice that the spanning trees \(S\) are in one-to-one correspondence with the pairs of spanning trees \((S_{1},S_{2})\) of the subgraphs separated by the cut-vertex. Property 6 will be shown in sec. 5.2.2, and property 7 is clear from eq. (5.10), as the determinant is invariant for any value of \(\epsilon\), and hence the coefficients of the \(\epsilon\) expansion have to be separately invariant.

#### 5.2.2 The matrix \(\mathbb{Q}\)

The matrix \(\mathbb{Q}\) given in eq. (5.6) contains the Laplacians of subgraphs with \(P^{\prime}=P-(L-1)=V\) edges, and hence the corresponding incidence matrices \(\mathbb{B}_{C}\) are square matrices. Therefore, \[\mathbb{L}_{C}^{\text{adj}}=[\mathbb{B}_{C}^{\intercal}\mathbb{B}_{C}]^{\text{adj}}=\mathbb{B}_{C}^{\text{adj}}(\mathbb{B}_{C}^{\text{adj}})^{\intercal}\,. \tag{5.14}\] According to eq. (5.4) the rank of \(\mathbb{B}_{C}^{\text{adj}}\) must be one. It is shown in app. A that \[\mathbb{B}_{C}^{\text{adj}}=v_{C}(1,1,\ldots 1)\,, \tag{5.15}\] where the vector \(v_{C}\) spans the one-dimensional kernel of \(\mathbb{B}_{C}\), that is, it describes the single loop of \(C\). Its components are \(\pm 1\) for edges belonging to the cycle and zero otherwise. Thus, \[\mathbb{L}_{C}^{\text{adj}}=Vv_{C}v_{C}^{\intercal}\,. \tag{5.16}\] Only the relative sign of the components for different \(i,j\) matters: they have opposite signs if the edges are oppositely oriented ("anti-parallel").
Therefore \[v_{C}^{i}v_{C}^{j}=\begin{cases}+1&i,j\in C,\ i,j\text{ parallel}\\ -1&i,j\in C,\ i,j\text{ anti-parallel}\\ 0&i\notin C\text{ or }j\notin C\end{cases} \tag{5.17}\] and \[\mathbb{Q}=\sum_{\begin{subarray}{c}C\text{ connected,}\\ \text{one loop}\end{subarray}}\frac{\omega_{C}}{\Delta}\;\overline{v_{C}v_{C} ^{\intercal}}\,, \tag{5.18}\] where, as previously, the bar instructs us to fill missing rows/columns (edges not belonging to the graph \(C\)) with zeros. We notice furthermore that \(\overline{v}_{C}\) is in the kernel of \(\mathbb{B}\), which confirms our previous result eq. (4.9). The objects \(v_{C}\) are elements of the so-called cycle space (see App. A), in particular they are simple cycles (one-loop). Since there are usually several one loop graphs \(C\) that give rise to the same simple cycle \(c\), one can simplify the expression for \(\mathbb{Q}\) further by summing over the simple cycles \(c\) instead of the graphs \(C\). One gets \[\mathbb{Q}=\sum_{\begin{subarray}{c}c\text{ simple}\\ \text{cycle}\end{subarray}}\frac{\Delta_{c}}{\Delta}\,c\,c^{\intercal} \tag{5.19}\] where \(\Delta_{c}\) is the Kirchhoff polynomial for the graph obtained from the original one by contracting that cycle to a point. The equivalence of eqns. (5.18) and (5.19) is proven in appendix A. The polynomial \(\Delta z^{\intercal}\mathbb{Q}z\) has been described previously in [41] where it was dubbed the cycle polynomial. If there are any cut vertices, then the matrix \(\mathbb{Q}\) is block diagonal, since any simple cycle can only belong to one of the two subgraphs separated by the cut-vertex. The representation of \(\mathbb{Q}\) in eq. (5.19) allows for a simple proof of the alliance property of \(\Delta\) (property 6 in the list in sec. 5.2.1). Since \(\operatorname{tr}\mathbb{Q}\mathbb{T}_{0}=L\), \[L\Delta=\sum_{\begin{subarray}{c}c\text{ simple}\\ \text{cycle}\end{subarray}}\Delta_{c}\,c^{\intercal}\mathbb{T}_{0}c\,. 
\tag{5.20}\] For two allied edges there are two possibilities. Either they are both in \(c\) or both absent from it. If they are both in \(c\) they cannot appear in \(\Delta_{c}\) because by construction the contracted graph does not contain them. In this case they clearly appear as a sum in \(\Delta\) (from the term \(c^{T}\mathbb{T}_{0}c\)). If they do not appear in \(c\), then consider the contracted graph. The contracted graph again features an alliance with the same two edges, so one can proceed recursively, _q.e.d._ It follows from the alliance property of \(\Delta\) and \(\Delta_{c}\) that \(\mathbb{Q}\) also has the same property. Let us summarize the essential characteristics of \(\mathbb{Q}\): 1. \(\mathbb{Q}\) is symmetric. 2. \(\operatorname{tr}\mathbb{T}_{0}\mathbb{Q}=L\). 3. \(\operatorname{im}\mathbb{Q}=\ker\mathbb{B}\), in particular \(\operatorname{rank}(\mathbb{Q})=L\). 4. \(\mathbb{Q}\) can be written as a sum over simple cycles, eq. (5.19). 5. The \(\tau\) parameters of alliances only enter as their sum. 6. If \(e_{i}\), \(e_{j}\) are allies then \(\mathbb{Q}_{ik}=\pm\mathbb{Q}_{jk}\). This is clear from eqns. (5.18) or (5.19). 7. If there are any cut vertices, \(\mathbb{Q}\) becomes block-diagonal.

#### 5.2.3 The matrix \(\mathbb{R}\)

Starting from eq. (5.7), we compute \(\mathbb{R}\) explicitly as follows. We first notice that \[\overline{\mathbb{L}_{S}^{\rm adj}}\mathbb{B}^{\intercal}=\overline{\mathbb{L}_{S}^{\rm adj}\mathbb{B}_{S}^{\intercal}}\,. \tag{5.21}\] For a tree graph such as \(S\), the matrix \(\mathbb{L}_{S}\) is in fact regular, \(\det\mathbb{L}_{S}=V\), see eq. (5.13), so one gets \[V^{-1}\mathbb{L}_{S}^{\rm adj}\mathbb{B}_{S}^{\intercal}=\mathbb{L}_{S}^{-1}(\mathbb{B}_{S})^{\intercal}\equiv\mathbb{B}_{S}^{+}\,, \tag{5.22}\] and therefore, \[\mathbb{R}=\sum_{\begin{subarray}{c}\text{spanning}\\ \text{trees }S\end{subarray}}\frac{\omega_{S}}{\Delta}\overline{\mathbb{B}_{S}^{+}}\,.
\tag{5.23}\] Acting with \(\mathbb{B}_{S}^{+}\) on \(\mathbb{B}_{S}\) shows that it is a left inverse of \(\mathbb{B}_{S}\), and it therefore satisfies the first relation in eq. (B.2) (with \(\mathbb{P}_{\mathrm{im}\,\mathbb{B}_{S}^{\intercal}}=\mathbb{1}\)). Since it also satisfies the second relation in eq. (B.2) it is equal to the Moore-Penrose pseudo-inverse (see app. B for details). It remains to calculate the MP pseudo-inverse explicitly. In view of momentum conservation, \(p+\mathbb{B}_{S}k=0\), i.e. \[(\mathbb{B}_{S})^{+}p=-k\,, \tag{5.24}\] we expect that \([(\mathbb{B}_{S})^{+}p]_{i}\) equals minus the momentum flowing through propagator \(i\). For a tree-level graph such as \(S\), this momentum can be found in a rather simple way.

Figure 6: The definition of the subgraphs \(T_{i}^{1,2}\) appearing in \(\mathbb{R}\).

For instance, consider the connected tree graph in figure 6a. The momentum flowing through edge \(i\) can be expressed by either the ingoing or outgoing momenta (recall that all \(p_{n}\) _enter_ the vertices): \[p_{T_{i}^{1}}\equiv\sum_{n\in T_{i}^{1}}p_{n}\,,\qquad p_{T_{i}^{2}}\equiv\sum_{n\in T_{i}^{2}}p_{n}\,, \tag{5.25}\] where \(T_{i}^{1}\) and \(T_{i}^{2}\) denote the disjoint tree graphs resulting from removing edge \(i\) from \(S\), where by definition the edge \(i\) points from \(T_{i}^{1}\) to \(T_{i}^{2}\), see figure 6b, i.e. \(k_{i}=p_{T_{i}^{1}}=-p_{T_{i}^{2}}\). Either of the two choices defines a left inverse for \(\mathbb{B}_{S}\), and many more are possible by using momentum conservation. We show in app. B that the one corresponding to the MP inverse is given by \[k_{S,i}=-(\mathbb{B}_{S}^{+}p)_{i}=\frac{V_{T_{i}^{2}}}{V}p_{T_{i}^{1}}-\frac{V_{T_{i}^{1}}}{V}p_{T_{i}^{2}}\,. \tag{5.26}\] Filling in the missing rows of \(k_{S}\) with zeroes using the bar notation, we can write \[\mathbb{R}p=-\sum_{\begin{subarray}{c}\text{spanning}\\ \text{trees }S\end{subarray}}\frac{\omega_{S}}{\Delta}\overline{k_{S}}\,.
\tag{5.27}\] Let us summarize some of the properties of \(\mathbb{R}\): 1. \(\mathbb{R}\) is a right pseudo-inverse of \(\mathbb{B}\), i.e. \(\mathbb{B}\mathbb{R}=\mathbb{P}_{\text{im}\,\mathbb{B}}\), eq. (4.10). 2. Notice that \(\mathbb{R}\) is not the MP inverse of \(\mathbb{B}\); however, \(\mathbb{B}^{+}\equiv\mathbb{P}_{\text{im}\,\mathbb{B}^{\intercal}}\mathbb{R}\) is. This follows from the characterization of the MP inverse in appendix B, see eq. (B.3). 3. \(\mathbb{R}\) satisfies \(\mathbb{R}\mathbb{P}_{\text{ker}\,\mathbb{B}^{\intercal}}=0\), eq. (4.14). 4. \(\mathbb{R}\) can be written as a sum over spanning trees, eq. (5.23). 5. Replacing \(\mathbb{B}_{S}^{+}\) by any arbitrary left inverse, for instance \(k^{\prime}_{S,i}=p_{T_{i}^{1}}\), just adds a total derivative to the effective Lagrangian. The corresponding function \(I^{\prime}_{G_{0}}(\tau_{i};p_{n};z_{i})\) is invariant under graph automorphisms only modulo momentum conservation.

#### 5.2.4 The matrix \(\mathbb{U}\)

The matrix \(\mathbb{U}\) is closely related to the so-called second Symanzik polynomial. According to table 3, it can be written as a sum over spanning two-forests, i.e., tree subgraphs with two connected components containing all the vertices of the original graph. We first observe that the generalization of eq. (5.13) is \[\det\mathbb{L}_{F}=\det\mathbb{L}_{T_{1}}\det\mathbb{L}_{T_{2}}=V_{T_{1}}V_{T_{2}}\,, \tag{5.28}\] since by a suitable renumbering of the edges we can write \(\mathbb{L}_{F}\) in a block-diagonal form. Therefore, we can rewrite eq. (5.8) as \[\mathbb{U}=\sum_{\begin{subarray}{c}\text{spanning}\\ \text{2-forest}\\ F=T_{1}\oplus T_{2}\end{subarray}}\frac{\omega_{F}}{\Delta}\frac{V_{T_{1}}V_{T_{2}}}{V}\{\overline{\mathbb{B}_{F}\mathbb{L}_{F}^{-1}\mathbb{B}_{F}^{\mathsf{T}}}-\mathbb{P}_{\mathrm{im}\,\mathbb{B}}\}\,.
\tag{5.29}\] We can reexpress it as \[\mathbb{U}=\sum_{\begin{subarray}{c}\text{spanning}\\ \text{2-forest}\\ F=T_{1}\oplus T_{2}\end{subarray}}\frac{\omega_{F}}{\Delta}\frac{V_{T_{1}}V_{T_{2}}}{V}\left(\overline{\mathbb{P}}_{\mathrm{im}\,\mathbb{B}_{T_{1}}}+\overline{\mathbb{P}}_{\mathrm{im}\,\mathbb{B}_{T_{2}}}-\mathbb{P}_{\mathrm{im}\,\mathbb{B}}\right)\,. \tag{5.30}\] In terms of the momenta, this simply becomes \[p^{\mathsf{T}}\mathbb{U}p=-\sum_{\begin{subarray}{c}\text{spanning}\\ \text{2-forest}\\ F=T_{1}\oplus T_{2}\end{subarray}}\frac{\omega_{F}}{\Delta}\left(\frac{V_{T_{2}}}{V}p_{T_{1}}-\frac{V_{T_{1}}}{V}p_{T_{2}}\right)^{2}\,, \tag{5.31}\] where \(p_{T_{k}}=\sum_{n\in T_{k}}p_{n}\) is the total momentum influx into \(T_{k}\). Using momentum conservation, this can also be written as \[p^{\mathsf{T}}\mathbb{U}^{\prime}p=\sum_{\begin{subarray}{c}\text{spanning}\\ \text{2-forest}\\ F=T_{1}\oplus T_{2}\end{subarray}}\frac{\omega_{F}}{\Delta}p_{T_{1}}\cdot p_{T_{2}}\,, \tag{5.32}\] which is the form given in [39, 40]. We summarize the main properties of \(\mathbb{U}\): 1. \(\mathbb{U}\) is symmetric. 2. \(\mathbb{P}_{\ker\mathbb{B}^{\mathsf{T}}}\,\mathbb{U}=\mathbb{U}\,\mathbb{P}_{\ker\mathbb{B}^{\mathsf{T}}}=0\). 3. \(\mathbb{U}=-\mathbb{R}^{\mathsf{T}}\mathbb{T}_{0}\mathbb{R}\), eq. (4.18). 4. \(p^{\mathsf{T}}\mathbb{U}p\) can be given as a sum over spanning 2-forests, eq. (5.31).

## 6 Symmetries, symmetry factors, and complex conjugation

In this section we would like to investigate the symmetries of a given Feynman graph in relation to our formalism. The purpose is threefold: firstly, we would like to define the symmetry factors of a graph in a more formal way than is usually done in standard quantum field theory textbooks. Secondly, we would like to study the induced transformations on the \(I\) and \(\Gamma\) factors and, in the case of the latter, point out some computational simplifications that they imply.
Thirdly, we would like to study complex conjugation of graphs in our formalism. We start by defining _isomorphisms_ of graphs [43]. An isomorphism between two Feynman graphs \(G\) and \(G^{\prime}\) is a label-preserving bijection \(\varphi=(\varphi_{v},\varphi_{e},\varphi_{o})\), where \(\varphi_{v}\) (\(\varphi_{e}\)) maps vertices (edges) of \(G\) to vertices (edges) of \(G^{\prime}\), and \(\varphi_{o}\) is a flip of the orientation of some subset of the real edges, such that \[\mathbb{S}_{\varphi_{v}}\mathbb{B}_{G}\left(\mathbb{S}_{\varphi_{e}}\right)^{\intercal}\mathbb{S}_{\varphi_{o}}=\mathbb{B}_{G^{\prime}}\,. \tag{6.1}\] Here, \(\mathbb{S}_{\varphi_{v,e,o}}\) are the matrix representations of \(\varphi_{v,e,o}\). The diagonal matrix \(\mathbb{S}_{\varphi_{o}}\) has entries \((\mathbb{S}_{\varphi_{o}})_{ii}=-1\) for flipped edges and \((\mathbb{S}_{\varphi_{o}})_{ii}=+1\) for all other edges. "Label-preserving" simply means that \(\varphi_{v}\) only maps vertices to vertices of the same kind and similarly for edges. Two isomorphic graphs give the same value when applying the Feynman rules. An _automorphism_ (or symmetry) of a graph \(G\) is an isomorphism between \(G\) and itself. In particular, an automorphism satisfies \[\mathbb{S}_{\varphi_{v}}\mathbb{B}_{G}\left(\mathbb{S}_{\varphi_{e}}\right)^{\intercal}\mathbb{S}_{\varphi_{o}}=\mathbb{B}_{G}\,. \tag{6.2}\] Automorphisms form a group, \(\mathrm{Aut}(G)\), which is a subgroup of \(S_{V}\times(S_{P}\ltimes C_{2}^{P})\), where the first factor is the group of vertex permutations, and the second factor the semi-direct product of edge permutations and orientation flips of real edges. It is clear that the underlying reduced graph \(G_{0}\) is also invariant under the same operations.
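The defining condition eq. (6.2) can be checked by brute force for small graphs. A minimal sketch for a two-vertex graph with two oppositely oriented edges between the vertices (the one-loop Yukawa example discussed next); orientation flips are omitted here, since the fermionic edges of that example are not real and may not be flipped:

```python
import numpy as np
from itertools import permutations

# Incidence matrix of a two-vertex graph with edges e1: v1->v2, e2: v2->v1
# (entries: -1 = tail, +1 = head).
B = np.array([[-1,  1],
              [ 1, -1]])

def perm_matrix(p):
    """Permutation matrix S with S e_i = e_{p[i]}."""
    S = np.zeros((len(p), len(p)), dtype=int)
    for i, j in enumerate(p):
        S[j, i] = 1
    return S

V, P = B.shape
# Brute-force search for eq. (6.2): S_v B S_e^T = B (no orientation flips).
autos = [(pv, pe)
         for pv in permutations(range(V))
         for pe in permutations(range(P))
         if np.array_equal(perm_matrix(pv) @ B @ perm_matrix(pe).T, B)]

print(len(autos))   # 2, giving symmetry factor 1/2 via eq. (6.3)
```

Both vertices and both edges are of the same kind in this example, so label preservation is automatic; for mixed graphs one would additionally restrict the permutations to like-labeled vertices and edges.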
Notice that if \(G\) contains a real self-loop \(e_{k}\), \(\mathrm{Aut}(G)\) always contains an element that flips the orientation of this edge and does nothing else, according to \(\mathbb{S}_{\varphi_{v}}=\mathbb{1}\), \(\mathbb{S}_{\varphi_{e}}=\mathbb{1}\), and \((\mathbb{S}_{\varphi_{o}})_{ij}=\delta_{ij}(-1)^{\delta_{ik}}\). We define the symmetry factor of the graph as \[\mathrm{SF}=\frac{1}{|\mathrm{Aut}(G)|}\,, \tag{6.3}\] that is, the reciprocal of the order of the automorphism group. Nontrivial automorphism groups are more common for graphs without any external legs. Notice that \(x_{n}\) and \(p_{n}\) naturally transform under \(S_{V}\), \(\tau_{i}\) under \(S_{P}\), and \(z_{i}\) and \(k_{i}\) transform under \(S_{P}\ltimes C_{2}^{P}\). Since the graph remains unchanged, we have that under the action of \(\mathrm{Aut}(G)\) \[\Gamma_{G}(\tau_{i};x_{n},k_{i}) \rightarrow \Gamma_{G}(\tau_{i};x_{n},k_{i})\] \[I_{G_{0}}(\tau_{i};p_{n},z_{i}) \rightarrow I_{G_{0}}(\tau_{i};p_{n},z_{i}) \tag{6.4}\] The invariance of \(I(\tau_{i};p_{n};z_{i})\) in eq. (4.16) is manifest due to the symmetric treatment of the momenta \(k_{i}\) and \(p_{n}\) in section 4. Let us give some examples. Consider the one-loop graph in Yukawa theory (with a real scalar coupling to fermion-antifermion) in figure 7a. It has a single nontrivial automorphism, given by the following transformation, which permutes both the vertices and the edges: \[\{v_{1},v_{2};e_{1},e_{2}\}\to\{v_{2},v_{1};e_{2},e_{1}\}\,. \tag{6.5}\] It is easily seen that the graph is unchanged by this operation (the pictorial identity, eq. (6.6), is omitted here).11 Footnote 11: Notice that it is irrelevant how the graphs are drawn; they are the same and not merely isomorphic. Since the two graphs are equal they must give the same \(\Gamma\).
This can be verified by applying the transformation to \(\Gamma\) directly: \[\mathrm{tr}\left[C(x_{1})(i\not{D}_{1}+\not{k}_{1}+m)B(\tau_{1};x_{1},x_{2})C(x_{2})(i\not{D}_{2}+\not{k}_{2}+m)B(\tau_{2};x_{2},x_{1})\right]e^{-m^{2}(\tau_{1}+\tau_{2})}\] \[\to \mathrm{tr}\left[C(x_{2})(i\not{D}_{2}+\not{k}_{2}+m)B(\tau_{2};x_{2},x_{1})C(x_{1})(i\not{D}_{1}+\not{k}_{1}+m)B(\tau_{1};x_{1},x_{2})\right]e^{-m^{2}(\tau_{1}+\tau_{2})}\,. \tag{6.7}\] The two expressions are the same due to the cyclic property of the trace. This is the only symmetry and therefore the symmetry factor is \(\left|\mathrm{Aut}(G)\right|^{-1}=\frac{1}{2}\). Next, consider the graph from Yang-Mills theory in figure 7b. It has two nontrivial automorphisms, given by \[\{v_{1},v_{2};e_{1},e_{2},e_{3}\} \to\{v_{2},v_{1};-e_{1},-e_{2},-e_{3}\}\,, \tag{6.8}\] \[\{v_{1},v_{2};e_{1},e_{2},e_{3}\} \to\{v_{1},v_{2};e_{2},e_{1},e_{3}\}\,. \tag{6.9}\]

Figure 7: Some graphs with nontrivial automorphism groups.

In the first transformation, the two vertices are permuted and the gauge boson edges are flipped. The graph is indeed invariant under this transformation (the pictorial equations demonstrating the invariance are omitted here). As an example of complex conjugation, consider the graph in Yukawa theory with a complex scalar field (the pictorial equations showing the graph and its complex conjugate are omitted here).

## Acknowledgments

I would like to thank Kevin Santos and Sylvain Fichet for discussions. I acknowledge financial support by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under fellowship number 309448/2020-4.
## Appendix A Cycle spaces

In this appendix we briefly discuss the kernel of the incidence matrix \(\mathbb{B}\) and the closely related concept of cycle space. Consider a graph \(G\) and a spanning tree \(S\). Adding back in any of the \(L\) deleted edges gives a one-loop connected graph; let us denote this graph by \(C\) and the corresponding incidence matrix by \(\mathbb{B}_{C}\), which is (being connected and one-loop) a \(V\times V\) matrix of rank \(V-1\). Therefore, its adjugate (a.k.a. transposed cofactor) matrix \((\mathbb{B}_{C})^{\rm adj}\) has rank 1, see eq. (5.4). Since we know that the cokernel of \(\mathbb{B}_{C}\) is spanned by \((1,1,\ldots 1)\), and from \(\mathbb{B}_{C}(\mathbb{B}_{C})^{\rm adj}=0=(\mathbb{B}_{C})^{\rm adj}\mathbb{B}_{C}\), we must have that \[\mathbb{B}_{C}^{\rm adj}=v_{C}(1,1,\ldots 1)\,, \tag{A.1}\] where the column vector \(v_{C}\) spans the one-dimensional kernel of \(\mathbb{B}_{C}\), describing the single loop of \(C\). It remains to fix the normalization of \(v_{C}\). By a known result in graph theory (see for instance Lemma 2.6 in ref. [42]), the determinants of the incidence matrix of a connected tree graph with one row deleted are all \(\pm 1\). Therefore the entries of \((\mathbb{B}_{C})^{\rm adj}\), and hence the \((v_{C})_{i}\), are \(\pm 1\) when \(i\) participates in the loop and zero otherwise. It is clear from the form of \(\mathbb{B}_{C}\) that when the edges \(i\) and \(j\) are oriented in the same way in the loop they have the same sign, and if they are oppositely oriented they have the opposite sign (otherwise \(v_{C}\) would not be a vector of the kernel of \(\mathbb{B}_{C}\)). This provides a formal way of finding a basis for the null space of \(\mathbb{B}\) (by choosing an arbitrary spanning tree, adding in deleted edges one at a time, and computing the adjugate matrices). There are \(L\) different possibilities to add back deleted edges; the resulting \(v_{C}\) are called the fundamental cycles of \(S\).
Clearly, they are linearly independent (each has one edge unique to it) and hence the \(v_{C}\) form a (non-orthonormal) basis for \(\ker\mathbb{B}\). They can be normalized so all entries equal \(\pm 1\).12 Footnote 12: One can of course also read off these fundamental cycles by inspection of the graph \(C\). Adding an edge to \(S\) to form \(C\) connects two vertices; these two vertices are already connected by a unique path in \(S\). The cycle is given by joining the path and the edge, and \(v_{C}\) can be read off immediately. The \(v_{C}\) obtained this way are called the simple cycles of \(G\) (simple referring to the fact that the corresponding \(C\) are one-loop). Here is a list of simple cycles for the graph of figure 2: \[\begin{pmatrix}1\\ 1\\ 0\\ 0\\ 0\end{pmatrix},\begin{pmatrix}0\\ 1\\ 1\\ 1\\ 0\end{pmatrix},\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 1\\ 1\end{pmatrix},\begin{pmatrix}1\\ 0\\ 0\\ 0\\ 1\\ 1\end{pmatrix},\begin{pmatrix}1\\ 0\\ -1\\ -1\\ 0\end{pmatrix},\begin{pmatrix}0\\ 1\\ 1\\ 1\\ 0\end{pmatrix},\begin{pmatrix}1\\ 1\\ 1\\ -1\\ 0\end{pmatrix},\begin{pmatrix}1\\ 1\\ -1\\ 0\end{pmatrix},\begin{pmatrix}1\\ 1\\ -1\\ 0\end{pmatrix}. \tag{A.2}\] Any three of them are linearly independent and form a basis for \(\ker\mathbb{B}\). A note on terminology. In graph theory the term "cycle space" usually refers to the kernel of \(\mathbb{B}\) over the two-element field \(\mathbb{Z}_{2}\). This space has dimension \(L\) and consists of \(2^{L}\) vectors (for instance, the complete cycle space of the graph in figure 2 consists of the vectors in eq. (A.2) modulo 2, as well as the zero-vector and the vector \((1,1,0,0,1,1)^{\mathsf{T}}\)). Throughout this work, cycle space means \(\ker\mathbb{B}\) (over the reals), and we will always use the term cycle to refer to elements of the cycle space normalized such that their entries equal \(\pm 1\). The alliances defined in section 5.1 can be identified by looking at any cycle basis.
Two edges \(e_{i},\ e_{j}\) that are not bridges are allies iff for every element \(c\) in the basis \(c_{i}=c_{j}\mod 2\). Removing \(e_{i}\) and \(e_{j}\) from the graph reduces the dimension of the cycle space by one (it must decrease because \(e_{i}\) and \(e_{j}\) are not bridges, and it cannot decrease by more than one because in that case a cycle would exist that contained one edge but not the other). Using \(P^{\prime}=P-2\), \(L^{\prime}=L-1\), \(V^{\prime}=V\), we have \(C^{\prime}=C+1\); hence the number of disconnected components increases by one and \(\{e_{i},e_{j}\}\) form a bond. Conversely, if \(\{e_{i},e_{j}\}\) form a bond, removing these edges results in \(C^{\prime}=C+1\) and hence \(L^{\prime}=L-1\), which implies that there cannot be any cycle that contains one edge but not the other, as otherwise we would have had \(L^{\prime}=L-2\). This shows the equivalence of the two characterizations of alliances in section 5.1.13

Footnote 13: Let \(b\) be the vector that has a one at positions \(i\) and \(j\) and zeroes elsewhere. Then the condition can be rephrased as \(b^{\mathsf{T}}c=0\mod 2\) for all basis cycles \(c\). In other words, \(b\) is an element of the orthogonal complement of the cycle space over \(\mathbb{Z}_{2}\), which is also called the bond space [43]. It is isomorphic to the row space of the incidence matrix over \(\mathbb{Z}_{2}\).

Let us now prove that eq. (10) follows from eq. (11). We need to show that for any given simple cycle \(c\), \[\sum_{\begin{subarray}{c}C\\ v_{C}=c\end{subarray}}\omega_{C}=\Delta_{c}\,. \tag{11}\] Fix any \(i\) with \(c_{i}\neq 0\), that is, any edge that participates in \(c\). Denote by \(j\neq i\) the remaining edges of \(c\). Then the graphs \(C\) are in one-to-one correspondence with the spanning trees \(S\) that do not contain \(i\) but contain all \(j\) (the correspondence being removing/adding edge \(i\)).
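The parity criterion for alliances is mechanical to check. A minimal sketch of our own (using the hypothetical square-with-chord graph from the illustration above, not the graph of figure 2):

```python
def are_allies(basis, i, j):
    """Edges i, j are allies iff c_i = c_j (mod 2) for every basis cycle c."""
    return all(c[i] % 2 == c[j] % 2 for c in basis)

# Cycle basis of the square 0-1-2-3 with chord 0-2 (edges e0..e4):
basis = [
    [1, 1, 1, 1, 0],    # square: e0, e1, e2, e3
    [1, 1, 0, 0, -1],   # triangle: e0, e1, and e4 against its orientation
]

# e0 and e1 appear together in every cycle: removing both isolates vertex 1,
# so they form a bond (allies).
assert are_allies(basis, 0, 1)
# e0 and e2 appear in different cycle patterns: the graph stays connected
# after removing both, so they are not allies.
assert not are_allies(basis, 0, 2)
```

This is just the footnote's condition \(b^{\mathsf{T}}c=0\mod 2\) written out entrywise for a rank-one test vector \(b\).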
Hence \[\sum_{\begin{subarray}{c}C\\ v_{C}=c\end{subarray}}\omega_{C}=\sum_{\begin{subarray}{c}S\\ i\notin S,\,j\in S\end{subarray}}\frac{\omega_{S}}{\tau_{i}}=\sum_{S}[\partial_{\tau_{i}}\omega_{S}]_{\tau_{j}=0}=[\partial_{\tau_{i}}\Delta]_{\tau_{j}=0}\,.\] (A.4) Owing to the contraction and deletion relations [39], this amounts to first deleting edge \(i\) and then contracting the edges \(j\). But this is the same as contracting the whole loop.

## Appendix B Moore-Penrose pseudo inverses

Consider an \(N\times M\) real-valued matrix \(\mathbb{M}\) with possibly nontrivial kernel and cokernel. The Moore-Penrose pseudo inverse of \(\mathbb{M}\) is defined via the conditions [44] \[\mathbb{M}\mathbb{M}^{+}\mathbb{M}=\mathbb{M}\,,\qquad\mathbb{M}^{+}\mathbb{M}\mathbb{M}^{+}=\mathbb{M}^{+}\,,\qquad(\mathbb{M}^{+}\mathbb{M})^{\intercal}=\mathbb{M}^{+}\mathbb{M}\,,\qquad(\mathbb{M}\mathbb{M}^{+})^{\intercal}=\mathbb{M}\mathbb{M}^{+}\,.\] (B.1) These conditions fix \(\mathbb{M}^{+}\) uniquely. An equivalent formulation can be given in terms of the projectors onto the column and row spaces of \(\mathbb{M}\), denoted here by \(\mathbb{P}_{\rm im\,\mathbb{M}}\) and \(\mathbb{P}_{\rm im\,\mathbb{M}^{\intercal}}\) respectively. Either of the following pairs of equations is equivalent to the four equations in eq. (B.1): \[\mathbb{M}^{+}\mathbb{M}=\mathbb{P}_{\rm im\,\mathbb{M}^{\intercal}}\,,\qquad\mathbb{M}^{+}=\mathbb{M}^{+}\mathbb{P}_{\rm im\,\mathbb{M}}\,.\] (B.2) \[\mathbb{M}\mathbb{M}^{+}=\mathbb{P}_{\rm im\,\mathbb{M}}\,,\qquad\mathbb{M}^{+}=\mathbb{P}_{\rm im\,\mathbb{M}^{\intercal}}\mathbb{M}^{+}\,.\] (B.3) _Proof._ We first show that each of (B.1), (B.2), (B.3) defines a unique matrix. Uniqueness of \(\mathbb{M}^{+}\) from eq. (B.1) is a classic result [44]. To see that eq. (B.2) fixes \(\mathbb{M}^{+}\) uniquely, consider any \(v\in{\rm im\,\mathbb{M}}\). By standard linear algebra, there exists a unique \(w\in{\rm im\,\mathbb{M}^{\intercal}}\) with \(v=\mathbb{M}w\).
For two matrices \(\mathbb{X},\mathbb{Y}\) satisfying the relations in eq. (B.2), calculate \(\mathbb{X}v=\mathbb{X}\mathbb{M}w=w\) and \(\mathbb{Y}v=\mathbb{Y}\mathbb{M}w=w\), hence \(\mathbb{X}v=\mathbb{Y}v\) for all \(v\in{\rm im\,\mathbb{M}}\), and hence \(\mathbb{X}\mathbb{P}_{\rm im\,\mathbb{M}}=\mathbb{Y}\mathbb{P}_{\rm im\,\mathbb{M}}\). The second relation in eq. (B.2) then implies \(\mathbb{X}=\mathbb{Y}\). The uniqueness from eq. (B.3) is analogous. To see the existence of \(\mathbb{M}^{+}\), and to prove that all three definitions give the same \(\mathbb{M}^{+}\), one considers the singular value decomposition of \(\mathbb{M}\) [45] \[\mathbb{M}=\mathbb{O}_{L}\begin{pmatrix}\mathbb{M}_{\rm diag}&0\\ 0&0\end{pmatrix}\mathbb{O}_{R}^{\intercal}\,,\] (B.4) where \(\mathbb{M}_{\rm diag}\) is the diagonal matrix containing the \(\rm rank\,\mathbb{M}\) nonzero singular values of \(\mathbb{M}\), the zeroes stand for rectangular zero matrices of the appropriate dimensions, and \(\mathbb{O}_{L,R}\) are orthogonal matrices. One then defines the \(M\times N\) matrix [45] \[\mathbb{M}^{+}\equiv\mathbb{O}_{R}\begin{pmatrix}\mathbb{M}_{\rm diag}^{-1}&0\\ 0&0\end{pmatrix}\mathbb{O}_{L}^{\intercal}\,.\] (B.5) It is a trivial matter to show that \(\mathbb{M}^{+}\) defined in this way satisfies all equations in eqs. (B.1), (B.2) and (B.3), and hence they all define the same matrix \(\mathbb{M}^{+}\) and are therefore equivalent characterizations of the MP inverse, _q.e.d._ The characterizations eq. (B.2) and eq. (B.3) are useful in practice. For instance, consider _any_ left pseudo-inverse \(\mathbb{X}\) of \(\mathbb{M}\), i.e., any matrix that satisfies the first relation in eq. (B.2). Then \(\mathbb{M}^{+}=\mathbb{X}\mathbb{P}_{\rm im\,\mathbb{M}}\). Consider now the case of trivial kernel, but nontrivial cokernel (an injective but not surjective map).
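The SVD construction is straightforward to check numerically. The following sketch (our own illustration, using an arbitrary rank-deficient matrix rather than anything from the paper) builds \(\mathbb{M}^{+}\) exactly as in the displayed definition and verifies the four defining conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 5x4 matrix of rank 2: nontrivial kernel and cokernel.
M = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

# Pseudo inverse from the SVD: invert the nonzero singular values only.
U, s, Vt = np.linalg.svd(M)
r = int(np.sum(s > 1e-10 * s[0]))  # numerical rank
Mp = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

ok = np.allclose
# The four Moore-Penrose conditions:
assert ok(M @ Mp @ M, M)
assert ok(Mp @ M @ Mp, Mp)
assert ok((Mp @ M).T, Mp @ M)
assert ok((M @ Mp).T, M @ Mp)
# Agrees with the library implementation.
assert ok(Mp, np.linalg.pinv(M))
```

Note that `Mp @ M` and `M @ Mp` are precisely the orthogonal projectors onto the row and column spaces, matching the projector characterization above.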
It is easily seen that \(\mathbb{M}^{\intercal}\mathbb{M}\) is regular, and that \((\mathbb{M}^{\intercal}\mathbb{M})^{-1}\mathbb{M}^{\intercal}\) is a left inverse for \(\mathbb{M}\). It is readily checked via either eq. (B.1) or eq. (B.2) that it is equal to the MP inverse \(\mathbb{M}^{+}\). In eq. (103) we wrote particular left inverses for the incidence matrix \(\mathbb{B}_{S}\) of some connected tree graph \(S\). We now simply multiply any of these left inverses with \(\mathbb{P}_{\rm im\,\mathbb{B}_{S}}\) from the right to get the Moore-Penrose pseudo inverse. Let us choose \(k_{S,i}^{1}=p_{T_{i}^{1}}\), and call this left inverse \(\mathbb{X}\). Let us assume, without loss of generality, that the ingoing momenta are at the first \(V_{i}^{1}\) vertices. Then the \(i\)th row of the left inverse \(\mathbb{X}\) reads \((-1,-1,\ldots,-1,0,\ldots,0)\). The projector reads \[\mathbb{P}_{\rm im\,\mathbb{B}_{S}}=\mathbb{1}-\tfrac{1}{V}(1,\ldots,1)^{\intercal}(1,\ldots,1)\,,\] (B.6) and hence \[(-1,-1,\ldots,-1,0,\ldots,0)\,\mathbb{P}_{\rm im\,\mathbb{B}_{S}}=(\tfrac{-V_{i}^{2}}{V},\ldots,\tfrac{-V_{i}^{2}}{V},\tfrac{V_{i}^{1}}{V},\ldots,\tfrac{V_{i}^{1}}{V})\] (B.7) is the \(i\)th row of the matrix \(\mathbb{B}_{S}^{+}\).
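As a quick numerical sanity check of our own (for a hypothetical path tree on four vertices with edges oriented \(0\to 1\to 2\to 3\), and with one orientation convention assumed), both the left-inverse construction and the explicit row formula can be verified:

```python
import numpy as np

# Incidence matrix of the path tree 0->1->2->3 (rows = vertices, columns =
# edges; -1 at the tail, +1 at the head). Trivial kernel, cokernel = span(1).
B = np.array([
    [-1,  0,  0],
    [ 1, -1,  0],
    [ 0,  1, -1],
    [ 0,  0,  1],
], dtype=float)
V = B.shape[0]

# For an injective map, (B^T B)^{-1} B^T is a left inverse and equals B^+.
left_inv = np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(left_inv @ B, np.eye(3))
assert np.allclose(left_inv, np.linalg.pinv(B))

# Multiplying any left inverse by the projector onto im(B) reproduces B^+.
P = np.eye(V) - np.ones((V, V)) / V
assert np.allclose(left_inv @ P, np.linalg.pinv(B))

# Row formula: deleting the middle edge splits the tree into two halves of
# 2 vertices each, so that row of B^+ is (-2/4, -2/4, 2/4, 2/4).
assert np.allclose(np.linalg.pinv(B)[1], [-0.5, -0.5, 0.5, 0.5])
```

With the opposite edge orientation the signs of the corresponding row flip, consistent with the \(\pm 1\) normalization ambiguity discussed in appendix A.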
2310.20241
Enhanced dendrite nucleation and Li-clustering at vacancies on graphene
An ever present challenge for Li-ion batteries is the formation of metallic dendrites on cycling that dramatically reduces cycle life and leads to the untimely failure of the cell. In this work we investigate the modes of Li-cluster formation on pristine and defective graphene. Firstly, we demonstrate that on a defect free surface the cluster formation is impeded by the thermodynamic instability of \ce{Li_2} and \ce{Li_3} clusters. In contrast, the presence of a vacancy dramatically favours clustering. This provides insights into the two modes of Li-growth observed: for the pristine basal plane if the Li-Li repulsion of the small clusters can be overcome then plating type behaviour would be predicted (rate / voltage dependent and at any point on the surface); whilst dendritic growth would be predicted to nucleate from vacancy sites, either pre-existing in the material or formed as a result of processing.
Jonathon Cottom, Qiong Cai, Emilia Olsson
2023-10-31T07:50:32Z
http://arxiv.org/abs/2310.20241v1
# Enhanced dendrite nucleation and Li-clustering at vacancies on graphene

###### Abstract

An ever-present challenge for Li-ion batteries is the formation of metallic dendrites on cycling, which dramatically reduces cycle life and leads to the untimely failure of the cell. In this work we investigate the modes of Li-cluster formation on pristine and defective graphene. Firstly, we demonstrate that on a defect free surface cluster formation is impeded by the thermodynamic instability of Li\({}_{2}\) and Li\({}_{3}\) clusters. In contrast, the presence of a vacancy dramatically favours clustering. This provides insights into the two modes of Li-growth observed: for the pristine basal plane, if the Li-Li repulsion of the small clusters can be overcome then plating type behaviour would be predicted (rate / voltage dependent and at any point on the surface); whilst dendritic growth would be predicted to nucleate from vacancy sites, either pre-existing in the material or formed as a result of processing.

Lithium-ion batteries (LIBs) have emerged as one of the most important electrochemical energy storage technologies in the quest for a sustainable fossil fuel-free energy future [1]. LIBs typically employ a liquid electrolyte, a layered cathode and a graphite anode. Despite their wide use, challenges remain in terms of lifetime and stability [2, 3, 4]. Fast-charging LIBs are increasingly pursued, which further exacerbates these lifetime and stability challenges. The challenges relate to the formation of lithium dendrites at the anode surface and to irreversible lithium plating [5, 6, 7, 8]. Dendrites can form and grow due to uneven deposition and accumulation of lithium at the anode-electrolyte interface, leading to short-circuiting [9, 10, 11, 12]. In lithium plating, lithium deposits on the anode surface at a faster rate than it can intercalate [9]. This leads to reduced battery life and limited charging rates.
As for dendrite formation, plating can also be inhomogeneous, either due to kinetics or thermodynamics, but the exact mechanism is poorly understood [13]. The operating voltage of graphite LIB anodes is very close to the Li metal voltage, leading to Li metal plating at high charging rates or low temperatures (below 273 K) [2]. At extreme temperatures (above 373 K), carbon anodes are increasingly susceptible to dendrite formation [9]. All solid state batteries (ASSBs) are widely seen as the next frontier in energy storage due to their high energy density, a consequence of the possibility of pairing Li metal anodes with solid electrolytes [14, 15]. With the introduction of a solid electrolyte, the expectation was that Li dendrite formation would be eradicated. However, Li dendrites were found to be a challenge in ASSBs as well [14]. To this end, carbon-based materials are also finding use in ASSBs as current collectors, protective layers, and support materials for Li metal anodes, making the atomic scale understanding of lithium nucleation on carbon surfaces imperative [10, 15, 16, 17, 18, 19, 20, 21]. A plethora of different carbon materials have been employed for battery applications, including graphite, graphene, hard carbons, carbon nanotubes and soft carbons [9, 22, 23, 24]. Whilst these materials are structurally different, they all contain sp\({}^{2}\) hybridized surfaces, presenting planar basal planes of connected six-membered carbon rings [9, 15, 25, 26, 27]. In real applications, the basal plane surface can also be defective, either as a result of synthesis conditions or of battery operation. Atomic scale understanding of the effect of carbon vacancies (V\({}_{\rm C}\)) on Li deposition and nucleation is crucial for the design of longer-lifetime, safer battery architectures. In this letter, we conduct a density functional theory (DFT) study of Li cluster growth on graphene.
In our previous work we studied the interaction of a single lithium ion with pristine and defective graphene in the context of LIBs [28, 29, 30, 31, 32]. These studies showed that defects have a direct impact on the single-lithium binding energy and on the initial lithiation during charging. Through calculations of adsorption and migration energies, we postulated that V\({}_{\rm C}\) defects would lead to irreversible capacity loss for LIBs, a statement we here extend. Separate studies have investigated the interaction of multiple Li atoms with different graphene models to probe the effect of lithium concentration on adsorption energy as a proxy for the charging profile [33, 34, 35, 36, 37]. Small Li clusters were found to adsorb more readily on graphene than on the Li metal (001) surface [36], and a separate study of Li clusters on pristine graphene showed that the cluster binding energy depends on Li concentration [34]. Here, we extend our considerations to the step-wise growth of lithium clusters, exploring the energetic stability of these clusters on the pristine surface versus the V\({}_{\rm C}\) as the initial dendritic growth and plating step. The calculations were performed spin polarised using the CP2K code [38, 39, 40, 41, 42] with the DZVP-SR-MOLOPT [39] family of basis sets to describe the valence electrons, and the GTH pseudopotentials [43, 44, 45] to describe the core electrons. The initial defect free relaxations were performed on a 308 atom graphene cell (see Fig. S1 in the supporting information). Both the xy-lattice vectors and the ion positions were fully relaxed using the quasi-Newton BFGS update scheme (ion positions only for defect calculations); the vacuum gap was converged above 10 Å, with 25 Å used to account for all the tested cluster geometries [46, 47, 48, 49, 50, 42]. The PBE functional [51], with the D3-BJ dispersion correction scheme [52, 53, 54, 55, 56], was employed for all calculations, in line with our previous work [12, 29, 30, 31].
The convergence criteria were 1 \(\times\) 10\({}^{-7}\) eV for the energy and 0.005 eV / Å for the forces, with an energy cutoff of 750 Ry and a relative cutoff of 60 Ry to achieve an initial convergence of 0.1 meV / formula unit. To identify the lowest energy cluster configurations for each size, an exhaustive sampling was performed starting from the single atom (Li\({}_{1}\)). In each case the identified lowest energy configuration becomes the starting point for Li\({}_{\rm(n+1)}\). The process is iterated for the clusters n = 1 - 12 presented here. For completeness, the results for the higher energy cluster configurations are included in the supplementary information, Fig. S2-3. The interaction energies are calculated using the standard formalism of Zhang and Northrup [57] in terms of the formation energy E\({}_{\rm f}\) (eq. (1)), binding energy E\({}_{\rm bind}\) (eq. (2)), and cohesive energy E\({}_{\rm coh}\) (eq. (3)). \[E_{f}=\frac{E_{Li_{n}@surface}-E_{surface}-n\mu_{Li}}{n} \tag{1}\] \[E_{bind}=\frac{E_{Li_{n}@surface}-E_{surface}-E_{Li_{n}}}{n} \tag{2}\] \[E_{coh}=\frac{E_{Li_{n}}-n\mu_{Li}}{n} \tag{3}\] Here, \(E_{Li_{n}@surface}\) is the total energy of the \(Li_{n}\) cluster on the surface, \(E_{surface}\) the total energy of the surface, \(n\) the number of Li atoms, and \(\mu_{Li}\) the lithium chemical potential. The Li chemical potential was varied from the vacuum reference (\(\mu_{Li}\) defined as a single Li atom in vacuum) to the metallic reference (\(\mu_{Li}\) defined from Li bulk). To explore how V\({}_{\rm C}\) impacts the Li nucleation on graphene, two key questions need to be addressed. Firstly, how does the presence of a vacancy impact E\({}_{\rm bind}\) of successive Li atoms with respect to the isolated Li; in essence, how is surface-Li binding aided or impeded by the presence of a V\({}_{\rm C}\) defect? Secondly, how does the presence of the defect influence the morphology and E\({}_{\rm coh}\) of the forming clusters?
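The three definitions are linked by the identity E\({}_{\rm f}\) = E\({}_{\rm bind}\) + E\({}_{\rm coh}\), which follows by adding eqs. (2) and (3). A minimal sketch of our own (with placeholder total energies, not DFT values from the paper) makes the bookkeeping explicit:

```python
def formation_energy(e_cluster_on_surface, e_surface, n, mu_li):
    """Eq. (1): per-atom formation energy of a Li_n cluster on the surface."""
    return (e_cluster_on_surface - e_surface - n * mu_li) / n

def binding_energy(e_cluster_on_surface, e_surface, e_cluster, n):
    """Eq. (2): per-atom binding energy of the cluster to the surface."""
    return (e_cluster_on_surface - e_surface - e_cluster) / n

def cohesive_energy(e_cluster, n, mu_li):
    """Eq. (3): per-atom cohesive energy of the isolated cluster."""
    return (e_cluster - n * mu_li) / n

# Placeholder numbers in eV (illustrative only, not results from this work):
e_surf, e_clu, e_tot, n, mu = -100.0, -8.0, -109.5, 4, -1.9

e_f = formation_energy(e_tot, e_surf, n, mu)
e_b = binding_energy(e_tot, e_surf, e_clu, n)
e_c = cohesive_energy(e_clu, n, mu)
# E_f = E_bind + E_coh holds exactly, whatever the input energies:
assert abs(e_f - (e_b + e_c)) < 1e-12
```

This decomposition is what allows the discussion below to attribute changes in E\({}_{\rm f}\) separately to surface binding and to intra-cluster cohesion.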
That is, probing whether cluster formation is significantly aided or frustrated by forming at a vacancy site. To act as a reference, cluster binding is first calculated on the defect free basal plane; Li atoms are added step-wise to the lowest energy sites identified in ref [28], and E\({}_{\rm bind}\), E\({}_{\rm coh}\) and E\({}_{\rm f}\) are then extracted (Fig. 1).

Figure 1: a) lowest energy cluster configurations for the range n = 1 - 12 on the pristine surface, where purple circles are Li (striped purple circles denote Li in the next layer) and grey circles are C. Figures made using the Atomic Simulation Environment (ASE) [58]. b) the interaction energies decomposed into E\({}_{\rm bind}\), E\({}_{\rm coh}\) and E\({}_{\rm f}\), and c) the Li-C and Li-Li separations.

It is clear from the configurations in Figure 1a that the preferred hole site adsorption geometry of the isolated Li results in a significant distortion for clusters from n \(\geq\) 2. Each Li cannot be accommodated at the high symmetry hole site, and as a result it is distorted from its central position. For the smallest clusters (n = 2 and 3) the repulsive energetic penalty for the introduction of a single neighbouring Li can be estimated as 0.34 eV. This is expressed in both E\({}_{\rm bind}\) (Figure 1b) and the C-Li separation, with the cluster moving away from the surface (Figure 1c). The repulsive interaction is partially compensated for by the Li-Li E\({}_{\rm coh}\) (Figure 1b), which shows a marked increase with increasing cluster size. For clusters n \(>\) 3 the addition of the (n+1)\({}^{th}\) Li to the next layer becomes favoured. The cluster then grows stepwise via the building up of node-sharing trigonal pyramidal units. The ground state cluster geometries and relative energies are in agreement with the results of Liu and co-workers [34]. For the Li clusters explored, E\({}_{\rm bind}\) rapidly decreases and converges to -0.45 eV / atom, while at the same rate E\({}_{\rm coh}\) of the cluster strengthens to -0.91 eV / atom.
As the cluster structure forms layers, a further differentiation between those atoms that interact with the C-surface and those that only interact with Li is seen. This is expressed in the partial charges for the larger clusters, with a clear negative shift of the Mulliken charge for the non-surface interacting Li atoms (+0.02 e) when compared to the surface interacting Li in a given cluster (+0.47 e) and the isolated case (+0.5 e). It is important to note that for the defect free system the small clusters (n \(<\) 3) are energetically unfavoured with respect to the dispersed Li; additionally, clusters in the range n = 4 - 6 are iso-energetic with the dispersed Li. Therefore, clustering would only be predicted to be stable for the larger clusters (n \(\geq\) 7). Figure 2a shows the lowest energy Li configurations for binding on the V\({}_{\rm C}\). In agreement with the previously reported isolated Li [28], E\({}_{\rm bind}\) is -3.3 eV and the Li interacts with the under-coordinated C-atom, while the other 2 vacancy C-atoms form a 5-membered ring. This geometry remains stable for n = 1 and 2; however, for clusters n \(\geq\) 3 the 5-membered ring opens, partially for n = 3 and completely at n = 4, resulting in a Li sitting at the central C\({}_{3}\) site (Figure 2a). The under-coordinated C-atoms show a significant distortion, lifting 1.2 Å out of the plane. This can clearly be seen in the Li-Li separation (Figure 2c), with a decrease from n = 2 to n = 4, followed by a sharp increase once the defect and adjacent sites are occupied. Above n = 4 the next Li can either be added adjacent to the cluster or on top, forming a new Li-layer. Similar to metal adsorption on the basal plane, the metal top site is energetically favoured. The cluster then grows in lateral extent by the addition of the next Li to the adjacent hole site, in a manner similar to that seen on the basal plane.
The presence of a vacancy leads to a marked increase in E\({}_{\rm f}\) and E\({}_{\rm bind}\) for the smallest clusters (n = 2 - 5). E\({}_{\rm coh}\) is of the same order for both the V\({}_{\rm C}\) and the pristine surface, being a function of cluster size and the effective coordination that this supports. As the cluster increases in size E\({}_{\rm bind}\) will converge to that of the pristine (the influence of the vacancy bound Li becomes negligible in per atom terms), whereas E\({}_{\rm coh}\) will converge towards that of Li-metal (-1.78 eV / atom) for clusters of sufficient size (Fig. 2c). The presence of the vacancy in essence anchors the cluster in place, allowing the repulsive interactions seen in the small clusters on the pristine to be overcome. Hence cluster nucleation would be predicted to be favourable at the V\({}_{\rm C}\) for all of the cluster sizes considered. It is clear that for both the pristine and the V\({}_{\rm C}\) there is a propensity to form clusters by preferentially adding Li-atoms to the next layer for clusters greater than n = 3 and n = 4, respectively. In each case it is most energetically favourable to grow the cluster _via_ filling the adjacent hole sites, followed by addition to the top site. This leads to the stepwise growth of the cluster; for the range of cluster sizes considered here the growth is limited to 2 - 3 layers. The main difference between the two binding modes is the magnitude of the surface binding energy, with binding at the vacancy dramatically favoured.
It should be noted that the reduction in the cluster binding energy for the pristine is largely driven by strain between neighbouring Li, with the system attempting to minimise the repulsive Li-Li interaction while maximising the Li-surface interaction. Here the average picture given by E\({}_{\text{bind}}\) gives a reasonable picture of cluster binding as each site is approximately equivalent. In the V\({}_{\text{C}}\) system the situation is complicated as there is a large discrepancy in E\({}_{\text{bind}}\) between sites [28]; in this case the vacancy site (n = 1) and the adjacent sites filled next (n = 2 - 4) are in essence anchored in position, interacting strongly with the vacancy site(s). As the cluster grows the available sites for the next Li begin to resemble the pristine system and this is seen in the increase in C-Li separation (Figure 2c), which approaches the pristine values as the number of pristine sites dominates. Figure 3a shows E\({}_{\text{f}}\) of the Li clusters on the pristine and defective surfaces (E\({}_{\text{f}}\) in Figure 1b and 2b) as a function of Li chemical potential. For Li-poor conditions, the chemical potential is taken from the vacuum reference, Li-rich from the Li metal reference, and solvated Li is Li in 3:1 EC:DMC as described by Fan et al. (2019) [59]. Taking the energy of a single Li on the pristine in the solvated Li assumption as reference, it can be seen from Figure 3b that only the larger Li clusters are energetically stable on the pristine basal plane, and formation of the small clusters (n \(<\) 4) is thermodynamically hindered. Upon the introduction of a V\({}_{\rm C}\), all clusters become thermodynamically favoured. The pristine and V\({}_{\rm C}\) show a number of similarities and some important differences upon the formation of Li clusters. In both cases stepwise growth is observed, with adjacent hole sites (Li\({}_{2}\) and Li\({}_{3}\)) being filled before adding a new layer (Li\({}_{4}\)).
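The formation-energy bookkeeping described above can be sketched in a few lines, using the usual convention \(E_{\text{f}} = E_{\text{surf}+n\text{Li}} - E_{\text{surf}} - n\mu_{\text{Li}}\), evaluated per atom against a chosen Li chemical-potential reference. This is a minimal illustration, not the workflow of this study: the function name and all numerical energies below are hypothetical placeholders, with only the three reference choices (Li-poor/vacuum, Li-rich/Li metal, solvated Li) taken from the text.

```python
# Sketch of the per-atom formation energy for an n-atom Li cluster:
#   E_f(n) = [E(surface + n Li) - E(surface) - n * mu_Li] / n
# All DFT total energies below are made-up illustrative numbers.

def formation_energy_per_atom(e_total, e_surface, n_li, mu_li):
    """Formation energy per Li atom (eV/atom) for an n-atom cluster."""
    return (e_total - e_surface - n_li * mu_li) / n_li

e_surface = -500.0       # hypothetical energy of the bare surface (eV)
e_total_n4 = -508.8      # hypothetical energy of surface + 4 Li (eV)
mu = {"Li-poor": -1.0, "Li-rich": -1.9, "solvated": -1.5}  # placeholders

for ref, mu_li in mu.items():
    ef = formation_energy_per_atom(e_total_n4, e_surface, 4, mu_li)
    print(f"{ref}: E_f = {ef:+.2f} eV/atom")
```

A more negative (Li-rich) chemical potential raises \(E_{\text{f}}\), which is why the stability ordering in Figure 3a depends on the reference chosen.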
Cluster growth on the pristine is significantly hampered for the smallest clusters n=2-4, which are 0.25 eV/atom less favoured than the dispersed Li. This suggests that cluster growth will only be observed at high Li-concentrations / -rates, and is potentially linked to the Li-plating behaviour reported. In contrast, the presence of the V\({}_{\rm C}\) dramatically increases the binding strength for the small clusters, the formation of which is unfavourable on the pristine surface. In addition, this strong binding anchors the growing cluster onto the vacancy site.

Figure 3: a) Li cluster formation energy on pristine (filled markers and non-dotted lines) and defective (empty markers and dotted lines) surfaces with the Li energy taken at the Li-poor, Li-rich, and Li solvated in EC:DMC ([3:1]) chemical potential. b) shows the lithium cluster formation energy with the solvated Li chemical potential, referenced to a single Li bonded to the pristine surface from the solvated Li reference.

As the cluster size increases the available sites resemble the pristine basal plane, reducing the effect of the V\({}_{\rm C}\). In summary, cluster growth is explored on both the pristine basal plane and the V\({}_{\rm C}\) system. Cluster formation in the pristine system is hampered by the thermodynamic instability of the small Li-clusters with respect to disperse Li adsorption. Therefore, clustering would only be observed at high deposition rates (Li-concentration). This mode of growth would be homogeneous across the surface and analogous to the Li-plating behaviour observed. In contrast, the presence of the V\({}_{\rm C}\) results in a dramatic increase in Li-binding energy, facilitated by the relaxation of the cluster to strongly bind the n = 4 cluster. The result of this is two-fold: firstly, the thermodynamic instability of the small clusters seen on the pristine surface is overcome. Secondly, the forming cluster is anchored in place at the vacancy site.
This results in inhomogeneous growth mediated by surface vacancies, potentially leading to dendritic growth. Further investigations are underway to characterise those additional defect types that favour, or indeed hamper, clustering.

## Acknowledgement

This work made use of the Dutch national e-infrastructure with the support of the SURF Cooperative using grant no. EINF-2434. The authors thank SURF (www.surf.nl) for the support in using the Lisa Compute Cluster and National Supercomputer Snellius. Q.C. would like to acknowledge funding support from the Faraday Institution through the LiSTAR programme (EP/S003053/1, Grant FIRG014), and funding support from Horizon Europe through the OPERA consortium (Grant No. 101103834). E.O. is grateful for a WISE Fellowship from the Dutch Research Council (NWO). This work has been carried out at the Advanced Research Center for Nanolithography (ARCNL). ARCNL is a public-private partnership with founding partners UvA, VU, NWO-I, and ASML and associate partner RUG.

## Supporting Information Available

Higher-energy Li cluster configurations are included in the supplementary information.
2309.04294
Microscopic analysis of dipole electric and magnetic strengths in $^{156}$Gd
The dipole electric ($E1$) and magnetic ($M1$) strengths in strongly deformed $^{156}$Gd are investigated within a fully self-consistent Quasiparticle Random Phase Approximation (QRPA) with Skyrme forces SVbas, SLy6 and SG2. We inspect, on the same theoretical footing, low-lying dipole states and the isovector giant dipole resonance in $E1$ channel and the orbital scissors resonance as well as the spin-flip giant resonance (SFGR) in $M1$ channel. Besides, $E1$ toroidal mode and low-energy spin-flip $M1$ excitations are considered. The deformation splitting and dipole-octupole coupling of electric excitations are analyzed. The origin of SFGR gross structure, impact of the residual interaction and interference of orbital and spin contributions to SFGR are discussed. The effect of the central exchange $\textbf{J}^2$-term from the Skyrme functional is demonstrated. The calculations show a satisfactory agreement with available experimental data, except for the recent NRF measurements of M. Tamkas et al for $M1$ strength at 4-6 MeV, where, in contradiction with our calculations and previous $(p,p')$ data, almost no $M1$ strength was observed.
V. O. Nesterenko, P. I. Vishnevskiy, P. -G. Reinhard, A. Repko, J. Kvasil
2023-09-08T12:34:19Z
http://arxiv.org/abs/2309.04294v2
# Microscopic analysis of dipole electric and magnetic strengths in \({}^{156}\)Gd

###### Abstract

The dipole electric (\(E1\)) and magnetic (\(M1\)) strengths in strongly deformed \({}^{156}\)Gd are investigated within a fully self-consistent Quasiparticle Random Phase Approximation (QRPA) with Skyrme forces SV-bas, SLy6 and SG2. We inspect, on the same theoretical footing, low-lying dipole states and the isovector giant dipole resonance in the \(E1\) channel and the orbital scissors resonance as well as the spin-flip giant resonance in the \(M1\) channel. Besides, the \(E1\) toroidal mode and low-energy spin-flip \(M1\) excitations are considered. The calculations show a good agreement with available experimental data, except for the recent NRF measurements of M. Tamkas et al for \(M1\) strength at 4-6 MeV, where, in contradiction with our calculations and previous \((p,p^{\prime})\) data, almost no \(M1\) strength was observed.

pacs: 21.60.Jz, 27.30.+t, 13.40.-f, 25.30.Dh, 21.10.-k

## I Introduction

Electric and magnetic dipole excitations represent an important manifestation of nuclear dynamics [1; 2; 3; 4; 5]. These excitations include at least four nuclear modes: i) a group of low-energy \(E1\) states often summarized as the pygmy dipole resonance (PDR), which are of great importance in astrophysical reaction chains [2; 4; 5], ii) the isovector \(E1\) giant dipole resonance (GDR) as a benchmark for the isovector channel in modern density functionals [2; 5; 6; 7], iii) the isovector \(M1\) low-energy orbital scissors resonance (OSR) [8; 9] as a remarkable example of a magnetic orbital flow [8] and mixed-symmetry states [10], and iv) the \(M1\) spin-flip giant resonance (SFGR), which is important to test spin-orbit splitting and tensor forces (see e.g. [11; 12; 13; 14; 15]).
Besides, the low-energy dipole spectrum can incorporate a toroidal \(E1\) resonance (see [2; 16] for reviews and [17; 18; 19; 20; 21] for some recent studies) and so-called \(M1\) "spin scissors" states (see [22; 23; 24] for macroscopic predictions and [25; 26] for microscopic analysis). The deformed \({}^{156}\)Gd is one of the most suitable nuclei to investigate all these dipole modes altogether, including the OSR which exists only in deformed nuclei. A large quadrupole deformation of \({}^{156}\)Gd favors an appearance of the \(E1(\Delta K=1)\) toroidal mode [19] and low-energy spin-flip excitations [25; 26] (a microscopic realization of the predicted "spin-scissors" mode). Importantly, for \({}^{156}\)Gd there are experimental data for most of the dipole resonances listed above: GDR [27], OSR [9; 28; 29; 30] and SFGR [29; 31; 32]. Moreover, quite recently the _separate_ \(E1\) and \(M1\) strengths for individual states at 3.1-6.2 MeV in \({}^{156}\)Gd, obtained within one NRF experiment [32], were reported. Thus, for the first time, the OSR and partly the PDR energy regions in a heavy strongly deformed nucleus were experimentally explored in detail. Dipole modes in \({}^{156}\)Gd were already theoretically explored in the quasiparticle random phase approximation (QRPA) [33; 34; 35] and the Quasiparticle-Phonon Nuclear Model (QPNM) [36; 37]. The early studies [33] and [34] were devoted to the OSR and SFGR, respectively. A recent comprehensive QRPA analysis of \(E1\) excitations (GDR and PDR) in Gd isotopes, including \({}^{156}\)Gd, can be found in Ref. [35]. In the QPNM, low-energy \(E1\) and \(M1\) excitations in \({}^{156}\)Gd were scrutinized taking into account the coupling with complex configurations (CCC) [36; 37]. However, all these studies were performed within _not fully self-consistent_ models employing a _separable_ residual interaction.
In this paper, we analyze \(E1\) and \(M1\) excitations in \({}^{156}\)Gd within _fully self-consistent_ deformed QRPA calculations using Skyrme forces [38; 39; 40; 19]. As compared with previous studies [33; 34; 35; 36; 37], we additionally include in our analysis the \(E1\) toroidal mode and discuss a possible existence of a low-energy \(M1\) spin-scissors resonance. Note that our Skyrme QRPA model was already successfully applied to describe \(E1\) [41; 7; 19] and \(M1\) [11; 12; 25; 26; 42] modes in various deformed nuclei. A simultaneous exploration of electric and magnetic modes within a self-consistent Skyrme approach based on the family of SV parametrizations [6] was recently carried out for \({}^{208}\)Pb [44]. It was shown that the chosen Skyrme forces can well describe electric modes but lead to significant variations of the results for magnetic excitations. It was concluded that further developments of the Skyrme functional in the spin channel are necessary. Here we suggest a simultaneous microscopic exploration of \(E1\) and \(M1\) strengths in the strongly deformed nucleus \({}^{156}\)Gd. A similar aim is pursued: to explore ways to improve Skyrme energy-density functionals, in particular in the spin channel.

The paper is organized as follows. In Sec. II, the calculation scheme is outlined. In Sec. III, results of the calculations for the GDR, SFGR, PDR and OSR in \({}^{156}\)Gd are considered. The low-energy dipole spectra are compared with recent NRF data [32]. The toroidal \(E1\) and low-energy spin-flip \(M1\) excitations are briefly inspected. In Sec. IV, conclusions are drawn.

## II Calculation scheme

The calculations were performed within the QRPA model [38; 39; 40; 19] based on the Skyrme functional [43].
The model is fully self-consistent since i) both the mean field and residual interaction are derived from the same Skyrme functional, ii) the contributions of all time-even densities and time-odd currents from the functional are taken into account, iii) both particle-hole and pairing-induced particle-particle channels are included, iv) the Coulomb (direct and exchange) parts are involved in both the mean field and residual interaction. The QRPA is implemented in its matrix form [38]. Spurious admixtures caused by violation of the translational and rotational invariance are removed using the techniques from [40]. A representative set of Skyrme forces is used. We employ the force SLy6 [45], which was shown to be optimal for the description of \(E1\) excitations in the given framework [7], the recently developed force SV-bas [6], and the force SG2 [46] which is often used in the analysis of \(M1\) modes, see e.g. [25; 26; 11; 34; 12]. As seen from Table 1, these three forces differ by: i) the sum-rule enhancement parameter \(\kappa\), which is important for the description of \(E1\) excitations [43; 47], ii) the isoscalar effective mass \(m_{0}/m\), which can significantly influence the single-particle spectra, iii) the spin-orbit parameters \(b_{4}\) and \(b_{4}^{\prime}\) (defined e.g. in Refs. [48; 11]), which are crucial for the description of \(M1\) spin-flip modes [25; 26; 11; 34; 12]. Besides, these forces use different sorts of pairing (density-dependent surface for SV-bas and volume for SLy6 and SG2), see details below. The mean-field spectra and pairing characteristics are calculated by the code SKYAX [49] using a two-dimensional grid in cylindrical coordinates. The calculation box extends up to three nuclear radii; the grid step is 0.4 fm. The axial quadrupole equilibrium deformation is obtained by minimization of the energy of the system. As seen from Table 2, the calculated values of the deformation parameter \(\beta_{2}\) are close to the experimental value.
The pairing is described by the zero-range pairing interaction [51] \[V_{\rm pair}^{q}({\bf r},{\bf r}^{\prime})=G_{q}\Big{[}1-\eta\left(\frac{\rho({\bf r})}{\rho_{\rm pair}}\right)\Big{]}\delta({\bf r}-{\bf r}^{\prime}) \tag{1}\] where \(\rho_{\rm pair}\)=0.2011 fm\({}^{-3}\) and \(G_{q}\) are the proton (\(q=p\)) and neutron (\(q=n\)) pairing strength constants (shown in Table 1) fitted to reproduce empirical pairing gaps along selected isotopic and isotonic chains [52]. Further, \(\rho({\bf r})=\rho_{p}({\bf r})+\rho_{n}({\bf r})\) is the sum of the proton and neutron densities. We use pure volume pairing with \(\eta\)=0 (SLy6, SG2) and density-dependent surface pairing with \(\eta\)=1 (SV-bas). The model parameter for surface pairing, \(\rho_{\rm pair}\), is determined in the fit for SV-bas [6]. Pairing is calculated within the HF-BCS (Hartree-Fock and Bardeen-Cooper-Schrieffer) method [49; 19]. To cope with the divergent character of zero-range pairing forces, an energy-dependent cut-off is used [51; 19]. The QRPA calculations use a large configuration space. For example, the single-particle basis for SV-bas includes 683 proton and 787 neutron levels. For \(E1\) excitations, the energy-weighted sum rule \[EWSR=\frac{\hbar^{2}e^{2}}{8\pi m}9\frac{NZ}{A}(1+\kappa) \tag{2}\] is exhausted by 99% (SLy6) and 100% (SV-bas, SG2). The moments of inertia \(J\) are calculated in the framework of the Thouless-Valatin model [53] using the QRPA spectrum [20]. With that, the energy \(E_{2^{+}_{1}}=3\hbar^{2}/J\) of the first state in the ground-state rotational band is estimated. This energy is sensitive to both deformation and pairing. As seen in Table 2, SV-bas overestimates and SLy6 underestimates this energy while SG2 gives very nice agreement.
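The density dependence of the contact pairing interaction of Eq. (1) is easy to visualize through its strength prefactor alone. The sketch below is only an illustration of that prefactor (the full interaction also carries the \(\delta({\bf r}-{\bf r}^{\prime})\) factor and the energy cut-off mentioned in the text); the \(G_{q}\) and \(\rho_{\rm pair}\) values are those quoted above.

```python
RHO_PAIR = 0.2011  # fm^-3, from the text

def pairing_prefactor(G_q, rho, eta):
    """Density-dependent strength G_q * [1 - eta * rho / rho_pair]
    multiplying the contact term in Eq. (1)."""
    return G_q * (1.0 - eta * rho / RHO_PAIR)

# Volume pairing (eta = 0, SLy6/SG2): the strength is density independent.
print(pairing_prefactor(298.76, 0.16, eta=0.0))      # 298.76 everywhere
# Surface pairing (eta = 1, SV-bas): the strength vanishes at rho = rho_pair,
# so pairing is pushed toward the low-density nuclear surface.
print(pairing_prefactor(674.62, RHO_PAIR, eta=1.0))  # 0.0 in the interior
```

This is why the same functional form covers both pairing variants used here: only \(\eta\) changes between SV-bas and SLy6/SG2.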
The reduced probability for the electric dipole transition \(0^{+}0\to 1^{-}K_{\nu}\) (K=0,1) between the ground state \(0^{+}0\) and the \(\nu\)-th QRPA state \(1^{-}K_{\nu}\) reads \[B_{\nu}(E1,K)=(1+\delta_{K,1})|\left<\nu\right|\hat{\Gamma}(E1,K)\left|0\right>|^{2} \tag{3}\] with \[\hat{\Gamma}(E1,K)=e\sum_{q\in p,n}e_{\rm eff}^{q}\sum_{k\in q}[rY_{1K}(\Omega)]_{k} \tag{4}\] where \(Y_{1K}\) is the spherical harmonic and \(e_{\rm eff}^{p}=N/A\) and \(e_{\rm eff}^{n}=-Z/A\) are the effective charges.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(\beta_{2}\) & \(\Delta_{p}\) & \(\Delta_{n}\) & \(E_{2^{+}_{1}}\) \\ & & MeV & MeV & keV \\ \hline exper & 0.34 & & & 89 \\ \hline SV-bas & 0.327 & 0.83 & 0.98 & 106 \\ \hline SLy6 & 0.333 & 0.77 & 0.43 & 63 \\ \hline SG2 & 0.329 & 0.85 & 0.81 & 90 \\ \hline \end{tabular} \end{table} Table 2: Deformation parameters \(\beta_{2}\), proton and neutron pairing gaps \(\Delta_{p}\) and \(\Delta_{n}\), and the energy of the \(2^{+}_{1}\) state of the ground-state rotational band, calculated with Skyrme forces SV-bas, SLy6 and SG2. The experimental data are taken from database [50].

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline force & \(\kappa\) & \(m_{0}/m\) & \(b_{4}\) & \(b_{4}^{\prime}\) & \(G_{p}\) & \(G_{n}\) \\ & & & MeV fm\({}^{5}\) & MeV fm\({}^{5}\) & MeV fm\({}^{3}\) & MeV fm\({}^{3}\) \\ \hline SV-bas & 0.4 & 0.9 & 62.3 & 34.1 & 674.62 & 606.90 \\ SLy6 & 0.25 & 0.69 & 61.0 & 61.0 & 298.76 & 288.52 \\ SG2 & 0.53 & 0.79 & 52.5 & 52.5 & 296.76 & 269.58 \\ \hline \end{tabular} \end{table} Table 1: Isovector enhancement factors \(\kappa\), isoscalar effective masses \(m_{0}/m\), isoscalar and isovector spin-orbit parameters \(b_{4}\) and \(b_{4}^{\prime}\), and proton and neutron pairing constants \(G_{p}\) and \(G_{n}\) for Skyrme forces SV-bas, SLy6, and SG2.
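The energy-weighted sum rule of Eq. (2) can be evaluated directly from the \(\kappa\) values in Table 1. The short check below expresses the EWSR in units of \(e^{2}\,{\rm fm}^{2}\,{\rm MeV}\) (keeping \(e^{2}\) symbolic) using standard constants \(\hbar c\) and the nucleon mass; the function name is ours, and the quoted numbers are only a rough back-of-the-envelope evaluation, not the QRPA result.

```python
import math

HBARC = 197.327   # MeV fm
M_N   = 938.919   # MeV/c^2, average nucleon mass

def ewsr_e1(N, Z, A, kappa):
    """TRK energy-weighted sum rule of Eq. (2), in units of e^2 fm^2 MeV:
    EWSR = (9 / (8 pi)) * (hbar^2 / m) * (N Z / A) * (1 + kappa)."""
    hbar2_over_m = HBARC**2 / M_N   # ~41.47 MeV fm^2
    return 9.0 / (8.0 * math.pi) * hbar2_over_m * N * Z / A * (1.0 + kappa)

# 156Gd: Z = 64, N = 92; kappa from Table 1.
for force, kappa in [("SV-bas", 0.4), ("SLy6", 0.25), ("SG2", 0.53)]:
    print(f"{force}: EWSR ~ {ewsr_e1(92, 64, 156, kappa):.0f} e^2 fm^2 MeV")
```

With \(\kappa=0\) this reduces to the classical TRK value, about \(14.85\,NZ/A\) in \(e^{2}\,{\rm fm}^{2}\,{\rm MeV}\); the \((1+\kappa)\) factor is what differentiates the three forces.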
For \(M1\) transitions, we have \[B_{\nu}(M1,K)=(1+\delta_{K,1})|\left<\nu\right|\hat{\Gamma}(M1,K)\left|0\right>|^{2} \tag{5}\] with \[\hat{\Gamma}(M1,K)=\mu_{N}\sqrt{\frac{3}{4\pi}}\sum_{q\in p,n}\sum_{k\in q}[g_{s}^{q}\hat{s}_{k}(K)+g_{l}^{q}\hat{l}_{k}(K)]. \tag{6}\] Here \(\mu_{N}\) is the nuclear magneton; \(g_{s}^{q}\) and \(g_{l}^{q}\) are the spin and orbital gyromagnetic factors; \(\hat{s}_{k}(K)\) and \(\hat{l}_{k}(K)\) are the K-components of the spin and orbital operators. In the present calculations, we use quenched spin g-factors \(g_{s}^{q}=0.7\,\widetilde{g}_{s}^{q}\), where \(\widetilde{g}_{s}^{p}=5.56\) and \(\widetilde{g}_{s}^{n}=-3.83\) are the bare g-factors. The orbital g-factors are \(g_{l}^{p}=1\), \(g_{l}^{n}=0\). The reduced transition probabilities (3) and (5) are used for the calculation of the strength functions \[S(E1,K;E) = \sum_{\nu}E_{\nu}B_{\nu}(E1,K)\zeta(E-E_{\nu}), \tag{7}\] \[S(M1,K;E) = \sum_{\nu}B_{\nu}(M1,K)\zeta(E-E_{\nu}) \tag{8}\] where \(\zeta(E-E_{\nu})=\Delta/[2\pi[(E-E_{\nu})^{2}+\frac{\Delta^{2}}{4}]]\) is a Lorentz weight with an averaging parameter \(\Delta\). The \(E1\) photoabsorption cross section reads \[\sigma(E)=\alpha\sum_{K=0,1}S(E1,K;E), \tag{9}\] where \(\alpha=16\pi^{3}/(137\cdot 9e^{2})=0.40/e^{2}\). The Lorentz weight is used for the convenience of comparison of the calculated strength with experimental data. It simulates smoothing effects beyond QRPA: the escape width and the coupling to complex configurations (CCC). The higher the excitation energy \(E\), the larger the density of states. So one may expect an increase of the CCC-produced width with \(E\). Since the \(E1\) GDR lies at a much higher excitation energy (10-20 MeV) than the \(M1\) SFGR (5-10 MeV), it is reasonable to use for the GDR a larger smoothing \(\Delta\) than for the SFGR. Following our previous calculation practice for the GDR [7] and SFGR [11; 12], we use here \(\Delta\)=2 MeV for the GDR and \(\Delta\)=1 MeV for the SFGR.
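The Lorentz smoothing of Eqs. (7)-(8) is simple enough to reproduce numerically. The sketch below implements the weight \(\zeta\) and the \(M1\) strength function of Eq. (8); the two-peak input spectrum is a hypothetical placeholder (not QRPA output), chosen only to mimic the double-peak SFGR shape discussed later.

```python
import math

def lorentz(E, E_nu, Delta):
    """Lorentz weight zeta(E - E_nu) = Delta / (2 pi [(E-E_nu)^2 + Delta^2/4]).
    It has unit area and peaks at 2 / (pi * Delta)."""
    return Delta / (2.0 * math.pi * ((E - E_nu) ** 2 + Delta ** 2 / 4.0))

def strength_m1(E, states, Delta=1.0):
    """M1 strength function of Eq. (8): S = sum_nu B_nu * zeta(E - E_nu).
    `states` is a list of (E_nu, B_nu) pairs (illustrative inputs here)."""
    return sum(B * lorentz(E, E_nu, Delta) for E_nu, B in states)

# Illustrative two-peak spin-flip spectrum (placeholder B(M1) values).
states = [(6.5, 3.0), (8.6, 4.0)]            # (E_nu in MeV, B_nu in mu_N^2)
grid = [5.0 + 0.01 * i for i in range(601)]  # 5-11 MeV scan
S = [strength_m1(E, states) for E in grid]
print(f"maximum of the smoothed strength near {grid[S.index(max(S))]:.2f} MeV")
```

Because \(\zeta\) integrates to one, the smoothing redistributes but conserves the summed strength, which is exactly why it is a convenient stand-in for escape and CCC widths.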
Further interesting observables are the toroidal and compression isoscalar strengths, \(B_{\rm tor}(E1K,IS)\) and \(B_{\rm com}(E1K,IS)\), which are calculated using the current-dependent vortical and compression transition operators from Refs. [18; 19; 20].

## III Results and discussion

### \(E1\) and \(M1\) giant resonances

As a first step, we apply SLy6, SV-bas and SG2 to describe the isovector \(E1\) GDR and the \(M1\) SFGR. In Fig. 1, the calculated strength functions (7) and (8) are compared with the photoabsorption data for the \(E1\) GDR [27] and \((p,p^{\prime})\) data for the \(M1\) SFGR [31]. As discussed above, the \(E1\) and \(M1\) strengths use different Lorentz averaging parameters: \(\Delta\)=2 MeV and 1 MeV, respectively. The \((p,p^{\prime})\) data [31] are given in arbitrary units. So, in this case, not the absolute scale but the distribution of strength is of interest. In the upper panels of Fig. 1, we present the GDR in the branches \(K=0\) and \(K=1\) and its total strength. In general, all three Skyrme forces (with some preference for SLy6) give a nice description of the GDR, including its deformation splitting into the \(K=0\) and \(K=1\) branches. A small overestimation of the peak height of the \(K=1\) branch can be explained by insufficient smoothing. The \(K=1\) branch is located at a higher energy than \(K=0\) and so should acquire more broadening from the CCC. This could be simulated by an energy-dependent folding width, which we avoid here to keep the analysis simple. The bottom panels of Fig. 1 show the \(K=1\) branch of the \(M1\) strength, responsible for the SFGR [1; 3; 11; 12]. In accordance with experiment [31], the calculated strengths exhibit two large peaks produced by proton and neutron spin-orbit excitations. All Skyrme forces describe rather well the energy difference \(\Delta E\approx 2.1\) MeV between the peaks. At the same time, the energies of the \(M1\) peaks are overestimated by 0.7-0.8 MeV (SV-bas), 1.8-2.0 MeV (SLy6) and 0.7-0.9 MeV (SG2).
Figure 1: \(E1\) photoabsorption (upper panels) and \(M1\) SFGR (bottom panels) strengths in \({}^{156}\)Gd, calculated with the forces SV-bas, SLy6, and SG2. For the \(E1\) GDR, the \(K=0\) (left-side red dashed line) and \(K=1\) (right-side blue dashed line) strengths are also depicted. For the \(M1\) SFGR, the unperturbed 2qp strength (magenta dotted line) is shown. The \(E1\) GDR and \(M1\) SFGR strengths use the Lorentz weight with \(\Delta\)=2 MeV and 1 MeV, respectively. The strengths are compared with the photoabsorption data for the \(E1\) GDR [27] and \((p,p^{\prime})\) data for the \(M1\) SFGR [31].

Following previous studies [11; 12], the peak energies are determined by two factors: the single-particle spin-orbit splitting and the upshift due to the isovector residual interaction in the spin channel of the Skyrme functional. The latter effect is seen by comparison of the unperturbed two-quasiparticle (2qp) and QRPA \(M1\) strengths in Fig. 1. However, the impact of the residual spin-spin interaction in Fig. 1 is obviously too strong and the CCC will not help to correct the average position [14]. This shortcoming could be amended by taking into account tensor forces. Following [11], this allows one to downshift the \(M1\) strength without changing the two-peak structure. So our results indicate the need for the inclusion of tensor forces.

### Low-energy conventional, toroidal and compression \(E1\) strengths

Figure 2 shows the \(E1\) strengths for excitations at 1.5 - 6.5 MeV, calculated with the forces SV-bas, SLy6 and SG2. Note that the NRF experiment [32] covers the energy region 3.1-6.2 MeV. We see that, in agreement with this experiment, all three Skyrme forces produce many QRPA states in the range 3.1-6.2 MeV. Most of the strength comes from the \(K=1\) branch. Following Table 3, the calculated summed \(E1\) strength is in acceptable agreement with the data [32]. A small underestimation of the experimental strength can be explained by omitting the CCC, which can somewhat redistribute the \(E1\) strength.
For some states, our QRPA calculations give appreciably large B(E1) values which are absent in the data [32]. This can again be explained by the missing CCC in our calculations. In general, all three applied Skyrme forces produce rather similar results. Following Fig. 1, the PDR region in \({}^{156}\)Gd lies at 5-9 MeV, safely below the GDR. Fig. 2 demonstrates a concentration of \(E1\) strength at the low-energy part of this energy range. The \(K=0\) and \(K=1\) branches are more completely exhibited in the upper panels of Fig. 3. The summed low-lying \(E1\) strength is given in Table 4. It is seen that both branches give comparable contributions. The \(K=0\) branch dominates at \(E<5\) MeV. In general, a deformation splitting of the \(K\)-branches in the PDR region is not apparent. Figure 3 shows the calculated \(E1\) strengths for the isoscalar toroidal and compression modes. As in our previous studies for spherical [18; 19; 20] and deformed [16; 19; 55; 56] nuclei, the toroidal states lie in the PDR energy region. Moreover, in accordance with the studies for deformed nuclei [16; 19; 55; 56], the toroidal resonance is mainly present in the \(K=1\) branch. In this energy interval, the compression strength is much weaker than the toroidal one and is seen in both the \(K=0\) and \(K=1\) branches.

Figure 2: The calculated (SV-bas, SLy6, SG2) \(K=0\), \(K=1\) and total low-energy \(E1\) strengths in \({}^{156}\)Gd as compared with \(E1\) experimental data [32].

### Low-energy orbital and spin-flip \(M1\) strengths

In Fig. 4, the calculated orbital, spin and total (orbital + spin) low-energy \(M1(K=1)\) strengths in \({}^{156}\)Gd are compared with \(M1\) experimental data at 2.95-6.25 MeV [32]. The calculations show a wide fragmentation of the \(M1\) strengths throughout the whole interval 1-10 MeV. In the OSR energy region 3-4 MeV, the calculated orbital strength is locally concentrated, which is in accordance with the NRF data [32]. The properties of particular orbital states are given in Table 5.
Following Fig. 4 and Table 6, the calculated spin-flip strength in the OSR energy range is rather weak and so the \(M1\) strength here is mainly orbital. Nevertheless, the small spin fraction is important for the estimation of the total \(M1\) strength.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Force & \(\nu\) & E [MeV] & \multicolumn{3}{c|}{\(B(M1)\) [\(\mu_{N}^{2}\)]} & main 2qp components \([N,n_{z},\Lambda]\) & \% \\ & & & orb & spin & total & & \\ \hline SV-bas & 2 & 2.21 & 0.03 & 0.39 & 0.21 & pp [411 \(\downarrow\), 411 \(\uparrow\)] & 99 \\ & 9 & 3.23 & 0.92 & 0.10 & 1.63 & nn [521 \(\uparrow\), 530 \(\uparrow\)] & 39 \\ & & & & & & pp [402 \(\uparrow\), 411 \(\uparrow\)] & 15 \\ \hline SLy6 & 4 & 2.46 & 0.07 & 0.32 & 0.09 & pp [411 \(\downarrow\), 411 \(\uparrow\)] & 98 \\ & 10 & 3.33 & 1.76 & 0.20 & 3.10 & pp [532 \(\uparrow\), 541 \(\uparrow\)] & 41 \\ & & & & & & nn [521 \(\uparrow\), 530 \(\uparrow\)] & 39 \\ \hline SG2 & 3 & 2.43 & 0.01 & 0.30 & 0.22 & pp [411 \(\downarrow\), 411 \(\uparrow\)] & 99 \\ & 9 & 3.35 & 1.10 & 0.06 & 1.64 & pp [523 \(\uparrow\), 532 \(\uparrow\)] & 36 \\ & & & & & & nn [521 \(\uparrow\), 530 \(\uparrow\)] & 28 \\ \hline \end{tabular} \end{table} Table 5: Properties of particular QRPA low-energy \(K^{\pi}=1^{+}\) states in \({}^{156}\)Gd: the number of the QRPA state \(\nu\), excitation energy \(E\), orbital, spin and total \(B(M1)\)-values, and the main proton (pp) and neutron (nn) components (notation by Nilsson quantum numbers and contribution to the state norm).

Figure 3: \(K=0\) (red) and \(K=1\) (black) branches of the isovector \(E1\) (upper plots), isoscalar toroidal \(E1\) (middle plots) and isoscalar compression \(E1\) (bottom plots) strengths in \({}^{156}\)Gd, calculated with the forces SV-bas, SLy6 and SG2.

As seen in Fig. 4 and Table 6, there is a large constructive interference of the orbital and spin fractions \((B(M1)_{\rm tot}>B(M1)_{\rm orb}+B(M1)_{\rm spin})\).
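This interference can be checked numerically against the summed 2.5-4 MeV strengths reported in Table 6. The small script below is only a sanity check on the ratio \(B(M1)_{\rm tot}/(B(M1)_{\rm orb}+B(M1)_{\rm spin})\) (the factor \(R\) defined in Eq. (10)); the input numbers are the Table 6 values themselves.

```python
# Interference factor R = B(M1)_tot / (B(M1)_orb + B(M1)_spin),
# evaluated with the summed 2.5-4 MeV strengths quoted in Table 6.

def interference_factor(b_orb, b_spin, b_tot):
    """R > 1 signals constructive, R < 1 destructive interference."""
    return b_tot / (b_orb + b_spin)

table6 = {  # force: (orb, spin, total), all in mu_N^2
    "SV-bas": (1.89, 0.52, 3.57),
    "SLy6":   (2.31, 0.38, 5.59),
    "SG2":    (2.49, 0.40, 3.91),
}

for force, (orb, spin, tot) in table6.items():
    print(f"{force}: R = {interference_factor(orb, spin, tot):.2f}")
```

The three ratios come out as 1.48, 2.08 and 1.35, reproducing the R column of Table 6 and confirming that all three forces give \(R>1\), i.e. constructive interference.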
The interference can be estimated by the factor \[R=\frac{B(M1)_{\rm tot}}{B(M1)_{\rm orb}+B(M1)_{\rm spin}} \tag{10}\] which is \(R=1\), \(>1\) and \(<1\) for zero, constructive and destructive interference, respectively. Table 6 shows that, due to the interference, the total \(M1\) strength at 2.5-4 MeV becomes 1.5-2 times larger than the pure orbital strength. Some time ago a so-called \(M1\) "spin scissors" mode (located just below the OSR) was predicted in deformed nuclei within the macroscopic Wigner function moments approach [22; 23; 24]. The microscopic Skyrme QRPA analysis for \({}^{160,162,164}\)Dy and \({}^{232}\)Th [25; 26] has shown that, indeed, some low-energy spin-flip states can exist in some nuclei. However, these states are usually not collective and their current fields deviate from the macroscopic spin-scissor flow. As seen in Fig. 4 and Table 6, such low-energy spin-flip states can exist in deformed \({}^{156}\)Gd as well. They lie at 2.21 MeV (SV-bas), 2.46 MeV (SLy6) and 2.43 MeV (SG2). These states are almost pure 2qp and have an obvious spin-flip structure. Fig. 4 shows a huge discrepancy between the calculated and experimental \(M1\) strengths at 4-6 MeV. The experimental strength [32] here is almost absent while our calculations give substantial orbital, spin and total \(B(M1)\)-values. Just for this reason, the calculated \(M1\) strength summed at 2.95-6.25 MeV substantially overestimates the experimental value 3.99 \(\mu_{N}^{2}\) (see Table 7). Note that a large \(M1\) strength at 4-6 MeV was observed in the \((e,e^{\prime})\) reaction [29; 30]. Moreover, a significant \(M1\) strength in this energy range in \({}^{156,158}\)Gd was reported in the Skyrme QRPA calculations of P. Sarriguren et al. [34]. Our Skyrme QRPA calculations [12] and \((p,p^{\prime})\) data [31] also give significant \(M1\) strength at 4-6 MeV in the neighbor nucleus \({}^{158}\)Gd. Following our Fig.
1 and previous studies mentioned above, the interval 4-6 MeV in \({}^{156,158}\)Gd _should include the proton fraction of SFGR and thus embrace an appreciable \(M1\) strength._ In this connection, a disappearance of \(M1\) strength at 4-6 MeV in the NRF experiment for \({}^{156}\)Gd [32] looks strange. We suspect that this experiment misses most of the spin-flip fraction of the \(M1\) strength.

Figure 4: The calculated (SV-bas, SLy6, SG2) orbital, spin, and total low-energy \(M1\) strengths in \({}^{156}\)Gd as compared with \(M1\) experimental data [32].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline force & orb & spin & total & R \\ \hline SV-bas & 1.89 & 0.52 & 3.57 & 1.48 \\ \hline SLy6 & 2.31 & 0.38 & 5.59 & 2.08 \\ \hline SG2 & 2.49 & 0.40 & 3.91 & 1.35 \\ \hline \end{tabular} \end{table} Table 6: The calculated orbital, spin and total summed strengths B(M1) (in \(\mu_{N}^{2}\)) in the OSR region 2.5-4 MeV.

## IV Conclusions

\(E1\) and \(M1\) excitations in \({}^{156}\)Gd were simultaneously investigated within fully self-consistent deformed QRPA calculations with the Skyrme forces SV-bas, SLy6 and SG2. In the \(E1\) channel, the analysis covers the low-lying \(E1\) strength, coined the pygmy dipole resonance (PDR), the giant dipole resonance (GDR), and low-energy toroidal and compression \(E1\) excitations. In the \(M1(K=1)\) channel, we consider the spin-flip giant resonance (SFGR), the orbital scissors resonance (OSR), and low-energy spin-flip excitations. The calculations describe well the energy and gross structure (deformation splitting) of the \(E1\) GDR. In the PDR energy region, we reasonably reproduce the summed \(E1\) strength at 2.95-6.25 MeV, measured in the NRF experiment [32]. Both the \(K=0\) and \(K=1\) branches give similar contributions to the total PDR \(E1\) strength. A deformation splitting of the \(K\)-branches is not apparent in the PDR region.
In agreement with previous studies for deformed nuclei [16; 19; 55; 56], the present calculations confirm the existence of a prominent toroidal \(E1\) resonance in the PDR energy region. The toroidal \(E1\) resonance is mainly present in the \(K=1\) branch. The calculations for the \(M1\) SFGR reproduce the proton-neutron splitting of the resonance rather well, but somewhat overestimate the energies of the proton and neutron \(M1\) peaks. The latter is caused by a too strong spin-spin residual interaction. In principle, this shortcoming may be cured by taking into account tensor forces. Further, the calculations reveal a strong constructive interference of the orbital and spin \(M1\) modes in the OSR energy region. A puzzling discrepancy was found between the calculated and experimental [32] \(M1\) strengths at 4-6 MeV. Following our calculations and previous theoretical [12; 34] and experimental [31] work, this energy interval should host the proton branch of the SFGR and so carry an appreciable \(M1\) strength. At the same time, the NRF experiment [32] delivers almost no \(M1\) strength at 4-6 MeV, which looks unclear and doubtful.

## Acknowledgements

J. K. appreciates the support by a grant of the Czech Science Agency, Project No. 19-14048S. A. R. acknowledges support by the Slovak Research and Development Agency under Contract No. APVV-20-0532 and by the Slovak grant agency VEGA (Contract No. 2/0067/21).
2309.14087
Adaptive Three Layer Hybrid Reconfigurable Intelligent Surface for 6G Wireless Communication: Trade-offs and Performance
A potential candidate technology for the development of future 6G networks has been recognized as Reconfigurable Intelligent Surface (RIS). However, due to the variation in radio link quality, traditional passive RISs only accomplish a minimal signal gain in situations with strong direct links between user equipment (UE) and base station (BS). In order to get over this fundamental restriction of smaller gain, the idea of active RISs might be a suitable solution. In contrast to current passive RIS, which simply reflects and directs signals without any additional amplification, active RISs have the ability to enhance reflected signals by the incorporation of amplifiers inside its elements. However, with additional amplifiers, apart from the relatively complex attributes of RIS-assisted arrangements, the additional energy consumption of such technologies is often disregarded. So, there might be a tradeoff between the additional energy consumption for the RIS technologies and the overall gain acquired by deploying this potential advancement. The objective of this work is to provide a primary idea of a three-layer hybrid RIS-assisted configuration that is responsive to both active and passive RIS, as well as an additional dormant or inactive state. The single RIS structure should be capable of adjusting its overall configuration in response to fluctuations in transmit power and radio link quality. Furthermore, our fabricated passive RIS-assisted structure verifies a portion of the proposed idea, with simulations highlighting its advantages over standalone passive or active RIS-assisted technologies.
Rashed Hasan Ratul, Muhammad Iqbal, Tabinda Ashraf, Jen-Yi Pan, Yi-Han Wang, Shao-Yu Lien
2023-09-25T12:30:03Z
http://arxiv.org/abs/2309.14087v1
Adaptive Three Layer Hybrid Reconfigurable Intelligent Surface for 6G Wireless Communication: Trade-offs and Performance ###### Abstract A potential candidate technology for the development of future 6G networks has been recognized as Reconfigurable Intelligent Surface (RIS). However, due to the variation in radio link quality, traditional passive RISs only accomplish a minimal signal gain in situations with strong direct links between user equipment (UE) and base station (BS). In order to get over this fundamental restriction of smaller gain, the idea of active RISs might be a suitable solution. In contrast to current passive RIS, which simply reflects and directs signals without any additional amplification, active RISs have the ability to enhance reflected signals by the incorporation of amplifiers inside its elements. However, with additional amplifiers, apart from the relatively complex attributes of RIS-assisted arrangements, the additional energy consumption of such technologies is often disregarded. So, there might be a tradeoff between the additional energy consumption for the RIS technologies and the overall gain acquired by deploying this potential advancement. The objective of this work is to provide a primary idea of a three-layer hybrid RIS-assisted configuration that is responsive to both active and passive RIS, as well as an additional dormant or inactive state. The single RIS structure should be capable of adjusting its overall configuration in response to fluctuations in transmit power and radio link quality. Furthermore, our fabricated passive RIS-assisted structure verifies a portion of the proposed idea, with simulations highlighting its advantages over standalone passive or active RIS-assisted technologies. Hybrid RIS, O-RAN, wireless communication, 6G, beamforming, cellular communication networks. 
## I Introduction

Wireless communications were typically regarded as non-configurable from the first generation (1G) through the fifth generation (5G) [1]. Reconfigurable Intelligent Surfaces (RISs) have recently been proposed as a result of advancements in meta-materials for the purpose of dynamically regulating wireless communication channels to obtain enhanced communication performance [2]. An RIS, in particular, can be regarded as an array made up of a massive quantity of passive components that reflect electromagnetic waves in a way that alters the direction in which wireless signals propagate. A significant benefit of RIS is its high array gain, which is made possible by the little noise that passive RISs introduce. Moreover, RIS-empowered systems can be anticipated to significantly increase wireless system capacity using sensing-based awareness to enable transparent 3GPP 5G service [3]. Nevertheless, in real-world applications, the anticipated increases in capacity are usually only observed in wireless communication environments where the primary communication link between the transmitting end and the receiving end is either totally disrupted or extremely weak. Conventional RISs, on the other hand, can only yield minor capacity improvements in many instances where the direct link is not severely attenuated. The path losses of the transmitting end-RIS and RIS-receiving end links are multiplied to generate the equivalent path loss of the transmitting end-RIS-receiving end wireless link, which is ultimately several times larger than the overall path loss of the directly connected wireless link [4]. Consequently, in many wireless situations, passive RISs are nearly incapable of providing significant capacity enhancements. While active RISs can dynamically reflect signals with amplified response, they do so at the expense of additional amplifier circuitry cost and energy consumption.
This is in contrast to standard passive RISs, which essentially only reflect signals without amplification. Indeed, all RIS-assisted procedures, whether active or passive, necessitate significant logistical assistance and power usage. By evaluating the performance of passive RIS, active RIS, and no RIS-assisted system across a range of transmit power levels, this study aims to determine whether or not RIS-assisted systems are feasible in a challenging radio link condition. In this paper, we propose a three-stage hybrid RIS-assisted algorithm that can be operated at minimal power consumption under different radio link qualities. Moreover, we introduce the concept of inactive RIS alongside the traditional active and passive RIS-assisted systems in a single RIS structure. Our proposed configuration is further verified in part using practical implementation and fabrication of a passive 4.7 GHz RIS-assisted Open Radio Access Network (O-RAN). The rest of the paper is organized as follows. Section 2 describes the related works associated with this research. Section 3 provides the generalized concept of the system model of RIS. Section 4 discusses the fabrication and simulation agenda of the RIS-empowered structure. The results and discussion are presented in Section 5. Finally, Section 6 concludes this paper.

## II Related Works

In the past few decades, wireless communication technologies have advanced remarkably. This evolution has taken place across several generations, from 1G to 4G, and is currently progressing through 5G toward 6G [5]. RISs have emerged as a promising avenue for wireless communication due to their ability to manipulate wireless propagation. These surfaces consist of sub-wavelength meta-materials and enable precise control over the wireless channel.
In the realm of millimeter-wave (mmWave) communication, RIS-assisted systems hold the potential for achieving remarkable spectral efficiency while maintaining low hardware complexity and power usage. Consequently, both academic and industrial circles have shown keen interest in RISs for their potential to revolutionize wireless communication. One of the key advantages of RIS technology is its capacity to address coverage challenges in difficult bands like terahertz (THz) and mmWave. By intelligently redirecting wireless signals towards the receiver, RISs can overcome coverage limitations. However, it is imperative to comprehensively assess RIS performance in realistic communication scenarios to ensure their practical viability. In order to validate the performance and feasibility of RIS-assisted systems, several studies have been performed. The related work associated with this research can be broadly divided into three primary categories: passive RIS, active RIS, and hybrid RIS.

**Passive RIS-assisted Scenario**

Emenonye et al. explored RIS-aided localization, taking into account the impact of RIS misorientation on the positioning error bound (PEB) [6]. The authors also investigated the effect of RIS orientation offset on RIS orientation estimation. Schroeder et al. analyzed the trade-off between spectral efficiency (SE) and channel estimation (CE) in mmWave [7]. A two-stage CE strategy was proposed for passive RIS, along with a hybrid RIS design with active components as part of an mmWave MIMO system. Trade-offs between hybrid RIS configurations were also investigated. Wang et al. investigated the joint optimization of waveform design and passive beamforming in RIS-assisted Dual-Functional Radar Communication systems with the goal of minimizing multi-user interference [8]. The benefits of combining RIS with passive beamforming are highlighted.

**Active RIS-assisted Scenario**

Lyu et al.
demonstrated an active RIS for secure communication within a symbiotic radio system [9]. The authors were able to achieve simultaneous secure delivery to primary and secondary consumers. They introduced a robust secure transmission method, an approach to using active RIS for multicasting, a way to reduce power consumption, a strategy to adjust limitations, and an alternating optimization procedure. Active RIS was proposed by Zhang et al. to overcome the capacity constraints of passive RIS under strong direct link conditions [4]. They also developed a model for active RIS signals using real-world measurements to validate the claims. Active and passive RIS were compared by Zhi et al. using the same power budget [1]. The ideal power distribution proportions for both types were investigated. It was discovered that active RIS could outperform passive RIS in terms of received signal strength. Niu et al. used iterative penalty dual decomposition to build an algorithm for the combined optimization of BS precoding and RIS beamforming [10]. Channel estimation, signal recovery, and the optimization of fairness and sum rate were also addressed in their research.

**Hybrid RIS-assisted Scenario**

Yildirim et al.'s hybrid transmission proposal combines passive RIS with a decode-and-forward relay [11]. The authors aimed to exploit the interplay of these technologies for improved performance and efficiency. A hybrid RIS technique with active and passive elements was presented by Sankar et al. for integrated sensing and communication networks [12]. They also discussed the combined design of the RIS coefficients and transmit beamformers. Although RIS-assisted systems have been proposed by multiple studies, Trichopoulos et al. developed an experimental RIS prototype and investigated its effectiveness in practical wireless communication environments [13].
The authors also assessed RIS performance in outdoor communication environments, focusing on signal-to-noise ratio (SNR) gains using directional antennas. Each of these studies expands the understanding of active, passive, and hybrid RIS technologies across numerous facets of wireless communication systems [4][13][14]. While several active and passive RIS hybrid structures have been proposed in previous studies, this paper proposes an adaptive, three-stage hybrid RIS-assisted technique that can be operated as an active, passive, or even dormant-state RIS structure. With the goal of lowering power consumption and optimizing operating efficiency in accordance with various radio link quality levels, the inactive state would function exactly like a typical reflector. This paper also presents the partial results of our fabricated passive RIS-empowered system to validate the feasibility of our approach.

## III Concept of RIS System Model

The concept of RIS has been extensively explored in recent studies [7][8]. In essence, passive RIS units, which have been the focus of most investigations, comprise reflective patches combined with impedance-tunable circuits to facilitate phase modulation, as shown in Equation 1. Their passive mode of operation notably minimizes the impact of thermal noise. However, the anticipated substantial capacity enhancement attributed to passive RISs is frequently unattainable in practical scenarios, particularly when the direct links between transmitters and receivers exhibit strong signal strength. This stems from the fact that the equivalent path loss of the transmitter-RIS-receiver reflected path is determined by the multiplication, rather than the summation, of the individual path losses of the transmitter-RIS and RIS-receiver links [15].
\[y=\left(\mathbf{h}^{H}+\mathbf{f}^{H}\underbrace{\mathbf{\Phi}^{H}}_{\text{Phase shift only}}\mathbf{G}\right)\mathbf{w}s+\mathbf{z} \tag{1}\]

Equation 1 describes the received signal \(y\) as a combination of several components. The conjugate transpose of the channel vector (\(\mathbf{h}^{H}\)) captures the direct channel from the BS to the UE. The conjugate transpose of the diagonal matrix (\(\mathbf{\Phi}^{H}\)) incorporates the phase coefficients, modeling the shifts introduced by the reconfigurable surface. The vector \(\mathbf{f}^{H}\) is the conjugate transpose of the channel vector between the RIS and the UE. The matrix \(\mathbf{G}\) signifies the channel matrix between the BS and the RIS. The beamforming vector (\(\mathbf{w}\)) shapes the transmitted signal, denoted as \(s\), which may carry information. The noise vector (\(\mathbf{z}\)) accounts for randomness or interference in the received signal. Consequently, the multiplicative cascade leads to a path loss that is orders of magnitude greater than that of an unobstructed direct link. As a result, in order to accomplish a noticeable capacity gain, a large number of RIS elements, frequently in the thousands, is necessary to compensate for this enormous path loss. To surmount the intrinsic limitations posed by passive RIS, an alternative solution could be found in the form of active RIS [15]. Similar to their passive counterparts, active RIS units possess the capability to manipulate incident signals by altering their phase. Nevertheless, the distinguishing factor is that active RISs can not only modify the phase but also amplify the reflected signals. This is facilitated through the integration of active components, which in turn incurs additional power consumption for signal amplification. It is noteworthy that, unlike passive RIS, the influence of the thermal noise introduced by active RIS components cannot be disregarded. Thus, investigating active RIS offers a possible way around the performance constraints inherent in passive RIS.
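As a concrete illustration of Equation 1, the numpy sketch below builds the effective channel from randomly drawn \(\mathbf{h}\), \(\mathbf{f}\), \(\mathbf{G}\) and a diagonal phase matrix, and shows the classic co-phasing choice of the RIS phases. The dimensions, channel draws, and the simple MRT beamformer are illustrative assumptions, not the paper's actual simulation setup.

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 16, 4  # RIS elements and BS antennas (illustrative sizes)

# Rayleigh-style unit-power channel draws; purely illustrative
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)            # BS -> UE
f = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)            # RIS -> UE
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)  # BS -> RIS

w = h / np.linalg.norm(h)  # simple MRT beamformer toward the direct link (assumption)
s = 1.0 + 0.0j             # unit-power information symbol; noise z omitted for clarity

def received(theta):
    """Equation (1): y = (h^H + f^H Phi^H G) w s, with Phi = diag(exp(j*theta))."""
    Phi = np.diag(np.exp(1j * theta))
    return (h.conj() + f.conj() @ Phi.conj().T @ G) @ w * s

# Random phases vs. co-phasing: align every reflected term with the direct term
direct = h.conj() @ w
cascade = f.conj() * (G @ w)                       # per-element cascaded gains
theta_opt = np.angle(cascade) - np.angle(direct)   # cancels each element's phase offset
y_rand = received(rng.uniform(0, 2 * np.pi, N))
y_opt = received(theta_opt)

assert abs(y_opt) >= abs(y_rand)                   # co-phasing can only help
assert np.isclose(abs(y_opt), abs(direct) + np.abs(cascade).sum())
```

The final assertion is the familiar result that optimally phased passive reflection makes the reflected amplitudes add coherently on top of the direct link.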
Equation 2 represents the received signal model of an active RIS, where \(\mathbf{P}\) is the amplification matrix [4].

\[y=\left(\mathbf{h}^{H}+\mathbf{f}^{H}\underbrace{\mathbf{P}}_{\text{Amplification}}\mathbf{\Phi}^{H}\mathbf{G}\right)\mathbf{w}s+\underbrace{\mathbf{f}^{H}\mathbf{\Phi}^{H}\mathbf{P}\mathbf{n}}_{\text{Extra noise from active RIS}}+\mathbf{z} \tag{2}\]

Although passive RISs have an inherent benefit when it comes to reducing thermal noise, active RISs are distinguished by their capacity to amplify reflected signals, even though this comes with higher power consumption and the addition of thermal noise to the system model. Fig. 1 depicts the basic functionality of active and passive RIS.

```
1: Initialization: \(\mu\) symbolizes the measurement_report messages: \(\mu_{\text{W}}\) (weak transmit power), \(\mu_{\text{S}}\) (strong transmit power), and \(\mu_{\text{H}}\) (high transmit power \(>60\,\text{dBm}\)).
2: Input: Channel conditions \(h_{k}\) for \(k\in\{1,\ldots,n\}\), threshold transmit power \(\rho\), gain \(\tau\).
3: for each \(k\) do
4:   Determine \(\mu_{k}\) based on \(h_{k}\).
5:   if \(\mu_{k}=\mu_{\text{W}}\) (measurement_report_weak) then
6:     Enhance Signal Strength: (Active RIS if \(\tau>\rho\), else Select Passive RIS)
7:   else if \(\mu_{k}=\mu_{\text{S}}\) (measurement_report_strong) then
8:     Enhance Signal Strength: (Active RIS if \(\tau>\rho\) + passive RIS gain, else Select Passive RIS)
9:   else if \(\mu_{k}=\mu_{\text{H}}\) (measurement_report_high) then
10:    Avoid Active RIS: (Switch to Passive RIS if \(\tau>\rho\), else Power Off and Initiate Dormant RIS)
11:  end if
12: end for
```
**Algorithm 1** Proposed Three Stage Hybrid RIS System

## IV Fabrication and Simulation Setup

To validate the proposed three-stage hybrid RIS configuration, we first designed and fabricated a passive RIS structure with \(\pm 40^{\circ}\) phase-shifting capacity for experimental measurements, as shown in Fig. 2.
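The selection logic of Algorithm 1 can be sketched in a few lines of Python. The report names, the scalar gain/threshold comparison, and the returned mode labels are our assumptions for illustration, not an implementation from the paper.

```python
def select_ris_mode(report: str, tau: float, rho: float,
                    passive_gain: float = 0.0) -> str:
    """Three-stage hybrid RIS mode selection sketched from Algorithm 1.

    report: one of 'weak', 'strong', 'high' (the measurement_report messages);
    tau is the achievable gain and rho the threshold transmit power (assumed scalars).
    """
    if report == 'weak':       # weak transmit power: boost aggressively
        return 'active' if tau > rho else 'passive'
    if report == 'strong':     # strong transmit power: active only if it beats
        return 'active' if tau > rho + passive_gain else 'passive'  # the passive gain
    if report == 'high':       # transmit power > 60 dBm: avoid active RIS entirely
        return 'passive' if tau > rho else 'dormant'
    raise ValueError(f'unknown measurement report: {report!r}')

assert select_ris_mode('weak', tau=2.0, rho=1.0) == 'active'
assert select_ris_mode('strong', tau=2.0, rho=1.5, passive_gain=1.0) == 'passive'
assert select_ris_mode('high', tau=0.5, rho=1.0) == 'dormant'
```

The dormant branch is what distinguishes the three-stage structure: when even passive reflection is not worthwhile, the surface powers off and behaves like an ordinary reflector.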
Note that this design is at a primary stage of development with 400 array elements and can be applied directly to the case of large arrays. Furthermore, the phase-shifting capability of the passive RIS has been thoroughly demonstrated, as shown in Fig. 2 (a) and Fig. 2 (b). Integrating our proposed simplified algorithm (Algorithm 1) for adaptive hybrid configuration with the necessary control module has the potential to perform equally well. Fig. 2(c), Fig. 2(d), and Fig. 2(e) depict the experimental setup of the RIS-assisted environment, the opposite portion of the RIS radiation layer, and the main fabricated RIS structure. However, as an active RIS fabrication was not available during this study, we explore a system augmented by an active RIS following [15]. It should be noted that, to keep the simulation environment simple and fair, the fabricated passive RIS precoding has not been implemented in the simulation. Rather, for the active RIS, we utilize the algorithm proposed in [15] for the joint design of precoding and beamforming. Meanwhile, for the passive RIS, we adopt the methodology outlined in [16]. In particular, we investigate two distinct scenarios marked by differing channel conditions.

Fig. 1: Hardware setup and beamforming of a passive and an active RIS [4].

In the first scenario, the direct link stands strong due to favorable radio link quality or unimpeded signal propagation between the UE and the BS. Conversely, in the second scenario, the direct link is weak. To be precise, we employ two distinct path loss models from the 3GPP TR 38.901 standard to describe the attenuation of the channels [17]:

\[PL_{\text{strong}}=37.3+22.0\log d \tag{3}\]
\[PL_{\text{weak}}=13.54+39.08\log d \tag{4}\]

Here, \(d\) signifies the distance between the two devices. \(PL_{\text{weak}}\) characterizes the feeble BS-UE link in scenario 2, whereas \(PL_{\text{strong}}\) models the robust BS-UE link in scenario 1.
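Numerically, the multiplicative path loss of the reflected path becomes additive in dB, which makes the BS-RIS-UE route far lossier than the direct link. The sketch below uses the two 38.901-style models of Eqs. (3)-(4) (with \(\log\) taken as base-10; the higher-exponent model is the higher-loss, weaker-link one) and distances read off the simulation geometry stated in this section.

```python
import math

def pl_high_loss(d):   # 13.54 + 39.08 log10(d): the weak-link model, in dB
    return 13.54 + 39.08 * math.log10(d)

def pl_low_loss(d):    # 37.3 + 22.0 log10(d): the strong-link model, in dB
    return 37.3 + 22.0 * math.log10(d)

# Geometry from the setup: BS at (0, -45 m), RIS at (180 m, 20 m), users near (200 m, 0)
d_bs_ue = math.hypot(200, 45)     # = 205.0 m
d_bs_ris = math.hypot(180, 65)    # ~ 191.4 m
d_ris_ue = math.hypot(20, 20)     # ~ 28.3 m

direct_db = pl_low_loss(d_bs_ue)                       # robust direct link
cascaded_db = pl_high_loss(d_bs_ris) + pl_high_loss(d_ris_ue)  # dB sum = linear product

assert cascaded_db > direct_db + 60   # reflected path is vastly lossier here
```

This gap is exactly why either many passive elements or active amplification is needed before the reflected path contributes meaningfully.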
It is notable that, for both scenarios, \(PL_{\text{weak}}\) is employed to replicate the characteristics of the BS-RIS and RIS-UE channels, creating an intentionally challenging overall radio link condition. To incorporate the effects of small-scale fading, we adopt the Rician fading channel model for all relevant communication channels. The BS is positioned at coordinates \((0,-45\,\text{m})\), and the three-stage hybrid RIS-assisted structure is located at \((180\,\text{m},20\,\text{m})\). Additionally, five users are randomly distributed within a circular region with a radius of 6 m, centered at coordinates \((200\,\text{m},0)\). The configuration involves 5 BS antennas and 400 RIS elements (\(N=400\)). The ambient noise power is set to -65 dBm. In order to facilitate an equitable comparison, we impose a constraint on the total power consumption. As a result, the collective transmit power for the active RIS encompasses the power utilized by both the active RIS itself and the BS. Importantly, considering that the power consumption of the passive RIS is significantly lower than that of the active RIS, only the power associated with the BS is taken into account when calculating the passive RIS's total transmit power. Similarly, for the dormant RIS, we assume for simplicity a power consumption similar to that of the passive RIS, although in practical scenarios the energy consumption would be even lower.

## V Result and Discussion

Though RIS-assisted beamforming can significantly improve the user experience in terms of signal gain, its effectiveness is not consistent across all situations. Excessive directional transmit power has a negative impact on the performance of the active RIS-assisted system. Moreover, passive RIS, or even no RIS, performs better at transmit powers greater than 60 dBm. Fig. 3 and Fig.
4 illustrate the relationship between spectral efficiency and total power consumption for the two scenarios under examination, characterized by strong and weak direct links respectively. In the initial scenario, where a robust direct link is present (scenario 1), the passive RIS exhibits marginal performance gains, whereas the active RIS demonstrates comparatively superior results.

Fig. 2: Experimental passive RIS (a) with RIS (b) without RIS (c) overall RIS-assisted environment (d) behind the radiation layer (e) fabricated RIS.

Fig. 3: Performance comparison of RIS with a strong direct link.

It is important to acknowledge that the BS-RIS and RIS-user channels are modeled with a notably weak (high-loss) path loss model. This is why, despite the inclusion of the passive RIS, the overall performance remains relatively modest. By contrast, in cases where the path loss models and the broader radio link quality are conducive to efficient wave propagation, our fabricated passive RIS outperforms the passive RIS presented in the simulation. This difference in performance stems from the use of distinct algorithms, channel models, and propagation environments between the simulated and fabricated passive RIS setups. The simulation results reveal that while active RIS-assisted systems perform better in both scenarios, increasing the transmit power beyond 60 dBm leads to a significant decrease in spectral efficiency, making active RIS less viable for maintaining user experience. At such high transmission power levels, the passive RIS outperforms the active RIS. High power transmission levels are uncommon in conventional deployments, but might be relevant in future intelligent deployments.

## VI Conclusion

This paper presents a simple explanation of the principles of passive and active RIS. It carries out an extensive comparison study that includes dormant, passive, and active RIS.
Additionally, the work proposes a compact three-stage hybrid RIS method for improved spectral efficiency. A practical implementation of a 4.7 GHz passive RIS-assisted O-RAN, with a \(20\times 20\) array, has been included in the scope of this investigation. Under real-world conditions, this implementation validates the phase modulation in the initial phase of our proposed strategy. The simulation phase, on the other hand, employs separate precoding methods for RIS testing, replicating a demanding outdoor scenario with higher path loss. The software simulation clarifies the effects of poor radio link quality. Results from the study confirm that active RIS outperforms passive RIS in both standard and dynamic wave propagation scenarios. However, the efficacy of active RIS degrades significantly when the total transmission power exceeds 60 dBm. This decline might be linked to the additional noise that grows with the increased transmission power of the active RIS.

## Acknowledgment

This work was supported in part by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MoE) in Taiwan.
2309.16848
A genetic algorithm to search the space of Ehrhart $h^*$-vectors
We describe a genetic algorithm to find candidates for $h^*$-vectors satisfying given properties in the space of integer vectors of finite length. We use an implementation of this algorithm to find a 52-dimensional lattice polytope having a non-unimodal $h^*$-vector which is the Cartesian product of two lattice polytopes having unimodal $h^*$-vectors. This counterexample answers in the negative a question by Ferroni and Higashitani.
Gabriele Balletti
2023-09-28T20:57:22Z
http://arxiv.org/abs/2309.16848v1
# A genetic algorithm to search the space of Ehrhart \(h^{*}\)-vectors

###### Abstract

We describe a genetic algorithm to find candidates for \(h^{*}\)-vectors satisfying given properties in the space of integer vectors of finite length. We use an implementation of this algorithm to find a \(52\)-dimensional lattice polytope having a non-unimodal \(h^{*}\)-vector which is the Cartesian product of two lattice polytopes having unimodal \(h^{*}\)-vectors. This counterexample answers in the negative a question by Ferroni and Higashitani.

2020 Mathematics Subject Classification: 52B20, 05A20, 68W50

## 1. Introduction

A lattice polytope \(P\subset\mathbb{R}^{n}\) is a convex polytope whose vertices are elements of the lattice \(\mathbb{Z}^{n}\). The map \(t\mapsto|tP\cap\mathbb{Z}^{n}|\) for integer values of \(t\) has been proven by Ehrhart [1] to be a polynomial \(\mathrm{ehr}_{P}(x)=c_{d}x^{d}+c_{d-1}x^{d-1}+\cdots+c_{0}\) of degree \(d\), where \(d\) is the dimension of \(P\). This is the _Ehrhart polynomial_ of \(P\). Through a change of basis, one can obtain the \(h^{*}\)_-polynomial_ of \(P\) as the polynomial \(h^{*}_{P}(x)=h^{*}_{0}+h^{*}_{1}x+\cdots+h^{*}_{d}x^{d}\), which is explicitly given by

\[h^{*}_{P}(x)=\sum_{i=0}^{d}c_{i}A_{i}(x)(1-x)^{d-i}, \tag{1}\]

where \(A_{i}(x)\) is the \(i\)-th Eulerian polynomial. The inverse transformation, which allows one to recover \(\mathrm{ehr}_{P}\) from \(h^{*}_{P}\), is given by

\[\mathrm{ehr}_{P}(x)=\sum_{i=0}^{d}h^{*}_{i}\binom{x+d-i}{d}. \tag{2}\]

The significance of these changes of bases comes from the algebraic interpretation of \(\mathrm{ehr}_{P}\) and \(h^{*}_{P}\) as the Hilbert function and polynomial of the Ehrhart ring associated to \(P\). See [1] for a comprehensive introduction to invariants of lattice polytopes and their algebraic meaning. While the coefficients of the Ehrhart polynomial are rational numbers, the coefficients of the \(h^{*}\)-polynomial are nonnegative integers.
This follows from a combinatorial interpretation of the coefficients \(h^{*}_{i}\), but holds in the more general setting of rational polytopes, thanks to a result by Stanley [16]. Moreover, it is easy to deduce that \(h^{*}_{0}=1\), \(h^{*}_{1}=|P\cap\mathbb{Z}^{n}|-d-1\), and \(h^{*}_{d}=|P^{\circ}\cap\mathbb{Z}^{n}|\), where \(P^{\circ}\) is the relative interior of \(P\). From the last two equalities it follows immediately that \(h^{*}_{1}\geq h^{*}_{d}\). Other more profound inequalities have been proven over the years; we collect the most relevant ones by Stanley [16], Hibi [17] and Stapledon [16] in Theorem 1.1, together with the previous one.

**Theorem 1.1**.: _Let \(P\) be a lattice polytope of dimension \(d\) and let \(s\) be the degree of its \(h^{*}\)-polynomial \(h^{*}_{P}\). Then, the coefficients of \(h^{*}_{P}\) satisfy the following inequalities:_

1. \(h^{*}_{1}\geq h^{*}_{d}\)_,_
2. \(h^{*}_{2}+h^{*}_{3}+\cdots+h^{*}_{i}\geq h^{*}_{d-1}+h^{*}_{d-2}+\cdots+h^{*}_{d-i+1}\) _for every_ \(2\leq i\leq\lfloor\frac{d}{2}\rfloor\)_,_
3. \(h^{*}_{0}+h^{*}_{1}+\cdots+h^{*}_{i}\leq h^{*}_{s}+h^{*}_{s-1}+\cdots+h^{*}_{s-i}\) _for every_ \(0\leq i\leq\lfloor\frac{s}{2}\rfloor\)_,_
4. _If_ \(s=d\)_, then_ \(h^{*}_{1}\leq h^{*}_{i}\) _for all_ \(1\leq i\leq d-1\)_, and_
5. _If_ \(s\leq d-1\)_, then_ \(h_{0}^{*}+h_{1}^{*}\leq h_{i}^{*}+h_{i-1}^{*}+\cdots+h_{i-(d-s)}^{*}\) _for all_ \(1\leq i\leq d-1\)_._

All these inequalities are known to be sharp. There are other general inequalities that have been proven in recent years, most notably in the work of Stapledon [14], but we will not consider them here. Even considering all the known inequalities, we are still far from having a complete understanding of the set of all possible \(h^{*}\)-vectors of lattice polytopes, and even in dimension three the problem is still open (see [1, Conjecture 8.7]).
A set of inequalities between the coefficients of \(h_{P}^{*}\) which is of particular interest is the one defining unimodality. We say that \(h_{P}^{*}\) is _unimodal_ if there exists an index \(i\) such that \(h_{0}^{*}\leq h_{1}^{*}\leq\cdots\leq h_{i}^{*}\geq h_{i+1}^{*}\geq\cdots\geq h_{d}^{*}\). While, for a general polytope \(P\), unimodality of the \(h^{*}\)-polynomial is not guaranteed, a particular hierarchy of polytopes has attracted much attention in connection to this property, with a central role played by the so-called IDP polytopes. A polytope \(P\) is _IDP_ (short for: has the _integer decomposition property_) if for every positive integer \(k\), any lattice point \(p\in kP\cap\mathbb{Z}^{n}\) can be written as \(p=p_{1}+\cdots+p_{k}\) where \(p_{1},\ldots,p_{k}\in P\cap\mathbb{Z}^{n}\). Proving that the \(h^{*}\)-polynomial of an IDP polytope is unimodal is arguably one of today's most significant challenges in Ehrhart theory. It has been conjectured to be true, in a more general setting, by Brenti [1] in the nineties, while a direct formulation can be found in [11]. This challenge has been the object of recent developments, as a proof has been given by Adiprasito, Papadakis, Petrotou and Steinmeyer [1], under the additional assumption of reflexivity. It is not known if the IDP assumption can be relaxed, e.g. to very ample polytopes. See the surveys by Braun [1] and by Ferroni and Higashitani [13] for definitions, examples, and a detailed overview of the state of the art. Unimodality of the \(h^{*}\)-polynomial is often studied in conjunction with two stronger properties, namely log-concavity and real-rootedness. We say that a degree \(s\) polynomial \(\sum_{i=0}^{s}a_{i}x^{i}\) is _log-concave_ if \(a_{i}^{2}\geq a_{i-1}a_{i+1}\) for all \(1\leq i\leq s-1\). The same polynomial is said to be _real-rooted_ if all its roots are real. We restrict these notions to polynomials with positive coefficients, i.e. \(a_{i}>0\) for all \(0\leq i\leq s\).
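These definitions translate directly into code. A small Python sketch (function names ours), restricted as above to sequences of positive coefficients:

```python
def is_unimodal(a):
    """True if a_0 <= ... <= a_i >= ... >= a_s for some index i."""
    i = 0
    while i + 1 < len(a) and a[i] <= a[i + 1]:   # climb to a peak
        i += 1
    while i + 1 < len(a) and a[i] >= a[i + 1]:   # then descend
        i += 1
    return i == len(a) - 1

def is_log_concave(a):
    """True if a_i^2 >= a_{i-1} a_{i+1} for all 1 <= i <= s-1."""
    return all(a[i] ** 2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))

# log-concavity implies unimodality for positive sequences, but not conversely:
assert is_log_concave([1, 3, 4, 3, 1]) and is_unimodal([1, 3, 4, 3, 1])
assert is_unimodal([1, 1, 5, 1]) and not is_log_concave([1, 1, 5, 1])
```

Checks of this kind are exactly the fitness predicates a search over candidate vectors needs to evaluate cheaply.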
Under these assumptions, it is well known that log-concavity implies unimodality, and that real-rootedness implies log-concavity. In this context, Ferroni and Higashitani focus on cartesian products of lattice polytopes, the idea being that being IDP, as well as the other related properties in the aforementioned hierarchy, is preserved under cartesian products [13, Proposition 2.2]. Moreover, it follows from a more general result of Wagner [26] that the \(h^{*}\)-polynomial of the cartesian product of two polytopes with real-rooted \(h^{*}\)-polynomials is real-rooted too. It is natural then to wonder whether the log-concavity and the unimodality of the \(h^{*}\)-polynomial are preserved under cartesian products. They ask precisely this in [13, Question 3.4], expecting a positive answer for log-concavity and a negative answer for unimodality. In this paper we focus on, and answer, the unimodality question.

**Question 1.2** ([13]).: _Let \(P\) and \(Q\) be two lattice polytopes. If \(h_{P}^{*}(x)\) and \(h_{Q}^{*}(x)\) are unimodal, is \(h_{P\times Q}^{*}(x)\) necessarily unimodal too?_

A negative answer is given by constructing examples of lattice polytopes having a non-unimodal \(h^{*}\)-vector which are cartesian products of polytopes having unimodal \(h^{*}\)-vectors. We also construct a lattice polytope having a non-unimodal \(h^{*}\)-vector which is the cartesian product of a polytope having a unimodal \(h^{*}\)-vector with itself. This means that we cannot exclude cartesian products from the tools one might attempt to use to disprove unimodality conjectures. The lowest dimension in which we can construct such a counterexample is 52.

### Operators between Ehrhart and \(h^{*}\)-polynomials

One might attempt a direct approach to Question 1.2 by explicitly describing the \(h^{*}\)-polynomial of the cartesian product of two polytopes in terms of the \(h^{*}\)-polynomials of the two factors.
While it follows directly from the definitions that \(\mathrm{ehr}_{P_{1}\times P_{2}}=\mathrm{ehr}_{P_{1}}\mathrm{ehr}_{P_{2}}\) for any two polytopes \(P_{1}\) and \(P_{2}\), the description of the \(h^{*}\)-polynomial of the cartesian product is not as straightforward. We can define two operators \(\mathscr{W}\) and \(\mathscr{E}\), each the inverse of the other, mapping Ehrhart polynomials to the corresponding \(h^{*}\)-polynomials and vice versa. Since the dimension \(d\) of the polytope plays a role in these transformations, it is convenient to think of these operators as being defined between spaces of vectors of finite length. Here, each vector stores the coefficients of the corresponding polynomial up to degree \(d\), regardless of the actual degree of the polynomial. For the \(h^{*}\)-polynomial, the corresponding vector is the vector \((h_{0}^{*},h_{1}^{*},\ldots,h_{d}^{*})\), while for the Ehrhart polynomial it is the vector \((c_{0},c_{1},\ldots,c_{d})\). The first of the two is known as the \(h^{*}\)_-vector_ of the polytope \(P\), and it is denoted by \(h_{P}^{*}\); note that it can have trailing zeroes. 
We can use Equations (1) and (2) to extend \(\mathscr{W}\) and \(\mathscr{E}\) to the space of all real vectors of finite length, which we write as the disjoint union \(\bigsqcup_{i>0}\mathbb{R}^{i}\): \[\mathscr{W}:\bigsqcup_{i>0}\mathbb{R}^{i} \to\bigsqcup_{i>0}\mathbb{R}^{i}\] \[(c_{0},c_{1},\ldots,c_{d}) \mapsto\sum_{i=0}^{d}c_{i}A_{i}(x)(1-x)^{d-i},\] and \[\mathscr{E}:\bigsqcup_{i>0}\mathbb{R}^{i} \to\bigsqcup_{i>0}\mathbb{R}^{i}\] \[(h_{0}^{*},h_{1}^{*},\ldots,h_{d}^{*}) \mapsto\sum_{i=0}^{d}h_{i}^{*}\binom{x+d-i}{d}.\] If we now define the binary operator \[\Pi:\bigsqcup_{i>0}\mathbb{R}^{i} \times\bigsqcup_{i>0}\mathbb{R}^{i} \to\bigsqcup_{i>0}\mathbb{R}^{i} \tag{3}\] \[(h,h^{\prime}) \mapsto\mathscr{W}(\mathscr{E}(h)\mathscr{E}(h^{\prime})),\] then we have a compact way to express the \(h^{*}\)-vector of a cartesian product: \[h_{P_{1}\times P_{2}}^{*}=\Pi(h_{P_{1}}^{*},h_{P_{2}}^{*}).\] While this description allows one to concretely calculate the \(h^{*}\)-polynomial of the cartesian product when the two factors are given, it is very hard to deduce general statements about the unimodality of \(h^{*}\)-polynomials from it, partly because explicitly expanding the operators \(\mathscr{W}\) and \(\mathscr{E}\) is nontrivial when the dimensions in which we operate are not fixed, partly because we would have to consider all possible ways in which the \(h^{*}\)-polynomials of the factors and of the product can be unimodal. For these reasons, we take a different approach to build our way to a counterexample. We start from the following example from [11]. **Example 1.3** ([11, Example 3.5]).: _The vectors \(h=(1,1,1,1,1,6)\) and \(h^{\prime}=(1,1,1,1,2,5)\) are unimodal, while \(\Pi(h,h^{\prime})=(1,38,300,962,2059,7442,7194,7292,4320,854,30)\) is not._ Note that it is not a counterexample to Question 1.2, as the vectors involved cannot be \(h^{*}\)-vectors of lattice polytopes, as they violate inequality 1 of Theorem 1.1. 
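Although hard to reason about symbolically, \(\Pi\) is easy to evaluate on concrete inputs: writing \(f=\mathscr{E}(h)\mathscr{E}(h^{\prime})\), a polynomial of degree \(D=d+d^{\prime}\), the generating-function identity behind \(\mathscr{W}\) gives \(h^{*}_{j}=\sum_{i=0}^{j}(-1)^{i}\binom{D+1}{i}f(j-i)\), so \(\Pi\) can be computed from integer evaluations alone. A sketch in Python (function names are ours):

```python
from math import comb

def ehr_eval(h, t):
    """Evaluate E(h) at the integer t >= 0: sum_i h_i * binom(t + d - i, d)."""
    d = len(h) - 1
    return sum(hi * comb(t + d - i, d) for i, hi in enumerate(h))

def product_hstar(h1, h2):
    """Pi(h1, h2): apply W to the product f(t) = E(h1)(t) * E(h2)(t), via
    h*_j = sum_{i <= j} (-1)^i * binom(D+1, i) * f(j - i), D = d + d'."""
    D = (len(h1) - 1) + (len(h2) - 1)
    f = [ehr_eval(h1, t) * ehr_eval(h2, t) for t in range(D + 1)]
    return tuple(sum((-1) ** i * comb(D + 1, i) * f[j - i] for i in range(j + 1))
                 for j in range(D + 1))
```

For instance, \(\Pi((1,0),(1,0))=(1,1,0)\), matching the \(h^{*}\)-vector of the unit square as the product of two unit segments, and on the vectors of Example 1.3 this reproduces the non-unimodal product stated above.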
We use this example (and a slight variant of it, Example 3.1) as a starting point for a genetic algorithm designed to explore the space of nonnegative integer vectors of finite length, searching for possible counterexamples to Question 1.2. The idea is to keep mutating the vectors \(h\) and \(h^{\prime}\) until they become "more and more similar to \(h^{*}\)-vectors of lattice polytopes", in the hope that they will eventually become \(h^{*}\)-vectors of lattice polytopes. We do so by defining a fitness function which penalizes vectors depending on how much they violate the inequalities of Theorem 1.1. This needs to be done while ensuring that \(h\) and \(h^{\prime}\) remain unimodal and that \(\Pi(h,h^{\prime})\) remains non-unimodal. ### Structure of the paper In Section 2 we give a brief introduction to genetic algorithms, as well as set the notation and terminology for the actual implementation. Section 3 is the core of this paper, as it contains details on the implementation of the algorithm, and the results obtained. In Section 4 we show how to realize the vectors found by the algorithm as \(h^{*}\)-vectors of lattice polytopes, thus answering Question 1.2 negatively. A final section, Section 5, includes some concluding remarks and possible future developments. ### Acknowledgements This project started in response to an unexpected invitation to give a talk at the KTH Combinatorics Seminar in Stockholm, after several years away from academia. I am very grateful to Luis Ferroni for providing this opportunity, and for double checking some computations present in this paper. Both he and Akihiro Higashitani provided valuable feedback on a preliminary version of this paper. ## 2. An introduction to genetic algorithms Genetic algorithms were formalized by Holland in the 1960s. The original intent was not that of designing a process to solve a specific problem, but rather that of building a formal framework to study evolutionary phenomena as they occur in nature. 
In his work, summarized in [10], Holland describes genetic algorithms as an abstraction of biological evolution, where genetic operators such as mutation and crossover are used to simulate the natural selection of individuals in a population, and where the fitness of an individual is determined by its ability to survive and reproduce. Nowadays, genetic algorithms are used in a wide variety of fields, mainly as a tool to solve optimization problems in which traditional methods are not applicable, or are not efficient enough. See the introduction by Mitchell [14] for a detailed overview of genetic algorithms and their use. Today there is no universally accepted definition of what a genetic algorithm is, and there are many different variants of the same basic idea. Often, the term _evolutionary algorithm_ is used to refer to a broader class of algorithms which includes genetic algorithms and is inspired by the same biological principles. In this section we give a brief description of what a genetic algorithm is, trying both to tailor it to our needs and to remain quite faithful to the original formulation by Holland. This acts both as a reference for the implementation described in Section 3, and as a way to fix notation and terminology. ### Genetic algorithms in a nutshell The goal of a genetic algorithm is to look for certain solutions in a set, which we call the _search space_. We model the search space with a directed weighted graph on a (possibly infinite) vertex set \(V\), and with edges having weights defined by a function \(\mu:V\times V\to\mathbb{R}_{\geq 0}\). The edges are the pairs \((v,v^{\prime})\in V\times V\) such that \(\mu(v,v^{\prime})>0\). Elements of \(V\) are called _individuals_ or _chromosomes_, and a subset \(U\subset V\) is referred to as a _population_. The directed edges define the possible _mutations_ between pairs of individuals, with probabilities given by the weights. 
For example, the directed edge \(e=v_{1}\to v_{2}\) indicates that the individual \(v_{1}\) can mutate into the individual \(v_{2}\) with probability \(\mu(v_{1},v_{2})\). With an abuse of notation we indicate by \(\mu(v)\) the operation of sampling an individual from the distribution described by \(\mu(v,\cdot)\) starting from an initial individual \(v\). Two individuals \(v_{1},v_{2}\in V\) can generate a new individual \(u\in V\) with probabilities given by a _crossover operation_\(\chi:(V\times V)\times V\to\mathbb{R}_{\geq 0}\), where \(\chi((v_{1},v_{2}),u)\) describes the probability that the individual \(u\) is generated by crossing over the individuals \(v_{1}\) and \(v_{2}\). In general, \(\chi\) is assumed to be a symmetric operation in the first two coordinates, i.e. \(\chi((v_{1},v_{2}),\cdot)=\chi((v_{2},v_{1}),\cdot)\). With another abuse of notation, we will write \(\chi(v_{1},v_{2})\) to indicate the operation of sampling from the distribution \(\chi((v_{1},v_{2}),\cdot)\). Intuitively, \(\mu\) and \(\chi\) are expected to be defined in such a way that the "offspring" inherit properties from their "parents". While mutation and crossover can be used to grow the size of a population, an evolution process also requires a way to reduce its size. This is done by a _selection operator_\(\sigma_{\varphi}:2^{V}\to 2^{V}\), where \(\sigma_{\varphi}(U)\) is the subset of individuals of \(U\) which are selected to survive. Selection is driven by a _fitness function_\(\varphi:V\rightarrow\mathbb{R}_{\geq 0}\), which evaluates the fitness of each individual, where lower values indicate higher fitness. Exactly how the selection is done, and specifically how \(\varphi\) is defined, depends on the specific context, but in general the idea is to select individuals with lower fitness values, and remove the others. The goal of the algorithm is to find the elements of \(\varphi^{-1}(0)\subset V\); these are the _solutions_ of the algorithm. 
Once a solution is found, the algorithm terminates and returns it. It is clear that \(\varphi\) needs to satisfy some notion of regularity or continuity, so that it is possible to approach the solution set \(\varphi^{-1}(0)\) by iteratively improving the fitness of the individuals in the population. The algorithm starts from an initial population \(U\subset V\) of individuals, and iteratively lets evolution act on them through mutations, crossover and selection, until optimal solutions are found. In general there is no guarantee that the algorithm will terminate, so a maximum number of iterations \(T\) is set in advance. This procedure is described in Algorithm 1.

```
input : Initial population U
output: Optimal solutions
for t ← 1 to T do
    repeatedly apply μ and χ to individuals in U;
    if 0 ∈ φ(U) then
        return {u ∈ U | φ(u) = 0};
    end if
    U ← σ_φ(U);
end for
```

**Algorithm 1** General procedure for an evolutionary algorithm.

Note how the fitness function \(\varphi\) is the cornerstone of the algorithm. It selects which individuals are allowed to survive, and it determines when the algorithm terminates. Note that in other applications \(\varphi\) might never evaluate to zero for any individual, and the algorithm might be forced to terminate when a local minimum is found or after a certain number of iterations. ## 3. Evolving vectors into \(h^{*}\)-vectors In this section we specialize Algorithm 1 to search for a counterexample to Question 1.2. The main idea here is to define the search space as the set of pairs of vectors of nonnegative integers of finite length satisfying certain unimodality-related properties, and to define on this space a fitness function \(\varphi\) which penalizes individuals for "not looking enough like \(h^{*}\)-vectors of lattice polytopes", in the hope that the vectors will be driven to evolve into \(h^{*}\)-vectors of lattice polytopes. 
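For reference, the abstract loop of Algorithm 1 can be sketched concretely as follows; here \(\mu\), \(\chi\) and \(\varphi\) are passed in as black boxes, and the population bounds, mutation probability and seed are illustrative parameters of ours, not part of the abstract algorithm:

```python
import random

def evolve(initial, mutate, crossover, fitness,
           n_max=30, n_min=5, p_mu=0.5, max_gen=200, seed=0):
    """Sketch of Algorithm 1: grow the population with mutation/crossover,
    return the individuals of fitness 0, otherwise keep only the fittest."""
    rng = random.Random(seed)
    pop = list(initial)
    for _ in range(max_gen):
        while len(pop) < n_max:                      # grow the population
            if len(pop) < 2 or rng.random() < p_mu:
                pop.append(mutate(rng.choice(pop)))
            else:
                pop.append(crossover(*rng.sample(pop, 2)))
        solutions = [u for u in pop if fitness(u) == 0]
        if solutions:
            return solutions                         # 0 ∈ φ(U)
        pop = sorted(pop, key=fitness)[:n_min]       # selection σ_φ
    return []                                        # iteration budget T reached
```

As a toy example, on the search space of integers with fitness \(u\mapsto|u-7|\), \(\pm 1\) mutations and averaging crossover, the loop quickly finds the solution \(7\).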
We formalize this by defining the following properties that a pair of vectors \((h,h^{\prime})\) can have. We make use of the operator \(\Pi\), previously defined in Equation (3). 1. \(h\) and \(h^{\prime}\) are both unimodal, 2. \(\Pi(h,h^{\prime})\) is not unimodal, 3. \(h\) and \(h^{\prime}\) are \(h^{*}\)-vectors of some lattice polytopes. Any pair of vectors satisfying these properties would give a counterexample to Question 1.2. We will define the search space as the set of all possible vectors of nonnegative integers of finite length satisfying properties i and ii. The verification of properties i and ii is computationally inexpensive, making it possible for the population to evolve quickly through the search space. Moreover, Example 1.3 guarantees that this set is not empty, giving us candidates to start the search with. Conversely, checking property iii is an exceptionally hard problem in Ehrhart Theory. While algorithms to check whether a given vector is the \(h^{*}\)-vector of a lattice polytope can be theoretically implemented through an exhaustive search [1], no efficient algorithm is known to date. Consequently, it is unclear how to set up a fitness function which would allow us to approach pairs of vectors satisfying property iii. We know however that the \(h^{*}\)-vectors of lattice polytopes need to fulfill all the conditions from Theorem 1.1, so we can use these to define a fitness function which penalizes vectors which do not satisfy them, depending on how much they violate the conditions. Although this is not a way to certify property iii, we can use it to obtain a large catalogue of candidates for being a counterexample to Question 1.2, greatly simplifying the task. We describe this fitness function in Subsection 3.4. The algorithm can be set up in two different variants, each with its own advantages and disadvantages. A. We can look for individual elements \(h\) such that \((h,h)\) satisfies properties i–iii. 
This introduces a strong bias in the search, possibly reducing its potential to find solutions, but making the algorithm more efficient. B. We can look for pairs of elements \((h,h^{\prime})\) such that \((h,h^{\prime})\) satisfies properties i–iii. This allows us to explore a large and unbiased search space, but the search will be computationally expensive. For simplicity we focus on variant A, as its implementation and notation will be much lighter. In any case, its generalization B can be described by taking the cartesian product of two copies of the search space, extending the mutation and crossover operations to act independently on each copy, and defining the fitness function to be the sum of the fitness functions of the two copies. The source code, which is available on GitHub1, contains details about both implementations. Footnote 1: [https://github.com/gabrielebulletti/genetic-search](https://github.com/gabrielebulletti/genetic-search) ### Search space Let \(\mathcal{H}\) be the set of all possible vectors of nonnegative integers of finite length satisfying properties i and ii, i.e. \[\mathcal{H}\coloneqq\{h\in\sqcup_{i>0}\mathbb{Z}_{\geq 0}^{i}\mid h\text{ is unimodal and }\Pi(h,h)\text{ is not}\}.\] We can slightly modify Example 1.3 to show that \(\mathcal{H}\) is not empty. **Example 3.1**.: _The pair \((h,h)\) with \(h=(1,1,1,1,1,6)\) satisfies properties i and ii. Indeed \(h\) is unimodal, while \(\Pi(h,h)=(1,38,300,962,1849,7417,7034,7272,4610,973,36)\) is not._ As for Example 1.3, the vector \(h\) above cannot be the \(h^{*}\)-vector of a lattice polytope. ### Mutation We make the search space \(\mathcal{H}\) into a graph by defining mutations, which will act as the directed edges. 
For an element \(h=(h_{0},h_{1},\ldots,h_{d})\in\mathcal{H}\) we define the following mutation operations: * the _removal_ mutations \(\mu_{i}^{\text{rem}}\), for all \(0\leq i\leq d\), where \[\mu_{i}^{\text{rem}}(h)=(h_{0},\ldots,h_{i-1},h_{i+1},\ldots,h_{d}),\] * the _insertion_ mutations \(\mu_{i}^{\text{ins}_{a}}\), for all \(0\leq i\leq d\), and \(0\leq a\leq\max_{i}(h)\), where \[\mu_{i}^{\text{ins}_{a}}(h)=(h_{0},\ldots,h_{i},a,h_{i+1},\ldots,h_{d}),\] * the _decrement_ mutations \(\mu_{i}^{-}\), for all \(0\leq i\leq d\), where \[\mu_{i}^{-}(h)=(h_{0},\ldots,h_{i-1},h_{i}-1,h_{i+1},\ldots,h_{d}),\] * the _increment_ mutations \(\mu_{i}^{+}\) for all \(0\leq i\leq d\), where \[\mu_{i}^{+}(h)=(h_{0},\ldots,h_{i-1},h_{i}+1,h_{i+1},\ldots,h_{d}).\] We exclude all mutations which would result in a vector which is not in \(\mathcal{H}\), for example the decrement of a zero entry, or any insertion which would result in a vector which is not unimodal. We denote by \(M(h)\) the set of all possible mutations for an individual \(h\), and we assign to each of them the weight \(\frac{1}{|M(h)|}\), meaning that applying the mutation operator \(\mu\) we will get any of the possible mutations in \(M(h)\) with equal probability. Note that, a priori, it is not trivial to determine if \(\mathcal{H}\) is connected, or even if \(h\) from Example 3.1 has any neighbors. This means that at each step we will have to check, out of all the possible ways to mutate an individual \(h\), which ones are still in \(\mathcal{H}\), and only consider those. **Example 3.2**.: _Let \(h=(1,1,1,1,1,6)\in\mathcal{H}\) be the vector from Example 3.1. The following are the neighbors of \(h\), i.e. 
all the allowed mutations of \(h\):_ \[\mu_{i}^{\mathrm{ins}_{1}}(h) =(1,1,1,1,1,1,6),\text{ for }0\leq i\leq 4\] \[\mu_{4}^{\mathrm{ins}_{2}}(h) =(1,1,1,1,1,2,6),\] \[\mu_{4}^{\mathrm{ins}_{3}}(h) =(1,1,1,1,1,3,6),\] \[\mu_{5}^{+}(h) =(1,1,1,1,1,7).\] _This means that \(\mu(h)\) will return one of the above vectors with equal probability \(\frac{1}{4}\)._ ### Crossover The crossover operation is defined as follows. Let \(h=(h_{0},h_{1},\ldots,h_{d})\) and \(h^{\prime}=(h_{0}^{\prime},h_{1}^{\prime},\ldots,h_{d^{\prime}}^{\prime})\) be two elements of \(\mathcal{H}\), and assume \(d\geq d^{\prime}\). We define their crossover operation \(\chi(h,h^{\prime})\) to return - with equal probability - any of the vectors of the form \(g=(g_{0},g_{1},\ldots,g_{d})\in\mathcal{H}\) such that for each \(0\leq i\leq d^{\prime}\) we have either \(h_{i}\leq g_{i}\leq h_{i}^{\prime}\) or \(h_{i}^{\prime}\leq g_{i}\leq h_{i}\). Although this definition might seem quite arbitrary, we found empirically that crossover greatly improves performance and convergence of the algorithm, as shown in Figure 1. Generally, one can see crossover as a way to allow individuals trapped in local minima for the fitness function to still play a role in the evolution of the population. ### Selection We define the fitness function \(\varphi:\mathcal{H}\to\mathbb{R}_{\geq 0}\) to be the measure of how much an element \(h=(h_{0},h_{1},\ldots,h_{d})\) violates the conditions from Theorem 1.1. Specifically, given any individual \(h\in\mathcal{H}\), \(\varphi(h)\) is the sum of the violations of each condition divided by \(d\), i.e. \[\varphi(h)\coloneqq\frac{1}{d}\left(\varphi_{1}(h)+\varphi_{2}(h)+\varphi_{3}( h)+\varphi_{4}(h)\right),\] where 1. \(\varphi_{1}(h)\coloneqq\max(0,h_{d}-h_{1})\), 2. \(\varphi_{2}(h)\coloneqq\sum_{i=2}^{\lfloor\frac{d}{2}\rfloor}\max(0,h_{d-i+1} +\cdots+h_{d-1}-(h_{2}+\cdots+h_{i}))\), 3. 
\(\varphi_{3}(h)\coloneqq\sum_{i=0}^{\lfloor\frac{d}{2}\rfloor}\max(0,h_{0}+ \cdots+h_{i}-(h_{s}+\cdots+h_{s-i}))\), 4. \(\varphi_{4}(h)\coloneqq\left\{\begin{array}{l}\sum_{i=1}^{d-1}\max(0,h_{i}-h _{1}),\text{ if }s=d,\\ \sum_{i=1}^{d-1}\max(0,h_{0}+h_{1}-(h_{i}+h_{i-1}+\cdots+h_{i-(d-s)})),\text{ if }s<d.\end{array}\right.\) Above, \(s\) is the index of the last nonzero entry of \(h\). If \(h\) happens to be the \(h^{*}\)-vector of a lattice polytope \(P\), then \(s\) would be the degree of the \(h^{*}\)-polynomial of \(P\). Dividing the sum by \(d\) has a normalizing effect on the fitness function, and allows us to interpret the value of \(\varphi(h)\) (up to a constant) as the average violation of the inequalities of Theorem 1.1. This ensures that the algorithm does not penalize long vectors excessively. Note that \(h\) is a solution, i.e. \(h\in\varphi^{-1}(0)\), if and only if \(h\) satisfies all the conditions of Theorem 1.1. ### Population management The population at a given step of the algorithm - we call such a step a _generation_ - will be increased through mutations and crossover, and reduced through selection. Note however that mutation and crossover have opposite effects on the diversity of the population. Indeed, mutation tends to increase the diversity of the population, while crossover tends to decrease it. In order to control how the diversity evolves over time, we set probabilities \(p_{\mu}\) and \(p_{\chi}\), such that \(p_{\mu}+p_{\chi}=1\), and we use them to decide whether to apply mutation or crossover every time we increase the population by one. At each generation, we artificially increase the population until a certain constant \(N_{\max}\) is reached, and then we apply selection to reduce it to \(N_{\min}\). ### Avoiding local minima It is common that individuals hit local minima for the fitness function \(\varphi\). 
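For concreteness, the fitness function \(\varphi\) just defined translates almost verbatim into code. The sketch below (in Python, names ours) transcribes the four penalty terms as stated; the only extra convention is that slice bounds are clamped at \(0\), i.e. the out-of-range indices which can occur in \(\varphi_{3}\) and \(\varphi_{4}\) are treated as absent:

```python
def fitness(h):
    """Total violation of the conditions of Theorem 1.1, normalized by d.

    Transcription of phi_1, ..., phi_4 above; entries with negative index
    are treated as absent (slice starts clamped at 0).
    """
    d = len(h) - 1
    s = max(i for i, hi in enumerate(h) if hi != 0)      # last nonzero index
    phi1 = max(0, h[d] - h[1])
    phi2 = sum(max(0, sum(h[d - i + 1:d]) - sum(h[2:i + 1]))
               for i in range(2, d // 2 + 1))
    phi3 = sum(max(0, sum(h[:i + 1]) - sum(h[max(0, s - i):s + 1]))
               for i in range(d // 2 + 1))
    if s == d:
        phi4 = sum(max(0, h[i] - h[1]) for i in range(1, d))
    else:
        phi4 = sum(max(0, h[0] + h[1] - sum(h[max(0, i - (d - s)):i + 1]))
                   for i in range(1, d))
    return (phi1 + phi2 + phi3 + phi4) / d
```

On the starting individual this gives \(\varphi((1,1,1,1,1,6))=1\), the violation coming entirely from \(\varphi_{1}=5\), while the solutions reported below evaluate to \(0\).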
Crossover helps such individuals to still play a role in the evolution of the population, but eventually these individuals might become attractors for future generations, slowing down the global convergence towards a solution. In order to avoid this, we introduce a _penalty factor_\(\varepsilon\in\mathbb{R}_{>1}\) to penalize individuals which have been in the population for too many generations. Specifically, for any individual \(h\in\mathcal{H}\) we will multiply the output of the fitness function \(\varphi(h)\) by \(\varepsilon^{\text{age}(h)}\), where \(\text{age}(h)\) is the number of generations that \(h\) has been in the population. As a result, the selection process will tend to favor younger individuals. This is an important step for the convergence of the algorithm: as shown in Figure 2, the starting individual needs to evolve through states with a higher fitness value before being able to evolve into a solution. ### Pseudocode The pseudocode for the algorithm is presented in Algorithm 2. We use the following two auxiliary functions to make the pseudocode more readable. * increase\({}_{p_{\mu}}(U)\) is the operation of increasing the population \(U\) by one individual, by applying the following steps: 1. randomly choose whether to apply mutation or crossover, depending on \(p_{\mu}\), 2. pick one individual (two for crossover) randomly from \(U\), 3. apply the chosen operation, \(\mu\) or \(\chi\), to the individual(s), 4. add the resulting individual to \(U\). * age(\(h\)) is the number of generations that \(h\) has been in the population. ### Further exploration of the solutions space Once one or multiple solutions are found, it is possible to generate new solutions by iteratively mutating the individuals found so far, and adding them to the population. 
If we consider the induced subgraph of the search space given by the subset of solutions \(\varphi^{-1}(0)\subset\mathcal{H}\), then this procedure is equivalent to a breadth-first exploration of the subgraph, or at least of the connected component containing the first solution found. Since there is no obvious way to check if a given \(h\in\mathcal{H}\) is the \(h^{*}\)-vector of a lattice polytope, having a way to generate new solutions will help us to find realizable vectors. ### Runs and results Algorithm 2 is run multiple times with parameters \(N_{\max}=30\), \(N_{\min}=5\), \(p_{\mu}=0.5\), \(p_{\chi}=0.5\), \(\varepsilon=1.05\), and initial population \(U\) set to \(U=\{(1,1,1,1,1,6)\}\). All the runs converged to one or more solutions within the first 100 generations. We can plot the average fitness value of the best individual in the population - which is the one with the lowest fitness score - at each generation, to observe how the population changes over time. In Figure 1 we show such plots for ten different runs, showing the difference between "normal" runs (\(p_{\mu}=0.5\)) and runs without any crossover (\(p_{\mu}=1\)). Picking a very small population will result in longer runs, but it allows us to observe the evolution of a single individual in more detail, as shown in Figure 2. It is interesting to notice that the initial individuals are in a local minimum for the fitness function, and they need to evolve through states with a higher fitness value before being able to evolve into a solution. This suggests that a greedy approach to the problem would have most likely failed. The sets of solutions found are then expanded to 1000 elements each with the procedure described in Subsection 3.8, ending up with a list of almost 5000 elements. Of them, the shortest vector has length 22, and all of them have at least ten leading ones and at least seven trailing zeros.

Figure 1. Average population fitness value over ten different runs of the algorithm with (left) and without (right) crossover. Most of the runs without crossover did not converge to a solution within the first 100 generations and were interrupted. The runs with crossover had a 50% chance of applying mutation and a 50% chance of applying crossover. Other parameters: \(N_{\max}=30\), \(N_{\min}=5\), \(\varepsilon=1.05\).

See Table 1 for a random sample of solutions found by the algorithm. The full list of solutions found is available together with the source code of the implementation. The fact that the algorithm has found solutions proves the following nontrivial result. **Theorem 3.3**.: _The set of integer vectors in \(\mathcal{H}\) satisfying all conditions from Theorem 1.1 is not empty._ \begin{table} \begin{tabular}{l} \hline \((1,1,1,1,1,1,1,1,1,1,1,1,3,7,2,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,3,5,5,0,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,9,3,0,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,2,4,7,1,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,5,4,2,2,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,3,5,4,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,2,2,3,3,6,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,2,5,4,2,1,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,4,4,4,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,2,3,3,6,1,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,2,5,6,1,1,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,5,4,1,1,1,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,5,4,1,1,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,3,4,4,2,0,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,5,4,1,1,0,0,0,0,0,0,0,0)\) \\ \((1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,3,4,4,2,1,1,0,0,0,0,0,0,0,0,0)\) \\ \hline \end{tabular} \end{table} Table 1. A random sample of 15 solutions found by the algorithm (variant A) Figure 2. 
A plot of the fitness values for the evolution of the starting individual \((1,1,1,1,1,6)\) over 178 generations, before it reaches a solution. Plot obtained with \(N_{\max}=3\), \(N_{\min}=1\), \(p_{\mu}=0.5\), \(p_{\chi}=0.5\), \(\varepsilon=1.05\). Still, it is not clear whether any of the solutions found so far is the \(h^{*}\)-vector of a lattice polytope. We proceed by selecting a solution with a peculiar shape: the pair \((h,h)\) with \[h=(1,1,1,1,1,1,1,1,1,1,1,1,1,1,12,0,0,0,0,0,0,0,0,0,0,0,0,0)\in\mathbb{Z}^{28}. \tag{4}\] In Section 4 we will show that this \(h\) is the \(h^{*}\)-vector of a lattice polytope. Once we know that there are solutions \((h,h)\), with \[h=(1,\underbrace{1,\ldots,1}_{k-1},m,\underbrace{0,\ldots,0}_{k-1})\in \mathbb{Z}^{2k}, \tag{5}\] for some positive integers \(k\) and \(m\), we can focus our efforts on finding solutions of this form, with the goal of finding solutions as small as possible. Unfortunately, a grid search for small values of \(k\) and \(m\) does not lead to any smaller solutions. **Proposition 3.4**.: _The following are all the solutions of the form \((h,h)\) with \(h\) as in (5), with \(k\leq 20\) and \(m\leq 100\):_ * \(k=14\)_,_ \(m=12\)_, which is the solution (_4_),_ * \(k=15\)_,_ \(m=\{12,13\}\)_,_ * \(k=16\)_,_ \(m=\{13,14\}\)_,_ * \(k=17\)_,_ \(m=\{13,14,15,16\}\)_,_ * \(k=18\)_,_ \(m=\{13,14,15,16,17\}\)_,_ * \(k=19\)_,_ \(m=\{14,15,16,17,18,19,20\}\)_,_ * \(k=20\)_,_ \(m=\{14,15,16,17,18,19,20,21,22\}\)_._ It is interesting to notice that fixing the length of the vectors via \(k\) seems to restrict the possible values of \(m\) to a finite interval. On the other hand, we can now also look for solutions of the form \((h,h^{\prime})\) with both \(h\) and \(h^{\prime}\) being of the form (5), but not equal to each other. Another grid search for small values of \(k\) and \(m\) allows us to reduce the dimension of the minimal solution, but only slightly. 
Indeed the pair \((h,h^{\prime})\), with \[\begin{split}& h=(1,1,1,1,1,1,1,1,1,1,1,1,1,14,0,0,0,0,0,0,0,0,0,0,0,0)\in\mathbb{Z}^{26},\\ & h^{\prime}=(1,1,1,1,1,1,1,1,1,1,1,1,1,1,9,0,0,0,0,0,0,0,0,0,0,0,0,0)\in\mathbb{Z}^{28},\end{split} \tag{6}\] satisfies all the conditions from Theorem 1.1, and \(\Pi(h,h^{\prime})\) is not unimodal. ## 4. Realizing \(h^{*}\)-vectors In this section we focus on finding lattice polytopes whose \(h^{*}\)-vectors are the solutions found in the previous section, thus answering Question 1.2 negatively. We do so by finding a family of lattice polytopes in dimension \(d=2k-1\) whose \(h^{*}\)-vectors are of the form \[(1,\underbrace{r,\ldots,r}_{k-1},r+q,\underbrace{0,\ldots,0}_{k-1}),\] where \(r\) and \(q\) can be any nonnegative integers, and \(k\geq 2\). An important family of simplices on which we base our construction is that of the _generalized Reeve simplices_ of Batyrev and Hofscheier. **Proposition 4.1** ([1]).: _Let \(d=2k-1\) for \(k\geq 2\), and \(q\) be a nonnegative integer. Consider the generalized Reeve simplex in dimension \(d\)_ \[R_{q,k}:=\operatorname{conv}\{0,e_{1},\ldots,e_{d-1},u_{q,k}\},\] _where \(u_{q,k}=(\underbrace{1,\ldots,1}_{k-1},\underbrace{q,\ldots,q}_{k-1},q+1)\). Then we have_ \[h^{*}_{R_{q,k}}=(1,\underbrace{0,\ldots,0}_{k-1},q,\underbrace{0,\ldots,0}_{ k-1}).\] Stanley's monotonicity theorem [14] guarantees that, if we build a lattice polytope by adding vertices to \(R_{q,k}\), then its "spike" on the \(k\)-th entry will be preserved, meaning that any polytope containing \(R_{q,k}\) will have \(h^{*}\)-polynomial with \(k\)-th coefficient greater than or equal to \(q\). In Proposition 4.3 we show that it is possible to add a point to \(R_{q,k}\) and obtain a polytope with the desired \(h^{*}\)-vector. 
For its proof we need to introduce properties of certain point configurations called circuits; we refer to [10] for a more detailed introduction to the topic, and to [1, 1] for applications to the study of lattice polytopes. We say that a point configuration \(\mathcal{A}\) given by the points \(v_{0},\ldots,v_{d+1}\in\mathbb{R}^{d}\) is a _circuit_ if it satisfies a unique (up to scalar multiplication) affine dependence \(\sum_{i=0}^{d+1}\lambda_{i}v_{i}=0\), where \(\lambda_{i}\neq 0\) for all \(0\leq i\leq d+1\) and \(\sum_{i=0}^{d+1}\lambda_{i}=0\). We denote by \(J_{+}\) and \(J_{-}\) the sets of points with positive and negative coefficients in the dependence, i.e. \[J_{+}=\{v_{i}\in\mathcal{A}\mid\lambda_{i}>0\},\quad J_{-}=\{v_{i}\in \mathcal{A}\mid\lambda_{i}<0\}.\] The partition of \(\mathcal{A}\) into \(J_{+}\) and \(J_{-}\) is called the _Radon partition_ or the _oriented circuit_ of the circuit. The following lemma lists well-known properties of polytopes obtained as convex hulls of circuits, see [10, Subsections 2.4 and 4.1] for the proofs and further details. **Lemma 4.2**.: _Let \(\mathcal{A}=v_{0},\ldots,v_{d+1}\in\mathbb{R}^{d}\) be a circuit satisfying the affine linear dependence \(\sum_{i=0}^{d+1}\lambda_{i}v_{i}=0\). Let moreover \(\mathcal{A}=J_{+}\cup J_{-}\) be its Radon partition. Then the \(d\)-dimensional polytope \(P_{\mathcal{A}}\coloneqq\operatorname{conv}(v_{0},\ldots,v_{d+1})\) satisfies the following properties._ 1. \(P_{\mathcal{A}}\) _has exactly two triangulations on the vertices_ \(v_{0},\ldots,v_{d+1}\)_, namely_ \[\mathcal{T}_{+}\coloneqq\{C\subset\mathcal{A}:J_{+}\not\subseteq C\}\,,\qquad \text{and}\qquad\mathcal{T}_{-}\coloneqq\{C\subset\mathcal{A}:J_{-}\not \subseteq C\}\,.\] 2. 
_There is a constant_ \(\alpha\in\mathbb{R}_{>0}\) _such that_ \(\alpha|\lambda_{i}|\) _equals the normalized volume_ \(\operatorname{Vol}(S_{i})\) _of the full-dimensional simplex_ \(S_{i}\coloneqq\operatorname{conv}(v_{0},\ldots,v_{i-1},v_{i+1},\ldots,v_{d+1})\) _for all_ \(0\leq i\leq d+1\)_. In particular,_ \(P_{\mathcal{A}}\) _has normalized volume_ \[\operatorname{Vol}(P_{\mathcal{A}})=\alpha\sum_{v_{i}\in J_{+}}\lambda_{i}=- \alpha\sum_{v_{i}\in J_{-}}\lambda_{i}.\] **Proposition 4.3**.: _Let \(d=2k-1\) for \(k\geq 2\), and \(q,r\) be nonnegative integers. Consider the lattice polytope_ \[P_{q,r,k}:=\operatorname{conv}(R_{q,k}\cup\{v_{r,k}\})=\operatorname{conv}\{0,e_{1},\ldots,e_{d-1},u_{q,k},v_{r,k}\},\] _with \(u_{q,k}=(\underbrace{1,\ldots,1}_{k-1},\underbrace{q,\ldots,q}_{k-1},q+1)\) and \(v_{r,k}=(\underbrace{0,\ldots,0}_{k-1},\underbrace{-r,\ldots,-r}_{k})\). Then we have_ \[h^{*}_{P_{q,r,k}}=(1,\underbrace{r,\ldots,r}_{k-1},r+q,\underbrace{0,\ldots,0 }_{k-1}).\] Proof.: We assume \(r>0\), as the case \(r=0\) is just Proposition 4.1. We denote by \(\mathcal{A}\) the point configuration given by the vertices \(v_{0},\ldots,v_{d+1}\in\mathbb{R}^{d}\) of \(P_{q,r,k}\), where the \(v_{i}\) appear in the same order as the corresponding vertices appear in Proposition 4.1, and \(v_{d+1}=v_{r,k}\). Note that \(\mathcal{A}\) is a circuit, as it satisfies the affine dependence \(\sum_{i=0}^{d+1}\lambda_{i}v_{i}=0\), where \[(\lambda_{0},\ldots,\lambda_{d+1})=(-q-r-1,\underbrace{-r,\ldots,-r}_{k-1}, \underbrace{r,\ldots,r}_{k},q+1)\in\mathbb{R}^{d+2}.\] By Lemma 4.2(a) we know that \(P_{q,r,k}\) has a triangulation \(\mathcal{T}_{+}\) whose full-dimensional simplices are \(S_{k},\ldots,S_{d+1}\), where \(S_{i}=\operatorname{conv}(v_{0},\ldots,v_{i-1},v_{i+1},\ldots,v_{d+1})\). By Lemma 4.2(b), the normalized volume of \(P_{q,r,k}\) is \(\alpha(kr+q+1)\), for some positive \(\alpha\in\mathbb{R}_{>0}\). 
By verifying that \(\operatorname{Vol}(S_{d+1})=q+1\) we can deduce that \(\alpha=1\), from which we get \(\operatorname{Vol}(P_{q,r,k})=kr+q+1\). Note that the edge between \(v_{0}\) and \(v_{d+1}\) contains the \(r-1\) (non-vertex) lattice points of the form \((0,\ldots,0,-a,\ldots,-a)\) for \(1\leq a\leq r-1\). We can then deduce that \(|P_{q,r,k}\cap\mathbb{Z}^{d}|\geq d+2+(r-1)\), where \(d+2\) is the number of vertices of \(P_{q,r,k}\). From this it follows that \(h^{*}_{P_{q,r,k},1}\geq r\). We now verify that \(P_{q,r,k}\) contains a unimodular copy of the Reeve simplex \(R_{q+r,k}\). We do this by finding the following unimodular linear map \(A\) mapping \(R_{q+r,k}\) to \(S_{0}-v_{d+1}\), where \(S_{0}=\operatorname{conv}(v_{1},\ldots,v_{d+1})\subset P_{q,r,k}\): \[A=\begin{pmatrix}1&0&-r&\cdots&-r&-r\\ &\ddots&&-r&\cdots&-r&-r\\ 0&&1&-r&\cdots&-r&-r\\ 0&\cdots&0&-r+1&&-r&-r\\ \vdots&&\vdots&&\ddots&&\vdots\\ 0&\cdots&0&-r&&-r+1&-r\\ 0&\cdots&0&\underbrace{(k-1)r}_{k-1}&\cdots&(k-1)r&(k-1)r+1\end{pmatrix}.\] Since \(h^{*}_{R_{q+r,k}}(x)=1+(q+r)x^{k}\), it follows that \(h^{*}_{P_{q,r,k},k}\geq q+r\). Note that, although one can verify that \(A\) is indeed unimodular, the inequality above follows simply from \(A\) being injective, as \(A\) maps a full-dimensional simplex to another full-dimensional simplex. We conclude by noting that \(kr+q+1=\operatorname{Vol}(P_{q,r,k})\geq\sum_{i=0}^{k}h^{*}_{P_{q,r,k},i}\geq 1 +(k-1)r+(q+r)=kr+q+1\), which implies that all the lower bounds for the \(h^{*}\) coefficients we have found above are equalities. From the solutions (4) and (6) we found in Section 3, together with Proposition 4.3, we can finally find two counterexamples to Question 1.2, the smallest of which is in dimension 52. **Theorem 4.4**.: _There exists a 52-dimensional lattice polytope having a non-unimodal \(h^{*}\)-vector which is the cartesian product of two lattice polytopes having unimodal \(h^{*}\)-vectors. 
There exists a 54-dimensional lattice polytope having a non-unimodal \(h^{*}\)-vector which is the cartesian product of a lattice polytope having unimodal \(h^{*}\)-vector with itself._ Proof.: Let \(P_{1}=P_{13,1,13}\subset\mathbb{R}^{25}\) and \(P_{2}=P_{8,1,14}\subset\mathbb{R}^{27}\). By Proposition 4.3, \[h^{*}_{P_{1}}=(1,\underbrace{1,\ldots,1}_{12},14,\underbrace{0,\ldots,0}_{12}),\qquad h^{*}_{P_{2}}=(1,\underbrace{1,\ldots,1}_{13},9,\underbrace{0,\ldots,0}_{13}),\] both of which are unimodal. Computing the \(h^{*}\)-vector of the 52-dimensional product \(P_{1}\times P_{2}\) explicitly, one finds that the coefficient \(h^{*}_{24}\) is strictly smaller than both the previous and the following coefficients, so \(h^{*}_{P_{1}\times P_{2}}\) is not unimodal. The same computation for the 54-dimensional product \(P_{2}\times P_{2}\) yields the second counterexample. It is worth mentioning that the polytopes used in the proof of Theorem 4.4, although spanning, are not IDP nor very ample. In particular, they are not counterexamples to the unimodality conjecture in these cases. ## 5. Future directions and challenges While we now know that unimodality of \(h^{*}\)-vectors of lattice polytopes is not guaranteed when taking the product of polytopes with unimodal \(h^{*}\)-vectors, we still do not know what can be said about log-concavity. This means that [11, Question 3.4 (b)] is still open. 
Although one might think that the techniques used in this paper could be used to approach that question, a strong impediment is that there is no equivalent of Example 1.3 for log-concavity: we have no base case with which to initialize a genetic algorithm, so the same strategy cannot be applied. One might try to tweak a fitness function to specifically force evolution towards log-concave \(h^{*}\)-vectors, but besides the technical difficulties this might entail, there is reason to believe that a counterexample in the case of log-concavity might not exist; see the related discussion in [11]. The most immediate question that stems from this work is whether the cartesian product is a viable tool to construct counterexamples to the unimodality conjecture for IDP or very ample polytopes. **Question 5.1**.: 1. _Are there lattice polytopes_ \(P_{1}\) _and_ \(P_{2}\) _which are IDP, such that the_ \(h^{*}\)_-polynomial of_ \(P_{1}\times P_{2}\) _is not unimodal?_ 2. _Are there lattice polytopes_ \(P_{1}\) _and_ \(P_{2}\) _which are very ample, such that the_ \(h^{*}\)_-polynomial of_ \(P_{1}\times P_{2}\) _is not unimodal?_ Note that a positive answer to the first question would mean a counterexample to the unimodality conjecture for IDP polytopes; as such, we expect that the answer is negative. The second question, which is a natural generalization of the first, might be harder to answer, but it is important to remark that the \(h^{*}\)-vectors found in this paper (of which a random sample is shown in Table 1) are most certainly not the right candidates for building such examples, due to their peculiar shapes and the restrictions that these impose on the geometry of the polytopes. For this reason it is reasonable to believe that the answer to the second question is also negative. 
Finally, we believe that the family of polytopes \(P_{q,r,k}\) constructed in Proposition 4.3 could be a rich source of examples for unimodality and log-concavity questions in Ehrhart theory. As an example, Theorem 4.4 shows that they can be used to build nontrivial examples of spanning polytopes with non-unimodal \(h^{*}\)-vectors, while \(P_{3,6,107}\) can be used to replicate the counterexample from [11, Theorem 5.5] of a unimodal \(h^{*}\)-polynomial whose Ehrhart series is not log-concave.
2309.06483
Percolation Transition in a Topological Phase
Transition out of a topological phase is typically characterized by discontinuous changes in topological invariants along with bulk gap closings. However, as a clean system is geometrically punctured, it is natural to ask the fate of an underlying topological phase. To understand this physics we introduce and study both short and long-ranged toy models where a one dimensional topological phase is subjected to bond percolation protocols. We find that non-trivial boundary phenomena follow competing energy scales even while global topological response is governed via geometrical properties of the percolated lattice. Using numerical, analytical and appropriate mean-field studies we uncover the rich phenomenology and the various cross-over regimes of these systems. In particular, we discuss emergence of "fractured topological region" where an overall trivial system contains macroscopic number of topological clusters. Our study shows the interesting physics that can arise from an interplay of geometrical disorder within a topological phase.
Saikat Mondal, Subrata Pachhal, Adhip Agarwala
2023-09-12T18:00:24Z
http://arxiv.org/abs/2309.06483v2
# Percolation Transition in a Topological Phase ###### Abstract Transition out of a topological phase is typically characterized by discontinuous changes in topological invariants along with bulk gap closings. However, as a clean system is geometrically punctured, it is natural to ask the fate of an underlying topological phase. To understand this physics we introduce and study both short and long-ranged toy models where a one dimensional topological phase is subjected to bond percolation protocols. We find that non-trivial boundary phenomena follow competing energy scales even while global topological response is governed via geometrical properties of the percolated lattice. Using numerical, analytical and appropriate mean-field studies we uncover the rich phenomenology and the various cross-over regimes of these systems. In particular, we discuss emergence of "fractured topological region" where an overall trivial system contains macroscopic number of topological clusters. Our study shows the interesting physics that can arise from an interplay of geometrical disorder within a topological phase. _Introduction:_ Percolation transition [1; 2; 3; 4; 5; 6; 7] is a geometrical phase transition which deals with the connectivity of a graph without any underlying energetic considerations. For short-ranged bond percolation in one dimension [3; 6], with \(p\) as the probability of having the nearest-neighbor bond, the percolation transition happens at \(p_{c}=1\). However, in higher dimensions [8; 9; 10] it is known that \(p_{c}\) can be less than 1. Critical exponents at the percolation transition follow universal behavior similar to second-order phase transitions [5; 6; 11]. Thus, percolation transitions offer an alternate playground to study critical phenomena which are not engineered by thermal fluctuations but rather are governed by geometrical "fluctuations" of the graph. 
Interplay of such geometrical phase transitions with conventional Hamiltonian based classical and quantum symmetry broken phases have been long investigated [12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. In particular, the effective dimension of the critical cluster at the percolation threshold has been shown to determine the nature of the phase diagrams both in higher dimensions [15] and in long-range systems even in one dimension [22; 23; 24; 25; 18; 26]. However, similar questions in the context of topological quantum phases have been little explored [27; 28]. Such questions can help shed light on a plethora of experimental studies where the interplay of disorder and topological phases have become increasingly important [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. In this work, we pose the question- can a topological phase and its response functions detect an underlying geometrical (percolation) phase transition? To answer this question we introduce and study two toy models in one dimension where the graph is a one-dimensional chain with short and long range couplings and the decorated Hamiltonian on the graph is the paradigmatic Su-Schrieffer-Heeger (SSH) chain [41; 42] (see Fig. 1(a)). The first model we discuss is the short-ranged (SR) model, where the inter unit-cell (composed of sites A and B) bond hopping (\(w\)) in the conventional SSH model is percolated while the intra unit-cell hopping (\(v\)) is kept intact. Here, we find interesting signatures of the percolation transition as \(p\rightarrow(p_{c}=1)\) where response function such as polarization [43; 44; 45] approaches 1 and the system hosts zero energy modes. For \(p<1\) we show the breakdown of bulk-boundary correspondence where polarization is \(\sim\)_zero_ even though the system hosts robust zero modes. In order to capture the physics of \(p_{c}<1\) we introduce a long-range (LR) model where inter unit-cell bonds are now long-ranged and chosen probabilistically. 
In particular, the probability of a long-ranged bond falls as \(p/r^{\alpha}\) (\(\alpha>1\)) but the hopping strength is kept as \(w\), such that the geometrical \(p_{c}<1\). Interestingly, we find that typical signatures of a topological phase in one dimension, such as polarization (\(P\)) and robust zero-modes (\(N_{0}\)), follow distinctive behaviors reflecting either geometrical or energetic considerations. Using a combination of numerical and "mean-field" analytic studies, we show that these models can indeed capture interesting aspects of percolation transitions in a topological phase.

Figure 1: (a) Bond-diluted long-range (LR) SSH chain where intra unit-cell (\(A\) and \(B\) sites) hopping is \(v\). The probability of having the bond between the unit cells separated by a distance \(r\) is \(p_{n,n+r}=p/r^{\alpha}\) and the hopping strength is \(w_{n,n+r}=w\). In (b), a schematic phase diagram for \(p\) and \(\alpha\) in the deep topological phase (\(|w|\gg|v|\)) is shown. A schematic phase diagram for \(w/v\) with \(p\) for a fixed \(\alpha\) is shown in (c), where \(p^{*}\) is the threshold where end-to-end connectivity of the chain is established. \(P\) and \(N_{0}\) are the polarization and the number of zero-energy modes, respectively. The dotted curves in (c) are the mean-field critical curves (see text).

_Bond-diluted SSH chain:_ We first consider a percolating SR-SSH chain [41, 42] with the Hamiltonian \[H=v\sum_{n=1}^{L}(c_{n,A}^{\dagger}c_{n,B}+c_{n,B}^{\dagger}c_{n,A})\\ +\sum_{n=1}^{L-1}w_{n,n+1}(c_{n,B}^{\dagger}c_{n+1,A}+c_{n+1,A}^{ \dagger}c_{n,B}), \tag{1}\] where \(L\) is the number of unit cells and \(c_{n,A}\) (\(c_{n,B}^{\dagger}\)) is the annihilation (creation) operator for a fermion in sub-lattice \(A\) (\(B\)) of the \(n\)-th unit cell. 
The parameters \(v\) and \(w_{n,n+1}\) denote the intra and inter unit-cell hopping strengths, where the latter is chosen from a probability distribution: \[\tilde{\mathcal{P}}_{\text{SR}}(w_{n,n+1})=p\delta(w_{n,n+1}-w)+(1-p)\,\delta (w_{n,n+1}). \tag{2}\] Thus, \(p\) (where \(0\leq p\leq 1\)) is the probability of having a bond between the nearest neighboring unit cells with hopping strength \(w\). Choosing \(w\) and \(v\) as real, our system always belongs to the symmetry class BDI [46, 47]. We always study the system at half-filling, such that the Fermi energy remains pinned to zero. _Phase diagram:_ At \(p=1\) we restore a clean SSH chain (see Ref. [48, 49, 50]), which is topologically non-trivial with two zero-energy edge-modes (\(N_{0}=2\)) when \(|w|>|v|\) and is trivial when \(|w|<|v|\). The polarization (see Supplemental Material (SM) [51]) takes \(P=1\) and \(P=0\) in the two phases respectively, indicating the existence of the bulk-edge correspondence at \(p=1\). As the chain is diluted (i.e. \(p<1\)), we find that the polarization is always zero (\(P\sim 0\)) for both \(|w|>|v|\) and \(|w|<|v|\), as evident from Fig. 2(a). Importantly, with increasing system size, \(P\to 1\) (in the topological regime) only at \(p=1\) (see Fig. 2(b)). This would suggest that any amount of percolation in the short range SSH chain immediately renders it trivial. However, the number of zero-energy modes in the system shows a nontrivial behavior which we now discuss. Since the typical Kubo gap in the system is \(\Delta E\sim 1/L\), we define the number of zero modes \(N_{0}\) as the number of single-particle eigen-energies \(E_{i}\) lying within the window \(|E_{i}|<\Delta E\). We find that \(N_{0}\) is identically zero when \(|w|<|v|\) for all values of \(p\) but can take _non-zero_ values when \(|w|>|v|\) even when \(p<1\) (see Fig. 2(c)). This indicates a breakdown of the bulk-edge correspondence in the percolating (i.e. 
when \(p<1\)) topological phase, where even though the individual clusters have robust zero-energy modes, the overall system does not have a finite polarization. This implies that polarization is a truly global object, which requires end-to-end connectivity in the system. Interestingly, \(N_{0}\) goes to zero at a much lower value of \(p\), given by \(\sim|v/w|\), shown by a dashed line in Fig. 2(c). To estimate this competing energy scale, we construct an effective "mean-field" Hamiltonian (\(H_{\text{MF}}\)): a translationally invariant model in which the hopping strength between unit cells is scaled by the probability of having the corresponding connectivity. This is similar to the coherent potential approximation common in the study of alloys [52, 53]. For the short-range percolating SSH chain, \(H_{\text{MF}}\) corresponds to another SSH chain where \(w\) gets scaled by \(p\) (see [51] for details). Thus, the trivial to topological phase transition here happens at \(|wp|=|v|\). Therefore, while the appearance of the robust zero modes is governed by \(H_{\text{MF}}\), the behavior of polarization is determined by the geometric transition. Geometrical properties of the percolating clusters determine the number of zero-modes deep in the topological phase (i.e., \(|w|\gg|v|\)), since every cluster hosts two "edge modes". An upper bound on \(N_{0}\) is \(\approx 2\sum_{s=2}^{\infty}\mathcal{N}_{s}=2Lp(1-p)\), where \(\mathcal{N}_{s}\) is the number of clusters with \(s\) unit-cells (see [51] for details). This is confirmed numerically in Fig. 2(d). Thus the following qualitative phase diagram emerges: the percolation transition (\(p_{c}=1\)) is captured by polarization (\(P\)), while the number of zero-energy modes (\(N_{0}\)) follows the mean-field critical lines \(wp=\pm v\) (see Fig. 2(e)).

Figure 2: (a) Polarization (\(P\)) as a function of \(w/v\) and \(p\) for a bond-diluted short-range SSH chain (see Eqs. (1) and (2)) with \(L=64\). (b) \(P\) as a function of \(p\) for different sizes \(L\) (\(w/v=10.0\)). (c) The number of zero-energy modes (\(N_{0}\)) as a function of \(w/v\) and \(p\) for \(L=64\). (d) Behavior of \(N_{0}\) with \(p\) in the deep topological regime (\(w/v=100.0\)) for \(L=100\). (e) Schematic phase diagram with the parameters \(w/v\) and \(p\). (f) Mean energy-gap (\(\epsilon\)) per cluster (see Eq. (3)) in units of \(v\) (where \(v=1\)) as a function of \(p\) for \(L=100\) for different \(w/v\). The dotted lines in (a), (c), (e) are the mean-field critical lines: \(wp=\pm v\). \(p_{c}=1\) is the classical percolation threshold. Configuration averaging is performed over 200 realizations in (a, c) and over 1000 realizations in (b, d, f). The error-bars are always denoted by shaded thickness of the curves.

The approach to percolation transition criticality predicts interesting scaling for physical quantities. For instance, \(P\) approaches 1 near \(p\to p_{c}\) as \((1-P)\sim L^{-2/\nu_{p}}\), where \(\nu_{p}=1\), and is thus determined by the percolation exponent (see [51]). Similarly, the mean energy gap (\(\epsilon\)) per cluster, defined as \[\epsilon=\frac{\sum_{s}\epsilon_{s}\mathcal{N}_{s}}{\sum_{s}\mathcal{N}_{s}}, \tag{3}\] exhibits interesting behavior near \(p_{c}\) for different values of \(w,v\). Here, \(\epsilon_{s}\) is the energy-gap in a cluster with \(s\) unit cells. A typical cluster of size \(s\), for \(|w|\gg|v|\), has \(\epsilon_{s}\sim\exp(-s/\lambda)\) because of hybridizing edge modes. For \(|w|\ll|v|\), the clusters are trivial and gapped with \(\epsilon_{s}\sim 2|v-w|\), and at \(|w|=|v|\), \(\epsilon_{s}\sim 1/s\). This results in a linear behavior of \(\epsilon\to 0\) as \(p\to 1\) at \(|w|\gg|v|\) and a logarithmic dependence at \(|w|=|v|\) (see [51] for details). All these results are numerically confirmed in Fig. 2(f). Thus, the short-ranged SSH chain shows the non-trivial role of percolation physics in a topological phase. 
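The estimate \(2\sum_{s\geq 2}\mathcal{N}_{s}=2Lp(1-p)\) rests on a counting argument: clusters with at least two unit cells correspond to maximal runs of consecutive intact \(w\)-bonds, and each such cluster contributes two edge modes. A small sketch (ours, not the authors' code) verifies the expected run count exactly by enumerating all bond configurations of a short open chain; for finite \(L\) the exact expectation is \(p+(L-2)p(1-p)\), which approaches \(Lp(1-p)\) for large \(L\):

```python
from fractions import Fraction
from itertools import product

def expected_clusters_ge2(L, p):
    """Exact expected number of clusters with >= 2 unit cells in a chain of
    L unit cells, where each of the L-1 inter-cell bonds is present with
    probability p. Exhaustive enumeration; only feasible for small L."""
    total = Fraction(0)
    for bonds in product([0, 1], repeat=L - 1):
        weight = Fraction(1)
        for b in bonds:
            weight *= p if b else (1 - p)
        # a cluster with >= 2 cells <-> a maximal run of consecutive bonds
        runs = sum(1 for i, b in enumerate(bonds)
                   if b and (i == 0 or not bonds[i - 1]))
        total += weight * runs
    return total

L, p = 6, Fraction(1, 3)
exact = expected_clusters_ge2(L, p)
closed_form = p + (L - 2) * p * (1 - p)   # -> L p (1 - p) for large L
assert exact == closed_form
```

The closed form follows from linearity of expectation: a run starts at bond \(i\) if that bond is present and the previous one (if any) is absent.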
We now discuss long-range chains where \(p_{c}\) can be tuned below 1. _Bond-diluted long-range SSH chain:_ Geometrically, for long-ranged bond-diluted chains [24; 26; 23], where the probability of having a bond between the \(n\)-th and \((n+r)\)-th site goes as \(p_{n,n+r}=p/r^{\alpha}\) (where \(\alpha>1\)), the percolation transition is found to occur at \(p_{c}=1\) for \(\alpha>2\) and \(p_{c}<1\) for \(1<\alpha<2\), respectively [24; 26]. Thus, the physics of the two situations (\(\alpha>2\) and \(1<\alpha<2\)) is determined by effectively short-ranged and long-ranged theories within a renormalization group study [24; 26]. An estimate of the percolation threshold \(p_{c}\) can be obtained using the Bethe threshold (\(p_{c}^{\rm Bethe}\)) [23; 26], which is the value of \(p\) where a site is coupled to at least one other site, i.e., \(p_{c}^{\rm Bethe}\sum_{r}(2/r^{\alpha})\sim 1\), which implies \(p_{c}^{\rm Bethe}\sim 1/(2\mathrm{Li}_{\alpha}(1))\), where \(\mathrm{Li}_{\alpha}(x)=\sum_{r=1}^{\infty}(x^{r}/r^{\alpha})\). However, careful analytical and numerical studies have shown that \(p_{c}\geq p_{c}^{\rm Bethe}\) [23; 26]. In order to study the role of a topological quantum Hamiltonian on such an LR chain, we study the following model: \[H = v\sum_{n=1}^{L}(c_{n,A}^{\dagger}c_{n,B}+c_{n,B}^{\dagger}c_{n,A}) \tag{4}\] \[+\sum_{n=1}^{L-1}\sum_{r=1}^{L-n}w_{n,n+r}(c_{n,B}^{\dagger}c_{n +r,A}+c_{n+r,A}^{\dagger}c_{n,B}),\] where the probability distribution of the inter-cell hopping strength \(w_{n,n+r}\) between the \(n\)-th and \((n+r)\)-th unit cell is \[\tilde{\mathcal{P}}_{\rm LR}(w_{n,n+r})=\frac{p}{r^{\alpha}}\delta(w_{n,n+r}-w )+\Big{(}1-\frac{p}{r^{\alpha}}\Big{)}\delta(w_{n,n+r}), \tag{5}\] where \(w_{n,n+r}\) can assume the values \(w\) (\(\neq 0\)) and zero with the probabilities \(p/r^{\alpha}\) and \((1-(p/r^{\alpha}))\), respectively, where \(\alpha>1\). 
The probability of having a bond between the unit cells decreases with the distance \(r\), but the inter-cell hopping strength (\(w\)) remains constant. The corresponding \(H_{\rm MF}\), where the inter unit-cell hopping goes as \(\sim\frac{wp}{r^{\alpha}}\) (see [51]), governs physics determined by the average energetics of the problem. _Results and Phase diagram:_ For the LR system we calculate \(N_{0}\) and find it is non-zero in the same parameter regime where the corresponding \(H_{\rm MF}\) is non-trivial (see Fig. 3(a), where the mean-field critical lines and \(N_{0}\) are shown). The mean-field critical lines can be easily estimated to be \(wp=(-v)/(\mathrm{Li}_{\alpha}(-1))\) and \(wp=(-v)/(\mathrm{Li}_{\alpha}(1))\). This also corroborates the behavior of the SR model in the \(\alpha\gg 2\) limit. The number of zero modes, however, is dependent on the nature of the geometrical disorder, as can be seen in Fig. 3(b), where Model-LR has a significantly higher number of zero modes compared to the mean-field (MF) model. This is expected since for the translationally invariant MF model \(N_{0}=2\) in the topological regime, while in LR, given a finite \(p_{c}\), the system has a larger number of disconnected clusters for the same value of \(\alpha\) and \(w/v\) (see [51] for details). We next investigate the behavior of polarization in this system. While in the trivial regime (\(|w|<|v|\)) \(P\sim 0\) irrespective of the value of \(\alpha\) and \(p\), deep in the topological regime the behavior of \(P\) is quite unique. For \(|w|\gg|v|\), when \(\alpha\gg 2\), \(P\sim 1\) only when \(p\to 1\), consistent with the SR system. However, as \(\alpha\) is reduced, we find that \(P\) undergoes a transition from 1 to zero at neither the mean field value _nor_ the geometrical percolation transition. The transition happens at a different crossover scale of \(p\), which we label \(p^{*}\). 
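The mean-field critical lines can be checked directly: the MF Bloch Hamiltonian has off-diagonal element \(h(k)=v+wp\,\mathrm{Li}_{\alpha}(e^{ik})\), and the phase is topological exactly when \(h(k)\) winds around the origin. The sketch below (our illustration, not the authors' code; the polylog is truncated to a finite sum) evaluates this winding numerically for \(\alpha=1.6\), where the critical values for \(v=1\) sit near \(wp\approx-0.44\) and \(wp\approx+1.29\):

```python
import cmath

def polylog(alpha, z, n_terms=1500):
    """Truncated Li_alpha(z) = sum_{r>=1} z^r / r^alpha (converges for alpha > 1)."""
    return sum(z ** r / r ** alpha for r in range(1, n_terms + 1))

def winding(v, wp, alpha, n_k=300):
    """Winding of h(k) = v + wp * Li_alpha(e^{ik}) around the origin:
    the mean-field SSH invariant of the percolated long-range chain,
    accumulated as a sum of small phase increments over the Brillouin zone."""
    hs = [v + wp * polylog(alpha, cmath.exp(2 * cmath.pi * 1j * m / n_k))
          for m in range(n_k + 1)]
    total = sum(cmath.phase(h2 / h1) for h1, h2 in zip(hs, hs[1:]))
    return round(total / (2 * cmath.pi))

assert abs(winding(1.0, 2.0, 1.6)) == 1   # beyond wp = -v/Li_a(-1): topological
assert winding(1.0, 1.0, 1.6) == 0        # between the critical lines: trivial
assert winding(1.0, -0.3, 1.6) == 0
assert abs(winding(1.0, -1.0, 1.6)) == 1
```

The gap closes at \(k=0\) when \(v+wp\,\mathrm{Li}_{\alpha}(1)=0\) and at \(k=\pi\) when \(v+wp\,\mathrm{Li}_{\alpha}(-1)=0\), which is where the winding (and hence \(N_{0}\) in the MF model) changes.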
This divides the complete phase space into three distinct regions. For \(p>p^{*}\), both \(P\sim 1\) and \(N_{0}\neq 0\), and thus the system is "topological" even while percolation disorder is present. For \(p<p_{\rm MF}\) (where \(p_{\rm MF}=|(-v)/(w\mathrm{Li}_{\alpha}(1))|\)), both \(P\) and \(N_{0}\) are _zero_, signalling a trivial phase. In between, for \(p_{\rm MF}<p<p^{*}\), while the system has a number of zero modes (coming from underlying topological clusters), \(P\sim 0\), signalling an absence of any global topological response (see Fig. 4(a),(b)). We dub this regime a Fractured Topological Region (FTR). Study of the local density of states [LDOS(\(n\)) = \(\sum_{i}|\langle n|\psi_{i}\rangle|^{2}\), where \(i\) runs over all the zero-energy eigenstates and \(n\) is the site index] also confirms the existence of zero-energy modes localized at the "edges" of different clusters (even in the bulk) in the FTR, as shown in Fig. 4(c). The fractured region is thus characterized by an extensive number of zero modes (see [51]), determined by a combination of topological and geometrical properties, before the system eventually transitions to a trivial phase. However, for \(p>p^{*}\) the system undergoes a transition from the topological to the trivial phase as \(w/v\) is tuned, at a value determined by \(H_{\rm MF}\). This qualitatively points to the schematic phase diagrams shown in Fig. 1(b) and (c). We note that while at \(\alpha\gg 2\) the value of \(p^{*}=p_{c}\sim 1\), at small \(\alpha\), \(p^{*}\neq p_{c}\).

Figure 3: (a) Number of zero-modes (\(N_{0}\)) as a function of \(w/v\) and \(p\) for Model-LR with \(\alpha=1.6\) and \(L=100\); (b) \(N_{0}\) as a function of \(w/v\) for LR and mean-field (MF) model with \(\alpha=1.6\), \(p=0.3\) and \(L=100\). The dotted lines are the MF critical lines \(wp=(-v)/(\mathrm{Li}_{\alpha}(-1))\) and \(wp=(-v)/(\mathrm{Li}_{\alpha}(1))\). Configuration averaging is performed over 200 realizations in (a) and over 1000 realizations in (b). 
Thus the polarization physics is not just decided by the formation of a spanning cluster; in fact we find \(p^{*}\) is broadly the value of \(p\) when \(\sum_{r=1}^{\infty}\frac{p}{r^{\alpha}}\sim 1\) (i.e. \(p^{*}\sim({\rm Li}_{\alpha}(1))^{-1}\)) which is the probability when a site is connected to any site on one of its sides. Thus \(p^{*}\) represents end-to-end connectivity between the two ends of the lattice which is \(\to 1\) as \(\alpha\gg 2\) thus coinciding with the geometrical \(p_{c}\). In another model where \(w_{n,n+r}\) has the following probability distribution (\(p_{c}=0\)) \[\tilde{\mathcal{P}}_{\rm LR2}(w_{n,n+r})=p\delta\left(w_{n,n+r}-\frac{w}{r^{ \alpha}}\right)+(1-p)\,\delta(w_{n,n+r}), \tag{6}\] another \(p^{*}\) emerges due to different percolation properties of the links (see [51] for details), even though the \(p_{\rm MF}\) remains the same as before. _Discussion:_ It is pertinent to note that while in a translationally symmetric Hamiltonian, a topological and trivial phase are separated by a critical point where both the topological invariant and its boundary manifestations undergo concurrent transitions - the same physics does not occur under a geometrical disorder. As a topological phase is punctured (here using a percolation protocol), it is expected that eventually the topological phase gives way to a trivial phase. However, our study shows that these two well-defined limits are separated by a large region where bulk-boundary correspondence breaks down. We also point out that even though we maintain the symmetry class of the underlying parent topological phase; these different regimes which seem to be separated out by distinct energy scales or geometrical properties are not separated via a critical point in the sense of a diverging length scale. 
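Both the end-to-end connectivity scale \(p^{*}\sim(\mathrm{Li}_{\alpha}(1))^{-1}\) and the Bethe threshold \(p_{c}^{\rm Bethe}\sim 1/(2\mathrm{Li}_{\alpha}(1))\) reduce to evaluating \(\mathrm{Li}_{\alpha}(1)=\zeta(\alpha)\). A short sketch (ours, not the authors' code; partial sum plus an integral tail estimate) shows how these thresholds approach 1 and 1/2 as \(\alpha\) grows, consistent with the short-range limit:

```python
import math

def li_alpha_1(alpha, n_terms=100_000):
    """Li_alpha(1) = zeta(alpha): partial sum plus an integral tail estimate."""
    s = sum(r ** -alpha for r in range(1, n_terms + 1))
    # tail: sum_{r>N} r^-alpha ~ integral_N^inf x^-alpha dx = N^(1-alpha)/(alpha-1)
    return s + n_terms ** (1 - alpha) / (alpha - 1)

def p_star(alpha):
    """End-to-end connectivity crossover, p* ~ 1/Li_alpha(1)."""
    return 1.0 / li_alpha_1(alpha)

def p_bethe(alpha):
    """Bethe estimate of the percolation threshold, p_c^Bethe ~ 1/(2 Li_alpha(1))."""
    return p_star(alpha) / 2

assert abs(p_bethe(2.0) - 3 / math.pi ** 2) < 1e-6     # since zeta(2) = pi^2/6
assert p_star(1.5) < p_star(3.0) < p_star(10.0)        # p* -> 1 as alpha grows
```

For \(\alpha=3\), for instance, \(p^{*}=1/\zeta(3)\approx 0.832\), while for \(\alpha\gg 2\) the crossover merges with the geometrical \(p_{c}\to 1\).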
Studying the behavior of quantities such as the gap, fidelity susceptibility, inverse participation ratios and entanglement entropy suggests that all these phases and their crossover boundaries have localized states near the Fermi energy (see [51] for details). In this sense, these transitions are also distinct from studies of perturbative disorder such as Anderson/hopping disorder, where no geometrical phase transitions occur [54; 55; 37; 56]. We also point out that polarization, and its real space equivalent as a topological response, has been rigorously investigated for one dimensional topological insulators in both clean and disordered settings [54; 55; 37; 56]; however, it has not been discussed in the context of geometrical transitions. While we have used \(P\) to characterize the global topological response of the phase here, it is important to see if there are other topological markers which may be better suited to capture such physics. _Outlook:_ While the nature of topological phases and their phase transitions is well understood in crystalline systems, an alternate method to trivialize a topological phase is via geometrical disorder. Unlike perturbative energetic disorder, where a disorder energy scale is introduced, geometrical disorder allows an interplay of both geometrical and Hamiltonian-based phase transitions in the same system. We create toy models based on the simplest one-dimensional topological phase (SSH) to delineate such interplay of geometrical and topological physics. In the SR model we find that as a topological SSH model is percolated, the system creates a macroscopic number of topological zero modes, even when the global polarization \(\to 0\). We then study the LR model, where \(p_{c}<1\) can be achieved. We find a new percolation scale \(p^{*}\) above which polarization remains large, and below which we find a fractured topological region, where zero modes appear even in the absence of global topological order. 
Figure 4: (a) Polarization (\(P\)) as a function of \(\alpha\) and \(p\) for Model-LR with \(|w|/|v|=5.0\) and \(L=64\). (b) Number of zero-energy modes (\(N_{0}\)) and \(P\) with \(p\) for LR with \(|w|/|v|=5.0\), \(L=64\) and \(\alpha=3.0\). (c) Local density of states (LDOS) of the zero-energy modes as a function of site index (\(n\)) and \(p\) for LR with \(|w|/|v|=5.0\) and \(\alpha=3.0\). Here, \(p_{c}\) is classical percolation threshold, \(p^{*}=1/({\rm Li}_{\alpha}(1))\) and \(p_{MF}=|(-v)/(w{\rm Li}_{\alpha}(1))|\). Configuration averaging is performed over 200 realizations in (a) and over 1000 realizations in (b). Our work points out the interesting role of geometrical fluctuations in a topological phase and a generalization to higher dimensions and topologically ordered phases is a natural and exciting future direction. _Acknowledgements:_ We reminisce with love the memory of Prof. Amit Dutta with whom we started the discussions in this project. We acknowledge fruitful discussions with Diptarka Das, Deepak Dhar and Sumilan Banerjee. S.M. acknowledges support from PMRF Fellowship and S.P. acknowledges funding from IIT Kanpur Institute Fellowship. AA acknowledges support from IITK Initiation Grant (IITK/PHY/2022010). Numerical calculations were performed on the workstation _Wigner_ at IITK.
2303.00081
Exploiting non-orthogonal eigenmodes in a non-Hermitian optical system to realize sub-100 MHz magneto-optical filters
Non-Hermitian physics is responsible for many of the counter-intuitive effects observed in optics research opening up new possibilities in sensing, polarization control and measurement. A hallmark of non-Hermitian matrices is the possibility of non-orthogonal eigenvectors resulting in coupling between modes. The advantages of propagation mode coupling have been little explored in magneto-optical filters and other devices based on birefringence. Magneto-optical filters select for ultra-narrow transmission regions by passing light through an atomic medium in the presence of a magnetic field. Passive filter designs have traditionally been limited by Doppler broadening of thermal vapors. Even for filter designs incorporating a pump laser, transmissions are typically less than 15\% for sub-Doppler features. Here we exploit our understanding of non-Hermitian physics to induce non-orthogonal propagation modes in a vapor and realize better magneto-optical filters. We construct two new filter designs with ENBWs and maximum transmissions of 181~MHz, 42\% and 140~MHz, 17\% which are the highest figure of merit and first sub-100~MHz FWHM passive filters recorded respectively. This work opens up a range of new filter applications including metrological devices for use outside a lab setting and commends filtering as a new candidate for deeper exploration of non-Hermitian physics such as exceptional points of degeneracy.
Fraser D. Logue, Jack. D. Briscoe, Danielle Pizzey, Steven A. Wrathmall, Ifan G. Hughes
2023-02-28T20:53:59Z
http://arxiv.org/abs/2303.00081v1
Exploiting non-orthogonal eigenmodes in a non-Hermitian optical system to realize sub-100 MHz magneto-optical filters. ###### Abstract Non-Hermitian physics is responsible for many of the counter-intuitive effects observed in optics research opening up new possibilities in sensing, polarization control and measurement. A hallmark of non-Hermitian matrices is the possibility of non-orthogonal eigenvectors resulting in coupling between modes. The advantages of propagation mode coupling have been little explored in magneto-optical filters and other devices based on birefringence. Magneto-optical filters select for ultra-narrow transmission regions by passing light through an atomic medium in the presence of a magnetic field. Passive filter designs have traditionally been limited by Doppler broadening of thermal vapors. Even for filter designs incorporating a pump laser, transmissions are typically less than 15% for sub-Doppler features. Here we exploit our understanding of non-Hermitian physics to induce non-orthogonal propagation modes in a vapor and realize better magneto-optical filters. We construct two new filter designs with ENBWs and maximum transmissions of 181 MHz, 42% and 140 MHz, 17% which are the highest figure of merit and first sub-100 MHz FWHM passive filters recorded respectively. This work opens up a range of new filter applications including metrological devices for use outside a lab setting and commends filtering as a new candidate for deeper exploration of non-Hermitian physics such as exceptional points of degeneracy. ## 1 Introduction Non-Hermitian physics [1, 2, 3, 4, 5, 6] is opening up new possibilities in photonics from the creation of omnipolarizers [7, 8, 9] to sensing applications [10, 11, 12] and tailoring laser output [13, 14, 15]. 
In general, non-Hermitian matrices do not guarantee orthogonal eigenvectors [16, 17, 18] and have real eigenvalues (PT-symmetric) [19, 20, 21], imaginary eigenvalues (anti-PT-symmetric) [22, 23, 24] or complex eigenvalues throughout [25, 26]. Ultra-narrowband magneto-optical filters [27], which rely on complex refractive indices, are perfect candidates for non-Hermitian physics but have not yet been studied in this domain. Magneto-optical filters already see many applications in solar weather studies [28, 29, 30], laser frequency stabilization [31, 32, 33, 34, 35], LIDAR [36, 37, 38], quantum hybrid systems [39], underwater optical communications [40] and ghost imaging [41]. Optimizing filters is a balance between maximizing transmission and minimizing bandwidth. For example, tens of MHz equivalent noise bandwidth at \(<20\%\) transmission has been realized in active filters incorporating a pump beam [42, 43, 44]. On the other hand, several GHz equivalent noise bandwidth has been realized at \(>80\%\) transmission in passive designs [45, 46]. A fundamental limit on linewidths has not yet been found and could have future implications for metrology [47, 48, 49]. Of equal importance is the platform, particularly when filters are designed for use outside a laboratory setting. Most magneto-optical filters rely on thermal vapor cells due to their compact and resilient setups [50]. Thermal vapor filters can be tuned by varying the magnetic field, temperature and input polarization [51, 52] and can switch between different D-line operation [53]. It has been shown that by cascading cells, line-center or wing features can be selected [54]. However, the two setups require the rearrangement of optical elements and the interchange of \(\sim\)kG permanent magnets with \(\sim\)100 G solenoids. 
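The non-orthogonality of non-Hermitian eigenvectors noted above can be seen in a minimal \(2\times 2\) example. The following sketch is not from the paper and the matrices are purely illustrative; it computes eigenvectors via the quadratic formula and compares their overlaps for a Hermitian and a non-Hermitian matrix.

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues and unit eigenvectors of [[a, b], [c, d]],
    assuming real eigenvalues (discriminant >= 0)."""
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)
    pairs = []
    for lam in ((a + d + disc) / 2, (a + d - disc) / 2):
        # Solve (A - lam*I) v = 0 for a nonzero v.
        v = (b, lam - a) if b != 0 else (lam - d, c) if c != 0 else (1.0, 0.0)
        n = math.hypot(*v)
        pairs.append((lam, (v[0] / n, v[1] / n)))
    return pairs

def overlap(u, v):
    """|u . v|: zero exactly when the eigenvectors are orthogonal."""
    return abs(u[0] * v[0] + u[1] * v[1])

# Hermitian (real symmetric) matrix: eigenvectors are orthogonal.
(_, u1), (_, u2) = eig2(2.0, 1.0, 1.0, 2.0)
print(overlap(u1, u2))  # 0.0

# Non-Hermitian (non-normal) matrix: eigenvectors overlap,
# i.e. the two modes are coupled.
(_, w1), (_, w2) = eig2(1.0, 1.0, 0.0, 2.0)
print(overlap(w1, w2))  # ~0.707
```

The nonzero overlap in the second case is the linear-algebra fingerprint of the propagation-mode coupling the paper exploits.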
Note that several metrological applications, such as those that use space, atmospheric or local environmental conditions to detect fluctuations, require apparatus that is robust to the elements [55, 56, 57]. Therefore there is interest in finding a reconfigurable setup that allows easy switching between wing and line-center operation.
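The transmission/bandwidth trade-off discussed above is often summarised by a single figure of merit. Assuming the common convention \(\mathrm{FOM}=T_{\max}/\mathrm{ENBW}\) (this definition and the helper name below are assumptions of this sketch, not taken from the paper), the numbers quoted in the abstract give:

```python
# Figure of merit for an optical filter, assuming the convention
# FOM = T_max / ENBW (peak transmission over equivalent noise bandwidth).
def fom_per_ghz(t_max, enbw_mhz):
    return t_max / (enbw_mhz / 1000.0)  # in GHz^-1

# Numbers quoted in the abstract for the two new filter designs.
print(round(fom_per_ghz(0.42, 181.0), 2))  # 2.32
print(round(fom_per_ghz(0.17, 140.0), 2))  # 1.21
```

Under this convention, the 181 MHz / 42% design attains roughly twice the figure of merit of the narrower 140 MHz / 17% design, consistent with the abstract's claim that the former sets the record FOM while the latter's distinction is its sub-100 MHz FWHM.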
2309.09940
Testing the near-far connection with FIRE simulations: inferring the stellar mass function of the proto-Local Group at z > 6 using the fossil record of present-day galaxies
The shape of the low-mass (faint) end of the galaxy stellar mass function (SMF) or ultraviolet luminosity function (UVLF) at z > 6 is an open question for understanding which galaxies primarily drove cosmic reionisation. Resolved photometry of Local Group low-mass galaxies allows us to reconstruct their star formation histories, stellar masses, and UV luminosities at early times, and this fossil record provides a powerful `near-far' technique for studying the reionisation-era SMF/UVLF, probing orders of magnitude lower in mass than direct HST/JWST observations. Using 882 low-mass (Mstar < 10^9 Msun) galaxies across 11 Milky Way- and Local Group-analogue environments from the FIRE-2 cosmological baryonic zoom-in simulations, we characterise their progenitors at z ~ 6 - 9, the mergers/disruption of those progenitors over time, and how well their present-day fossil record traces the high-redshift SMF. A present-day galaxy with Mstar ~ 10^5 Msun (10^9 Msun) had ~1 (~30) progenitors at z ~ 7, and its main progenitor comprised ~100% (~50%) of the total stellar mass of all its progenitors at z ~ 7. We show that although only ~ 15% of the early population of low-mass galaxies survives to present day, the fossil record of surviving Local Group galaxies accurately traces the low-mass slope of the SMF at z ~ 6 - 9. We find no obvious mass dependence to the mergers and accretion, and show that applying this reconstruction technique to just the low-mass galaxies at z = 0 and not the MW/M31 hosts correctly recovers the slope of the SMF down to Mstar ~ 10^4.5 Msun at z > 6. Thus, we validate the `near-far' approach as an unbiased tool for probing low-mass reionisation-era galaxies.
Pratik J. Gandhi, Andrew Wetzel, Michael Boylan-Kolchin, Robyn E. Sanderson, Alessandro Savino, Daniel R. Weisz, Erik J. Tollerud, Guochao Sun, Claude-Andre Faucher-Giguere
2023-09-18T16:57:04Z
http://arxiv.org/abs/2309.09940v1
Testing the near-far connection with FIRE simulations: inferring the stellar mass function of the proto-Local Group at z > 6 using the fossil record of present-day galaxies ###### Abstract The shape of the low-mass (faint) end of the galaxy stellar mass function (SMF) or ultraviolet luminosity function (UVLF) at \(z\gtrsim 6\) is an open question for understanding which galaxies primarily drove cosmic reionisation. Resolved photometry of Local Group low-mass galaxies allows us to reconstruct their star formation histories, stellar masses, and UV luminosities at early times, and this fossil record provides a powerful 'near-far' technique for studying the reionisation-era SMF/UVLF, probing orders of magnitude lower in mass than direct HST/JWST observations. Using 882 low-mass (\(M_{\rm star}\lesssim 10^{9}\) M\({}_{\odot}\)) galaxies across 11 Milky Way- and Local Group-analogue environments from the FIRE-2 cosmological baryonic zoom-in simulations, we characterise their progenitors at \(z=6-9\), the mergers/disruption of those progenitors over time, and how well their present-day fossil record traces the high-redshift SMF. A present-day galaxy with \(M_{\rm star}\sim 10^{5}\) M\({}_{\odot}\) (\(\sim 10^{9}\) M\({}_{\odot}\)) had \(\approx 1\) (\(\approx 30\)) progenitors at \(z\approx 7\), and its main progenitor comprised \(\approx 100\%\) (\(\approx 50\%\)) of the total stellar mass of all its progenitors at \(z\approx 7\). We show that although only \(\sim 15\%\) of the early population of low-mass galaxies survives to present day, the fossil record of surviving Local Group galaxies accurately traces the low-mass slope of the SMF at \(z\sim 6-9\). We find no obvious mass dependence to the mergers and accretion, and show that applying this reconstruction technique to just the low-mass galaxies at \(z=0\) and not the MW/M31 hosts correctly recovers the slope of the SMF down to \(M_{\rm star}\sim 10^{4.5}\) M\({}_{\odot}\) at \(z\gtrsim 6\). 
Thus, we validate the 'near-far' approach as an unbiased tool for probing low-mass reionisation-era galaxies. keywords: galaxies: high-redshift - galaxies: Local Group - galaxies: evolution - cosmology: reionisation ## 1 Introduction ### Motivating questions The Epoch of Reionisation (EoR), during which the hydrogen gas in the intergalactic medium (IGM) went from being neutral to ionised, was one of the most important phase transitions in the history of the Universe. The current consensus is that energetic radiation from the first star-forming galaxies predominantly drove cosmic reionisation, but major questions remain about the nature of the galaxies that contributed most to the overall ionising photon budget. Most models of reionisation argue that low- to intermediate-mass galaxies contributed more total ionising photons than brighter, more massive galaxies, because of their larger numbers and higher ionising photon escape fractions (e.g., Kuhlen & Faucher-Giguere, 2012; Wise et al., 2014; Robertson et al., 2015; Ma et al., 2018) (though see e.g., Naidu et al., 2020, for an example of analyses that favour more massive galaxies for driving the bulk of reionisation). To address this question of whether lower mass, fainter galaxies did indeed drive a majority of the reionisation process, we need to understand the shape of the galaxy stellar mass function (SMF) and rest-frame ultra-violet luminosity function (UVLF) during the EoR at \(z\gtrsim 6\). _A key specific question is: what is the slope of the galaxy SMF/UVLF at the low-mass/faint end at \(z\gtrsim 6\)?_ Direct HST observations have provided strong constraints on the UVLF at \(z\sim 6-9\) for galaxies as faint as \(M_{\rm UV}\sim-17\), and studies that leveraged the power of gravitational lensing have provided information about systems that are \(\approx 2-3\) orders of magnitude fainter (Finkelstein et al., 2015; Bouwens et al., 2017; Livermore et al., 2017;
2309.16860
Hyperbolicity in non-metric cubical small-cancellation
Given a non-positively curved cube complex $X$, we prove that the quotient of $\pi_1X$ defined by a cubical presentation $\langle X\mid Y_1,\dots, Y_s\rangle$ satisfying sufficient non-metric cubical small-cancellation conditions is hyperbolic provided that $\pi_1X$ is hyperbolic. This generalises the fact that finitely presented classical $C(7)$ small-cancellation groups are hyperbolic.
Macarena Arenas, Kasia Jankiewicz, Daniel T. Wise
2023-09-28T21:21:49Z
http://arxiv.org/abs/2309.16860v2
# Hyperbolicity in non-metric cubical small-cancellation ###### Abstract. Given a non-positively curved cube complex \(X\), we prove that the quotient of \(\pi_{1}X\) defined by a cubical presentation \(\langle X\mid Y_{1},\ldots,Y_{s}\rangle\) satisfying sufficient non-metric cubical small-cancellation conditions is hyperbolic provided that \(\pi_{1}X\) is hyperbolic. This generalises the fact that finitely presented classical \(C(7)\) small-cancellation groups are hyperbolic. Key words and phrases: Small Cancellation, Cube Complexes, Hyperbolic Groups 2010 Mathematics Subject Classification: 20F06, 20F67, 20F65 ## 1. Introduction A cubical presentation is a higher dimensional generalisation of a standard group presentation in terms of generators and relators. A non-positively curved cube complex \(X\) plays the role of the "generators", and the "relators" are local isometries of non-positively curved cube complexes \(Y_{i}\hookrightarrow X\). The associated group is the quotient of \(\pi_{1}X\) by the normal closure \(\langle\!\langle\,\{\pi_{1}Y_{i}\}\,\rangle\!\rangle_{\pi_{1}X}\) of the subgroups \(\pi_{1}Y_{i}\). Just as in the classical setting, this group is the fundamental group of \(X\) with the \(Y_{i}\)'s coned off. Likewise, cubical small-cancellation theory, introduced in [20], is a generalisation of classical small-cancellation theory (see e.g. [10]). In both the classical and cubical cases, the small-cancellation conditions are expressed in terms of pieces. A _piece_ in a classical presentation is a word that appears in two different places among the relators. The _non-metric small-cancellation condition_ \(C(p)\), where \(p>1\), asserts that no relator is a concatenation of fewer than \(p\) pieces. The _metric small-cancellation condition_ \(C^{\prime}(\alpha)\), where \(\alpha\in(0,1)\), asserts that \(|P|\leq\alpha|R|\) whenever \(P\) is a piece in a relator \(R\). Note that \(C^{\prime}(\frac{1}{p})\Rightarrow C(p+1)\). 
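The classical conditions can be checked mechanically for a concrete presentation. The sketch below is not from the paper: it encodes inverse generators as uppercase letters, closes the relator set under cyclic shifts and formal inversion (the standard symmetrisation), and finds the longest piece as the longest common prefix of two distinct symmetrised words; the encoding and helper names are illustrative, and proper-power relators are not handled.

```python
def inverse(w):
    """Formal inverse of a word; uppercase letters stand for inverse generators."""
    return w[::-1].swapcase()

def symmetrize(relators):
    """Close the relator set under cyclic shifts and inversion."""
    out = set()
    for r in relators:
        for w in (r, inverse(r)):
            for i in range(len(w)):
                out.add(w[i:] + w[:i])
    return out

def max_piece(relators):
    """Length of the longest piece: the longest common prefix of two distinct
    symmetrised words, i.e. a subword occurring in two different places."""
    words = sorted(symmetrize(relators))
    best = 0
    # In sorted order, the longest common prefix occurs between neighbours.
    for u, v in zip(words, words[1:]):
        k = 0
        while k < min(len(u), len(v)) and u[k] == v[k]:
            k += 1
        best = max(best, k)
    return best

# Genus-2 surface relator a b a^-1 b^-1 c d c^-1 d^-1: every piece is a
# single letter, so the length-8 relator needs 8 pieces: C(8) holds, hence C(7).
r = "abABcdCD"
print(max_piece([r]))  # 1
```

With the longest piece of length 1 against a relator of length 8, the metric condition \(C^{\prime}(\alpha)\) holds for any \(\alpha>\frac{1}{8}\), illustrating the implication \(C^{\prime}(\frac{1}{7})\Rightarrow C(8)\) stated above.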
Pieces in cubical presentations are defined similarly, and the same implication holds in the cubical case. Cubical small-cancellation has proven to be a fruitful tool in the study of groups acting on CAT(0) cube complexes. It was used by Wise as a step in his proof of the Malnormal Special Quotient Theorem [20], and as such, played a crucial role in the proofs of the Virtual Haken and Virtual Fibering conjectures [1]. Cubical presentations and cubical small-cancellation theory were also studied and utilised in [1, 2, 3, 4, 5, 6, 7, 8]. Classical small cancellation groups have virtual cohomological dimension \(\leq 2\) [11]. The classical case serves as a model: the definitions for cubical presentations in Section 2.2 can be particularised 
to this setting, and a version of Greendlinger's Lemma also holds (see for instance [12, V.4.5]). **Theorem**.: _Let \(X\) be a \(2\)-complex satisfying the \(C(7)\) small-cancellation condition. Then \(X\) is hyperbolic with the piece metric._ This implies hyperbolicity of finitely presented \(C(7)\) groups, since in that case the piece-metric and the usual combinatorial metric on the Cayley graph are quasi-isometric (see Proposition 4.3). Proof.: We check that all bigons in \(X^{(0)}\) are \(\epsilon\)-thin in the piece metric for some \(\epsilon>0\), and apply Proposition 3.1 to conclude that \(X\) is hyperbolic. Let \(\gamma_{1},\gamma_{2}\) be piece-geodesics forming a bigon in \(X\), and let \(D\to X\) be a reduced disc diagram with \(\partial D=\gamma_{1}\bar{\gamma}_{2}\). We claim that \(D\) is a (possibly degenerate) ladder, and hence that the bigon \(\gamma_{1},\gamma_{2}\) is \(1\)-thin, since by definition any two cells in a ladder intersect along at most \(1\) piece. Indeed, by Greendlinger's Lemma, \(D\) is either a ladder, or contains at least \(3\) shells and/or spurs. First note that \(D\) cannot have spurs, as these can be removed to obtain paths \(\gamma_{1}^{\prime},\gamma_{2}^{\prime}\) with the same endpoints as \(\gamma_{1},\gamma_{2}\), and which are shorter in the piece metric, thus contradicting that \(\gamma_{1},\gamma_{2}\) are piece-geodesics. If \(D\) has at least \(3\) shells, then at least \(1\) shell \(S\) must have its outerpath \(Q\) contained in either \(\gamma_{1}\) or \(\gamma_{2}\). Since both cases are analogous, assume \(Q\subset\gamma_{1}\), and let \(R\) be the innerpath of \(S\). 
Since \(S\) is a shell and \(X\) satisfies \(C(7)\), \(R\) is the concatenation of at most \(3\) pieces, so \(|R|_{\mathfrak{p}}<|Q|_{\mathfrak{p}}\), and the path \(\gamma_{1}^{\prime}\) obtained from \(\gamma_{1}\) by traversing \(R\) instead of \(Q\) is the concatenation of fewer pieces than \(\gamma_{1}\), contradicting that \(\gamma_{1}\) is a piece-geodesic. Thus, \(D\) is a ladder, and the proof is complete. ### Structure of the paper The paper is organised as follows. In Section 2, we give background on cube complexes, cubical group presentations, and cubical small-cancellation. In Section 3, we recall a criterion for hyperbolicity for groups acting on graphs. In Section 4, we define and analyse the piece metric. In Section 5, we prove Theorem 5.1. **Acknowledgement**.: The first author was supported by a Cambridge Trust & Newnham College Scholarship, and by the Denman Baynes Junior Research Fellowship. The second author was supported by the NSF grants DMS-2203307 and DMS-2238198. The third author was supported by NSERC. ## 2. Cubical background ### Non-positively curved cube complexes We assume that the reader is familiar with _CAT(0) cube complexes_, which are CAT(0) spaces having cell structures where each cell is isometric to a cube. We refer the reader to [1, 11, 12]. A _non-positively curved cube complex_ is a cell-complex \(X\) whose universal cover \(\widetilde{X}\) is a CAT(0) cube complex. A _hyperplane_ \(\widetilde{H}\) in \(\widetilde{X}\) is a subspace whose intersection with each \(n\)-cube \([0,1]^{n}\) is either empty or consists of the subspace where exactly one coordinate is restricted to \(\frac{1}{2}\). For a hyperplane \(\widetilde{H}\) of \(\widetilde{X}\), we let \(N(\widetilde{H})\) denote its _carrier_, which is the union of all closed cubes intersecting \(\widetilde{H}\). 
The _combinatorial metric_ \(\mathsf{d}\) on the 0-skeleton of a non-positively curved cube complex \(X\) is a length metric where the distance between two points is the length of the shortest combinatorial path connecting them. A map \(\phi:Y\to X\) between non-positively curved cube complexes is a _local isometry_ if \(\phi\) is locally injective, \(\phi\) maps open cubes homeomorphically to open cubes, and whenever \(a,b\) are concatenable edges of \(Y\), if \(\phi(a)\phi(b)\) is a subpath of the attaching map of a 2-cube of \(X\), then \(ab\) is a subpath of a 2-cube in \(Y\). ### Cubical presentations We recall the notion of a cubical presentation, and the cubical small-cancellation conditions from [20]. A _cubical presentation_ \(\langle X\mid Y_{1},\ldots,Y_{m}\rangle\) consists of a non-positively curved cube complex \(X\), and a set of local isometries \(Y_{i}\looparrowright X\) of non-positively curved cube complexes. We use the notation \(X^{*}\) for the cubical presentation above. As a topological space, \(X^{*}\) consists of \(X\) with a cone on \(Y_{i}\) attached to \(X\) for each \(i\). The vertices of the cones on \(Y_{i}\)'s will be referred to as _cone-vertices_ of \(X^{*}\). The cellular structure of \(X^{*}\) consists of all the original cubes of \(X\), and the "pyramids" over cubes in \(Y_{i}\) with a cone-vertex for the apex. As mentioned in the introduction, cubical presentations generalise "standard" group presentations. Indeed, a standard presentation complex associated with a group presentation \(G=\langle S\mid R\rangle\) can be viewed as a cubical presentation where the non-positively curved cube complex \(X\) is just a wedge of circles, one corresponding to each generator in \(S\). The complexes \(Y_{i}\) correspond to relators \(r_{i}\) in \(R\). Each cycle \(Y_{i}\) has length \(|r_{i}|\), and the local isometry \(Y_{i}\looparrowright X\) is defined by labelling the edges of \(Y_{i}\) with the letters of \(r_{i}\). 
The universal cover \(\widetilde{X^{*}}\) consists of a cube complex \(\widehat{X}\) with cones over copies of \(Y_{i}\)'s. The complex \(\widehat{X}\) is a covering space of \(X\). A _combinatorial geodesic_ in \(\widetilde{X^{*}}\) is a combinatorial geodesic in \(\widehat{X}\), viewed as a path in \(\widetilde{X^{*}}\). ### Disc diagrams in \(X^{*}\) Throughout this paper, we will be analysing properties of _disc diagrams_, which we introduce below together with some associated terminology: A map \(f:X\longrightarrow Y\) between 2-complexes is _combinatorial_ if it maps cells to cells of the same dimension. A complex is _combinatorial_ if all attaching maps are combinatorial, possibly after subdividing the cells. A _disc diagram_ is a compact, contractible 2-complex \(D\) with a fixed planar embedding \(D\subseteq\mathbb{S}^{2}\). The embedding \(D\hookrightarrow\mathbb{S}^{2}\) induces a cell structure on \(\mathbb{S}^{2}\), consisting of the 2-cells of \(D\) together with an additional 2-cell, which is the 2-cell at infinity when viewing \(\mathbb{S}^{2}\) as the one point compactification of \(\mathbb{R}^{2}\). The _boundary path_ \(\partial D\) of \(D\) is the attaching map of the 2-cell at infinity. Similarly, an _annular diagram_ is a compact 2-complex \(A\) with a fixed planar embedding \(A\subseteq\mathbb{S}^{2}\) and the homotopy type of \(\mathbb{S}^{1}\). The annular diagram \(A\) has two boundary cycles \(\partial_{in}A\) and \(\partial_{out}A\). A _disc diagram in \(X^{*}\)_ is a combinatorial map \((D,\partial D)\to(X^{*},X^{(1)})\) of a disc diagram. The 2-cells of a disc diagram \(D\) in \(X^{*}\) are of two kinds: squares mapping onto squares of \(X\), and triangles mapping onto cones over edges contained in \(Y_{i}\). The vertices in \(D\) which are mapped to the cone-vertices of \(X^{*}\) are also called the _cone-vertices_. Triangles in \(D\) are grouped into cyclic families meeting around a cone-vertex. 
We refer to such families as _cones_, and treat a whole such family as a single 2-cell. A _cone-cell_ \(C\) is the union of an annular square diagram \(A\to D\) whose interior embeds in \(D\), together with a cone over \(\partial_{in}A\). See Figure 1. Note that this definition differs slightly from the definition of a cone-cell in the literature, where \(A\) is simply a circle. The _square part_ \(D_{\square}\) of \(D\) is a subdiagram which is the union of all the squares that are not contained in cone-cells. A _square disc diagram_ is a disc diagram whose square part is the whole diagram, i.e. it contains no cone-cells. A _mid-interval_ in a square, viewed as \([0,1]\times[0,1]\), is an interval \(\{\frac{1}{2}\}\times[0,1]\) or \([0,1]\times\{\frac{1}{2}\}\). A _dual curve_ in a square disc diagram \(D\) is a curve which intersects each closed square either trivially, or along a mid-interval, i.e., a dual curve is a restriction of a hyperplane in \(X\) to \(D\). We note that for each 1-cube of \(D\), there exists a unique dual curve crossing it [20, 2e]. The _complexity_ of a disc diagram \(D\) in \(X^{*}\) is defined as \[\operatorname{Comp}(D)=(\#\text{cone-cells},\#\text{squares in }D_{\square}).\] We say that \(D\) has _minimal complexity_ if \(\operatorname{Comp}(D)\) is minimal in the lexicographical order among disc diagrams with the same boundary path as \(D\). (Figure 1: Cone-cells in a disc diagram. In figures we will often omit the cell structure of cone-cells, unless needed.) A disc diagram \(D\) in \(X^{*}\) is _degenerate_ if \(\operatorname{Comp}(D)=(0,0)\). A disc diagram \(D\) in \(X^{*}\) is _singular_ if \(D\) is not homeomorphic to a closed ball in \(\mathbb{R}^{2}\). This is equivalent to \(D\) either being a single vertex or an edge, or containing a cut vertex. In particular, every degenerate disc diagram is singular. 
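Lexicographic minimality of \(\operatorname{Comp}(D)\) means that removing a single cone-cell is worth any increase in the number of squares. Python tuple comparison mirrors this ordering exactly (the example values are illustrative, not from the paper):

```python
# Complexity pairs (#cone-cells, #squares) compare lexicographically,
# exactly like Python tuples: any reduction in cone-cells outweighs
# an arbitrary increase in squares.
comps = [(2, 0), (1, 100), (0, 500)]
assert (1, 100) < (2, 0) and (0, 500) < (1, 100)
print(min(comps))  # (0, 500)
```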
A square \(s\) is a _cornsquare_ on a cone-cell \(C\) if a pair of dual curves emanating from consecutive edges \(a,b\) of \(s\) terminates on consecutive edges \(a^{\prime},b^{\prime}\) of \(\partial D\). **Definition 2.1** (Reduction moves).: We define six types of reduction moves. See Figure 2. (0) Cancelling a pair of squares \(s,s^{\prime}\) meeting at one edge \(e\) in the disc diagram, whose map to \(X^{*}\) factors through a reflection identifying them. That is, cutting out \(e\cup Int(s)\cup Int(s^{\prime})\) and then glueing together the paths \(\partial s-e\) and \(\partial s^{\prime}-e\). (1) Replacing a minimal bigon-diagram, i.e. a disc subdiagram containing two dual curves intersecting each other twice, which is not contained in any other such subdiagram, with a lower complexity square disc diagram with the same boundary. (2) Replacing a pair of adjacent cone-cells with a single cone-cell. (3) Replacing a cone-cell with a square disc diagram with the same boundary. (4) Absorbing a cornsquare \(s\) to a cone-cell \(C\), i.e. replace a minimal subdiagram containing \(C\) and the two dual curves starting at \(C\) and ending in \(s\) with a lower complexity disc diagram with the same boundary and containing a cone-cell \(C\cup s^{\prime}\) for some square \(s^{\prime}\). (5) Absorbing a square with a single edge in a cone-cell into the cone-cell. **Definition 2.2** (Reduced and weakly reduced disc diagram).: A disc diagram \(D\to X^{*}\) in a cubical presentation is * _reduced_ if no reduction moves (0)-(5) from Definition 2.1 can be performed in \(D\); * _weakly reduced_ if no reduction moves (1)-(5) from Definition 2.1 can be performed in \(D\). (Figure 2: The six reduction moves from Definition 2.1.) Note that if \(D\) has minimal complexity then \(D\) is reduced; in particular, each reduction move outputs a diagram \(D^{\prime}\) with \(Area(D^{\prime})<Area(D)\) and \(\partial D^{\prime}=\partial D\). 
Consequently: **Lemma 2.3**.: _Let \(D\to X^{*}\) be a disc diagram. Then there exist disc diagrams \(D^{\prime}\to X^{*}\) and \(D^{\prime\prime}\to X^{*}\) satisfying:_ 1. \(\partial D=\partial D^{\prime}=\partial D^{\prime\prime}\)_,_ 2. \(D^{\prime}\) _is weakly reduced and_ \(D^{\prime\prime}\) _is reduced,_ 3. \(D^{\prime}\) _is obtained from_ \(D\) _after a finite number of moves of types (1)-(5), and_ \(D^{\prime\prime}\) _is obtained from_ \(D\) _after a finite number of moves of types (0)-(5)._ **Remark 2.4**.: Many theorems about disc diagrams in the literature assume that the disc diagram is reduced or minimal complexity, but it is in fact sufficient to consider weakly reduced diagrams. For example, this is the case with Lemma 2.7 (the Cubical Greendlinger's Lemma). ### Cubical small-cancellation We use the convention where \(\overline{\rho}\) denotes the path \(\rho\) with the opposite orientation. A _grid_ is a square disc diagram isometric to the product of two intervals. Let \(\rho\) and \(\eta\) be two combinatorial paths in \(\widetilde{X^{*}}\). We say \(\rho\) and \(\eta\) are _parallel_ if there exists a grid \(E\to\widetilde{X^{*}}\) with \(\partial E=\mu\rho\overline{\nu\eta}\), where the dual curves dual to edges of \(\rho\), ordered with respect to its orientation, are also dual to edges of \(\eta\), ordered with respect to its orientation. Concretely, if \(\rho=e_{1}\cdots e_{k}\) and \(\eta=f_{1}\cdots f_{k}\), and \(h(e_{i})\) and \(h(f_{i})\) are the curves dual to \(e_{i}\) and \(f_{i}\) respectively, then \(\rho\) and \(\eta\) are parallel if \(h(e_{i})=h(f_{i})\) for each \(i\in\{1,\ldots,k\}\). An _abstract contiguous cone-piece_ of \(X^{*}\) in \(Y_{i}\) is a component of \(\widetilde{Y}_{i}\cap\widetilde{Y}_{j}\), where \(\widetilde{Y}_{i}\) is a fixed elevation of \(Y_{i}\) to the universal cover \(\widetilde{X}\), and either \(i\neq j\) or where \(i=j\) but \(\widetilde{Y}_{j}\neq\widetilde{Y}_{i}\). 
Each abstract contiguous cone-piece \(P\) induces a map \(P\to Y_{i}\) which is the composition \(P\hookrightarrow\widetilde{Y}_{i}\to Y_{i}\), and a _contiguous cone-piece_ of \(Y_{j}\) in \(Y_{i}\) is a combinatorial path \(\rho\to P\) in an abstract contiguous cone-piece of \(Y_{j}\) in \(Y_{i}\). An _abstract contiguous wall-piece_ of \(X^{*}\) in \(Y_{i}\) is a component of \(\widetilde{Y}_{i}\cap N(\widetilde{H})\), where \(\widetilde{H}\) is a hyperplane that is disjoint from \(\widetilde{Y}_{i}\). Each abstract contiguous wall-piece \(P\) induces a map \(P\to Y_{i}\), and a _contiguous wall-piece_ of \(Y_{i}\) is a combinatorial path \(\rho\to P\) in an abstract contiguous wall-piece of \(Y_{i}\). A _piece_ is a path parallel to a contiguous cone-piece or wall-piece. The difference between contiguous pieces and pieces is illustrated in Figure 3. (Figure 3: Blue paths are contiguous pieces, and yellow paths are pieces but not contiguous pieces.) For an integer \(p>0\), we say \(X^{*}\) satisfies the \(C(p)\) _small-cancellation_ condition if no essential combinatorial closed path in \(Y_{i}\) can be expressed as a concatenation of fewer than \(p\) pieces. For a constant \(\alpha>0\), we say \(X^{*}\) satisfies the \(C^{\prime}(\alpha)\) _small-cancellation_ condition if \(\operatorname{diam}(P)<\alpha\|Y_{i}\|\) for every piece \(P\) involving \(Y_{i}\). Note that the \(C^{\prime}(\frac{1}{p})\) condition implies the \(C(p+1)\) condition. When \(p\geq 9\) and \(X^{*}\) is \(C(p)\), then each immersion \(Y_{i}\looparrowright X\) lifts to an embedding \(Y_{i}\hookrightarrow\widetilde{X^{*}}\). This is proven in [20, Thm 4.1] for \(p\geq 12\), and in [15] for \(p\geq 9\). We record the following observation, a proof of which can be found in [1]. **Lemma 2.5**.: _Let \(X^{*}=\langle X\mid Y_{1},\ldots,Y_{m}\rangle\) be a cubical presentation where \(X\) and \(Y_{1},\ldots,Y_{m}\) are compact non-positively curved cube complexes. 
If \(X^{*}\) satisfies the cubical \(C(p)\) condition for \(p\geq 2\), then there is a bound on the combinatorial length of pieces of \(X^{*}\)._ ### Greendlinger's Lemma A cone-cell \(C\) in a disc diagram \(D\) is a _boundary cone-cell_ if \(C\) intersect the boundary \(\partial D\) along at least one edge. A non-disconnecting boundary cone-cell \(C\) is a _shell of degree \(k\)_ if \(\partial C=RQ\) where \(Q\) is the maximal subpath of \(\partial C\) contained in \(\partial D\), and \(k\) is the minimal number such that \(R\) can be expressed as a concatenation of \(k\) pieces. We refer to \(R\) as the _innerpath_ of \(C\) and \(Q\) as the _outerpath_ of \(C\). A _corner_ in a disc diagram \(D\) is a vertex \(v\) in \(\partial D\) of valence \(2\) in \(D\) that is contained in some square of \(D\). A _cornsquare_ is a square \(c\) and a pair of dual curves emanating from consecutive edges \(a,b\) of \(c\) that terminate on consecutive edges \(a^{\prime},b^{\prime}\) of \(\partial D\). We abuse the notation and refer to the common vertex of \(a^{\prime},b^{\prime}\) as a cornsquare as well. A _spur_ is a vertex in \(\partial D\) of valence \(1\) in \(D\). If \(D\) contains a spur or a cut-vertex, then \(D\) is _singular_. **Definition 2.6** (Ladder).: A _pseudo-grid_ between paths \(\mu\) and \(\nu\) is a square disc diagram \(E\) where the boundary path \(\partial E\) is a concatenation \(\mu\rho\overline{\nu\eta}\) such that 1. each dual curve starting on \(\mu\) ends on \(\nu\), and vice versa, 2. no pair of dual curves starting on \(\mu\) cross each other, 3. no pair of dual curves cross each other twice. If a pseudo-grid \(E\) is degenerate then either \(\mu=\nu\) or \(\rho=\eta\). A _ladder_ is a disc diagram \((D,\partial D)\to(X^{*},X^{(0)})\) which is an alternating union of cone-cells and/or vertices \(C_{0},C_{2}\ldots,C_{2n}\) and (possibly degenerate) pseudo-grids \(E_{1},E_{3}\ldots,E_{2n-1}\), with \(n\geq 0\), in the following sense: 1. 
the boundary path \(\partial D\) is a concatenation \(\lambda_{1}\overline{\lambda_{2}}\) where the initial points of \(\lambda_{1},\lambda_{2}\) lie in \(C_{0}\), and the terminal points of \(\lambda_{1},\lambda_{2}\) lie in \(C_{2n}\), 2. \(\lambda_{1}=\alpha_{0}\rho_{1}\alpha_{2}\cdots\alpha_{2n-2}\rho_{2n-1}\alpha_ {2n}\) and \(\lambda_{2}=\beta_{0}\eta_{1}\beta_{2}\cdots\beta_{2n-2}\eta_{2n-1}\beta_{2n}\), 3. the boundary path \(\partial C_{i}=\nu_{i-1}\alpha_{i}\overline{\mu_{i+1}\beta_{i}}\) for some \(\nu_{i-1}\) and \(\mu_{i+1}\) (where \(\nu_{-1}\) and \(\mu_{2n+1}\) are trivial), and 4. the boundary path \(\partial E_{i}=\mu_{i}\rho_{i}\overline{\nu_{i}\eta_{i}}\). See Figure 4.

**Lemma 2.7** (Cubical Greendlinger's Lemma [21, 15]).: _Let \(X^{*}=\langle X\mid Y_{1},\ldots,Y_{s}\rangle\) be a cubical presentation satisfying the \(C(9)\) condition, and let \(D\to X^{*}\) be a weakly reduced disc diagram. Then one of the following holds:_ * \(D\) _is a ladder, or_ * \(D\) _has at least three shells of degree_ \(\leq 4\) _and/or corners and/or spurs._

We note that our definition of ladder differs slightly from the definitions in [21, 15], so that a single cone-cell and a single vertex count as ladders here. Also, the statements in [21, 15] assume that the disc diagrams are reduced/minimal complexity, but the proofs work for weakly reduced disc diagrams.

## 3. Hyperbolic background

We explain the convention we will follow. A pair \((Y,\mathsf{d})\) is a _metric graph_ if there exists a graph \(\Gamma\) such that \(Y\) is the vertex set of \(\Gamma\), and \(\mathsf{d}\) is defined as follows. For each edge of \(\Gamma\), we assign a positive number which is the _length_ of that edge. The _length_ of a simple path in \(\Gamma\) is the sum of the lengths of the edges in the path, and \(\mathsf{d}(u,v)\) is the minimum of the lengths of simple paths joining \(u\) and \(v\). A metric \(\mathsf{d}\) on a set \(Y\) is a _graph metric_ if \((Y,\mathsf{d})\) is a metric graph. 
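As a toy illustration (our example, not taken from the text), let \(\Gamma\) be a path graph on three vertices \(u,v,w\), with the edge \(uv\) of length \(1\) and the edge \(vw\) of length \(\frac{1}{2}\). Then

```latex
\[
\mathsf{d}(u,v)=1,\qquad
\mathsf{d}(v,w)=\tfrac{1}{2},\qquad
\mathsf{d}(u,w)=1+\tfrac{1}{2}=\tfrac{3}{2},
\]
```

so \((\{u,v,w\},\mathsf{d})\) is a metric graph and \(\mathsf{d}\) is a graph metric on this three-point set, with edge lengths matching the convention adopted in this paper.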
In this paper, all edges of metric graphs have one of two lengths: \(1\) or \(\frac{1}{2}\).

### Thin bigon criterion for hyperbolicity

A _bigon_ in a geodesic metric space \(Y\) is a pair of geodesic segments \(\gamma_{1},\gamma_{2}\) in \(Y\) with the same endpoints, i.e. such that \(\gamma_{1}(0)=\gamma_{2}(0)\) and \(\gamma_{1}(\ell)=\gamma_{2}(\ell)\), where \(\ell\) is the common length of \(\gamma_{1}\) and \(\gamma_{2}\). A bigon \(\gamma_{1},\gamma_{2}\) is _\(\varepsilon\)-thin_ if \(d(\gamma_{1}(t),\gamma_{2}(t))<\varepsilon\) for all \(t\in(0,\ell)\). If we do not care about the specific value of \(\varepsilon\), the above condition is equivalent to the condition that \(\operatorname{im}\gamma_{1}\subseteq N_{\varepsilon^{\prime}}(\operatorname{im}\gamma_{2})\) and \(\operatorname{im}\gamma_{2}\subseteq N_{\varepsilon^{\prime}}(\operatorname{im}\gamma_{1})\) for some \(\varepsilon^{\prime}>0\). Indeed, suppose that for every \(t\in(0,\ell)\) there exists \(t^{\prime}\in(0,\ell)\) such that \(d(\gamma_{1}(t),\gamma_{2}(t^{\prime}))<\varepsilon^{\prime}\). Then \(|t-t^{\prime}|<\varepsilon^{\prime}\), as otherwise \(\gamma_{1}\) and \(\gamma_{2}\) are not geodesic segments. That implies that \(d(\gamma_{1}(t),\gamma_{2}(t))\leq d(\gamma_{1}(t),\gamma_{2}(t^{\prime}))+d(\gamma_{2}(t^{\prime}),\gamma_{2}(t))<2\varepsilon^{\prime}\). This generalizes to paths \(\gamma_{1},\gamma_{2}\) whose endpoints are not necessarily the same. We say \(\gamma_{1},\gamma_{2}\) _\(\varepsilon\)-fellow travel_ if \(d(\gamma_{1}(t),\gamma_{2}(t))<\varepsilon\) for all \(t\). The following is a hyperbolicity criterion for graphs, due to Papasoglu [11, Thm 1.4] (see also [21, Prop 4.6]).

Figure 4. Example of a ladder.

**Proposition 3.1** (Thin Bigon Criterion).: _Let \(Y\) be a graph where all bigons are \(\varepsilon\)-thin for some \(\varepsilon>0\). Then there exists \(\delta=\delta(\varepsilon)\) such that \(Y\) is \(\delta\)-hyperbolic._

Of course, there is also a converse. 
**Proposition 3.2**.: _If a graph \(Y\) is \(\delta\)-hyperbolic, then every bigon in \(Y\) is \(\delta\)-thin._

## 4. The piece metric

Let \(X^{*}=\langle X\mid Y_{1},\dots,Y_{s}\rangle\) be a cubical presentation. As explained in Section 2.2, we write \(X^{*}\) to denote the complex \(X\) with cones over \(Y_{i}\)'s attached. In particular, \(X\) can be viewed as a subspace of \(X^{*}\). The preimage of \(X\) in the universal cover \(\widehat{X^{*}}\) of \(X^{*}\) is denoted by \(\widehat{X}\). Note that \(\widehat{X}\) is a covering space of \(X\). The preimage of the \(0\)-skeleton of \(X\) in \(\widehat{X^{*}}\) is also the \(0\)-skeleton of \(\widehat{X}\), so it is denoted by \(\widehat{X}^{(0)}\).

**Definition 4.1**.: The _piece length_ of a combinatorial path \(\gamma\) in \(\widehat{X}^{(0)}\) is the smallest \(n\) such that \(\gamma=\nu_{1}\cdots\nu_{n}\) where each \(\nu_{k}\) is a \(1\)-cube or a piece. The _piece metric_ \(\mathsf{d_{p}}\) on \(\widehat{X}^{(0)}\) is defined as \(\mathsf{d_{p}}(a,b)=n\) where \(n\) is the smallest piece length of a path from \(a\) to \(b\). We note that \(\mathsf{d_{p}}\) is a graph metric when \(\widehat{X}^{(0)}\) is viewed as the graph with all edges of length \(1\) obtained from the \(1\)-skeleton \(\widehat{X}^{(1)}\) of \(\widehat{X}\) by adding extra edges between vertices contained in a single piece. We will denote this graph by \((\widehat{X}^{(0)},\mathsf{d_{p}})\).

A _piece decomposition_ of a path \(\gamma\) is an expression \(\gamma=\nu_{1}\cdots\nu_{k}\), where each \(\nu_{i}\) is a piece or \(1\)-cube. We make the following easy observation:

**Lemma 4.2**.: _Let \(\gamma,\gamma_{1},\gamma_{2}\) be piece-metric geodesics in \(\widehat{X}^{(0)}\) where \(\gamma=\gamma_{1}\gamma_{2}\). 
Then_ \[|\gamma_{1}|_{\mathsf{p}}+|\gamma_{2}|_{\mathsf{p}}-1\leq|\gamma|_{\mathsf{p}}\leq|\gamma_{1}|_{\mathsf{p}}+|\gamma_{2}|_{\mathsf{p}}.\]

Proof.: Any piece decomposition \(\gamma=\nu_{1}\cdots\nu_{k}\) yields piece decompositions of both \(\gamma_{1}\) and \(\gamma_{2}\), where at most one piece \(\nu_{i}\) for \(i\in\{1,\dots,k\}\) further decomposes into the concatenation of \(2\) pieces \(\nu_{i}^{\prime},\nu_{i}^{\prime\prime}\), so \(\gamma_{1}=\nu_{1}\cdots\nu_{i}^{\prime}\) and \(\gamma_{2}=\nu_{i}^{\prime\prime}\cdots\nu_{k}\). Similarly, any two piece decompositions of \(\gamma_{1}\) and \(\gamma_{2}\) can be concatenated to obtain a piece decomposition of \(\gamma\).

We now prove a few basic facts about the piece metric. First, it is quasi-isometric to the combinatorial metric.

**Proposition 4.3**.: _Let \(X^{*}=\langle X\mid Y_{1},\dots,Y_{s}\rangle\) be a cubical presentation satisfying the \(C(p)\) condition for \(p\geq 2\), and where \(X,Y_{1},\dots,Y_{s}\) are compact. Then \((\widehat{X}^{(0)},\mathsf{d_{p}})\) is quasi-isometric to \((\widehat{X}^{(0)},\mathsf{d})\) where \(\mathsf{d}\) is the standard combinatorial metric. Moreover, there is a uniform bound on the \(\mathsf{d_{p}}\)-diameters of cones._

Proof.: Indeed, \(\mathsf{d_{p}}(a,b)\leq\mathsf{d}(a,b)\) for all \(a,b\in\widehat{X}^{(0)}\), and by Lemma 2.5 there is an upper bound \(M\) on the combinatorial length of pieces, so we also have that \(\mathsf{d}(a,b)\leq M\mathsf{d_{p}}(a,b)\). Since there are only finitely many \(Y_{i}\)'s and each \(Y_{i}\) is compact, there must be an upper bound on the diameter of a simple essential curve in \(Y_{i}\) with respect to \(\mathsf{d}\) and thus with respect to \(\mathsf{d}_{\mathsf{p}}\), which implies the second statement.

We note that ladders are thin with respect to the piece metric. 
**Proposition 4.4**.: _Let \(D\to X^{*}\) be a ladder with boundary \(\partial D=\lambda_{1}\overline{\lambda}_{2}\) as in Definition 2.6 where each subpath of \(\lambda_{i}\) contained in a single pseudo-grid is a geodesic. Then the bigon \(\lambda_{1},\lambda_{2}\) is \(\epsilon\)-thin with respect to \(\mathsf{d}_{\mathsf{p}}\) for a uniform constant \(\epsilon>0\) dependent only on the hyperbolicity constant of \(\widetilde{X}\)._

Proof.: We only show that \(\lambda_{1}\subseteq N_{\epsilon}(\lambda_{2})\), since the argument for \(\lambda_{2}\subseteq N_{\epsilon}(\lambda_{1})\) is analogous. Let \(x\in\lambda_{1}\). We want to show that \(\mathsf{d}_{\mathsf{p}}(x,\lambda_{2})\leq\epsilon\). If \(x\) belongs to a cone-cell \(C\), then by the definition of the ladder, \(\lambda_{2}\) also intersects \(C\), so \(\mathsf{d}_{\mathsf{p}}(x,\lambda_{2})\) is bounded by the piece-metric diameter of \(C\), which is uniformly bounded by some constant \(\epsilon_{1}\) by Proposition 4.3. Otherwise \(x\) lies in a pseudo-grid. Let \(\rho,\eta\) be subpaths of \(\lambda_{1},\lambda_{2}\) respectively, contained in the pseudo-grid which contains \(x\). The paths \(\rho,\eta\) are both combinatorial geodesics by the assumption. By Proposition 4.3, the paths \(\rho,\eta\) start at a uniformly bounded distance from each other, and likewise end at a uniformly bounded distance, since their startpoints lie in a common cone-cell, and likewise their endpoints. By hyperbolicity of \(\widetilde{X}\), there exists \(\epsilon_{2}>0\) such that \(\rho,\eta\) \(\epsilon_{2}\)-fellow travel. The conclusion follows with \(\epsilon=\max\{\epsilon_{1},\epsilon_{2}\}\).

In the proof of Theorem 5.1 we will use the following technical lemma.

**Lemma 4.5**.: _Let \(X^{*}\) be a cubical presentation, and let \(E\to X^{*}\) be a square diagram, with the induced metric \(\mathsf{d}_{\mathsf{p}}\). Suppose that \(\partial E=\ell Qr\overline{\gamma}\) where \(\gamma\) is a piece-metric geodesic, no dual curve in \(E\) crosses \(\ell Qr\) twice, and each of \(\ell,Q,r\) contains no cornsquares of \(E\) in its interior. 
Moreover, assume that \(|Q|_{\mathsf{p}}\geq 3\). Then \(|\gamma|_{\mathsf{p}}\geq|\ell|_{\mathsf{p}}+|Q|_{\mathsf{p}}+|r|_{\mathsf{p}}-3\)._

Proof.: See Figure 5 for a diagram \(E\) with \(\partial E=\ell Qr\overline{\gamma}\). By the assumptions every dual curve of \(E\) starting at \(\ell Qr\) must exit the diagram in \(\gamma\). Thus each edge of \(\ell Qr\) is naturally paired with an edge of \(\gamma\). For every piece \(\nu\) in \(\gamma\), we consider all the dual curves \(h_{1},\ldots,h_{n}\) starting at \(\nu\) that exit \(E\) in \(\ell Qr\). These define a collection of edges in \(\ell Qr\), and every subcollection of such consecutive edges forms a path that is a piece, as it is parallel to some path contained in one of \(Y_{i}\). By grouping consecutive edges into maximal subpaths contained in one of \(\ell\), \(Q\), or \(r\), we get pieces \(\nu^{1},\ldots,\nu^{k}\) whose interiors are pairwise disjoint (ordered consistently with the orientation of \(\ell Qr\)), and say that \(\nu\) _projects_ to \(\nu^{1},\ldots,\nu^{k}\). First, we claim that each of \(\ell,Q,r\) contains at most one piece \(\nu^{i}\). Suppose to the contrary that \(\nu^{i},\nu^{i+1}\) are both contained in \(\ell\) (and the same argument applies to \(Q,r\)). Then each dual curve starting at an edge of \(\ell\) lying between \(\nu^{i}\) and \(\nu^{i+1}\) must intersect at least one dual curve starting at edges of \(\nu^{i},\nu^{i+1}\) (as otherwise it would also lie in a projection of \(\nu\)), yielding a cornsquare in \(\ell\), a contradiction. Thus we can denote the projection of \(\nu\) by \(\nu^{\ell},\nu^{Q},\nu^{r}\) where each piece is a possibly empty projection onto \(\ell,Q,r\) respectively. See left diagram in Figure 5. We will assume that they are oriented consistently with \(\ell,Q,r\) respectively, not necessarily consistently with \(\nu\). Let \(\gamma=\nu_{1}\cdots\nu_{n}\) be a minimal piece decomposition of \(\gamma\), i.e. \(|\gamma|_{\mathfrak{p}}=n\). 
Let \(\ell=\nu^{\ell}_{i_{\ell}(1)}\cdots\nu^{\ell}_{i_{\ell}(n_{\ell})}\) be the induced piece-decomposition where we only write non-trivial pieces. In particular, \(i_{\ell}:\{1,\ldots,n_{\ell}\}\to\{1,\ldots,n\}\) is an injective function. We now claim that \(i_{\ell}\) is monotone. Suppose to the contrary that \(1\leq j<k\leq n_{\ell}\) but \(i_{\ell}(j)>i_{\ell}(k)\). Then there must exist a cornsquare in the connected subpath of \(\ell\) containing \(\nu^{\ell}_{i_{\ell}(k)}\) and \(\nu^{\ell}_{i_{\ell}(j)}\), which is a contradiction. Analogously, we get \(Q=\nu^{Q}_{i_{Q}(1)}\cdots\nu^{Q}_{i_{Q}(n_{Q})}\) and \(r=\nu^{r}_{i_{r}(1)}\cdots\nu^{r}_{i_{r}(n_{r})}\), and the functions \(i_{Q},i_{r}\) are monotone. These are not necessarily the minimal piece decompositions, but certainly we have \(|\ell|_{\mathfrak{p}}\leq n_{\ell}\), \(|Q|_{\mathfrak{p}}\leq n_{Q}\), and \(|r|_{\mathfrak{p}}\leq n_{r}\). To prove the lemma we will show that \(|Q|_{\mathfrak{p}}+n_{\ell}+n_{r}\leq n+3\). Note that \(i_{\ell}(n_{\ell})\) is the largest index in \(\{1,\ldots,n\}\) such that \(\nu_{i_{\ell}(n_{\ell})}\) has non-trivial projection onto \(\ell\), and similarly \(i_{r}(1)\) is the smallest index in \(\{1,\ldots,n\}\) such that \(\nu_{i_{r}(1)}\) has non-trivial projection onto \(r\). See middle diagram in Figure 5. Since \(i_{\ell},i_{r}\) are monotone, \(n_{\ell}\leq i_{\ell}(n_{\ell})\) and \(n_{r}\leq n-i_{r}(1)\). Thus, it remains to prove that \(|Q|_{\mathfrak{p}}\leq i_{r}(1)-i_{\ell}(n_{\ell})+3\). Let \(k_{\ell}\) be the largest number such that \(i_{Q}(k_{\ell})<i_{\ell}(n_{\ell})\). We claim that \(\nu^{Q}_{i_{Q}(1)}\cdots\nu^{Q}_{i_{Q}(k_{\ell})}\) is a single piece in \(Q\). Indeed, the dual curves starting in \(\nu^{Q}_{i_{Q}(1)}\ldots\nu^{Q}_{i_{Q}(k_{\ell})}\) must all intersect a dual curve starting in \(\nu_{i_{\ell}(n_{\ell})}\) and exiting the diagram in \(\ell\). See right diagram in Figure 5. 
Similarly, let \(k_{r}\) be the smallest number such that \(i_{Q}(k_{r})>i_{r}(1)\) and note that \(\nu^{Q}_{i_{Q}(k_{r})}\ldots\nu^{Q}_{i_{Q}(n_{Q})}\) is a single piece in \(Q\). By assumption, \(|Q|_{\mathfrak{p}}\geq 3\), so the subpath \(\nu^{Q}_{i_{Q}(k_{\ell}+1)}\cdots\nu^{Q}_{i_{Q}(k_{r}-1)}\) is nonempty. In particular, \(k_{r}>k_{\ell}+1\). Since by definition of \(k_{\ell},k_{r}\) we have \(i_{Q}(k_{\ell}+1)\geq i_{\ell}(n_{\ell})\) and \(i_{Q}(k_{r}-1)\leq i_{r}(1)\), we conclude that \[\begin{aligned} |\nu^{Q}_{i_{Q}(k_{\ell}+1)}\nu^{Q}_{i_{Q}(k_{\ell}+2)}\cdots\nu^{Q}_{i_{Q}(k_{r}-1)}|_{\mathfrak{p}} &\leq |\nu_{i_{Q}(k_{\ell}+1)}\nu_{i_{Q}(k_{\ell}+1)+1}\cdots\nu_{i_{Q}(k_{r}-1)}|_{\mathfrak{p}}\\ &\leq |\nu_{i_{\ell}(n_{\ell})}\nu_{i_{\ell}(n_{\ell})+1}\cdots\nu_{i_{r}(1)}|_{\mathfrak{p}}\\ &\leq i_{r}(1)-i_{\ell}(n_{\ell})+1.\end{aligned}\] This proves that \(|Q|_{\mathfrak{p}}\leq i_{r}(1)-i_{\ell}(n_{\ell})+3\) and completes the proof.

Figure 5. Steps of the proof of Lemma 4.5.

## 5. Proof of hyperbolicity

In the proof of the next theorem, we show that, under suitable assumptions, \((\widehat{X}^{(0)},\mathsf{d}_{\mathfrak{p}})\) is a \(\delta\)-hyperbolic graph to deduce that \(\pi_{1}X^{*}\) is hyperbolic. The basic strategy is similar to [20, Thm 4.7], but the details in this case are significantly more involved.

**Theorem 5.1**.: _Let \(X^{*}=\langle X\mid Y_{1},\ldots,Y_{s}\rangle\) be a cubical presentation satisfying the \(C(p)\) cubical small-cancellation condition for \(p\geq 14\), where \(X,Y_{1},\ldots,Y_{s}\) are compact, and \(\widetilde{X}\) is \(\delta\)-hyperbolic. Then \(\pi_{1}X^{*}\) is Gromov-hyperbolic._

Before we proceed with the proof of the above theorem we introduce a construction that is used in the proof. Let \(Y\subset X\). The _cubical convex hull_ of \(Y\) in \(X\) is the smallest cubically convex subcomplex of \(X\) containing \(Y\). 
That is, it is the smallest subcomplex \(Hull(Y)\) containing \(Y\) such that whenever a corner of an \(n\)-cube \(c\) with \(n\geq 2\) lies in \(Hull(Y)\), then \(c\subset Hull(Y)\).

**Construction 5.2** (Square pushes).: Let \(D\) be a minimal complexity disc diagram, and let \(\gamma\rho=\partial D\). Let \(\lambda\) be a path with the same endpoints as \(\gamma\) and lying in the cubical convex hull of \(\gamma\), such that \(\gamma\overline{\lambda}\) bounds a disc subdiagram \(D_{0}\) of \(D\) of maximal area. In particular, \(D_{0}\) is a square disc diagram, and \(D^{\prime}=D-D_{0}\) is a disc diagram with \(\partial D^{\prime}=\lambda\rho\), which has no corners contained in the interior of the path \(\lambda\). The diagram \(D_{0}\) can be obtained via a finite sequence of _square pushes_, i.e. a sequence of subdiagrams \[\gamma=K_{0}\subseteq K_{1}\subseteq\cdots\subseteq K_{n-1}\subseteq K_{n}=D_{0}\] where for each \(i=0,\ldots,n-1\) the subdiagram \(K_{i+1}\) contains \(K_{i}\) and an additional square \(s\) such that at least two consecutive edges of \(s\) are contained in \(K_{i}\). Choosing a square \(s\) and adding it to \(K_{i}\) to obtain \(K_{i+1}\) will be referred to as _pushing a square_. Note that the sequence of diagrams \(K_{0},\ldots,K_{n}\) is indeed finite, as \(Area(K_{i+1})=Area(K_{i})+1\) for each \(i\), and thus \(Area(D-K_{i+1})=Area(D-K_{i})-1\), so \(n-1\leq Area(D)\). By construction, every dual curve \(h\) in \(D_{0}\) starting in \(\lambda\) must exit in \(\gamma\). Indeed, every square \(s\) that is being pushed has at least two consecutive edges on \(\gamma\) (in the first step) or on some \(K_{i}\) (in general). Thus, the two dual curves emanating from \(s\) either directly terminate on \(\gamma\) or enter \(K_{i}\), crossing some of the previously added squares. By induction on the area of \(K_{i}\), we can thus conclude that these dual curves terminate on \(\gamma\). 
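For concreteness, the termination bound implicit above can be written out; the following one-line count is ours, under the convention that the initial subdiagram \(K_{0}=\gamma\) contains no squares:

```latex
\[
Area(K_{i+1})=Area(K_{i})+1,\qquad Area(K_{0})=0
\quad\Longrightarrow\quad
n=Area(K_{n})=Area(D_{0})\leq Area(D),
\]
```

so the square-pushing process stops after at most \(Area(D)\) steps.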
**Construction 5.3** (Sandwich decomposition of a bigon).: Let \(\gamma_{1},\gamma_{2}\) be paths forming a bigon. Let \(D\to X^{*}\) be a reduced disc diagram with \(\partial D=\gamma_{1}\overline{\gamma}_{2}\). We define a decomposition of \(D\) into three (possibly singular) subdiagrams \(D_{1}\cup D^{\prime}\cup D_{2}\) by applying Construction 5.2 twice as follows: * We first apply it to the subpath \(\gamma_{1}\subseteq\partial D\) to obtain a decomposition \(D=D_{1}\cup D^{\prime\prime}\) where \(\partial D_{1}=\gamma_{1}\overline{\lambda}_{1}\) and \(\partial D^{\prime\prime}=\lambda_{1}\overline{\gamma}_{2}\). * Then we apply it to the subpath \(\gamma_{2}\) of \(\partial D^{\prime\prime}\) and we obtain a decomposition \(D^{\prime\prime}=D_{2}\cup D^{\prime}\) where \(\partial D^{\prime}=\lambda_{1}\overline{\lambda}_{2}\) and \(\partial D_{2}=\lambda_{2}\overline{\gamma}_{2}\). See Figure 6 for an example. We note that \(D_{1},D_{2}\) are square diagrams.

**Lemma 5.4**.: _Let \(X^{*}=\langle X\mid Y_{1},\ldots,Y_{s}\rangle\) be a cubical presentation satisfying the \(C(p)\) cubical small-cancellation condition for \(p\geq 14\), where \(X,Y_{1},\ldots,Y_{s}\) are compact, and \(\widetilde{X}\) is \(\delta\)-hyperbolic._

_Then for any weakly reduced disc diagram \((D,\partial D)\to(X^{*},X)\) with \(\partial D=\gamma_{1}\overline{\gamma}_{2}\), where \(\gamma_{1},\gamma_{2}\) are \(\mathsf{d}_{\mathsf{p}}\)-geodesics, the subdiagram \(D^{\prime}\) from its sandwich decomposition \(D=D_{1}\cup D^{\prime}\cup D_{2}\) is a ladder._

Proof.: Suppose to the contrary that \(D^{\prime}\) is not a ladder. We will derive a contradiction with the fact that \(\gamma_{1},\gamma_{2}\) are \(\mathsf{d}_{\mathsf{p}}\)-geodesics. By Lemma 2.7, \(D^{\prime}\) has at least three exposed cells, i.e. shells of degree \(\leq 4\), corners and/or spurs. 
Two of those exposed cells might contain the two endpoints \(q\) and \(q^{\prime}\) of the bigon, but there still must be at least one other exposed cell whose boundary path is disjoint from both \(q\) and \(q^{\prime}\). By construction of \(D^{\prime}\) in Construction 5.3, there are no corners or spurs contained in the interior of the paths \(\gamma_{1}\) and \(\gamma_{2}\), so we conclude that there must be a shell \(S\) of degree \(\leq 4\) in \(D^{\prime}\) with the outerpath \(Q\) contained in \(\gamma_{1}\) or \(\gamma_{2}\). Up to switching names of \(\gamma_{1}\) and \(\gamma_{2}\), we can assume that \(Q\) is contained in \(\gamma_{1}\). Let \(R\) denote the innerpath of \(S\) in \(D^{\prime}\). Let \(e_{\ell}\) and \(e_{r}\) be the leftmost (first) and the rightmost (last) edge of \(R\), and let \(h_{\ell},h_{r}\) be their dual curves in \(D_{1}\). By Construction 5.2, \(h_{\ell},h_{r}\) exit \(D_{1}\) in \(\gamma_{1}\). Let \(\gamma_{1}^{\prime}\) be the minimal subpath of \(\gamma_{1}\) that contains the edges dual to \(h_{\ell},h_{r}\). Let \(H_{\ell},H_{r}\) be the hyperplanes of \(\widehat{X}\) extending \(h_{\ell},h_{r}\) respectively. Let \(\ell,r\) be combinatorial paths in \(D_{1}\) parallel to \(h_{\ell},h_{r}\) and starting at the two endpoints of the path \(Q\), respectively. Consider a minimal complexity square disc diagram \(E\) with boundary \(\partial E=\ell Qr\overline{\gamma}_{1}^{\prime}\) where \(\ell\) and \(r\) are combinatorial paths contained in \(N(H_{\ell}),N(H_{r})\). In particular, \(\ell\) and \(r\) do not intersect \(H_{\ell}\) and \(H_{r}\) respectively. Such a diagram \(E\) exists since we can choose a subdiagram of \(D_{1}\). Amongst all possible choices of \(\ell,r\) and \(E\) we pick a diagram with minimal area. A feature of the choice of \(E\) is that it has no cornsquares in the interiors of \(\ell\) and \(r\), as otherwise we could push that cornsquare out and reduce the area. 
Up to possibly replacing \(Q\) with another path with the same endpoints contained in the same cone, we can assume that \(Q\) has no cornsquares either. We will assume that this is the case for the remainder of the proof. We will be applying Lemma 4.5 to \(E\), so we first verify that the assumptions are satisfied. By Lemma 2.7, \(|Q|_{\mathsf{p}}\geq p-4>3\). Next, we claim that every dual curve starting in \(\ell Qr\) exits \(E\) in \(\gamma_{1}^{\prime}\). The cases of dual curves starting in \(\ell\) and \(r\) are analogous, so we only explain the argument for \(\ell\). Consider the subdiagram \(E^{\prime}=E\cup S\) of \(D^{\prime}\). Let \(e\) be an edge in \(\ell\). Note that the dual curve \(h\) to \(e\) in \(E^{\prime}\) cannot terminate on \(Q\), since this would imply that there is a cornsquare on \(Q\). If \(h\) terminates on \(r\), then \(h\) is parallel to \(Q\), and therefore \(Q\) is a single wall-piece, contradicting the \(C(p)\) condition. Thus, \(h\) must terminate on \(\gamma_{1}^{\prime}\). Let now \(e\) be an edge of \(Q\), and \(h\) its dual curve in \(E\). We already know that \(h\) cannot exit \(E\) in \(\ell\) or \(r\). If \(h\) exited \(E\) in \(Q\), it would either yield a cornsquare in the interior of \(Q\), contradicting the choice of \(Q\), or it would yield a bigon formed from \(2\) squares glued along a pair of adjacent edges, contradicting the minimal complexity of \(E\). Since no dual curve in \(E\) crosses \(\ell Qr\) twice, there are no cornsquares in any of \(\ell\), \(Q\), and \(r\), and \(|Q|_{\mathsf{p}}\geq 3\), so Lemma 4.5 implies that \(|Q|_{\mathsf{p}}+|\ell|_{\mathsf{p}}+|r|_{\mathsf{p}}-3\leq|\gamma_{1}^{\prime}|_{\mathsf{p}}\). Recall that \(R\) is the innerpath of the shell \(S\) of degree \(\leq 4\) in \(D^{\prime}\). By definition \(|R|_{\mathsf{p}}\leq 4\), and so the \(C(p)\) condition with \(p\geq 14\) for \(X^{*}\) implies that \(|Q|_{\mathsf{p}}\geq p-4\geq 10>|R|_{\mathsf{p}}+5\). 
Combining the two inequalities we get \[|\overline{\ell}Rr|_{\mathsf{p}}\leq|R|_{\mathsf{p}}+|\ell|_{\mathsf{p}}+|r|_{\mathsf{p}}<(|Q|_{\mathsf{p}}-5)+|\ell|_{\mathsf{p}}+|r|_{\mathsf{p}}\leq|\gamma_{1}^{\prime}|_{\mathsf{p}}-2.\] In particular, if we write \(\gamma_{1}=\gamma_{1,\ell}\gamma_{1}^{\prime}\gamma_{1,r}\), then using Lemma 4.2 we get \[|\gamma_{1}|_{\mathsf{p}}\geq|\gamma_{1,\ell}|_{\mathsf{p}}+|\gamma_{1}^{\prime}|_{\mathsf{p}}+|\gamma_{1,r}|_{\mathsf{p}}-2>|\gamma_{1,\ell}|_{\mathsf{p}}+|\overline{\ell}Rr|_{\mathsf{p}}+|\gamma_{1,r}|_{\mathsf{p}}\geq|\gamma_{1,\ell}\overline{\ell}Rr\gamma_{1,r}|_{\mathsf{p}}\] which contradicts the fact that \(\gamma_{1}\) is a piece-geodesic, completing the proof.

We now combine the previous ingredients to finish the proof of Theorem 5.1.

Proof of Theorem 5.1.: We prove that the coned-off space \((\widehat{X}^{(0)},\mathsf{d}_{\mathsf{p}})\) is \(\delta^{\prime}\)-hyperbolic for some \(\delta^{\prime}\) by showing that it satisfies the bigon criterion (Proposition 3.1).

Figure 6. On the left, notation in the proof of Lemma 5.4; on the centre, possible overlapping paths in the proof of 5.1; on the right, the impossible transversal intersections described in the proof of 5.1.

Let \(\gamma_{1},\gamma_{2}\) be \(\mathsf{d_{p}}\)-geodesic segments forming a bigon. Pick combinatorial geodesics \(\lambda_{1},\lambda_{2}\) such that for \(i=1,2\) the path \(\gamma_{i}\overline{\lambda}_{i}\) bounds a square disc diagram, denoted by \(D_{i}\), and let \(D^{\prime}\to X^{*}\) be a reduced disc diagram with boundary \(\partial D^{\prime}=\lambda_{1}\overline{\lambda}_{2}\). By glueing \(D_{1}\cup D^{\prime}\cup D_{2}\) along \(\lambda_{1}\) and \(\lambda_{2}\) respectively, we obtain a (possibly not weakly reduced) disc diagram \(D\to X^{*}\) with \(\partial D=\gamma_{1}\overline{\gamma}_{2}\). By Lemma 2.3, there exists a sequence of reduction moves (1) - (5) from Definition 2.1 that turns \(D\) into a weakly reduced disc diagram. 
We describe how each reduction move transforms a quintuple \((D,\lambda_{1},\lambda^{\prime}_{1},\lambda_{2},\lambda^{\prime}_{2})\) into a quintuple \((\breve{D},\breve{\lambda}_{1},\breve{\lambda}^{\prime}_{1},\breve{\lambda}_{2},\breve{\lambda}^{\prime}_{2})\), both satisfying the conditions below. Our sequence starts with \((D,\lambda_{1},\lambda^{\prime}_{1},\lambda_{2},\lambda^{\prime}_{2})=(D,\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2})\). At each step \((D,\lambda_{1},\lambda^{\prime}_{1},\lambda_{2},\lambda^{\prime}_{2})\) satisfies: (a) \(D\) is a disc diagram with \(\partial D=\gamma_{1}\overline{\gamma}_{2}\), (b) each \(\lambda_{i}\) is an embedded combinatorial path in \(D_{\square}\), such that the three subdiagrams \(D_{1}\), \(D_{2}\) and \(D^{\prime}\) of \(D\) with boundaries \(\gamma_{1}\overline{\lambda}_{1}\), \(\lambda_{2}\overline{\gamma}_{2}\) and \(\lambda_{1}\overline{\lambda}_{2}\) respectively, are reduced, and \(D_{1},D_{2}\) are square diagrams, (c) each \(\lambda^{\prime}_{i}\) is an embedded combinatorial path in \(D\) (not necessarily in \(D_{\square}\)) such that \(\lambda^{\prime}_{i}\) is a combinatorial geodesic in \(\widetilde{X}^{*}\), and \(\lambda^{\prime}_{i}\cap D_{\square}=\lambda_{i}\cap D_{\square}\), and after applying a reduction move, we obtain a new quintuple \((\breve{D},\breve{\lambda}_{1},\breve{\lambda}^{\prime}_{1},\breve{\lambda}_{2},\breve{\lambda}^{\prime}_{2})\) satisfying the above conditions. Since the subdiagrams \(D_{1},D_{2}\) are square disc diagrams and \(D^{\prime}\) is reduced, we never apply the reduction moves (2) and (3), since those would have to be performed within \(D^{\prime}\), contradicting that \(D^{\prime}\) is reduced. We now describe the transformation from \(\lambda_{i}\) to \(\breve{\lambda}_{i}\) and from \(\lambda^{\prime}_{i}\) to \(\breve{\lambda}^{\prime}_{i}\), for each reduction move. 
In each case the change will occur only within a subdiagram \(B\) that is transformed to \(\breve{B}\) by the reduction. See Figure 7. We assume \(\lambda_{i}\) intersects the interior of \(B\). We note that it might happen that both \(\lambda_{1},\lambda_{2}\) intersect \(B\), in which case we apply the transformations to both \(\lambda_{1},\lambda^{\prime}_{1}\) and \(\lambda_{2},\lambda^{\prime}_{2}\), according to the rules described below. It might happen that the paths \(\lambda_{1},\lambda_{2}\) overlap, but at no step do they intersect transversally, i.e. at each step they yield a decomposition of the diagram \(D\) into \(D_{1}\cup D^{\prime}\cup D_{2}\). See Figure 6. Consider reduction move (1). Let \(B\) be a bigon-subdiagram of \(D\), which is to be reduced. Since this reduction move involves only squares, we have \(\lambda_{i}\cap B=\lambda^{\prime}_{i}\cap B\). Note that \(\lambda_{i}\cap B\) cannot join the two corners of \(B\) since \(\lambda^{\prime}_{i}\) is a combinatorial geodesic. Thus \(\lambda_{i}\cap B\) must be a combinatorial geodesic crossing both dual curves associated to \(B\). For each \(\lambda_{i}\cap B\) we set \(\breve{\lambda}_{i}\cap\breve{B}=\breve{\lambda}^{\prime}_{i}\cap\breve{B}\) to the combinatorial geodesic with the same endpoints in \(\breve{B}\) and maximizing the area of \(D_{i}\). See the first diagram in Figure 7. Thus the reduction move yields a new disc diagram \(\breve{D}\) with new paths \(\breve{\lambda}_{i},\breve{\lambda}^{\prime}_{i}\) for \(i=1,2\) satisfying the required conditions. We note that in the case where both \(\lambda_{1},\lambda_{2}\) intersect \(B\), the choices of \(\breve{\lambda}_{1},\breve{\lambda}_{2}\) ensure that \(\breve{\lambda}_{1},\breve{\lambda}_{2}\) do not intersect transversally. Consider reduction move (4). Let \(B\) be a subdiagram associated to a cornsquare \(s\) and its dual curves ending on a cone-cell \(C\). 
We set \(\breve{\lambda}_{i}\cap\breve{B}\) to the combinatorial path with the same length and endpoints in \(\breve{B}\) and maximizing the area of \(D_{i}\). If \(\lambda^{\prime}_{i}\) coincides with \(\lambda_{i}\) in \(B\), we set \(\breve{\lambda}^{\prime}_{i}\cap\breve{B}=\breve{\lambda}_{i}\cap\breve{B}\). See the second diagram in Figure 7. Otherwise, we set \(\breve{\lambda}^{\prime}_{i}\cap\breve{B}=\lambda^{\prime}_{i}\cap B\). See the third diagram in Figure 7. Again, in the case where both \(\lambda_{1},\lambda_{2}\) intersect \(B\), the choices of transformed paths ensure that no transversal intersection occurs. Finally, consider reduction move (5). Let \(B\) be a subdiagram consisting of a square \(s\) overlapping with a cone-cell \(C\) along a single edge \(e\). Thus \(\lambda_{i}\cap B=e\). We set \(\breve{\lambda}_{i}\cap\breve{B}\) to the path \(\partial s-e\). We also set \(\breve{\lambda}^{\prime}_{i}\cap\breve{B}=\lambda^{\prime}_{i}\cap B\). See the last diagram in Figure 7.

**Working under the assumption of weakly reduced:** We now assume that \((D,\lambda_{1},\lambda^{\prime}_{1},\lambda_{2},\lambda^{\prime}_{2})\) satisfies conditions (a)-(c) above and that \(D\) is weakly reduced. Following the notation in (b), we claim that either \(D_{1}\cup D^{\prime}\cup D_{2}\) is the sandwich decomposition of \(D\), or we can push squares into \(D_{1}\) and \(D_{2}\) modifying \(\lambda_{i},\lambda^{\prime}_{i}\) while preserving conditions (a)-(c). Since \(\lambda_{i}\cap D_{\square}=\lambda^{\prime}_{i}\cap D_{\square}\) and \(\lambda^{\prime}_{i}\) is a geodesic in \(\widetilde{X^{*}}\), no square \(s\) in \(D^{\prime}_{\square}\) has three sides on \(\lambda_{i}\). So if a square \(s\) on \(\lambda_{i}\) can be pushed into \(D_{i}\), then \(s\) must have two consecutive edges \(a,b\) on \(\lambda_{i}\). Let \(\lambda_{i}=\ell_{i}abr_{i}\) where \(\ell_{i},r_{i}\) are subpaths of \(\lambda_{i}-ab\). 
Likewise, let \(\lambda^{\prime}_{i}=\ell^{\prime}_{i}abr^{\prime}_{i}\). Finally, define \(\ddot{\lambda}_{i}=\ell_{i}cdr_{i}\) and \(\ddot{\lambda}^{\prime}_{i}=\ell^{\prime}_{i}cdr^{\prime}_{i}\) where \(c,d\) are the other two edges of \(s\). The quintuple \((D,\ddot{\lambda}_{1},\ddot{\lambda}^{\prime}_{1},\ddot{\lambda}_{2},\ddot{\lambda}^{\prime}_{2})\) satisfies conditions (a)-(c). Indeed, since \(|\lambda^{\prime}_{i}|=|\ddot{\lambda}^{\prime}_{i}|\), condition (c) is still satisfied. As this replacement affects neither \(D\), nor the property of being (weakly) reduced, nor the fact that \(D_{1}\) and \(D_{2}\) are square diagrams, conditions (a) and (b) are also preserved. We arrive at the sandwich decomposition after finitely many square-pushes. **Working under the further assumption that \(D_{1}\cup D^{\prime}\cup D_{2}\) is the sandwich decomposition of \(D\):** For the remainder of the proof, we assume that \((D,\lambda_{1},\lambda^{\prime}_{1},\lambda_{2},\lambda^{\prime}_{2})\) satisfies conditions (a)-(c), that \(D\) is weakly reduced, and that the associated \(D_{1}\cup D^{\prime}\cup D_{2}\) is the sandwich decomposition of \(D\). By Lemma 5.4, the subdiagram \(D^{\prime}\) with \(\partial D^{\prime}=\lambda_{1}\overline{\lambda}_{2}\) is a ladder. Let \(D^{\prime\prime}\) be the subdiagram of \(D^{\prime}\) with \(\partial D^{\prime\prime}=\lambda_{1}^{\prime}\overline{\lambda}_{2}^{\prime}\). Then \(D^{\prime\prime}\) is also a ladder, since \(\lambda_{i}\cap D^{\prime}=\lambda_{i}^{\prime}\cap D^{\prime}\). By Proposition 4.4, the bigon \(\lambda_{1}^{\prime},\lambda_{2}^{\prime}\) is \(\epsilon\)-thin for a uniform constant \(\epsilon\). By Proposition 4.3 the metrics \(\mathsf{d}_{\mathsf{p}}\) and \(\mathsf{d}\) are quasi-isometric on \(X\). Thus, there exists a uniform constant \(M\) such that \(\gamma_{i},\lambda_{i}^{\prime}\) is \(M\)-thin. Consequently, \(\gamma_{1},\gamma_{2}\) is \((\epsilon+2M)\)-thin.
2304.00026
Understanding Reinforcement Learning Algorithms: The Progress from Basic Q-learning to Proximal Policy Optimization
This paper presents a review of the field of reinforcement learning (RL), with a focus on providing a comprehensive overview of the key concepts, techniques, and algorithms for beginners. RL has a unique setting, jargon, and mathematics that can be intimidating for those new to the field or artificial intelligence more broadly. While many papers review RL in the context of specific applications, such as games, healthcare, finance, or robotics, these papers can be difficult for beginners to follow due to the inclusion of non-RL-related work and the use of algorithms customized to those specific applications. To address these challenges, this paper provides a clear and concise overview of the fundamental principles of RL and covers the different types of RL algorithms. For each algorithm/method, we outline the main motivation behind its development, its inner workings, and its limitations. The presentation of the paper is aligned with the historical progress of the field, from the early 1980s Q-learning algorithm to the current state-of-the-art algorithms such as TD3, PPO, and offline RL. Overall, this paper aims to serve as a valuable resource for beginners looking to construct a solid understanding of the fundamentals of RL and be aware of the historical progress of the field. It is intended to be a go-to reference for those interested in learning about RL without being distracted by the details of specific applications.
Mohamed-Amine Chadi, Hajar Mousannif
2023-03-31T17:24:51Z
http://arxiv.org/abs/2304.00026v1
Understanding Reinforcement Learning Algorithms: The Progress from Basic Q-learning to Proximal Policy Optimization ###### Abstract This paper presents a review of the field of reinforcement learning (RL), with a focus on providing a comprehensive overview of the key concepts, techniques, and algorithms for beginners. RL has a unique setting, jargon, and mathematics that can be intimidating for those new to the field or artificial intelligence more broadly. While many papers review RL in the context of specific applications, such as games, healthcare, finance, or robotics, these papers can be difficult for beginners to follow due to the inclusion of non-RL-related work and the use of algorithms customized to those specific applications. To address these challenges, this paper provides a clear and concise overview of the fundamental principles of RL and covers the different types of RL algorithms. For each algorithm/method, we outline the main motivation behind its development, its inner workings, and its limitations. The presentation of the paper is aligned with the historical progress of the field, from the early 1980s Q-learning algorithm to the current state-of-the-art algorithms such as TD3, PPO, and offline RL. Overall, this paper aims to serve as a valuable resource for beginners looking to construct a solid understanding of the fundamentals of RL and be aware of the historical progress of the field. It is intended to be a go-to reference for those interested in learning about RL without being distracted by the details of specific applications. Reinforcement learning, Deep reinforcement learning, Brief review. ## 1 Introduction Reinforcement learning (RL) is a branch of machine learning that focuses on training agents to make decisions in an environment to maximize a reward signal [1]. 
Despite its growing popularity and success in a variety of applications, such as games [2, 3, 4], healthcare [5, 6, 7], finance [8], and robotics [9, 10, 11], RL can be a challenging field for beginners to understand due to its unique setting, jargon, and seemingly more complex mathematics compared to other areas of artificial intelligence. Moreover, many of the existing review papers on RL are application-focused, like the ones presented previously, as well as others [12, 13, 14]. This can be distracting for readers who are primarily interested in understanding the algorithms themselves. The only works we know of that focus solely on RL algorithms are [15, 16, 17]; however, since they were published between 1996 and 2017, they do not include many of the classic algorithms that appeared afterward. To address these challenges, this paper presents a review of the field of RL with a focus on providing a comprehensive overview of the key concepts, techniques, and algorithms for beginners. Unlike other review papers that are application-focused, this paper aims to provide a clear and concise overview of the fundamental principles of RL, without the distractions of specific applications. It also discusses the motivation, inner workings, and limitations of each RL algorithm. In addition, the paper traces the historical progress of the field, from the early 1980s Q-learning algorithm [18] to the current state-of-the-art algorithms, namely, deep Q-learning (DQN) [19], REINFORCE [20], deep deterministic policy gradient (DDPG) [21], twin delayed DDPG (TD3) [22], and proximal policy optimization (PPO) [23]. Overall, this paper aims to serve as a valuable resource for beginners looking to construct a solid understanding of the fundamentals of RL and to be aware of the historical progress of the field. 
By providing a clear and concise introduction to the key concepts and techniques of RL, as well as a survey of the main algorithms that have been developed over the years, this paper aims to help readers overcome the intimidation and difficulties often encountered when learning about RL. The rest of this paper is organized as follows. In the next section, we provide a brief overview of the general pipeline of an RL setting as well as the key concepts and vocabulary. We then cover the different types of RL algorithms, including value-based and policy-based methods. In the following section, we trace the historical progress of the field, starting with the early Q-learning algorithm up to current state-of-the-art algorithms such as TD3 and PPO.

## 2 Common background

### 2.1 The general pipeline

In the RL pipeline, as illustrated in Figure 1, the agent receives observations from the environment and takes actions based on these observations. The environment responds to the actions by providing a reward signal and transitioning to a new state. The agent uses this feedback to update its policy, which is a mapping from observations to actions. The goal of the agent is to learn a policy that maximizes the long-term return, which is the sum of the rewards received over time. To learn a good policy, the agent must **explore** the environment and try different actions to learn which actions lead to higher rewards. At the same time, the agent must also **exploit** its current knowledge by taking actions that are expected to lead to the highest reward based on its current policy. This balance between exploration and exploitation is known as **the exploration-exploitation trade-off** and is a key challenge in RL. Overall, the general pipeline of RL involves an agent interacting with an environment, receiving feedback in the form of a reward signal, and updating its policy based on this feedback to maximize the long-term reward. 
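This loop can be sketched in a few lines of Python. The environment below (`ToyEnv`, a one-dimensional corridor) is a hypothetical stand-in, not from the paper; it only serves to make the observe-act-reward-update cycle concrete:

```python
import random

class ToyEnv:
    """Hypothetical 1-D corridor: start at position 0, reach position 3 for reward +1."""
    def reset(self):
        self.pos = 0
        return self.pos                       # initial observation

    def step(self, action):                   # action: -1 (left) or +1 (right)
        self.pos = max(0, self.pos + action)
        done = self.pos >= 3                  # episode ends at the goal
        reward = 1.0 if done else 0.0         # the reward signal
        return self.pos, reward, done

env = ToyEnv()
state, total_return, done = env.reset(), 0.0, False
while not done:                               # the agent-environment interaction loop
    action = random.choice([-1, 1])           # a (random) policy: observation -> action
    state, reward, done = env.step(action)
    total_return += reward                    # summing the rewards gives the return
print(total_return)                           # prints 1.0 once the goal is reached
```

Replacing `random.choice` with a learned mapping from observations to actions is precisely what the algorithms surveyed in the remainder of the paper do.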
### 2.2 Technical terms

Although, among the three types of learning (supervised, unsupervised, and reinforcement learning), RL is the most natural and the closest to how humans learn, it is particular in the sense that it involves much unique jargon and vocabulary. In this sub-section, we aim to elucidate the main components of the RL setting. As illustrated in Figure 1, the RL setting is composed of an **agent** (a learning-based model) that interacts with an **environment** (e.g., a simulation such as a game, a system such as an autonomous vehicle or drone, a process such as a scheduling program, or even life itself). The environment receives the agent's **action** and changes its **state** accordingly, and this state is generally associated with a specific **reward**. For example, in an autonomous driving problem, when the car is in a safe and legal position, that is a state for which the agent will receive a positive reward. If the car is in an illegal position or in an accident, on the other hand, the agent will receive a negative reward, also called a punishment. The sequence of actions executed by the agent given the corresponding environment's states is referred to as the **policy**. All RL and deep RL algorithms must ensure an optimal balance between exploration and exploitation, which is a concept particularly associated with RL. It states that agents should explore the environment more in the first learning episodes, and progressively converge towards exploiting the learned knowledge that enables making optimal decisions. This is important because, if the agent starts by exploiting/using a learned policy from the beginning, it might miss other policies of higher optimality; thus, exploration is necessary.

Figure 1: The general framework in RL

Mathematically, these components are all gathered in one framework known as the Markov decision process (MDP) [1], as {S, A, T, R, \(\gamma\)}. 
Here, S is the state, A is the action, T is the transition function describing the dynamics of the environment, R is the reward function, and \(\gamma\) is a discount factor. The discount factor determines how much the RL agent cares about rewards in the immediate future compared to those in the distant future. This idea is borrowed from economics, where a certain amount of money is generally worth more now than in the distant future; the discount factor also helps with mathematical convergence. Solving this MDP refers to finding an optimal policy (\(\pi^{*}\)), that is, the one yielding the maximum rewards over an entire learning episode. Following the process depicted in Figure 1, first, the agent is presented with an initial state of the environment \(s_{t}=s_{0}\); the reward associated with the initial state is usually considered null, i.e., \(r_{t}=r_{0}=0\). The agent generates an action \(a_{t}\) given the state \(s_{t}\). This action changes the environment's state to \(s_{t+1}\), which comes with a new associated reward \(r_{t+1}\). The cumulative sum of rewards is known as the return \(G_{t}\) and can be calculated as described in equation 1: \[G_{t}=R_{t+1}+R_{t+2}+R_{t+3}+\ldots\qquad\text{we begin at }R_{t+1}\text{ since }R_{t}\text{ is considered }0 \tag{1}\] Then, we introduce the discount factor \(\gamma\): \[\begin{array}{l}G_{t}=\gamma^{(0)}R_{t+1}+\gamma^{(1)}R_{t+2}+\gamma^{(2)}R_{t+3}+\ldots\qquad\text{since }\gamma^{(0)}=1,\text{ the equation becomes:}\\ G_{t}=R_{t+1}+\gamma^{(1)}R_{t+2}+\gamma^{(2)}R_{t+3}+\ldots=\sum_{k=0}^{T}\gamma^{(k)}R_{t+k+1}\end{array} \tag{2}\] Besides its role in modeling the importance of future rewards relative to immediate ones, the parameter \(\gamma\) also ensures the mathematical convergence of the RL training. 
For instance, if \(\gamma=0.99\) (a typical value in the literature, with \(0<\gamma<1\)), then: \(G_{t}=R_{t+1}+0.99^{(1)}R_{t+2}+0.99^{(2)}R_{t+3}+\ldots=\sum_{k=0}^{T}0.99^{(k)}R_{t+k+1}\), while the powers of \(\gamma\) tend to 0 (i.e., \(\gamma^{(1)}=0.99\),..., \(\gamma^{(3)}=0.97\),..., \(\gamma^{(10)}=0.90\),..., \(\gamma^{(T)}\sim 0\)). In addition to the discount factor added to the equation of the return \(G_{t}\), to make this setting even more suitable for real-world scenarios, we shall consider a stochastic version of equation 2, that is, the expected return \(G_{t}\), also called the value function \(V_{t}\), described in equation 3: \[V_{t}=E[G_{t}|s_{t}=s]=E[\sum_{k=0}^{T}\gamma^{(k)}R_{t+k+1}|s_{t}=s] \tag{3}\] Given the mathematical expectation in equation 4: \[E[X]=\sum_{i=0}^{n}x_{i}P(x_{i}) \tag{4}\] where \(x\) is a random variable and \(P\) is the probability of \(x\), solving the MDP now consists of maximizing the value function (equation 3). Nevertheless, this is still a difficult problem; indeed, it is known as the infinite horizon problem, since rewards become intractable beyond a certain depth. For this, we introduce the famous **Bellman equation** trick. In the 1950s, Richard Bellman et al. [1] proposed a recursive approximation of the return \(G\) and the value \(V\) as follows. We know that: \(G_{t}=R_{t+1}+\gamma^{(1)}R_{t+2}+\gamma^{(2)}R_{t+3}+\ldots\) This can be rewritten as the immediate reward \(R_{t+1}\) plus the discounted future return: \(G_{t}=R_{t+1}+\gamma G_{t+1}\), and so on for later returns. Similarly, the value function \(V_{t}\) can also be arranged in this way: \[V_{t}=R_{t+1}+\gamma V_{t+1} \tag{5}\] This recursive representation helps avoid the infinite horizon problem by dividing it into solvable sub-problems. Later, this became known as the Bellman equation. 
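As a quick numerical illustration (with a made-up reward sequence, not from the paper), the following sketch computes \(G_{t}\) as in equation 2 and checks the recursive identity \(G_{t}=R_{t+1}+\gamma G_{t+1}\):

```python
gamma = 0.99
rewards = [0.0, 1.0, 0.0, 2.0, 5.0]   # R_1 ... R_5 of an illustrative 5-step episode

def discounted_return(rewards, gamma):
    """G = sum_k gamma^k * R_{k+1}, as in equation 2."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

g0 = discounted_return(rewards, gamma)        # G_0, the return of the whole episode
g1 = discounted_return(rewards[1:], gamma)    # G_1, the return from the next step on
assert abs(g0 - (rewards[0] + gamma * g1)) < 1e-12   # G_t = R_{t+1} + gamma * G_{t+1}
print(round(g0, 4))                           # prints 7.7336
```

The assertion holds for any reward sequence, which is exactly why the recursion lets the infinite-horizon sum be attacked one step at a time.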
Based on the concepts and ideas elaborated above, many algorithms and techniques have been proposed to solve such reinforcement learning problems. In Table 1 we present a subset of the foundational RL and deep RL algorithms with a taxonomy of characteristics. The following are definitions of the terms used in the taxonomy presented in Table 1:

* Value-based: updates the model's policy based on the value function \(V\).
* Policy-based: updates the model's policy directly, without referring to \(V\).
* Model-based: has access to the environment's dynamics (e.g., the rules of chess).
* Model-free: does not have access to the environment's dynamics.
* On-policy: learns the value of the same policy that is used to interact with the environment.
* Off-policy: learns the value of a policy different from the one used to interact with the environment.
* Temporal difference: updates the model at each time step.
* Monte Carlo: updates the model using a statistical description (e.g., the mean value) of many time steps (often, the entire episode).
* Tabular: uses a table to store the computed values.
* Neural network (NN): uses a neural network as the function approximator.

In the following section, we present in detail the algorithms listed in Table 1, each with the motivation behind its development, its inner workings, and some of its known limitations. These algorithms are all model-free, leaving model-based algorithms for future work.

## 3 Algorithms: motivation, inner workings, limitations

### 3.1 Q-learning (value-based)

#### Motivation

Before the NN and DL revolution, tremendous research was conducted to propose efficient techniques for solving the Bellman equation in the context of RL. For this purpose, there have been three principal classes of methods: dynamic programming, Monte Carlo methods, and temporal-difference (TD) learning. Although dynamic programming methods are well-developed mathematically, they must be provided with a complete and accurate model of the environment, which is often not available. 
Monte Carlo methods, on the other hand, do not require a model and are conceptually simple, but are not suited for step-by-step incremental computation (i.e., they allow only episodic training). Finally, TD methods require no model and are fully stepwise-incremental as well as episodic. Consequently, we will see that most algorithms are either TD-based (mostly) or Monte Carlo-based (less often), but rarely based on dynamic programming. Thus, to illustrate the RL paradigm here, we present the Q-learning algorithm, which is a TD-based model that belongs to the tabular RL model family. Other RL algorithms (i.e., ones that do not rely on NNs), such as State-action-reward-state-action (SARSA), which is an on-policy version of Q-learning, and the Monte Carlo Tree Search (MCTS), which relies on heuristic trees instead of a finite-size table, can be found in Sutton & Barto's book [1].

| Algorithm | Value-based (0) / Policy-based (1) | Model-based (0) / Model-free (1) | On-policy (0) / Off-policy (1) | Temporal-diff. (0) / Monte Carlo (1) | Tabular (0) / NN (1) |
|---|---|---|---|---|---|
| Q-learning | 0 | 1 | 1 | 0 | 0 |
| DQN | 0 | 1 | 1 | 0 | 1 |
| REINFORCE | 1 | 1 | 0 | 1 | 1 |
| DDPG | 0 and 1 | 1 | 0 and 1 | 0 | 1 |
| TD3 | 0 and 1 | 1 | 0 and 1 | 0 | 1 |
| PPO | 0 and 1 | 1 | 0 and 1 | 0 | 1 |

Table 1: Taxonomy of the RL and DRL algorithms

#### Inner-workings

Q-learning (introduced in 1989 by Christopher Watkins) is one of the basic, yet classic, algorithms. Instead of the value function \(V\) presented previously, Q-learning considers a variant of it called the \(Q\)-function, where Q designates quality. While \(V\) is only associated with states, \(Q\) is conditioned on states and actions as well. 
\[Q_{t}=E[G_{t}|s_{t}=s,a_{t}=a]=E[\sum_{k=0}^{T}\gamma^{(k)}R_{t+k+1}|s_{t}=s,a_{t}=a] \tag{6}\] This makes the evaluation of the agent's policy easier and explicit, and since we can utilize the mathematical expectation formula (equation 4), the relationship between \(V\) and \(Q\) is formulated as: \[V^{\pi}(s)=\sum_{a\in A}\pi(a|s)Q^{\pi}(s,a)\] As illustrated in Figure 2, Q-learning uses a Q-table to store the Q-value of taking a certain action in a given state. It then updates the Q-value based on the observed reward and the estimated future value: \[Q(s_{t},a_{t})=Q(s_{t},a_{t})+\alpha\cdot\left[r_{t}+\gamma\cdot\max_{a}Q(s_{t+1},a)-Q(s_{t},a_{t})\right] \tag{7}\] After running many iterations of the algorithm described in **Algorithm 1** below, the model (i.e., the Q-table) should converge, meaning that the highest Q-values will be assigned to the optimal state-action pairs. For the proof of convergence, see [24]. Notes for Algorithm 1:

* s' and a' denote the next state and the next action, respectively.
* \(\varepsilon\)-greedy is a method to manage the exploration-exploitation process. Here, \(\varepsilon\) is set to a small number (\(<\)0.5), and a number \(x\) is randomly generated; if \(x<\varepsilon\), the algorithm explores, that is, it selects actions randomly, whereas if \(x>\varepsilon\), the algorithm exploits, that is, it uses the Q-table to sample the best action, i.e., the one with the highest Q-value.

Figure 2: Q-learning

```
Initialize Q(s,a) arbitrarily
Repeat for each episode:
    Initialize s
    Repeat for each step of the episode:
        Choose a given s using the policy derived from Q (e.g., ε-greedy)
        Take action a, observe r, s'
        Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)]
        s ← s'
    Until s is terminal
```

**Algorithm 1** Q-learning

#### Limitation

Although it was the standard in the years before the NN revolution, 
the Q-learning algorithm can only be applied in a limited range of applications. This is because of its main drawback: its inability to (directly) learn in environments with continuous states and actions, since it uses finite-size tables as its brain.

### 3.2 Deep Q-learning (value-based)

#### Motivation

Given that the previous sub-section laid out many concepts related to RL in general and the Q-learning algorithm specifically, understanding the deep version of the latter, the deep Q-network (DQN), should be relatively easy. All concepts related to Q-learning (e.g., the Bellman equation, \(\varepsilon\)-greedy exploration, etc.) still apply to DQN, except for one fundamental difference, which is the use of function approximators, specifically NNs, instead of tabular storage for the Q-values. In 2013, Mnih et al. from DeepMind introduced the DQN architecture. The work was labeled revolutionary as they used DQN to play famous Atari games at a super-human level. Since then, DQN has been used in many applications including medicine [25], industrial control and engineering [26], traffic management [27], resource allocation [28], as well as games [3, 4]. Moreover, the DQN work has opened the doors for many theoretical improvements and DRL algorithm developments, and to date, the DQN paper has been cited more than 9500 times.

#### Inner-workings

DQN has seen two successive architectures, where the second brought significant improvements over the first. The former DQN architecture involved a naive, straightforward replacement of the Q-table in Q-learning with a NN, enabling a gradient-based update of the TD loss instead of the traditional dynamic programming approach. While this worked for a few easier tasks, it suffered from two main problems, namely:

* The experience correlation: when the agent learns from consecutive experiences as they occur, data can be highly correlated. 
This is because a given state may provoke a certain action, which in turn will provoke a specific state and action, and so on, since those environments are governed by well-defined transition functions.

* The moving target problem: unlike supervised learning (SL), where the data is fixed, in RL the agent generates its own data, and the more the agent changes its policy, the more the data distributions change, which makes calculating the Q-values very unstable. Therefore, the model is said to be chasing a moving target: each time the Q-network is updated, the target Q-value changes as well. Formally, if the update is computed from \(r_{t}+\gamma\max_{a}Q(s_{t+1},a;\theta)-Q(s_{t},a_{t};\theta)\), then the learning is prone to becoming unstable since the target, \(r_{t}+\gamma\max_{a}Q(s_{t+1},a;\theta)\), and the prediction, \(Q(s_{t},a_{t};\theta)\), are not independent, as they both rely on \(\theta\).

For this, the second architecture was proposed and demonstrated significant mitigation of the two problems discussed. This was achieved by adding two main blocks to the original architecture:

* The replay memory: also called experience replay or the replay buffer. This helps break the correlation between successive experiences and ensures _independent, identically distributed_ (IID)-like data, which is assumed in most supervised convergence proofs. It achieves this by storing transitions of {state, action, reward, new state} until batches of these are available, then randomly sampling transitions for learning, in a phase that is separate from the interaction-with-the-environment/experience-generation phase.
* The target network: by setting up a separate network for the calculation of the target Q-value, the model becomes more robust and stable. This is because the target can change only after batches of learning episodes (as opposed to every timestep in the former case). Thus, the model can succeed in approximating the target progressively. 
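The replay memory itself is a very small piece of code. Below is a minimal sketch (the class name and toy transitions are illustrative, not from the DQN paper) showing the two operations that matter: appending transitions and sampling them uniformly at random:

```python
import random
from collections import deque

class ReplayMemory:
    """Minimal sketch of a replay memory; names and sizes are illustrative."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest transitions evicted automatically

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random sampling breaks the correlation between consecutive experiences
        return random.sample(list(self.buffer), batch_size)

memory = ReplayMemory(capacity=100)
for t in range(10):                            # pretend transitions from some episode
    memory.store(t, t % 2, 0.0, t + 1, False)
batch = memory.sample(4)                       # an IID-like minibatch for learning
print(len(batch))                              # prints 4
```

The `maxlen` argument of `deque` gives the fixed capacity \(N\) of Algorithm 2 for free: once full, storing a new transition silently discards the oldest one.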
Illustrated in Figure 3 is the enhanced version of the DQN architecture. Along with **Algorithm 2**, we shall explain its inner workings. After initializing the two networks' parameters (the Q-network with \(\theta\) and the target network with \(\theta\)', where \(\theta=\theta\)'), the learning episodes begin. We first set the initial state and run each episode step by step following the general RL workflow. The agent gives an action based on the state provided, and the environment receives the action and outputs a new state along with the corresponding reward. This (so-called) transition {state, action, reward, new state} is stored in a memory (i.e., the replay memory). After a couple of episodes, the stored transitions are sampled randomly from the replay memory (to avoid correlation of experiences) to compute their corresponding Q-values. The Q-value given by the Q-network is the predicted one, whereas the Q-value given by the target network with the Bellman approximation is considered the real one. Finally, the Q-network is updated via the mean-square error (MSE) to minimize the loss between the predicted Q and the real one, whereas the target network's parameters are updated by periodically copying the Q-network's parameters, which helps avoid the moving target problem. 
Figure 3: Deep Q-Network (DQN)

```
Initialize replay memory D to capacity N
Initialize action-value function Q with random weights θ          # NN
Initialize target action-value function Q̄ with weights θ' = θ     # NN
For episode = 1, M do
    Initialize sequence s_1
    For t = 1, T do
        With probability ε select a random action a_t
        Otherwise select a_t = argmax_a Q(s_t, a; θ)
        Execute action a_t and observe reward r_t and next state s_{t+1}
        Store transition (s_t, a_t, r_t, s_{t+1}) in D
        Sample a random minibatch of transitions (s_j, a_j, r_j, s_{j+1}) from D
        Set y_j = r_j                                    if the episode terminates at step j+1
            y_j = r_j + γ max_{a'} Q̄(s_{j+1}, a'; θ')    otherwise
        Gradient descent on MSE(y_j, Q(s_j, a_j; θ)) with respect to θ
        Every C steps, reset Q̄ = Q
    End for
End for
```

**Algorithm 2** DQN

#### Limitation

As mentioned, DQN has been driving a tremendous number of applications in many domains. Nevertheless, it is still limited, mainly because of the categorical way in which actions are generated (\(a_{t}=\operatorname{argmax}_{a}Q(s_{t},a;\theta)\)). This makes it unable to operate in continuous action settings (e.g., controlling velocity). On top of this, DQN has also shown significant training instability in high-dimensional environments, where function approximation of the Q-value can become prone to significant overestimation error [29]. 
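To make the target computation in Algorithm 2 concrete, here is a minimal sketch in which small numpy arrays stand in for the Q-network and the target network (the array-based "networks" and the toy minibatch are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

gamma = 0.99
n_states, n_actions = 4, 2
rng = np.random.default_rng(0)
q_net = rng.normal(size=(n_states, n_actions))   # stands in for Q(s, a; θ)
target_net = q_net.copy()                        # stands in for Q̄(s, a; θ'), with θ' = θ

def td_targets(batch, target_net, gamma):
    """y_j = r_j if the episode terminates at step j+1, else r_j + γ max_a' Q̄(s_{j+1}, a')."""
    ys = []
    for s, a, r, s_next, done in batch:
        ys.append(r if done else r + gamma * target_net[s_next].max())
    return np.array(ys)

# a toy minibatch of (state, action, reward, next_state, done) transitions
batch = [(0, 1, 0.0, 1, False), (1, 0, 1.0, 2, False), (2, 1, 5.0, 3, True)]
ys = td_targets(batch, target_net, gamma)
preds = np.array([q_net[s, a] for s, a, *_ in batch])   # predicted Q(s_j, a_j; θ)
mse = float(((ys - preds) ** 2).mean())  # the loss driven down by gradient descent on θ
print(ys.shape, float(ys[-1]))           # prints (3,) 5.0 — a terminal target is just r_j
```

Note that `target_net` never appears in the loss's differentiated side: it is only read, which is exactly what keeps the target from moving at every update.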
### 3.3 REINFORCE (policy-based)

#### Motivation

Policy gradient (PG) algorithms are the most natural and direct way of doing RL with neural networks. This family of RL and DRL techniques was popularized by Sutton et al. in [20]. It proposes a new set of algorithms that can directly map states to actions (as opposed to Q-values, then actions) via a learnable, differentiable function approximator such as a NN. The approximator should be differentiable since the update is based on the gradient.

#### Inner-workings

Unlike value-based methods such as the previously presented Q-learning and DQN, PG algorithms have a more solid mathematical basis: instead of relying on the Bellman approximation of the Q-values via recursive decomposition, the gradient of the objective in PG algorithms is directly and intuitively tied to the policy. 
Below is the proof of the PG theorem. We know that the mathematical expectation \(E\) is equal to \(E[X]=\sum_{i=0}^{n}x_{i}P(x_{i})\), where \(x\) is the random variable and \(P\) is the probability of \(x\). Therefore, the gradient of the objective function, \(\nabla_{\theta}J(\pi_{\theta})=\nabla_{\theta}E_{\tau\sim\pi_{\theta}}[R(\tau)]\) (where \(\tau\) is the trajectory), can be formulated as:

\[\nabla_{\theta}J(\pi_{\theta})=\nabla_{\theta}\int_{\tau}P(\tau|\theta)R(\tau),\qquad\text{expanding the expectation formula}\]
\[=\int_{\tau}\nabla_{\theta}P(\tau|\theta)R(\tau),\qquad\text{moving the gradient inside the integral}\]
\[=\int_{\tau}P(\tau|\theta)\frac{\nabla_{\theta}P(\tau|\theta)}{P(\tau|\theta)}R(\tau),\qquad\text{multiplying and dividing by }P(\tau|\theta)\]
\[=\int_{\tau}P(\tau|\theta)\nabla_{\theta}\log\left(P(\tau|\theta)\right)R(\tau),\qquad\text{log-derivative trick: }\frac{du}{u}=d\log(u)\]
\[=E_{\tau\sim\pi_{\theta}}[\nabla_{\theta}\log\left(P(\tau|\theta)\right)R(\tau)],\qquad\text{back to the expectation form}\]

Among the terms in the last expression, only \(P(\tau|\theta)\) needs to be defined (the reward is known). Since \(P(\tau|\theta)\), the probability of following a trajectory \(\tau\), is dictated by the policy function \(\pi_{\theta}\), the final expression becomes:

\[\nabla_{\theta}J(\pi_{\theta})=E_{\tau\sim\pi_{\theta}}\Big{[}\sum_{t=0}^{T}\nabla_{\theta}\log\left(\pi_{\theta}(a_{t}|s_{t})\right)R(\tau)\Big{]}, \tag{8}\]

By performing gradient ascent on a Monte Carlo estimate of this expected value, we can find the optimal policy. While many tricks have been proposed to make the PG algorithm better, all improved PG algorithms are still based on this basic theorem. 
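The log-derivative trick at the heart of this derivation can be checked numerically. The sketch below (a toy Bernoulli "policy" with a made-up reward table, purely illustrative) estimates \(\nabla_{\theta}E[f(x)]\) with the score-function form \(E[f(x)\nabla_{\theta}\log p_{\theta}(x)]\) and compares it to the exact gradient \(f(1)-f(0)\):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3                      # parameter of a Bernoulli "policy" over two outcomes
f = {0: 1.0, 1: 4.0}             # made-up "reward" for each outcome

x = rng.random(200_000) < theta  # 200k samples, x = 1 with probability theta
rewards = np.where(x, f[1], f[0])
# grad_theta log p(x|theta) for a Bernoulli: 1/theta when x=1, -1/(1-theta) when x=0
grad_log_p = np.where(x, 1.0 / theta, -1.0 / (1.0 - theta))
estimate = float((rewards * grad_log_p).mean())  # E[f(x) * grad log p(x|theta)]

# exact gradient of E[f(x)] = theta*f(1) + (1-theta)*f(0) with respect to theta
exact = f[1] - f[0]  # = 3.0
print(round(estimate, 2), exact)
assert abs(estimate - exact) < 0.1  # the score-function estimate matches
```

The same identity, with \(\pi_{\theta}(a|s)\) in place of the Bernoulli and the return \(R(\tau)\) in place of \(f\), is what equation 8 turns into a training signal.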
This final expression allows an easy and direct gradient-based update of the agent's parameters based solely on its policy (i.e., its actions), instead of the indirect way (i.e., via the state-action value, or Q-value). Moreover, it naturally permits the training of RL agents in continuous-action environments, because the action selection is not based on a categorical sampling method such as the _argmax_ in DQN. Most PG-based algorithms can be used for discrete-action environments, via any categorical sampling function, as well as for continuous settings, where the agent tries to learn a distribution function such as the Gaussian distribution conditioned on its parameters, the mean (\(\mu\)) and the variance (\(\sigma\)). The simplest PG algorithm is called REINFORCE. Here, the model follows a naive strategy of using a NN to map states to actions optimally (i.e., in a way that maximizes the objective function). In other words, the REINFORCE algorithm presents a direct differentiation of the reinforcement learning objective, i.e., the expected return described in equation 3: \(E[\sum_{k=0}^{T}\gamma^{(k)}R_{t+k+1}\,|s_{t}=s]\). As illustrated in Figure 4 and presented in **Algorithm 3**, there is only one network, the policy network (\(\pi\)). The agent learns to choose optimal actions given the environment's states so as to increase the objective function (equation 8) via gradient ascent. 
Figure 4: REINFORCE ``` Initialize a differentiable policy function \(\pi(a|s,\theta)\) # NN Repeat for each episode: Record \(S_{0}\), \(A_{0}\), \(R_{1}\),..., \(S_{T-1}\), \(A_{T-1}\), \(R_{T}\), following \(\pi\) Loop for each step of the episode \(t=0,1,...,T-1\): \(G_{t}\leftarrow\sum_{k=t+1}^{T}\gamma^{k-t-1}R_{k}\) \(\theta\leftarrow\theta+\alpha\gamma^{t}G_{t}\nabla\ln\pi(A_{t}|S_{t},\theta)\) # \(\pi(A_{t}|S_{t},\theta)\) returns the probability of \(A_{t}\) ``` **Algorithm 3** REINFORCE #### 3.3.3 Limitation Despite all the advantages provided by the PG theorem and its direct implementation, the REINFORCE algorithm suffers from fundamental problems, namely, noisy estimates of the gradient caused by unclear credit assignment. That is, it is not clear which action resulted in which reward. This is because, as described in algorithm 3, the agent is only updated once a complete episode of data (i.e., all steps) or more is collected. This is necessary since PG is based on a statistical Monte Carlo-like estimate from the accumulated reward samples (returns) generated by the policy. To address this, a critical improvement was proposed as a new family of DRL algorithms known as actor-critic architectures [30]. To date, actor-critic-based models are the SOTA in the DRL field, especially in model-free DRL. The key contribution of actor-critic models is the use of both the policy and the value for learning optimal strategies. This is the reason why, in Table 1, all models after REINFORCE were dubbed value-based as well as policy-based.
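The return computation \(G_{t}\) in algorithm 3 can be sketched in a few lines; a single backward pass avoids the naive \(O(T^{2})\) double sum (illustrative code, not the paper's):

```python
import numpy as np

def returns_to_go(rewards, gamma=0.99):
    """G_t = sum_{k=t+1}^{T} gamma^{k-t-1} R_k, with rewards[t] = R_{t+1}.
    Computed backwards via the recursion G_t = R_{t+1} + gamma * G_{t+1}."""
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G
```

Each \(G_{t}\) then scales the log-probability gradient of the action taken at step \(t\), as in the update line of algorithm 3.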
In the actor-critic architecture, the actor is a NN that is responsible for learning the policy by trying to maximize an objective function similar to that of the REINFORCE algorithm, whereas the critic (also a NN) learns the value function \(V\), the action value function \(Q\), or even a relationship between the two called the advantage \(A\), which is just the difference between the bootstrapped next estimate and the current one (for \(V\) or \(Q\)), as described in equation 9 below: \[A_{t}=\left\{\begin{array}{cc}r_{t}\ +\ \gamma V(s_{t+1})-\ V(s_{t}),&\text{or:}\\ r_{t}\ +\ \gamma Q(s_{t+1},a_{t+1})-Q(s_{t},a_{t})\end{array}\right. \tag{9}\] In the following subsections, we will overview three SOTA models that have been making headlines over the past five years or so. These are all actor-critic-based models, so more details about this family of DRL algorithms will be presented in each subsection. ## 4 DDPG (actor-critic): #### 4.0.1 Motivation Deep deterministic policy gradient, or DDPG for short, was developed in 2016 by Lillicrap et al. to provide a DRL model that can operate in high-dimensional continuous state and action spaces.
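Returning briefly to the advantage of Eq. (9), the one-step TD form that the actor-critic models below rely on can be sketched as (a minimal, hypothetical helper):

```python
def td_advantage(r, v_s, v_next, gamma=0.99, done=False):
    """One-step advantage A_t = r_t + gamma * V(s_{t+1}) - V(s_t), Eq. (9);
    the bootstrap term gamma * V(s_{t+1}) is dropped on terminal transitions."""
    target = r if done else r + gamma * v_next
    return target - v_s
```

A positive advantage tells the actor the chosen action did better than the critic's baseline estimate; a negative one, worse.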
DDPG, as the name suggests, uses a deterministic policy, as opposed to a stochastic one, by mapping states to actions directly, instead of to a probability distribution over actions, via a deterministic policy function named \(\mu(s|\theta^{\mu})\). #### 4.0.2 Inner-workings As shown in Figure 5, DDPG is an actor-critic model, meaning that it contains at least two networks: one called the actor, responsible for learning the policy, and a second network named the critic, for learning the value function of each state (\(V\)) or state-action pair (\(Q\)). The critic learns by minimizing the MSE loss between the predicted Q-value and the actual one, akin to the DQN model discussed previously. As the critic gets better at predicting the corresponding Q-values, the actor is presented with a convenient value that it can maximize in a PG fashion. Further, DDPG makes use of other improvements made for the DQN model, namely, the replay memory and the target network. In **algorithm 4** we quote the line "Store transition \((s_{t},a_{t},r_{t},s_{t+1})\) in \(D\)", which refers to the same replay memory used in the DQN model. This is because it was shown that PG models, including those with actor-critic architectures, are very sample inefficient since they use a given transition only once and then discard it. The replay memory allows training in mini-batches of data as well as reusing the same data multiple times during training. Another change is the way the target is updated: while DQN updates the target network by means of a hard copy of the parameters, DDPG uses a soft copy based on Polyak averaging as follows: \(\theta^{Q\prime}\leftarrow\tau\theta^{Q}\ +\ (1\ -\ \tau)\theta^{Q\prime}\). As we can see, the new target parameters equal a small interpolation factor \(\tau\) (e.g., around 0.01) times the online network parameters, plus \((1-\tau)\) (close to 1, e.g., 0.99) times the old parameters of the target network.
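This soft (Polyak) update can be sketched as follows, written so that a small \(\tau\) weights the online network and \((1-\tau)\), close to 1, weights the old target, matching the formula above (parameter names are illustrative):

```python
def polyak_update(target_params, online_params, tau=0.01):
    """theta_target <- tau * theta_online + (1 - tau) * theta_target.
    With small tau, the target tracks the online network slowly."""
    for k in target_params:
        target_params[k] = tau * online_params[k] + (1 - tau) * target_params[k]
    return target_params
```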
This constrains the target network to change slowly and was shown (in the DDPG paper) to yield better performance than the hard copy of the DQN model. Further, since the DDPG policy is purely deterministic, it must ensure the exploration-exploitation balance somehow. Indeed, the action selection is executed according to the equation mentioned in the algorithm, specifically \(a_{t}=\mu(s_{t}|\theta^{\mu})+N_{t}\), where \(N_{t}\) is a noise signal added to the policy function to force stochastic behavior. In the paper, the researchers used the Ornstein-Uhlenbeck noise [31]; however, later research has demonstrated that other simple noise signals, such as Gaussian noise, can be used and yield competitive performance. Figure 5: Deep deterministic policy gradient (DDPG) ``` Initialize the critic and the actor networks, \(Q(s,a|\theta^{Q})\) and \(\mu(s|\theta^{\mu})\) Initialize the target networks \(Q^{\prime}\) and \(\mu^{\prime}\): \(\theta^{Q^{\prime}}\leftarrow\theta^{Q}\), \(\theta^{\mu^{\prime}}\leftarrow\theta^{\mu}\) Initialize the replay memory \(D\) For episode = 1, M do Initialize a random process \(N\) for action exploration Receive initial observation state \(s_{1}\) For t = 1, T do Select action \(a_{t}=\mu(s_{t}|\theta^{\mu})+N_{t}\) # policy + noise Execute \(a_{t}\), observe reward \(r_{t}\) and new state \(s_{t+1}\) Store transition \((s_{t},a_{t},r_{t},s_{t+1})\) in \(D\) Sample a random minibatch of \(K\) transitions from \(D\) Set \(y_{i}=r_{i}+\gamma Q^{\prime}\big{(}s_{i+1},\mu^{\prime}(s_{i+1}|\theta^{\mu^{\prime}})\bigm{|}\theta^{Q^{\prime}}\big{)}\) Update critic by minimizing the loss:
\(MSE(y_{i},Q(s_{i},a_{i}|\theta^{Q}))\) Update the actor by maximizing the expected \(Q\): \[\nabla_{\theta^{\mu}}J\approx\frac{1}{K}\sum_{i=1}^{K}\nabla_{a}Q(s,a|\theta^{Q})\mid_{s=s_{i},a=\mu(s_{i})}\nabla_{\theta^{\mu}}\mu(s|\theta^{\mu})\mid_{s_{i}}\] Update the target networks: \[\theta^{Q^{\prime}}\leftarrow\tau\theta^{Q}+(1-\tau)\theta^{Q^{\prime}}\quad\#\text{Polyak averaging}\] \[\theta^{\mu^{\prime}}\leftarrow\tau\theta^{\mu}+(1-\tau)\theta^{\mu^{\prime}}\] End for End for ``` **Algorithm 4** DDPG #### 4.0.3 Limitation While it has empowered several great achievements in the AI field, DDPG is often criticized for being unstable. This manifests in the form of high sensitivity to hyperparameter tuning and a propensity to converge to very poor solutions, or even diverge, in tasks requiring relatively more exploration [32]. This left room for potential improvements to further the DRL decision-making optimality, especially for high-dimensional spaces and noisy environments. ## 5 TD3 (actor-critic): ### Motivation The limited performance of the DDPG model was extensively explored. As a result, the cause was identified to be _the overestimation bias_. This problem was already known as an issue in Q-learning. It manifests when the estimated Q-values are often greater than the true ones; thus, the agent is said to be overestimating the expected future rewards, making it select actions of misleadingly high values.
This phenomenon accumulates after each iteration and propagates through the TD error. The overestimation bias is generally related to the statistical approximation via NNs or another approximative sample-based method, such as the dynamic programming decomposition in Q-learning. As a natural successor to DDPG, and given the main limitations discussed, the twin-delayed deep deterministic (TD3) PG model was developed in 2018. ### Inner-workings TD3 addresses the overestimation bias issue of the DDPG model by maintaining a pair of critics, \(Q_{1}\) and \(Q_{2}\), as illustrated in Figure 6 (hence the name "twin"), along with a single actor, each network with its corresponding target. For each time step, TD3 uses the smaller of the two Q-values and updates the policy less frequently than the critic networks. It is worth mentioning that the use of two networks for a better approximation of the Q-value was inspired by previous work concerning the overestimation bias in Q-learning and DQN, namely, double Q-learning and double DQN [29], respectively. Besides these changes, the TD3 model is akin to the DDPG model in both selecting actions and updating the networks, as explained in **algorithm 5**.
``` Initialize the critic networks \(Q_{\theta_{1}}\), \(Q_{\theta_{2}}\), and the actor network \(\pi_{\phi}\), randomly Initialize the target networks: \(\theta^{\prime}_{1}\leftarrow\theta_{1}\), \(\theta^{\prime}_{2}\leftarrow\theta_{2}\), \(\phi^{\prime}\leftarrow\phi\) Initialize the replay memory \(B\) For t = 1 to T do Select action with exploration noise \(a\sim\pi_{\phi}(s)+\varepsilon\), \(\varepsilon\sim N(0,\sigma)\), and observe reward \(r\) and new state \(s^{\prime}\) Store transition \((s,a,r,s^{\prime})\) in \(B\) Sample a mini-batch of \(N\) transitions from \(B\) \(\widetilde{a}\leftarrow\pi_{\phi^{\prime}}(s^{\prime})+\varepsilon\) # target policy + noise \(y\leftarrow r+\gamma\min_{i=1,2}Q_{\theta^{\prime}_{i}}(s^{\prime},\widetilde{a})\) # Clipped double Q-learning Update critics: \(\theta_{i}\leftarrow\underset{\theta_{i}}{\text{argmin}}\ MSE(y,Q_{\theta_{i}}(s,a))\) If t mod \(d\) then # Delayed update of target and policy networks Update \(\phi\) by the deterministic policy gradient: \(\nabla_{\phi}J(\phi)=N^{-1}\sum\nabla_{a}Q_{\theta_{1}}(s,a)|_{a=\pi_{\phi}(s)}\ \nabla_{\phi}\pi_{\phi}(s)\) Update the target networks: \(\theta^{\prime}_{i}\leftarrow\tau\theta_{i}+(1-\tau)\theta^{\prime}_{i}\) # Polyak averaging \(\phi^{\prime}\leftarrow\tau\phi+(1-\tau)\phi^{\prime}\) End if End for ``` **Algorithm 5** TD3 Figure 6: Twin delayed deep deterministic (TD3) policy gradient ### Limitation The exploration is handled manually with Gaussian noise, and the use of six networks is computationally heavy. Further, TD3 can only be used naturally (if no feature engineering is considered) in continuous action-space environments.
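Two of TD3's changes in algorithm 5 are easy to sketch in isolation: the clipped double-Q target and the delayed update schedule (illustrative helpers, not the authors' code):

```python
def td3_target(r, q1_next, q2_next, gamma=0.99):
    """Clipped double Q-learning target: y = r + gamma * min(Q1', Q2'),
    using the smaller critic estimate to curb overestimation bias."""
    return r + gamma * min(q1_next, q2_next)

def actor_update_due(step, d=2):
    """Delayed updates: the actor and the targets move only every d-th
    critic update."""
    return step % d == 0
```

Taking the minimum of the two critics makes the target a pessimistic estimate, which counteracts the upward bias accumulated through the TD error.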
## 6 PPO (actor-critic): ### Motivation In contrast to DDPG and TD3, where exploration is executed by manually introducing a noise parameter to the policy, there exists another category of actor-critic PG algorithms that uses a stochastic policy, as opposed to a deterministic one. This inherently makes the exploration phase handled automatically. Among the main SOTA models that utilize this approach is the proximal policy optimization (PPO) algorithm. #### 6.0.1 Inner-workings Similar to the REINFORCE model, PPO learns online, meaning that it collects mini-batches of experience while interacting with the environment and uses them to update its policy, whereas DDPG and TD3 are considered off-policy (or offline) models since the policy used to collect experiences (the noisy exploration policy) is different from the one updated for learning (\(\pi\)). Once the PPO policy is updated, a newer batch is collected with the newly updated policy. The critical idea introduced by PPO is the concept of the _trust region_, which refers to the fact that a new update of the policy should not change it too much from the previous policy. This results in significantly less variance and smoother training, and makes sure the agent does not go down an unrecoverable path. Indeed, we quote the original paper: "These methods have the stability and reliability of trust region policy optimization (TRPO) [33] methods but are much simpler to implement, requiring only a few lines of code change to a vanilla policy gradient implementation". The PPO model was developed for three main points: * ease of implementation * sample efficiency * ease of tuning As one can tell from Figure 6, although the TD3 model has shown impressive improvement compared to previous models, it involves many complexities regarding implementation and training, especially its use of six separate networks, as opposed to the architecture of PPO illustrated in Figure 7.
Additionally, the previously presented on-policy model (REINFORCE) suffered from sample inefficiency since it uses each collected transition only once. PPO is also an on-policy model, yet it can utilize the same transition multiple times by using a clipping function, known as the surrogate loss (mentioned in **algorithm 6**), to force the ratio of the new policy to the old one to stay within a manageable interval. This key contribution of the PPO model is also helpful for hyperparameter tuning, which can be very laborious for DDPG and TD3 given their complex architectures with several parts, as well as the risk of going outside the trust region, especially if the learning rate is larger than it should be. Formally, PPO introduces small, yet very effective, changes to the simple PG form such as the REINFORCE model. That is, it replaces the "\(\log(\pi_{\theta}(a_{t}|s_{t}))\)" in the PG objective function of equation 8, recall: \[\nabla_{\theta}J(\pi_{\theta})=E_{\tau\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\nabla_{\theta}\log\left(\pi_{\theta}(a_{t}|s_{t})\right)R(\tau)\right]\] with the ratio \(r_{t}(\theta)=\frac{\pi_{\theta_{new}}}{\pi_{\theta_{old}}}\), clipping it between \(1-\epsilon\) and \(1+\epsilon\), and maximizing it using gradient ascent just like a normal PG algorithm. Figure 7: Proximal policy optimization (PPO) ``` Initialize policy parameters \(\theta_{0}\), initialize value function parameters \(\Phi_{0}\) For k = 0, 1, 2,..., do Collect a set of trajectories \(D_{k}=\{\tau_{i}\}\) by running policy \(\pi(\theta_{k})\) in the environment Compute rewards-to-go \(\hat{R}_{t}\).
Compute the advantage estimates: \(\hat{A}_{t}=\delta_{t}+(\gamma\lambda)\delta_{t+1}+...+(\gamma\lambda)^{T-t+1}\delta_{T-1}\), where \(\delta_{t}=r_{t}+\gamma V_{\Phi}(s_{t+1})-V_{\Phi}(s_{t})\) # like the TD error in the Bellman equation Update the policy by maximizing the PPO-clip objective: # Below is the clipped surrogate loss, where \(r_{t}(\theta)=\frac{\pi_{\theta_{new}}}{\pi_{\theta_{old}}}\) \(L_{\theta_{k}}(\theta)=E\big{[}\sum_{t=0}^{T}\min\big{(}r_{t}(\theta)\hat{A}_{t}^{\pi_{k}},\text{clip}(r_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{t}^{\pi_{k}}\big{)}\big{]}\) Fit the value function \(V_{\Phi}\) to the rewards-to-go \(\hat{R}_{t}\) via MSE loss End for ``` **Algorithm 6** PPO ### Limitation To date, PPO is still considered a SOTA algorithm, and its limitations are yet to be explored. It was created by a team at OpenAI, and it is still the main algorithm used in the company. Indeed, it was recently used to fine-tune their ChatGPT model as well. ## 7 Summary: In this review, we have provided a comprehensive overview of the field of reinforcement learning (RL), covering the key concepts, techniques, and algorithms. We have traced the historical progress of the field, specifically the main model-free algorithms, starting with the early Q-learning algorithm and covering the current state-of-the-art algorithms of deep RL such as DQN, TD3, and PPO (all discussed algorithms are listed in Figure 8 below). We have also discussed the challenges and limitations of RL, including sample efficiency and the exploration-exploitation trade-off, and how these challenges have been addressed in the development of newer algorithms.
One of the main advantages of RL algorithms is their ability to learn from experience and adapt to changing environments. This allows RL agents to solve complex and dynamic problems that may be difficult to model using other techniques. However, RL algorithms can also be computationally intensive and may require a large amount of data and interactions with the environment to learn effectively. Given the diverse range of RL algorithms that have been developed, it can be challenging to choose the best algorithm for a particular problem. In general, model-based algorithms are more sample efficient but may be less robust than value-based or policy-based algorithms. Value-based algorithms are widely used and can be effective in many situations, but they may suffer from instability or divergence in certain cases. Policy-based algorithms are generally more stable and can handle high-dimensional action spaces, but they may be slower to converge compared to value-based algorithms. In conclusion, the choice of the RL algorithm will depend on the specific characteristics of the problem at hand and the trade-offs between sample efficiency, stability, and convergence speed. This review paper aims to provide a comprehensive and accessible overview of the key concepts, techniques, and algorithms of RL, as well as the challenges and limitations of the field and its historical progress. It is intended to serve as a valuable resource for researchers and practitioners interested in learning more about RL and its applications. ## Acknowledgement: None.
2307.16472
Gating ferromagnetic resonance of magnetic insulators by superconductors via modulating electric-field radiation
We predict that ferromagnetic resonance in insulating magnetic film with inplane magnetization radiates electric fields polarized along the magnetization with opposite amplitudes at two sides of the magnetic insulator, which can be modulated strongly by adjacent superconductors. With a single superconductor adjacent to the magnetic insulator this radiated electric field is totally reflected with a $\pi$-phase shift, which thereby vanishes at the superconductor side and causes no influence on the ferromagnetic resonance. When the magnetic insulator is sandwiched by two superconductors, this reflection becomes back and forth, so the electric field exists at both superconductors that drives the Meissner supercurrent, which in turn shifts efficiently the ferromagnetic resonance. We predict an ultrastrong coupling between magnons in the yttrium iron garnet and Cooper pairs in NbN with the frequency shift achieving tens of percent of the bare ferromagnetic resonance.
Xi-Han Zhou, Tao Yu
2023-07-31T08:03:22Z
http://arxiv.org/abs/2307.16472v2
Gating ferromagnetic resonance of magnetic insulators by superconductors via modulating electric-field radiation ###### Abstract We predict that ferromagnetic resonance in _insulating_ magnetic film with inplane magnetization radiates electric fields polarized along the magnetization with opposite amplitudes at two sides of the magnetic insulator, which can be modulated strongly by adjacent superconductors. With a single superconductor adjacent to the magnetic insulator this radiated electric field is totally reflected with a \(\pi\)-phase shift, which thereby vanishes at the superconductor side and causes no influence on the ferromagnetic resonance. When the magnetic insulator is sandwiched by two superconductors, this reflection becomes back and forth, so the electric field exists at both superconductors that drives the Meissner supercurrent, which in turn shifts efficiently the ferromagnetic resonance. We predict an ultrastrong coupling between magnons in the yttrium iron garnet and Cooper-pair supercurrent in NbN with a frequency shift achieving tens of percent of the bare ferromagnetic resonance. ## I Introduction "Magnonics" exploits magnetic excitations, i.e., spin waves or their quanta, magnons, as potential information carriers for spin transport in insulators with low-energy consumption [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Interaction between magnons and Cooper-pair supercurrent in heterostructures composed of magnets and superconductors may modulate the transport of spin information [11, 12, 13, 14, 15, 16, 17, 18, 19, 20], strongly enhance the magnon-photon interaction [21, 22, 23, 24, 25, 26, 27], and lead to the emergence of triplet Cooper pairing [28, 29, 30, 31, 32], which may bring unprecedented functionalities in spintronics [28, 29, 30], quantum information [33, 34, 35, 36, 37, 38, 39], and topological quantum computation [40]. 
In this heterostructure, the hybridized quantum states and distribution of macroscopic electromagnetic fields govern its properties. For example, the "ultrastrong coupling" [41] with the coupling strength close to the ferromagnetic resonance (FMR) frequency unveils the importance of the dipolar interaction in the superconductor (S)\(|\)metallic ferromagnet (F)\(|\)superconductor (S) heterostructure [22, 23, 24], where the photon mode with a large mode density is localized in the nano-scale between two superconductors [42]. The importance of the dipolar interaction also manifests in the superconductor gating effect on magnons [43, 44, 45, 46, 14, 47, 48, 49, 50], in which the frequency of magnons with finite wave number [47, 48, 49, 50] can be shifted up to tens of GHz, as recently predicted [14, 15] and observed [20] in the superconductor (S)\(|\)ferromagnetic insulator (FI) heterostructure. The stray electric field of magnons drives the supercurrent in the adjacent superconductor, which in turn generates the Oersted magnetic field that affects the low-frequency magnetization dynamics. This gating effect favors the spin diode [51, 10] and magnon trap [52, 53, 54] in proper gating configurations. The FMR frequency in this S\(|\)FI bilayer is not affected, however. On the other hand, the FMR of the metallic ferromagnet sandwiched by two superconductors was shifted up to 50 mT in the resonant field when the thickness of the two superconductor layers is larger than London's penetration depth, as observed in several recent experiments [55, 56, 57]. Above the superconducting transition temperature, the FMR frequency recovers to the Kittel mode [58], which may be exploited to realize a magnetic logic gate through a phase transition in the superconductor. This phenomenon may be related to the frequency splitting induced by the spin-triplet superconducting state [55], Meissner screening [57], and giant demagnetization effects [16, 59].
It appears that this modulation could be absent for the FMR in ferromagnetic insulators [55, 57, 16, 59], however, which has not been addressed in the experiments yet [60, 61, 62]. Silaev predicted recently ultrastrong coupling between magnons and microwave photons in a magnetic insulator when sandwiched by two superconductors of infinite thickness, where the radiation of the electric field out of the heterostructure is completely suppressed [25]. The experiment [55] showed that inserting a thin insulator layer in the heterostructures composed of a metallic ferromagnet sandwiched by two superconductors completely suppresses the shift of FMR. This raises the issue of whether the FMR can be gated or not in magnetic insulators by adjacent superconductors in proper configurations. In this work, we study this issue by going beyond the quasi-static approximation for magnetostatic modes [63] and demonstrate that although the stray magnetic field of the Kittel magnon with uniform magnetization precession is vanishingly small outside of the in-plane magnetized ferromagnetic insulating film, the radiated electric field is significant, with opposite amplitudes at the two sides of the magnetic film and polarization parallel to the magnetization direction. Figure 1: Snapshots of magnetization-radiated electric fields in different heterostructure configurations. The electric field changes linearly across the thickness of the ferromagnetic insulating film. (a) The electric-field amplitude is opposite at two sides of the thin magnetic insulator. (b) When fabricating a superconductor thin film on a ferromagnetic insulator, the electric field is suppressed to vanish at the superconductor side but enhanced at the other side of the magnet. When the magnet is sandwiched by two superconductors, the electric field exists but differs at both sides in both symmetric (c) and asymmetric (d) configurations.
This distribution of the radiated electric field is sensitive to the adjacent superconductors due to the total reflection, as illustrated in Fig. 1 for snapshots of the distribution of electric fields in different heterostructure configurations. The electric field is opposite at two sides of a single thin ferromagnetic insulator [Fig. 1(a)]; contra-intuitively, in the S\(|\)FI bilayer this electric field is suppressed to vanish at the superconductor side [Fig. 1(b)], when the superconductor thickness is larger than a nanometer; nevertheless, when sandwiched by two superconductors, the electric field is neither shifted to vanish nor screened completely, as plotted in Figs. 1(c) and (d) for symmetric and asymmetric configurations. These features are well understood by our mechanism of modulated reflection of magnetization-induced electric fields by superconductors, which predicts the absence of FMR shift in ferromagnetic insulator\(|\)superconductor heterostructure and the ultrastrong modulation of FMR, shifted up to tens of percent of the bare frequency when the ferromagnetic insulator is sandwiched by two thin superconductors. This paper is organized as follows. We address the model and general formalism in Sec. II. In Sec. III, IV, and V, we analyze the distribution of the electric fields from FMR of a single ferromagnetic insulator, S\(|\)FI bilayer, and S\(|\)FI\(|\)S heterostructure, respectively, and address the ultrastrong interaction between the FMR and supercurrent. We conclude and discuss in Sec. VI. ## II Model and general formalism We consider a heterostructure composed of a ferromagnetic insulating film of thickness \(2d_{F}\sim O(100\) nm) with inplane magnetization sandwiched by two thin superconductor layers with thickness \(d_{S}\lesssim\lambda\) and \(d^{\prime}_{S}\lesssim\lambda\), respectively, as illustrated in Fig. 2. Here \(\lambda\sim O(100\) nm) is London's penetration depth of conventional superconductors. 
In the ferromagnetic insulators, the dynamics of magnetization \(\mathbf{M}=M_{x}\hat{\mathbf{x}}+M_{y}\hat{\mathbf{y}}+M_{0}\hat{\mathbf{z}}\), where \(M_{0}\) is the saturated magnetization, is phenomenologically governed by the Landau-Lifshitz-Gilbert (LLG) equation [64] \[\partial\mathbf{M}/\partial t=-\mu_{0}\gamma\mathbf{M}\times\mathbf{H}+\alpha _{G}(\mathbf{M}/M_{0})\times\partial\mathbf{M}/\partial t, \tag{1}\] where \(\mu_{0}\) is the vacuum permeability, \(-\gamma\) is the electron gyromagnetic ratio, and \(\alpha_{G}\) is the damping coefficient of the magnetic insulator. The magnetization precesses around the effective magnetic field \(\mathbf{H}=\mathbf{H}_{\mathrm{app}}+\mathbf{H}_{r}\) that contains the external static field \(\mathbf{H}_{\mathrm{app}}=H_{0}\hat{\mathbf{z}}\) and the radiated dynamic field \(\mathbf{H}_{r}\) generated by the "magnetic dipole radiation" [65; 14]. The energy flow out of the magnetic insulator then causes the radiation damping since the radiated magnetic field out of phase of the magnetization can exert a damping-like torque on the magnetization. The exchange interaction plays no role in the FMR since the gradient of \(\mathbf{M}\) vanishes for the uniform precession. The oscillating magnetic induction \(\mathbf{B}=\mu_{0}(\mathbf{M}+\mathbf{H})\) governs the radiation of electric fields inside and outside the ferromagnetic insulator according to [65] \[\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\hskip 28.452756pt \nabla\times\mathbf{H}=\mathbf{J}_{s}+\varepsilon_{0}\frac{\partial\mathbf{E }}{\partial t}, \tag{2}\] where \(\varepsilon_{0}\) is the vacuum permittivity. When coupled with superconductors, this electric field drives the supercurrent \(\mathbf{J}_{s}\) via London's equation [66] \[\frac{\partial\mathbf{J}_{s}}{\partial t}=\frac{1}{\mu_{0}\lambda^{2}} \mathbf{E},\hskip 28.452756pt\nabla\times\mathbf{J}_{s}=-\frac{1}{\mu_{0} \lambda^{2}}\mathbf{B}. 
\tag{3}\] Here London's penetration depth at different temperatures \(T<T_{c}\) follows the relation [66] \[\lambda(T)=\lambda_{0}\left(1-\left(\frac{T}{T_{c}}\right)^{4}\right)^{-1/2}, \tag{4}\] where \(\lambda_{0}\) is London's penetration depth at zero temperature. Figure 2: S(1)\(|\)FI\(|\)S(2) heterostructure. The thickness of the superconductors above and beneath the thin ferromagnetic insulator of thickness \(2d_{F}\) is \(d_{S}\) and \(d^{\prime}_{S}\), respectively. The driven supercurrents \(\mathbf{J}_{s}\) and \(\mathbf{J}^{\prime}_{s}\) by FMR flow oppositely along the magnetization direction. The boundary condition describes the fields at the interfaces [65]. For the magnetic induction and field, \(\mathbf{B}_{\perp}\) and \(\mathbf{H}_{\parallel}\) are continuous at the boundaries. Since there is no surface current or charge accumulation, the electric field \(\mathbf{E}\) is continuous at interfaces. At low frequencies and with near fields, the quasi-static approximation is usually applied [65], in which situation the radiation damping should be negligibly small. This is proved according to the calculation of radiation damping in Sec. III.1. It is then sufficient to express the radiated magnetic field \(\mathbf{H}_{r}\) as the summation of the dipolar field \(\mathbf{H}_{d}\) and the Oersted field \(\mathbf{H}_{s}\) from the superconductor [63, 64]. The dipolar field \[\mathbf{H}_{d,\beta}(\mathbf{M})=\frac{1}{4\pi}\partial_{\beta}\sum_{\alpha}\partial_{\alpha}\int d\mathbf{r}^{\prime}\frac{M_{\alpha}(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}=\frac{1}{4\pi}\partial_{\beta}\int d\mathbf{r}^{\prime}\frac{-\rho_{m}(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|} \tag{5}\] is governed by Coulomb's law in terms of the magnetic charge \(\rho_{m}=-\nabla\cdot\mathbf{M}\). With the quasi-static approximation, \(\nabla\times\mathbf{B}=\mu_{0}\mathbf{J}_{s}\) in superconductors. Taking the curl of Eq. (2) and substituting Eq.
(3) into it, the electric field inside the superconductor obeys \[\nabla^{2}\mathbf{E}-\mathbf{E}/\lambda^{2}=0. \tag{6}\] On the other hand, taking the curl of \(\nabla\times\mathbf{B}=\mu_{0}\mathbf{J}_{s}\) and combining with Eq. (3), the magnetic induction inside the superconductor obeys \(\nabla^{2}\mathbf{B}-\mathbf{B}/\lambda^{2}=0\). The driven supercurrent then affects the magnetization dynamics. From Eq. (3), the electric field drives the supercurrent inside the superconductor, which in turn generates the vector potential. With the uniform magnetization precession, the system is translationally invariant in the \(y\)-\(z\) plane, so the supercurrent depends only on \(x\), and so does the vector potential [65] \[\mathbf{A}(x)=\frac{\mu_{0}}{4\pi}\int d\mathbf{r}^{\prime}\frac{\mathbf{J}_{ s}(x^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}. \tag{7}\] Accordingly, the Oersted magnetic field \[\mathbf{H}_{s}=(1/\mu_{0})\nabla\times\mathbf{A} \tag{8}\] only contains the \(y\)-component \(H_{y}=-\partial_{x}A_{z}(x)/\mu_{0}\), which drives the magnetization. ## III Single thin ferromagnetic insulator We start with a single insulating ferromagnetic film to address the significant radiated electric fields from the uniform magnetization precession. For a single ferromagnetic insulator of thickness \(2d_{F}\) biased by a static magnetic field \(\mathbf{H}_{\mathrm{app}}=H_{0}\hat{\mathbf{z}}\), the magnetization \(\mathbf{M}\) at FMR is uniform inside the ferromagnetic layer, with the constant demagnetization factor \(N_{xx}=-1\). Since the magnetic film is sufficiently thin, we stick to the uniform precession throughout this work. The opposite magnetic charges at the two surfaces of the film generate opposite magnetic fields outside, resulting in a vanishing stray field \(\mathbf{H}_{d}=0\) outside the ferromagnetic layer, as also calculated from Eq.
(5); inside the ferromagnet, \(\mathbf{H}_{d}=\{-M_{x},0,0\}\) and \(\mathbf{B}=\{0,\mu_{0}M_{y},\mu_{0}(H_{0}+M_{0})\}\), in which only the \(y\)-component of \(\mathbf{B}\) oscillates with frequency \(\omega\) that can radiate the electric field. ### Full solution Here we go beyond the quasi-static approximation and solve the radiated electric field. According to Eq. (2), the oscillating electromagnetic field is the source for radiating microwaves in space. Taking the curl of the first equation in Eq. (2), the electric field of frequency \(\omega\) obeys \[\nabla^{2}\mathbf{E}+\varepsilon_{0}\mu_{0}\omega^{2}\mathbf{E}=-i\omega\mu_ {0}\nabla\times\mathbf{M}. \tag{9}\] Such a radiation process is governed by the oscillating "magnetization current" \(\mathbf{J}_{M}=\nabla\times\mathbf{M}\), which is analogous to the radiation caused by the normal oscillating charge current [65]. Via the Green function technique [65], Eq. (9) has the solution \[\mathbf{E}(\mathbf{r})=\frac{i\mu_{0}\omega}{4\pi}\int\frac{[\nabla^{\prime} \times\mathbf{M}(\mathbf{r}^{\prime})]e^{ik|\mathbf{r}-\mathbf{r}^{\prime}|} }{|\mathbf{r}-\mathbf{r}^{\prime}|}d\mathbf{r}^{\prime}, \tag{10}\] where \(k=\omega/c\) is the wave number of microwaves. Since only the \(x\) and \(y\) components of \(\mathbf{M}\) oscillate with frequency \(\omega\) and \(\mathbf{M}\) is uniform inside the ferromagnetic layer, \((\nabla\times\mathbf{M})_{x,y}=0\) in all space, leading to \(E_{x}=E_{y}=0\) and \[E_{z}(x)=\frac{i\mu_{0}\omega}{4\pi}\int\frac{[\partial_{x^{\prime}}M_{y}( \mathbf{r}^{\prime})]e^{ik|\mathbf{r}-\mathbf{r}^{\prime}|}}{|\mathbf{r}- \mathbf{r}^{\prime}|}d\mathbf{r}^{\prime}. 
\tag{11}\] Using the Weyl identity [10] \[\frac{e^{ik|\mathbf{r}-\mathbf{r}^{\prime}|}}{|\mathbf{r}-\mathbf{r}^{\prime}| }=\int dk^{\prime}_{z}dk^{\prime}_{y}\frac{ie^{ik^{\prime}_{z}(z-z^{\prime})+ik^{\prime }_{y}(y-y^{\prime})}e^{i\sqrt{k^{2}-k^{\prime 2}_{z}-k^{\prime 2}_{y}}|x-x^{\prime}|}}{2\pi \sqrt{k^{2}-k^{\prime 2}_{z}-k^{\prime 2}_{y}}}, \tag{12}\] we obtain the electric field \[E_{z}=\frac{\mu_{0}\omega M_{y}}{2k}\begin{cases}e^{-ik(x-d_{F})}-e^{ik(x+d_{F} )},&-d_{F}<x<d_{F}\\ e^{ik(x-d_{F})}-e^{ik(x+d_{F})},&x>d_{F}\\ e^{-ik(x-d_{F})}-e^{-ik(x+d_{F})},&x<-d_{F}\end{cases}. \tag{13}\] From Eq. (2), we find that the magnetic induction \(B_{x}=0\), \(B_{z}=\mu_{0}(H_{0}+M_{0})\) is static, and \(B_{y}=-\partial_{x}E_{z}/(i\omega)\) follows \[B_{y}=\frac{\mu_{0}M_{y}}{2}\begin{cases}e^{ik(x+d_{F})}+e^{-ik(x-d_{F})},&-d_{F}< x<d_{F}\\ e^{ik(x+d_{F})}-e^{ik(x-d_{F})},&x>d_{F}\\ -e^{-ik(x+d_{F})}+e^{-ik(x-d_{F})},&x<-d_{F}\end{cases}. \tag{14}\] The radiated electric field (13) can be well understood via the oscillating "magnetization current" \(\mathbf{J}_{M}\). For the uniform magnetization precession, \(\mathbf{J}_{M}\) is located at the surfaces of the ferromagnetic insulator, i.e., the dynamic component \[\mathbf{J}_{M}(x)=[\delta(x+d_{F})-\delta(x-d_{F})]M_{y}\hat{\mathbf{z}}\propto M _{y} \tag{15}\] has the same magnitude but opposite sign at the two surfaces \(x=\pm d_{F}\), as illustrated in Fig. 3. Such an oscillating magnetization current radiates electromagnetic waves of wave vector \(k\hat{\mathbf{x}}\) and \(-k\hat{\mathbf{x}}\), with \(k=\omega/c\), in two opposite directions. Due to the opposite sign of \(\mathbf{J}_{M}\) at \(x=\pm d_{F}\), the amplitudes of the electric fields radiated by the left and right surfaces are of opposite sign, \(E_{L}=-E_{R}\equiv E_{0}\propto M_{y}\).
At the right-hand side of the sample, i.e., \(x>d_{F}\), the propagation phases of the radiated electric field from the left and right surfaces are \(k(x+d_{F})\) and \(k(x-d_{F})\), respectively, resulting in a net electric field \(E=E_{0}(e^{ik(x+d_{F})}-e^{ik(x-d_{F})})\). Similarly, when \(x<-d_{F}\), \(E=E_{0}(e^{-ik(x-d_{F})}-e^{-ik(x+d_{F})})\). These recover exactly the solution (13). With the full solutions (13) and (14), we are allowed to calculate the radiation damping of the FMR due to the energy radiated out of the magnetic insulator. According to Eq. (14), the radiated magnetic field inside the magnetic insulating film \[H_{x}^{r}=-M_{x},\] \[H_{y}^{r}=i\frac{\omega d_{F}M_{y}}{c}=-\frac{d_{F}}{c}\frac{dM_ {y}}{dt} \tag{16}\] drives the magnetization, leading to the linearized LLG equation \[-i\omega M_{x}+\mu_{0}\gamma M_{y}H_{0}=i(\alpha_{G}+\alpha_{R}) \omega M_{y},\] \[i\omega M_{y}+\mu_{0}\gamma H_{0}M_{x}=-\mu_{0}\gamma M_{0}M_{x} +i\alpha_{G}\omega M_{x}, \tag{17}\] where the damping coefficient contributed by the radiation reads \[\alpha_{R}=\mu_{0}\gamma M_{0}d_{F}/c. \tag{18}\] It is negligibly small: for the YIG film of thickness \(2d_{F}=120\) nm and \(\mu_{0}M_{0}=0.2\) T [67; 68], \(\alpha_{R}\approx 7.3\times 10^{-6}\ll\alpha_{G}\sim 5\times 10^{-4}\). However, the radiation damping is enhanced with thicker films. We are interested in the field near the ferromagnet with a distance \(\sim\lambda\). In ferromagnetic insulators, \(\omega\sim 2\pi\times 4\) GHz [14], and \(\lambda\sim 100\) nm for conventional superconductors, so \(k\lambda\sim 10^{-5}\ll 1\). When \(kx\to 0\), we have \[E_{z}(x)=\begin{cases}-i\mu_{0}\omega M_{y}x,&-d_{F}<x<d_{F}\\ -i\mu_{0}\omega M_{y}d_{F},&x>d_{F}\\ i\mu_{0}\omega M_{y}d_{F},&x<-d_{F}\end{cases}, \tag{19}\] as plotted in Fig. 1(a) for a snapshot. 
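As a quick numerical check of the radiation damping in Eq. (18) — a minimal sketch assuming the YIG parameters quoted above (\(2d_{F}=120\) nm, \(\mu_{0}M_{0}=0.2\) T) and the free-electron value of the gyromagnetic ratio, \(\gamma\approx 1.76\times 10^{11}\) rad s\(^{-1}\) T\(^{-1}\):

```python
# Eq. (18): alpha_R = mu_0 * gamma * M_0 * d_F / c.
# gamma is assumed here to be the free-electron gyromagnetic ratio;
# the slightly larger value quoted in the text (7.3e-6) suggests a
# marginally different gamma for YIG.
gamma = 1.760859e11   # rad s^-1 T^-1
c = 2.99792458e8      # m/s
mu0_M0 = 0.2          # T, mu_0 * M_0 for YIG
d_F = 60e-9           # m, half-thickness (2 d_F = 120 nm)

alpha_R = gamma * mu0_M0 * d_F / c
print(f"alpha_R = {alpha_R:.2e}")   # ~7e-6, far below alpha_G ~ 5e-4
```

The result, \(\alpha_{R}\approx 7\times 10^{-6}\), agrees with the estimate in the text up to the precise value of \(\gamma\), and is indeed negligible compared with \(\alpha_{G}\sim 5\times 10^{-4}\).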
The magnetic induction \[B_{y}(x)=\begin{cases}\mu_{0}M_{y},&-d_{F}<x<d_{F}\\ 0,&x>d_{F}\\ 0,&x<-d_{F}\end{cases} \tag{20}\] recovers the result of the quasi-static approximation [63] with a vanishing magnetic field \(H_{y}\) outside of the ferromagnet. ### Quasi-static approximation The above analysis implies that when focusing on the near-field limit, we may apply the quasi-static approximation that sets \(\nabla\times\mathbf{H}=0\) in Eq. (2). For the FMR case, \(\mathbf{E}\) is translationally invariant in the \(y\)-\(z\) plane, i.e., \(\partial_{z}E_{x}=0\). Taking the \(y\)-component of the first equation in Eq. (2), the oscillation of \(B_{y}\) only generates \(E_{z}\) parallel to the magnetization: \[-\partial_{x}E_{z}=i\omega\mu_{0}M_{y}. \tag{21}\] Integrating along \(x\) across the ferromagnet yields \[E_{z}(x)=-i\omega\mu_{0}M_{y}(x+d_{F})+E_{z}(x=-d_{F}). \tag{22}\] Thereby, \(E_{z}\) depends linearly on \(x\) inside the ferromagnet. Outside the ferromagnet, \[E_{z}(x)=-2i\omega\mu_{0}M_{y}d_{F}+E_{z}(x=-d_{F}) \tag{23}\] is uniform, which is consistent with the vanishing magnetic field \(H_{y|\text{outside}}=0\) in the quasi-static approximation. By symmetry, \(E_{z}(x=0)=0\), so the electric field is exactly the same as Eq. (19). Figure 3: Electric field radiated from the surface magnetization current at the two surfaces of the magnetic insulator. ## IV S\(|\)FI heterostructure We consider the S\(|\)FI heterostructure composed of a ferromagnetic film of thickness \(2d_{F}\) and a superconductor of thickness \(d_{S}\), as shown in Fig. 4. We demonstrate that the adjacent superconductor strongly modulates the radiated electric field, which explains the absence of the FMR shift in this configuration [20; 55]. ### Full solution Inside the ferromagnet, since \(\nabla\times\mathbf{M}=0\) for uniform \(\mathbf{M}\), Eq. (9) has the solution \(E_{z}(x)=E_{1}e^{ikx}+E_{1}^{\prime}e^{-ikx}\). Inside the superconductor, according to Eqs.
(1) and (3), the electric field obeys \[\partial_{x}^{2}E_{z}+(\varepsilon_{0}\mu_{0}\omega^{2}-1/\lambda^{2})E_{z}=0, \tag{24}\] which has the solution \(E_{z}(x)=E_{2}e^{ik^{\prime}x}+E_{2}^{\prime}e^{-ik^{\prime}x}\), where \(k^{\prime}=\sqrt{(\omega/c)^{2}-1/\lambda^{2}}\approx i/\lambda\) is purely imaginary with microwave frequencies. For example, with frequency \(\omega\sim 2\pi\times 4\) GHz, \(k=\omega/c\sim 83.8\) m\({}^{-1}\) is much smaller than \(1/\lambda\sim 10^{7}\) m\({}^{-1}\) with London's penetration depth \(\lambda\sim 100\) nm. Therefore, due to the Meissner effect, the low-frequency electromagnetic waves no longer propagate but decay in the superconductor. Out of the heterostructure, the electric fields \(E_{3}e^{ikx}\) and \(E_{4}e^{-ikx}\) are radiated. These radiated electric fields are illustrated in Fig. 4. The amplitudes \(\{E_{1},E_{1}^{\prime},E_{2},E_{2}^{\prime},E_{3},E_{4}\}\) are governed by the boundary conditions, i.e., \(E_{z}\) and \(H_{y}\) are continuous at interfaces. The continuous \(E_{z}\) at interface requests \[E_{1}e^{ikd_{F}}+E_{1}^{\prime}e^{-ikd_{F}}=E_{2}e^{ik^{\prime}d _{F}}+E_{2}^{\prime}e^{-ik^{\prime}d_{F}},\] \[E_{2}e^{ik^{\prime}(d_{F}+d_{S})}+E_{2}^{\prime}e^{-ik^{\prime}(d _{F}+d_{S})}=E_{3}e^{ik(d_{F}+d_{S})},\] \[E_{1}e^{-ikd_{F}}+E_{1}^{\prime}e^{ikd_{F}}=E_{4}e^{ikd_{F}}. \tag{25}\] In the superconductors, \(H_{y}=-1/(i\omega\mu_{0})\partial_{x}E_{z}\), while in the ferromagnet, \(H_{y}=-1/(i\omega\mu_{0})\partial_{x}E_{z}-M_{y}\), so the continuous \(H_{y}\) at interfaces leads to \[k^{\prime}(E_{2}e^{ik^{\prime}d_{F}}-E_{2}^{\prime}e^{-ik^{\prime }d_{F}})=k(E_{1}e^{ikd_{F}}-E_{1}^{\prime}e^{-ikd_{F}})\] \[+\omega\mu_{0}M_{y},\] \[k^{\prime}(E_{2}e^{ik^{\prime}(d_{F}+d_{S})}-E_{2}^{\prime}e^{- ik^{\prime}(d_{F}+d_{S})})=kE_{3}e^{ik(d_{F}+d_{S})},\] \[k(E_{1}e^{-ikd_{F}}-E_{1}^{\prime}e^{ikd_{F}})+\omega\mu_{0}M_{y }=-kE_{4}e^{ikd_{F}}. \tag{26}\] Combining Eqs. 
(25) and (26), we obtain all the amplitudes. In the ferromagnetic insulator, \[E_{z}(-d_{F}<x<d_{F})=\mathcal{R}E_{0}e^{-ik(x-d_{F})}+E_{\text{ single}}(x), \tag{27}\] where the amplitude \(E_{0}=-[\omega\mu_{0}M_{y}/(2k)]\left(e^{2ikd_{F}}-1\right)\), \(E_{\text{single}}(x)\) is the radiated electric field from a single magnetic insulator [Eq. (13)], and \[\mathcal{R}=\frac{e^{ik^{\prime}d_{S}}(k^{2}-k^{\prime 2})+e^{-ik^{\prime}d_{S}} (k^{\prime 2}-k^{2})}{e^{ik^{\prime}d_{S}}(k-k^{\prime})^{2}-e^{-ik^{\prime}d_{S} }(k+k^{\prime})^{2}} \tag{28}\] is the reflection coefficient of the electric field at the superconductor surface. We plot the dependence of \(\mathcal{R}\) on the superconductor thickness \(d_{S}\) in Fig. 5 with different London's penetration depth \(\lambda\) under the frequency \(\omega\sim 2\pi\times 4\) GHz. The reflection coefficient saturates to \(\mathcal{R}\rightarrow-1\) when \(d_{S}>0.1\) nm, but is reduced to 0 when \(d_{S}\rightarrow\) 0, recovering the solution (13) of the single layer case. We conclude that even with a small \(d_{S}\ll\lambda\), since \(|k|=\omega/c\) is much smaller than \(|k^{\prime}|\approx 1/\lambda\) when \(\omega\sim 2\pi\times 4\) GHz, \(\mathcal{R}\rightarrow-1\). This implies the total reflection of the electric fields at the FI\(|\)S interface even with an ultrathin conventional superconductor layer. As shown below, this indicates the absence of FMR shift in all the available experiments with thick superconductors [20; 55]. Figure 4: Radiated electric field of the FI\(|\)S heterostructure. Inside the superconductor, \[E_{z}(d_{F}<x<d_{F}+d_{S})=\frac{2kE_{0}}{e^{ik^{\prime}d_{S}}(k-k^ {\prime})^{2}-e^{-ik^{\prime}d_{S}}(k+k^{\prime})^{2}}\] \[\times\left((k-k^{\prime})e^{-ik^{\prime}(x-d_{F}+d_{S})}-(k+k^{ \prime})e^{ik^{\prime}(x-d_{F}-d_{S})}\right), \tag{29}\] which is indeed very weak since \(|k|\ll|k^{\prime}|\). 
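The two limits discussed above — \(\mathcal{R}\to 0\) for \(d_{S}\to 0\) and \(\mathcal{R}\to-1\) already for nanometer-thin films — can be reproduced directly from Eq. (28). The sketch below assumes \(\omega=2\pi\times 4\) GHz and \(\lambda=100\) nm, as in the text:

```python
import cmath
import math

omega = 2 * math.pi * 4e9          # rad/s
c = 2.99792458e8
lam = 100e-9                       # London penetration depth
k = omega / c                      # microwave wave number, ~83.8 m^-1
kp = cmath.sqrt(k**2 - 1/lam**2)   # k' ~ i/lambda, almost purely imaginary

def R(d_S):
    """Reflection coefficient of Eq. (28) at the superconductor surface."""
    num = cmath.exp(1j*kp*d_S)*(k**2 - kp**2) + cmath.exp(-1j*kp*d_S)*(kp**2 - k**2)
    den = cmath.exp(1j*kp*d_S)*(k - kp)**2 - cmath.exp(-1j*kp*d_S)*(k + kp)**2
    return num / den

print(abs(R(0.0)))     # exactly zero: the single-layer result is recovered
print(abs(R(1e-9) + 1))  # already small for a 1 nm film
print(abs(R(60e-9) + 1)) # essentially total reflection, R -> -1
```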
Out of the heterostructure, \[E_{z}=\begin{cases}\frac{-4kk^{\prime}E_{0}e^{ik(x-d_{F}-d_{S})}}{e^{ik^{ \prime}d_{S}}(k-k^{\prime})^{2}-e^{-ik^{\prime}d_{S}}(k+k^{\prime})^{2}},x>d_{ F}+d_{S}\\ \mathcal{R}E_{0}e^{-ik(x-d_{F})}+E_{\text{single}}(x),\ x<-d_{F}\end{cases}. \tag{30}\] At low frequencies and near the heterostructure, \(kx\to 0\), \(kd_{F}\to 0\), and \(kd_{S}\to 0\), so the electric fields \[E_{z}(x)=\begin{cases}0,&x>d_{F}\\ -i\omega\mu_{0}M_{y}(x-d_{F}),&-d_{F}<x<d_{F}\\ 2i\omega\mu_{0}M_{y}d_{F},&x<-d_{F}\end{cases}, \tag{31}\] which is illustrated in Fig. 1(b) for a snapshot. The electric field vanishes in the superconductor due to the total reflection with a \(\pi\)-phase shift \(\mathcal{R}=-1\) that generates no supercurrent and thereby leads to no modulation on the FMR. ### Quasi-static approximation The full solution clearly shows the absence of electric fields at the superconductor side of the S\(|\)FI heterostructure, which can be well understood within the quasi-static approximation \(\nabla\times\mathbf{H}=0\) or \(\mathbf{J}_{s}\). Assuming \(E_{z}(x=d_{F})=\tilde{E}_{0}\) at the FI\(|\)S interface, according to Eq. (6) the electric field in the adjacent superconductor \[E_{z}(x)=\tilde{E}_{0}\frac{\cosh{((x-d_{S}-d_{F})/\lambda)}}{\cosh{(d_{S}/ \lambda)}} \tag{32}\] drives the supercurrent. For a thin superconducting film of thickness \(O(\lambda)\), we are allowed to take an average of the supercurrent \(J_{s,z}=[J_{s,z}(x=d_{F})+J_{s,z}(x=d_{F}+d_{S})]/2\), and from the first equation of Eq. (3) \[J_{s,z}=\frac{i}{\mu_{0}\omega\lambda^{2}}\tilde{E}_{0}\frac{1+\cosh(d_{S}/ \lambda)}{2\cosh(d_{S}/\lambda)}. \tag{33}\] The supercurrents generate the vector potential (7) and the Oersted magnetic field according to \(H_{y}=-\partial_{x}A_{z}/\mu_{0}\). 
Taking \(k=0\) at low frequencies in the Weyl identity (12), i.e., [10] \[\frac{1}{|\mathbf{r}-\mathbf{r}^{\prime}|}=\int dk^{\prime}_{z}dk^{\prime}_{y }\frac{e^{ik^{\prime}_{z}(z-z^{\prime})+ik^{\prime}_{y}(y-y^{\prime})}e^{- \sqrt{k^{\prime 2}_{z}+k^{\prime 2}_{y}}|x-x^{\prime}|}}{2\pi\sqrt{k^{\prime 2}_{z}+k^{\prime 2}_{y}}}, \tag{34}\] we obtain the Oersted magnetic field generated by the supercurrents \[H_{s,y}(x)=\left\{\begin{array}{ll}d_{S}J_{s,z}/2,&x>d_{F}+d_{S}\\ -d_{S}J_{s,z}/2,&x<d_{F}\end{array}\right.. \tag{35}\] However, a constant \(H_{s,y}\) independent of \(x\) should vanish _out of the heterostructure_ within the quasi-static approximation since a constant magnetic field renders the radiated electric field divergent, which requires \(J_{s,z}=0\) when \(d_{S}\neq 0\) and \(E_{z}(x>d_{F})=0\). Since the electric field is continuous at interfaces, \(E_{z}(x=d_{F})=\tilde{E}_{0}=0\) and, according to Eq. (21), \(E_{z}(x=-d_{F})=2id_{F}\omega\mu_{0}M_{y}\). These simple calculations thereby capture precisely the key physics of the full solution (31). ## V S\(|\)FI\(|\)S heterostructure Further, we consider the S\(|\)FI\(|\)S heterostructure, illustrated in Fig. 2, composed of the ferromagnetic insulator of thickness \(2d_{F}\) and two adjacent superconductor films of thicknesses \(d_{S}\) and \(d^{\prime}_{S}\), respectively. In comparison to the S\(|\)FI bilayer, the distribution of the electric field in the S\(|\)FI\(|\)S heterostructure changes substantially due to its back-and-forth reflection between the superconductors, as addressed in this section. ### Full solution Similar to the S\(|\)FI heterostructure, inside the ferromagnet, \(E_{z}(x)=E_{1}e^{ikx}+E^{\prime}_{1}e^{-ikx}\); in the superconductor "1", \(E_{z}(x)=E_{2}e^{ik^{\prime}x}+E^{\prime}_{2}e^{-ik^{\prime}x}\); and in the superconductor "2", \(E_{z}(x)=E_{3}e^{ik^{\prime}x}+E^{\prime}_{3}e^{-ik^{\prime}x}\).
Out of the heterostructure, the electric fields \(E_{4}e^{ikx}\) and \(E_{5}e^{-ikx}\) are radiated. These electric fields are illustrated in Fig. 6. The amplitudes \(\{E_{1},E^{\prime}_{1},E_{2},E^{\prime}_{2},E_{3},E^{\prime}_{3},E_{4},E_{5}\}\) are governed by the boundary conditions. The continuous \(E_{z}\) at interfaces requests \[E_{1}e^{ikd_{F}}+E_{1}^{\prime}e^{-ikd_{F}}=E_{2}e^{ik^{\prime}d_{F} }+E_{2}^{\prime}e^{-ik^{\prime}d_{F}},\] \[E_{1}e^{-ikd_{F}}+E_{1}^{\prime}e^{ikd_{F}}=E_{3}e^{-ik^{\prime}d_ {F}}+E_{3}^{\prime}e^{ik^{\prime}d_{F}},\] \[E_{2}e^{ik^{\prime}(d_{F}+d_{S})}+E_{2}^{\prime}e^{-ik^{\prime}(d _{F}+d_{S})}=E_{4}e^{ik(d_{F}+d_{S})},\] \[E_{3}e^{-ik^{\prime}(d_{F}+d_{S}^{\prime})}+E_{3}^{\prime}e^{ik^{ \prime}(d_{F}+d_{S}^{\prime})}=E_{5}e^{ik(d_{F}+d_{S}^{\prime})}, \tag{36}\] and the continuous \(H_{y}\) at interfaces leads to \[k^{\prime}(E_{2}e^{ik^{\prime}d_{F}}-E_{2}^{\prime}e^{-ik^{\prime }d_{F}})=k(E_{1}e^{ikd_{F}}-E_{1}^{\prime}e^{-ikd_{F}})\] \[+\omega\mu_{0}M_{y},\] \[k^{\prime}(E_{3}e^{-ik^{\prime}d_{F}}-E_{3}^{\prime}e^{ik^{ \prime}d_{F}})=k(E_{1}e^{-ikd_{F}}-E_{1}^{\prime}e^{ikd_{F}})\] \[+\omega\mu_{0}M_{y},\] \[k^{\prime}(E_{2}e^{ik^{\prime}(d_{F}+d_{S})}-E_{2}^{\prime}e^{- ik^{\prime}(d_{F}+d_{S})})=kE_{4}e^{ik(d_{F}+d_{S})},\] \[k^{\prime}(E_{3}e^{-ik^{\prime}(d_{F}+d_{S}^{\prime})}-E_{3}^{ \prime}e^{ik^{\prime}(d_{F}+d_{S}^{\prime})})=-kE_{5}e^{ik(d_{F}+d_{S}^{ \prime})}. \tag{37}\] Combining Eqs. (36) and (37), we obtain the electric-field distribution. In particular, when \(d_{S}=d_{S}^{\prime}\), in the ferromagnetic film, \[E_{z}(|x|<d_{F})=\frac{-\omega\mu_{0}M_{y}\sinh{(ikx)}}{k\cosh{(ikd_{F})}-k^{ \prime}f(u)\sinh{(ikd_{F})}}, \tag{38}\] where \(u=-[(k+k^{\prime})/(k-k^{\prime})]\exp(-2ik^{\prime}d_{S})\) and \[f(u)=\frac{u-1}{u+1}=\frac{k^{\prime}\sinh{(ik^{\prime}d_{S})}-k\cosh{(ik^{ \prime}d_{S})}}{k\sinh{(ik^{\prime}d_{S})}-k^{\prime}\cosh{(ik^{\prime}d_{S})} }. 
\tag{39}\] In the superconductor "1", \[E_{z}(d_{F}<x<d_{F}+d_{S})\] \[=\frac{-\omega\mu_{0}M_{y}(ue^{ik^{\prime}(x-d_{F})}+e^{-ik^{ \prime}(x-d_{F})})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}, \tag{40}\] and in the superconductor "2", \[E_{z}(-d_{F}-d_{S}<x<-d_{F})\] \[=\frac{\omega\mu_{0}M_{y}(ue^{-ik^{\prime}(x+d_{F})}+e^{ik^{ \prime}(x+d_{F})})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}. \tag{41}\] They both exist, and \(E_{z}(x=-d_{F})\) and \(E_{z}(x=d_{F})\) are opposite. This feature may be understood from the magnetic dipole radiation: since the magnetization current \(\mathbf{J}_{M}\) (15) is opposite at the two surfaces \(x=\pm d_{F}\) of the magnetic film, the amplitudes of the electric fields radiated by the two surfaces \(x=\pm d_{F}\) are of opposite sign, which launches to superconductors and drives the opposite supercurrents in them. Out of the heterostructure, \[E_{z}(x>d_{F}+d_{S})\] \[=\frac{-\omega\mu_{0}M_{y}(ue^{ik^{\prime}d_{S}}+e^{-ik^{\prime}d _{S}})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}e^{ikx},\] \[E_{z}(x<-d_{F}-d_{S})\] \[=\frac{\omega\mu_{0}M_{y}(ue^{ik^{\prime}d_{S}}+e^{-ik^{\prime}d _{S}})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}e^{-ikx}, \tag{42}\] which, when far away from the heterostructure, is reduced to a simpler form \[E_{z}(x)\approx \frac{i\omega\mu_{0}d_{F}\lambda M_{y}}{\lambda\cosh{(d_{S}/ \lambda)}+d_{F}\sinh{(d_{S}/\lambda)}}\] \[\times\begin{cases}-e^{ikx},&x\gg d_{F}+d_{S}\\ e^{-ikx},&x\ll-(d_{F}+d_{S})\end{cases}. \tag{43}\] The radiation out of the heterostructure is then completely suppressed when \(d_{S}\gg\lambda\). We refer to Appendix A for the solution of asymmetric configuration. We illustrate in Fig. 7 the distribution of the electric fields \(\text{Re}(E_{z}/(i\omega\mu_{0}M_{y}d_{F}))\) at \(T=0.5T_{c}=5.5\) K in the symmetric \(d_{S}^{\prime}=d_{S}=60\) nm and asymmetric \(d_{S}^{\prime}=2d_{S}=120\) nm S\(|\)FI\(|\)S heterostructure, respectively, in the near-field limit. 
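The antisymmetry of the field in the symmetric heterostructure, visible in Fig. 7(a), follows directly from Eq. (38), which is odd in \(x\) through \(\sinh(ikx)\). A minimal numerical sketch (assuming \(\omega=2\pi\times 4\) GHz, \(\lambda=87.8\) nm, \(d_{F}=d_{S}=60\) nm, and normalizing \(\mu_{0}\omega M_{y}=1\)):

```python
import cmath
import math

omega = 2 * math.pi * 4e9
c = 2.99792458e8
lam, d_F, d_S = 87.8e-9, 60e-9, 60e-9
k = omega / c
kp = cmath.sqrt(k**2 - 1/lam**2)            # k' ~ i/lambda

u = -((k + kp) / (k - kp)) * cmath.exp(-2j*kp*d_S)
f = (u - 1) / (u + 1)                        # Eq. (39)

def Ez(x, mu0_omega_My=1.0):
    """Electric field inside the ferromagnet, Eq. (38), normalized units."""
    return -mu0_omega_My * cmath.sinh(1j*k*x) / (
        k*cmath.cosh(1j*k*d_F) - kp*f*cmath.sinh(1j*k*d_F))

# E_z is odd in x: the fields at the two FI|S interfaces are opposite,
# consistent with the opposite supercurrents sketched in Fig. 2.
print(Ez(d_F), Ez(-d_F))
```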
For NbN, \(T_{c}=11\) K, the London penetration depth \(\lambda(T=0)=85\) nm [69] and \(\lambda(T=0.5T_{c})=87.8\) nm. The fields are opposite at the two superconductors in the symmetric heterostructure but are skewed when \(d_{S}\neq d_{S}^{\prime}\). These fields carry energy that is radiated into the far zone [65]. When the superconductors are sufficiently thick, \(\{d_{S},d_{S}^{\prime}\}\gg\lambda\), these electric fields are confined between them, which corresponds to an excellent small-sized waveguide [42]. ### Ultrastrong interaction between Kittel magnon and Cooper-pair supercurrent Above, we showed that the dynamics of the magnetization \(\mathbf{M}\) generates \(H_{y}^{r}\) via the backaction of the superconductors, which, in turn, drives \(\mathbf{M}\) in the ferromagnet, imposing a self-consistent problem that is solved by combining the Landau-Lifshitz and Maxwell equations. In other words, the precession of the magnetization radiates the electric field that drives the supercurrent in the superconductor by microscopically generating the center-of-mass momentum of the Cooper pairs. Such a collective motion of Cooper pairs, i.e., the supercurrent, in turn generates the Oersted magnetic field that affects the dynamics of the magnetization, i.e., shifts its FMR frequency. Using Eq. (38) and \(B_{y}=-\partial_{x}E_{z}/(i\omega)\), we find the radiated magnetic field inside the ferromagnetic insulator of the symmetric S\(|\)FI\(|\)S heterostructure \[H_{y}^{r}(|x|<d_{F})=\frac{M_{y}k\cosh{(ikx)}}{k\cosh{(ikd_{F})}-k^{\prime}f(u) \sinh{(ikd_{F})}}-M_{y}, \tag{44}\] which drives the precession of the magnetization. In terms of the (linearized) LLG equation (1), we arrive at \[-i\omega M_{x}+\mu_{0}\gamma M_{y}H_{0} =\mu_{0}\gamma M_{0}H_{y}^{r}+i\alpha_{G}\omega M_{y},\] \[\mu_{0}\gamma H_{0}M_{x}+i\omega M_{y} =-\mu_{0}\gamma M_{0}M_{x} +i\alpha_{G}\omega M_{x}.
\tag{45}\] We see that the real part of the radiated magnetic field (44) is in phase with \(M_{y}\), which provides a field-like torque for the magnetization. Retaining the leading order in \(k\), the homogeneous \[\text{Re}(H_{y}^{r})=-\frac{d_{F}\tanh(d_{S}/\lambda)}{\lambda+d_{F}\tanh(d_{S}/ \lambda)}M_{y} \tag{46}\] renormalizes the FMR frequency to be \[\omega_{\text{K}}=\mu_{0}\gamma\sqrt{(H_{0}+M_{0})\left(H_{0}+\frac{d_{F}\tanh(d _{S}/\lambda)}{\lambda+d_{F}\tanh(d_{S}/\lambda)}M_{0}\right)}, \tag{47}\] which differs from the bare Kittel frequency \(\tilde{\omega}_{\text{K}}=\mu_{0}\gamma\sqrt{H_{0}(H_{0}+M_{0})}\) [58]. When \(d_{S}\gg\lambda\), the solution (47) recovers that in Ref. [25], where an ultrastrong coupling between magnons and microwave photons is predicted in a magnetic insulator sandwiched by two superconductors of _infinite_ thickness. On the other hand, the imaginary part of the radiated magnetic field is out of phase with \(M_{y}\), thereby contributing a damping-like torque. Retaining the leading order in \(k\), \[\text{Im}(H_{y})\approx\frac{M_{y}kd_{F}}{\cosh^{2}\left(d_{S}/\lambda\right) }\left(1+\frac{d_{F}\tanh\left(d_{S}/\lambda\right)}{\lambda}\right)^{-2}\] contributes a damping coefficient \[\alpha_{R}=\frac{\mu_{0}\gamma M_{0}d_{F}}{c\cosh^{2}\left(d_{S}/\lambda\right) }\left(1+\frac{d_{F}\tanh\left(d_{S}/\lambda\right)}{\lambda}\right)^{-2}.\] In comparison to a single layer of magnetic insulator (Sec. III.1), the radiation from the magnetization is suppressed when shielded by two superconductors, and the radiation damping is expected to be reduced. With \(d_{S}=d_{F}=60\) nm, \(\lambda\sim 85\) nm, and \(\omega\sim 2\pi\times 4\) GHz, \(\alpha_{R}\approx 2.2\times 10^{-6}\) is indeed smaller than that of a single magnetic insulator (\(7.3\times 10^{-6}\)). When \(d_{S}\gg\lambda\), \(\alpha_{R}\to 0\) since no field is radiated out of the S\(|\)FI\(|\)S heterostructure.
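Evaluating Eq. (47) with the parameters used later in the text (\(\mu_{0}H_{0}=0.05\) T, \(\mu_{0}M_{0}=0.2\) T, \(d_{F}=d_{S}=60\) nm, \(\lambda=87.8\) nm) reproduces a shift of roughly half the bare Kittel frequency. This is a sketch assuming the free-electron gyromagnetic ratio, so the numbers differ slightly from those quoted in the text:

```python
import math

gamma = 1.760859e11          # rad s^-1 T^-1 (assumed free-electron value)
mu0_H0, mu0_M0 = 0.05, 0.2   # T
d_F, d_S, lam = 60e-9, 60e-9, 87.8e-9

# Bare Kittel frequency: mu0*gamma*sqrt(H0*(H0+M0))
w_bare = gamma * math.sqrt(mu0_H0 * (mu0_H0 + mu0_M0))

# Eq. (47): the superconductors add frac*M0 to the in-plane stiffness field
frac = d_F * math.tanh(d_S/lam) / (lam + d_F * math.tanh(d_S/lam))
w_K = gamma * math.sqrt((mu0_H0 + mu0_M0) * (mu0_H0 + frac * mu0_M0))

# shift ~ 2*pi*1.5 GHz, i.e. roughly half the bare frequency
print(f"bare: {w_bare/2/math.pi/1e9:.2f} GHz, shifted: {w_K/2/math.pi/1e9:.2f} GHz")
```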
The general solutions of \(\omega_{\text{K}}\) and \(\alpha_{R}\) in the asymmetric S\(|\)FI\(|\)S heterostructure are calculated in Appendix A. In Appendix B, we calculate them within the quasi-static approximation. To show the FMR shift, we assume an oscillating magnetic field \(\tilde{H}e^{-i\omega_{0}t}\hat{\mathbf{y}}\) of frequency \(\omega_{0}\) applied along the \(\hat{\mathbf{y}}\)-direction (the associated microwave electric field is along the normal \(\hat{\mathbf{x}}\)-direction). The wavelength of this microwave is much larger than the thickness of the heterostructure, so it can be treated as uniform across the heterostructure thickness. It can penetrate the superconductor easily when \(\{d_{S},d_{S}^{\prime}\}\sim\lambda\). With the wave vector (along \(\hat{\mathbf{z}}\)) parallel to the film, it only excites \(\mathbf{M}\) in the ferromagnet but does not drive the superconductor. Including the external pump field \(\tilde{H}e^{-i\omega_{0}t}\hat{\mathbf{y}}\) into the LLG equation (45), we find, when \(\alpha_{G}\ll 1\), \[M_{y} =\frac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\omega_{\text{K}} ^{2}-\omega_{0}^{2}-i\Gamma}\tilde{H},\] \[M_{x} =-iM_{y}\left[\frac{\omega_{0}}{\mu_{0}\gamma(H_{0}+M_{0})}+\frac {i\alpha_{G}\omega_{0}^{2}}{(\mu_{0}\gamma(H_{0}+M_{0}))^{2}}\right], \tag{48}\] where \[\Gamma=\frac{\alpha_{G}\omega_{0}^{3}}{\mu_{0}\gamma(H_{0}+M_{0})}+\mu_{0} \gamma(H_{0}+M_{0})(\alpha_{G}+\alpha_{R})\omega_{0}. \tag{49}\] From Eq. (40), we find the average electric field \(E_{z}=[E_{z}(x=d_{F})+E_{z}(x=d_{F}+d_{S})]/2\) in the thin superconductor "1" as \[E_{z}^{(1)}= -\frac{\tilde{H}}{2}\frac{\omega\mu_{0}(u+1+ue^{ik^{\prime}d_{S}} +e^{-ik^{\prime}d_{S}})}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}\] \[\times \frac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\omega_{\text{K}} ^{2}-\omega_{0}^{2}-i\Gamma}. \tag{50}\] From Eq.
(3), the corresponding average supercurrent inside the superconductor is \[J_{z}^{(1)}= -\frac{i\tilde{H}}{2\lambda^{2}}\frac{u+1+ue^{ik^{\prime}d_{S}}+ e^{-ik^{\prime}d_{S}}}{k(1+u)\coth(ikd_{F})-k^{\prime}(u-1)}\] \[\times \frac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\omega_{\text{K}} ^{2}-\omega_{0}^{2}-i\Gamma}. \tag{51}\] Figure 7: Distribution of electric fields in the symmetric \(d_{S}=d_{S}^{\prime}=60\) nm [(a)] and asymmetric \(d_{S}^{\prime}=2d_{S}=120\) nm [(b)] S\(|\)FI\(|\)S heterostructure. The thickness of the ferromagnetic film \(2d_{F}=120\) nm and London’s penetration depth \(\lambda(T=0.5T_{c})=87.8\) nm. We illustrate the numerical results considering an yttrium iron garnet (YIG) film of thickness \(2d_{F}=120\) nm sandwiched by two NbN superconductors of thickness \(d_{S}=d_{S}^{\prime}=60\) nm. Thin insulating EuS magnetic films [70; 71] are also possible candidates to test our prediction. For YIG, \(\mu_{0}M_{0}=0.2\) T and \(\alpha_{G}=5\times 10^{-4}\) [67; 68]. We use \(\lambda(T=0.5T_{c})=87.8\) nm for NbN [69]. We take the bias field \(\mu_{0}H_{0}=0.05\) T and the excitation field \(\mu_{0}\tilde{H}=0.01\) mT. Figure 8 shows the radiated electric field in (one of) the superconductors and the excited amplitudes of \(\mathbf{M}\) as a function of the excitation frequency \(\omega_{0}\). The frequency shift is \(2\pi\times 1.6\) GHz, comparable to half of the bare FMR frequency \(\tilde{\omega}_{\mathrm{K}}=2\pi\times 3.2\) GHz, corresponding to a decrease of the resonant magnetic field as large as 55 mT. This demonstrates the potential to achieve an ultrastrong interaction between magnons and the Cooper-pair supercurrent even with magnetic insulators. Before we address the temperature dependence of the frequency shift, we first show that the normal current mainly provides additional damping to the FMR with a tiny frequency shift even when \(T\to T_{c}\).
We estimate the contribution of the normal current via the two-fluid model with the conductivity at low frequencies [66] \[\tilde{\sigma}(\omega)\approx\frac{\rho_{n}e^{2}\tau}{m_{e}}+i\frac{\rho_{s}e^ {2}}{m_{e}}\frac{1}{\omega}=\sigma_{n}+i\frac{1}{\omega\mu_{0}\lambda^{2}}. \tag{52}\] where \(\tau\) is the relaxation time of electrons and \(\rho_{n}\) (\(\rho_{s}\)) is the normal fluid (superfluid) density. \(\rho_{n}\) equals to the electron density \(n_{e}\) when \(T>T_{c}\). Incorporating the conductivity (52) into Maxwell's equation, the radiated magnetic field contributed by both the normal and supercurrents in the symmetric S\(|\)F\(|\)S heterostructure (to the leading order of \(k\)) reads \[\tilde{H}_{y}=M_{y}\frac{i\tilde{k}d_{F}(\tilde{k}\tanh{(i\tilde{k}d_{S})}-k)}{ \tanh{(i\tilde{k}d_{S})}(k-i\tilde{k}^{2}d_{F})+\tilde{k}(ikd_{F}-1)}, \tag{53}\] where \(\tilde{k}^{2}=i\omega\mu_{0}\sigma_{n}-1/\lambda^{2}\), with which we find the FMR frequency and the additional damping coefficient \[\omega_{\mathrm{K}} =\mu_{0}\gamma\sqrt{H_{0}+M_{0}}\sqrt{H_{0}-M_{0}\operatorname{ Re}(\tilde{H}_{y})/M_{y}},\] \[\tilde{\alpha} =\mu_{0}\gamma M_{0}\operatorname{Im}(\tilde{H}_{y})/(\omega_{ \mathrm{K}}M_{y}). \tag{54}\] When \(T\to T_{c}\), with \(\sigma_{n}\sim 1.1\times 10^{6}\) (\(\Omega\cdot\)m)\({}^{-1}\) for NbN [72], \(d_{S}=d_{F}=60\) nm, and \(\omega\sim 2\pi\times 4\) GHz, we find the frequency shift \(\delta\omega=\omega_{\mathrm{K}}-\tilde{\omega}_{\mathrm{K}}\sim 10^{-5}\) GHz is negligibly small, while the additional damping is considerably large \(\tilde{\alpha}\sim 2\times 10^{-4}\) for YIG. Since the normal current can be disregarded in the frequency shift, we calculate the temperature dependence of the FMR frequency according to Eq. (4), as plotted in Fig. 8(c) with the same parameters used in Fig. 8(a) and (b). 
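Since the normal current can be neglected for the shift, the temperature dependence enters only through \(\lambda(T)\) of Eq. (4). Below is a sketch combining Eqs. (4) and (47), with \(\lambda_{0}=85\) nm and \(T_{c}=11\) K for NbN and the gyromagnetic ratio again assumed to be the free-electron value:

```python
import math

gamma = 1.760859e11          # rad s^-1 T^-1 (assumed value)
mu0_H0, mu0_M0 = 0.05, 0.2   # T
d_F, d_S = 60e-9, 60e-9
lam0, Tc = 85e-9, 11.0       # NbN: lambda_0 = 85 nm, T_c = 11 K

def lam(T):
    """London penetration depth, Eq. (4)."""
    return lam0 / math.sqrt(1 - (T/Tc)**4)

def w_K(T):
    """FMR frequency of Eq. (47) with the temperature-dependent lambda."""
    l = lam(T)
    frac = d_F*math.tanh(d_S/l) / (l + d_F*math.tanh(d_S/l))
    return gamma*math.sqrt((mu0_H0 + mu0_M0)*(mu0_H0 + frac*mu0_M0))

w_bare = gamma*math.sqrt(mu0_H0*(mu0_H0 + mu0_M0))

print(f"lambda(0.5 Tc) = {lam(0.5*Tc)*1e9:.1f} nm")   # ~87.8 nm as quoted
# Shift is maximal at T -> 0 and vanishes as T -> Tc
print(w_K(0.0) > w_K(0.99*Tc) > w_bare)   # True
```

Consistent with Fig. 8(c), the shift is maximal at \(T\to 0\) and disappears as \(T\to T_{c}\), where the superconductivity is depleted.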
When \(T\to 0\), the resonance frequency reaches its maximum, while when \(T\to T_{c}\), the resonance frequency recovers the bare Kittel frequency since the superconductivity is depleted. We compare the full solution (black line) and the quasi-static solution (dashed line) and find that the quasi-static approximation is excellent in all temperature regimes when \(d_{S}\lesssim\lambda\). Figure 8: FMR spectra with the excitation field \(\mu_{0}\tilde{H}=0.01\) mT. In (a) and (b), we use the temperature \(T=0.5T_{c}=5.5\) K. (a) plots the excited electric field amplitude in (one of) the superconductors in the symmetric S\(|\)FI\(|\)S heterostructure. The amplitude of the resonance electric field \(E_{z}\sim 14\) V/m. (b) shows the excited amplitudes of the magnetization \(M_{y}\) with and without two adjacent superconductors. \(M_{x}\approx 0.6M_{y}\) and \(0.5M_{y}\) with and without the superconductors, respectively. The frequency shift is as large as \(2\pi\times 1.6\) GHz \(\sim\tilde{\omega}_{\mathrm{K}}/2\). (c) shows the temperature dependence of the FMR frequency \(\omega_{\mathrm{K}}\) obtained from the full calculation (black line) and the quasi-static approximation (dashed line). The bare FMR frequency \(\tilde{\omega}_{\mathrm{K}}=2\pi\times 3.2\) GHz. ## VI Conclusion and discussion Magnetic insulators are ideal candidates for long-range spin transport [1; 2; 5; 6; 8; 9; 10], strong coupling between magnons and microwaves [38], and quantum information processing [33; 34; 36; 37; 39], and gating them with superconductors may bring new control dimensions. In comparison to metallic magnets, the mutual proximity effect may differ between magnetic insulators and superconductors, which may be helpful to distinguish different competitive mechanisms [30] in future studies. Our model system differs from _metallic_ ferromagnets since no electric currents flow in the insulator; such currents, if large, may affect the field distribution via radiation.
The formulation of the response in the superconductor by London's equation is phenomenological but nevertheless captures the key physics of the interplay between FMR in the magnetic insulator and the supercurrent in the superconductor. Some interesting effects, such as the role of impurities and the finite correlation length of Cooper pairs, may not be precisely captured by the classical London model, however. Our work can be a starting point for a future extension to a fully microscopic model in terms of, e.g., the Usadel equation [73].

In conclusion, we analyze the interaction between the Kittel magnons in an insulating magnetic film and the Cooper-pair supercurrent in superconductors, mediated by the electric fields radiated by the magnetization dynamics. By highlighting the role of the total reflection of the electric fields at the ferromagnet-superconductor interface, solved beyond the quasi-static approximation, we provide a comprehensive understanding of the absence of the FMR shift in the FI\(|\)S heterostructure and predict its existence in the S\(|\)FI\(|\)S heterostructure with Meissner screening. The coupling between magnons and the Cooper-pair supercurrent is ultrastrong, with the frequency shift reaching tens of percent of the bare FMR frequency, which may bring superior advantages for information processing in on-chip magnonics and quantum magnonics.

###### Acknowledgements.
We gratefully acknowledge Prof. Guang Yang and Prof. Lihui Bai for many inspiring discussions. This work is financially supported by the National Natural Science Foundation of China under Grant No. 12374109, and the startup grant of Huazhong University of Science and Technology (Grants No. 3004012185 and 3004012198).

## Appendix A General solution of \(E_{z}\) in the S\(|\)FI\(|\)S heterostructure

Here we list the general solution of \(E_{z}(x)\) in the S\(|\)FI\(|\)S heterostructure when \(d_{S}\neq d_{S}^{\prime}\) in Fig. 6.
Inside the ferromagnet, \[E_{z}(-d_{F}<x<d_{F})\] \[=\frac{-\omega\mu_{0}M_{y}(Ge^{ikx}+e^{-ikx})}{k(Ge^{ikd_{F}}-e^{ -ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})}, \tag{10}\] where \[G=-\frac{-2k\sinh(ikd_{F})+k^{\prime}(f(u)e^{-ikd_{F}}+f(u^{\prime})e^{ikd_{F }})}{-2k\sinh(ikd_{F})+k^{\prime}(f(u)e^{ikd_{F}}+f(u^{\prime})e^{-ikd_{F}})}, \tag{11}\] and \(u^{\prime}=-[(k+k^{\prime})/(k-k^{\prime})]\exp(-2ik^{\prime}d_{S}^{\prime})\). In the superconductor "1", \[E_{z}(d_{F}<x<d_{F}+d_{S})=\frac{ue^{ik^{\prime}(x-d_{F})}+e^{- ik^{\prime}(x-d_{F})}}{1+u}\] \[\times\frac{-\omega\mu_{0}M_{y}(Ge^{ikd_{F}}+e^{-ikd_{F}})}{k( Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})}. \tag{12}\] In the superconductor "2", \[E_{z}(-d_{F}-d_{S}^{\prime}<x<-d_{F})=\frac{e^{ik^{\prime}(x+d_{ F})}+u^{\prime}e^{-ik^{\prime}(x+d_{F})}}{1+u^{\prime}}\] \[\times\frac{-\omega\mu_{0}M_{y}(Ge^{-ikd_{F}}+e^{ikd_{F}})}{k( Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})}. \tag{13}\] Out of the heterostructure, \[E_{z}(x>d_{F}+d_{S})=\frac{ue^{ik^{\prime}d_{S}}+e^{-ik^{\prime} d_{S}}}{1+u}\] \[\times\frac{-\omega\mu_{0}M_{y}(Ge^{ikd_{F}}+e^{-ikd_{F}})e^{ik( x-d_{F}-d_{S})}}{k(Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})},\] \[E_{z}(x<-d_{F}-d_{S}^{\prime})=\frac{e^{-ik^{\prime}d_{S}^{\prime }}+u^{\prime}e^{ik^{\prime}d_{S}^{\prime}}}{1+u^{\prime}}\] \[\times\frac{-\omega\mu_{0}M_{y}(Ge^{-ikd_{F}}+e^{ikd_{F}})e^{-ik( x+d_{F}+d_{S})}}{k(Ge^{ikd_{F}}-e^{-ikd_{F}})-k^{\prime}f(u)(Ge^{ikd_{F}}+e^{-ikd_{F}})}. \tag{14}\] The magnetic field follows \(B_{y}=-\partial_{x}E_{z}/(i\omega)\), which inside the magnetic insulator reads \[H_{y}=-\frac{(2M_{y}d_{F}/\lambda)f(u)f(u^{\prime})}{(f(u)+f(u^{\prime}))+2(d_ {F}/\lambda)f(u)f(u^{\prime})}. 
\tag{15}\] Retaining the leading order in \(k\), its real part \[\text{Re}(H_{y}) \approx-2d_{F}M_{y}\tanh(d_{S}/\lambda)\tanh(d_{S}^{\prime}/\lambda)\] \[\times[\lambda(\tanh(d_{S}/\lambda)+\tanh(d_{S}^{\prime}/\lambda))\] \[+2d_{F}\tanh(d_{S}/\lambda)\tanh(d_{S}^{\prime}/\lambda)]^{-1} \tag{16}\] leads to the FMR frequency \[\omega_{\text{K}}=\mu_{0}\gamma\sqrt{H_{0}+M_{0}}\sqrt{H_{0}-M_{0}\,\text{Re}(H_{y})/M_{y}}, \tag{17}\] and its imaginary part \[\mathrm{Im}(H_{y})= 2kd_{F}M_{y}\left(\frac{\tanh^{2}(d_{S}^{\prime}/\lambda)}{\cosh^{2}(d_{S}/\lambda)}+\frac{\tanh^{2}(d_{S}/\lambda)}{\cosh^{2}(d_{S}^{\prime}/\lambda)}\right)\] \[\times\left[\tanh(d_{S}/\lambda)+\tanh(d_{S}^{\prime}/\lambda)\right.\] \[+\left.2d_{F}/\lambda\tanh(d_{S}/\lambda)\tanh(d_{S}^{\prime}/\lambda)\right]^{-2}, \tag{24}\] contributes to the damping coefficient \[\alpha_{R}=\mu_{0}\gamma M_{0}\,\mathrm{Im}(H_{y})/(\omega_{\mathrm{K}}M_{y}). \tag{25}\]

## Appendix B Quasi-static approximation in \(\mathbf{S}|\mathbf{FI}|\mathbf{S}\) heterostructure

As justified above, the quasi-static approximation \(\nabla\times\mathbf{H}=0\) (or \(\nabla\times\mathbf{H}=\mathbf{J}_{s}\) inside the superconductors) is allowed when solving the electric fields _near_ the heterostructure [65]. In the FMR case, the radiated electric field is uniform in the \(y\)-\(z\) plane, so from \(\nabla\times\mathbf{E}=i\omega\mathbf{B}\), the \(x\)-component \(B_{x}=H_{d,x}+M_{x}=0\) generates no electric field outside the magnet. On the other hand, in the linear response regime for the magnetization dynamics, \(M_{z}=M_{0}\), so \(B_{z}=\mu_{0}(H_{0}+M_{z})\) is static; only \(B_{y}=\mu_{0}M_{y}\) in the magnet radiates the time-dependent electric field according to \(-\partial_{x}E_{z}=i\omega\mu_{0}(M_{y}+H_{s,y})\). Integrating along \(x\) across the ferromagnet yields the net electric field at the interfaces obeying \[E_{z}(x=d_{F})-E_{z}(x=-d_{F})=-2d_{F}i\omega\mu_{0}(M_{y}+H_{s,y}).
\tag{26}\] Out of the heterostructure, from the \(z\)-component of \(\nabla\times\mathbf{H}=0\), \(H_{y}|_{\mathrm{outside}}\) is a constant, which can be proved to vanish as in Sec. IV.2. In the quasi-static approximation, the electric field in the superconductors "1" and "2" obeys Eq. (6). From the boundary conditions with continuous \(E_{z}\) and \(H_{y}\) at interfaces and \(H_{y}|_{\mathrm{outside}}=0\), the electric field in the superconductors reads \[E_{z}(d_{F}<x<d_{F}+d_{S})\] \[=E_{z}(x=d_{F})\frac{\cosh((x-d_{S}-d_{F})/\lambda)}{\cosh(d_{S}/ \lambda)},\] \[E_{z}(-d_{F}-d_{S}<x<-d_{F})\] \[=E_{z}(x=-d_{F})\frac{\cosh((x+d_{S}^{\prime}+d_{F})/\lambda)}{ \cosh(d_{S}^{\prime}/\lambda)}, \tag{27}\] which drive the supercurrents in the superconductors adjacent to the magnet. For thin superconducting films of thickness \(O(\lambda)\), we are allowed to take an average of the supercurrents \(\mathbf{J}_{s}^{(1)}=\left[\mathbf{J}_{s}(x=d_{F})+\mathbf{J}_{s}(x=d_{F}+d_{ S})\right]/2\) and \(\mathbf{J}_{s}^{(2)}=\left[\mathbf{J}_{s}(x=-d_{F})+\mathbf{J}_{s}(x=-d_{F}-d_{ S})\right]/2\), i.e., \[J_{s,z}^{(1)} =\frac{i}{\omega\mu_{0}\lambda^{2}}E_{z}(x=d_{F})\frac{1+\cosh( d_{S}/\lambda)}{2\cosh(d_{S}/\lambda)},\] \[J_{s,z}^{(2)} =\frac{i}{\omega\mu_{0}\lambda^{2}}E_{z}(x=-d_{F})\frac{1+\cosh( d_{S}^{\prime}/\lambda)}{2\cosh(d_{S}^{\prime}/\lambda)}. \tag{28}\] The supercurrents generate the vector potential (7) and the Oersted magnetic field according to \(H_{s,y}=-\partial_{x}A_{z}/\mu_{0}\). Using the Weyl identity (34) we obtain \[H_{s,y}(x)=\left\{\begin{array}{ll}\left(d_{S}J_{s,z}^{(1)}+d_{S}^{\prime}J_ {s,z}^{(2)}\right)/2,&x>d_{F}+d_{S}\\ \left(-d_{S}J_{s,z}^{(1)}+d_{S}^{\prime}J_{s,z}^{(2)}\right)/2,&-d_{F}<x<d_{F} \\ \left(-d_{S}J_{s,z}^{(1)}-d_{S}^{\prime}J_{s,z}^{(2)}\right)/2,&x<-d_{F}-d_{ S}^{\prime}\end{array}\right.. 
\tag{29}\] \(H_{s,y}|_{\mathrm{outside}}=0\) requires \[d_{S}J_{s,z}^{(1)}+d_{S}^{\prime}J_{s,z}^{(2)}=0, \tag{30}\] so the Oersted magnetic field inside the ferromagnetic slab is reduced to \[H_{s,y}(-d_{F}<x<d_{F})=d_{S}^{\prime}J_{s,z}^{(2)}=-d_{S}J_{s,z}^{(1)}. \tag{31}\] Thus, when \(d_{S}=d_{S}^{\prime}\), the supercurrents are opposite in the two superconductors. When \(d_{S}^{\prime}\to 0\), \(H_{s,y}\) vanishes in the magnet. Substituting Eqs. (28) and (26) into (30), we obtain the electric field at the surface of the ferromagnetic film: \[E_{z}(x=-d_{F})=i\mu_{0}\omega d_{S}d_{F}(M_{y}+H_{s,y})\frac{\cosh(d_{S}/\lambda)+1}{\cosh(d_{S}/\lambda)}\] \[\times\left(\frac{d_{S}(\cosh(d_{S}/\lambda)+1)}{2\cosh(d_{S}/\lambda)}+\frac{d_{S}^{\prime}(\cosh(d_{S}^{\prime}/\lambda)+1)}{2\cosh(d_{S}^{\prime}/\lambda)}\right)^{-1}. \tag{32}\] Substituting it into Eq. (31), the Oersted magnetic field in the ferromagnetic film reads \[H_{s,y}(-d_{F}<x<d_{F})=-M_{y}\frac{d_{F}d_{S}^{\prime}d_{S}G(d_{S},d_{S}^{\prime},\lambda)}{\lambda^{2}+d_{F}d_{S}^{\prime}d_{S}G(d_{S},d_{S}^{\prime},\lambda)}, \tag{33}\] where \[G(d_{S},d_{S}^{\prime},\lambda)=\frac{(\cosh(d_{S}/\lambda)+1)}{\cosh(d_{S}/\lambda)}\frac{(\cosh(d_{S}^{\prime}/\lambda)+1)}{\cosh(d_{S}^{\prime}/\lambda)}\] \[\times\left(\frac{d_{S}(\cosh(d_{S}/\lambda)+1)}{\cosh(d_{S}/\lambda)}+\frac{d_{S}^{\prime}(\cosh(d_{S}^{\prime}/\lambda)+1)}{\cosh(d_{S}^{\prime}/\lambda)}\right)^{-1}. \tag{34}\] These results capture precisely the key physics of the full solution and are convenient for the calculation of the interaction between Kittel magnon and Cooper-pair supercurrent.
In the linear regime of the magnetization dynamics, substituting \(B_{x}=M_{x}+H_{d,x}=0\) into the Landau-Lifshitz equation \[-i\omega M_{x}+\mu_{0}\gamma M_{y}H_{0}=\mu_{0}\gamma M_{0}H_{s,y},\] \[i\omega M_{y}+\mu_{0}\gamma M_{x}H_{0}=\mu_{0}\gamma M_{0}H_{d,x}, \tag{35}\] we find that \(M_{y}\) is related to \(H_{s,y}\) via \[M_{y}=\frac{\mu_{0}^{2}\gamma^{2}M_{0}(H_{0}+M_{0})}{\mu_{0}^{2}\gamma^{2}H_{0}(H_{0}+M_{0})-\omega^{2}}H_{s,y}. \tag{10}\] When \(d_{S}^{\prime}\to 0\), \(H_{s,y}=0\) according to Eq. (33), and the FMR frequency recovers the Kittel formula \(\tilde{\omega}_{\text{K}}=\mu_{0}\gamma\sqrt{H_{0}(H_{0}+M_{0})}\)[58]. With finite \(d_{S}\) and \(d_{S}^{\prime}\), the FMR frequency is solved self-consistently by combining the two relations between \(M_{y}\) and \(H_{s,y}\) above, leading to the modified FMR frequency \[\omega_{\text{K}}=\mu_{0}\gamma\] \[\times\sqrt{\frac{\lambda^{2}H_{0}(H_{0}+M_{0})+d_{S}d_{S}^{\prime}d_{F}G(d_{S},d_{S}^{\prime},\lambda)(H_{0}+M_{0})^{2}}{d_{S}d_{S}^{\prime}d_{F}G(d_{S},d_{S}^{\prime},\lambda)+\lambda^{2}}}. \tag{11}\] In particular, when \(d_{S}=d_{S}^{\prime}\), \[\omega_{\text{K}} =\mu_{0}\gamma\left(\frac{2\lambda^{2}\cosh{(d_{S}/\lambda)}H_{0}(H_{0}+M_{0})}{d_{S}d_{F}\left(\cosh{(d_{S}/\lambda)}+1\right)+2\lambda^{2}\cosh{(d_{S}/\lambda)}}\right.\] \[\left.+\frac{d_{S}d_{F}(\cosh{(d_{S}/\lambda)}+1)(H_{0}+M_{0})^{2}}{d_{S}d_{F}\left(\cosh{(d_{S}/\lambda)}+1\right)+2\lambda^{2}\cosh{(d_{S}/\lambda)}}\right)^{1/2}. \tag{12}\] Approaching \(T_{\text{c}}\), \(\lambda\rightarrow\infty\) and \(\cosh(d_{S}/\lambda)\to 1\), so the FMR frequency (12) recovers the Kittel formula \(\omega_{\text{K}}\rightarrow\tilde{\omega}_{\text{K}}\); otherwise, for \(T<T_{\text{c}}\), it is shifted.
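These limits can be verified numerically. The sketch below implements the geometric factor \(G(d_{S},d_{S}^{\prime},\lambda)\) and the modified FMR frequency of the quasi-static solution; the field, magnetization, and thickness values are assumed for illustration, not taken from the text:

```python
import math

mu0 = 4 * math.pi * 1e-7           # vacuum permeability [H/m]
gamma = 1.76e11                    # gyromagnetic ratio [rad/(s*T)] (assumed)
H0, M0 = 0.05 / mu0, 0.18 / mu0    # assumed static field and magnetization [A/m]

def G(dS, dSp, lam):
    """Geometric factor G(d_S, d_S', lambda) of the quasi-static solution."""
    a = (math.cosh(dS / lam) + 1) / math.cosh(dS / lam)
    b = (math.cosh(dSp / lam) + 1) / math.cosh(dSp / lam)
    return a * b / (dS * a + dSp * b)

def omega_K(dS, dSp, dF, lam):
    """Modified FMR frequency of the S|FI|S stack in the quasi-static approximation."""
    g = dS * dSp * dF * G(dS, dSp, lam)
    return mu0 * gamma * math.sqrt(
        (lam**2 * H0 * (H0 + M0) + g * (H0 + M0)**2) / (g + lam**2))

# The shift grows as the penetration depth shrinks (i.e., at lower temperature)
print(omega_K(60e-9, 60e-9, 20e-9, 50e-9) / (2 * math.pi * 1e9), "GHz")
```

In the limit \(\lambda\rightarrow\infty\) (or \(d_{S}^{\prime}\rightarrow 0\)) the function returns the bare Kittel frequency, while shrinking \(\lambda\) pushes the frequency up toward \(\mu_{0}\gamma(H_{0}+M_{0})\), mirroring the analytic limits above.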
---

arXiv:2309.11869
Syntactic Variation Across the Grammar: Modelling a Complex Adaptive System
Jonathan Dunn
2023-09-21T08:14:34Z
http://arxiv.org/abs/2309.11869v1

Abstract: While language is a complex adaptive system, most work on syntactic variation observes a few individual constructions in isolation from the rest of the grammar. This means that the grammar, a network which connects thousands of structures at different levels of abstraction, is reduced to a few disconnected variables. This paper quantifies the impact of such reductions by systematically modelling dialectal variation across 49 local populations of English speakers in 16 countries. We perform dialect classification with both an entire grammar as well as with isolated nodes within the grammar in order to characterize the syntactic differences between these dialects. The results show, first, that many individual nodes within the grammar are subject to variation but, in isolation, none perform as well as the grammar as a whole. This indicates that an important part of syntactic variation consists of interactions between different parts of the grammar. Second, the results show that the similarity between dialects depends heavily on the sub-set of the grammar being observed: for example, New Zealand English could be more similar to Australian English in phrasal verbs but at the same time more similar to UK English in dative phrases.
# Syntactic Variation Across the Grammar: Modelling a Complex Adaptive System

###### Abstract

While language is a complex adaptive system, most work on syntactic variation observes a few individual constructions in isolation from the rest of the grammar. This means that the grammar, a network which connects thousands of structures at different levels of abstraction, is reduced to a few disconnected variables. This paper quantifies the impact of such reductions by systematically modelling dialectal variation across 49 local populations of English speakers in 16 countries. We perform dialect classification with both an entire grammar as well as with isolated nodes within the grammar in order to characterize the syntactic differences between these dialects. The results show, first, that many individual nodes within the grammar are subject to variation but, in isolation, none perform as well as the grammar as a whole. This indicates that an important part of syntactic variation consists of interactions between different parts of the grammar. Second, the results show that the similarity between dialects depends heavily on the sub-set of the grammar being observed: for example, New Zealand English could be more similar to Australian English in phrasal verbs but at the same time more similar to UK English in dative phrases.

_Keywords:_ complex adaptive system, computational syntax, computational sociolinguistics, construction grammar, syntactic variation, dialect classification

## 1 Introduction

Within linguistics and cognitive science, language is increasingly viewed as a complex adaptive system (Bybee, 2007; Beckner et al., 2009). For example, usage-based theories of syntax like Construction Grammar (Goldberg, 2006; Langacker, 2008) view the grammar as a network which contains structures at different levels of abstraction. The network structure of the grammar is made up of inheritance relations (mother-child) and similarity relations (sibling-sibling) between constructions.
A _construction_ in this context is a symbolic mapping between form and meaning, where an individual construction is unique either syntactically or semantically. For example, there is an inheritance relationship between the schematic ditransitive construction, with examples like (1a), and idiomatic constructions, with examples like (1b) and (1c). While some of the properties of the ditransitive are inherited by these idiomatic children, they also retain unique and non-compositional meanings. It is this network structure which makes the grammar a complex system. As with any complex system, there are emergent properties of the grammar which cannot be described by looking at individual constructions in isolation. (1a) _write the store a check_ (1b) _give me a hand_ (1c) _give me a break_ The challenge is that most work on syntactic variation does exactly this: observing a few constructions that have been removed from the context of the larger grammar and modelled as discrete and independent variables. The contribution of this paper is to systematically evaluate whether the picture we get of syntactic variation changes depending on which sub-sets of the grammar we inspect. In other words, to what degree does our view of syntactic variation (for example, the similarity between New Zealand English and Australian English) depend on the sub-set of the grammar which we are observing? This paper makes two significant contributions to models of dialectal variation: First, we examine dialect areas at three levels of spatial granularity. This includes _regional dialects_ (like North American English), _national dialects_ (like Canadian English), and _local dialects_ (like Ontario English). This is the first computational study to systematically experiment with different levels of granularity when modelling dialectal variation. Second, we examine different nodes or clusters of constructions within the grammar.
This includes macro-clusters which include hundreds of constructions and smaller micro-clusters which contain dozens of constructions. This is the first computational study to systematically experiment with the distribution of spatial variation across an entire grammar. In the first case, in order to understand syntactic variation we must view the population of speakers itself as a complex network. While most work on syntactic variation considers only a few segments of the population, this paper uses observations from 49 local populations distributed across 16 countries. Speakers of English within one regional dialect are in contact with speakers of other dialects through immigration, short-term travel, media, and digital communication. Thus, the first challenge is to conduct a dialect survey across all representative populations of English speakers in order to understand syntactic variation across the entire population network. In the second case, in order to understand syntactic variation we must view the grammar itself as a complex network so that we can observe variation in its entirety rather than in isolated and disconnected portions of the grammar. In this paper we use Computational Construction Grammar (Dunn, 2017, 2022) to provide an unsupervised network of constructions. For these experiments, this grammar network is learned using independent data from the same register as the geographic corpora used to represent dialects (tweets). Our theoretical question is whether syntactic variation, as captured by systematic grammatical comparisons between dozens of regional populations, is influenced by the sub-set of the grammar network which is used to model variation. We operationalize a model of dialect as a classifier which learns to predict the dialect membership of held-out samples. A dialect classifier is able to handle high-dimensional spaces, important for viewing variation across an entire grammar. 
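This operationalization can be illustrated with a minimal sketch. The toy nearest-centroid classifier below (a stand-in for the actual high-dimensional classifier; the construction names, frequencies, and dialect labels are invented for illustration) assigns a held-out sample to the dialect whose mean construction-frequency vector is most similar:

```python
import math
from collections import defaultdict

def train_centroids(samples):
    """Average the construction-frequency vectors of each dialect's training samples."""
    sums, counts = defaultdict(lambda: defaultdict(float)), defaultdict(int)
    for dialect, freqs in samples:
        counts[dialect] += 1
        for cxn, f in freqs.items():
            sums[dialect][cxn] += f
    return {d: {c: v / counts[d] for c, v in vec.items()} for d, vec in sums.items()}

def cosine(u, v):
    """Cosine similarity between two sparse frequency vectors (dicts)."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(centroids, freqs):
    """Assign a held-out sample to the dialect with the most similar centroid."""
    return max(centroids, key=lambda d: cosine(centroids[d], freqs))

# Toy data: per-sample relative frequencies of two hypothetical constructions
train = [
    ("NZ", {"phrasal_verb": 0.9, "dative": 0.2}),
    ("NZ", {"phrasal_verb": 0.8, "dative": 0.3}),
    ("AU", {"phrasal_verb": 0.3, "dative": 0.9}),
    ("AU", {"phrasal_verb": 0.2, "dative": 0.8}),
]
centroids = train_centroids(train)
print(predict(centroids, {"phrasal_verb": 0.85, "dative": 0.25}))  # prints NZ
```

The point of the sketch is only the shape of the task: samples are frequency vectors over grammar nodes, and prediction accuracy on held-out samples measures how well a given sub-set of the grammar characterizes the dialects.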
And, importantly, the quality of a dialect classifier can be measured using its prediction accuracy on held-out samples. Our goal here is not simply to find sub-sets of the grammar which are in variation but rather to determine how accurate and robust these sub-sets of the grammar are for characterizing dialectal variation as a whole: for example, how accurately does a portion of the grammar characterize the difference between New Zealand and Australian English? To answer this question, we use prediction accuracy, error analysis, and feature pruning methods to determine the quality of dialect models that rely on different nodes within the grammar.

## 2 Related Work

Current knowledge of large-scale linguistic variation (i.e., across many countries) consists of (i) non-syntactic studies of lexical variation, (ii) quantitative corpus-based studies of syntactic variation, and (iii) computational studies of syntactic variation. In the first case, lexical variation is the most approachable type of linguistic variation because it does not require any learning of grammars or representations. Thus, early large-scale corpus-based studies of variation focused on the usage of lexical items (Eisenstein et al., 2010; Mocanu et al., 2013; Eisenstein et al., 2014; Donoso et al., 2017; Rahimi et al., 2017). The challenge for lexical variation is to define the envelope of variation (i.e., discover the set of alternations to avoid topic-specific models). Two main approaches to this are, first, to rely on existing dialect surveys to provide hand-crafted alternations (Grieve et al., 2019) and, second, to use contextual embeddings to develop clusters of senses of words (Lucy and Bamman, 2021). In either case, lexical variation is a simpler phenomenon than syntactic variation because the number of potential alternations for each lexical item is limited.
Recent work on semantic variation (Dunn, 2023b) has expanded this scope by looking at the conceptual rather than the lexical level and using participant-based measures like abstractness ratings and age-of-acquisition to determine what causes a concept to be subject to dialectal variation. Most corpus-based approaches to syntactic variation choose a single construction to examine and then model variation within that construction alone (Buchstaller, 2008; Grieve, 2012; Schilk and Schaub, 2016; Calle-Martin and Romero-Barranco, 2017; Graffmiller and Szmrecsanyi, 2018; Deshors and Gotz, 2020; Schneider et al., 2020; Xu et al., 2022; Rautionaho and Hundt, 2022; Larsson, 2023; Li et al., 2023). While this line of work can reveal small-scale _syntactic_ variation and change, it can never account for _grammatical_ variation. The difference between these two terms is important in this context: a single syntactic feature, like aspectual marking or noun pluralization, may be in variation but we cannot understand the variation without contextualizing it within the entire grammar. In other words, if the grammar is in fact a complex adaptive system, then measuring variation in a single construction is like assuming that the weather in Miami, Florida is independent of both the weather in Orlando and the current conditions of the Atlantic. By analogy, previous work has shown differences of behaviour in small-scale population networks vs large-scale networks (Luitinen et al., 2020). This paper examines the impact of the granularity or size of the network for both the underlying population (e.g., regional vs local dialects) and the grammar itself (e.g., different clusters of constructions within the grammar). The first computational work which viewed syntactic variation from the perspective of a complex adaptive system used 135 grammatical alternations in English, chosen manually to include features which can be extracted using regular expressions (Grieve, 2016). 
The alternations include examples like _anyone_ vs _anybody_ and _hear of_ vs _hear about_. This study used a corpus of letters to the editor from 240 US cities, similar in spatial granularity to the local dialects in this paper. While this early work assumed a starting set of simple alternations, it was followed by work which focused on discovering the set of variants while instead assuming the spatial boundaries of dialect areas (Dunn, 2018a). The advantage of this approach is that it both expands the scope of the study (by including more complex constructional features) while also scaling across languages (Dunn, 2019b). The other difference in these two approaches is that Grieve's early work relies on factor analysis to group together grammatical alternations according to their patterns of variation. In order to provide a measure of predictive accuracy on a held-out test set, by which a better model makes better predictions, more recent computational work has instead taken a classification approach (Dunn, 2019c; Dunn and Wong, 2022). As discussed further in the section on Computational Construction Grammar, constructions are organized into a network structure using similarity measures directly within the grammar. This means that the nodes within which variation occurs are derived independently of the model of dialectal variation. There are two questions which remain given this previous work: First, how much of the predictive accuracy of a dialect model is contained in different nodes within the grammar? One issue with surface-level alternations like _anyone_ vs _anybody_ is that, while in variation, they do not capture more schematic differences between dialects and, overall, do not hold much predictive power given their relative scarcity. Second, previous work has always focused on a given size of spatial granularity, usually at the country or city level. 
This paper uses three levels of granularity to help understand complexity in the underlying population network as well.

## 3 Data

The data used for these experiments is drawn from geo-referenced social media posts (tweets), a source with a long history as an observation of dialectal production (c.f., Eisenstein et al., 2014; Dunn, 2019; Grieve et al., 2019). The corpus is drawn from 16 English-speaking countries, as shown in Table 1. Countries are grouped into larger regional dialects (such as North American vs South Asian English). And each country is divided into potentially many sub-areas using spatial clustering (for example, American English is divided into nine local dialect groups). Language identification is undertaken using two existing models to make the corpus comparable with existing work on mapping digital language use (Dunn, 2020; Dunn and Nijhof, 2022). This corpus thus provides three levels of granularity: 7 regions, 16 countries, and 49 local areas. The main challenge is to control for other sources of variation like topic or register that would lead to a successful classification model but would not be directly connected with the local population being observed. In other words, we need to constrain the production of the local populations to a specific set of topics: if New Zealand tweets are focused on economics and Australian tweets on rugby, the impact of register would be a potential confound. For this reason, we develop a set of 250 common lexical items (c.f., Appendix 1) which are neither purely functional (like _the_ is) nor purely topical (like _Biden_ is). For each location we create sub-corpora which are composed of one unique tweet for each of these keywords. Thus, each location is represented by a number of sub-corpora which each have the same fixed distribution of key lexical items.

\begin{table} \begin{tabular}{|l l l r|r|r|} \hline **Region** & \multicolumn{1}{c}{**Country**} & \multicolumn{1}{c|}{**Areas**} & \multicolumn{1}{c|}{**Corpus Size**} & \multicolumn{1}{c|}{**N. Samples**} \\ \hline Africa, Southern & South Africa & ZA & 2 & 9,205,166 words & 2,299 \\ & Zimbabwe & ZW & 1 & 3,252,685 words & 767 \\ \hline Africa, Sub-Saharan & Kenya & KE & 3 & 5,537,772 words & 1,310 \\ & Nigeria & NG & 3 & 6,139,948 words & 1,492 \\ \hline North America & Canada & CA & 4 & 16,801,386 words & 4,261 \\ & United States & US & 9 & 26,050,840 words & 5,802 \\ \hline Asia, South & Bangladesh & BD & 2 & 6,287,670 words & 1,649 \\ & India & IN & 7 & 28,935,606 words & 7,045 \\ & Pakistan & PK & 2 & 12,765,491 words & 3,271 \\ \hline Asia, Southeast & Indonesia & ID & 1 & 2,392,074 words & 546 \\ & Malaysia & MY & 1 & 8,580,789 words & 2,052 \\ & Philippines & PH & 2 & 9,907,209 words & 2,402 \\ \hline Europe & Ireland & IE & 1 & 13,287,397 words & 3,360 \\ & United Kingdom & UK & 5 & 20,307,094 words & 4,890 \\ \hline Oceania & Australia & AU & 4 & 23,163,447 words & 5,914 \\ & New Zealand & NZ & 2 & 8,113,382 words & 2,047 \\ \hline **Total** & **16 Countries** & **49 Areas** & **200,727,956 words** & **49,107** \\ \hline \end{tabular} \end{table} Table 1: Distribution of Sub-Corpora by Region. Each sample is a unique sub-corpus with the same distribution of keywords, each approximately 3,910 words in length.
This allows us to control for wide variations in topic or content or register, factors that would otherwise potentially contribute non-dialectal sources of variation. Each sub-corpus thus contains 250 tweets, one for each keyword. This creates sub-corpora with an average of approximately 3,910 words. The distribution of sub-corpora (called _samples_) is shown in Table 1. For example, the US is represented by a corpus of 26 million words divided into 5,802 individual samples. Because these samples have the same distribution of lexical items, the prediction accuracy of the dialect classifier should not be influenced by topic-specific patterns in each region. The use of lexically-balanced samples, while important for forcing a focus on dialectal variation, reduces the overall size of the corpus that is available because tweets without the required keywords are discarded. To form the local areas, we organize the data around the nearest airport (within a threshold of a 25km radius) as a proxy for urban areas. We then use the density-based H-DBSCAN algorithm to cluster airports into groups that represent contiguous local areas (Campello et al., 2013, 2015; McInnes and Healy, 2017). The result is a set of local areas within a country, each of which is composed of multiple adjacent urban areas. For example, the nine areas within the United States are shown in Figure 1, where each color represents a different group. Manual adjustments of unclustered or borderline points are then undertaken to produce the final clusters. The complete set of local areas is documented in the supplementary materials. It is important to note that these local areas are entirely spatial in the sense that no linguistic information has been included in their formation. These areas represent local geographic groups, but not a linguistically-defined speech community.
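The lexically-balanced sampling described above can be sketched as follows. This is a toy illustration with three keywords standing in for the paper's 250 lexical items; the tweets and keyword list are invented:

```python
def build_samples(tweets_by_keyword, keywords):
    """Greedily assemble sub-corpora containing exactly one unused tweet per keyword.

    tweets_by_keyword maps each keyword to the tweets (from one location) that
    contain it. A sample is emitted only while every keyword still has an
    unused tweet, so all samples share the same fixed keyword distribution.
    """
    pools = {kw: list(tweets_by_keyword.get(kw, [])) for kw in keywords}
    samples = []
    while all(pools[kw] for kw in keywords):
        samples.append([pools[kw].pop() for kw in keywords])
    return samples

# Toy data: 3 keywords instead of 250; remaining tweets are discarded
keywords = ["weather", "coffee", "train"]
tweets = {
    "weather": ["weather t1", "weather t2"],
    "coffee": ["coffee t1", "coffee t2"],
    "train": ["train t1"],
}
samples = build_samples(tweets, keywords)
print(len(samples), len(samples[0]))  # prints 1 3: 'train' runs out after one sample
```

The sketch also shows why balancing shrinks the corpus: once any keyword's pool is exhausted, the remaining tweets for the other keywords cannot form a complete sample and are discarded.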
Figure 1: Distribution of Local Dialects in North America

Our experiments in this paper operate at three levels of spatial granularity: first, distinguishing between regional dialects, like North American English vs South Asian English; second, distinguishing between country-level dialects, like American English vs Canadian English; third, distinguishing between local dialects within regions, like Midwestern American English vs Central Canadian English. These different levels of granularity allow us to test how well different portions of the grammar are able to model increasingly fine distinctions between dialects.

## 4 Methods

The basic approach in this paper is to use an unsupervised grammar derived from the Construction Grammar paradigm (CxG) as a feature space for dialect classification. A dialect classifier is trained to predict the regional dialect of held-out samples using the frequency of constructions within the sample. This section describes in more detail both the grammar model and the dialect models.

### Computational Construction Grammar

A construction grammar is a network of form-meaning mappings at various levels of schematicity. As discussed above, the grammar is a network with inheritance relationships and similarity relationships between pairs of constructions. CxG is a usage-based approach to syntax which, in practical terms, means that more item-specific constructions are learned first and then generalized into more schematic constructions (Nevens et al., 2022; Doumen et al., 2023). The grammar learning algorithm used in this paper is taken from previous work (Dunn, 2017, 2018, 2019; Dunn and Nini, 2021; Dunn, 2022; Dunn and Tayyar Madabushi, 2021), with the grammar trained from the same register as the dialectal data (tweets). Rather than describe the computational details of this line of work, this section instead analyzes constructions within the grammar as examples of the kinds of features used to model syntactic variation.
The complete grammar together with examples is available in the supplementary material and the codebase for computational CxG is available as a Python package.1 Footnote 1: [https://github.com/jonathandumn/c2xg/tree/v2.0](https://github.com/jonathandumn/c2xg/tree/v2.0) A break-down of the grammar used in the experiments is shown in Figure 2, containing a total of 15,215 individual constructions. Constructions are represented as a series of slot-constraints and the first distinction between constructions involves the types of constraints used. Computational CxG uses three types of slot-fillers: lexical (lex, for item-specific constraints), syntactic (syn, for form-based or local co-occurrence constraints), and semantic (sem, for meaning-based or long-distance co-occurrence constraints). As shown in (2), slots are separated by dashes in the notation used here. Thus, syn in (2) describes the type of constraint and _determined-permitted_ provides its value using two central exemplars of that constraint. Examples or tokens of the construction from a test corpus of tweets are shown in (2a) through (2d). (2) [ syn: _determined-permitted_ - syn: _to_ - syn: _pushover-backtrack_ ] (2a) refused to play (2b) tried to watch (2c) trying to run (2d) continue to drive Thus, the construction in (2) contains three slots, each defined using a syntactic constraint. These constraints are categories learned at the same time that the grammar itself is learned, formulated within an embedding space. An embedding that captures local co-occurrence information is used for formulating syntactic constraints (a continuous bag-of-words fastText model with a window size of 1) while an embedding which instead captures long-distance co-occurrence information is used for formulating semantic constraints (a skip-gram fastText model with a window size of 5). Constraints are then formulated as centroids within that embedding space. 
Thus, the tokens for the construction in (2) are shown in (2a) through (2d). For the first slot-constraint, the name (_determined-permitted_) is derived from the lexical items closest to the centroid of the constraint. The prototype structure of categories is modeled using cosine distance as a measure of how well a particular slot-filler satisfies the constraint. Here the lexical items "reluctant", "ready", "refusal", and "willingness" appear as fillers sufficiently close to the centroid to satisfy the slot-constraint. The construction itself is a complex verb phrase in which the main verb encodes the agent's attempts to carry out the event encoded in the infinitive verb. This can be contrasted semantically with the construction in (3), which has the same form but instead encodes the agent's preparation for carrying out the social action encoded in the infinitive verb.

(3) [ syn: _determined-permitted_ - syn: _to_ - syn: _demonstrate-reiterate_ ]
(3a) reluctant to speak
(3b) ready to exercise
(3c) refusal to recognize
(3d) willingness to govern

An important idea in CxG is that structure is learned gradually, starting with item-specific surface forms and moving to increasingly schematic and productive constructions. This is called _scaffolded learning_ because the grammar has access to its own previous analysis for the purpose of building more complex constructions (Dunn, 2022). In computational CxG this is modelled by learning over iterations with different sets of constraints available. For example, the constructions in (2) and (3) are learned with only access to the syntactic constraints, while the constructions in (4) and (5) have access to lexical and semantic constraints as well. This allows grammars to become more complex while not assuming basic structures or categorizations until they have been learned.
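The centroid-and-cosine mechanism for slot constraints can be sketched as follows. This is an illustrative sketch only, not the released c2xg implementation: the toy three-dimensional vectors and the threshold value are invented for the example, standing in for fastText embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings" standing in for fastText vectors.
embeddings = {
    "determined": np.array([0.9, 0.1, 0.0]),
    "permitted":  np.array([0.8, 0.2, 0.1]),
    "reluctant":  np.array([0.85, 0.15, 0.05]),
    "banana":     np.array([0.0, 0.1, 0.9]),
}

# A slot constraint is represented as the centroid of its exemplar vectors.
centroid = np.mean([embeddings["determined"], embeddings["permitted"]], axis=0)

def satisfies(word, centroid, threshold=0.9):
    """A filler satisfies the constraint if it is close enough to the centroid."""
    return cosine_similarity(embeddings[word], centroid) >= threshold

print(satisfies("reluctant", centroid))  # close to the centroid
print(satisfies("banana", centroid))     # far from the centroid
```

The prototype structure described above falls out of this design: fillers nearer the centroid are better category members, and the threshold draws the category boundary.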
In the dialect experiments below we distinguish between early-stage grammars (which only contain syntactic constraints) and late-stage grammars (which contain lexical, syntactic, and semantic constraints).

(4) [ lex: "the" - sem: _way_ - lex: "to" ]
(4a) the chance to
(4b) the way to
(4c) the path to
(4d) the steps to

Constructions have different levels of abstractness or schematicity. For example, the construction in (4) functions as a modifier, as in the X position in the sentence "Tell me [X] bake yeast bread." This construction is not purely item-specific because it has multiple types or examples. But it is less productive than the location-based noun phrase construction in (5), which will have many more types in a corpus of the same size. CxG is a form of lexico-grammar in the sense that there is a continuum between item-specific and schematic constructions, exemplified here by (4) and (5), respectively. The existence of constructions at different levels of abstraction makes it especially important to view the grammar as a network with similar constructions arranged in local nodes within the grammar.

(5) [ lex: "the" - sem: _streets_ ]
(5a) the street
(5b) the sidewalk
(5c) the pavement
(5d) the avenues

A grammar or constructicon is not simply a set of constructions but rather a network with both taxonomic and similarity relationships between constructions. In computational CxG this is modelled by using pairwise similarity relationships between constructions at two levels: (i) representational similarity (how similar are the slot-constraints which define the construction) and (ii) token-based similarity (how similar are the examples or tokens of two constructions given a test corpus). Matrices of these two pairwise similarity measures are used to cluster constructions into smaller and then larger groups. For example, the phrasal verbs in (6) through (8) are members of a single cluster of phrasal verbs.
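The idea of clustering constructions from a pairwise similarity matrix can be sketched with standard hierarchical clustering. This is a minimal sketch under invented data, not the clustering procedure actually used: the six-construction similarity matrix is a toy example in which the first three constructions (say, phrasal verbs) resemble one another, as do the last three.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy pairwise similarity matrix for six constructions.
similarity = np.array([
    [1.00, 0.90, 0.80, 0.10, 0.20, 0.10],
    [0.90, 1.00, 0.85, 0.15, 0.10, 0.20],
    [0.80, 0.85, 1.00, 0.10, 0.15, 0.10],
    [0.10, 0.15, 0.10, 1.00, 0.90, 0.80],
    [0.20, 0.10, 0.15, 0.90, 1.00, 0.85],
    [0.10, 0.20, 0.10, 0.80, 0.85, 1.00],
])

# Convert similarity to distance, cluster hierarchically, and cut the
# tree into two groups (the "micro-clusters" of this toy example).
distance = 1.0 - similarity
condensed = squareform(distance, checks=False)
tree = linkage(condensed, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)
```

In the same spirit, groups of micro-clusters could then be merged again at a coarser cut of the tree to yield macro-clusters.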
Each individual construction has a specific meaning: in (6), focusing on the social attributes of a communication event; in (7), focusing on a horizontally-situated motion event; in (8), focusing on a motion event interpreted as a social state. These constructions each have a unique meaning but a shared form. The point here is that at a higher order of structure, there are a number of phrasal verb constructions which share the same schema. These constructions have sibling relationships with other phrasal verbs and a taxonomic relationship with the more schematic phrasal verb construction. These phrasal verbs are an example of a _micro-cluster_ referenced in the dialect experiments below (cf. Dunn, In Revision).

(6) [ sem: _screaming-yelling_ - syn: _through_ ]
(6a) stomping around
(6b) cackling on
(6c) shouting out
(6d) drooling over

(7) [ sem: _rolled-turned_ - syn: _through_ ]
(7a) rolling out
(7b) slid around
(7c) wiped out
(7d) swept through

(8) [ sem: _sticking-hanging_ - syn: _through_ ]
(8a) poking around
(8b) hanging out
(8c) stick around
(8d) hanging around

An even larger structure within the grammar is based on groups of these micro-clusters, structures which we will call _macro-clusters_. A macro-cluster is much larger because it contains many sub-clusters which themselves contain individual constructions. An example of a macro-cluster is given with five constructions in (9) through (13) which all belong to the same neighborhood of the grammar. The partial noun phrase in (9) points to a particular sub-set of some entity (as in, "parts of the recording"). The partial prepositional phrase in (10) points specifically to the end of some temporal entity (as in, "towards the end of the show"). In contrast, the partial noun phrase in (11) points to a particular sub-set of a spatial location (as in, "the edge of the sofa").
A more specific noun phrase in (12) points to a sub-set of a spatial location with a fixed level of granularity (i.e., at the level of a city or state). And, finally, in (13) a prepositional phrase points to a location within a spatial object. The basic idea here is to use these micro-clusters and macro-clusters as features for dialect classification in order to determine how variation is distributed across the grammar.

(9) [ sem: _part_ - lex: "of" - syn: _the_ ]
(9a) parts of the
(9b) portion of the
(9c) class of the
(9d) division of the

(10) [ syn: _through_ - sem: _which-whereas_ - lex: "end" - lex: "of" - syn: _the_ ]
(10a) at the end of the
(10b) before the end of the
(10c) towards the end of the

(11) [ sem: _which-whereas_ - sem: _way_ - lex: "of" ]
(11a) the edge of
(11b) the side of
(11c) the corner of
(11d) the stretch of

(12) [ sem: _which-whereas_ - syn: _southside-northside_ - syn: _chicagoland_ ]
(12a) in north texas
(12b) of southern california
(12c) in downtown dallas
(12d) the southside chicago

(13) [ lex: "of" - syn: _the_ - syn: _courtyard-balcony_ ]
(13a) of the gorge
(13b) of the closet
(13c) of the room
(13d) of the palace

Figure 2: Break-Down of the Grammar Used in the Experiments by Construction Type

The examples in this section have illustrated some of the fundamental properties of CxG and also provide a discussion of some of the features which are used in the dialect classification study. A more detailed linguistic examination of the contents of a grammar like this is available elsewhere (Dunn, 2023a). A break-down of the contents of the grammar is shown in Figure 2. The 15,215 total constructions are first divided into different scaffolds (early-stage vs late-stage), with a smaller number of local-only constructions which tend to be more schematic (1,045 vs. 14,170 constructions in the late-stage grammar). This grammar has a network structure and contains 2,132 micro-clusters (e.g., the phrasal verbs discussed above).
At an even higher level of structure, there are 97 macro-clusters or neighborhoods within the grammar (e.g., the sub-set referencing constructions discussed above). We can thus look at variation across the entire grammar, across different levels of scaffolded structure, and across different levels of abstraction. The main reason for doing this is to determine whether all nodes within the grammar vary across dialects in the same way.

### Dialect Classification

A dialect classifier is a supervised discriminative approach to modelling dialects: given labelled training data, the model learns to distinguish between dialects like American and Canadian English using syntactic features from computational CxG. There are two advantages to taking a classification approach: First, classifiers work well in high-dimensional spaces while more traditional methods from quantitative sociolinguistics do not scale across tens of thousands of potentially redundant structural features. Second, dialect classifiers provide a ground-truth measure of quality in the form of prediction accuracy: we know how robust a classifier model is given how well it is able to distinguish between different dialects. The classification of dialects or varieties of a language is a well-established area, although most work views this as an engineering challenge rather than as a way to learn about dialects themselves (Belinkov and Glass, 2016; Gamallo et al., 2016; Barbaresi, 2018; Kroon et al., 2018; Zampieri et al., 2020). Following previous work on using dialect classification to model linguistic variation (Dunn, 2018a, 2019b, c; Dunn and Wong, 2022), we use a Linear Support Vector Machine for classification. The advantage of an SVM over neural classifiers is that we can inspect the features which are useful for dialect classification; and the advantage over Logistic Regression or Naive Bayes is a better handling of redundant or correlated features.
The data is divided into a training set (80%) and a testing set (20%) with the same split used for each classification experiment. This means that the models for each level of spatial granularity (region, country, local area) are directly comparable across feature types. These dialect classifiers become our means of observing a grammar's ability to capture spatial variation: a better grammar models dialect variation with a higher prediction accuracy. This means that it is better able to describe the differences between dialects. For instance, the usage of phrasal verbs in (6) to (8) might differ significantly between Canada and New Zealand English, while at the same time accounting for only a minimal percentage of the overall syntactic difference between these dialects. A predictive model like a classifier, however, is evaluated precisely on how well it characterizes the total syntactic difference between each dialect.2 Footnote 2: This holds true unless multiple models reach the same ceiling of accuracy, a situation which does not occur in this study. We can further explore each dialect model using a confusion matrix to examine the types of errors made. For instance, Figure 3 shows the distribution of errors for the country-level classification of dialects using the late-stage grammar. Correct predictions occur on the diagonal; given the weighted f-score of 0.97, most predictions are correct in this case. Yet the number of errors for each pair of dialects reflects their similarity. Thus, the most similar countries are (i) New Zealand and Australia with 35+28 errors, (ii) Ireland and the UK with 26+21 errors, (iii) Pakistan and India with 16+21 errors, and (iv) Canada and the US with 17+1 errors. The confusion matrix also reveals the dominant variety, in the sense that only one sample of American English is mistakenly predicted to be Canadian (rows represent the true labels) while 17 samples of Canadian English are mistaken for American English. 
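The classification pipeline described above (a Linear SVM over construction frequencies, an 80/20 split, and a confusion matrix over the test set) can be sketched as follows. The data here is synthetic: frequency vectors over 50 hypothetical constructions for two invented dialect labels, with the first ten constructions used more often in one dialect.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, f1_score

rng = np.random.default_rng(0)

# Toy data: 200 samples per dialect, each a frequency vector over
# 50 hypothetical constructions.
n, d = 200, 50
us = rng.poisson(3.0, size=(n, d)).astype(float)
ca = rng.poisson(3.0, size=(n, d)).astype(float)
us[:, :10] += 2.0  # one dialect uses these constructions more often
X = np.vstack([us, ca])
y = np.array(["us"] * n + ["ca"] * n)

# The same 80/20 split is reused for every experiment in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LinearSVC().fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("weighted f1:", f1_score(y_te, pred, average="weighted"))
print(confusion_matrix(y_te, pred, labels=["us", "ca"]))
```

The off-diagonal cells of the confusion matrix are exactly the error counts used later to measure dialect similarity, and the linear model's coefficients are what make the classifier inspectable.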
Thus, these are asymmetrical errors. The point here is that these models allow us to measure not only the overall quality of the dialect classifier (its prediction accuracy represented by the f-score) but also determine which dialects are the most similar. This, in turn, means that we can measure the stability of dialect similarity across different nodes within the grammar. Note that this error-based similarity is different from feature-based similarity measures (Szmrecsanyi et al., 2019; Dunn, 2019) which instead operate on the grammatical representations themselves. However, the more reliable a classifier is (i.e., the higher its prediction accuracy), the more its errors can be used directly as a similarity measure. Our focus here is on understanding the dialect model itself by examining its false positive and false negative errors.

Figure 3: Distribution of Errors in Country-Level Dialect Model with Late-Stage Grammar

## Results

The basic question in this paper is whether different nodes within the grammar equally capture dialect variation and whether the resulting picture of dialectal relations is the same regardless of which node we examine. Now that we have discussed the grammatical features used (from computational CxG) and the means of observing syntactic variation (a dialect classifier), this section analyzes the results. We start with region-level dialects before moving to country-level and then local dialects.

### Regional Dialects

The 16 countries used in this study are grouped into regions as shown in Table 1, resulting in seven larger macro-dialects as shown in Table 2. The table shows three measures: precision (false positive errors), recall (false negative errors), and f-score. On the left the table shows the dialect performance with the late-stage grammar (i.e., with constructions containing all three types of slot-constraints) and on the right the early-stage grammar (i.e., with constructions containing only syntactic slot constraints).
The late-stage grammar performs better (0.99 f-score vs 0.93) but both perform well. In the early-stage grammar, two dialects are a particular source of the lower performance: Southern African English (South Africa and Zimbabwe) and Oceanic English (Australia and New Zealand). The overall f-score of the late-stage grammar (0.99) tells us that, at this level of spatial granularity, the grammar as a single network is able to make almost perfect distinctions between the syntax of these regional dialects. But how does this ability to capture variation spread across nodes within the grammar? This is explored in Figure 4, which contrasts the f-score of individual nodes using the macro-clusters and micro-clusters discussed above. A macro-cluster is a fairly large group of constructions based on pairwise similarity between the constructions themselves. A micro-cluster is a smaller group within a macro-cluster, based on pairwise similarity between instances or tokens of constructions in a test corpus (Dunn, In Revision). Each point in this figure is a single node of the grammar (blue for macro-clusters and orange for micro-clusters). The position of the point on the y-axis reflects the prediction performance for regional dialect classification using only that portion of the grammar. The dotted horizontal line represents the majority baseline, at which classification performance is no better than chance. This figure shows us, first, that the grammar as a whole is always better able to describe syntactic variation than individual nodes within that grammar. 
This is important in itself because, if language is a complex adaptive system, then it follows that variation in language is at least in part an emergent phenomenon in which the interaction between elements (here constructions) cannot be characterized by observing those elements in isolation.

\begin{table}
\begin{tabular}{|l|c c c|c c c|}
\hline
 & \multicolumn{3}{c|}{**Late-Stage Grammar**} & \multicolumn{3}{c|}{**Early-Stage Grammar**} \\
**Region** & _Precision_ & _Recall_ & _F-Score_ & _Precision_ & _Recall_ & _F-Score_ \\
\hline
Africa, Southern & 0.99 & 0.97 & 0.98 & 0.88 & 0.81 & 0.84 \\
Africa, Sub & 0.98 & 0.99 & 0.99 & 0.92 & 0.91 & 0.91 \\
North America & 0.99 & 1.00 & 0.99 & 0.93 & 0.95 & 0.94 \\
Asia, South & 1.00 & 1.00 & 1.00 & 0.99 & 0.98 & 0.98 \\
Asia, Southeast & 0.99 & 0.98 & 0.98 & 0.92 & 0.92 & 0.92 \\
Western Europe & 0.98 & 0.99 & 0.99 & 0.94 & 0.94 & 0.94 \\
Oceania & 0.98 & 0.98 & 0.98 & 0.88 & 0.89 & 0.88 \\
\hline
**Weighted Avg** & **0.99** & **0.99** & **0.99** & **0.93** & **0.93** & **0.93** \\
\hline
\end{tabular}
\end{table}

Table 2: Performance of Dialect Classifier With Regional Dialects: Late-Stage Constructions (Left, with all constraint types) and Early-Stage Constructions (Right, with only syntactic constraints)

The second finding which this figure shows is that individual nodes vary greatly in the amount of dialectal variation which they are subject to. Thus, several nodes in isolation are able to characterize between 70% and 80% of the overall variation, but others characterize very little. Within micro-clusters, especially, we see that many nodes within the grammar are essentially not subject to variation and thus provide little predictive power on their own. The same type of graph is shown again in Figure 5, now for the early-stage grammar with only local constraints. This figure replicates the same findings: First, no individual node is able to capture the overall variation as well as the complete network.
Second, there is a wide range across macro-clusters and micro-clusters in terms of that node's ability to characterize dialectal variation. What this tells us, in short, is that the complete picture of dialectal variation cannot be seen by observing small portions of the grammar in isolation. And yet this is precisely what almost all work on syntactic variation does. The next question is which regional dialects are similar according to the representations learned in this classifier. This is shown in Table 3 using the relative number of errors between regional dialects.

Figure 4: Distribution of Classification Performance Across Sub-Sets of the Grammar, Regional Dialect with the Late-Stage Grammar. Each macro-cluster and micro-cluster of constructions is plotted with its f-score on the dialect classification task, with both the performance of the entire late-stage grammar and the majority baseline also shown.

Thus, for example, European English (UK and Ireland) and Oceanic English (Australia and New Zealand) account for a plurality of errors: 37.5% in the late-stage grammar and 23.74% in the early-stage grammar. Thus, these are the most similar dialects because they are the most likely to be confused. Given the colonial spread of English, this is of course not surprising. What this table also shows is that the similarity between dialects differs to some degree depending on which part of the overall grammar we are observing: for example, North American English and Southeast Asian English are much more similar if we observe constructions from one part of the grammar (late-stage, with 10.83% of the errors) than the other (early-stage, with only 5.93% of the errors). This is important because it means that observing part of the grammar in isolation will not only inadequately characterize the amount of dialectal variation but will also provide different characterizations of the dialects relative to one another.
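The relative-error measure of dialect similarity can be sketched directly from a confusion matrix: sum the confusions between each unordered pair of dialects in both directions and express them as a share of all errors. The confusion matrix below is invented for illustration; it is not the matrix behind Table 3.

```python
import numpy as np
from itertools import combinations

# Toy confusion matrix for four dialects (rows = true, columns = predicted).
dialects = ["Europe", "Oceania", "America", "Asia"]
cm = np.array([
    [90, 30,  2,  1],
    [25, 95,  4,  2],
    [ 3,  5, 88,  6],
    [ 1,  2,  7, 92],
])

# Percentage of all errors accounted for by each unordered pair of dialects,
# summing confusions in both directions.
errors = cm.sum() - np.trace(cm)
pair_share = {}
for i, j in combinations(range(len(dialects)), 2):
    pair_share[(dialects[i], dialects[j])] = 100 * (cm[i, j] + cm[j, i]) / errors

top_pair = max(pair_share, key=pair_share.get)
print(top_pair, round(float(pair_share[top_pair]), 2))
```

Ranking `pair_share` from largest to smallest reproduces the shape of a table like Table 3, with the most frequently confused pair read as the most similar dialects.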
Before we investigate changes in dialect similarity across nodes within the grammar, we first evaluate the degree to which the classifiers depend on a small number of very predictive features. A possible confound with a prediction-based model of dialect is that a small number of constructions within each node could contribute almost all of the predictive power. This would distort our view of variation by implying that many constructions are in variation when, in fact, only a small number are. This is shown in Figure 6 for the late-stage grammar using the unmasking method from forensic linguistics (Koppel et al., 2007).

Figure 5: Distribution of Classification Performance Across Sub-Sets of the Grammar, Regional Dialect with the Early-Stage Grammar

The basic idea behind this method is to train a linear classifier (in this case an SVM) over many rounds, removing the most predictive features for each dialect on each round. In the figure the f-score for each region is plotted on the y-axis over the course of 500 rounds of feature pruning, where each round removes the top features for each dialect. Overall, the prediction accuracy at round 500 represents the ability to characterize dialects when the top 25% of constructions have been removed. A robust classification model exhibits a gentle downward curve on an unmasking plot while a brittle model (depending on just a few constructions) would exhibit a sharp decline in the first several rounds. This figure, then, shows that the regional dialect model is quite robust overall. This supports our further analysis by showing that the predictive power is not constrained to only a few constructions within each node.
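The unmasking loop can be sketched as follows: retrain a linear classifier repeatedly, each time dropping the features with the largest absolute coefficients and recording the score. The data, the number of rounds, and the number of features dropped per round are all toy values for the example, not the settings used for Figure 6.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Toy two-class data over 40 features, all of them weakly predictive.
n, d = 300, 40
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(0.4, 1.0, size=(n, d))  # every feature shifted slightly
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Unmasking: repeatedly retrain, dropping the most predictive features.
active = list(range(d))
scores = []
for rnd in range(5):
    clf = LinearSVC().fit(X[:, active], y)
    scores.append(clf.score(X[:, active], y))
    # Remove the two features with the largest absolute coefficients.
    top = set(np.argsort(np.abs(clf.coef_[0]))[-2:])
    active = [f for k, f in enumerate(active) if k not in top]

print([round(s, 3) for s in scores])
```

Because the predictive signal here is spread across all features, the score declines only gently across rounds, which is the "robust" curve shape described for the regional dialect model; a brittle model would collapse as soon as its few strong features were pruned.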
\begin{table}
\begin{tabular}{|c c|c c|}
\hline
\multicolumn{2}{|c|}{**Late-Stage Grammar**} & \multicolumn{2}{|c|}{**Early-Stage Grammar**} \\
_Region Pairs_ & _\% of Errors_ & _Region Pairs_ & _\% of Errors_ \\
\hline
Europe + Oceania & 37.50\% & Europe + Oceania & 23.74\% \\
America + Asia (Southeast) & 10.83\% & America + Oceania & 20.33\% \\
Africa (Southern) + Africa (Sub) & 10.83\% & Africa (Southern) + Africa (Sub) & 10.09\% \\
America + Oceania & 10.00\% & Asia (South) + Asia (Southeast) & 6.38\% \\
Asia (South) + Asia (Southeast) & 8.33\% & America + Asia (Southeast) & 5.93\% \\
Africa (Southern) + Oceania & 5.83\% & Africa (Southern) + America & 5.34\% \\
Asia (Southeast) + Oceania & 5.00\% & Africa (Southern) + Asia (Southeast) & 5.19\% \\
\hline
\end{tabular}
\end{table}

Table 3: Distribution of Errors in Dialect Classifier With Regional Dialects: Late-Stage Constructions (Left) and Early-Stage Constructions (Right)

Figure 6: Unmasking of Regional Dialects with the Late-Stage Grammar. Each 50 rounds removes approximately 2.5% of the grammar, so that round 500 includes only 75% of the original grammar.

Given that these dialect models do not implicitly depend on only a few constructions, we further examine the variation in similarity relationships across nodes within the grammar in Figure 7. This figure shows a violin plot of the distribution of correlation scores between (i) the similarity relations derived from a node within the grammar and (ii) the similarity relations derived from the high-performing late-stage grammar. The full late-stage grammar serves as a sort of ground-truth here because its prediction accuracies are nearly perfect. Thus, this measure can be seen as an indication of how close we would get to the actual relationships between dialects by observing only one node within the grammar (a macro-cluster or micro-cluster). The figure shows that, in all cases, there is no meaningful relationship between the two.
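The correlation scores behind a figure like Figure 7 can be sketched by comparing two vectors of pairwise dialect similarities: one derived from the full grammar and one from a single node. The six pair labels and all similarity values below are invented for the example; a rank correlation such as Spearman's rho is one reasonable choice of statistic here, though the paper does not specify which coefficient it uses.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy similarity scores for the same six dialect pairs, derived from
# (i) the full grammar and (ii) a single node within it.
pairs = ["eu-oc", "am-oc", "af-af", "as-as", "am-as", "af-oc"]
full_grammar = np.array([0.90, 0.60, 0.55, 0.40, 0.35, 0.20])
single_node  = np.array([0.30, 0.80, 0.20, 0.70, 0.10, 0.60])

# Low correlation means the node paints a different picture of which
# dialects are similar than the grammar as a whole does.
rho, p = spearmanr(full_grammar, single_node)
print(round(float(rho), 3))
```

Repeating this for every macro-cluster and micro-cluster yields the distribution of correlations that the violin plot summarizes.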
If our goal is to characterize syntactic dialect variation as a whole, this means that studies which observe only isolated sets of features will not be able to make predictions beyond those features. In short, language is a complex adaptive system and observing only small pieces of this system is inadequate for capturing emergent interactions between constructions.

Figure 7: Correlation of Error Distribution Between Late-Stage Grammar and Nodes within the Grammar for Regional Dialects. High correlation indicates that the same dialects are similar in each model type while low correlation indicates that the relationships between dialects for a given component of the grammar differ from the late-stage grammar.

### National Dialects

The previous section analyzed variation across regional dialects in order to show that variation is spread across many nodes within the grammar and that our view of syntactic variation depends heavily on the specific portions of the grammar under observation. This section continues this analysis with finer-grained country-level dialects in order to determine whether these same patterns emerge. The same methods of analysis are used, starting with the classification performance in Table 4. As before, this is divided into late-stage constructions and early-stage constructions. This table represents a single dialect model but for convenience it is divided into inner-circle varieties on the top and outer-circle varieties on the bottom (Kachru, 1990). As before, the late-stage grammar provides a better characterization of dialects than the early-stage grammar, 0.96 f-score vs. 0.83 f-score. While the late-stage grammar's performance is comparable to the regional-dialect model, the early-stage model shows a sharp decline. Dialects with lower performance are less distinct and thus more influenced by other dialects; for example, we see that New Zealand English in both models is much less distinct or entrenched than other country-level dialects.
It is thus more difficult to identify a sample of New Zealand English (cf. Dunn and Wong, 2022). Continuing with the error analysis in Table 5, we see that almost 20% of the errors in the late-stage model are confusions between New Zealand and Australia and over 12% between Ireland and the UK (which includes Northern Ireland here). As before, the similarity between dialects derived from classification errors reflects the similarity between these countries in geographic, historical, and social terms.

\begin{table}
\begin{tabular}{|l|c c c|c c c|}
\hline
 & \multicolumn{3}{c|}{**Late-Stage Grammar**} & \multicolumn{3}{c|}{**Early-Stage Grammar**} \\
**Country** & _Precision_ & _Recall_ & _F-Score_ & _Precision_ & _Recall_ & _F-Score_ \\
\hline
\multicolumn{7}{|c|}{**Inner-Circle Varieties**} \\
\hline
Australia (au) & 0.94 & 0.95 & 0.94 & 0.81 & 0.83 & 0.82 \\
Canada (ca) & 0.98 & 0.98 & 0.98 & 0.71 & 0.73 & 0.72 \\
Ireland (ie) & 0.94 & 0.96 & 0.95 & 0.88 & 0.88 & 0.88 \\
New Zealand (nz) & 0.89 & 0.84 & 0.86 & 0.64 & 0.59 & 0.61 \\
United Kingdom (uk) & 0.96 & 0.96 & 0.96 & 0.85 & 0.86 & 0.86 \\
United States (us) & 0.97 & 1.00 & 0.98 & 0.87 & 0.87 & 0.87 \\
South Africa (za) & 0.99 & 1.00 & 0.99 & 0.80 & 0.81 & 0.80 \\
\hline
\multicolumn{7}{|c|}{**Outer-Circle Varieties**} \\
\hline
Bangladesh (bd) & 0.97 & 0.89 & 0.93 & 0.76 & 0.69 & 0.72 \\
Indonesia (id) & 1.00 & 0.84 & 0.91 & 0.57 & 0.49 & 0.53 \\
India (in) & 0.97 & 0.98 & 0.97 & 0.94 & 0.96 & 0.95 \\
Kenya (ke) & 0.98 & 0.99 & 0.98 & 0.92 & 0.87 & 0.89 \\
Malaysia (my) & 0.96 & 0.98 & 0.97 & 0.82 & 0.74 & 0.78 \\
Nigeria (ng) & 0.98 & 1.00 & 0.99 & 0.90 & 0.83 & 0.87 \\
Philippines (ph) & 0.98 & 0.98 & 0.98 & 0.83 & 0.87 & 0.85 \\
Pakistan (pk) & 0.94 & 0.94 & 0.94 & 0.84 & 0.89 & 0.87 \\
Zimbabwe (zw) & 0.95 & 0.91 & 0.93 & 0.48 & 0.53 & 0.51 \\
\hline
**Weighted Avg** & **0.96** & **0.96** & **0.96** & **0.83** & **0.83** & **0.83** \\
\hline
\end{tabular}
\end{table}

Table 4: Performance of Dialect
Classifier With National Dialects: Late-Stage Constructions (Left) and Early-Stage Constructions (Right). This is a single model for all dialects, with Inner-Circle Varieties shown at the top and Outer-Circle Varieties at the bottom.

The distribution of predictive accuracy across nodes within the grammar is shown in Figure 8. Once again we see that a few portions of the grammar have relatively high performance on their own (capturing upwards of 70% of the predictive power), but that no individual nodes perform as well as the grammar as a whole. As before this means that variation is spread throughout the grammar and that interactions between nodes are important for characterizing syntactic variation at the country level.

\begin{table}
\begin{tabular}{|c c|c c|}
\hline
\multicolumn{2}{|c|}{**Late-Stage Grammar**} & \multicolumn{2}{|c|}{**Early-Stage Grammar**} \\
_Country Pairs_ & _\% of Errors_ & _Country Pairs_ & _\% of Errors_ \\
\hline
Australia + New Zealand & 19.85\% & Canada + United States & 11.76\% \\
Ireland + United Kingdom & 12.11\% & Australia + New Zealand & 6.77\% \\
India + Pakistan & 11.86\% & Bangladesh + Pakistan & 5.58\% \\
Bangladesh + Pakistan & 7.47\% & Australia + Canada & 5.34\% \\
Australia + United Kingdom & 5.93\% & Ireland + United Kingdom & 5.29\% \\
Canada + United States & 4.12\% & Australia + United Kingdom & 4.69\% \\
\hline
\end{tabular}
\end{table}

Table 5: Distribution of Errors in Dialect Classifier With National Dialects: Late-Stage Constructions (Left) and Early-Stage Constructions (Right)

Figure 8: Distribution of Classification Performance Across Sub-Sets of the Grammar, National Dialect with the Late-Stage Grammar

Continuing with the distribution of similarity values across sub-sets of the grammar, Figure 9 shows the correlation between macro- and micro-clusters and the ground-truth of the high-performing late-stage grammar. Here the correlation is above chance but remains quite low (below 0.2).
This again indicates that different nodes of the grammar provide different views of the similarity between country-level dialects, agreeing with the different error ranks shown in Table 5. For instance, New Zealand English might be close to Australian in one part of the grammar but close to UK English in another. As a high-dimensional and complex system, grammatical variation must be viewed from the perspective of the entire grammar. National dialects are more fine-grained than regional dialects and also have a higher number of categories (from 7 to 16), making the classification task more difficult because it must now distinguish between similar dialects (like American and Canadian English). Given the importance of the nation-state in modern mobility (i.e., in terms of ease of travel and immigration), these country-level dialects are more reflective of the underlying population than cross-country aggregations. In other words, the social network of countries is more coherent than that of larger regions because it is the nation which creates boundaries around human mobility. Since we are viewing both the grammar and the population of speakers as complex networks, it is important to go further and analyze local populations in the form of dialect areas within countries.

Figure 9: Correlation of Error Distribution Between Late-Stage Grammar and Nodes within the Grammar for National Dialects. High correlation indicates that the same dialects are similar in each model type.

### Local Dialects

This section takes a closer look at dialectal variation by modelling the differences between local populations within the same region. As discussed above, data is collected from areas around airports as a consistent proxy for urban areas. These urban areas are then clustered into local groups using spatial but not linguistic information.
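The spatial-only clustering of urban areas into local dialect groups can be sketched with ordinary k-means over coordinates. The anchor coordinates, spread, and number of clusters below are invented for the example; the point of the sketch is simply that only geography enters the clustering.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Toy (latitude, longitude) coordinates for urban areas scattered around
# three well-separated anchor locations; all values are invented.
anchors = np.array([[43.7, -79.4], [45.5, -73.6], [49.3, -123.1]])
points = np.vstack([a + rng.normal(0.0, 0.3, size=(20, 2)) for a in anchors])

# Cluster purely on coordinates: no linguistic information is used,
# so any dialect differences found later are not built into the areas.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)
print(sorted(np.bincount(labels).tolist()))
```

Because the areas are defined without reference to language, the dialect classifier's ability to separate them afterwards is evidence about linguistic variation rather than an artifact of how the groups were drawn.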
Thus, as shown in the map of North American dialect areas in Figure 1, a country-level dialect is divided into many smaller areas and then the differences between these local dialects are modelled using the same classification methods. This section examines the North American, European, and South Asian models in more detail. The full results are available in the supplementary material.

#### North America

We start with North American dialects, mapped in Figure 1 and listed by name in Table 6, again with the late-stage grammar on the left and the early-stage grammar on the right. The distinction between same-country dialects is much smaller, which means that the prediction task is much harder: here the f-scores drop to 0.69 and 0.40 (both of which remain much higher than the majority baseline). A classification model thus also provides a measure of the amount of dialectal variation between varieties: here there is much less variation overall because the local populations are more similar. Local dialects with lower performance are again less distinguishable and thus less unique in their grammar: for example, Midwestern and Plains American English have the lowest f-scores at 0.57 and 0.52, respectively. The distribution of classification errors is shown in Table 7. All major sources of errors in the late-stage grammar are within the same country and close geographically. For example, over 10% of the errors are between Ontario and Quebec.
The distribution of errors also shows that the lower performance of the Midwest and Plains areas is a result of their similarity to one another and, in the case of the Midwest, to the Eastern dialect. Thus, the types of errors here are as important as the number of errors. The distribution of nodes within the grammar in terms of classification performance is shown in Figure 10; here most micro-clusters have minimal predictive adequacy on their own but some macro-clusters retain meaningful predictive power.

\begin{table}
\begin{tabular}{|l|c c c|c c c|}
\hline
 & \multicolumn{3}{c|}{**Late-Stage Grammar**} & \multicolumn{3}{c|}{**Early-Stage Grammar**} \\
**Local Area** & _Precision_ & _Recall_ & _F-Score_ & _Precision_ & _Recall_ & _F-Score_ \\
\hline
Western Canada (CA-1) & 0.82 & 0.83 & 0.82 & 0.64 & 0.58 & 0.61 \\
Ontario (CA-2) & 0.65 & 0.69 & 0.67 & 0.43 & 0.45 & 0.44 \\
Quebec (CA-3) & 0.65 & 0.61 & 0.63 & 0.35 & 0.33 & 0.34 \\
Nova Scotia (CA-4) & 0.57 & 0.56 & 0.57 & 0.19 & 0.24 & 0.21 \\
\hline
Midwest (US-1) & 0.57 & 0.57 & 0.57 & 0.32 & 0.26 & 0.29 \\
Central California (US-2) & 0.80 & 0.81 & 0.81 & 0.35 & 0.34 & 0.35 \\
Texas (US-3) & 0.66 & 0.67 & 0.67 & 0.30 & 0.32 & 0.31 \\
Southern (US-4) & 0.68 & 0.56 & 0.61 & 0.26 & 0.27 & 0.27 \\
Florida (US-5) & 0.77 & 0.78 & 0.78 & 0.23 & 0.30 & 0.26 \\
West Texas (US-6) & 0.60 & 0.55 & 0.57 & 0.25 & 0.24 & 0.25 \\
East Coast (US-7) & 0.77 & 0.82 & 0.79 & 0.57 & 0.56 & 0.56 \\
Plains (US-8) & 0.55 & 0.49 & 0.52 & 0.11 & 0.14 & 0.12 \\
Southwestern (US-9) & 0.66 & 0.71 & 0.68 & 0.39 & 0.41 & 0.40 \\
\hline
**Weighted Avg** & **0.69** & **0.69** & **0.69** & **0.40** & **0.40** & **0.40** \\
\hline
\end{tabular}
\end{table}

Table 6: Performance of Dialect Classifier With Local Dialects: Late-Stage Constructions (Left) and Early-Stage Constructions (Right). This is a single model for all dialects, with American Varieties shown at the bottom and Canadian at the top.
The correlation between similarity relations across nodes within the grammar is visualized in Figure 11. While the overall prediction accuracy is lower, the similarity relationships are \begin{table} \begin{tabular}{|c r|c r|} \hline \multicolumn{2}{|c|}{**Late-Stage Grammar**} & \multicolumn{2}{|c|}{**Early-Stage Grammar**} \\ _Country Pairs_ & \multicolumn{1}{|c|}{\% _of Errors_} & \multicolumn{1}{|c|}{_Country Pairs_} & \multicolumn{1}{|c|}{\% _of Errors_} \\ \hline Ontario (CA-2) + Quebec (CA-3) & 10.31\% & Western (CA-1) + Quebec (CA-3) & 6.09\% \\ Western (CA-1) + Ontario (CA-2) & 8.05\% & Ontario (CA-2) + Quebec (CA-3) & 5.12\% \\ Midwest (US-1) + East (US-7) & 7.73\% & Ontario (CA-2) + East (US-7) & 4.47\% \\ Western (CA-1) + Quebec (CA-3) & 6.44\% & Western (CA-1) + Ontario (CA-2) & 4.39\% \\ Midwest (US-1) + Plains (US-8) & 5.48\% & Midwest (US-1) + Plains (US-8) & 3.82\% \\ \hline \end{tabular} \end{table} Table 7: Distribution of Errors in Dialect Classifier With North American Dialects: Late-Stage Constructions (Left) and Early-Stage Constructions (Right) Figure 10: Distribution of Performance across the Late-Stage grammar, North America more stable. In other words, the representation of which dialects have similar grammars is less dependent here on the sub-set of the grammar being observed. #### 4.3.2 Europe The map of European dialects (within the UK and Ireland) is shown in Figure 12. As before, these areas are formed using spatial clustering with the dialect classifier then used to characterize the syntactic differences between dialect areas. As listed in Table 8, there are six local dialects with an f-score of 0.77 (late-stage grammar) and 0.64 (early-stage grammar). The range of f-scores across dialect areas is wider than in the North American model, with a high of 0.94 (Ireland) and a low of 0.40 (Scotland). This is partly driven by the number of samples per dialect, with Scotland particularly under-represented. 
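Figure 11 correlates the error distribution of the full late-stage grammar with that of each node. Assuming each model's errors are flattened into a vector of pairwise error shares over the same dialect pairs, Pearson's r between two such vectors can be computed directly; the vectors below are invented:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length error-share vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Error shares over the same dialect pairs: full grammar vs one node (invented).
full_model = [0.10, 0.08, 0.07, 0.06, 0.05]
node_model = [0.09, 0.09, 0.06, 0.07, 0.04]
r = pearson_r(full_model, node_model)
```

A high r means the node "tells the same story" about which dialects are similar, even if its raw accuracy is low.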
This means, then, that the characterizations this model makes about Irish English are much more reliable than the characterizations made about Scottish English. The distribution of performance across nodes in the grammar is shown in Figure 13. The figure shows that many nodes are meaningfully above the majority baseline in terms of predictive power but all fall short of the grammar as a whole. In fact, some micro-clusters have nearly as much power as the best macro-clusters. Figure 11: Correlation of Error Distribution Between Late-Stage Grammar and Nodes within the Grammar for North American Dialects. High Correlation indicates that the same dialects are similar in each model. This indicates that the variation in UK and Irish English is spread throughout many individual parts of the grammar, a fact that would be overlooked if we had focused instead on a few constructions in isolation. The main take-away from the European model, then, is once again that an accurate characterization of the grammatical differences between these local dialects requires access to the entire grammar. Focusing on smaller nodes within the grammar does not provide strong predictive power. As before, this has two implications: first, that variation is distributed across the grammar and, second, that emergent relationships between constructions are essential for depicting syntactic variation in the aggregate. #### 4.3.3 South Asia The final model of local dialects we will investigate is the outer-circle varieties from South Asia. These differ from the inner-circle dialects in a number of ways. In the first case, these speakers use English for different purposes than inner-circle speakers, almost always being highly multi-lingual with other languages used in the home (while, for example, North American speakers of English are often monolingual). In the second case, there is a socio-economic skew in the sense that higher class speakers are more likely to use English. 
The impact of socio-economic status is even more important when we consider that this data is collected from digital sources (tweets) and that the population of Twitter users in South Asia is less representative of the larger population than it is in North America and Europe. Thus, we use South Asia as a case-study in variation within outer-circle dialects. The map of local dialect areas is shown in Figure 14. This encompasses three countries: India, Pakistan, and Bangladesh. Figure 13: Distribution of Performance across the Late-Stage grammar, Europe The performance of the dialect model by local dialect is shown in Table 9, divided by country and by grammar type. The overall f-scores here are more similar to the North American model, at 0.68 and 0.56. As before, it is more difficult to distinguish between these local dialects because they are more similar overall. There is also a larger range of f-scores than before: the Bangladesh dialects are the highest performing, at 0.97 and 0.82. But within India, with many adjacent dialect areas, the f-scores fall as low as 0.14 (Uttar Pradesh) and 0.24 (Bihar). One conclusion to be drawn from these low values for specific local dialects is that the areas posited by the spatial clustering step do not contain unique and predictable dialect variants. This is a clear case where a joint spatial/linguistic approach to forming local areas would be preferable. In other words, the spatial organization suggests a boundary which the linguistic features do not support. Figure 14: Distribution of Local Dialects in South Asia The distribution of predictive power across the grammar is shown in Figure 15. As before, predictive power is distributed across nodes within the grammar and no single node nears the performance of the grammar as a whole. This means that interactions between constructions are an important part of variation and that the variability in different nodes is not simply redundant information. 
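The local areas above come from a spatial clustering step over user locations, which the paper contrasts with a joint spatial/linguistic approach. A minimal sketch of the purely spatial step, using plain Lloyd's k-means on (longitude, latitude) points; the coordinates are invented and the paper's actual clustering procedure may differ:

```python
def kmeans(points, centroids, iters=20):
    """Plain Lloyd's algorithm on (lon, lat) pairs: assign, then recompute means."""
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                     if cl else centroids[k]
                     for k, cl in enumerate(clusters)]
    return centroids, clusters

# Two well-separated groups of user locations (roughly Delhi and Dhaka; invented).
points = [(77.2 + dx, 28.6 + dy) for dx in (0, .1, -.1) for dy in (0, .1, -.1)] \
       + [(90.4 + dx, 23.8 + dy) for dx in (0, .1, -.1) for dy in (0, .1, -.1)]
centroids, clusters = kmeans(points, centroids=[points[0], points[-1]])
```

Purely spatial boundaries like these are exactly what the low f-scores for Uttar Pradesh and Bihar call into question: nothing in this step consults the linguistic features.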
\begin{table} \begin{tabular}{|l|c c c|c c c|} \hline & \multicolumn{3}{c|}{**Late-Stage Grammar**} & \multicolumn{3}{c|}{**Early-Stage Grammar**} \\ **Local Area** & _Precision_ & _Recall_ & _F-Score_ & _Precision_ & _Recall_ & _F-Score_ \\ \hline Dhaka (BD-1) & 0.96 & 0.98 & 0.97 & 0.88 & 0.89 & 0.89 \\ Chattopram (BD-2) & 0.87 & 0.79 & 0.82 & 0.61 & 0.58 & 0.60 \\ \hline New Delhi (IN-1) & 0.56 & 0.64 & 0.60 & 0.51 & 0.53 & 0.52 \\ Gujarat (IN-2) & 0.55 & 0.55 & 0.55 & 0.33 & 0.39 & 0.36 \\ Maharashtra (IN-3) & 0.67 & 0.71 & 0.69 & 0.54 & 0.53 & 0.54 \\ Tamil Nadu (IN-4) & 0.72 & 0.71 & 0.71 & 0.55 & 0.58 & 0.56 \\ Uttar Pradesh (IN-5) & 0.17 & 0.11 & 0.14 & 0.42 & 0.22 & 0.29 \\ Bihar (IN-7) & 0.39 & 0.17 & 0.24 & 0.28 & 0.17 & 0.21 \\ Andhra Pradesh + Odisha (IN-8) & 0.69 & 0.61 & 0.65 & 0.54 & 0.49 & 0.51 \\ \hline Southern (PK-1) & 0.56 & 0.51 & 0.53 & 0.43 & 0.40 & 0.42 \\ Northern (PK-2) & 0.72 & 0.75 & 0.74 & 0.63 & 0.66 & 0.65 \\ \hline **Weighted Avg** & **0.68** & **0.68** & **0.68** & **0.56** & **0.56** & **0.56** \\ \hline \end{tabular} \end{table} Table 9: Performance of Dialect Classifier With Local Dialects in South Asia: Late-Stage Constructions (Left) and Early-Stage Constructions (Right). This is a single model for all local dialects. ## 5 Discussion and Conclusions The main focus of this paper has been to investigate the assumption, common to most studies of grammatical variation across dialects, that individual constructions within the grammar can be viewed in isolation. The results show conclusively that this is simply not the case: language is a complex adaptive system and observing variation in an arbitrary sub-set of that system leads to incomplete models. The paper has repeated the same experimental paradigm at three levels of spatial granularity (regional, national, and local dialects) in order to show that the inadequacy of individual nodes within the grammar is a consistent phenomenon. 
Across all levels of spatial granularity, these experiments tell us three important things about variation within a complex system: **Finding 1: No individual node within the grammar captures dialect variation as accurately as the grammar as a whole.** The basic approach taken here is to first cluster constructions within the grammar into macro- and micro-clusters using similarity relations between the constructions themselves (thus, independently of their patterns of variation). As shown in Figure 2, this leads to 1,941 micro-clusters in the late-stage grammar and 191 in the early-stage grammar. Even larger nodes are captured by macro-clusters, including 81 in the late-stage grammar and 16 in the early-stage grammar. The dialect classification experiments are repeated with each of these clusters alone, with the results shown in Figures 4, 5, 8, 10, 13, and 15. Figure 15: Distribution of Performance across the Late-Stage grammar, South Asia. Additional figures are available in the supplementary material. In each case the grammar as a whole performs much better than any individual sub-set or node within the grammar. Why? Our interpretation is that there are emergent interactions between constructions in different nodes across the grammar. This means, for instance, that the use of plural nouns interacts with the use of certain adpositional phrases which interacts in turn with the use of phrasal verbs. Language is a complex adaptive system and the huge number of such minor interactions provides information about syntactic variation which is not redundant with the information contained in local nodes within the grammar. In other words, a significant part of syntactic variation is contained within emergent interactions which cannot be localized to specific families of constructions. 
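Finding 1 can be illustrated with a toy experiment: when the class signal is spread thinly over many weak features, a small feature subset (one "node") predicts far worse than the full set. The nearest-centroid classifier and synthetic data below are stand-ins, not the paper's model or corpus:

```python
import random

def nearest_centroid_accuracy(train, test, feats):
    """train: {label: [vectors]}; classify test items by the nearest class
    centroid, using only the feature indices in feats."""
    feats = list(feats)
    cents = {lab: [sum(v[f] for v in vs) / len(vs) for f in feats]
             for lab, vs in train.items()}
    hits = 0
    for lab, v in test:
        best = min(cents, key=lambda L: sum((v[f] - cents[L][i]) ** 2
                                            for i, f in enumerate(feats)))
        hits += (best == lab)
    return hits / len(test)

random.seed(0)
N_FEATS = 50

def sample(lab):            # the class signal is a tiny shift on *every* feature
    mu = 0.15 if lab == "A" else -0.15
    return [random.gauss(mu, 1.0) for _ in range(N_FEATS)]

train = {lab: [sample(lab) for _ in range(200)] for lab in ("A", "B")}
test = [(lab, sample(lab)) for lab in ("A", "B") for _ in range(200)]

full_set = nearest_centroid_accuracy(train, test, range(N_FEATS))
one_node = nearest_centroid_accuracy(train, test, range(5))   # a small "node"
```

No single feature is informative on its own here, yet the full set classifies well; in that sense the toy mirrors the gap between whole-grammar and single-node performance.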
**Finding 2: Individual nodes within the grammar vary widely in the degree to which they are subject to dialectal variation.** Following from the above finding, the degree to which individual macro- and micro-clusters are able to predict dialect membership represents the degree to which they themselves are subject to spatial variation. Thus, the results shown in Figures 4, 5, 8, 10, 13, and 15 also mean that not all clusters of constructions are subject to the same level of variation. Why? Our interpretation is that dialect variation is distributed across the grammar but in uneven portions. For instance, the unmasking experiment (Figure 6 and the supplementary material) shows that, even if we disregard the network structure of the grammar, the performance of the model is distributed across many features: performance remains rather high even with the top 25% of constructions removed. Thus, we know that variation is distributed widely across the grammar (regardless of cluster assignments) and that macro- and micro-clusters also vary widely in predictive power. This is important because it means that a grammar's variation is not simply the sum of the variation of its component constructions. **Finding 3: Similarity relations between dialects diverge widely from the best-performing model depending on the sub-set of the grammar being observed.** Given that the entire late-stage grammar retains high classification performance, we extract similarity relationships between dialects by looking at the errors which the classifier makes. For example, our reasoning is that Midwestern American English and Plains American English have more errors precisely because their grammar is more similar and thus easier to confuse. But the question is whether all sub-sets of the grammar lead to the same similarity errors. The answer is a resounding no, as shown in Tables 3, 5, and 7 and in Figures 7, 9, and 11. 
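The unmasking experiment cited under Finding 2 removes the most predictive constructions and re-evaluates. A sketch under a stand-in setup (synthetic data and a nearest-centroid classifier, not the paper's model): rank features by centroid separation, drop the top 25%, and re-score. Accuracy should drop only modestly here because every synthetic feature carries some signal:

```python
import random

def centroid_accuracy(train, test, feats):
    """Nearest-centroid accuracy using only the feature indices in feats."""
    feats = list(feats)
    cents = {lab: [sum(v[f] for v in vs) / len(vs) for f in feats]
             for lab, vs in train.items()}
    hits = 0
    for lab, v in test:
        best = min(cents, key=lambda L: sum((v[f] - cents[L][i]) ** 2
                                            for i, f in enumerate(feats)))
        hits += (best == lab)
    return hits / len(test)

random.seed(1)
K = 40

def sample(lab):                 # every "construction" carries a little signal
    mu = 0.2 if lab == "A" else -0.2
    return [random.gauss(mu, 1.0) for _ in range(K)]

train = {lab: [sample(lab) for _ in range(200)] for lab in ("A", "B")}
test = [(lab, sample(lab)) for lab in ("A", "B") for _ in range(200)]

# "Unmasking": rank features by centroid separation and drop the top 25%.
sep = sorted(range(K), key=lambda f: abs(
    sum(v[f] for v in train["A"]) / 200 - sum(v[f] for v in train["B"]) / 200))
kept = sep[:K - K // 4]          # the weakest 75% of the features
full_acc = centroid_accuracy(train, test, range(K))
masked_acc = centroid_accuracy(train, test, kept)
```

Robustness of `masked_acc` is the toy analogue of the paper's observation that performance stays high after removing the top constructions.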
Taking similarity measures as a characterization of the dialects relative to one another, this means that the overall story of variation depends heavily on which sub-set of the grammar we observe. Why is this a fundamental problem for previous work? Our interpretation is that language is a complex adaptive system so that examining any sub-set in isolation leads to incorrect conclusions. Thus, if we observed a small part of the grammar and tried to predict the most similar grammars overall, we would rarely reach the same relationships. By treating parts of the grammar as independent and self-contained systems, we fail to capture the interactions and complexities of the grammar as a single functioning system. The implications of these findings are that all studies which are based on isolating one or two constructions are fundamentally unable to provide an accurate representation of syntactic variation. ## Supplemental data The full supplementary materials contain the raw classification results, the full grammar and composition of macro- and micro-clusters, and additional figures and maps for each level of spatial granularity. These additional results repeat the main findings of the analysis presented in the paper. The supplementary material can be found **at this link**. ## Data Availability Statement While the corpora analyzed for this study cannot be shared directly, the constructional features used for the classification models can be found **at this link**. This allows the replication of these findings.
2309.05808
Geodesics on Regular Constant Distance Surfaces
Suppose that the surfaces K0 and Kr are the boundaries of two convex, complete, connected C^2 bodies in R^3. Assume further that the (Euclidean) distance between any point x in Kr and K0 is always r (r > 0). For x in Kr, let {\Pi}(x) denote the nearest point to x in K0. We show that the projection {\Pi} preserves geodesics in these surfaces if and only if both surfaces are concentric spheres or co-axial round cylinders. This is optimal in the sense that the main step to establish this result is false for C^{1,1} surfaces. Finally, we give a non-trivial example of a geodesic preserving projection of two C^2 non-constant distance surfaces. The question whether for any C^2 convex surface S0, there is a surface S whose projection to S0 preserves geodesics is open.
J. J. P. Veerman
2023-09-11T20:19:07Z
http://arxiv.org/abs/2309.05808v1
# Geodesics on Regular Constant Distance Surfaces ###### Abstract Suppose that the surfaces \(K_{0}\) and \(K_{r}\) are the boundaries of two convex, complete, connected \(C^{2}\) bodies in \(\mathbb{R}^{3}\). Assume further that the (Euclidean) distance between any point \(x\) in \(K_{r}\) and \(K_{0}\) is always \(r\) (\(r>0\)). For \(x\) in \(K_{r}\), let \(\Pi(x)\) denote the nearest point to \(x\) in \(K_{0}\). We show that the projection \(\Pi\) preserves geodesics in these surfaces if and only if both surfaces are concentric spheres or co-axial round cylinders. This is optimal in the sense that the main step to establish this result is false for \(C^{1,1}\) surfaces. Finally, we give a non-trivial example of a geodesic preserving projection of two \(C^{2}\)_non_-constant distance surfaces. The question whether for any \(C^{2}\) convex surface \(S_{0}\), there is a surface \(S\) whose projection to \(S_{0}\) preserves geodesics is open. 2020 Mathematics Subject Classification: 52A15, 53A05. ## 1 Introduction Suppose \(\gamma(t)\) is a trajectory of an object in \(\mathbb{R}^{3}\) outside a convex body. In this paper, \(\Pi(\gamma(t))\) is called the _projection_ of \(\gamma(t)\). In many applications it is important to track the point \(\Pi(\gamma(t))\) on the surface of the body nearest to the moving object1. In [10], a method to compute and track the projection was considered. Instead, here we consider the question whether this projection can take geodesics to (reparametrized) geodesics. Footnote 1: A case in point is the event on September 26, 2022, when an unmanned spacecraft hit the asteroid Didymos on purpose [9], thereby changing the orbit of the asteroid. Clearly, the change in orbit of the asteroid is related to the locus, angle, and speed of the missile at the time of impact. The asteroid itself is in good approximation a convex set, but far from round [9]. Before describing the main result, we give some general background about this problem. 
A diffeomorphism \(\phi:S_{1}\to S_{2}\) between (sub) manifolds is called a _geodesic mapping_ if it carries geodesics to geodesics. We restrict our discussion to surfaces in \(\mathbb{R}^{3}\). It is well-known that if \(S_{1}\) has constant Gaussian curvature, then there is a geodesic mapping from \(S_{1}\) to the plane. Vice versa, Beltrami's theorem says that if \(S_{1}\) admits a (local) geodesic mapping to the plane near every point in \(S_{1}\), then \(S_{1}\) has constant Gaussian curvature ([4], Section 4.6, exercises 12 and 13). There is a fairly large body of literature on geodesic mappings; see [7, 8] and the references therein. Our own interest here is to find out whether _projections_ from one surface to another can be _geodesic mappings_. Our main result concerns projections to the convex set from a surface whose distance to the convex set is exactly \(r\) (a constant). We call such a surface a surface of constant distance (the word 'equidistant' is already in use for a slightly different concept [11]). Very little has been written about sets of constant distance (but see [3, 2]). What we aim to show here is essentially a rigidity result in \(\mathbb{R}^{3}\): a constant distance surface whose projection takes geodesics to geodesics must be a sphere or a cylinder. We proceed with the details. We imagine a \(C^{2}\) convex body in \(\mathbb{R}^{3}\) whose boundary we denote by \(K_{0}\). Let \(p\) be any point on the surface. By applying an isometry, we may assume that \(p\) is located at the origin of \(\mathbb{R}^{3}\) and that the tangent plane to \(K_{0}\) at \(p\) is given by \(z=0\). Thus the coordinate patch near the origin can be written as \[K_{0}(x_{1},x_{2})=\left(x_{1},x_{2},-\frac{1}{2}(a_{1}x_{1}^{2}+a_{2}x_{2}^{2})-h(x_{1},x_{2})\right)\,, \tag{1.1}\] where the \(a_{i}\) are the _principal curvatures_ and \(h\) is twice continuously differentiable with \(h(0,0)=0\), and the same holds for all its first and second derivatives. 
By convexity, the principal curvatures \(a_{i}\) are non-negative. Because of the smoothness and the convexity, we can smoothly coordinatize the space \(\Omega\) surrounding the convex body by using these coordinate patches as follows [10]: \[S(x_{1},x_{2},r)=K_{0}(x_{1},x_{2})+r\hat{n}(x_{1},x_{2})\,. \tag{1.2}\] where \(\hat{n}\) is the unit normal to \(K_{0}\). These 3-dimensional coordinate patches form a differentiable atlas of \(\Omega\). Denote by \(\Pi:\Omega\to K_{0}\) the orthogonal 'projection' from \(\Omega\) onto \(K_{0}\), defined as follows [10]: \(\gamma:=\Pi(z)\) is the unique point on \(K_{0}\) nearest to \(z\in\Omega\). Clearly, the inverse of \(\Pi\) at a point \(\gamma\) of \(K_{0}\) consists of a ray normal to \(K_{0}\) at \(\gamma\). \[\Pi^{-1}(\gamma)=\cup_{r>0}\{\gamma+r\hat{n}(\gamma)\}\,,\] where \(\hat{n}\) is the unit normal at \(\gamma\) pointing outwards. Figure 1.1: _Left, the projection (red) of straight line orthogonal to the axis of symmetry of a solid cylinder. Right, the projection of a line at an arbitrary angle with the axis of symmetry of the cylinder. The former is a geodesic, the latter clearly not._ The simple example of Figure 1.1 shows that the projection of a straight line in \(\mathbb{R}^{3}\) does not usually result in a geodesic in \(K_{0}\). The question arises when is it that geodesics _do_ project to geodesics? In this paper, any non-singular reparametrization (i.e. with non-zero, possibly variable, speed) of a unit speed geodesic will also be called a geodesic. **Definition 1.1**: _i) Given two closed \(C^{2}\) surfaces \(S_{1}\) and \(S_{2}\) in \(\mathbb{R}^{3}\). The projection \(\Pi:S_{1}\to S_{2}\) is defined as follows. 
For \(x\in S_{1}\),_ \[\Pi(x):=\left\{y\in S_{2}\,:\,y\mbox{ minimizes the Euclidean distance }d(x,y)\right\}.\] _ii) Two surfaces are called regular constant distance surfaces if the Euclidean distance from any point \(x\) in \(S_{1}\) to \(S_{2}\) equals \(r\) (fixed), and the nearest point on \(S_{2}\) is always unique._ It is a curious fact that in general \(\Pi:S_{1}\to S_{2}\) and \(\Pi^{\prime}:S_{2}\to S_{1}\) are _not_ inverses of one another. However, if the \(S_{i}\) are at least \(C^{1}\) and regular constant distance, then \(\Pi\) and \(\Pi^{\prime}\)_are_ inverses. This is the content of Proposition 2.1. In the remainder of this paper, we deal with this case (except where mentioned otherwise). Let \(K_{r}\) denote the surface that has distance \(r\) to \(K_{0}\), or \[K_{r}:=\left\{S(x_{1},x_{2},r)\,:\,r>0\mbox{ fixed}\right\}.\] We are interested in determining when the projection \(\Pi:K_{r}\to K_{0}\) between these surfaces has the property that it sends geodesics to (reparametrizations of) geodesics. We call this property _preservation of geodesics_ and \(\Pi\) a _geodesic mapping_. The proof of the following result takes up most of this paper. **Theorem 1.2**: _Let \(K_{0}\) be a \(C^{2}\) surface patch given by (1.1) with \(a_{1}\geq 0\) and \(a_{2}\geq 0\), and fix \(r>0\). Then the projection \(\Pi:K_{r}\to K_{0}\) does not preserve geodesics, unless (in that patch) (i) the Gaussian curvature is zero (i.e. \(a_{1}a_{2}=0\)) or (ii) the patch consists of umbilic points (i.e. \(a_{1}=a_{2}\))._ A moment's reflection will tell us that in \(\mathbb{R}^{3}\), projections between concentric spheres or between co-axial round cylinders _do_ preserve geodesics. The interesting question is, are there any others? Here is a (to the author) surprising corollary of Theorem 1.2. **Corollary 1.3**: _Let \(K_{0}\) and \(K_{r}\) be regular constant distance, complete, convex, connected, \(C^{2}\) surfaces in \(\mathbb{R}^{3}\) at a distance \(r>0\). 
The projection from \(K_{r}\) to \(K_{0}\) preserves geodesics if and only if both are either spheres or (infinite) round cylinders._ **Remark.** In this context, a (generalized) cylinder \(C\) is a set of points such that for every point \(p\in C\) there is a unique line \(\ell(p)\) in \(C\) and any two such lines are either the same or parallel. A 'perfect' or 'round' cylinder is a cylinder that is rotationally symmetric around its axis. In particular, its principal curvatures are constant. **Remark.** In view of Proposition 2.1, \(\Pi:K_{r}\to K_{0}\) and \(\Pi^{\prime}:K_{0}\to K_{r}\) are inverses. So \(\Pi\) preserves geodesics if and only if \(\Pi^{\prime}\) preserves geodesics. **Proof of Corollary 1.3.** It is clear that if \(K_{0}\) and \(K_{r}\) both are either spheres or (infinite) round cylinders, then the geodesics are preserved. Vice versa, if the projection preserves geodesics, then by Theorem 1.2, every \(C^{2}\) surface patch is either a piece of a sphere or a piece of a cylinder. The two cannot occur in the same \(C^{2}\) patch, because at any 'intermediate' point, (i) or (ii) in that theorem will be violated, and then geodesics will not be preserved. Thus all of \(K_{0}\) must satisfy either (i) or (ii). It is well-known that a \(C^{2}\) complete surface whose principal curvatures are identical (an _umbilic_ surface) must be a part of a sphere ([4], Section 3.2). Similarly ([4], Section 5.8), a complete surface with Gaussian curvature zero must be a generalized cylinder. Finally, Proposition 4.1 implies that if \(K_{0}\) and \(K_{r}\) are cylinders and the projection preserves geodesics, then they must be round cylinders. **Remark.** Interestingly, this corollary is clearly false in \(\mathbb{R}^{2}\). For instance, if \(K_{0}\) is an ellipse in \(\mathbb{R}^{2}\) and \(K_{r}\) a circle that contains it, the projection \(K_{r}\to K_{0}\) is surjective. On the other hand, in dimension 4 or higher, nothing appears to be known. 
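The cylinder half of Corollary 1.3, and the failure shown in Figure 1.1, can be checked numerically: on a round cylinder of radius \(R\) about the \(z\)-axis, a curve is a geodesic exactly when its height \(z\) is an affine function of the unwrapped angle \(\theta\), and the nearest-point projection sends \((x,y,z)\) to \((Rx/\sqrt{x^{2}+y^{2}},\,Ry/\sqrt{x^{2}+y^{2}},\,z)\). A sketch with invented radii and curves:

```python
import math

R = 1.0          # radius of the inner cylinder K0 (axis = z-axis)

def project_to_cylinder(p):
    """Nearest point on the cylinder x^2 + y^2 = R^2 for a point outside it."""
    x, y, z = p
    s = R / math.hypot(x, y)
    return (x * s, y * s, z)

def z_vs_theta(curve):
    """(unwrapped angle, height) pairs; a cylinder geodesic has z affine in theta."""
    return [(math.atan2(y, x), z) for x, y, z in curve]

def max_second_diff(tz):
    """Largest finite-difference |d^2 z / d theta^2| along the curve."""
    devs = []
    for (t0, z0), (t1, z1), (t2, z2) in zip(tz, tz[1:], tz[2:]):
        d1 = (z1 - z0) / (t1 - t0)
        d2 = (z2 - z1) / (t2 - t1)
        devs.append(abs(d2 - d1) / ((t2 - t0) / 2))
    return max(devs)

ts = [i / 100 for i in range(-50, 51)]
# A helix on an outer co-axial cylinder (a geodesic there) projects to a helix.
helix = [((R + 0.5) * math.cos(t), (R + 0.5) * math.sin(t), 0.3 * t) for t in ts]
# A straight line at an angle to the axis (Figure 1.1, right) does not.
line = [(2.0, t, 0.3 * t) for t in ts]

helix_dev = max_second_diff(z_vs_theta([project_to_cylinder(p) for p in helix]))
line_dev = max_second_diff(z_vs_theta([project_to_cylinder(p) for p in line]))
```

The helix's projection has numerically vanishing \(d^{2}z/d\theta^{2}\), while the slanted line projects to a curve with \(z(\theta)\) visibly non-affine, matching the two panels of Figure 1.1.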
We furthermore prove that Corollary 1.3 is optimal in the sense that if we drop \(C^{2}\) in favor of \(C^{1,1}\), that is: once continuously differentiable with a Lipschitz derivative, then the result does not hold. **Theorem 1.4**: _There exist regular constant distance, complete, convex, \(C^{1,1}\) surfaces \(K_{0}\) and \(K_{r}\) in \(\mathbb{R}^{3}\) with the property that (whenever the surfaces are \(C^{2}\)) either (i) \(a_{1}a_{2}=0\) or (ii) \(a_{1}=a_{2}\) holds, but the projection from \(K_{0}\) to \(K_{r}\) does not preserve geodesics._ **Proof.** The result follows directly from Proposition 5.1. **Remark.** In [1] (see also [10]), a related, but more complicated, counter-example was constructed which carries over to cylinders in \(\mathbb{R}^{3}\). It says that there is a convex \(C^{1,1}\) cylinder such that the projection \(\Pi\) onto this cylinder does not have a derivative. Finally, we are interested in the question whether, given the boundary \(S_{0}\) of a convex body, there is _any_ surface \(S\) outside it whose projection onto \(S_{0}\) preserves geodesics. For cylinders in \(\mathbb{R}^{3}\), the answer is affirmative, as we show in Section 6. In fact, in that case, the space outside \(S_{0}\) can be foliated by surfaces \(S_{k}\), \(k\geq 0\) so that each projection \(\Pi_{k}:S_{k}\to S_{0}\) preserves geodesics. However, as we will show, these surfaces \(S_{k}\) generally are not convex. **Remark.** For general \(C^{2}\) convex bodies, even in \(\mathbb{R}^{3}\), it is unknown at the time of this writing whether the space outside them can be foliated by surfaces \(S_{k}\) so that each projection \(\Pi_{k}:S_{k}\to S_{0}\) preserves geodesics. ## 2 Preliminaries We first prove that the projections between two regular constant distance surfaces (see Definition 1.1) are inverses of one another. Then we discuss the strategy to prove Theorem 1.2. 
**Proposition 2.1**: _Let \(S_{1}\) and \(S_{2}\) be \(C^{1}\) surfaces in \(\mathbb{R}^{3}\) such that the Euclidean distance from any point \(x\) in \(S_{1}\) to \(S_{2}\) equals \(r\) (fixed), and the nearest point on \(S_{2}\) is always unique. Then the projections \(\Pi:S_{1}\to S_{2}\) and \(\Pi^{\prime}:S_{2}\to S_{1}\) are inverses of one another._ **Proof.** Consider \(\Pi:S_{1}\to S_{2}\) and \(\Pi^{\prime}:S_{2}\to S_{1}\) and suppose \(\Pi(x)=y\) (see Figure 2.1). Suppose there is \(x^{\prime}\) in \(S_{1}\) not equal to \(x\) such that \(x^{\prime}\in\Pi^{\prime}(y)\). Denote the Euclidean distance by \(d(x,y)\). Now \[x^{\prime}\in\Pi^{\prime}(y) \Longrightarrow\quad\ d(y,x^{\prime})\leq d(y,x)=r\] \[d(x^{\prime},S_{2})=r \Longrightarrow\quad\ d(x^{\prime},y)\geq r\] So \(d(y,x^{\prime})=r\). Consider the plane \(P\) through \(x\), \(x^{\prime}\), and \(y\), parametrize \(S_{2}\) by arclength \(t\) with \(S_{2}(0)=y\), and let \(c(t)\) be the tangent line to \(S_{2}\) at \(y\), as drawn in Figure 2.1. Then, by differentiability of \(S_{2}\), \[\lim_{t\searrow 0^{+}}\frac{d(S_{2}(t),x^{\prime})-d(S_{2}(0),x^{\prime})}{t}= \lim_{t\searrow 0^{+}}\frac{d(c(t),x^{\prime})-d(c(0),x^{\prime})}{t}=-\cos \phi\,.\] The last equality is a special case2 of Theorem 4.3 in [5]. Thus for some positive \(t\), \(d(S_{2}(t),x^{\prime})<r\), contradicting the assumption that \(d(x^{\prime},S_{2})=r\). Footnote 2: In this simple case, it can also be derived easily from an explicit computation. To prove Theorem 1.2, we pick a family \(\Gamma\) of geodesics in the patch given by (1.1) as follows. A geodesic \(\gamma(t)\) in \(\Gamma\) is determined by the initial condition \(\gamma(0)=(0,x_{2}(0),x_{3}(0))\), where \(x_{2}(0)\) is not zero but small and \(\dot{x}_{1}(0)>0\) is of order unity, while \(x_{3}(0)\) is determined by the fact that \(\gamma\) is a curve in the surface \(K_{0}\) (see Figure 2.2). 
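Lemma 3.1 gives a parametrization-free criterion for the geodesics in \(\Gamma\): \(\ddot{x}_{2}/(\dot{x}_{1}^{2}x_{2})\to-a_{1}a_{2}\) at \(t=0\). As a sanity check in the umbilic case \(a_{1}=a_{2}=a\) (a sphere of radius \(1/a\) tangent to \(z=0\) at the origin), the great circle through \((0,x_{2},z_{0})\) with initial velocity in the \(x_{1}\)-direction can be written down explicitly, and the ratio equals \(-a^{2}\) exactly, even before taking the limit. A numeric sketch with illustrative values:

```python
import math

a = 0.5                  # a1 = a2 = a: an umbilic patch, i.e. a sphere of radius 1/a
R = 1.0 / a
x2 = 0.05                # small nonzero starting offset, as in the family Gamma
center = (0.0, 0.0, -R)  # sphere tangent to the plane z = 0 at the origin

def gamma(t):
    """Great circle through (0, x2, z0) with initial velocity in the x1-direction."""
    z0 = -R + math.sqrt(R * R - x2 * x2)
    p_minus_c = (0.0, x2, z0 + R)        # radius vector at t = 0, length R
    v = (1.0, 0.0, 0.0)                  # unit tangent at t = 0
    return tuple(center[i] + math.cos(t) * p_minus_c[i] + R * math.sin(t) * v[i]
                 for i in range(3))

h = 1e-4
x1dot = (gamma(h)[0] - gamma(-h)[0]) / (2 * h)                    # central difference
x2ddot = (gamma(h)[1] - 2 * gamma(0)[1] + gamma(-h)[1]) / (h * h)
ratio = x2ddot / (x1dot ** 2 * x2)       # should be close to -a1*a2 = -a*a
```

Here \(x_{1}(t)=R\sin t\) and \(x_{2}(t)=x_{2}\cos t\), so \(\dot{x}_{1}(0)=R\), \(\ddot{x}_{2}(0)=-x_{2}\), and the ratio is \(-1/R^{2}=-a^{2}\).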
Since we are interested not in geodesics per se, but in geodesics modulo (non-singular) reparametrization, we establish a simple characterization of geodesics in \(\Gamma\) that does not depend on the parametrization (Lemma 3.1). We then consider the projection \(\Pi:K_{0}\to K_{r}\), with \(r>0\), which maps \(\gamma\) to a curve \(\gamma_{r}\) in \(K_{r}\). And finally, we prove that \(\gamma_{r}\) is not a (reparametrization of a) geodesic by showing that it fails the criterion just mentioned. To do this, we will need to determine the terms of the leading order of magnitude in a fairly involved expression. Figure 2.1: _This figure illustrates that \(\Pi:S_{1}\to S_{2}\) and \(\Pi^{\prime}:S_{2}\to S_{1}\) are not generally inverses of one another. Traveling from \(y\) along \(S_{2}\) in the direction of \(c(t)\) will (initially) decrease the distance to \(x^{\prime}\)._ We will employ the standard 'big-oh' and 'small-oh' notation as follows. We consider curves such as the ones in Figure 2.2, and evaluate certain quantities as these curves cross the \(x_{1}=0\) axis. Thus3 using \(x\) as shorthand for \((x_{1},x_{2})\): Footnote 3: Care should be exercised with the “=” sign. It is not reflexive in this context. \[f(x_{1},x_{2}) = Ok\quad\mbox{means}\quad\limsup_{|x|\to 0}\frac{|f(x)|}{|x|^{k}}< \infty\,,\] \[\mbox{and}\qquad f(x_{1},x_{2}) = ok\quad\mbox{means}\quad\lim_{|x|\to 0}\frac{|f(x)|}{|x|^{k}}=0\,.\] It will be convenient to have a more compact notation. Hence the following definition. **Definition 2.2**: _We define \(z_{i}:=a_{i}x_{i}+\partial_{i}h(x)\), where \(x(t)=(x_{1}(t),x_{2}(t))\) is the projection to the \(x_{1}\)-\(x_{2}\) plane of the geodesic \(\gamma(t)\) in Figure 2.2._ We compute the leading orders at \(t=0\) of \(z_{i}\), \(\dot{z}_{i}\), and \(\ddot{z}_{i}\). \[\dot{z}_{i}=a_{i}\dot{x}_{i}+d_{t}\partial_{i}h\quad\mbox{ and }\quad\ddot{z}_{i}=a_{i}\ddot{x}_{i}+d_{t}^{2} \partial_{i}h\,.\] We know that \(h=o2\) and so \(\partial_{i}h=o1\). 
Furthermore, at \(t=0\), \(x_{1}=0\), and \(x_{2}=O1\). Thus \[z_{1}=\partial_{1}h\quad\mbox{ and }\quad z_{2}=a_{2}x_{2}+o1\,. \tag{2.1}\] Each of these is \(O1\) or less. Now, \[d_{t}\partial_{i}h=\partial_{1}\partial_{i}h\;\dot{x}_{1}+\partial_{2}\partial _{i}h\;\dot{x}_{2}\,.\] Along the geodesic in the patch, \(\dot{x}_{1}\) is order unity (or \(O1\)), and even though \(\dot{x}_{2}\) may be small, we see that \(d_{t}\partial_{2}h=o0\). In fact, we are only interested in evaluating these quantities at \(t=0\), at which point we have \(\dot{x}_{2}=0\). Putting this together results at \(t=0\) in \[\dot{z}_{1}=a_{1}\dot{x}_{1}+o0\quad\mbox{ and }\quad\dot{z}_{2}=\partial_{1} \partial_{2}h\;\dot{x}_{1}\,, \tag{2.2}\] and so \(\dot{z}_{2}=o0\). The next derivative, \(\ddot{z}_{i}\), is a little trickier. The reason is that \(d_{t}^{2}\partial_{2}h\) cannot be bounded by some order. It may be large, or, depending on \(h\), it may be small. To ensure we have the leading terms of \(\ddot{z}_{i}\), we have to include both terms and the expression does not simplify. Setting \(t=0\), we will see that \(\ddot{x}_{1}=O1\), and we know that \(\dot{x}_{2}=0\). So at \(t=0\), \[\ddot{z}_{1}=a_{1}\ddot{x}_{1}+d_{t}^{2}\partial_{1}h\quad\mbox{ and }\quad\ddot{z}_{2}=a_{2}\ddot{x}_{2}+d_{t}^{2}\partial_{2}h\,. \tag{2.3}\] ## 3 Proof of Theorem 1.2 To distinguish the standard inner product in \(\mathbb{R}^{3}\) from a 2-tuple, we indicate the former by a dot: \(x\cdot y\). Also, to avoid cluttering the formulas with the repetitive occurrence of the argument "\((0)\)", we will not write it, except when its omission might lead to misunderstandings. **Lemma 3.1**: _Suppose the family of curves \(\gamma(t)=(x_{1}(t),x_{2}(t),-\frac{1}{2}[a_{1}x_{1}(t)^{2}+a_{2}x_{2}(t)^{2}]-h)\) in \(K_{0}\) are (a reparametrization of) geodesics with \(x_{1}=0\), \(\dot{x}_{1}>0\), \(x_{2}\neq 0\), and \(\dot{x}_{2}=0\). 
Then at \(t=0\)_ \[\lim_{x_{2}\to 0}\,\frac{\ddot{x}_{2}}{\dot{x}_{1}^{2}x_{2}}=-a_{1}a_{2}\,.\] _Furthermore, this characterization is independent of the (smooth) parametrization of \(\gamma\)._ **Proof.** Set \(e_{i}:=\partial_{i}K_{0}\), where \(K_{0}\) is given by (1.1). The metric tensor \(g_{ij}=e_{i}\cdot e_{j}\) and its inverse are given by (see Definition 2.2) \[g=\begin{pmatrix}1+z_{1}^{2}&z_{1}z_{2}\\ z_{1}z_{2}&1+z_{2}^{2}\end{pmatrix}\qquad\mbox{and}\qquad g^{-1}=\frac{1}{1+z_{1}^{2}+z_{2}^{2}}\begin{pmatrix}1+z_{2}^{2}&-z_{1}z_{2}\\ -z_{1}z_{2}&1+z_{1}^{2}\end{pmatrix}\,.\] **Lemma 3.2**: _Given the surface \(K_{0}\) of (1.1), the constant distance surface \(K_{r}\) can be parametrized as follows_ \[K_{r}(u_{1},u_{2})=\left(u_{1},u_{2},r-\frac{1}{2}(a_{1r}u_{1}^{2}+a_{2r}u_{2}^{2})+o2\right)\,,\] _where_ \[a_{ir}=\frac{a_{i}}{1+ra_{i}}\,.\] **Proof.** Fix \(r>0\). The inverse projection \(\Pi_{r}^{-1}:K_{0}\to K_{r}\) is well-defined, and given by \[K_{r}(x_{1},x_{2}):=\Pi_{r}^{-1}(K_{0}(x_{1},x_{2}))=K_{0}(x_{1},x_{2})+r\hat{n}(x_{1},x_{2})\,. 
\tag{3.1}\] We'll call \(K_{0}\), somewhat informally, the 'downstairs' surface and \(K_{r}\) the 'upstairs' surface. We compute, using Definition 2.2, \[\hat{n}(x_{1},x_{2})=\frac{(z_{1},z_{2},1)}{\sqrt{1+z_{1}^{2}+z_{2}^{2}}}\,. \tag{3.2}\] So \[K_{r}(x_{1},x_{2}) = \left(x_{1}+\frac{rz_{1}}{V},x_{2}+\frac{rz_{2}}{V},-\frac{1}{2}(a_{1}x_{1}^{2}+a_{2}x_{2}^{2})-h(x_{1},x_{2})+\frac{r}{V}\right)\,,\] \[\mbox{where }\ V=\sqrt{1+z_{1}^{2}+z_{2}^{2}}\,.\] There are no mixed quadratic terms of the form \(x_{1}x_{2}\) in the expansion of \(K_{r}(x_{1},x_{2})\). So if we rewrite this as \(K_{r}(u_{1},u_{2})=(u_{1},u_{2},r+u_{3}(u_{1},u_{2}))\), then the \(u_{1}\)- and \(u_{2}\)-axes of \(K_{r}\) are the axes of principal curvature at \((u_{1},u_{2})=(0,0)\). All we need to do to complete the proof is a computation of the curvature in the \(x_{1}\)-\(r\) plane to get \(a_{1r}\). This is done in Figure 3.1 by employing osculating circles. The computation is the same in the \(x_{2}\)-\(r\) plane.

Figure 3.1: _The radius of curvature in the \(x\)-direction of \(K_{0}\) at the origin equals \(1/a\). The orthogonal projection to \(K_{r}\) then gives a radius of curvature of \(r+1/a\). The principal curvature is the reciprocal of this._

Part of the difficulty here is that it is pretty clear that if \(K_{0}\) does not have constant curvature along a geodesic \(\gamma(t)\), then the curve traced in \(K_{r}\) by projecting \(\gamma\) will certainly not be a constant speed curve, let alone a constant speed geodesic. It is thus a priori clear that the projected curve will not satisfy the geodesic equations. What we wish to establish, however, is whether it can be _reparametrized_ as a geodesic. We will use Lemma 3.1 to determine whether the images \(\gamma_{r}\) (\(r>0\)) under the projection are geodesics. 
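Lemma 3.2's curvature relation \(a_{ir}=a_{i}/(1+ra_{i})\) is easy to spot-check numerically: offset the cross-section parabola \(y=-\frac{1}{2}ax^{2}\) a distance \(r\) along its unit normal and estimate the curvature of the offset curve at the apex by finite differences. A minimal pure-Python sketch (the step size and tolerance are ad hoc choices, not from the text):

```python
import math

def offset_point(x, a, r):
    # move distance r along the unit normal of the parabola y = -a x^2 / 2,
    # the cross-section of K_0 in the x_1-r plane
    s = math.sqrt(1.0 + (a*x)**2)
    return (x + r*a*x/s, -0.5*a*x*x + r/s)

def curvature_at_apex(a, r, h=1e-3):
    # parametric curvature |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2),
    # estimated with central finite differences at x = 0
    (x0, y0), (xp, yp), (xm, ym) = (offset_point(t, a, r) for t in (0.0, h, -h))
    dx, dy = (xp - xm)/(2*h), (yp - ym)/(2*h)
    ddx, ddy = (xp - 2*x0 + xm)/h**2, (yp - 2*y0 + ym)/h**2
    return abs(dx*ddy - dy*ddx) / (dx*dx + dy*dy)**1.5

# Lemma 3.2: the offset surface has principal curvature a/(1 + r*a)
for a, r in [(0.8, 0.5), (1.2, 0.3), (2.0, 0.0)]:
    assert abs(curvature_at_apex(a, r) - a/(1 + r*a)) < 1e-4
```

The same check with \(r=0\) simply recovers the curvature \(a\) of \(K_{0}\) itself.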
**Lemma 3.3**: _The geodesic \(\gamma\) depicted in Figure 2.2 with \(\gamma(0)=(0,x_{2})\) and \(\dot{\gamma}(0)=(\dot{x}_{1},0)\) projects to a curve_ \[\gamma_{r}(t)=(u_{1}(t),u_{2}(t),u_{3}(t))\] _in \(K_{r}\) (\(r>0\)), where at \(t=0\), we have_ \[\dot{u}_{1} = (1+ra_{1})\dot{x}_{1}+o0\] \[u_{2} = (1+ra_{2})x_{2}+o1\] \[\ddot{u}_{2} = -(1+ra_{1}+ra_{2})a_{1}a_{2}\dot{x}_{1}^{2}x_{2}+rd_{t}^{2}\partial_{2}h+o1\,.\] **Proof.** We trace a possibly reparametrized geodesic \(\gamma(t)\) in \(K_{0}\) satisfying Lemma 3.1, and determine the curvature of its projection \(\gamma_{r}\) 'upstairs' in \(K_{r}\). Note that \(\gamma_{r}(t)\) is given by \(\gamma(t)+r\hat{n}(x_{1}(t),x_{2}(t))\). The unit normal \(\hat{n}\) is given in (3.2). We use the rules for evaluating the orders given in Section 2. The \(x_{1}\) and \(x_{2}\) coordinates of \(\gamma_{r}\) will be called \(u_{1}\) and \(u_{2}\) and, noting that \(z_{i}=O1\) (Section 2), we get \[u_{1}=x_{1}+rz_{1}(1-{{\frac{1}{2}}}z_{1}^{2}-{{\frac{1}{2}}}z_{2}^{2}+O4)\] \[u_{2}=x_{2}+rz_{2}(1-{{\frac{1}{2}}}z_{1}^{2}-{{\frac{1}{2}}}z_{2}^{2}+O4)\,.\] Referring to (2.1), this gives for \(u_{2}\) the following: \[u_{2}=(1+ra_{2})x_{2}+o1\,. \tag{3.3}\] Now, differentiate the \(u_{i}\) with respect to time: \[\dot{u}_{1}=\dot{x}_{1}+r\dot{z}_{1}-r\dot{z}_{1}({{\frac{1}{2}}}z_{1}^{2}+{{\frac{1}{2}}}z_{2}^{2}+O4)-rz_{1}(z_{1}\dot{z}_{1}+z_{2}\dot{z}_{2}+O3)\] \[\dot{u}_{2}=\dot{x}_{2}+r\dot{z}_{2}-r\dot{z}_{2}({{\frac{1}{2}}}z_{1}^{2}+{{\frac{1}{2}}}z_{2}^{2}+O4)-rz_{2}(z_{1}\dot{z}_{1}+z_{2}\dot{z}_{2}+O3)\,.\] Use (2.2) to see that the leading term appearing in \(\dot{u}_{1}\) is \(\dot{x}_{1}\) (which is \(O0\)), and thus \[\dot{u}_{1}=(1+ra_{1})\dot{x}_{1}+o0\,. \tag{3.4}\] We need to differentiate \(\dot{u}_{2}\) one more time with respect to time. 
\[\ddot{u}_{2} = \underbrace{\ddot{x}_{2}+r\ddot{z}_{2}}_{A}-\underbrace{r\ddot{z}_{2}\,O2}_{B}-\underbrace{2r\dot{z}_{2}(z_{1}\dot{z}_{1}+z_{2}\dot{z}_{2}+O3)}_{C}-\underbrace{rz_{2}(\dot{z}_{1}^{2}+z_{1}\ddot{z}_{1}+\dot{z}_{2}^{2}+z_{2}\ddot{z}_{2}+O2)}_{D}\,.\] To analyze this, we denote the four terms A through D, and look at each individually. In A, \(\ddot{z}_{2}\) can be evaluated via (2.3) and \(\ddot{x}_{2}\) can be eliminated via Lemma 3.1. This gives \(-(1+ra_{2})a_{1}a_{2}\dot{x}_{1}^{2}x_{2}+rd_{t}^{2}\partial_{2}h\) for the term marked A. Clearly, B is negligible compared to A. In C, \(\dot{z}_{2}=o0\) as noted before, and the term in parentheses is \(O1\). So all together this term is \(o1\) and therefore negligible compared to A. Finally, in D, we use (2.1) and (2.2) to establish that \(-r\dot{z}_{1}^{2}z_{2}\) is \(O1\) and that all other terms are smaller; this expression simplifies to \(-ra_{1}^{2}a_{2}\dot{x}_{1}^{2}x_{2}+o1\). Collecting terms and adding the relations (3.3) and (3.4) yields the lemma. **Proof of Theorem 1.2.** On the one hand, if the projected curve \(\gamma_{r}\) is also a geodesic, then it itself must satisfy Lemma 3.1 with the curvatures given by Lemma 3.2. So \[\ddot{u}_{2}=-\frac{a_{1}a_{2}}{(1+ra_{1})(1+ra_{2})}\dot{u}_{1}^{2}u_{2}\,.\] Eliminating \(\dot{u}_{1}\) and \(u_{2}\) in favor of \(\dot{x}_{1}\) and \(x_{2}\) via Lemma 3.3 gives \[\ddot{u}_{2}=-(1+ra_{1})a_{1}a_{2}\dot{x}_{1}^{2}x_{2}+o1\,.\] On the other hand, another equation for \(\ddot{u}_{2}\) is given by Lemma 3.3. If we equate the two expressions, we obtain \[-(1+ra_{1})a_{1}a_{2}\dot{x}_{1}^{2}x_{2}+o1=-(1+ra_{1}+ra_{2})a_{1}a_{2}\dot{x}_{1}^{2}x_{2}+rd_{t}^{2}\partial_{2}h+o1\,.\] Upon simplification, this gives \[-ra_{1}a_{2}^{2}\dot{x}_{1}^{2}x_{2}+rd_{t}^{2}\partial_{2}h=o1\,. \tag{3.5}\] This equation has two possible solutions. 
The first is if the left-hand side is \(o1\), and so \(a_{1}a_{2}=0\) and \(d_{t}^{2}\partial_{2}h\) is \(o1\). From the rules about manipulating the order symbols in Section 2, it follows that then \(h=o4\). The other possibility is if \(a_{1}a_{2}>0\), and so \(d_{t}^{2}\partial_{2}h=a_{1}a_{2}^{2}\dot{x}_{1}^{2}x_{2}+o1\). This happens if and only if \(h=\frac{1}{4}a_{1}a_{2}^{2}x_{1}^{2}x_{2}^{2}+g\) and \(d_{t}^{2}\partial_{2}g=o1\). So \(h=\frac{1}{4}a_{1}a_{2}^{2}x_{1}^{2}x_{2}^{2}+o4\). Now consider the geodesic \(\eta\) which is just \(\gamma\) rotated by \(\pi/2\). Then, by the same reasoning, if the projection \(\eta_{r}\) in \(K_{r}\) is a geodesic, we must have that \(h=\frac{1}{4}a_{1}^{2}a_{2}x_{1}^{2}x_{2}^{2}+o4\). Since both\({}^{4}\) must hold, we get \(\frac{1}{4}a_{1}a_{2}^{2}x_{1}^{2}x_{2}^{2}=\frac{1}{4}a_{1}^{2}a_{2}x_{1}^{2}x_{2}^{2}\), or \(a_{1}=a_{2}\). Footnote 4: Note that the powers of \(a_{i}\) are distinct. **Remark.** The two types of solutions of (3.5) in this proof do indeed occur. For if \(K_{0}\) is a plane or a cylinder with radius \(1/a\), we get \[K(x_{1},x_{2})=\sqrt{a^{-2}-x_{1}^{2}}-a^{-1}=-\tfrac{1}{2}ax_{1}^{2}-\tfrac{1}{8}a^{3}x_{1}^{4}+O6\,.\] In this case, the Gaussian curvature is zero and \(h=o4\). On the other hand, for a sphere of radius \(1/a\), we have \[K(x_{1},x_{2})=\sqrt{a^{-2}-x_{1}^{2}-x_{2}^{2}}-a^{-1}=-\frac{1}{2}(ax_{1}^{2}+ax_{2}^{2})-\frac{1}{8}(a^{3}x_{1}^{4}+2a^{3}x_{1}^{2}x_{2}^{2}+a^{3}x_{2}^{4})+O6\,.\] Here, the principal curvatures are equal and \(d_{t}^{2}\partial_{2}h=a^{3}\dot{x}_{1}^{2}x_{2}+o1\).

## 4 Cylinders Must Be Round

Now let \(K_{0}\) be a convex, but not necessarily round, cylinder, invariant under translations along the \(x_{3}\)-axis. Consider "polar" coordinates \((\rho,x_{3},r)\) in \(\mathbb{R}^{3}\), where \(\rho\) is the arclength along the simple, closed curve in the \(x_{1}\)-\(x_{2}\) plane that defines \(K_{0}\), as illustrated in Figure 4.1. 
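As a quick numerical sanity check of the expansions in the remark of Section 3: the remainder after the quoted quartic terms should shrink like \(x^{6}\). A small pure-Python sketch for the cylinder case (the sample values and the bound \(a^{5}x^{6}\) are ad hoc, chosen with the next Taylor term \(-a^{5}x^{6}/16\) in mind):

```python
import math

def exact(a, x):
    # height of the circle of radius 1/a, measured from its top point
    return math.sqrt(a**-2 - x*x) - 1.0/a

def series(a, x):
    # the claimed expansion: -a x^2/2 - a^3 x^4/8 + O6
    return -0.5*a*x*x - 0.125*a**3*x**4

a = 2.0
for x in (0.1, 0.05, 0.025):
    # the next term of the Taylor series is -a^5 x^6 / 16, so the
    # remainder sits comfortably below a^5 x^6 at these sample points
    assert abs(exact(a, x) - series(a, x)) < a**5 * x**6
```

The spherical expansion can be checked in exactly the same way, one variable at a time.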
**Proposition 4.1**: _With the above assumptions, the projection \(\Pi:K_{0}\to K_{r}\) preserves geodesics if and only if the non-zero principal curvature of \(K_{0}\) is constant._

**Proof.** Since the Gaussian curvature is zero, the map from the cylinder to the plane, given by \(K_{0}(\rho,x_{3})\to(\rho,x_{3})\), is a bijective isometry and so maps geodesics to geodesics. A geodesic \(\gamma\) in \(K_{0}\) is (a) a line parallel to the \(x_{3}\)-axis, or (b) a horizontal cross-section curve (in a plane \(x_{3}=\mathrm{const}\)), or (c) a curve \(\gamma(x_{3})=(\rho(x_{3}),x_{3})\). Since \(\gamma\) is a geodesic, \(\rho(x_{3})\) is affine and has a constant derivative \(\frac{d\rho}{dx_{3}}\). Assume \(\gamma\) is a geodesic of type (c). Now consider the projection \(\gamma_{r}\) of \(\gamma\) onto \(K_{r}\). As with \(K_{0}\), we parametrize \(K_{r}\) by the arclength \(\rho_{r}\) of the defining curve and \(x_{3}\). It is clear that \(\gamma_{r}\) is a curve \(x_{3}\to\rho_{r}(x_{3})\). Again, if \(\gamma_{r}\) is a geodesic, then \(\frac{d\rho_{r}}{dx_{3}}\) is constant. Denote the non-zero principal curvature of \(K_{0}\) by \(a(\rho)\). A reasoning similar to that of Lemma 3.2 gives that the arclengths \(\rho_{r}\) and \(\rho\) travelled along each geodesic relate as \[d\rho_{r}=\frac{1/a(\rho)+r}{1/a(\rho)}\ d\rho=(1+a(\rho)r)\ d\rho\,.\] Since the \(x_{3}\) coordinates of \(\gamma(t)\) and its projection \(\gamma_{r}(t)\) are the same, we get \[\frac{d\rho_{r}}{dx_{3}}=(1+a(\rho)r)\ \frac{d\rho}{dx_{3}}\,. \tag{4.1}\] Thus \(\frac{d\rho_{r}}{dx_{3}}\) is constant if and only if \(a(\rho)\) is constant.

## 5 A \(C^{1,1}\) Counter-example

We consider the round cylinder 'topped off' by a hemisphere, both of radius \(r\), which gives a \(C^{1,1}\) surface (see Figure 5.1). Denote this surface by \(S_{r}\). It is easy to convince oneself that \(S_{1}\) and \(S_{r}\) (\(r>1\)) are regular, constant distance surfaces. 
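The arclength relation (4.1) rests on the fact that offsetting a convex plane curve a constant distance \(r\) along its outward normal rescales its speed by \(1+a(\rho)r\). This can be spot-checked numerically for a non-round cross-section (an ellipse); the semi-axes, offset distance, and finite-difference step below are ad hoc:

```python
import math

AL, BE = 1.0, 3.0   # ellipse semi-axes for the cross-section c(t)

def c(t):
    return (AL*math.cos(t), BE*math.sin(t))

def offset(t, r):
    # constant-distance curve: c(t) + r * (outward unit normal)
    w = math.hypot(AL*math.sin(t), BE*math.cos(t))
    return (AL*math.cos(t) + r*BE*math.cos(t)/w,
            BE*math.sin(t) + r*AL*math.sin(t)/w)

def speed(f, t, h=1e-6):
    # |f'(t)| by central finite differences
    (x1, y1), (x2, y2) = f(t - h), f(t + h)
    return math.hypot(x2 - x1, y2 - y1)/(2*h)

r = 0.25
for i in range(8):
    t = 2*math.pi*i/8 + 0.1
    w = math.hypot(AL*math.sin(t), BE*math.cos(t))
    a_rho = AL*BE/w**3                       # curvature of the ellipse at t
    ratio = speed(lambda s: offset(s, r), t) / speed(c, t)
    assert abs(ratio - (1 + a_rho*r)) < 1e-4   # d rho_r = (1 + a r) d rho
```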
Clearly, at every point (except where the surface is not \(C^{2}\)) either (i) \(a_{1}a_{2}=0\) or (ii) \(a_{1}=a_{2}\).

**Proposition 5.1**: _Let \(S_{1}\), \(S_{r}\), and \(\Pi:S_{1}\to S_{r}\) be given as above. Let \(\gamma_{1}\) be the shortest geodesic connecting \(P_{1}=(0,1,\pi/2)\) and \(Q_{1}=(0,-1/\sqrt{2},-1/\sqrt{2})\). The projection of \(\gamma_{1}\) by \(\Pi\) to \(\gamma_{r}\) (connecting \(P_{r}\) to \(Q_{r}\) in \(S_{r}\) (\(r>1\))) is not a local geodesic near the point \(M_{r}\) where \(\gamma_{r}\) intersects the boundary of the cylinder (see Figure 5.1)._

Figure 4.1: _“Polar” coordinates in \(\mathbb{R}^{3}\)._

Figure 5.1: _A straight cylinder of radius \(r\) topped off by a hemisphere of radius \(r\). This surface is \(C^{2}\), except on the circle where the cylinder and the hemisphere meet. Here \(\partial^{2}/\partial z^{2}\) has a discontinuity. Since the first derivative changes gradually, this surface is \(C^{1,1}\)._

**Proof.** The geodesic \(\gamma\) connects \(P_{1}\) to \(Q_{1}\), but we do not know where it crosses over from the cylinder to the sphere. So let us call that point \(M_{1}(\theta)\). We have \[P_{1}=(0,1,\pi/2)\;,\quad M_{1}(\theta)=(\sin\theta,\cos\theta,0)\;,\quad Q_{1}=\left(0,\tfrac{-1}{\sqrt{2}},\tfrac{-1}{\sqrt{2}}\right)\,.\] It is easy to see that then the projection \(\gamma_{r}\) of \(\gamma\) connects \(P_{r}\) to \(Q_{r}\) via \(M_{r}(\theta)\) (the same \(\theta\)), where \[P_{r}=(0,r,\pi/2)\;,\quad M_{r}(\theta)=(r\sin\theta,r\cos\theta,0)\;,\quad Q_{r}=\left(0,\tfrac{-r}{\sqrt{2}},\tfrac{-r}{\sqrt{2}}\right)\,. \tag{5.1}\] The projection \(\gamma_{r}\) consists of two pieces that live on \(C^{2}\) surfaces of constant curvature (zero on the cylinder, positive on the sphere), and so each of these two pieces is a geodesic in \(S_{r}\). The first piece connects \(P_{r}\) to \(M_{r}\); see the left of Figure 5.2. In the flattened out cylinder, it is the hypotenuse of a right triangle with legs \(\pi/2\) and \(r\theta\), and thus has length \(\sqrt{\pi^{2}/4+r^{2}\theta^{2}}\). The second piece lives in the sphere. Its length is \(r\) times the angle \(\alpha\) between \(M_{r}\) and \(Q_{r}\). The cosine of \(\alpha\) is given by the dot product of the unit vectors parallel to \(M_{r}\) and \(Q_{r}\), which gives \(-\cos\theta/\sqrt{2}\). Thus the length of the second piece equals \(r\arccos\left(\frac{-\cos\theta}{\sqrt{2}}\right)\). Therefore, the length of the projected curve \(\gamma_{r}\) is given by \[\ell_{\theta}(\gamma_{r})=\sqrt{\frac{\pi^{2}}{4}+r^{2}\theta^{2}}+r\arccos\left(\frac{-\cos\theta}{\sqrt{2}}\right)\,.\]

Figure 5.2: _The two geodesic pieces of \(\gamma_{r}\) in \(S_{r}\). To the left, the piece in the flattened out cylinder. To the right, the piece that lies in the hemisphere._

We need to minimize this over \(\theta\in[0,\pi]\). It is an elementary calculus exercise\({}^{5}\) to see that this is minimized at \(\theta\) satisfying \(\frac{\theta}{\sin\theta}=\frac{\pi/2}{r}\). We know that \(\gamma\) is minimizing in \(S_{1}\). Therefore if we substitute \(r=1\), we get \(\theta=\pi/2\). That same calculation for \(r>1\) then implies that \(\gamma_{r}\) is not globally minimizing in \(S_{r}\). Footnote 5: Use that the derivative of \(\arccos q\) equals \(-1/\sqrt{1-q^{2}}\). Figure 5.3 is a slightly impressionistic image of the tangent space \(TS_{r}\) at \(M_{r}\). The slope of \(\gamma_{r}\) restricted to the lower half plane, which projects to the hemisphere, equals \(1\). However, restricted to the upper half plane, which can be identified with the rolled out cylinder, the slope equals \(1/r\). Thus \(\gamma_{r}\) is not locally minimizing at \(M_{r}\). 
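The minimization in the proof above is easy to check numerically: a brute-force grid search over \(\theta\) reproduces \(\theta=\pi/2\) at \(r=1\) and confirms that for \(r>1\) the minimizer satisfies \(\theta/\sin\theta=(\pi/2)/r\), far from \(\pi/2\). A minimal sketch (grid resolution chosen ad hoc):

```python
import math

def length(theta, r):
    # total length of gamma_r as a function of the crossover angle theta
    cyl = math.sqrt(math.pi**2/4 + (r*theta)**2)       # piece on the unrolled cylinder
    sph = r*math.acos(-math.cos(theta)/math.sqrt(2))   # piece on the hemisphere
    return cyl + sph

def argmin_theta(r, n=20000):
    grid = (math.pi*k/n for k in range(1, n))
    return min(grid, key=lambda t: length(t, r))

# r = 1: the minimizer is theta = pi/2, as stated in the proof
assert abs(argmin_theta(1.0) - math.pi/2) < 1e-3

# r = 1.5: the minimizer obeys theta/sin(theta) = (pi/2)/r and is far from
# pi/2, so the projected curve (crossing over at theta = pi/2) is not minimizing
t = argmin_theta(1.5)
assert abs(t/math.sin(t) - (math.pi/2)/1.5) < 1e-3
assert t < 1.0
```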
Figure 5.3: _The tangent space \(TS_{r}\) at \(M_{r}\). The red curve corresponds to the lift of \(\gamma_{r}\). The branch to the left of the base point \(M_{r}\) travels to \(Q_{r}\) and the branch to the right travels to \(P_{r}\) (see (5.1))._

## 6 There are Other Projections that Preserve Geodesics

In this Section, we find a beautiful example of a family of projections \(\Pi_{k}:S_{k}\to S_{0}\) such that the surfaces \(\left\{S_{k}\right\}_{k\geq 0}\) foliate the space surrounding the boundary \(S_{0}\) of a convex body in \(\mathbb{R}^{3}\). It is not known whether this is possible for all such surfaces \(S_{0}\). Our construction is based on Section 4 and works for (convex) cylinders. Consider a general, not necessarily round, convex cylinder \(S_{0}\). It consists of a parametrized closed curve \(c(t)\) and lines through that curve, orthogonal to it, as sketched in Figure 4.1. We can define a surface \(S\) outside \(S_{0}\) by first defining a new curve in \(\mathbb{R}^{2}\): \[C(t)=c(t)+r(t)\hat{n}(t)\,.\] Here \(\hat{n}(t)\) is the unit normal to \(c(t)\) and \(r(t)\) is a non-negative distance. Let us denote the curvature of \(c(t)\) by \(a(t)\). According to (4.1), the projection \(\Pi:S\to S_{0}\) preserves geodesics if \[r(t)=\frac{k}{a(t)}\,,\] where \(k\) is a positive constant. The cylinder \(S_{k}\), \(k\geq 0\), is given by the lines through \(C_{k}\) orthogonal to the plane of \(C_{k}\). Thus the projection \(\Pi_{k}:S_{k}\to S_{0}\) preserves geodesics. Notice that we have to be careful here, because now the back and forth projections are not inverses of one another anymore (see Proposition 2.1). We take as an example the ellipse given by \[c(t):=(\alpha\cos(t),\beta\sin(t))\ \ \ \ \ \mbox{and}\ \ \ \ \ C_{k}(t)=c(t)+\frac{k}{a(t)}\;\hat{n}(t)\,,\] where \(k\) is a non-negative constant. 
Standard calculations give \(C_{k}(t)\) explicitly as \[C_{k}(t)=\left(\left(\alpha+\frac{k}{\alpha}(\alpha^{2}\sin(t)^{2}+\beta^{2}\cos(t)^{2})\right)\cos(t),\left(\beta+\frac{k}{\beta}(\alpha^{2}\sin(t)^{2}+\beta^{2}\cos(t)^{2})\right)\sin(t)\right)\,.\] We used MAPLE in Figure 6.1 to draw the ellipse \(c(t)=(\cos(t),3\sin(t))\) in red, \(C_{k}(t)\) for \(k=0.5\) in green, and for \(k=1.5\) in blue. Note that these remarkable curves lose convexity for large enough \(k\). We leave it to the reader to establish that for large \(k\), the projection \(\Pi_{k}^{\prime}:S_{0}\to S_{k}\) is not single-valued and therefore does not preserve geodesics.
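The closed form for \(C_{k}(t)\) can be spot-checked against the defining expression \(c(t)+\frac{k}{a(t)}\hat{n}(t)\), using the standard curvature \(a(t)=\alpha\beta/(\alpha^{2}\sin^{2}t+\beta^{2}\cos^{2}t)^{3/2}\) and outward unit normal of the ellipse. A minimal pure-Python sketch (the sample points are ad hoc; the semi-axes match the figure):

```python
import math

ALPHA, BETA = 1.0, 3.0   # the ellipse drawn in Figure 6.1

def C_explicit(t, k):
    # the closed form quoted above
    w2 = (ALPHA*math.sin(t))**2 + (BETA*math.cos(t))**2
    return ((ALPHA + k*w2/ALPHA)*math.cos(t), (BETA + k*w2/BETA)*math.sin(t))

def C_geometric(t, k):
    # c(t) + (k / a(t)) * n(t), built from curvature and outward unit normal
    w2 = (ALPHA*math.sin(t))**2 + (BETA*math.cos(t))**2
    w = math.sqrt(w2)
    a = ALPHA*BETA/w**3                          # curvature of the ellipse
    nx, ny = BETA*math.cos(t)/w, ALPHA*math.sin(t)/w
    return (ALPHA*math.cos(t) + (k/a)*nx, BETA*math.sin(t) + (k/a)*ny)

for k in (0.5, 1.5):
    for i in range(12):
        t = 2*math.pi*i/12
        pe, pg = C_explicit(t, k), C_geometric(t, k)
        assert math.hypot(pe[0] - pg[0], pe[1] - pg[1]) < 1e-12
```

The two constructions agree to machine precision, as the algebra predicts.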
2309.14905
Multi-Messenger Measurements of the Static Structure of Shock-Compressed Liquid Silicon at 100 GPa
Ionic structure of high pressure, high temperature fluids is a challenging theoretical problem with applications to planetary interiors and fusion capsules. Here we report a multi-messenger platform using velocimetry and \textit{in situ} angularly and spectrally resolved X-ray scattering to measure the thermodynamic conditions and ion structure factor of materials at extreme pressures. We document the pressure, density, and temperature of shocked silicon near 100 GPa with uncertainties of 6%, 2%, and 20%, respectively. The measurements are sufficient to distinguish between and rule out some ion screening models.
H. Poole, M. K. Ginnane, M. Millot, G. W. Collins, S. X. Hu, D. Polsin, R. Saha, J. Topp-Mugglestone, T. G. White, D. A. Chapman, J. R. Rygg, S. P. Regan, G. Gregori
2023-09-26T13:08:22Z
http://arxiv.org/abs/2309.14905v1
# Multi-Messenger Measurements of the Static Structure of Shock-Compressed Liquid Silicon at 100 GPa ###### Abstract Ionic structure of high pressure, high temperature fluids is a challenging theoretical problem with applications to planetary interiors and fusion capsules. Here we report a multi-messenger platform using velocimetry and _in situ_ angularly and spectrally resolved X-ray scattering to measure the thermodynamic conditions and ion structure factor of materials at extreme pressures. We document the pressure, density, and temperature of shocked silicon near 100 GPa with uncertainties of 6%, 2%, and 20%, respectively. The measurements are sufficient to distinguish between and rule out some ion screening models. With the advent of high-power lasers, a laboratory-based exploration into extreme states of matter, such as those found in planetary interiors [1] or during asteroid impacts [2], has been realized. This exotic state, referred to as warm dense matter (WDM) [3], is characterized by temperatures and pressures on the order of \(1,000\,\)K and \(100\,\)GPa. Experimental measurement of material behavior and structure under such conditions is paramount for testing theoretical models used in the pursuit of fusion energy [4; 5] and for modeling planetary phenomena [6; 7; 8; 9], where dynamic geophysics processes are dominated by changes in solid- and liquid-state structure. Over the last few decades, high-energy density (HED) facilities have developed the capability of generating sufficiently long-lived WDM states to deploy suites of advanced diagnostics [10; 11; 12]. Using the X-ray free electron laser (XFEL) at the Linac Coherent Light Source, high-resolution X-ray scattering measurements have successfully probed the electronic and atomic structure of high-pressure states [13; 14]. 
However, the compression capabilities and prepared WDM volumes (critical for seeding uniform conditions) at XFELs are limited in comparison to what is readily achieved at kJ- to MJ-class laser facilities. Due to the difficulty in making standard simplifying approximations at these high-pressure states, which are expected in both Jovian planet interiors [15] and fusion ignition capsules [16], equation-of-state (EOS) development [17; 18] requires experimental measurements. At such laser facilities, diagnostic access can be comparatively constrained, and probing shock-compressed matter has often been limited to single diagnostics, e.g., X-ray Thomson scattering (XRTS) [19; 20; 21; 22] or X-ray diffraction (XRD) [23; 24; 25; 26] for measuring the electronic and atomic structures, or impedance matching techniques via a velocity interferometry system for any reflector (VISAR) [27]. In such experiments the conditions inferred from these diagnostics often rely on appropriate model selections or on previous measurements of reference materials. Initial attempts to combine scattering and velocimetry observations to infer WDM conditions were performed by Falk _et al._ [28], though different measurements had to be taken over multiple shots using identical targets and drive conditions. In this work we present a novel experimental platform, where reduction of model selection biases is obtained by combining multiple diagnostics for simultaneous _in situ_ structure characterization. Reverse Monte Carlo techniques are employed to determine the structural properties of shock-compressed matter, via measurement of the static density response. For this study, silicon was chosen due to its importance in the understanding of planetary interiors [29; 30], for its use as a dopant to ablators in inertial confinement fusion target designs [31; 32], and to mitigate laser-imprint effects on multi-layer targets [33; 34]. 
The experiments were conducted at the OMEGA-EP laser facility at the Laboratory for Laser Energetics [35]. A \(51\,\mu\)m thick polycrystalline silicon sample was shock-compressed to \(\sim 100\,\)GPa using a single drive laser beam delivering \(\sim 440\,\)J over \(10\,\)ns with a \(\sim 1.1\,\)mm diameter distributed phase plate. The drive laser is incident on an \(11\,\mu\)m polystyrene (C\({}_{8}\)H\({}_{8}\)) ablator at a \(19.3^{\circ}\) angle with respect to the target normal. The ablator was fixed to the front of the silicon sample using a thin layer of glue (\(<1\,\mu\)m). Three additional beams were tightly focused on a \(12.5\,\mu\)m thick copper backlighter with an areal size of \(4\,\)mm\({}^{2}\), generating a \(1\,\)ns pulse of Cu He-alpha X-rays centered at \(E\sim 8.4\,\)keV [36]. The X-ray source was placed \(\sim 17\,\)mm away from the silicon sample. The experimental configuration devised to probe the structure of WDM silicon at OMEGA-EP is shown in Figure 1. It employed a variation of the powder X-ray diffraction image plate (PXRDIP) setup [25], which uses Fujifilm BAS-MS image plates (IPs) [37]. Due to spatial constraints the X-ray diffraction only accessed momentum transfers up to \(k\sim 4\,\mathrm{\AA}^{-1}\) at \(8.4\,\mathrm{keV}\). To extend the capabilities of the PXRDIP diagnostic, a Bragg crystal zinc-spectrometer (ZSPEC) was added to measure scattering at high momentum transfer; it is capable of resolving the electronic structure of sufficiently ionized systems. The ZSPEC consists of a \(25\,\mathrm{mm}\times 50\,\mathrm{mm}\) highly oriented pyrolytic graphite (HOPG) crystal with a radius of curvature of \(27\,\mathrm{mm}\), placed \(12.8\,\mathrm{cm}\) after the sample. As shown in the top inset in Figure 1, the ZSPEC was fielded out of perfect von Hámos focusing, meaning the X-rays were spectrally dispersed on a curve. The spectral analysis procedure can be found in the Supplementary Material. 
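For orientation, the quoted momentum-transfer limit can be related to scattering angle through the elastic relation \(k=(4\pi/\lambda)\sin\theta\), with \(\lambda\,[\mathrm{\AA}]\approx 12.398/E\,[\mathrm{keV}]\). A small sketch (the specific angles below are illustrative, not measured values):

```python
import math

E_KEV = 8.4                   # Cu He-alpha probe energy, keV
LAMBDA = 12.398 / E_KEV       # wavelength in Angstrom (hc ~ 12.398 keV*A)

def k_transfer(two_theta_deg):
    # elastic momentum transfer k = (4*pi/lambda) * sin(theta)
    return 4*math.pi/LAMBDA * math.sin(math.radians(two_theta_deg)/2)

# a feature near 2-theta ~ 45 deg sits near k ~ 3.3 1/A, inside the
# ~4 1/A limit quoted for the diffraction geometry
assert 3.0 < k_transfer(45.0) < 3.5
assert k_transfer(56.0) < 4.1
```

With these numbers, the diffraction limit \(k\sim 4\,\mathrm{\AA}^{-1}\) corresponds to \(2\theta\approx 56^{\circ}\).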
The silicon sample was fitted to the front of the PXRDIP enclosure on top of a \(0.5\,\mathrm{mm}\) diameter silver or tantalum collimating aperture pinhole, which restricts the diagnostics' line-of-sight to the central planar shock region. These materials were chosen to ensure no fluorescence within the ZSPEC energy range, and to reduce interference between the pinhole and silicon Bragg peaks on the PXRDIP. To measure the shock-breakout (SBO) time we fielded a line-imaging VISAR, which monitored the silicon sample's free surface [39]. The streaked image inset in Figure 1 shows the SBO as a rapid disappearance of the fringes around \(\sim 5\,\mathrm{ns}\). From this time we inferred the shock velocity in silicon to be \(9.5\pm 0.2\,\mathrm{km/s}\) (see Supplementary Material for details). As silicon is opaque to the VISAR wavelength (\(532\,\mathrm{nm}\)) at the investigated conditions, a direct measurement of the silicon particle velocity could not be made; it is instead inferred from the bilinear relationship in Ref. [27], which for small velocities is calculated from previous high-explosive measurements [40]. Combining this information with the Rankine-Hugoniot relations, we measured the achieved pressure-density state to be \(101\pm 6\,\mathrm{GPa}\) and \(4.43\pm 0.08\,\mathrm{g/cm^{3}}\). At these conditions silicon is expected to be in the fluid state, which occurs when dynamically compressed above \(30\,\mathrm{GPa}\) [41; 14]. Whilst liquid silicon scattering, up to \(30\,\mathrm{GPa}\), has been previously observed at XFELs [14], extracting the contribution from low-Z liquids at high-power laser facilities is experimentally challenging due to limited X-ray source brightness, the presence of fluorescence, spurious scattering from the pinhole, and X-ray emission in the drive ablation plasma. To achieve this we quantified the contributions from the pinhole, the ablation plasma, and the ambient sample. The procedure is described in detail in the Supplementary Material. 
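The pressure-density state quoted above follows from the Rankine-Hugoniot jump conditions, \(P=\rho_{0}U_{s}u_{p}\) and \(\rho=\rho_{0}U_{s}/(U_{s}-u_{p})\). A minimal sketch with the measured shock velocity; the particle velocity \(u_{p}\approx 4.5\,\mathrm{km/s}\) is a round illustrative value consistent with the quoted state, not the paper's bilinear fit:

```python
RHO0 = 2.33   # g/cm^3, ambient silicon density
US   = 9.5    # km/s, shock velocity from the VISAR shock-breakout time
UP   = 4.5    # km/s, particle velocity (illustrative round value, see text)

# Rankine-Hugoniot jump conditions for a steady shock; note the convenient
# unit identity (g/cm^3) * (km/s)^2 = GPa
pressure = RHO0 * US * UP          # GPa
density  = RHO0 * US / (US - UP)   # g/cm^3

assert abs(pressure - 101) < 6     # quoted: 101 +/- 6 GPa
assert abs(density - 4.43) < 0.08  # quoted: 4.43 +/- 0.08 g/cm^3
```

With this \(u_{p}\), the jump conditions reproduce the quoted \(101\pm 6\,\mathrm{GPa}\) and \(4.43\pm 0.08\,\mathrm{g/cm^{3}}\) within their error bars.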
As shown in Figure 2(a), a broad scattering feature, attributed to liquid silicon, is observed around \(2\theta\sim 45^{\circ}\). Due to the PXRDIP's geometry and the broadband X-ray emission from the laser generated plasma plume, shadows from the box appear on the IPs, preventing a complete azimuthal integration in \(\phi\)-space. Instead, a partial integration is performed by selecting regions with reduced contamination from the aforementioned sources. The resultant signal for a reference shot (s30970), which contained only the pinhole and ablator, and a driven silicon sample (s30967) are shown in Figure 2(b) in green and blue, respectively. The final liquid silicon scattering signal, \(I_{\mathrm{liq}}(k)\), shown in Figure 3(a) is obtained by subtracting the reference shot from the driven sample, and excluding the \(2\theta\) regions around the pinhole Bragg peaks. Further details can be found in the Supplementary Material. A \(2\theta\) error of \(\sim 0.5^{\circ}\) is taken to be the average deviation of the observed pinhole Bragg peaks from their expected values. Additionally, the fraction of shocked (fluid) material within the probe volume was inferred using the ZSPEC diagnostic by comparing data obtained with varying time delays between the drive laser and X-ray probe. As the volume of liquid silicon increases, the elastic scattering signal recorded on the XRTS, fielded in between Bragg peaks, becomes more intense. From the elastic signal measured on s30967, the volume fraction was found to be \(\sim 0.6\) (see Supplementary Material). This gives further evidence that the diffuse signal observed on the X-ray diffraction is dominated by liquid-state silicon. At high momentum transfers the liquid scattering signal is the result of coherent, \(I_{\mathrm{coh}}(k)\), incoherent, \(I_{\mathrm{incoh}}(k)\), and multiple, \(I_{\mathrm{m}}(k)\), scattering. 
Figure 1: Experimental setup at the OMEGA-EP laser facility. The silicon target is mounted on the front of the PXRDIP box [38] with a \(100\,\mu\)m thick, \(0.5\,\mathrm{mm}\) diameter Ag or Ta pinhole. A single beam drives the CH-Si target with a tailored pulse as shown in the inset figure. The remaining three lasers generate Cu He-\(\alpha\) X-rays. The purple dashed lines represent the scattered X-ray paths that are collected by the XRTS and XRD IPs. The raw data shown were collected from s30967. NB: Not drawn to scale.

As the silicon thickness is small relative to its attenuation length, \(I_{\mathrm{m}}(k)\) is assumed to be negligible. The experimentally measured \(I_{\mathrm{liq}}(k)\) is therefore related to the normalized ion-ion structure factor, \(S_{\rm ii}(k)\), via [42; 43], \[\frac{I_{\rm liq}(k)}{\gamma}\equiv I_{\rm scal}(k)=I_{\rm coh}(k)\left[S_{\rm ii}(k)-1\right]+\left[I_{\rm coh}(k)+I_{\rm incoh}(k)\right]\,, \tag{1}\] where \(I_{\rm coh}(k)=\left|f(k)+q(k)\right|^{2}\), with \(f(k)\) the form factor of the tightly bound electrons and \(q(k)\) that of the free electrons that follow the ion motion [44]. The factor \(\gamma\) is a scaling constant defined such that \(I_{\rm scal}(k\rightarrow\infty)=I_{\rm coh}(k)+I_{\rm incoh}(k)\). To be experimentally obtained, momentum transfers in excess of \(10\,\mathrm{\AA}^{-1}\) are required, a regime not currently accessible at high-power laser facilities. Here, \(I_{\rm incoh}(k)\) is obtained using the tabulated values from Ref. [45] and \(I_{\rm coh}(k)\) is simulated using the multi-component scattering spectra (MCSS) code [46]. As detailed further in the Supplementary Material, \(\gamma\) is left proportional to a free random Gaussian scalar with a standard deviation equal to the noise of the raw data. The large parameter space, \(\Psi(\rho,T,Z)\), is explored using a Markov-Chain Monte Carlo (MCMC) procedure [47; 48]. 
Figure 2: **(a)** X-ray diffraction data, projected into \(2\theta\)-\(\phi\) space (see Supplementary Material) [38], from background shot s30970, where no Si was placed in the target holder, and the liquid Si diffraction from s30967. The superimposed red and blue dashed vertical lines are the expected \(2\theta\) Bragg diffraction peaks of the Ta pinhole and ambient silicon, respectively. **(b)** Relative intensities of the partial-\(\phi\) integrated scattering shown for the background (in green) and shock-compressed silicon (in blue) shots.

Figure 3: **(a)** Liquid Si diffraction signal, \(I_{\rm scal}(k)\), (in blue) is shown scaled to the theoretical signal, \(I_{\rm fit}(k)\), (thick red line) produced by the combined VISAR and converged MCMC conditions using the non-linear Hulthén model. The \(1\sigma\) error of \(I_{\rm fit}(k)\) is shaded in red. The dash-dotted black line shows \(I_{\rm coh}+I_{\rm incoh}\) for these values. The broad range of accepted MCMC fits (in gray) are scaled to the mean fit. **(b)** Probability density functions in the \(P\)-\(\rho\) and \(P\)-\(T\) phase for VISAR (blue heat maps) and X-ray scattering (gray heat maps) analysis using each \(V_{ii}\). The corresponding joint distributions are superimposed as red heat maps. In the upper grid the likelihood, as defined in equation 3, of each \(V_{ii}\) is shown.

This uses Bayesian inference to determine the likelihood of a set of parameters producing the experimental spectrum based on an acceptance percentage 
The investigated parameter space assumed a uniform distribution with linear sampling for the density, \(2.33\leq\rho\left(\text{g}/\text{cm}^{3}\right)\leq 6\), ionization, \(0\leq Z\leq 14\), and temperature, \(10^{3}\leq T_{e}=T_{i}\left(\text{K}\right)\leq 1.1\times 10^{4}\). Simulating \(S_{ii}(k)\), however, is subject to model biases and requires appropriate selection of electron and ion interactions. Measurement of the liquid structure factor opens the opportunity for direct model comparison. In the partially ionized, low density state, the ion-ion interaction potential, \(V_{ii}(k)\) is commonly modeled using Debye-Huckel (DH) [59]. This work compares the DH model with the bare (unscreened) effective Coulomb (EC) interaction and a model non-linear Hulthen (NH) interaction [60]; the latter approximately describes screening beyond the DH approach. For the screening cloud, \(q(k)\), large momentum transfers in high-density matter have shown deviation from the simple DH model as a result of finite-wavelength screening (FWS) [61]. As detailed in the Supplementary Material, the simulated liquid scattering is comparatively insensitive to each \(q(k)\) model and FWS was chosen for the MCMC analysis. In Figure 3(a) the range of accepted fits after MCMC convergence using the NH model are shown in gray. The signal from the XRTS recorded on shots that were probed after shock breakout (where the liquid volume fraction \(>0.9\)) are compared in green against the angularly resolved scattering in Figure 3(a), extending the effective \(k\) range. They show good agreement with the MCMC results. Using a suitable theoretical description, the total plasma pressure can be determined from the range of accepted fits. Under conditions of strongly coupled ions and degenerate electrons, in which screening is expected to be significant, a reasonable framework is the 'two-fluid' model discussed by Vorberger _et al._[62, 63] (see Supplementary Material). 
The converged probability density functions \(\text{Pr}(P,\rho)\) and \(\text{Pr}(P,T)\), for each ion-ion potential model, are shown in gray in Figure 3(b) and compared, in blue, to the \(P\)-\(\rho\) state inferred using VISAR. We can combine these concurrent diagnostics to find joint \(P\)-\(\rho\) probability density functions which are superimposed in Figure 3(b) as red heat maps. The likelihood of each ion-ion potential model given the VISAR information is defined as the sum of its joint probability distribution, \[\mathcal{L}\left(V_{ii}|\text{VISAR}\right)=\sum_{\rho,P}\text{Pr}_{m}(P,\rho)\times\text{Pr}_{v}(P,\rho)\,, \tag{3}\] where \(m\) and \(v\) denote the MCMC and VISAR probability density functions, respectively. These likelihoods are indicated in the upper grid of Figure 3(b). They show that, comparatively, the effective-Coulomb model is a poor representation of the liquid silicon state. This is expected as it does not account for screening effects. Unlike the VISAR diagnostic, the MCMC convergence of the X-ray scattering analysis is dependent not only on pressure and density, but also on temperature. The combined \(\text{Pr}(P,\rho)\) can therefore be propagated into temperature space. This re-distributes the X-ray scattering \(\text{Pr}(P,\rho,T)\) to penalize where the density and pressure disagree with VISAR. Further details of this process can be found in the Supplementary Material. The resultant \(\text{Pr}(P,\rho,T)\) are used to find the combined \(1\sigma\) errors in the pressure-temperature phase, shown in red in the lower grid of Figure 3(b). The simulated X-ray diffraction fits, \(I_{\mathrm{fit}}\), produced by the conditions inferred when combining VISAR and the NH MCMC convergence are shown in red in Figure 3(a). In Figure 4 the VISAR and MCMC combined \(1\sigma\) \(P\)-\(\rho\) and \(P\)-\(T\) for each ion-ion potential model are plotted on the principal Hugoniot and compared to density functional theory (DFT) calculations [52; 53], the Si melt [56], and previous experimental work. Despite having the closest agreement with VISAR in \(P\)-\(\rho\), the temperature predicted by the commonly used Debye-Huckel model falls below the Hugoniot state. Instead we find the implementation of a Hulthén potential [60], which estimates nonlinear screening regimes beyond DH, better describes the thermodynamic conditions.

Figure 4: **(a)** The principal silicon Hugoniot where this work is compared to SESAME-3810 [49], quotidian equation-of-state (QEOS) [50], PrOpacEOS [51], \(ab\ initio\) Kohn-Sham DFT molecular-dynamics (KSDM) [52], the principal Hugoniot from DFT [53], and previous experimental work collected via conservation methods [54, 55, 40]. The bilinear fit [27] used to infer particle velocity is shown as a filled gray bar. **(b)** The silicon pressure-temperature phase diagram comparing the combined \(1\sigma\) error for each \(V_{ii}\) to the measured and predicted melt curve [56], the DFT isentrope [57] and previous shocked silicon experiments [41] where the temperature was inferred using molecular dynamics [58].

This report demonstrates how a novel experimental platform at a high-power laser facility collects detailed information on the structure of shock-compressed matter at HED conditions. Whilst previous work on liquid silicon structure has been limited to pressures of \(\sim 50\,\mathrm{GPa}\) [41; 14], this platform is highly scalable to different pressures and materials. Our presented analytical technique applied Markov-Chain Monte Carlo analysis to the observed liquid diffraction signal and combined it with the concurrent VISAR \(P\)-\(\rho\) measurement. This provided \(1\sigma\) uncertainties on the shock-compressed state that are equivalent to previous experimental work without relying on EOS models. In addition, this platform enables the investigation of ion-ion interaction potentials.
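The diagnostic combination in equation 3 amounts to an element-wise product of two gridded probability distributions. A minimal sketch, with both posteriors replaced by hypothetical Gaussians on an assumed shared \((\rho, P)\) grid:

```python
import numpy as np

# Hypothetical posteriors on a shared (rho, P) grid: pr_m stands in for the
# MCMC scattering analysis and pr_v for VISAR (illustrative values only).
rho = np.linspace(3.0, 6.0, 60)    # g/cm^3
P = np.linspace(50.0, 150.0, 80)   # GPa
RHO, PP = np.meshgrid(rho, P, indexing="ij")

def gaussian2d(x, y, x0, y0, sx, sy):
    # Discrete 2D Gaussian, normalized to sum to one on the grid.
    g = np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))
    return g / g.sum()

pr_m = gaussian2d(RHO, PP, 4.2, 95.0, 0.3, 10.0)
pr_v = gaussian2d(RHO, PP, 4.3, 100.0, 0.2, 8.0)

# Equation 3: the model likelihood given VISAR is the summed element-wise
# product; the same product, renormalized, is the joint P-rho distribution.
likelihood = np.sum(pr_m * pr_v)
joint = pr_m * pr_v
joint /= joint.sum()
```

The summed product rewards models whose scattering posterior overlaps the VISAR posterior, which is why the unscreened effective-Coulomb potential, whose posterior sits away from the VISAR state, receives a low likelihood.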
We found that accounting for screening beyond the linear Debye-Huckel approach via a Hulthen potential was required for agreement between the measured liquid silicon state and Hugoniot predictions. This platform therefore facilitates both mapping solid-to-liquid transitions for high pressure states and reducing model selection biases. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority and the National Science Foundation under Grant No. PHY-2045718. Part of this work was prepared by LLNL under Contract No. DE-AC52-07NA27344. The PXRDIP data was analyzed with LLNL AnalyzePXRDIP package. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof.
2301.13848
Benchmarking Large Language Models for News Summarization
Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, and not model size, is the key to the LLM's zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human written summaries.
Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, Tatsunori B. Hashimoto
2023-01-31T18:46:19Z
http://arxiv.org/abs/2301.13848v1
# Benchmarking Large Language Models for News Summarization ###### Abstract Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, and not model size, is the key to the LLM's zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human written summaries. ## 1 Introduction Large language models (LLMs) have shown promising results in zero-/few-shot tasks across a wide range of domains (Chowdhery et al., 2022; Bai et al., 2022; Brown et al., 2020; Zhang et al., 2022) and raised significant interest for their potential for automatic summarization (Goyal et al., 2022; Liu et al., 2022). However, the design decisions contributing to their success on summarization remain poorly understood, and while prior work has shown that LLMs outperform prior state of the art, it remains unclear whether their outputs are comparable to human writers. Examining these questions is crucial for advancing future research in automatic summarization. To answer the first question, we perform a systematic evaluation of ten diverse LLMs with human evaluation on news summarization, and our evaluation identifies instruction tuning to be the key to zero-shot summarization capability. In contrast, self-supervised learning alone cannot induce strong summarization performance in the zero-shot setting (Figure 1).
In fact, even a 350M parameter instruction-tuned GPT-3 can perform on par with the 175B parameter GPT-3. To benchmark LLMs, we evaluated on the standard CNN/DM (Hermann et al., 2015) and XSUM datasets (Narayan et al., 2018), but found that low-quality reference summaries caused several issues. The reference summaries in these benchmarks are of such poor quality that human annotators judge them to be worse than the outputs of most automatic systems (Figure 1). When computing automatic metrics using these references, their poor quality reduces the correlation between metric results and human judgement. Not only does this make evaluation difficult, but it also degrades the performance of systems that take supervision either through finetuning or few-shot prompting and makes comparison difficult. To address the quality issues of reference summaries and better understand how LLMs compare to human summary writers, we recruit freelance writers from Upwork1 to re-annotate \(100\) articles from the test set of CNN/DM and XSUM. Comparing the best performing LLM, Instruct Davinci, to the freelance writers, we find that the Instruct Davinci summaries are much more extractive. By manually annotating the summarization operations Jing and McKeown (2000) used in these summaries, we find that Instruct Davinci paraphrases much less frequently although it is able to combine copied segments coherently. Footnote 1: [https://www.upwork.com](https://www.upwork.com) Given their stylistic differences, we recruit annotators to compare the Instruct Davinci summaries to those written by freelance writers. On aggregate, we find that Instruct Davinci is rated as comparable to the freelance writers. However, analysis of individual annotators reveals that each annotator has a varying and stable preference for either Instruct Davinci or the freelance writers.

Figure 1: Selected annotator ratings of summary coherence on a 1 to 5 Likert scale.
Together, our work makes the following key contributions. First, we identify instruction tuning, instead of model scale, as the key to LLMs' summarization capability. Second, we show that reference summaries used in XSUM are judged by humans to be worse than the best LLM generated summaries. Third, to address the issue of low quality references, we collect better quality summaries from freelance writers and we show that the best LLM is rated as comparable to Upwork freelance writers. In combination, these results call into question recent claims made about LLM summarization. In particular, summarization progress cannot be measured using reference-based metrics applied on XSUM. Furthermore, whether fine-tuned, few-shot, or zero-shot models perform better remains an open question due to the poor quality of training data. To encourage future work on improved evaluations, we release the high-quality summaries written by freelance writers and the evaluation data on 18 model settings and two datasets as resources2. Footnote 2: [https://github.com/Tiitiger/benchmark_llm_summarization](https://github.com/Tiitiger/benchmark_llm_summarization) ## 2 Background and Related Work ### News Summarization News summarization is the task of producing a concise paragraph that captures the main points of a news article and has been a core problem within the field of automatic summarization Radev et al. (2002); Rush et al. (2015); Nallapati et al. (2016); See et al. (2017); Chen and Bansal (2018); Dong et al. (2019). In this work, we benchmark LLMs on news summarization to understand their potential for automatic summarization and focus on two popular news summarization benchmarks, CNN/DM Hermann et al. (2015) and XSUM Narayan et al. (2018). These two benchmarks contain large scale data on the order of hundreds of thousands of summaries but are created via "incidental supervision".
CNN/DM includes articles from the CNN and DailyMail websites as the source articles and adapts the bullet point highlights that come with the website articles as reference summaries. XSUM includes articles from BBC news and adapts the bolded sentence(s) that appear in the first paragraph as reference summaries. As a result, the reference summaries in these datasets are known to have quality issues Maynez et al. (2020); Kang and Hashimoto (2020), motivating us to address these defects to improve LLM evaluation. To contextualize the performance of LLMs, we mainly compare to previous state-of-the-art approaches that leveraged supervised finetuning Liu and Lapata (2019); Lewis et al. (2019); Zhang et al. (2020); Liu et al. (2022). Summarization evaluation is another active area of research. Many automatic metrics have been proposed Lin (2004); Zhang* et al. (2020); Sellam et al. (2020); Durmus et al. (2020); Maynez et al. (2020); Deutsch and Roth (2021) but they do not always correlate with human evaluation of summarization systems Fabbri et al. (2020); Durmus et al. (2022). In this work, we evaluate the effectiveness of automatic metrics for evaluating LLMs and show that the usefulness of reference-based evaluation is closely linked to the quality of the references. ### Large Language Models LLMs Bommasani et al. (2021); Chowdhery et al. (2022); Brown et al. (2020) have two distinctive features over previous pretrained models. First, LLMs have much larger scale in terms of model parameters and training data. Second, unlike previous pretrained models that require finetuning, LLMs can be prompted zero-shot or few-shot to solve a task. In the zero-shot setting, prompting presents the LLMs with inputs (e.g. news articles) and a natural language instruction (e.g., "summarize this news article in three sentences") and solicits outputs by having LLMs generate answers directly. When few-shot training examples are available, LLMs have the ability to learn "in context".
In-context learning prepends training input-output pairs along with the same style of instruction to the testing input. Recently, instruction-tuning has emerged as an effective way to improve LLM prompting performance Sanh et al. (2021); Wang et al. (2022); Ouyang et al. (2022). In this approach, a diverse set of natural language processing tasks are reformulated into the prompting format and the LLM's parameters are updated for these tasks either through supervised finetuning or reinforcement learning. Recent work Goyal and Durrett (2020) shows that the instruction-tuned GPT-3 Davinci model is better than finetuned LMs, but does not show the design decisions that contribute to the improved performance. In our work, we carry out a more comprehensive benchmark on ten different LLMs, to understand the effects of model scale, in-context learning, and instruction tuning. Given that automatic metrics may not be reliable, we focus on human evaluation as our benchmarking method. ## 3 Human Evaluation on News Summarization Benchmarks In this section, we use human evaluation to systematically benchmark a diverse set of ten LLMs on news summarization. We observe that instruction tuning is the key to strong summarization capability and low-quality reference summaries in current benchmarks may underestimate few-shot or finetuning performance. ### Experimental Setup Data. We conduct our human evaluation on CNN/DM and XSUM by sampling a hundred examples from each validation set respectively. For the few-shot in-context learning settings, we sample five examples from the training set to be the demonstration examples. Due to the limited context window, we sample five articles that are between 50 and 150 tokens in length according to the GPT-2 tokenizer. For XSUM, we find that uniform sampling occasionally results in articles that are unreadable due to data preprocessing, so we manually pick from the training set.
Model Details. We consider ten LLMs across different pretraining strategies and model scales3. Table 1 lists the details of the LLMs we consider. Due to limited computational resources and model access, we benchmark all models in the five-shot setting but only benchmark three OpenAI GPT-3 models and three OpenAI instruction-tuned GPT-3 models in the zero-shot setting. Footnote 3: We note that the training details of instruction-tuned GPT-3 models may differ from those mentioned in the publication and are inferred by us based on the API naming scheme. For CNN/DM, we solicit LLM summaries with the following prompt template: "Article: [article]. Summarize the article in three sentences. Summary:" For XSUM, we modify the prompt template to summarize in one sentence to match the style of the reference summaries. For all LLMs we consider, we sample with temperature \(0.3\) following prior work Wu et al. (2021). To contextualize our LLM benchmarking results, we also evaluate two state-of-the-art finetuned LMs: Pegasus (Zhang et al., 2020) and BRIO (Liu et al., 2022). We decode the finetuned LMs using a beam size of \(5\) following prior work (Lewis et al., 2019).

\begin{table} \begin{tabular}{l c c c c} \hline \hline Model & Model Creator & \# Parameters & Instruction Tuning & Reference \\ \hline GPT-3 davinci v1 & & 175B & & \\ GPT-3 curie v1 & & 6.7B & ✗ & Brown et al. (2020) \\ GPT-3 ada v1 & & 350M & & \\ \hline InstructGPT davinci v2 & & 175B & & \\ InstructGPT curie v1 & & 6.7B & ✓ & Ouyang et al. (2022) \\ InstructGPT ada v1 & & 350M & & \\ \hline OPT 175B & Meta & 175B & ✗ & Zhang et al. (2022) \\ \hline GLM & \begin{tabular}{c} Tsinghua \\ University \\ \end{tabular} & 130B & ✗ & Du et al. (2021) \\ \hline Cohere xlarge v20220609 & Cohere & 52.4B & ✗ & Cohere (2022) \\ \hline Anthropic-LM v4-s3 & Anthropic & 52B & ✓ & Bai et al. (2022) \\ \hline \hline \end{tabular} \end{table} Table 1: List of large language models we benchmarked with human evaluation.
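The zero-shot and five-shot prompt formats described above can be sketched as follows; the exact whitespace and delimiters between segments are assumptions, since only the template text is given.

```python
INSTRUCTION = "Summarize the article in three sentences."  # CNN/DM template

def zero_shot_prompt(article: str) -> str:
    # "Article: [article]. <instruction> Summary:" with the completion
    # solicited directly after "Summary:".
    return f"Article: {article}\n{INSTRUCTION}\nSummary:"

def few_shot_prompt(article: str, demos: list[tuple[str, str]]) -> str:
    # In-context learning: prepend (article, summary) demonstration pairs
    # in the same format, then append the test article with an empty summary.
    blocks = [f"Article: {a}\n{INSTRUCTION}\nSummary: {s}" for a, s in demos]
    blocks.append(zero_shot_prompt(article))
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    "New article text...",
    [("Demo article text...", "Demo summary text...")],
)
```

For XSUM the instruction string would read "one sentence" instead of "three sentences"; everything else stays the same.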
In addition, we also evaluate the existing reference summaries in the CNN/DM and XSUM validation sets. Human Evaluation Protocol. We recruit annotators from Amazon Mechanical Turk, compensating them at the California minimum wage of $15.00/hr using conservative time estimates as recommended by Whiting et al. (2019). Each model summary was evaluated by three annotators and we report results based on their average score for each summary. Our annotators evaluate each summary based on three criteria: faithfulness, coherence, and relevance. We define these terms and collect data according to the guidelines in Fabbri et al. (2020). Coherence and relevance ratings are collected on a 1 to 5 Likert scale while faithfulness ratings are collected as binary judgments due to the binary nature of the criterion. Unlike Fabbri et al. (2020), we omit evaluating fluency because we find LLM outputs to be mostly fluent. The full annotation guidelines are included in our code release. ### Evaluation Results Table 2 presents the evaluation results4. We now discuss two main observations. Footnote 4: We note that the 350M GPT-3 consistently generates empty outputs so we omit it from the human evaluation. Instruction-tuned models have strong summarization ability. Across the two datasets and three aspects, we find that the zero-shot instruction-tuned GPT-3 models, especially Instruct Curie and Davinci, perform the best overall. Compared to the fine-tuned LMs (e.g. Pegasus), Instruct Davinci achieves higher coherence and relevance scores (4.15 vs. 3.93 and 4.60 vs. 4.40) on CNN and higher faithfulness and relevance scores (0.97 vs. 0.57 and 4.28 vs. 3.85) on XSUM, which is consistent with recent work (Goyal et al., 2022). In contrast to instruction tuning, we find scale to be less important. Even the largest 175B model often ignores the instruction and generates irrelevant content, while the much smaller Instruct Ada outperforms the 175B GPT-3 model on coherence and relevance.
In the five-shot setting, non-instruction-tuned LLMs can improve their summarization performance through in-context learning. For faithfulness scores on CNN/DM and coherence scores on XSUM, several non-instruction-tuned LLMs can perform as well as the instruction-tuned LLMs. However, for other aspects, we still find the instruction-tuned LLMs to be better. Reference summaries in current benchmarks are extremely low quality. We arrive at this conclusion based on two observations. First, most automatic summarization systems score better than the reference summaries across all three aspects. Second, applying in-context learning with the current reference summaries makes instruction-tuned models generate worse summaries. For example, on the XSUM dataset, after conditioning on five reference summaries, the faithfulness score of Instruct Davinci drops from 0.97 to 0.77. The low-quality reference summaries make it difficult to compare LLMs to both fine-tuned models and humans. When comparing to finetuned models, the poor performance of fine-tuned models can be attributed to the low-quality references in training data and we may be underestimating the finetuning performance. When comparing to humans, the low-quality references are not representative of human performance because they are created through heuristics. As a result, the differences between instruction-tuned LLMs and human performance are likely overstated in Table 2.

\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline & & \multicolumn{3}{c}{CNN/Daily Mail} & \multicolumn{3}{c}{XSUM} \\ Setting & Models & Faithfulness & Coherence & Relevance & Faithfulness & Coherence & Relevance \\ \hline \multirow{6}{*}{Zero-shot language models} & GPT-3 (350M) & \(0.29\) & \(1.92\) & \(1.84\) & \(0.26\) & \(2.03\) & \(1.90\) \\ & GPT-3 (6.7B) & \(0.29\) & \(1.77\) & \(1.93\) & \(0.77\) & \(3.16\) & \(3.39\) \\ & GPT-3 (175B) & \(0.76\) & \(2.65\) & \(3.50\) & \(0.80\) & \(2.78\) & \(3.52\) \\ & Ada Instruct v1 (350M*) & \(0.88\) & \(4.02\) & \(4.26\) & \(0.81\) & \(3.90\) & \(3.87\) \\ & Curie Instruct v1 (6.7B*) & \(0.97\) & \(\mathbf{4.24}\) & \(\mathbf{4.59}\) & \(\mathbf{0.96}\) & \(4.27\) & \(\mathbf{4.34}\) \\ & Davinci Instruct v2 (175B*) & \(\mathbf{0.99}\) & \(4.15\) & \(\mathbf{4.60}\) & \(\mathbf{0.97}\) & \(4.41\) & \(\mathbf{4.28}\) \\ \hline \multirow{9}{*}{Five-shot language models} & Anthropic-LM (52B) & \(0.94\) & \(3.88\) & \(4.33\) & \(0.70\) & \(\mathbf{4.77}\) & \(4.14\) \\ & Cohere XL (52.4B) & \(\mathbf{0.99}\) & \(3.42\) & \(4.48\) & \(0.63\) & \(\mathbf{4.79}\) & \(4.00\) \\ & GLM (130B) & \(0.94\) & \(3.69\) & \(4.24\) & \(0.74\) & \(4.72\) & \(4.12\) \\ & OPT (175B) & \(0.96\) & \(3.64\) & \(4.33\) & \(0.67\) & \(\mathbf{4.80}\) & \(4.01\) \\ & GPT-3 (350M) & \(0.86\) & \(3.73\) & \(3.85\) & - & - & - \\ & GPT-3 (6.7B) & \(0.97\) & \(3.87\) & \(4.17\) & \(0.75\) & \(4.19\) & \(3.36\) \\ & GPT-3 (175B) & \(\mathbf{0.99}\) & \(3.95\) & \(4.34\) & \(0.69\) & \(4.69\) & \(4.03\) \\ & Ada Instruct v1 (350M*) & \(0.84\) & \(3.84\) & \(4.07\) & \(0.63\) & \(3.54\) & \(3.07\) \\ & Curie Instruct v1 (6.7B*) & \(0.96\) & \(\mathbf{4.30}\) & \(4.43\) & \(0.85\) & \(4.28\) & \(3.80\) \\ & Davinci Instruct v2 (175B*) & \(\mathbf{0.98}\) & \(4.13\) & \(4.49\) & \(0.77\) & \(\mathbf{4.83}\) & \(\mathbf{4.33}\) \\ \hline \multirow{2}{*}{Fine-tuned language models} & Brio & \(0.94\) & \(3.94\) & \(4.40\) & \(0.58\) & \(4.68\) & \(3.89\) \\ & Pegasus & \(0.97\) & \(3.93\) & \(4.38\) & \(0.57\) & \(4.73\) & \(3.85\) \\ \hline Existing references & - & \(0.84\) & \(3.20\) & \(3.94\) & \(0.37\) & \(4.13\) & \(3.00\) \\ \hline \hline \end{tabular} \end{table} Table 2: Human evaluation results for zero-shot and five-shot LLMs, finetuned LMs, and reference summaries. We bold the entries that are not statistically significantly different from the best numbers in each column.
Qualitative Examples. Figure 2 showcases example summaries for an article from the CNN/DM validation set, comparing the summaries of zero-shot GPT-3 Davinci, instruction-tuned GPT-3 Davinci, and the CNN/DM reference summary. We start by noting that the zero-shot GPT-3 model cannot follow the instruction to summarize well. After the summary paragraph, the model generates an additional question that is completely irrelevant. In addition to the failure of instruction following, the generated summary contains a factual error, stating that the handbag mentioned is the most expensive in the world, which contradicts the original article. In contrast, the instruction-tuned GPT-3 model generates a summary that is both faithful and coherent. We also observe from Figure 2 that the reference summary is not coherent. The brand "Hermes" is not introduced until the end and its connection to the rest of the story is unclear. This is unsurprising as reference summaries in the CNN/DM dataset were originally bullet points accompanying the articles as opposed to a coherent paragraph.

Figure 2: Example summaries generated by GPT-3 models (Section 3) or written by freelance writers (Section 4) for an article from the CNN/DM dataset. We find that the instruction-tuned GPT-3 model can generate a much better summary compared to the non-instruction-tuned variant. The reference summary from CNN/DM is not coherent, whereas the freelance writer summary is both coherent and relevant.

### Understanding Automatic Metrics We compute five popular automatic metrics and compute their system-level correlations against human ratings. The metrics we evaluate are: Rouge-L Lin (2004), METEOR Banerjee and Lavie (2005), BertScore Zhang* et al. (2020), BLEURT Sellam et al. (2020), and BARTScore Yuan et al. (2021). Table 3 shows Kendall's tau rank correlations between automated metrics and human judgements. We observe significantly different trends
on CNN/DM and XSUM so we discuss them separately in the following paragraphs. For CNN/DM, we observe that the reference-based automatic metrics have a moderate correlation with some aspects of human judgments, e.g., Rouge-L has a 0.72 Kendall's tau correlation coefficient with relevance in Table 3. Such a level of correlation is comparable to that reported in Fabbri et al. (2020), which measures the correlation of automatic metrics on evaluating finetuned LMs and even earlier neural summarization systems. Therefore, we conclude that on CNN/DM automatic metrics can still provide useful signals on relevance. Studying the result more closely, we find that Rouge-L and human evaluation are more correlated when comparing within each model group. We plot Rouge-L over the relevance rating in Figure 3 as an example. First, we observe that Rouge-L does still prefer finetuned LMs (green points on top of the plots) to LLMs, consistent with prior work Goyal et al. (2022). Despite this mistake, when only comparing LLMs with each other, we find that a larger than 0.05 Rouge-L difference usually translates to improved human evaluation. On XSUM, the metrics have very low correlation with faithfulness and relevance, but this is also because the reference summaries are terrible in these aspects (Table 3; also see Maynez et al., 2020). With such low-quality references, we do not expect reference-based metrics to extract useful information. Combining the results from the two datasets, we find that reference-based metrics correlate better with human judgments on the aspects for which reference summaries also have better scores (e.g. CNN/DM relevance, XSUM coherence). This points to the important role of quality reference summaries for reference-based metrics, as previously observed in machine translation Freitag et al. (2020). ## 4 Comparing the Best LLM to Freelance Writers In Section 3, we see that the low-quality reference summaries make studying and benchmarking LLMs difficult.
In this section, we address this by recruiting Upwork freelance writers to collect better quality summaries. With this data, we aim to answer two important questions. First, we would like to know whether the best LLM has reached human-level performance and how the summaries written by the best LLM differ from the ones written by humans. Second, we want to understand how well reference-based metrics correlate with human judgments once we compute them with higher quality reference summaries. ### Experimental Setup Here we describe the process of recruiting summary writers and our summary writing instructions. Data. For data used in our study, we select 50 articles from each of the CNN/DM and XSUM evaluation sets described in Section 3.1 and assign each article to three writers. For XSUM, we use the full articles rather than the preprocessed version where the first bolded sentence is removed. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{CNN/DailyMail} & \multicolumn{3}{c}{XSUM} \\ Metric & Faithfulness & Coherence & Relevance & Faithfulness & Coherence & Relevance \\ \hline Rouge-L & 0.54 & 0.48 & 0.72 & -0.27 & 0.71 & 0.30 \\ METEOR & 0.58 & 0.37 & 0.66 & -0.22 & 0.68 & 0.38 \\ BertScore & 0.54 & 0.47 & 0.70 & -0.23 & 0.70 & 0.30 \\ BARTScore & 0.56 & 0.34 & 0.65 & -0.22 & 0.70 & 0.35 \\ BLEURT & 0.56 & 0.62 & 0.81 & -0.08 & 0.67 & 0.41 \\ \hline \hline \end{tabular} \end{table} Table 3: System-level Kendall's tau correlation with human scores across different axes. Figure 3: System-level Rouge-L vs. annotator rated relevance scores. Writer recruitment. We recruit six writers from the freelance work platform Upwork who have previous experience in writing blog posts, landing page introductions, or product descriptions. After conducting a qualification round by asking writers to summarize five articles, we selected the best writers according to the faithfulness, coherence, and relevance of their summaries.
Through an initial pilot study, we estimate that the time required to summarize a CNN/DM or XSUM article is around 12 to 15 minutes. Therefore, we pay our writers $4 for every article they summarize, following the recommended practice (Whiting et al., 2019). We based the assignments on writers' availability, with the most prolific writer summarizing 100 articles and the least prolific writer summarizing 35 articles. Summary writing instructions. For the annotation instruction, we instruct our writers to summarize each article in around 50 words5. To give better task grounding, we ask the writers to summarize as if they are writing a newsletter to update their readers on news. We release the full annotation guideline along with our code release. Footnote 5: We conducted an initial study to pilot instructions and found that instructing writers with a sentence limit often resulted in summaries that differ significantly in length. LLM Summaries Generation. Recently, Liu et al. (2022) showed that length is a confounding factor in summarization human evaluation. To control this potential length confound, we modify the zero-shot prompt in Section 3.1 to elicit summaries that are around 50 words, which is the same word limit provided to the freelance writers. We found that the Instruct Davinci model consistently produces summaries that exceed a given word limit. Therefore, we intentionally prompt the Instruct Davinci model with a 25-word limit to produce summaries with an average length of 50 words. With this new prompt, we generate the summaries using the same hyperparameters described in Section 3.1. Quality Control. To verify the quality of the summaries written by freelance writers, we evaluate a random subset of 100 summaries using the same annotation scheme as in Section 3.1 with Mechanical Turk annotators. Table 4 reports the evaluation results, where we see that the freelance writer summaries have much higher quality than the original reference summaries in CNN/DM and XSUM.
In addition, we see that the difference between the freelance writers and Instruct Davinci in this evaluation is small. Next, we carry out more targeted evaluations to compare the summaries written by freelance writers and Instruct Davinci. ### Paired Comparison between LLM and Freelance Writers Comparing Stylistic Differences.Despite the similar performance in our quality control study, we find that LLM summaries and the freelance writer summaries have distinctive styles. Figure 2 shows an example summary written by a freelance writer. Compared to the LLM generated summary, we find the freelance writer summary often contains more paraphrasing and copies less from the article. To illustrate this stylistic difference, we measure two extractiveness measures, coverage and density, following Grusky et al. (2018). Coverage is defined as the percentage of words in the summary that are also present in the article; density is defined as the average length of the continuous text spans in the summary that are copied from the article. \begin{table} \begin{tabular}{c|c c c} \hline \hline Model & Faithfulness & Coherence & Relevance \\ \hline Freelance Writer & 0.93 & 4.39 & 4.26 \\ \hline Zero-shot & & & \\ Instruct Davinci & 0.98 & 4.26 & 4.40 \\ \hline Reference Summaries & 0.64 & 3.59 & 3.45 \\ \hline \hline \end{tabular} \end{table} Table 4: Amazon Mechanical Turker evaluation results of the freelance writer summaries. Results of zero-shot Instruct Davinci and reference summaries are taken from Table 2 after averaging the corresponding ratings. Figure 4: Distributions of cut and paste operations in the summaries written by freelance writers and by Instruct Davinci. By comparison, human written summaries contain more lexical paraphrasing and sentence reduction whereas the Instruct Davinci model has more direct copying from the article. 
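The extractiveness measures defined above can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: it uses whitespace tokenization and, following the wording above, reports density as the mean fragment length (the original Grusky et al. (2018) formulation normalizes squared fragment lengths by summary length).

```python
def extractive_fragments(article_tokens, summary_tokens):
    """Greedy longest-match decomposition of the summary into
    fragments copied verbatim from the article."""
    fragments = []
    i = 0
    while i < len(summary_tokens):
        best = 0
        for j in range(len(article_tokens)):
            k = 0
            while (i + k < len(summary_tokens)
                   and j + k < len(article_tokens)
                   and summary_tokens[i + k] == article_tokens[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            fragments.append(summary_tokens[i:i + best])
            i += best
        else:
            i += 1  # token not found in the article
    return fragments

def coverage_and_density(article, summary):
    a, s = article.split(), summary.split()
    frags = extractive_fragments(a, s)
    matched = sum(len(f) for f in frags)
    coverage = matched / len(s) if s else 0.0
    density = matched / len(frags) if frags else 0.0  # mean fragment length
    return coverage, density
```

A fully copied summary thus has coverage 1.0, while a heavily paraphrased one has low coverage and short fragments.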
Our analysis shows that the coverage and density for Instruct Davinci generated summaries are 0.92 and 12.1, whereas those for the writer-written summaries are 0.81 and 2.07. These measures show that the summaries generated by Instruct Davinci are highly extractive, whereas the summaries written by the freelance writers are much more abstractive. To gain a fine-grained understanding of these stylistic differences, we manually analyze the distribution of "cut and paste operations" in these two sets of summaries. Jing and McKeown (2000) identify a set of "cut and paste" operations for reusing text from the article, including sentence reduction, sentence combination, syntactic transformation, lexical paraphrasing, and generalization or specification. On top of these operations, we additionally include a sentence copy operation to account for summary sentences that are directly copied from the article. Using this guideline, we manually annotate ten randomly sampled summary pairs written by Instruct Davinci and the freelance writers. Figure 4 reports the distribution of the cut and paste operations, showing the fraction of sentences that contain each operation. First, we observe that the freelance writer summaries use lexical paraphrasing and generalization/specification much more frequently than the Instruct Davinci generated summaries. Because both operations often involve using novel words that are not present in the article, this matches the fact that the freelance writer summaries have lower coverage (0.81 vs. 0.92) than the Instruct Davinci summaries. Second, we find that sentence combination is a common strategy used by both the freelance writers and Instruct Davinci. Third, we find that the freelance writers never copy an entire sentence directly from the article, whereas Instruct Davinci does so frequently. In conclusion, we find that Instruct Davinci summarizes in a very different style from human writers. 
We emphasize here that the freelance writers write in an abstractive style despite the fact that we have not explicitly instructed them to do so. We also observe similarly abstractive styles across the six freelance writers. Comparing Human Preference.We now return to our original goal of understanding whether LLM generated summaries have quality on par with the human written ones. In the following paragraphs, we discuss our annotation design and recruitment process. We conduct a blinded pairwise comparison evaluation between the best LLM, Instruct Davinci, and the freelance writers, similar to the evaluation in Goyal and Durrett (2020). Besides selecting the better summary within each pair, the annotators can decide that the summaries in a pair are equally good. We release the full annotation instructions along with the code release for this project. In order to compare the best LLM with the freelance writers, we annotate two aspects. First, we solicit annotators' overall preference, which balances multiple quality aspects such as faithfulness, coherence, and relevance. Second, we solicit a more targeted measure of informativeness by asking the annotators to compare the number of facts in each summary. For the informativeness measure, we are motivated by the hypothesis that a more abstractive writing style can pack more information into the summary given the same word count. While it would also be interesting to compare summary coherence and relevance, we omit them because annotators were unable to differentiate these aspects from the overall preference in a pilot study. Figure 5: Human evaluation results comparing summaries written by freelance writers and summaries generated by Instruct GPT-3 Davinci. On aggregate, annotators equally prefer the freelance writers and Instruct Davinci. However, there is high variability in individual annotators’ preferences. Notably, annotator 1 writes abstractive summaries but prefers the more extractive Instruct Davinci summaries. 
For our recruitment process, we recruit five additional annotators through Upwork and retain one writer who participated in the previous round of summary writing6. We carry out a qualification round and reject annotators whose ratings differ significantly from the authors' on a set of control questions for informativeness. We give each annotator the same set of 100 summary pairs, where the average lengths of the freelance writer summaries and the Instruct Davinci summaries are 53.2 and 52.0 words, respectively. Footnote 6: Other annotators left during the course of the study due to changes in their freelance work schedules. Figure 5 shows the results of the paired comparison. While we hypothesized that the more abstractive writing style could lead to more informative summaries, we do not find a significant effect in our annotator pool, who rate the more abstractive summaries as more informative only 51.1% of the time. On the informativeness question, our annotators reached a moderate agreement (Krippendorff's alpha is 0.32), validating our annotation instruction and recruitment process. Moving on to the more subjective overall preference, we find that our annotators equally prefer the freelance writer summaries and the Instruct Davinci summaries. However, a closer analysis shows that there is significant variability in individual annotators' preferences and the inter-annotator agreement is low (Krippendorff's alpha is 0.07). This suggests that the quality of generated summaries is getting close to that of the freelance writer summaries and that the comparison depends on each annotator's stylistic preference. One example of such stylistic preference is seen in the results from annotator 1, who also participated in the first round of summary writing. Like the other writers, annotator 1 summarizes in an abstractive style (2.5 density and 0.86 coverage). However, annotator 1 prefers Instruct Davinci 57% of the time even though it generated much more extractive summaries. 
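Agreement values like the Krippendorff's alpha numbers reported above can be computed with a short routine. Below is a minimal sketch for nominal ratings (the paper's exact computation, e.g., any handling of ordinal scales or missing ratings, may differ):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of per-item rating lists (nominal labels);
    items with fewer than two ratings are ignored."""
    o = Counter()  # coincidence matrix o[(c, k)]
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        # each ordered pair of ratings within a unit contributes 1/(m-1)
        for c, k in permutations(range(m), 2):
            o[(ratings[c], ratings[k])] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k)        # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e if d_e else 1.0
```

Perfect agreement yields alpha = 1, chance-level agreement yields 0, and systematic disagreement yields negative values.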
These results suggest an intriguing gap between annotator preferences when writing and when evaluating summaries. ### Reevaluating Reference-based Metrics In Section 3.3, we saw that the performance of automated metrics may depend on the quality of the reference summaries. With the freelance writer summaries, we now conduct an initial study on the effect of using better quality references. We focus on using Rouge-L for faithfulness evaluation on the XSUM dataset because the current reference summaries are known to be highly unfaithful (Maynez et al., 2020). In Figure 6, we plot the system-level Rouge-L against the human ratings. The left plot shows the results of computing Rouge-L with the existing reference summaries from XSUM, which has a negative correlation with human ratings. This result matches our expectation because the existing reference summaries are highly unfaithful. On the right, we see the results of computing Rouge-L with the freelance writer summaries, which leads to a much more positive correlation. Hence, we see that the usefulness of reference-based evaluation is closely linked to the quality of the references, and we can improve metric correlation by using better reference summaries. ## 5 Discussion **Implication for model development.** In this study, we build a systematic evaluation of a diverse set of LLMs and find that instruction tuning contributes the most to LLMs' summarization capability. We believe that much research beyond our benchmarking effort needs to be done to better understand the effect of instruction tuning. Here we hypothesize three aspects that could account for the success of instruction tuning. First, the quality of the summarization data used in instruction tuning can play an important role. Our findings in Section 3 show that we are currently finetuning language models on low quality training data, which can account for their ineffectiveness. 
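The system-level correlations used throughout this section (Table 3, Figure 6) pair per-system mean metric scores with per-system mean human ratings. A self-contained sketch, using hypothetical system names and scores rather than paper data:

```python
def kendall_tau(x, y):
    """Kendall's tau-a between two equally long score lists."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def system_level_tau(metric_scores, human_scores):
    """Each argument maps system name -> list of per-summary scores;
    the correlation is computed over per-system means."""
    systems = sorted(metric_scores)
    mean = lambda v: sum(v) / len(v)
    return kendall_tau([mean(metric_scores[s]) for s in systems],
                       [mean(human_scores[s]) for s in systems])

# Hypothetical example: the metric ranks the three systems the same
# way the human ratings do, so the correlation is perfect.
metric = {"sys_a": [0.20, 0.24], "sys_b": [0.30, 0.34], "sys_c": [0.40, 0.44]}
human = {"sys_a": [3.1, 3.3], "sys_b": [3.6, 3.8], "sys_c": [4.2, 4.4]}
```

A metric whose system ranking inverts the human ranking, as Rouge-L does for XSUM faithfulness, would instead produce a negative tau.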
At this point, we cannot rule out the possibility that, when finetuned on higher quality data, finetuned LMs may perform much better. Second, the learning algorithm used for instruction tuning can be important (Ouyang et al., 2022). While the exact training details are unknown, the success of Instruct Davinci might be credited to "learning from human feedback" (LHF; Stiennon et al., 2020; Ziegler et al., 2019). Contrary to supervised finetuning, which trains systems on written summaries, learning from human feedback trains systems from binary labels of human preferences. As we observe in Section 4.2, there is a discrepancy in how annotators write and rate summaries. While it is possible that LHF has merits over the supervised learning/finetuning approach in exploiting this discrepancy, more analysis is needed to validate this hypothesis. Third, multi-task learning can be important. Instruct Davinci is trained on a diverse distribution of inputs, and many previous studies have confirmed the effectiveness of multi-task learning. We look forward to understanding how summarization benefits from learning on other tasks. Implication for Summarization Evaluation.Our work also reveals the difficulties in evaluating high-performance LLMs. As LLMs become increasingly close to human-level performance, human evaluation requires a larger number of samples and less noisy measurements to evaluate the quality of LLMs. Recently, Liu et al. (2022) also pointed out the difficulties in conducting human evaluation for summarization and advocated using fine-grained semantic units to match with reference summaries. However, as our evaluation points out, not only are the existing reference summaries unreliable, but even the summaries written by well-paid freelance writers may not significantly outperform LLM summaries. Therefore, defining reference summaries as the ground truth may be overly restrictive as LLMs approach or even exceed average human-level performance. 
Not only is human evaluation limited by the reference quality, but it is also affected by the subjectivity of evaluation. The individual variation shows that there are many acceptable ways to summarize, and individuals may even show different preferences at different points in time (writing vs. rating). Taken together, these factors suggest that we may have reached the limits of single document news summarization benchmarks. Existing benchmarks can still play a role in evaluating new models, but only if evaluation is done correctly. As LLMs improve, we believe that summarization can be better grounded in downstream applications where user values are better defined, so that annotators have a lower degree of freedom in balancing which quality aspects matter most to them. ## 6 Conclusion In this work, we conducted a comprehensive human evaluation of ten LLMs across the two most popular news summarization benchmarks. Through our experiments, we find that the state-of-the-art LLM performs on par with summaries written by freelance writers, with instruction tuning being the key factor for success. Beyond these findings, our work highlights the crucial role of good reference summaries in both summarization model development and evaluation. Unless the reference quality issue is addressed, comparing zero-shot, few-shot, and finetuning performance will remain an open question, and the current benchmarks will provide limited value when used with reference-based evaluation. Even when we address the quality issue and conduct a human evaluation with high-quality references, we observe a significant amount of individual variation in our annotator pool. Due to these factors, evaluations for single document news summarization may be reaching their limits. Figure 6: System-level Rouge-L vs. annotator ratings of faithfulness. 
The left plot is computed with the XSUM references, where the correlation is weak, and the right plot is computed with the freelance writer summaries, where the correlation is much improved. ## Acknowledgement This work is supported by an Open Philanthropy grant and partially supported by a gift from Northrop Grumman. We thank the Stanford NLP group and the Stanford Center for Research on Foundation Models community for their feedback.
2307.16466
Maximum Radiation Efficiency of Arbitrarily-Shaped Implantable Antennas
Performance limitations for implanted antennas, taking radiation efficiency as the metric, are presented. The limitations are obtained via a convex optimization procedure with the current density inside the implant acting as its degree of freedom. Knowledge of the limitations provides useful information for the design procedure and physical insight. Ohmic losses in the antenna and the surrounding tissue are both considered and quantitatively compared. The interaction of all parts of the system is taken into account in a full-wave manner via the hybrid computation method. The optimization framework is thoroughly tested on a realistic implanted antenna design that is treated both experimentally and as a model in a commercial electromagnetic solver. Good agreement is reported. To demonstrate the feasibility of the developed performance limitations, they are compared to the performance of a loop and a dipole antenna, showing the importance of various loss mechanisms during the design process. The trade-off between tissue loss and antenna ohmic loss indicates critical points at which the optimal solution drastically changes and the chosen topology for a specific design should be changed.
Jakub Liska, Mingxiang Gao, Lukas Jelinek, Erik R. Algarp, Anja K. Skrivervik, Miloslav Capek
2023-07-31T07:55:46Z
http://arxiv.org/abs/2307.16466v2
# Upper Bound on Implantable Antennas ###### Abstract A hybrid approach for the computation of performance limitations for implanted antennas is presented, taking radiation efficiency as the optimized metric. Ohmic losses in the antenna and the surrounding tissue are both considered. The full-wave interaction of all parts of the system is taken into account. The method is thoroughly tested on a realistic implanted antenna design that is treated both experimentally and as a model in a commercial electromagnetic solver. Good agreement is reported. In addition, the fundamental bounds on radiation efficiency are compared to the performance of a loop and a dipole antenna, showing the importance of various loss mechanisms during the design process. The trade-off between tissue loss and antenna ohmic loss indicates critical points at which the optimal solution drastically changes and at which real designs should change their topology. Implanted biomedical devices, hybrid solution methods, antenna efficiency, fundamental bounds, tissue loss, ohmic losses, numerical methods, quadratic programming. ## I Introduction Telemedicine has experienced a significant boom during the past few decades. An important enabler of this progress is wireless communication and wireless power transfer to medical implants via radiofrequency fields, which renders the antenna a crucial element. The laws of physics, however, bound the performance of antennas, and the knowledge of these limits is essential for improving the development of telemedicine devices. Performance limitations on antennas are of key interest to antenna designers [1, 2, 3, 4, 5, 6] even outside telemedicine, and their determination has evolved into a discipline of its own. For antennas radiating into free space, such limitations can nowadays be evaluated for arbitrary metrics and shapes [7, 8, 9]. Nevertheless, the aspects of wireless communication with implanted devices [10, 11] differ significantly from those of antennas operating in free space. 
Of particular interest for this paper are physical bounds on the maximum power density of electromagnetic (EM) waves reaching free space from implanted antennas [12, 13, 14, 15, 16]. The size of an implanted antenna is typically a few millimeters, and the operating frequency ranges from hundreds of megahertz to a few gigahertz [17]. This indicates that implanted antennas are usually electrically small. Contrary to conventional antennas, the implant is first surrounded by a small volume of ideally lossless medium representing the implant encapsulation. Beyond this, the EM waves radiated by the implant must pass through lossy tissue before escaping to free space. Until recently, fundamental bounds for implanted antennas had only been treated considering elementary sources [12] with no consideration of true physical dimensions. A finite geometry, a cylindrical shell, was considered in [16] and compared to the elementary infinitesimally small sources studied before. An initial attempt to find the radiation efficiency limitation of an implanted antenna with finite conductivity and an arbitrary shape of the current supporting region was presented in [18, 19]. It was followed by a feasibility study of performance limitations on implanted antennas considering arbitrarily-shaped and lossy current-supporting regions [20]. A popular methodology [7, 21] for evaluating fundamental bounds is based on the optimal current density, an integral equation formulation, and the employment of the method of moments. This presents a major obstacle for implanted devices, which demand modeling the interaction of an electrically small antenna and a potentially electrically large body1. A direct approach using volumetric meshing of inhomogeneous tissues would be unbearably computationally demanding [22]. 
To overcome the computational difficulties, a hybrid method [23] is used in this work, which consists of a combination of two frequency-domain schemes: the method of moments (MoM) formulation of the electric field integral equation is used for the antenna and its closest neighborhood, while a vector spherical wave expansion (VSWE) is applied to numerically evaluate the interaction of the EM field with the host body [24]. This allows treating this complex problem with relatively few unknowns [24]. The hybrid method requires a spherical boundary on which the two underlying numerical methods, MoM and VSWE, are coupled. This boundary is defined naturally as a spherical encapsulation of the current support, see Fig. 1. Footnote 1: The radius of the smallest sphere circumscribing the body is much larger than a typical operating wavelength in free space. Though the hybrid method is known, it remains to determine how fundamental bounds can be formulated in this framework. The basic idea was introduced in [18, 20, 19] and is developed in this paper. It also employs some previously developed optimization schemes [21, 25]. Similarly to antennas in free space [7, 21], the computation of fundamental bounds [21, 25] is based on convex optimization, particularly a quadratically constrained quadratic program (QCQP) [26, 21]. The approach is applied to a basic scenario with a spherical single-layered phantom and the commonly used medical frequencies of 403 MHz and 2.45 GHz. Results are verified by measurement and by simulations using independent commercially available solvers: CST [27] (based on FDTD) and FEKO [28] (based on MoM). This paper is organized as follows. Section II presents the used model and the basis of the numerical tools used later. Their description is necessary to understand the following sections. Section III stresses the importance of the finite metal conductivity of the current supporting region and its consequences. 
The method verification is done in Section IV, which also presents the results of simulations in commercially available solvers and measurement results. Further investigation and discussion of the setup used for verification are given in Section V. ## II Computational Tools Modeling the wireless link between an implanted antenna and an external node is a challenging problem, resolved here using the advantages of two distinct approaches, MoM and VSWE, which are combined within the hybrid method. MoM with piece-wise basis functions [29] is applied to characterize the antenna or current supporting region at a great level of spatial detail. Conversely, VSWE serves as a set of entire-domain basis functions to determine the influence of the surrounding tissues. Computational complexity is significantly reduced [30, 31] as compared to the case in which the entire setup is solved with just a single method with entire-domain discretization. Within the quadratic representation of power-like quantities in MoM and VSWE, the physical limitations are readily evaluated as in previous studies related to MoM [7, 25, 21]. The evaluation yields the value of the fundamental bound on a chosen metric and an associated optimal current distribution, which may serve as an initial inspiration for antenna designers. ### _Method of Moments and T-matrix Hybrid_ The MoM and T-matrix methods are described in [24] and only the essential steps to hybridize them are presented here. As mentioned above, the method applies MoM for the antenna description and VSWE with a T-matrix for the surrounding tissues. #### Ii-A1 Method of Moments Within MoM, the current supporting region2, \(\Omega_{Z}\) region in Figs. 
1 and 2, is discretized into a set of elementary cells together with the current density, which is approximated by a sum of weighted basis functions \(\{\boldsymbol{\psi}_{n}\}_{n=1}^{N}\) with expansion coefficients \(I_{n}\), Footnote 2: Support for optimized current density in the sense of fundamental bounds defined in Section II-B or simply the antenna metalization in the case of a design. \[\boldsymbol{J}(\boldsymbol{r})\approx\sum_{n=1}^{N}I_{n}\boldsymbol{\psi}_{n} (\boldsymbol{r}). \tag{1}\] Substitution of (1) into the appropriate electric field integral equation (EFIE) and the use of the Galerkin method [32] results in a system of linear equations \[\left(\mathbf{Z}_{0}+\mathbf{Z}_{\rho}\right)\mathbf{I}=\mathbf{V}, \tag{2}\] where \(\mathbf{Z}_{0}\) is the free-space impedance matrix, \(\mathbf{Z}_{\rho}\) is the material impedance matrix3, \(\mathbf{V}\) is the excitation vector, and \(\mathbf{I}\) is the unknown vector of current expansion coefficients \(I_{n}\). This allows for the characterization of the antenna in free space. It should also be noted that the addition of dielectric inclusions or the use of a non-spherical encapsulation can be resolved by the use of a volumetric electric field integral equation in the same way as described above. Footnote 3: The free-space impedance matrix \(\mathbf{Z}_{0}\) gives the properties of the current as it flows in free space, meaning the radiation (real part) and reactive power (imaginary part). The material impedance matrix \(\mathbf{Z}_{\rho}\) gives the properties of the current flowing in the chosen material, meaning the losses (real part) and reactive power of the material (imaginary part). #### Ii-A2 Vector Spherical Wave Expansion The idea of field expansion into a set of basis functions is also applied in the T-matrix method. Fig. 1: Within the used hybrid method, the model is separated into regions treated by different numerical methods: MoM description of the current supporting region denoted as \(\Omega_{Z}\), and VSWE description of the host body denoted as \(\Omega_{T}\). The spherical boundary around the antenna (implanted capsule) separates the two methods and provides their interconnection. Fig. 2: Setup of the field and current expansion [18, 19]. Current density in the current supporting region \(\Omega_{Z}\) is described via vector \(\mathbf{I}\), while the VSWE of EM fields on the boundaries of the host body \(\Omega_{T}\) is collected in vectors \(\mathbf{a}_{1},\ \mathbf{f}_{1}\) and \(\mathbf{a}_{2},\ \mathbf{f}_{2}\) outside and inside the host body, respectively. Spherical boundaries of these regions and the host body are essential for the application of VSWE to describe the interaction of the fields with the host body. In this case, entire-domain basis functions are used, and the electric and magnetic fields are expanded in vector spherical harmonics \[\mathbf{E}(\mathbf{r}) =k\sqrt{Z}\sum_{\alpha}a_{\alpha}\mathbf{u}_{\alpha}^{(1)}(k\mathbf{r})+ f_{\alpha}\mathbf{u}_{\alpha}^{(4)}(k\mathbf{r}), \tag{3}\] \[\mathbf{H}(\mathbf{r}) =\mathrm{j}\frac{k}{\sqrt{Z}}\sum_{\alpha}a_{\alpha}\mathbf{u}_{ \dot{\alpha}}^{(1)}(k\mathbf{r})+f_{\alpha}\mathbf{u}_{\dot{\alpha}}^{(4)}(k\mathbf{r}), \tag{4}\] where \(k\) and \(Z\) are the wave number and background wave impedance, respectively, \(a_{\alpha},\ f_{\alpha}\) are the expansion coefficients, and \(\mathbf{u}_{\alpha}^{(1)},\ \mathbf{u}_{\alpha}^{(4)}\) are vector spherical waves [24, Appendix A] of regular and outgoing type, respectively. 
To describe the scenario of an implanted antenna, the relation between the field outside the body and in the implant (antenna) encapsulation is given in matrix form as \[\begin{bmatrix}\mathbf{f}_{1}\\ \mathbf{a}_{2}\end{bmatrix}=\begin{bmatrix}\mathbf{T}&\mathbf{\Psi}\\ \mathbf{\Psi}^{\mathrm{T}}&\mathbf{\Gamma}\end{bmatrix}\begin{bmatrix}\mathbf{a}_{1}\\ \mathbf{f}_{2}\end{bmatrix}, \tag{5}\] with \(\mathbf{T},\ \mathbf{\Gamma}\) being the transmission matrices outside and inside, respectively, matrices \(\mathbf{\Psi},\ \mathbf{\Psi}^{\mathrm{T}}\) accounting for the field penetrating from inside to outside or vice versa, and \(\mathbf{a},\ \mathbf{f}\) gathering the expansion coefficients. Sub-indices "1" and "2" correspond to the expansion outside and inside, respectively; see Figs. 2 and 3. In this paper, neither an incident field from infinity nor an external antenna is considered; therefore, \(\mathbf{a}_{1}=\mathbf{0}\) is used for simplicity in further derivations. In the case of a spherically layered host body, the interaction matrices in (5) can be evaluated analytically [33], which is also done in this paper. A representative example is a spherical encapsulation with an antenna centered in a phantom of a human head. The human head can be sufficiently approximated by a spherical multilayered host body [23], where tighter bounds on implanted antennas, as compared to [12, 15], can be evaluated in a matter of seconds using the technique proposed here. The method can be easily adapted to other scenarios like ingestible devices and is not restricted to medical telemetry, but the T-matrix has to be obtained numerically by a standard solver in the case of a non-spherical model. 
#### Ii-A3 Hybridization The coupling of the MoM and T-matrix methods is done via the projection matrix \(\mathbf{U}_{1}\), \[U_{1}^{\alpha n}=k\sqrt{Z}\left\langle\mathbf{u}_{\alpha}^{(1)},\mathbf{\psi}_{n}\right\rangle \tag{6}\] which maps vector spherical waves to basis functions from MoM as \[\mathbf{f}_{2} =-\mathbf{U}_{1}\mathbf{I}, \tag{7}\] \[\mathbf{V} =\mathbf{V}_{\mathrm{i}}+\mathbf{U}_{1}^{\mathrm{T}}\mathbf{a}_{ 2}. \tag{8}\] The above relations are written for the particular case of outgoing waves and complete excitation within MoM, where \(\mathbf{V}_{\mathrm{i}}\) is the direct excitation in the current supporting region (for example, a delta-gap source). This connection allows combining the methods from (2) and (5) into a hybrid method \[\begin{bmatrix}\mathbf{Z}_{0}+\mathbf{Z}_{\rho}&-\mathbf{U}_{1}^{\mathrm{T}} &\mathbf{0}&\mathbf{0}\\ -\mathbf{U}_{1}&\mathbf{0}&-\mathbf{1}&\mathbf{0}\\ \mathbf{0}&-\mathbf{1}&\mathbf{\Gamma}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{\Psi}&-\mathbf{1}\end{bmatrix}\begin{bmatrix}\mathbf{I}\\ \mathbf{a}_{2}\\ \mathbf{f}_{2}\\ \mathbf{f}_{1}\end{bmatrix}=\begin{bmatrix}\mathbf{V}_{\mathrm{i}}\\ \mathbf{0}\\ \mathbf{0}\\ \mathbf{0}\end{bmatrix}. \tag{9}\] ### _Upper Bound on Radiation Efficiency_ Concerning implanted antennas, radiation efficiency is one of the most important performance metrics. The performance is generally low due to the lossy tissues surrounding the antenna. The previously studied limits [12, 14] considered the cycle-mean radiated power \(P_{\mathrm{rad}}\) and the cycle-mean power absorbed by the lossy biological tissues \(P_{\mathrm{tis}}\). Within the hybrid method shown in Section II-A, ohmic losses in the antenna can also be considered and separated from the total loss. Similarly to [18, 20, 19], the upper bound on radiation efficiency is here formulated as the optimal current density which realizes the highest radiation efficiency. The expansion coefficients \(I_{n}\) are taken as the degrees of freedom in this optimization problem. 
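The block structure of hybrid system (9) can be checked numerically. In the sketch below, the physical operators are replaced by random complex matrices of compatible (toy) sizes, so only the block bookkeeping and the boundary couplings implied by (5) and (7) are illustrated, not any electromagnetic content:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 6  # toy sizes: N MoM basis functions, M spherical modes

cmat = lambda r, c: rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))
Z = cmat(N, N)    # stand-in for Z0 + Zrho
U1 = cmat(M, N)   # stand-in for the projection matrix (6)
Gam = cmat(M, M)  # stand-in for Gamma
Psi = cmat(M, M)  # stand-in for Psi
Vi = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # direct excitation

one = np.eye(M)
# Assemble the 4x4 block matrix of (9) row by row.
A = np.block([
    [Z,                -U1.T,            np.zeros((N, M)), np.zeros((N, M))],
    [-U1,              np.zeros((M, M)), -one,             np.zeros((M, M))],
    [np.zeros((M, N)), -one,             Gam,              np.zeros((M, M))],
    [np.zeros((M, N)), np.zeros((M, M)), Psi,              -one],
])
b = np.concatenate([Vi, np.zeros(3 * M)])
x = np.linalg.solve(A, b)
I_vec, a2, f2, f1 = x[:N], x[N:N + M], x[N + M:N + 2 * M], x[N + 2 * M:]
```

Substituting the solution back into the rows recovers the couplings f2 = -U1 I, a2 = Gamma f2, and f1 = Psi f2, confirming that the last three block rows encode exactly the boundary conditions between the regions.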
Explicitly, the ratio of the radiated power \(P_{\mathrm{rad}}\) to the total power \(P_{\mathrm{tot}}\) supplied to the system is maximized, \[\eta_{\mathrm{rad}}^{\mathrm{ub}}=\max_{\mathbf{I}}\eta_{\mathrm{rad}}, \tag{10}\] where \[\eta_{\mathrm{rad}}=\frac{P_{\mathrm{rad}}}{P_{\mathrm{tot}}}=\frac{P_{\mathrm{ rad}}}{P_{\mathrm{rad}}+P_{\mathrm{ant}}+P_{\mathrm{tis}}}, \tag{11}\] with \(P_{\mathrm{ant}}\) being the cycle-mean lost power in the current supporting region \(\Omega_{Z}\) and \(P_{\mathrm{tis}}\) being the cycle-mean lost power in region \(\Omega_{T}\). The individual power terms are defined within the hybrid method as follows \[P_{\mathrm{rad}} =\frac{1}{2}\mathbf{f}_{1}^{\mathrm{H}}\mathbf{f}_{1}, \tag{12}\] \[P_{\mathrm{ant}} =\frac{1}{2}\mathbf{I}^{\mathrm{H}}\mathrm{Re}\left[\mathbf{Z}_{ \rho}\right]\mathbf{I},\] (13) \[P_{\mathrm{tis}} =\frac{1}{2}\left(\mathbf{f}_{2}^{\mathrm{H}}\mathbf{f}_{2}+ \mathrm{Re}\left[\mathbf{a}_{2}^{\mathrm{H}}\mathbf{f}_{2}\right]\right)-P_{ \mathrm{rad}}. \tag{14}\] Note that the first term on the right-hand side of (14) is the net cycle-mean outward power flow at the inner boundary of \(\Omega_{T}\), see [24, Appendix E] for the derivation. As mentioned above, the current density expansion coefficients \(I_{n}\) are the optimization variables, and the first row of system (9), \(\left(\mathbf{Z}_{0}+\mathbf{Z}_{\rho}\right)\mathbf{I}-\mathbf{U}_{1}^{\mathrm{ T}}\mathbf{a}_{2}=\mathbf{V}_{\mathrm{i}}\), is therefore not imposed4 [34]. Fig. 3: Schematic diagram representing the matrix description of the host body within VSWE. Optimization only relies on the boundary conditions between the different regions, which are represented by the other rows. Specifically, the equation system Footnote 4: Note that equation system (9) has a unique solution, which binds the excitation vector with the current vector. 
In order to set an upper limit on the radiation efficiency of all antennas fitting the prescribed current supporting region, this uniqueness must be relaxed. One possible way is to skip the first row of equation system (9), which gives the necessary freedom to the current vector \(\mathbf{I}\). Particular lines of the first row can, however, be used as power constraints to further tighten the bounds and to impose constraints on input impedance, the shape of the feed, _etc_. \[\begin{bmatrix}-\mathbf{U}_{1}&\mathbf{0}&-\mathbf{1}&\mathbf{0}\\ \mathbf{0}&-\mathbf{1}&\mathbf{\Gamma}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{\Psi}&-\mathbf{1}\end{bmatrix}\begin{bmatrix} \mathbf{I}\\ \mathbf{a}_{2}\\ \mathbf{f}_{2}\\ \mathbf{f}_{1}\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\ \mathbf{0}\\ \mathbf{0}\end{bmatrix} \tag{15}\] is used as an affine constraint in the optimization problem, while the current density vector \(\mathbf{I}\) is optimized so as to maximize radiation efficiency. As seen in (12) - (14), all the power terms are represented as quadratic functions of the expansion coefficients. Converting all terms (12) - (14) to functionals of the vector \(\mathbf{I}\) via (15), optimization problem (10) is transformed into a quadratically constrained quadratic program (QCQP). Its solution can be approached numerically [21], but in this particular case, it can be solved more efficiently. After several algebraic manipulations and substitutions from (15) into (12)-(14), the optimization problem (10) reduces to the generalized eigenvalue problem (GEP) \[\left(\mathbf{U}_{1}^{\mathrm{T}}\mathbf{\Psi}^{\mathrm{T}}\mathbf{\Psi} \mathbf{U}_{1}\right)\mathbf{I}=\eta\left(\mathrm{Re}\left[\mathbf{Z}_{0}+ \mathbf{Z}_{\rho}\right]+\mathbf{U}_{1}^{\mathrm{T}}\mathbf{\Gamma}\mathbf{U}_ {1}\right)\mathbf{I}, \tag{16}\] where the left-hand-side matrix is proportional to the cycle-mean radiated power (12) and the right-hand-side matrix to the cycle-mean total supplied power. 
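Numerically, (16) is a symmetric-definite generalized eigenvalue problem whose largest eigenvalue is the sought bound. A minimal sketch with random stand-in matrices (the true matrices come from the hybrid assembly and are generally complex Hermitian; here we only illustrate the Cholesky reduction and the fact that the bound stays below one when the loss part is positive definite):

```python
import numpy as np

def max_generalized_eig(A, B):
    """Largest eta solving A x = eta B x for symmetric A and
    symmetric positive-definite B, via Cholesky reduction to a
    standard symmetric eigenvalue problem."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    return np.linalg.eigvalsh(Linv @ A @ Linv.T)[-1]  # ascending order

rng = np.random.default_rng(1)
N = 10
Q = rng.standard_normal((N, N))
A = Q.T @ Q                                # stand-in for the radiated-power matrix
Loss = rng.standard_normal((N, N))
B = A + Loss.T @ Loss + 1e-3 * np.eye(N)   # total power = radiated + lost (PD)
eta_ub = max_generalized_eig(A, B)         # upper bound on radiation efficiency
```

Because B - A is positive definite here (some power is always lost), the resulting efficiency bound lies strictly between 0 and 1.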
The maximum radiation efficiency \(\eta_{\mathrm{rad}}^{\mathrm{ub}}\) is given by the maximum eigenvalue \(\eta\), and the optimal current density corresponds to the associated eigenvector \(\mathbf{I}\).

## III Different Kinds of Losses

Implanted antennas suffer from dissipation of power in two different regions: the antenna itself and the host body. So far, only idealized EM dipole sources were considered in [12, 16], concluding that an elementary electric dipole suffers more from tissue losses than a magnetic source, mainly due to the coupling of its near field with the lossy tissues. In this section, we show that the metal losses of a given current supporting region should also be taken into account. Let us consider two frequencies within standard medical bands: \(f_{1}=403\,\mathrm{MHz}\) and \(f_{2}=2.45\,\mathrm{GHz}\). Further, let us consider the following example: the current supporting region is a copper disc with radius \(r\) swept from 0.75 mm to 7.5 mm, placed in a centered spherical air bubble with radius \(a=7.5\,\mathrm{mm}\) and implanted in a spherical muscle phantom (\(\varepsilon_{\mathrm{r,m}}=57-36\mathrm{j}\) at frequency \(f_{1}\) and \(\varepsilon_{\mathrm{r,m}}=53-13\mathrm{j}\) at frequency \(f_{2}\)) with radius \(R=61.5\,\mathrm{mm}\). The current supporting region is assumed to be made of copper with a thickness much larger than the skin depth and can therefore be modeled by a sheet of surface resistivity [35] \(R_{\mathrm{s}}=\sqrt{(2\pi f\mu_{0})/(2\sigma)}\), where the conductivity \(\sigma=5.56\cdot 10^{7}\,\mathrm{S/m}\) and the permeability of vacuum \(\mu_{0}\) are assumed. The upper bound on radiation efficiency is computed for different radii \(r\) and is shown in Fig. 4, which depicts, at each frequency, the optimal radiation efficiency as a function of the normalized radius \(r/a\). This efficiency cannot be surpassed by any real antenna fitting the considered disc region.
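The surface-resistivity model is straightforward to evaluate numerically; the sketch below merely reproduces \(R_{\mathrm{s}}=\sqrt{(2\pi f\mu_{0})/(2\sigma)}\) at the two medical-band frequencies, using the copper conductivity quoted above.

```python
import math

mu0 = 4e-7 * math.pi        # vacuum permeability (H/m)
sigma = 5.56e7              # copper conductivity (S/m), as in the text

def surface_resistivity(f):
    """Skin-effect surface resistivity R_s = sqrt(2*pi*f*mu0 / (2*sigma)) [35],
    valid when the conductor is much thicker than the skin depth."""
    return math.sqrt(2 * math.pi * f * mu0 / (2 * sigma))

Rs_f1 = surface_resistivity(403e6)     # roughly 5.3 mOhm per square
Rs_f2 = surface_resistivity(2.45e9)    # roughly 13 mOhm per square
```

Since \(R_{\mathrm{s}}\propto\sqrt{f}\), the copper sheet is about 2.5 times lossier per square at \(f_{2}\) than at \(f_{1}\), which is one ingredient of the frequency-dependent trade-off discussed next.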
An interesting phenomenon is the abrupt growth in radiation efficiency at frequency \(f_{1}\) for support sizes larger than \(r/a=0.32\). For current supports up to a radius of \(r/a=0.5\), it is advantageous to use frequency \(f_{2}\), whose performance is stable across different radii; for larger sizes, the first frequency is preferred. Better physical insight into this phenomenon is obtained by analyzing the power quantities in (12)-(14) normalized to the radiated power \(P_{\mathrm{rad}}\), i.e., the dissipation factors of the current support and the tissue, \[\delta_{\mathrm{ant/tis}}=\frac{P_{\mathrm{ant/tis}}}{P_{\mathrm{rad}}}, \tag{17}\] and the total dissipation factor \[\delta_{\mathrm{rad}}=\delta_{\mathrm{ant}}+\delta_{\mathrm{tis}}=\frac{1-\eta_{\mathrm{rad}}}{\eta_{\mathrm{rad}}}. \tag{18}\] The comparison of all dissipation factors is shown in Fig. 5. At frequency \(f_{1}\), the dissipation factors \(\delta_{\mathrm{ant}}\) and \(\delta_{\mathrm{tis}}\) are discontinuous, with the critical point being the radius \(r/a=0.32\). For smaller sizes, tissue loss dominates, while for larger sizes the antenna loss is comparable. At frequency \(f_{2}\), all curves are smooth, and antenna loss is negligible compared to tissue loss. The behavior of the power losses can be explained by observing the optimal current density for the cases of small and large current supporting regions. For the smaller size, the optimal current densities are shown in the left panels of Fig. 6; they are both \(\mathrm{TM}_{1m}\)-like\({}^{5}\).

Fig. 4: Upper bound on radiation efficiency for frequencies \(f_{1},f_{2}\) and radius of the disc normalized by the radius of the air bubble, \(r/a\in[0.1,1]\). Performance bounds are evaluated for the copper disc as the current support centered inside an air bubble with radius \(a=7.5\,\mathrm{mm}\), which is placed in the center of the muscle-filled sphere with radius \(R=61.5\,\mathrm{mm}\).

For the larger size of
the current supporting region, the optimal current density is, see the right panels in Fig. 6, \(\mathrm{TE}_{1m}\)-like at frequency \(f_{1}\) and \(\mathrm{TM}_{1m}\)-like at frequency \(f_{2}\). This explains the discontinuity in the dissipation factors at frequency \(f_{1}\), where the optimal current must dramatically switch its character to maximize radiation efficiency. The reason is that a \(\mathrm{TE}_{1m}\)-like current couples less to the lossy dielectric host body [12] and therefore becomes favorable when the borders of the current supporting region come closer to the tissue, while small-size \(\mathrm{TE}_{1m}\)-like currents can be seen as analogous to a short circuit [20]. At frequency \(f_{2}\), this switch is not necessary since the electrical size of the current support is large enough to accommodate a \(\mathrm{TM}_{1m}\)-like current with an optimal trade-off between antenna loss, due to the dense current in the center, and tissue loss, due to the short distance to the tissues. In general, however, antenna loss decreases with increasing \(r\) for both \(\mathrm{TM}_{1m}\)- and \(\mathrm{TE}_{1m}\)-like currents because more spreading of the current is allowed.

## IV Experimental Verification

The following two sections show how to use the previously developed methodology in the real design of an implanted antenna. This provides an experimental validation of the method and also demonstrates some new features that were not observed previously. To allow for a simple experimental setup, the host body is emulated by a glass bottle filled with distilled water, see Fig. 7 and Fig. 8. Distilled water has well-determined EM properties (\(\varepsilon_{\mathrm{r,w}}=78-12\mathrm{j}\) at frequency \(f_{2}\)), and the EM properties of glass are similarly well defined (\(\varepsilon_{\mathrm{r,g}}\approx 6.7\)). These values are also used by the commercial EM solver CST, which is employed as a virtual version of the experiment.
The outer radius of the bottle is \(R_{1}=63.5\,\mathrm{mm}\) and the glass thickness is \(t=2.0\,\mathrm{mm}\). The implanted antenna, see Fig. 9, is designed and measured for frequency \(f_{2}\). It consists of a \(100\,\mathrm{\mu m}\) thick polyimide disc (\(\varepsilon_{\mathrm{r}}=3.5\)) with \(18\,\mathrm{\mu m}\) thick copper cladding and two ceramic hemispheres as its encapsulation. A meander dipole antenna is etched in the copper.

Fig. 5: Dissipation factors corresponding to the upper bound on radiation efficiency shown in Fig. 4. The partial dissipation factors of the current supporting region and the muscle host body are also shown. The radius of the disc is, as in Fig. 4, normalized by the radius of the air bubble.

Fig. 6: Optimal current densities on a disc in a muscle host body corresponding to the data reported in Fig. 4 and Fig. 5: (top left) \(\mathrm{TM}_{1m}\)-like for \(r/a=0.1\) and \(f_{1}=403\,\mathrm{MHz}\), (top right) \(\mathrm{TE}_{1m}\)-like for \(r/a=1\) and \(f_{1}=403\,\mathrm{MHz}\), (bottom left) \(\mathrm{TM}_{1m}\)-like for \(r/a=0.1\) and \(f_{2}=2.45\,\mathrm{GHz}\), (bottom right) \(\mathrm{TM}_{1m}\)-like for \(r/a=1\) and \(f_{2}=2.45\,\mathrm{GHz}\).

Fig. 7: Measurement setup placed in the anechoic chamber. The ceramic-encapsulated antenna is placed in the center of a spherical phantom filled with distilled water.

Fig. 8: Simplified scenario of the measurement: c – ceramic, w – water, g – glass. The parameters are: spherical glass bottle (\(\varepsilon_{\mathrm{r,g}}=6.7\)) with an outer radius of \(R_{1}=63.5\,\mathrm{mm}\), a glass thickness \(t=2\,\mathrm{mm}\), water radius \(R_{2}=R_{1}-t=61.5\,\mathrm{mm}\), and ceramic encapsulation \(a=7.5\,\mathrm{mm}\). The true shape of the antenna (\(f_{2}\)) is detailed in Fig. 9. In the case of the fundamental bound, the current supporting region is a disc of radius \(r\in[0.75,7.5]\) mm.
The antenna is connected to a copper-jacketed semi-rigid cable (EZ-34, EZ Form Cable) through a chip balun (2450BL15B050, Johanson Technology) and a short coplanar stripline. A slot in the top ceramic hemisphere is machined to accommodate the balun and the cable. The ceramic hemispheres are made of \(\mathrm{Al_{2}O_{3}}\) (alumina, \(\varepsilon_{r}=12\) and \(\tan\!\delta=0.0003\), CST model), which is bio-compatible and also provides good matching of the antenna. First, the antenna setup described above is used to validate the hybrid method on which the fundamental bound is based. To this end, the gain pattern of the antenna was measured in an anechoic chamber, see Fig. 10 and Appendix A for details. Apart from the measurement and the hybrid method, the gain patterns were also evaluated in the commercial solvers CST and FEKO. All simulations omit the thin, flexible substrate and the losses in the ceramic encapsulation. All the gain patterns are depicted in Fig. 10 and show excellent agreement. The maximum gain values \(G(\pi/2,0)\) are \(-16.9\,\mathrm{dB}\) for the pattern simulated with the hybrid method using the Antenna Toolbox for Matlab [36] (AToM), \(-16.9\,\mathrm{dB}\) in CST, \(-16.5\,\mathrm{dB}\) in FEKO, and \(-17.6\,\mathrm{dB}\) for the measurement. The computational requirements of the commercial solutions (using the FDTD method in CST and the surface equivalence formulation in FEKO) were considerable due to the high dielectric constants and the overall large electric size: the data presented in Fig. 10 demanded hours of computer time. In comparison, the hybrid method required only a few seconds and an order of magnitude less computer memory for the same task with the same discretization of the dipole antenna. Even for the finest meshing allowed by the computer used, the results from the commercial solvers were still slightly mesh dependent, and the hybrid method is therefore considered more reliable for this task.
The radiation efficiency of the experimental setup, see Fig. 11, is also compared with the performance of a loop and a dipole antenna, which were made of a copper strip of width \(w=0.5\,\mathrm{mm}\) and fed by an ideal delta-gap voltage source. The results once more show good agreement between the numerical simulations, with only slight variations attributed to the different numerical schemes. Importantly, the figure also shows the fundamental bound on radiation efficiency for a current supporting region in the form of a disc accommodating all the antenna models used. The performance of the experimental setup lies only approximately \(1\,\mathrm{dB}\) below the fundamental bound. This means that, for this material composition and shape of the design region (a disc), little can be done to improve the design. The hybrid method shows that the interaction with the host material is mostly dipolar and therefore does not depend much on the detailed composition of the host body. It can be expected that this design will perform similarly in real-body conditions.

Fig. 11: The radiation efficiency of the meander dipole antenna as simulated with the hybrid method, CST, and FEKO at frequency \(f_{2}\). The performances of a simple circular loop and a straight dipole antenna are also shown. The fundamental bound on the performance of a disc current supporting region is the upper bound on the radiation efficiency of all considered antenna designs. The dipole antenna, loop antenna, and fundamental bound correspond to varying sizes of the current support, measured by the radius \(r\) of the smallest circumscribing sphere. The sizes are normalized to the radius of the ceramic encapsulation, i.e., at \(r/a=1\) the antenna fills the entire capsule.

Fig. 10: Simulated and measured gain pattern cuts plotted in dB scale at frequency \(f_{2}\) using the setup in Fig. 7. The cut plane corresponds to \(\theta=90^{\circ}\).
Polarization in the plane of the dipole antenna is measured.

Fig. 9: Photograph and design of the ceramic-encapsulated implanted antenna: antenna encapsulated in the ceramic hemispheres (top left panel), decomposed antenna and encapsulation (top right panel), visualization of the meander dipole antenna including dimensions and interconnection with the balun via a \(3.2\,\mathrm{mm}\) long coplanar stripline with a strip width of \(0.18\,\mathrm{mm}\) and a gap width of \(0.1\,\mathrm{mm}\) (bottom left panel), and encapsulation hemisphere dimensions (bottom center and bottom right panels).

## V Discussion

In order to understand the design principles of implanted antennas in more detail, it is worth studying the simple cases of the loop and dipole antennas in Fig. 11. The dipole antenna shows a positive trend in radiation efficiency with increasing length, while the loop antenna achieves its maximum performance around a radius of \(r=0.6a\). These trends can be explained by separating the dissipation factors into three contributions, as depicted in Fig. 12. With increasing length of the dipole antenna, the tissue dissipation factor stays constant while the antenna dissipation factor declines, a trend inherited by the total dissipation factor. For the circular loop antenna, the antenna dissipation factor completely dominates for small radii up to \(r/a=0.3\). Its decay with increasing radius is faster for the circular loop than for the straight dipole antenna due to the increase of the impedance. However, at a circumscribing sphere radius of \(0.5a\), the circumference of the circular loop becomes comparable to the wavelength in the surrounding medium, and a further increase in size stops the decrease of the antenna dissipation factor and increases the tissue dissipation factor.
Due to high antenna losses, the performance gap (the distance to the fundamental bound) at small electrical sizes is significant for the straight dipole antenna (being comparable to the tissue losses) and even wider for the loop antenna. To see the reason for this behavior, it is instructive to plot how the dissipation factors corresponding to the fundamental bound, see Fig. 13, behave when the size of the current supporting region changes. The total dissipation factor is approximately constant and dominated by the tissue dissipation factor, while the antenna dissipation factor is discontinuous at \(r/a\approx 0.35\). As can be seen in Fig. 14, for small electric sizes, a \(\mathrm{TM}_{1m}\)-like current realizes the optimal current density, whereas a \(\mathrm{TM}_{2m}\)-like current is optimal at larger electric sizes. The switch between these two current profiles is represented by the abrupt jump seen in Fig. 13. This critical point at \(r/a\approx 0.35\) occurs because of a slight improvement in radiation efficiency: a small drop in the tissue dissipation factor at the price of an increase in the antenna dissipation factor; the radiation of an implanted antenna must always be seen as a trade-off between the antenna and tissue dissipation factors. The \(\mathrm{TM}_{2m}\)-like current shows that, in certain scenarios, antennas using higher-order modes can perform better than ones using only dipole currents.

## VI Conclusion

The use of the hybrid method for computing radiation efficiency limitations of implanted antennas is shown in this paper. The hybrid method not only allows for a dramatic reduction of computational demands (an order of magnitude faster than commercial solutions of the same setup) but also reveals the different contributions of the losses in the problem of an antenna radiating through lossy media.
The technique is applied to a disc-like current supporting region that is encapsulated in a dielectric and placed in the host body, i.e., a setup able to accommodate many practical implanted antenna designs. The paper shows how the upper bound on the radiation efficiency of all these designs can easily be evaluated. Furthermore, the paper reveals the importance of ohmic loss in the antenna proper as compared to tissue loss. When the size of the current supporting region is varied, the trade-off between ohmic losses in the antenna and in the tissue drastically changes the shape of the optimal current density. Critical points were identified where a \(\mathrm{TE}_{1m}\)-like current density performs better than a \(\mathrm{TM}_{1m}\)-like current and vice versa. This demonstrates the importance of fundamental limitations on the performance of implanted antennas and the usefulness of evaluating them for a given scenario before starting a design, so that the achievable bounds are known. The method is verified by measurement as well as by computation in commercial solvers, the example being a meander dipole with a balun immersed in a phantom filled with distilled water. Excellent agreement is achieved between the results from the hybrid method, the commercial simulators, and the measurement. A detailed study of the fundamental bound on radiation efficiency in this scenario has also been performed, concluding that ohmic loss in an implanted antenna must be considered in the design.

## Appendix A Measurement

The gain pattern of the implanted meander dipole antenna within a glass bottle filled with distilled water is measured at a distance of 2.1 m from the reference antenna (a quad-ridge horn antenna, QH400, MVG Industries), see Fig. 7. The meander dipole antenna is oriented along the \(y\) axis.

Fig. 12: Dissipation factors of the meander dipole (black markers), a circular loop (orange lines), and a straight dipole antenna (blue lines). The same normalization as in Fig. 11 is used.

Fig. 13: Dissipation factors corresponding to the upper bound on radiation efficiency for the water host body. The same normalization as in Fig. 11 is used.
arXiv: 2303.00095
Title: Modeling low- and high-frequency noise in transmon qubits with resource-efficient measurement
Abstract: Transmon qubits experience open system effects that manifest as noise at a broad range of frequencies. We present a model of these effects using the Redfield master equation with a hybrid bath consisting of low and high-frequency components. We use two-level fluctuators to simulate 1/f-like noise behavior, which is a dominant source of decoherence for superconducting qubits. By measuring quantum state fidelity under free evolution with and without dynamical decoupling (DD), we can fit the low- and high-frequency noise parameters in our model. We train and test our model using experiments on quantum devices available through IBM quantum experience. Our model accurately predicts the fidelity decay of random initial states, including the effect of DD pulse sequences. We compare our model with two simpler models and confirm the importance of including both high-frequency and 1/f noise in order to accurately predict transmon behavior.
Authors: Vinay Tripathi, Huo Chen, Eli Levenson-Falk, Daniel A. Lidar
Published: 2023-02-28T21:46:03Z
Link: http://arxiv.org/abs/2303.00095v1
# Modeling low- and high-frequency noise in transmon qubits with resource-efficient measurement

###### Abstract

Transmon qubits experience open system effects that manifest as noise at a broad range of frequencies. We present a model of these effects using the Redfield master equation with a hybrid bath consisting of low and high-frequency components. We use two-level fluctuators to simulate \(1/f\)-like noise behavior, which is a dominant source of decoherence for superconducting qubits. By measuring quantum state fidelity under free evolution with and without dynamical decoupling (DD), we can fit the low- and high-frequency noise parameters in our model. We train and test our model using experiments on quantum devices available through IBM quantum experience. Our model accurately predicts the fidelity decay of random initial states, including the effect of DD pulse sequences. We compare our model with two simpler models and confirm the importance of including both high-frequency and \(1/f\) noise in order to accurately predict transmon behavior.

## I Introduction

Quantum computing based on superconducting qubits has made significant progress. Starting with the first implementations of superconducting qubits [1; 2], the field has developed several flavors of qubits, broadly classified as charge, flux, and phase qubits [3]. However, the real workhorse behind many of the recent critical developments [4; 5; 6; 7; 8; 9; 10; 11; 12; 13] in gate-based quantum computing is the transmon qubit [14]. Transmons are designed by adding a large shunting capacitor to charge qubits, the result being that they are almost insensitive to charge noise. Transmon-based cloud quantum computers (QCs) are now widely available to the broad research community for proof-of-principle quantum computing experiments [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. Quantum computers in their current form have high error rates.
This includes coherent errors (originating from imperfect gates), state preparation and measurement errors (SPAM), and incoherent errors (environment-induced noise). The latter, which results in dephasing and relaxation errors, is a pernicious problem in quantum information processing. Characterizing and modeling these open quantum system effects is crucial for advancing the field and improving the prospects of fault-tolerant quantum computation [26; 27; 28]. Various procedures for modeling decoherence and control noise affecting idealized qubits have been discussed [29; 30; 31]. Still, modeling noise effects from first principles, i.e., starting at the circuit level of transmons and including \(1/f\) noise, is relatively unexplored [32]. In this work, we develop a framework to model environment-induced noise effects on a transmon qubit using the master equation formalism. We use a hybrid quantum bath with an Ohmic-like noise spectrum to model dephasing and relaxation processes in multi-level transmons. We also include classical fluctuators and use a hybrid Redfield model to account for both low- (\(1/f\)) and high-frequency noise. We develop a simple noise learning procedure relying on dynamical decoupling (DD) [33; 34; 35; 36; 37] to obtain the noise parameters (see Ref. [38] for early experimental work in this area). Our procedure relies only on measurements of quantum state fidelity with and without a single type of DD sequence, and so is quite resource-efficient compared to protocols requiring full quantum state tomography or DD-based spectral analysis. We test our noise model via fidelity preservation experiments on IBMQE processors for random initial states and find that the model can correctly capture these experiments. The model is, moreover, capable of reproducing the effects of time-dependent dynamical decoupling pulses on the main qubit. 
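The role of DD in this kind of learning procedure can be illustrated with a toy quasi-static dephasing model (all parameters below are hypothetical, and a single ideal echo pulse stands in for the device-level DD sequences): a frequency offset that is frozen within a shot but varies between shots, mimicking \(1/f\)-dominated noise, dephases a superposition under free evolution, while a refocusing pulse cancels it.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def free_u(delta, t):
    # Free evolution under H = (delta/2) * sigma_z for a quasi-static offset delta
    return np.diag([np.exp(-1j * delta * t / 2), np.exp(1j * delta * t / 2)])

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+> initial state
rng = np.random.default_rng(1)
T = 1.0
fid_free, fid_echo = [], []
for delta in rng.normal(0.0, 4.0, 500):  # offsets frozen per shot, varying shot to shot
    psi_free = free_u(delta, T) @ plus
    # One ideal, instantaneous X pulse at T/2 (a Hahn-echo stand-in for DD)
    psi_echo = free_u(delta, T / 2) @ X @ free_u(delta, T / 2) @ plus
    fid_free.append(abs(plus.conj() @ psi_free) ** 2)
    fid_echo.append(abs(plus.conj() @ psi_echo) ** 2)
F_free, F_echo = float(np.mean(fid_free)), float(np.mean(fid_echo))
```

Averaged over shots, the free-evolution fidelity decays toward \(1/2\) while the echoed fidelity stays near unity; the gap between the two curves is the kind of signal a fidelity-based fitting procedure can exploit to separate low-frequency from high-frequency noise contributions.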
Finally, we compare the predictions based on our model with two simpler models using ideal two-level qubits, excluding the fluctuators and assuming ideal, zero-width DD pulses. In contrast to our complete model, these simpler models fail to capture noise simultaneously in both the low- and high-frequency regimes. As a result, whether with or without DD, they underperform in capturing the fidelity preservation experiments. This paper is organized as follows. In Section II, we develop our numerical method, focusing on simulating a multi-level transmon qubit and the single-qubit gates which form the DD sequences. Next, we discuss our open quantum system in Section III and describe our noise learning method in Section IV. We then test our learned model on Quito using DD experiments with random initial single-qubit states in Section V. We extend our method to Lima, which relies on a different calibration procedure, in Section VI. We conclude in Section VII. The appendix provides additional details and calculations in support of the main text.

## II Numerical model of transmons

In this section, we focus on the circuit-level description of the transmon qubit that we use in our model. We start with the transmon Hamiltonian and find an effective Hamiltonian to simulate single-qubit time-dependent microwave gates. We include the Derivative Removal of Adiabatic Gates (DRAG) [39] technique in our numerical model. DRAG is a state-of-the-art technique used to improve the performance of single-qubit gates by suppressing leakage and phase errors. The former refers to the non-zero population of non-computational levels at the end of a pulse, while the latter is a type of coherent error that results from the non-zero population of non-computational levels _during_ the pulse: the admixture of such levels leads to a phase shift of the computational levels, resulting in a net phase error at the end of the pulse.
By including the DRAG technique (used in the IBMQE devices) and considering the residual errors it is unable to suppress, we more accurately model the transmon behavior.

### Transmon Hamiltonian

The Hamiltonian of a fixed-frequency transmon qubit is [14]: \[H_{\text{trans}}=4E_{C}\left(\hat{n}-n_{g}\right)^{2}-E_{J}\cos\hat{\varphi}. \tag{1}\] We work in units where \(\hbar=1\). \(E_{C}=e^{2}/(2C)\) is the charging energy (\(C\) is the capacitance and \(e\) is the electron charge), \(E_{J}=I_{C}/(2e)\) is the potential energy of the Josephson junction (\(I_{C}\) is the critical current of the junction), and \(n_{g}\) represents the charge offset number, which can result in charge noise. In the operating regime of a transmon qubit, i.e., \(E_{J}/E_{C}\gg 1\), the lowest few energy levels of the transmon are almost immune to charge noise, in which case \(n_{g}\) can be safely ignored. The two operators \(\hat{n}\) and \(\hat{\varphi}\) are a canonically conjugate pair analogous to momentum and position. They satisfy the commutation relation \([\hat{n},\hat{\varphi}]=i\); \(\hat{n}\) is the number operator for the Cooper pairs transferred between the superconducting islands of the Josephson junction, and \(\hat{\varphi}\) is the gauge-invariant phase difference across the Josephson junction, i.e., between the islands.

### Time-dependent drives

To numerically simulate the time-dependent drive pulses or gates, we start with Eq. (1) and write it in the charge basis (the eigenbasis of \(\hat{n}\)) such that the number of Cooper pairs takes values from \(-n_{\text{max}}\) to \(n_{\text{max}}\). Eq. (1) thus reduces to \[H_{\text{trans}}=4E_{C}\sum_{n=-n_{\text{max}}}^{n_{\text{max}}}n^{2}|n\rangle\langle n|-\frac{E_{J}}{2}\sum_{n=-n_{\text{max}}}^{n_{\text{max}}-1}\left(|n\rangle\langle n+1|+|n+1\rangle\langle n|\right), \tag{2}\] where we have taken \(n_{g}=0\) since we are in the transmon regime.
We truncate at \(n_{\text{max}}\) (later we set \(n_{\text{max}}=50\)) and diagonalize the resulting Hamiltonian: \[H_{\text{trans}}^{\text{eigen}}=SH_{\text{trans}}S^{\dagger}=\sum_{k\geq 0}\omega_{k}|k\rangle\!\langle k|\, \tag{3}\] where \(\omega_{k}\) for \(k=0,1,...\) represents the energy of the \(k^{\text{th}}\) level in the transmon eigenbasis, and \(S\) is the unitary similarity transformation. The eigenfrequencies are \(\omega_{ij}\equiv\omega_{i}-\omega_{j}\). The _bare qubit frequency_ is \(\omega_{q}\equiv\omega_{10}\) and the _anharmonicity_ is \(\eta_{q}\equiv\omega_{10}-\omega_{21}\). Since \(\omega_{q}\) and \(\eta_{q}\) are the two quantities accessible via experiments, we use these values to obtain the fitting parameters \(E_{C}\) and \(E_{J}\) in Eq. (1), which is the starting point of our transmon model.1

Footnote 1: In more detail, we try different values of \(E_{J}\) and \(E_{C}\) in Eq. (1) by diagonalizing the corresponding \(H_{\text{trans}}\) and comparing the result with the experimental values of \(\omega_{q}\) and \(\eta_{q}\). Once we find the values of \(E_{J}\) and \(E_{C}\) yielding the closest match, we proceed to Eq. (3) to find the full spectrum.

Next, we add the coupling to the microwave drive, which couples to the transmon charge operator. The total system Hamiltonian can then be written as \[H_{\text{sys}}=H_{\text{trans}}^{\text{eigen}}+\varepsilon(t)\cos\left(\omega_{\text{d}}t+\phi_{\text{d}}\right)\hat{n}\, \tag{4}\] where \(\varepsilon(t)\) is the pulse envelope, \(\omega_{\text{d}}\) is the drive frequency, and \(\phi_{\text{d}}\) is the phase of the drive. We can simplify Eq. (4) by first writing the charge operator in the transmon eigenbasis of Eq. (3), i.e., \(\hat{n}=\sum_{k,k^{\prime}}\bra{k}\hat{n}\ket{k^{\prime}}\ket{k}\!\bra{k^{\prime}}\), and considering the charge coupling matrix elements.
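Stepping back to Eqs. (1)-(3), the charge-basis construction and diagonalization behind the fitting procedure of footnote 1 take only a few lines. The \(E_{C}\) and \(E_{J}\) values below are hypothetical, chosen only to land in the transmon regime \(E_{J}/E_{C}\gg 1\) (energies in GHz with \(h=1\)):

```python
import numpy as np

def transmon_levels(EC, EJ, nmax=50, nlev=3):
    """Build Eq. (2) in the charge basis and diagonalize it as in Eq. (3)."""
    dim = 2 * nmax + 1
    n = np.arange(-nmax, nmax + 1, dtype=float)
    H = 4 * EC * np.diag(n ** 2)                      # charging term
    H -= (EJ / 2) * (np.eye(dim, k=1) + np.eye(dim, k=-1))  # Josephson hopping
    w = np.linalg.eigvalsh(H)                         # ascending eigenvalues
    return w[:nlev] - w[0]                            # energies above the ground state

# Hypothetical parameters: EC = 0.2 GHz, EJ/EC = 50
w0, w1, w2 = transmon_levels(EC=0.2, EJ=10.0)
omega_q = w1 - w0                 # bare qubit frequency omega_10
eta_q = (w1 - w0) - (w2 - w1)     # anharmonicity, close to EC in this regime
```

Inverting this map, i.e., scanning \(E_{C}\) and \(E_{J}\) until `omega_q` and `eta_q` match the measured values, is exactly the fit described in footnote 1; the asymptotic relations \(\omega_{q}\approx\sqrt{8E_{J}E_{C}}-E_{C}\) and \(\eta_{q}\approx E_{C}\) provide a good starting guess.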
Only nearest-level couplings \(\bra{k}\hat{n}\ket{k\pm 1}\) are found to be non-negligible, allowing us to ignore all higher order terms: \[\hat{n}\approx\sum_{k\geq 0}\bra{k}\hat{n}\ket{k+1}\ket{k}\!\bra{k+1}+\text{h.c.} \tag{5}\] Transforming into a frame rotating with the drive and employing the rotating wave approximation (RWA), we obtain, for \(\phi_{\text{d}}=0\), the effective Hamiltonian \[\tilde{H}_{\text{sys}}= \ \sum_{k\geq 0}\left(\omega_{k}-k\omega_{\text{d}}\right)\ket{k} \!\bra{k} \tag{6}\] \[+\frac{\varepsilon(t)}{2}\sum_{k\geq 0}g_{k,k+1}(|k\rangle\! \bra{k+1}+|k+1\rangle\!\bra{k})\,\] where \(g_{k,k+1}\equiv\bra{k}\hat{n}\ket{k+1}\). By tuning \(\phi_{\rm d}\), we can implement a rotation about any axis in the \((x,y)\) plane of the qubit subspace (after an additional projection). In particular, taking \(\phi_{\rm d}=0\) or \(\pi/2\) corresponds to a rotation about the \(x\) or \(y\) axis, respectively. Appendix A provides a derivation of Eq. (6) from Eq. (4). The pulse envelope \(\varepsilon(t)\) plays a vital role in the final implementation of the gate. Since we are interested mainly in applying \(\pi\) pulses, we choose \[\int_{0}^{t_{g}}\varepsilon(t)dt=\pi\, \tag{7}\] where \(t_{g}\) is the pulse or gate duration. For our numerical simulations, we choose Gaussian-shaped pulses with envelopes given by \[\varepsilon(t)=\varepsilon\left[G(t,t_{g},\sigma)-G(0,t_{g},\sigma)\right] \left(\Theta(t)-\Theta\left(t-t_{g}\right)\right)\, \tag{8}\] where \[G(t,t_{g},\sigma)=\exp\left(-\frac{\left(t-t_{g}/2\right)^{2}}{2\sigma^{2}} \right). \tag{9}\] Here \(\varepsilon\) is the maximum drive amplitude during the pulse, \(\Theta(t)\) is the step function, and \(\sigma\) is the standard deviation of the Gaussian pulse. An essential aspect of gate design is that population should not leak to higher levels of the transmon, i.e., the drive pulses should be bandwidth-limited (adiabatic). 
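Combining Eqs. (7)-(9): the amplitude is fixed by normalizing the pulse area to \(\pi\), and, since \(H_{X}(t)\) in the two-level subspace commutes with itself at different times, the resulting propagator is exactly \(\exp(-i\pi\sigma^{x}/2)\), i.e., a bit flip up to a global phase. A minimal numerical sketch (gate time \(t_{g}=70\) ns as in the text, all other choices illustrative):

```python
import numpy as np

def gaussian_envelope(t, eps0, tg, sigma):
    """Truncated Gaussian envelope of Eqs. (8)-(9); vanishes at t = 0 and t = tg."""
    G = lambda x: np.exp(-((x - tg / 2.0) ** 2) / (2.0 * sigma ** 2))
    return eps0 * (G(t) - G(0.0))

tg = 70.0             # gate duration (ns)
sigma = tg / 6.0
t = np.linspace(0.0, tg, 4001)

# Trapezoidal pulse area for unit amplitude, then rescale so the area is pi, Eq. (7).
y = gaussian_envelope(t, 1.0, tg, sigma)
unit_area = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
eps0 = np.pi / unit_area
theta = eps0 * unit_area          # total rotation angle, equal to pi

# Since [H_X(t), H_X(t')] = 0, the propagator is exp(-i*(theta/2)*sigma_x).
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X
p1 = abs(U[1, 0]) ** 2            # population transferred from |0> to |1>
```

In the full multi-level model this commutativity no longer holds, which is precisely why leakage and phase errors appear and why the pulse bandwidth considerations below matter.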
An accurate measure of these off-resonant excitations is the Fourier transform of the pulse envelope at the detuning frequencies [40; 41]. For example, consider a Gaussian pulse with standard deviation \(\sigma\). Its Fourier transform has a standard deviation proportional to \(1/\sigma\), which means that the drive pulse applied at the qubit frequency \(\omega_{q}\) has a frequency spread close to \(1/\sigma\) about \(\omega_{q}\). If \(1/\sigma\) is of the order of the anharmonicity of the transmon, the pulse spectral width will overlap with some of the higher-level transitions. Fig. 1 shows the Gaussian pulse envelope and its Fourier transform, and illustrates how choosing a shorter gate time results in a larger frequency spread and vice versa. The use of DRAG pulses mitigates this leakage, as discussed further below. In the two-state (qubit) subspace, Eq. (6) reduces to \[H_{X}(t)=\frac{\varepsilon(t)}{2}\left(|0\rangle\!\langle 1|+|1\rangle\! \langle 0|\right)=\frac{\varepsilon(t)}{2}\sigma^{x}, \tag{10}\] where \(g_{0,1}=g\) has been absorbed into \(\varepsilon(t)\) and \(\sigma^{x}\) is the Pauli \(X\) matrix. When \(H_{X}(t)\) is evolved for a time \(t_{g}\) such that Eq. (7) is satisfied, the resulting unitary is an ideal \(X_{\pi}\) gate. To include the effect of higher levels, we first use the full gate Hamiltonian from Eq. (6) and then project the result to the qubit subspace. The gate fidelity averaged over all input states in the qubit Hilbert space can be written as the average over the six polar states (i.e., the six eigenstates of \(\sigma^{x},\sigma^{y}\), and \(\sigma^{z}\)) [42; 39]: \[F_{g}=\frac{1}{6}\sum_{j=\pm x,\pm y,\pm z}\mathrm{Tr}\left[U_{\rm ideal}\rho_ {j}^{\rm 1q}U_{\rm ideal}^{\dagger}\Pi[\rho(t_{g})]\right]\, \tag{11}\] where \(\rho_{j}^{\rm 1q}\) is the single qubit density matrix, and \(U_{\rm ideal}\) represents the ideal unitary corresponding to the gate we wish to study. 
\(\Pi[\rho(t_{g})]\) is the projection of the full density matrix onto the single qubit subspace. \(F_{g}\) compares the density matrix \(\rho\) after application of the gate (i.e., at \(t=t_{g}\)) with the expected density matrix in the qubit subspace.

Figure 1: Gaussian pulse envelope (solid, orange) [see Eq. (8)] and its Fourier transform (dashed, blue) with amplitude \(\varepsilon\) chosen to keep both in the range \([0,1]\). (a) and (b) show the pulse with gate time \(t_{g}=70\) and \(10\) ns, respectively, and \(\sigma=t_{g}/6\). The bottom horizontal axis represents time in ns, and the top horizontal axis represents frequency in GHz. Shorter gate times result in a larger frequency spread of the spectrum, with associated larger leakage, as illustrated in (c), which shows the frequency spectrum corresponding to a \(t_{g}=10\) ns gate (left), compared to the energy levels (right) \(|0\rangle\), \(|1\rangle\) and \(|2\rangle\) of the transmon. The energy levels are shown in the rotating frame such that \(E_{|0\rangle}=E_{|1\rangle}\) and \(E_{|2\rangle}=-\eta_{q}=-200\) MHz. As indicated by the dashed horizontal line, the spectrum overlaps with level \(|2\rangle\), resulting in leakage into this level from the \(\{|0\rangle\,,|1\rangle\}\) qubit subspace. The sampling frequency used to compute the Fourier transform is \(10\) GS/s, which is state-of-the-art in experiments; the pulses that control the IBM processors used in this work have a sampling frequency of \(\sim 5\) GS/s.

To reduce phase errors caused by the presence of additional levels, a commonly used trick to implement single qubit gates such as \(X_{\pi}\) is to break the gate into two halves where each half performs a \(\pi/2\) rotation, accompanied by some virtual \(Z\) rotations [43; 44]. Numerically, we observe that with four levels included in the transmon Hamiltonian and a total gate duration \(t_{g}=70\) ns, with \(\sigma=t_{g}/6\), the average single-qubit gate error \(1-F_{g}\) can
be suppressed by around 20% if we use two such pulses instead of a single long pulse. The exact quantitative improvement depends on other model parameters. We observed this error reduction in a closed system setting with no environmental coupling, and so any fidelity improvement may be counteracted by open-system effects. For a detailed numerical study of the errors of time-dependent gates with transmon qubits in an open-system setting, see Ref. [45].

### DRAG

Derivative Reduction by Adiabatic Gate (DRAG) [39; 46] is a useful technique to reduce both the leakage and the phase errors which accumulate during the operation of single-qubit gates. The most standard DRAG technique is to add the derivative of the pulse envelope to the quadrature component such that the final form of the pulse envelope \(\tilde{\varepsilon}(t)\) is given by \[\tilde{\varepsilon}(t)=\varepsilon(t)+i\alpha\frac{\dot{\varepsilon}(t)}{\eta_{q}}\, \tag{12}\] where \(\eta_{q}\) is the transmon anharmonicity and \(\alpha\) is a constant that can take different values depending on which errors need to be suppressed, e.g., \(\alpha=1\) to suppress leakage and \(\alpha=1/2\) to suppress coherent phase errors [47]. Eq. (12) implies that if we need to apply a single \(X\) gate whose pulse envelope is given by \(\varepsilon(t)\), then \(\alpha\dot{\varepsilon}(t)/\eta_{q}\) needs to be applied along the \(y\)-axis. IBMQE devices use DRAG, but the exact pulse parameters are not available to users. Instead, the value of \(\alpha\) in our simulations can be optimized numerically to match the experimentally reported gate fidelity. This is the approach we take here, with the goal being to model experiments on IBMQE devices with single-qubit gate errors of the order of \(10^{-3}\).
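A minimal sketch of the DRAG pulse of Eq. (12) (ours, not the authors' code; the 200 MHz anharmonicity is the illustrative value from Fig. 1, not a fitted parameter). The quadrature component is proportional to the derivative of the in-phase envelope, so it vanishes at the pulse center and is antisymmetric about it:

```python
import numpy as np

def drag_envelope(t, t_g, sigma, eta_q, alpha=1.0):
    """Eq. (12): in-phase Gaussian envelope plus a derivative quadrature term.

    alpha = 1 targets leakage suppression, alpha = 1/2 coherent phase errors.
    """
    G = np.exp(-((t - t_g / 2.0) ** 2) / (2.0 * sigma ** 2))
    eps = G - np.exp(-((t_g / 2.0) ** 2) / (2.0 * sigma ** 2))
    deps = -(t - t_g / 2.0) / sigma ** 2 * G  # analytic derivative of eps
    return eps + 1j * alpha * deps / eta_q

t_g, sigma = 70.0, 70.0 / 6.0          # ns
eta_q = 2.0 * np.pi * 0.2              # assumed 200 MHz anharmonicity, in rad/ns
t = np.linspace(0.0, t_g, 1001)
pulse = drag_envelope(t, t_g, sigma, eta_q)
```

The real part is the usual symmetric Gaussian envelope; the imaginary part is what gets applied along the \(y\)-axis.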
An error of \(10^{-3}\) is the value reported using randomized benchmarking [48], which equals the average gate infidelity (\(1-F_{g}\)) when the gate set has gate-independent errors [49; 50], an assumption we make here to justify the use of \(10^{-3}\) as our target gate infidelity. We perform closed-system simulations and find that without DRAG, the gate infidelity (due to leakage and phase errors) is in the range \(10^{-2}-10^{-3}\). With DRAG, we find that varying \(\alpha\) from \(1/2\) to \(1\) increases the gate infidelity from \(\sim 10^{-6}\) to \(10^{-3}\). We thus choose \(\alpha=1\) to match the reported fidelity. This suggests, as described in Ref. [47], that the remaining errors are mostly phase errors. Indeed, we find that even without DRAG, single-qubit \(X\) and \(Y\) gates (both implemented using two \(\pi/2\) pulses as explained above) have leakage errors well below \(10^{-5}\). This is unsurprising, as these long gates are quite narrowband compared to the transmon anharmonicity. We note that this attributes all errors to coherent closed-system effects rather than decoherence. We expect incoherent errors to be of the order of \(t_{g}/T_{2}\approx 5\times 10^{-4}\), suggesting that this choice is defensible (see Table 2). Furthermore, given that both gate fidelity and coherence times drift over hour-long timescales, we focus only on matching the correct order of magnitude for fidelity with our coherent error model.

## III Open Quantum System Simulation

This section describes the noise model and discusses the hybrid Redfield equation used for the open quantum system simulations. For all simulations we truncate to the lowest four levels of the transmon qubit.
### Interaction Hamiltonian

The single-qubit system-bath interaction Hamiltonian in the lab frame can be written as \[H_{\mathrm{SB}}=\sum_{i=x,y,z}g_{i}A_{i}\otimes B_{i}\, \tag{13}\] where \(A_{i}\) and \(B_{i}\) represent the dimensionless system and bath coupling operators, respectively, and the coupling strengths \(g_{i}\) have dimensions of energy. There are several contributions to decoherence and noise for a multi-level transmon circuit. With fixed-frequency architectures, charge noise and fluctuations in the critical current contribute most to decoherence. In contrast, in the flux-tunable variants of transmon qubits, the largest contribution comes from flux noise [14]. These considerations determine which coupling operators are needed to describe the noise model for a given architecture. In the IBMQE processors used, the transmons are fixed-frequency. We therefore choose appropriate noise operators below. We consider the following system-bath interaction Hamiltonian: \[H_{\mathrm{SB}}=g_{x}A_{x}\otimes B_{x}+A_{z}\otimes\left(g_{z}B_{z}+\sum_{k}b_{k}\chi_{k}\left(t\right)I_{\mathrm{B}}\right)\, \tag{14}\] where the coupling operators \(A_{x}\) and \(A_{z}\) correspond to the charge operator and the Josephson energy operator, respectively, and are defined as \[A_{x}=c_{1}\hat{n} \tag{15a}\] \[A_{z}=c_{2}\cos\hat{\varphi}\, \tag{15b}\] where \(c_{1}\) and \(c_{2}\) are fixed constants that depend on the charging energy \(E_{c}\) and the Josephson energy \(E_{J}\) of the transmon qubit, respectively. We expect, based on the discussion in Section II.2 - and observe in our simulations - that \(A_{x}\) and \(A_{z}\) act like \(\sigma^{x}\) and \(\sigma^{z}\) when projected into the qubit subspace. We find numerically that Eq. (14) is an adequate model accounting for the nearly equal decay of the \(|+\rangle\) and \(|i\rangle\) states, which is why we do not include a separate \(\sigma^{y}\) coupling term.
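The claim that \(A_{x}\) and \(A_{z}\) act like \(\sigma^{x}\) and \(\sigma^{z}\) in the qubit subspace can be checked directly by diagonalizing the standard transmon Hamiltonian \(H=4E_{C}(\hat{n}-n_{g})^{2}-E_{J}\cos\hat{\varphi}\) in the charge basis. The sketch below (ours) uses illustrative parameter values (\(E_{C}=0.3\) GHz, \(E_{J}=15\) GHz), not those of the devices studied here:

```python
import numpy as np

# Transmon in the charge basis: H = 4 E_C (n - n_g)^2 - E_J cos(phi)
E_C, E_J, n_g = 0.3, 15.0, 0.0          # GHz; illustrative values only
N = 20                                   # keep charge states n = -N..N
n_vals = np.arange(-N, N + 1)
n_op = np.diag(n_vals.astype(float))                            # charge operator
cos_phi = 0.5 * (np.eye(2 * N + 1, k=1) + np.eye(2 * N + 1, k=-1))
H = 4.0 * E_C * np.diag((n_vals - n_g) ** 2) - E_J * cos_phi

evals, evecs = np.linalg.eigh(H)
P = evecs[:, :2]                         # two lowest transmon eigenstates
Ax = P.T @ n_op @ P                      # ~ sigma^x: purely off-diagonal
Az = P.T @ cos_phi @ P                   # ~ sigma^z (+ identity): purely diagonal
```

At \(n_{g}=0\) the eigenstates have definite parity, so \(\hat{n}\) (odd under \(n\to-n\)) is purely off-diagonal and \(\cos\hat{\varphi}\) (even) purely diagonal in the \(\{|0\rangle,|1\rangle\}\) subspace.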
Note, however, that a (dependent) \(\sigma^{y}\) component appears when we transform Eq. (14) from the lab frame into a frame rotating with the drive. Previous studies have found that noise in the superconducting circuit can be separated into high and low-frequency components [51]. To account for this observation, we combine two noise models. We choose the bath operators \(B_{x}\) and \(B_{z}\) in Eq. (14) to be bosonic bath operators, which generally represent the high-frequency component of the noise. However, this is not always the case, as we argue in Section IV. To account for the low-frequency noise component, which is a dominant noise source for superconducting qubits [52], we include a sum over classical fluctuators in Eq. (14), via the term proportional to the bath identity operator \(I_{\rm B}\). This semiclassical contribution, when parameterized properly, can simulate the behavior of \(1/f\) noise. We model the fluctuators as having equal coupling strengths, i.e., we set \(b_{k}=b\) (with dimensions of energy) for \(k=1,\cdots,10\). Each fluctuator can be characterized by a stochastic process \(\chi_{k}(t)\) that switches between \(\pm 1\) with a frequency \(\gamma_{k}\), which is log-uniformly distributed between \(\gamma_{\rm min}\) and \(\gamma_{\rm max}\) [53].

### Hybrid Redfield model

To simulate the reduced system dynamics of the interaction Hamiltonian in Eq. (14), we use a hybrid form of the Redfield (or TCL2) master equation [55; 56]. We first define the standard bath correlation function \[C_{ij}(t)={\rm Tr}\{U_{\rm B}(t)B_{i}U_{\rm B}^{\dagger}(t)B_{j}\rho_{\rm B}\}\, \tag{16}\] where \(U_{\rm B}(t)=e^{-iH_{\rm B}t}\) is the unitary evolution operator generated by the bath Hamiltonian \(H_{\rm B}\), and the reference state \(\rho_{\rm B}\) is the Gibbs state of \(H_{\rm B}\): \[\rho_{\rm B}=e^{-\beta H_{\rm B}}/{\rm Tr}\big{(}e^{-\beta H_{\rm B}}\big{)}\, \tag{17}\] where \(\beta=1/T\) is the inverse temperature.
Assuming the bath operators \(B_{x}\) and \(B_{z}\) are uncorrelated, i.e., \(C_{xz}(t)=0\), we construct the following hybrid Redfield equation \[\frac{\partial\rho_{\rm S}}{\partial t}=-i[H_{\rm sys}+A_{z}\sum_{k=1}^{10}b_{k}\chi_{k}(t),\rho_{\rm S}]+{\cal L}_{\rm R}(\rho_{\rm S})\, \tag{18}\] where \({\cal L}_{\rm R}\) is the Redfield Liouville superoperator \[{\cal L}_{\rm R}(\rho_{\rm S})=-\sum_{i=x,z}[A_{i},\Lambda_{i}(t)\rho_{\rm S}(t)]+{\rm h.c.}\, \tag{19}\] and \[\Lambda_{i}(t)=\int_{0}^{t}C_{i}(t-\tau)U_{\rm sys}(t,\tau)A_{i}U_{\rm sys}^{\dagger}(t,\tau){\rm d}\tau\, \tag{20}\] where \(C_{j}(\tau)\equiv C_{jj}(\tau)\) [from Eq. (16)] and \(U_{\rm sys}(t)\) is the unitary evolution operator generated by the system Hamiltonian \(H_{\rm sys}\). The reduced system dynamics are obtained by averaging the solution of Eq. (18) over all the realizations of \(\chi_{k}(t)\) for \(k=1,...,10\). The bath component noise spectra \(\gamma_{j}(\omega)\) are given by the Fourier transforms of the bath correlation functions \(C_{j}(\tau)\): \[\gamma_{j}(\omega)=\int_{-\infty}^{\infty}C_{j}(\tau)e^{i\omega\tau}{\rm d}\tau. \tag{21}\] We choose the bath to be Ohmic, which means that the component noise spectra have the form \[\gamma_{j}(\omega)=2\pi\eta g_{j}^{2}\frac{\omega\mathrm{e}^{-|\omega|/\omega_{j}^{c}}}{1-\mathrm{e}^{-\beta\omega}}\, \tag{22}\] where \(\omega_{j}^{c}\) is the cut-off frequency for bath operator \(B_{j}\), and \(\eta\) is a positive constant with dimensions of time squared arising in the specification of the Ohmic spectral function. Lastly, the hybrid Redfield equation (18) can be transformed into a frame rotating with the drive frequency \(\omega_{\rm d}\) (see Appendix A for details) by replacing every operator with the interaction-picture one [specifically, the \(A_{i}\) operator in Eq. (20) needs to be replaced by \(A_{i}(\tau)\)]. We simulate the Redfield master equation in this rotating frame in the methodology and results we discuss next.
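As a concrete illustration of the two noise components entering Eqs. (14) and (22), the sketch below (ours, with parameter values borrowed from the Quito column of Table 1 purely for illustration) implements the Ohmic spectral function and a sum of random telegraph fluctuators with log-uniformly distributed switching rates, and checks the detailed-balance (KMS) relation \(\gamma(-\omega)=e^{-\beta\omega}\gamma(\omega)\) implied by Eq. (22):

```python
import numpy as np

def ohmic_gamma(omega, g, omega_c, beta, eta=1e-4):
    """Ohmic noise spectrum of Eq. (22); the omega -> 0 limit is 1/beta."""
    omega = np.asarray(omega, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        bose = np.where(omega != 0.0,
                        omega / (1.0 - np.exp(-beta * omega)),
                        1.0 / beta)
    return 2.0 * np.pi * eta * g ** 2 * bose * np.exp(-np.abs(omega) / omega_c)

def telegraph(gamma, t_grid, rng):
    """Random telegraph process chi(t) switching between +-1 at rate gamma."""
    dt = np.diff(t_grid, prepend=t_grid[0])
    flip = rng.random(t_grid.size) < gamma * dt   # per-step switching probability
    return rng.choice([-1.0, 1.0]) * np.cumprod(np.where(flip, -1.0, 1.0))

rng = np.random.default_rng(7)
gamma_min, gamma_max = 1e-4, 0.051                # GHz
rates = np.exp(rng.uniform(np.log(gamma_min), np.log(gamma_max), 10))

t = np.arange(0.0, 2000.0, 1.0)                   # ns
b = 0.598e-3                                      # GHz; equal couplings b_k = b
low_freq_noise = b * sum(telegraph(g, t, rng) for g in rates)
```

Averaging system trajectories over many realizations of `low_freq_noise` is what produces the \(1/f\)-like semiclassical contribution.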
## IV Methodology and Fitting Results

This section discusses our methodology for modeling a transmon qubit's open quantum system behavior in a multi-qubit processor. We refer to the qubit of interest as the main qubit and all the others as spectator qubits. The goal is to extract the bath parameters in our open quantum system model and then use this model to predict the outcomes of experiments on the main qubit, including dynamical decoupling sequences. We treat qubit 1 (Q1) of the Quito processor as our main qubit. We are interested only in the main qubit's behavior here; hence, we measure only the main qubit. Appendix B describes the procedure to extract and analyze the experimental data.

### Free and DD Evolution Experiments

Our procedure involves two types of experiments, as shown in Fig. 2. The first type, which we call a _free-evolution_ experiment, consists of initializing all the qubits in a given state by applying a particular unitary operation \(U3(\theta,\phi,\lambda)\) [57] (denoted as \(U\) in Fig. 2) to each of the qubits. We then apply a sequence of identity gates on the main qubit, which we vary in number. Simultaneously, we also apply the XY4 DD sequence to all the other (spectator) qubits (i.e., \(Xf_{\tau}Yf_{\tau}Xf_{\tau}Yf_{\tau}\), where \(f\) denotes free-evolution in the absence of pulses for a duration of \(\tau\) [37]) for the same total duration as that of the identity gates on the main qubit. As shown in [54], DD sequences applied to spectator qubits suppress unwanted \(ZZ\)-interactions, i.e., \(ZZ\)-crosstalk between the main qubit and the spectator qubits. Without crosstalk suppression, we observe oscillations in the probability decay as a function of time [20, 29]; see Appendix C. With the crosstalk suppression scheme, i.e., DD applied to the spectator qubits, these oscillations disappear, and the main qubit is now primarily affected only by environment-induced noise.
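As a quick sanity check on the sequence just described: with ideal, instantaneous pulses and trivial free evolution, one XY4 cycle composes to the identity up to a global phase, which is what makes it a decoupling sequence (a sketch of ours, not the pulse-level implementation used on the hardware):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def xy4_cycle(f):
    """One XY4 cycle X f Y f X f Y f (time order left to right; the matrix
    product is applied in reverse order)."""
    return f @ Y @ f @ X @ f @ Y @ f @ X

# With trivial free evolution, (Y X)^2 = -I: identity up to a global phase.
U = xy4_cycle(np.eye(2, dtype=complex))
```

With nontrivial free evolution the cancellation is only approximate, which is why the cycle time matters in practice.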
Finally, we apply the inverse of \(U3(\theta,\phi,\lambda)\) and measure in the \(Z\)-basis. From the measurement outcomes we compute the decay probability of the initial state. Everything remains the same in the second type of experiment, which we call _DD-evolution_, except that we now apply the XY4 sequence to the main qubit and identity gates on the spectator qubits. As discussed in [54], when we apply the XY4 sequence only to the main qubit, we suppress the \(ZZ\)-crosstalk between the latter and all the spectator qubits and also decouple unwanted interactions between the main qubit and the environment. We note that, in contrast to experiments using DD to perform noise spectroscopy, here we use only a single type of DD sequence and do not vary any of its parameters.

### Fitting Procedure

We perform the free-evolution and DD-evolution experiments for the six Pauli states as initial states, i.e., we choose \(U3(\theta,\phi,\lambda)\) to prepare \(|0\rangle\), \(|1\rangle\), \(|+\rangle\), \(|-\rangle\), \(|i\rangle\) and \(|-i\rangle\), and use the hybrid Redfield equation described in Section III.2 to simulate the dynamics of these experiments. We sweep over different values of the bath parameters and obtain the simulated probability decay as a function of time. To identify the simulation parameters that optimally match the experimental results, we define a cost function \(C\) for a given initial state \(|\psi\rangle\) as the \(l_{2}\) norm distance between the experimental probabilities \(P^{\rm Exp}_{|\psi\rangle,s}(t_{i})\) and the simulation probabilities \(P^{\rm Sim}_{|\psi\rangle,s}(t_{i})\) for every instant \(t_{i}\): \[C_{|\psi\rangle,s}=\frac{1}{N}\sqrt{\sum_{i=0}^{N-1}\left(P^{\rm Sim}_{|\psi\rangle,s}(t_{i})-P^{\rm Exp}_{|\psi\rangle,s}(t_{i})\right)^{2}}\, \tag{23}\] where \(s\in\{{\rm free,DD}\}\) and \(N\) is the total number of instants.
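Eq. (23), together with the SPAM compensation described in the following paragraph, is straightforward to implement; a minimal sketch (ours):

```python
import numpy as np

def spam_shift(p_exp):
    """Shift an experimental decay curve so that P(0) = 1 (SPAM compensation)."""
    p_exp = np.asarray(p_exp, dtype=float)
    return p_exp + (1.0 - p_exp[0])

def cost(p_sim, p_exp):
    """l2-norm cost of Eq. (23): (1/N) * sqrt(sum of squared residuals)."""
    p_sim = np.asarray(p_sim, dtype=float)
    p_exp = np.asarray(p_exp, dtype=float)
    return np.sqrt(np.sum((p_sim - p_exp) ** 2)) / p_sim.size
```

Note that the \(1/N\) prefactor sits outside the square root, so the cost is not an RMS value but scales as \(1/\sqrt{N}\) for fixed per-instant residuals.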
Note that we compensate for state preparation and measurement (SPAM) errors by shifting the experimental results such that in all cases, \(P^{\rm Exp}_{|\psi\rangle,s}(0)=1\). We limit the number of free parameters requiring fitting to six: the coupling strengths \(g_{x}\), \(g_{z}\), and \(b_{k}\equiv b\) [Eq. (14)], and the cutoff frequencies \(\gamma_{\rm max}\) (for \(1/f\) noise), \(\omega^{c}_{x}\), and \(\omega^{c}_{z}\) [Eq. (22)]. We set the bath temperature \(T=20\) mK (\(\sim\) the fridge temperature), \(\gamma_{\rm min}=10^{-4}\) GHz, and \(\eta=10^{-4}\) GHz\({}^{-2}\); these values are the same as in our previous work [54], which showed strong agreement between open system simulations and experiments using other IBMQE devices, and remain fixed throughout our fitting procedure. This procedure consists of three steps, which we detail next.

Figure 2: The circuit schematics for the free-evolution and DD-evolution types of experiments. For the free-evolution case, we apply \(N\) cycles of the XY4 dynamical decoupling (DD) sequence on all the spectator qubits and \(2N\) cycles of the \(I_{4}\) sequence (here \(I_{4}\) means four identity gates) on the main qubit, which suppresses crosstalk errors [54]. Note that an \(X\) or \(Y\) gate is twice as long as an identity gate on the IBM cloud quantum devices, hence the extra factor of 2. For the DD-evolution case, we apply the DD sequence only to the main qubit and apply identity gates to all the spectator qubits. This suppresses both crosstalk and environment-induced noise. We measure only the main qubit.

#### IV.2.1 Step I: free-evolution for \(|1\rangle\)

We first focus on the free-evolution experiment for the initial state \(|1\rangle\), the first excited state in the transmon eigenbasis. Since the free-evolution for this state is only affected by charge noise, i.e., noise along the \(x\)-axis, the only contribution to the decay of \(|1\rangle\) should come from the \(g_{x}A_{x}\otimes B_{x}\) term in Eq.
(14). Thus, we consider only this term in our numerical simulations for this initial state. For a given set of values of the coupling strength \(g_{x}\) and bath cutoff frequency \(\omega_{x}^{c}\), we compute the cost function \(C_{\left|1\right\rangle,\text{free}}\) using Eq. (23), and obtain the contour plot shown in Fig. 3(a). In our simulations, we vary \(\omega_{x}^{c}/(2\pi)\) from \(0.5\) to \(3\) GHz and \(g_{x}\) from \(0\) to \(10^{-2}\) GHz, each with \(20\) equidistant points so that the contour plot has a total of \(400\) data points. We take the position of the global minimum of the cost function on this grid as the optimal set of bath parameter values. To reduce the resulting discretization uncertainty, we interpolate the contour plot and use the Nelder-Mead optimization method to locate the minimum. We numerically find the global minimum at \(\omega_{x}^{c}/(2\pi)=1.948\) GHz and \(g_{x}/(2\pi)=0.573\times 10^{-2}\) GHz, denoted by the green circle in Fig. 3(a). With this, we have two out of the six bath parameters, and we use these learned parameters in the subsequent steps.

#### IV.2.2 Step II: DD-evolution for all six Pauli states

The second step involves the DD-evolution experiment for all six Pauli states. This requires including the term \(g_{z}A_{z}\otimes B_{z}\) in Eq. (14), along with the first term whose bath parameters we already obtained. We do not include the semiclassical term in Eq. (14) consisting of fluctuators since it is expected to be strongly suppressed when DD is applied to the main qubit. We simulate time-dependent gates with DRAG corrections to model the DD pulses, as discussed in Section II. We are again left with just two bath parameters to optimize: \(\omega_{z}^{c}\) and \(g_{z}\). Fig. 3(b) shows the average of the cost function [Eq. (23)] over the six Pauli states. The global minimum is found at \(\omega_{z}^{c}/(2\pi)=0.569\times 10^{-2}\) GHz and \(g_{z}/(2\pi)=0.441\times 10^{-2}\) GHz.
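The grid-plus-refinement step can be sketched as follows. The cost landscape here is a synthetic quadratic stand-in (not the Redfield-based cost), placed so its minimum sits at the Step I values quoted above; the SciPy interpolator and Nelder-Mead call are our illustrative choices:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

# Synthetic stand-in for the Step-I cost landscape of Fig. 3(a)
true_min = np.array([1.948, 0.573])
def cost(p):
    return float(np.sum((np.asarray(p) - true_min) ** 2))

# Brute-force evaluation on a 20 x 20 grid, as in the paper
wx = np.linspace(0.5, 3.0, 20)    # omega_x^c/(2 pi) axis, GHz
gx = np.linspace(0.0, 1.0, 20)    # g_x/(2 pi) axis, units of 1e-2 GHz
grid_cost = np.array([[cost((w, g)) for g in gx] for w in wx])

# Interpolate the discrete landscape and refine with Nelder-Mead
try:
    interp = RegularGridInterpolator((wx, gx), grid_cost, method="cubic")
except ValueError:  # older SciPy without cubic support
    interp = RegularGridInterpolator((wx, gx), grid_cost, method="linear")
lo, hi = np.array([wx[0], gx[0]]), np.array([wx[-1], gx[-1]])
i, j = np.unravel_index(np.argmin(grid_cost), grid_cost.shape)
res = minimize(lambda p: interp(np.clip(p, lo, hi)).item(),
               x0=[wx[i], gx[j]], method="Nelder-Mead")
```

Starting the refinement from the best grid point keeps the simplex inside the well around the global minimum, which is the only regime in which this local polish is trustworthy.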
#### IV.2.3 Step III: free-evolution for \(\left|+\right\rangle\)

The final step requires optimizing the two remaining free parameters associated with the fluctuators: \(\gamma_{\text{max}}\) and \(b\). Here we focus on the free-evolution experiment for initial state \(\left|+\right\rangle\). We now employ the full system-bath Hamiltonian in Eq. (14) with the optimal parameters found in Steps I and II. Fig. 3(c) shows the contour plot for the cost function [Eq. (23)], where, as in Step I, we again use \(20\) different values of \(\gamma_{\text{max}}\) and \(b\) each. The global minimum is found at \(\gamma_{\text{max}}/(2\pi)=0.051\) GHz and \(b/(2\pi)=0.598\times 10^{-3}\) GHz.

Figure 3: Top: Results for the Quito processor. Bottom: Results for the Lima processor. Left: The cost function defined as the \(l_{2}\) norm distance between the experimental and simulation results [Eq. (23)], averaged over \(N=70\) time instants, as a function of the bath parameters \(\omega_{x}^{c}\) and \(g_{x}\) for free-evolution of the \(\left|1\right\rangle\) initial state. Middle: The average of the cost function over the six Pauli states for DD-evolution as a function of \(\omega_{z}^{c}\) and \(g_{z}\). Right: The cost function for free-evolution of the \(\left|+\right\rangle\) initial state, as a function of \(\gamma_{\text{max}}\) and \(b\). The green circles indicate the positions of the global minima in all the panels.

### Methodology wrap-up

Let us briefly summarize our methodology and add a few technical details. As explained above, we extract the bath parameters by performing free-evolution experiments for two initial states (\(\left|1\right\rangle\) and \(\left|+\right\rangle\)) and DD-evolution experiments for up to six initial states (the Pauli states).
Our optimization procedure is iterative and is thus not guaranteed to yield the globally optimal values of all the bath parameters, but this is by design: we choose initial states that allow us to isolate the bath parameters one pair at a time, which renders the optimization problem tractable. This methodology is quite general and can be used to characterize all the transmon qubits on a given transmon processor or, much more broadly, to characterize single qubits on any quantum information processing platform capable of supporting individual qubit gates and measurements, provided a sufficiently accurate and descriptive model of the qubits and the system-bath interaction is available. Our procedure inherently suppresses the effects of crosstalk due to the neighboring qubits via DD applied either to the spectator qubits (free-evolution experiments) or the main qubit (DD-evolution experiments), which reduces the number of free parameters of the noise model by eliminating the need to model crosstalk. To obtain the contour plots shown in Fig. 3, we solve the Redfield master equation (Section III.2) for each point (i.e., each set of model parameters), requiring a total of 400 simulation runs for each optimization. In the final step, including classical fluctuators to obtain Fig. 3(c), we use the trajectory version of the Redfield model introduced in Section III.2 to simulate a total of 600 trajectories at each point. This is large enough to yield negligible error bars (\(<2\times 10^{-2}\)). The experimental results are obtained using the standard bootstrap method (see Appendix B). In defining the cost function [Eq. (23)], we use the mean value of the experimental fidelity obtained after bootstrapping (see Appendix C) and ignore the associated tiny error bars (\(\leq 6\times 10^{-3}\)). These error bars are much smaller than the error induced by the discrete nature of our \(20\times 20\) parameter grid, and so we can safely ignore them.
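The standard bootstrap used for the experimental error bars can be sketched as follows (our illustration with synthetic shot data; the 8192-shot count and 0.9 survival probability are merely plausible values):

```python
import numpy as np

def bootstrap_mean(samples, n_resamples=1000, seed=0):
    """Standard bootstrap estimate of the mean and its standard error."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    means = np.array([rng.choice(samples, samples.size, replace=True).mean()
                      for _ in range(n_resamples)])
    return means.mean(), means.std(ddof=1)

# Synthetic binary outcomes of 8192 shots with survival probability 0.9
shots = (np.random.default_rng(42).random(8192) < 0.9).astype(float)
mean, err = bootstrap_mean(shots)
```

For 8192 Bernoulli shots the bootstrap standard error comes out near \(\sqrt{p(1-p)/8192}\approx 3\times 10^{-3}\), consistent with the "tiny error bars" quoted above.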
We confirmed that varying the probabilities to the extremes of the error bars does not affect the values of the bath parameters we have extracted to the least significant digit we report. Table 1 summarizes the extracted values and the parameters we have fixed. The accuracy of our results depends on the number of points in the contour plots in Fig. 3 (we used a \(20\times 20\) grid for each panel). Even though we interpolate the otherwise discrete contour plots and find the minima over the resulting smooth surface, the limited number of points affects the precision of the learned bath parameters. Increasing this precision requires more sophisticated optimization techniques to speed up the process of obtaining the bath parameters. This becomes especially acute when extending the model to learning a multi-qubit system-bath Hamiltonian with correlated noise, as in this case, the number of bath parameters increases significantly. Here, we aim to demonstrate the model and methodology and illustrate both via the example of a single transmon qubit, and so we perform a simple brute-force search of the parameter space. Note that our methods for extracting the bath parameters also work with density matrices (from state tomography) instead of just probabilities. In that case, the \(l_{2}\) norm distance in the cost function of Eq. (23) can be replaced by the trace-norm distance between the density matrices obtained from the simulation and the experiment. However, quantum state tomography imposes a much higher cost in terms of the number of required experiments and is thus less practical to scale up with a larger number of qubits. Our protocol requires only fidelity measurements and so is more resource-efficient.

## V Model Prediction Results

### Full Model

We now test our model for different initial states of the main qubit of the Quito processor.
Since we always apply the DD sequence to the spectator qubits during the free-evolution experiments, the initial states of the latter do not matter due to the suppressed \(ZZ\) coupling. This section considers a total of 16 initial states, consisting of the six Pauli states and ten Haar-random states. We model the experimental results using the bath parameters we extracted in the previous section (Table 1). This serves as a stringent test of the model: we now use the previously fitted model to predict the outcome of experiments not included in Steps I-III of Section IV.2, i.e., the results with different initial states. The data for all the experiments (both fitting and testing) was obtained in one batch. We used only the data for the six Pauli initial states needed for Steps I-III to perform the fitting. We used the data for all 16 initial states in the testing phase. We consider the same two kinds of experiments: free-evolution and DD-evolution. Fig. 4 (top row) shows our model's prediction accuracy for the ten random and six Pauli states. The top left panel [Fig. 4(a)] corresponds to the free-evolution case, where we present the relative error in the prediction of our model as a function of time compared with the experimental results. The relative error is defined as \((\text{mean}_{\text{exp}}-\text{mean}_{\text{sim}})/\text{mean}_{\text{exp}}\), where \(\text{mean}_{\text{exp}}\) is the bootstrapped average over 8192 experimental repetitions and \(\text{mean}_{\text{sim}}\) is the average over 600 trajectory simulations of the hybrid Redfield model for any given time instant. The box plot shows the spread in the relative error over all 16 states: the relative error of the model is always below 8% over the total time considered here. The median and the mean over the 16 states are confined well below 3% for every instant. The performance of our model for the DD-evolution experiments is shown in the top right panel [Fig. 4(b)].
Here the relative error is always below 2%. The median and the mean are below 1%. The closer agreement of the model with the DD-evolution experiments is expected, given that in contrast to the free-evolution experiments, DD suppresses the low-frequency noise affecting the main qubit, and the limitations of our fluctuator model of this noise are a likely source of modeling error. The first and fourth columns of Fig. 5 show the relative error of our model over the 16 states and all 70 instants of the free and DD-evolution experiments, respectively. The results of the latter are better, as expected from Fig. 4. In both cases, however, we observe that the model has a relative prediction error of just a few percent.

### Simplified Models

As discussed in Section II, our numerical simulations use the circuit model Hamiltonian of a transmon qubit truncated to the four lowest transmon eigenstates. The gates are applied with time-dependent pulses of non-zero duration. To test the robustness of our detailed model and learning procedure, we compare it with two simpler models, SM1 and SM2, derived from our detailed model. The simpler models use a more straightforward qubit description where we truncate the transmon Hamiltonian to only two levels. The time-dependent gates are replaced with instantaneous (zero-duration), ideal gates. Moreover, we focus only on the Ohmic bath terms in Eq. (14), thus simplifying the noise model by removing the classical fluctuators. To test these simpler models' predictive power, we follow the same procedure as in Section IV, but using only Steps I and II. The difference between SM1 and SM2 lies in Step II, where SM1 uses the DD-evolution experiments for the six Pauli states, whereas SM2 uses the free-evolution experiments for the same states. In both cases, we extract the model parameters and then use the resulting learned models to predict the outcomes of both the free-evolution and the DD-evolution experiments. Fig.
5 shows the comparison between our detailed model and the simpler models SM1 and SM2. We observe that SM1 has the largest relative error for the free-evolution experiments, whereas SM2 has the largest relative error for the DD-evolution case. Our full model has the smallest relative error among the three models considered here for _both_ the free and DD evolution experiments. However, the performance of SM1 and SM2 is essentially indistinguishable from the full model results in the DD and free-evolution cases, respectively. This is not unexpected, given that SM1 (SM2) is trained on the DD (free) evolution experiments and predicts these well. In other words, SM1 (SM2) captures the high (low)-frequency noise well, as expected since for SM1, the use of DD suppresses most of the low-frequency noise, while for SM2, the use of free-evolution means that the low-frequency noise remains a dominant source of decoherence. The added value of the detailed model and the use of Step III is that this provides enough information to capture both the low and high-frequency components of the noise, which yields a more complete model with better predictive power.

Figure 4: Results for the Quito (top row) and Lima (bottom row) processors. Left: Box plots showing the relative error of our model for the free-evolution experiments as a function of time for 16 different initial states containing six Pauli states and ten Haar-random states. Right: the same as on the left, but for the DD-evolution experiments. We measured a total of 70 time instants, up to a total evolution time of \(19.6\;\mu\)s, but only display every other instant to avoid overcrowding. Green triangles indicate the mean over the 16 initial states, black horizontal lines are the median, gray boxes represent the \([25,75]\) percentiles, the whiskers (black lines extending outside the boxes) represent the \([0,25]\) and \([75,100]\) percentiles, and circles are outliers.
We do note that taking the qubit approximation and treating DD pulses as instantaneous does not seem to appreciably worsen the performance of the simple models in their regime of accuracy, as SM1 (SM2) is roughly as accurate as the full model in the DD (free) evolution case. This suggests that an intermediate model, taking the qubit and instantaneous-pulse approximations but retaining the fluctuators, may be accurate and computationally efficient.

## VI Calibration-Independent Learning

For multi-qubit superconducting processors, calibrating single-qubit drive frequencies is crucial for gate operations. In the presence of \(ZZ\) coupling, the state of the spectator qubits modifies the eigenfrequency of the main qubit. This results in different choices of calibration frequencies depending on the spectator qubits' state [54]. So far, we have focused on one particular device (Quito), which is calibrated while keeping the spectator qubits in the \(|+\rangle\) state (see Appendix C and Ref. [54]). The all-\(|+\rangle\) or all-\(|0\rangle\) are usually the two preferred choices for the spectators' state while calibrating a given qubit in a multi-qubit processor. When we perform state protection experiments on the main qubit initialized in the \(|+\rangle\) state while keeping all the spectator qubits in \(|0\rangle\), there exists a frequency mismatch which results in \(ZZ\) crosstalk oscillations (see Appendix C); to remove these oscillations, we applied DD to the spectator qubits before starting our noise learning procedure. When device calibration is performed while keeping the spectators in the \(|0\rangle\) state, a similar state protection experiment does not result in any oscillations, as evidenced by our Lima results (see Appendix C). Therefore, we extend our noise learning method to Lima in this section.
Following our procedure from Section IV, we again perform free-evolution (no DD is applied to any qubit) and DD-evolution (XY4 is applied just to the main qubit) experiments. The only difference from the Quito case is that, for the reasons explained above, the free-evolution experiment does not require the application of DD to the spectator qubits to suppress crosstalk oscillations. Fig. 3 (bottom row) shows the contour plots for each of the three steps involved in our learning methodology as described in Section IV.2. We find the global minima at the parameter values given in Table 1. Comparing the Quito and Lima parameters in Table 1, we observe that the coupling strength \(g_{z}\) of the Ohmic bath along the \(z\)-axis is roughly double for Lima, whereas the strength of the fluctuators is roughly double for Quito. This indicates that Quito is more prone to low-frequency (\(1/f\)) noise.

\begin{table} \begin{tabular}{|c|c|c|} \hline Params/(2\(\pi\)) & Quito & Lima \\ \hline \hline \(g_{x}\) [MHz] & 5.734 & 4.782 \\ \(g_{z}\) [MHz] & 4.413 & 9.393 \\ \(\omega_{x}^{c}\) [GHz] & 1.948 & 2.340 \\ \(\omega_{z}^{c}\) [MHz] & 5.690 & 5.979 \\ \(b\) [MHz] & 0.598 & 0.323 \\ \(\gamma_{\rm max}\) [GHz] & 0.051 & 0.083 \\ \hline \end{tabular} \end{table} Table 1: System-bath parameter values extracted using the fitting procedure of Section IV.2, and corresponding to the minima indicated by the green circles in Fig. 3 for Quito (top row) and Lima (bottom row).

Figure 5: Relative error results for the Quito processor. We display a comparison of the relative errors between the full model (which uses a three-step learning procedure and consists of four energy levels per transmon and realistic pulses) with the simplified models SM1 and SM2 (which are based on a two-step learning procedure and use just two energy levels and instantaneous pulses) for free-evolution and DD-evolution experiments. SM1 (SM2) is trained on the DD (free) evolution experiments. Each box contains a total of 16 initial states and all 70 time instants varying from 0 to 19.6 \(\mu\)s.

Fig. 4 (bottom row) shows the Lima prediction results for the 16 different initial states described above, using the learned noise parameters. The bottom left panel [Fig. 4(c)] shows the relative error in the prediction of our model of the free-evolution experiments as a function of time compared to the experimental results for all 16 states. The bottom right panel [Fig. 4(d)] shows the same for the DD-evolution experiments. The relative error is always below 17% and 4% for free-evolution and DD-evolution, respectively. Similar to the Quito case, the relative error is significantly lower for the DD-evolution experiments. For longer evolution times (\(\gtrsim 14\)\(\mu\)s), the agreement worsens for the free-evolution experiments. As for the Quito results, the closer agreement of the model with the DD-evolution experiments is likely because DD suppresses the low-frequency noise affecting the main qubit, which dominates the simulation error in the free-evolution case. Fig. 6 shows the time-integrated version of the Lima results of Fig. 4(c,d), where we have combined all 16 states and 70 time instants into one box each for the free-evolution and DD-evolution experiments. Except for a few outliers, almost all the data points for the free-evolution case have relative errors below 9%. The outliers all correspond to evolution times longer than 11 \(\mu\)s, as indicated by the color bar. Similarly, for DD-evolution, all data points have relative errors below 3%, except for a few outliers. There are two main reasons for the larger errors at longer evolution times. First, the Redfield equation is based on the weak coupling approximation, and its accuracy degrades as we increase the simulation time (for rigorous error bounds, see Ref. [58]).
Second, as time increases, the effect of distant fluctuators is increasingly felt, thus reducing the accuracy of our fluctuator model, which relies on a fixed number of fluctuators. Adding more fluctuators and exploring a distribution of fluctuator strengths, as opposed to our assumption of a fixed fluctuator strength \(b\), is expected to improve the predictive power of our model. Finally, as another test of our learning procedure, we computed \(T_{1}\) from the learned models, using the values we report in Table 2. We did this by simulating the fidelity decay of the \(|1\rangle\) state for \(19.6\mu\)s and fitting \(\exp(-t/T_{1})\) to estimate \(T_{1}\). We find \(T_{1}=92.5\mu\)s for Quito and \(86.5\mu\)s for Lima, compared to the reported \(T_{1}=98.6\mu\)s and \(105\mu\)s, respectively. Our result gives the correct order of magnitude and is particularly reasonable for Quito. The discrepancy may be in part due to the relatively short simulation time of \(19.6\mu\)s (larger times become prohibitively expensive). In addition, the discrepancy for Quito is smaller because we use three rounds of experiments within the same calibration to model the noise, which removes short-time fluctuations via bootstrap averaging, whereas for Lima, we used only one repetition. As \(T_{1}\) often drifts significantly over hour-long time scales, we would not expect the Lima prediction to line up exactly with the reported \(T_{1}\). ## VII Summary and Conclusions This work presents a detailed noise model for transmon qubits consisting of both low (\(1/f\)-like) and high-frequency noise components based on a hybrid Redfield master equation. We designed an iterative three-step procedure to extract the unknown system-bath and bath parameters from a few simple "free-evolution" and "DD-evolution" experiments, as illustrated in Fig. 2, using the six Pauli matrix eigenstates. 
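The \(T_{1}\) check just described (fitting \(\exp(-t/T_{1})\) to the simulated decay of the \(|1\rangle\) state) can be sketched with a simple log-linear least-squares fit. The data below are synthetic and the fitting routine is our own illustrative choice, not the paper's code.

```python
import math

# Sketch of the T1 extraction: fit exp(-t/T1) to a fidelity-decay curve
# by linear regression of log(F) against t (through the origin).
# Synthetic, noiseless data; times in microseconds.

def fit_T1(times, fidelities):
    # least-squares fit of log(F) = -t / T1
    num = sum(t * math.log(f) for t, f in zip(times, fidelities))
    den = sum(t * t for t in times)
    return -den / num

true_T1 = 92.5
times = [0.28 * k for k in range(1, 71)]          # 70 instants up to 19.6 us
fids = [math.exp(-t / true_T1) for t in times]    # synthetic |1> decay
est = fit_T1(times, fids)
```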
In both cases, we used dynamical decoupling (DD) to suppress diagonal (\(ZZ\)) qubit crosstalk [54] so that the remaining dominant noise effect on the main qubit (the qubit of interest) is decoherence. Our model treats the transmon qubit as a four-level system based on the circuit model description of transmons (Section II) and treats the DD pulses as realistic time-dependent gates subject to quantum control (DRAG). Once the unknown system-bath and bath parameters are extracted, we compare the model predictions with new experiments and a larger set of initial states and demonstrate that the model predicts the experimental results of free-evolution and DD-evolution with a relative error below 8% and 2% for Quito, and below 9% and 3% for Lima, respectively. This is based on a test with the six Pauli matrix eigenstates and ten random states for a total duration of up to 19.6 \(\mu\)s. The relative errors are higher for larger times (see Fig. 4), as expected because the Redfield model is based on the weak-coupling approximation, and its accuracy degrades as the simulation time is increased [58]. To test the robustness of our model, we performed a comparison with two simpler, two-level models with instantaneous pulses; while these models capture either the low or the high-frequency noise, the full model captures both types of noise. Furthermore, our method is applicable independently of the particular device-calibration procedure followed, as witnessed by the agreement we find for both Quito and Lima, devices with different drive-frequency calibrations.

Figure 6: Integrated relative error results for the Lima processor. We compare the relative errors between the free-evolution and DD-evolution experiments. Each box contains 16 initial states and all 70 time instants varying from 0 to 19.6 \(\mu\)s. The color bar indicates the time evolved for the outliers. All the outliers correspond to times longer than 11 \(\mu\)s.
The low relative error we find in the case of DD-evolution experiments (\(<2\%\) and \(<3\%\) for Quito and Lima, respectively) suggests that our full noise model helps model gate dynamics under the influence of decoherence. The model can also be used to study several qubits in parallel as long as there is no direct crosstalk between these qubits. Extending our noise model beyond weak coupling and using it to analyze and improve entangling gates are promising future directions. We hope this work will benefit experimental groups working with superconducting qubits by helping them understand and learn experimental noise using a first principles approach, which only requires a set of straightforward experiments. ###### Acknowledgements. We thank Mostafa Khezri, Ka-Wa Yip, and Humberto Munoz Bauza for several helpful discussions. This material is based upon work supported by the National Science Foundation, the Quantum Leap Big Idea under Grant No. OMA-1936388. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum team. ## Appendix A Derivation of time-dependent drives We start with Eq. (4) from the main text, and to obtain Eq. (5), we focus on the charge coupling matrix elements for transmon qubits. The selection rules of the transmon qubit due to its cosine potential dictate that \(g_{k,k\pm 2}=0\)\(\forall k\)[14]. The next order coupling terms, \(g_{k,k\pm 3}\), are proportional to the ratio of the anharmonicity and the qubit frequency: \(\eta_{q}/\omega_{q}\)[59]. Therefore \(g_{k,k\pm 1}\) (the coupling between nearest levels) is dominant, and all other couplings can be ignored, giving Eq. (5). We can write the approximated charge operator \(\hat{p}\equiv\sum_{k\geq 0}\bra{k}\hat{n}\ket{k+1}\ket{k}\!\bra{k+1}+ \text{h.c.}\) of Eq. 
(5) in terms of effective creation and annihilation operators for the eigenstates of the transmon, i.e., \(\hat{p}=i(a^{\dagger}-a)\), where \[a\equiv i\sum_{k\geq 0}g_{k,k+1}\ket{k}\!\bra{k+1}=i\sum_{k\geq 0}\tilde{g}_ {k}\sqrt{k+1}\ket{k}\!\bra{k+1}, \tag{10}\] with \[g_{k,k+1} \equiv\bra{k}\hat{n}\ket{k+1}=g\frac{\bra{k}\hat{n}\ket{k+1}}{ \bra{0}\hat{n}\ket{1}} \tag{11a}\] \[\equiv\tilde{g}_{k}\sqrt{k+1}, \tag{11b}\] and \(g\equiv g_{0,1}=\bra{0}\hat{n}\ket{1}=\tilde{g}_{0}\). We note that \(g_{k,k+1}=g_{k,k+1}^{*}\), which follows from the fact that \(\hat{n}\) is the number operator for Cooper pairs. We note further that to first order in \(\eta_{q}/\omega_{q}\)[59] \[\tilde{g}_{k}\approx g\left(1-\frac{k}{2}\frac{\eta_{q}}{\omega_{q}}\right). \tag{12}\] We defined \(\tilde{g}_{k}(=\tilde{g}_{k}^{*})\) in Eq. (11b) to include all higher order perturbative corrections and do not use the approximation Eq. (12) in our numerical calculations. However, we note that the leading order correction to \(\tilde{g}_{k}\) is of the same order as \(g_{k,k\pm 3}\), i.e., proportional to \(\eta_{q}/\omega_{q}\). Eq. (4) becomes \[H_{\text{sys}} =H_{\text{trans}}^{\text{eigen}}+H_{\text{drive}} \tag{13a}\] \[H_{\text{drive}} \equiv i\varepsilon(t)\cos\left(\omega_{\text{d}}t+\phi_{\text{d}} \right)\left(a^{\dagger}-a\right). \tag{13b}\] For simplicity, we consider the case when \(\phi_{\text{d}}=0\) in Eq. (13b), and transform the Hamiltonian \(H_{\text{sys}}^{\text{eigen}}\) into a frame rotating with \(U_{\text{d}}=e^{i\omega_{\text{d}}\hat{N}t}\). Here \(\hat{N}=\sum_{k\geq 0}k\ket{k}\!\bra{k}\) is the transmon number operator. 
In this frame, rotating with the drive frequency, the effective Hamiltonian is given by \[\tilde{H}_{\text{sys}} =U_{\text{d}}H_{\text{sys}}U_{\text{d}}^{\dagger}+i\dot{U}_{ \text{d}}U_{\text{d}}^{\dagger} \tag{14a}\] \[=H_{\text{trans}}^{\text{eigen}}+U_{\text{d}}H_{\text{drive}}U_{ \text{d}}^{\dagger}+i\dot{U}_{\text{d}}U_{\text{d}}^{\dagger}\] (14b) \[=\sum_{k\geq 0}\left(\omega_{k}-k\omega_{\text{d}}\right)\ket{k} \!\bra{k}+U_{\text{d}}H_{\text{drive}}U_{\text{d}}^{\dagger} \tag{14c}\] where in going from Eq. (14a) to Eq. (14b), we used the fact that \(\hat{N}\) commutes with \(H_{\text{trans}}^{\text{eigen}}\) [Eq. (3)]. Note that \(a\) and \(a^{\dagger}\) should not be confused with the harmonic oscillator raising and lowering operators since \[[a,a^{\dagger}] =\sum_{k\geq 1}\left((k+1)\tilde{g}_{k}^{2}-k\tilde{g}_{k-1}^{2} \right)\ket{k}\!\bra{k}+\tilde{g}_{0}^{2}\ket{0}\!\bra{0}\] \[\neq I. \tag{15}\] For the harmonic oscillator case, \(\tilde{g}_{k}=1\) for all \(k\) and we obtain the usual commutation relation \(\left[a,a^{\dagger}\right]=I\). Despite this, we have, as for the harmonic oscillator: \[[a,\hat{N}]= \left[i\sum_{k\geq 0}\tilde{g}_{k}\sqrt{k+1}\ket{k}\!\bra{k+1},\sum_{k^{ \prime}\geq 0}k^{\prime}\ket{k^{\prime}}\!\bra{k^{\prime}}\right] \tag{16a}\] \[= i\sum_{k\geq 0}\tilde{g}_{k}(k+1)\sqrt{k+1}\ket{k}\!\bra{k+1}\] \[-i\sum_{k\geq 0}\tilde{g}_{k}k\sqrt{k+1}\ket{k}\!\bra{k+1}\] (16b) \[= i\sum_{k\geq 0}\tilde{g}_{k}\sqrt{k+1}\ket{k}\!\bra{k+1}, \tag{16c}\] so that: \[[a,\hat{N}]=a\,,\quad[a^{\dagger},\hat{N}]=-a^{\dagger}. \tag{17}\] Let us write the last term in Eq. (14c) as \[U_{\rm d}H_{\rm drive}U_{\rm d}^{\dagger}=i\frac{\varepsilon(t)}{2}\left(e^{i \omega_{\rm d}t}+e^{-i\omega_{\rm d}t}\right)U_{\rm d}\left(a^{\dagger}-a\right)U _{\rm d}^{\dagger}. \tag{18}\] Using Eq. (17), we then have: \[U_{\rm d}\left(a^{\dagger}-a\right)U_{\rm d}^{\dagger}=e^{i\omega_{\rm d}t}a^{ \dagger}-e^{-i\omega_{\rm d}t}a. \tag{19}\] Making the rotating wave approximation, we ignore the fast-oscillating terms with frequencies \(\pm 2\omega_{\rm d}\), and Eq. (18) reduces to \[U_{\rm d}H_{\rm drive}U_{\rm d}^{\dagger}\approx i\frac{\varepsilon(t)}{2} \left(a^{\dagger}-a\right). \tag{20}\] Combining this with Eq. (14c), the effective Hamiltonian for a drive that results in a rotation about the \(x\)-axis (in the qubit subspace) is given by \[\tilde{H}_{\rm sys} =\sum_{k\geq 0}\left(\omega_{k}-k\omega_{\rm d}\right)|k \rangle\!\langle k| \tag{21}\] \[\quad+\frac{\varepsilon(t)}{2}\sum_{k\geq 0}\tilde{g}_{k}\sqrt{k+ 1}(|k\rangle\!\langle k+1|+|k+1\rangle\!\langle k|),\] which is Eq. (6) of the main text. By tuning \(\phi_{\rm d}\), we can implement a rotation about any axis in the \((x,y)\) plane in the qubit subspace. ## Appendix B Data collection and analysis methodology We used the IBMQE processors ibmq_quito (Quito) and ibmq_lima (Lima), whose layout is shown schematically in Fig. 7. For both Quito and Lima, we use qubit 1 (Q1) as the main qubit. These are five-qubit processors consisting of superconducting transmon qubits. Various calibration details and hardware specifications relevant to the qubits and gates used in this work are provided in Table 2. For each initial state, we selected 70 equidistant time instants (\(19.6\ \mu\)s/70), with each such instant corresponding to an integer number of cycles of the XY4 DD sequences. We generated a circuit according to the scheme given in Fig. 2 for each such initial state and each such instant. We sent all 70 such circuits in one job (the maximum allowed number of circuits per job is 75), and each job was repeated 8192 times. We ensured that all the jobs were sent consecutively within the same calibration cycle to avoid charge-noise-dependent fluctuations and variations in critical features over different calibration cycles. We measured only the main qubit in the \(Z\) basis, each measurement yielding either 0 or 1.
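The relation \([a,\hat{N}]=a\) derived in Appendix A holds level by level and therefore survives truncation to finitely many transmon levels. A quick numerical check with illustrative couplings \(\tilde{g}_{k}\) (toy values, not fitted parameters):

```python
# Numerical check of [a, N] = a (Appendix A) for a 4-level truncation.
# The couplings g~_k below are arbitrary toy values; the relation holds
# for any choice.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def msub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

d = 4
g = [1.0, 0.95, 0.9]                           # g~_k for k = 0 .. d-2
a = [[0j] * d for _ in range(d)]
for k in range(d - 1):
    a[k][k + 1] = 1j * g[k] * (k + 1) ** 0.5   # a = i sum g~_k sqrt(k+1)|k><k+1|
N = [[(i if i == j else 0) + 0j for j in range(d)] for i in range(d)]

comm = msub(matmul(a, N), matmul(N, a))        # [a, N] = aN - Na
max_dev = max(abs(comm[i][j] - a[i][j]) for i in range(d) for j in range(d))
```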
We computed the empirical fidelity \(F^{(\varepsilon)}\) as the ratio of the number of favorable outcomes (0) to the total number of experiments (8192 per initial state and per measurement time instant). This is a proxy for the fidelity \(F_{|\psi\rangle}=\mathrm{Tr}\left[|0\rangle\!\langle 0|\,U^{-1}\mathcal{E}\left(U|0\rangle\!\langle 0 |U^{-1}\right)U\right]\), where \(U\) represents \(U3(\theta,\phi,\lambda)\) and \(\mathcal{E}\) represents the quantum map of the main qubit corresponding to any of the three types of experiments described in the main text and in Appendix C below. Error bars were then generated using the standard bootstrapping procedure, where we resample (with replacement) counts out of the experimental counts' dictionary (i.e., the list of 0/1 measurement outcomes per state and instant) and create several new dictionaries. The final fidelity and error bars are obtained by calculating the mean and standard deviations over the fidelities of these newly resampled dictionaries. Using ten such resampled dictionaries of the counts sufficed to give small error bars. We report the final fidelity with \(2\sigma\) error bars, corresponding to 95% confidence intervals. \begin{table} \begin{tabular}{|l||l|l|l|l|l|} \hline Quito & Q0 & Q1 & Q2 & Q3 & Q4 \\ \hline Qubit freq. (GHz) & 5.3006 & 5.0806 & 5.3220 & 5.1637 & 5.0524 \\ \(\eta_{\rm d}\) (MHz) & 331.5 & 319.2 & 332.3 & 335.1 & 319.3 \\ \(T_{1}\) (\(\mu\)s) & 86.7 & 98.6 & 61.5 & 111.5 & 85.7 \\ \(T_{2}\) (\(\mu\)s) & 132.5 & 149.0 & 78.9 & 22.7 & 136.7 \\ sx gate error [\(10^{-2}\)] & 0.0302 & 0.0243 & 0.1042 & 0.0629 & 0.0884 \\ sx gate length (ns) & 35.556 & 35.556 & 35.556 & 35.556 & 35.556 \\ readout error [\(10^{-2}\)] & 3.91 & 2.10 & 6.42 & 2.28 & 2.00 \\ \hline \hline Lima & Q0 & Q1 & Q2 & Q3 & Q4 \\ \hline Qubit freq.
(GHz) & 5.0297 & 5.1277 & 5.2474 & 5.3026 & 5.0920 \\ \(\eta_{\rm d}\) (MHz) & 335.7 & 318.3 & 333.6 & 331.2 & 334.5 \\ \(T_{1}\) (\(\mu\)s) & 125.2 & 105.7 & 88.1 & 59.9 & 23.2 \\ \(T_{2}\) (\(\mu\)s) & 194.3 & 136.2 & 123.3 & 16.8 & 21.1 \\ sx gate error [\(10^{-2}\)] & 0.0230 & 0.0189 & 0.0308 & 0.0251 & 0.0758 \\ sx gate length (ns) & 35.556 & 35.556 & 35.556 & 35.556 & 35.556 \\ readout error [\(10^{-2}\)] & 1.73 & 1.40 & 1.69 & 2.42 & 4.820 \\ \hline \end{tabular} \end{table} Table 2: Specifications of the Quito (top block) and Lima (bottom block) devices accessed on September 1, 2021, and January 1, 2023, respectively. The sx (\(\sqrt{\sigma^{x}}\)) gate forms the basis of all the single qubit gates, and any single qubit gate of the form \(U3(\theta,\phi,\lambda)\) is composed of two sx and three \(\mathrm{rz}(\lambda)=\exp(-i\frac{\lambda}{2}\sigma^{z})\) gates (which are error-free and take zero time, as they correspond to frame updates). Figure 7: Schematic layout of the Quito and Lima processors. The main qubit in our work is Q1; we refer to the rest as spectators. ## Appendix C Experimental fidelity results For the Quito device, Fig. 8 shows the results of three different types of experiments for the 16 states consisting of 6 Pauli states and 10 Haar-random states. In the first type of experiment, we apply a series of identity gates separated by barriers on all the qubits (the main qubit and the spectator qubits). All the qubits always start in the \(\ket{0}\) state. The second and third types are the experiments discussed in the main text: free-evolution, where we apply an XY4 DD sequence to the spectator qubits and identity gates on the main qubit, and DD-evolution, where we apply the XY4 DD sequences to the main qubit and identity gates to the spectator qubits (see Fig. 2). For Lima, we only show two types of experiments in Fig. 9.
The first is the free-evolution experiment, where we prepare some initial state of the main qubit, apply a series of identity gates, and measure in the computational basis. The second type is the usual DD experiment, where we apply a series of XY4 sequences to the main qubit and the identity operation to all the spectator qubits. In both types of experiments, the spectator qubits are always initialized in the ground state \(\ket{0}\). In the Lima case, applying a DD sequence to the spectator qubits is unnecessary.
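The empirical-fidelity estimate and bootstrap error bars described in Appendix B can be sketched as follows. The shot record below is a toy stand-in for the 8192 measured outcomes per circuit; the resampling mirrors the stated procedure (ten resampled outcome lists, mean and standard deviation over their fidelities).

```python
import random

# Sketch of the Appendix B analysis: the fidelity proxy is the fraction
# of 0 outcomes among the shots, and error bars come from resampling the
# outcome list with replacement.

def empirical_fidelity(outcomes):
    return outcomes.count(0) / len(outcomes)

def bootstrap_fidelity(outcomes, n_resamples=10, seed=1):
    rng = random.Random(seed)
    fids = []
    for _ in range(n_resamples):
        resampled = [rng.choice(outcomes) for _ in outcomes]
        fids.append(empirical_fidelity(resampled))
    mean = sum(fids) / len(fids)
    var = sum((f - mean) ** 2 for f in fids) / len(fids)
    return mean, var ** 0.5   # the paper quotes 2-sigma error bars

# toy shot record standing in for the 8192 shots of one circuit
shots = [0] * 7800 + [1] * 392
mean, sigma = bootstrap_fidelity(shots)
```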
2301.01282
RSA+: An RSA variant
We introduce a new probabilistic public-key cryptosystem which combines the main ingredients of the well-known RSA and Rabin cryptosystems. We investigate the security and performance of our new scheme in comparison to the other two.
Soeren Kleine, Andreas Nickel, Torben Ritter, Krishnan Shankar
2022-12-31T02:48:17Z
http://arxiv.org/abs/2301.01282v2
# RSA+: An algorithm at least as secure as RSA ###### Abstract The RSA algorithm has been around for nearly five decades ([1]) and remains one of the most studied public key cryptosystems. Many attempts have been made to break it or improve it, and questions remain about the equivalence of the strength of its security to well known hard problems in computational number theory. A basic question which has received much attention (cf. [1], [2]1) is: Is breaking RSA equivalent to factoring? In this note we propose a modified version which we call RSA+ which is at least as secure as RSA and show that breaking RSA+ is probably computationally equivalent to factoring \(n\), the public modulus. The motivation came from wanting to obscure the encryption exponent in RSA. Footnote 1: In [2] there seems to be an issue in the proof of Lemma 3.2, where it is assumed that a cyclotomic polynomial \(\Phi_{d}(x)\) is irreducible over \(\mathbf{F}_{p}\), which is not always true. ## 1. The RSA+ Algorithm **RSA**: Bob wishes to send a message \(m\) to Alice, whose RSA public key is \((n,e)\) and private key is \((p,q,d)\), where \(n=pq\) and \(de\equiv 1\pmod{\varphi(n)}\). In the usual implementation of RSA Bob computes \(c\equiv m^{e}\pmod{n}\) and sends it to Alice. Decryption is straightforward: \(c^{d}\equiv(m^{e})^{d}\equiv m^{de}\equiv m\pmod{n}\) since \(de\equiv 1\pmod{\varphi(n)}\). **RSA+**: We start with the same setup as above, namely that Bob has a message \(m\) to transmit to Alice, whose RSA public key is \((n,e)\). **Encryption**: 1. Bob finds a random number \(x\pmod{n}\) and computes \(y\equiv x^{2}\pmod{n}\). 2. Bob computes \(c\equiv m^{x}\pmod{n}\) and \(r\equiv y^{e}\pmod{n}\). 3. Bob transmits the pair \((c,r)\) to Alice. **Decryption**: 1. Alice computes \(y\equiv r^{d}\pmod{n}\). This step is similar to decrypting in RSA. 2. Alice then writes down the equation \(y\equiv x^{2}\pmod{n}\).
She uses her knowledge of the factorization of \(n=pq\) to compute all four square roots of \(y\) (see Section 4). 3. For each square root, say \(\{x_{1},x_{2},x_{3},x_{4}\}\), Alice sequentially computes the inverse \(u_{i}\equiv x_{i}^{-1}\pmod{\varphi(n)}\) and evaluates \(c^{u_{i}}\pmod{n}\) until she sees an intelligible message \(m\) (but see 1.2 below). **Theorem A**.: _Breaking RSA+ is probably computationally equivalent to factoring \(n=pq\)._ As with RSA, care must be taken in choosing the random integer \(x\) in Step 1 of encryption. If Bob chooses \(x<\sqrt{n}\), then \(y=x^{2}\) as integers, and it is easy to find the square root if one knows \(y\). In this case the system is as secure as RSA, since \(y\) is RSA-encrypted. It is also important that \(\gcd(x,\varphi(n))=1\), since otherwise Alice cannot decrypt the message even if she can find \(x\) (Euler's theorem). So Bob could choose \(x\) to be a prime of at least 150 digits, assuming \(n\) is around 300 digits. In Step 3 of decryption, as stated, Alice must compute all four square roots and then try to uncover an intelligible message by sequentially decrypting each \(m^{x_{i}}\pmod{n}\), which is onerous in practice. We describe a way around this problem as long as both primes dividing \(n\) are congruent to 3 mod 4 (this is the workaround suggested by Blum and Williams). Consider the setup with \(y\equiv x^{2}\pmod{n}\), where Bob chooses \(x,y\). Then \(y\) has four square roots mod \(n\), which come from combining the two square roots each mod \(p\) and mod \(q\) using the Chinese Remainder Theorem (CRT). Suppose both primes \(p,q\) in the factorization of \(n\) are chosen (by Alice) to be congruent to 3 mod 4. Now Bob chooses \(x,y\) such that \(x\) itself is a square mod \(n\). Then the equation \(X^{2}\equiv y\pmod{n}\) has solutions (say) \(X\equiv\pm a\pmod{p}\) and \(X\equiv\pm b\pmod{q}\).
These are combined using CRT to yield \(X\equiv x_{i}\pmod{n}\), for \(i=1,2,3,4\), and let us suppose \(x=x_{1}\). Note that \(p,q\equiv 3\pmod{4}\), so \(\left(\frac{-1}{p}\right)=-1\) and \(\left(\frac{-1}{q}\right)=-1\), i.e., \(-1\) is not a quadratic residue mod \(p\) nor mod \(q\). This implies that exactly one of the roots \(\pm a\pmod{p}\) and one of the roots \(\pm b\pmod{q}\) is a quadratic residue (mod the respective primes) and the other is not. Since Bob picked \(x\) to be a square mod \(n\), it follows that the congruences that yielded \(x_{1}\) must also be squares, i.e., suppose \(X\equiv a\pmod{p}\) and \(X\equiv b\pmod{q}\) yields \(X\equiv x_{1}\pmod{n}\); then it follows that \(\left(\frac{a}{p}\right)=+1\), \(\left(\frac{b}{q}\right)=+1\) and \(\left(\frac{-a}{p}\right)=-1\), \(\left(\frac{-b}{q}\right)=-1\). From this it follows that none of \(x_{2},x_{3},x_{4}\) is a square mod \(n\). We summarize this via the following **Proposition 1.1**.: _Suppose \(n=pq\), where \(p\) and \(q\) are congruent to 3 mod 4. Given \(x,y\) such that \(y\equiv x^{2}\pmod{n}\) and \(\gcd(x,n)=1\), the element \(y\) has four square roots mod \(n\), exactly one of which is a square mod \(n\)._ This suggests a way to pinpoint \(x\) without having to search all square roots: namely, Bob picks \(x\) to be a square mod \(n\) and then computes \(y\equiv x^{2}\pmod{n}\). By the above Proposition, \(x\) will be the only square root of \(y\) mod \(n\) that is itself a square. It also follows from the discussion above that: (i) if one of the primes is congruent to 1 mod 4, then the number of square roots that are themselves squares mod \(n\) is either 0 or 2; (ii) if both primes are congruent to 1 mod 4, then the number of square roots that are themselves squares mod \(n\) is 0 or 4. Decryption is more involved if at least one of the primes dividing \(n\) is congruent to 1 (mod 8).
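Putting the encryption and decryption steps together with Proposition 1.1 gives the following toy round trip. This is a minimal sketch with tiny illustrative parameters, not a secure implementation or the authors' code; it uses Python's built-in `pow(a, -1, m)` (Python 3.8+) for modular inverses.

```python
# Toy end-to-end sketch of RSA+ with Blum primes (p, q = 3 mod 4) and the
# disambiguation of Proposition 1.1: Bob picks x that is itself a square
# mod n, so Alice keeps the unique square root of y that is again a
# quadratic residue. All numbers are tiny and for illustration only.

def crt(a, p, b, q):
    # combine X = a (mod p), X = b (mod q)
    return (a + p * ((b - a) * pow(p, -1, q) % q)) % (p * q)

def is_qr(a, p):
    # Euler's criterion for an odd prime p
    return pow(a, (p - 1) // 2, p) == 1

p, q = 23, 31                        # both congruent to 3 (mod 4)
n, phi = p * q, (p - 1) * (q - 1)
e = 7
d = pow(e, -1, phi)

# --- encryption (Bob) ---
m = 101
x = pow(13, 2, n)                    # x = 169, a square mod n, gcd(x, phi) = 1
y = pow(x, 2, n)
c, r = pow(m, x, n), pow(y, e, n)    # transmitted pair (c, r)

# --- decryption (Alice) ---
y_dec = pow(r, d, n)                 # recover y as in RSA
rp = pow(y_dec, (p + 1) // 4, p)     # square roots via Lemma 4.2
rq = pow(y_dec, (q + 1) // 4, q)
roots = [crt(sp, p, sq, q) for sp in (rp, p - rp) for sq in (rq, q - rq)]
# keep the unique root that is a quadratic residue mod both p and q
x_rec = next(t for t in roots if is_qr(t % p, p) and is_qr(t % q, q))
m_rec = pow(c, pow(x_rec, -1, phi), n)
```

With these parameters Alice recovers \(x\) uniquely, and \(c^{x^{-1}}\) returns the plaintext.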
Nevertheless, the overall algorithm's security is tied to the difficulty of factoring, and as such lies in class \(NP\), although it is unknown whether it is in \(P\). ## 2. RSA+ is at least as secure as RSA In this section we show that RSA+ is at least as secure as RSA. We will show this via the use of a so-called _black box_, i.e., an unknown machine or method or algorithm that takes in a specified input and returns an output that is otherwise currently computationally intractable. **Theorem B**.: _RSA+ is at least as secure as RSA._ Proof.: Suppose we have a black box that is able to decrypt RSA+ messages, i.e., this black box takes as input \((n,e,m^{x}\pmod{n},y^{e}\pmod{n})\) and returns \(m\). Here \(y\equiv x^{2}\pmod{n}\). Given an RSA ciphertext \(c\equiv m^{e}\pmod{n}\), one would then input \((n,e,c,(e^{2})^{e}\pmod{n})\), which corresponds to the choice \(x=e\) and generates the output \(m\). This shows that any system that can decrypt RSA+ ciphertexts can also decrypt RSA ciphertexts. Conversely, suppose there is a black box that is able to decrypt RSA messages, i.e., this black box takes as input \((n,e,c\equiv m^{e}\pmod{n})\) and returns \(m\). Given an RSA+ ciphertext \((m^{x}\pmod{n},y^{e}\pmod{n})\), one can only decrypt \(y\). In order to get to \(m\), one would need to know the exponent \(x\), i.e., one would input \((n,x,m^{x}\pmod{n})\) into the black box. This means computing a square root of \(y\). It is not known whether this black box, which can decrypt RSA ciphertexts, can also compute square roots (but see Theorem C). ## 3. Computational Black Boxes In this section we explore further computational black boxes which may circumvent known methods like factoring. Suppose one is able to decrypt \(y\) by ascertaining \(d\). One then needs to compute the particular square root \(x\) used to encrypt \(m\). Is it possible to compute just one (pair of) square root(s) without knowledge of the factors?
If there exists a computational black box that can produce one square root when the input is \((y,n)\), then there are two possibilities. Suppose the black box spits out a random square root mod \(n\) for the equation \(y\equiv x^{2}\pmod{n}\). In this case one simply inputs the same pair \((y,n)\) repeatedly until we get two distinct square roots that do not add up to zero mod \(n\). By Lemma 4.1 this will yield a factorization of \(n\). Suppose instead the black box always spits out the _same_ square root for the equation \(y\equiv x^{2}\pmod{n}\), i.e., for a given input \((y,n)\), the output is always the same \(x\) rather than one of the four distinct square roots at random. In this case we start with some known \(x\) and square it to obtain \(y\equiv x^{2}\pmod{n}\). Now input \((y,n)\) into the black box; if the output is \(\pm x\), then discard and try again with a different \(x\). If the output, say \(x^{\prime}\), is different from \(\pm x\pmod{n}\), then by Lemma 4.1 we can factor \(n\) by computing \(\gcd(x-x^{\prime},n)\). If the adversary has instead a black box that computes \(d\) given the input \((n,e)\), then \(\varphi(n)\mid(de-1)\). Therefore, for any \(x\) with \(\gcd(x,n)=1\) we have \(x^{de-1}\equiv 1\pmod{n}\). Since \(\varphi(n)=(p-1)(q-1)\) is divisible by \(4\), we may compute \(y\equiv x^{(de-1)/2}\pmod{n}\). Then, by construction \(y^{2}\equiv 1\pmod{n}\), which means \(n\mid(y-1)(y+1)\). Then either \(y\equiv\pm 1\pmod{n}\) or by Lemma 4.1 \(\gcd(y\pm 1,n)\) yields a factor of \(n\). Again, we can repeat this for several different \(x\) as needed to factor \(n\) with high probability. Whether RSA+ is computationally equivalent to factoring depends on the following thought experiment. Suppose we have a black box like the one in the previous Section which takes as input \((n,e,m^{x}\pmod{n},y^{e}\pmod{n})\), where \(y\equiv x^{2}\pmod{n}\), and produces as output \(m\). Does this allow us to factor \(n\)?
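All of the oracle arguments above reduce to the same gcd computation, formalized in Lemma 4.1: two square roots of the same value that are not negatives of each other reveal a factor. A toy sketch with a tiny modulus:

```python
from math import gcd

# Sketch of the factoring step used in the black-box arguments above
# (and in Lemma 4.1): given a^2 = b^2 (mod n) with a != +/- b (mod n),
# gcd(a - b, n) is a non-trivial factor of n.

def factor_from_roots(a, b, n):
    assert (a * a - b * b) % n == 0          # a and b square to the same value
    f = gcd(a - b, n)
    assert 1 < f < n                         # guaranteed when a != +/- b mod n
    return f, n // f

n = 713                 # 23 * 31, toy modulus
a, b = 169, 107         # both square to 41 mod 713, and a != +/- b mod 713
f1, f2 = factor_from_roots(a, b, n)
```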
**Theorem C**.: _A black box with input \((n,e,m^{x}\pmod{n},y^{e}\pmod{n})\) and output \(m\) can probably factor \(n\)._ Proof.: Start with a known pair \((x,y)\), where \(y\equiv x^{2}\pmod{n}\). Then note that: \[y^{x}\equiv(x^{2})^{x}\equiv x^{2x}\equiv(x^{x})^{2}\pmod{n}\] \[x^{y}\equiv x^{x^{2}}\equiv x^{x\cdot x}\equiv(x^{x})^{x}\pmod{n}\] If we were to input \((n,e,x^{y}\pmod{n},y^{e}\pmod{n})\) into the black box, then this is the same as the input \((n,e,(x^{x})^{x}\pmod{n},y^{e}\pmod{n})\). The output should therefore be \(x^{x}\pmod{n}\) which is a square root of \(y^{x}\pmod{n}\). From 3.1 and 3.2 above, repeating this procedure for several different \(x\) should yield a factorization of \(n\). The above proof carries over for RSA as well i.e., if we had a black box for RSA, then the same argument above yields a procedure to compute square roots mod \(n\). The only caveat in Theorem C above is the nature of the black box. Suppose the black box were to always return the distinguished square root \(x^{x}\pmod{n}\) for the square \(y^{x}\equiv(x^{x})^{2}\pmod{n}\), then this will not allow us to factor \(n\). ## 4. Computing square roots and factoring In the absence of a black box an adversary Eve would need \(y\) and then a square root of \(y\) to attempt to decrypt \(m\). Computing \(y\equiv(y^{e})^{d}\mod{n}\) without \((p,q,d)\) is as hard as breaking RSA. Even if Eve can deduce \(d\) without factoring \(n\), she would next have to solve the equation \(y\equiv x^{2}\pmod{n}\). See Section 3 for a discussion on black boxes that may compute square roots. **Lemma 4.1**.: _Given \(n=pq\) if we can solve the (generic) equation \(y\equiv x^{2}\pmod{n}\) and find all four roots \(\pm a,\pm b\ mod{n}\), then we can factor \(n\)._ Conversely, if one can solve the equation \(y\equiv x^{2}\pmod{p}\) and \(y\equiv x^{2}\pmod{q}\), then one can combine the roots using the Chinese Remainder Theorem to produce (in general) four square roots mod \(pq\). 
Proof.: Since \(a^{2}\equiv b^{2}\pmod{n}\), we have \(n\mid(a-b)(a+b)\); since \(a\not\equiv\pm b\pmod{n}\), \(n\) divides neither factor alone. Thus, computing \(\gcd(a\pm b,n)\) yields a non-trivial factor of \(n\). **Lemma 4.2**.: _If \(p\equiv 3\pmod{4}\) is prime, then one can solve \(y\equiv x^{2}\pmod{p}\) by computing \(x\equiv y^{(p+1)/4}\pmod{p}\). Then \((\pm x)^{2}\equiv y\pmod{p}\)._ **Lemma 4.3**.: _If \(p=8k+5\) is prime, i.e., \(p\equiv 5\pmod{8}\), then one can solve \(y\equiv x^{2}\pmod{p}\)._ Proof.: Since \(y\) is a square mod \(p\), we have that \(y^{(p-1)/2}\equiv 1\pmod{p}\). Since \((p-1)/4\) is an integer, we can take a square root and we obtain that \(y^{(p-1)/4}\equiv\pm 1\pmod{p}\). _Case 1_: If \(y^{(p-1)/4}\equiv 1\pmod{p}\) and \(p=8k+5\), then \(x\equiv y^{k+1}\equiv y^{(p+3)/8}\pmod{p}\) is a desired square root. This is because \[x^{2}\equiv y^{(p+3)/4}\equiv y^{(p-1)/4}\cdot y\equiv y\pmod{p}\] _Case 2_: If \(y^{(p-1)/4}\equiv-1\pmod{p}\), then \(x\equiv 2^{2k+1}y^{k+1}\equiv 2^{(p-1)/4}y^{(p+3)/8}\pmod{p}\) is a desired square root. This is because \[x^{2}\equiv 2^{(p-1)/2}y^{(p+3)/4}\equiv(-1)\cdot y^{(p-1)/4}\cdot y\equiv(-1 )(-1)y\equiv y\pmod{p}\] Note that \(2^{(p-1)/2}\equiv-1\pmod{p}\) since \(p\equiv 5\pmod{8}\). **Lemma 4.4**.: _If \(p\equiv 1\pmod{8}\) is prime, then there exists an algorithm terminating in at most \(r\) steps to solve \(y\equiv x^{2}\pmod{p}\), where \(p-1=2^{r}s\) with \(s\) odd._ Proof.: This is the most involved case of the Tonelli-Shanks algorithm, which covers all the above Lemmas; see [10], [11]. See [12] for a nice exposition. Now we can prove Theorem A. Proof of Theorem A.: If an adversary Eve can factor \(n\), then she can certainly decrypt the message \(m\): compute \(\varphi(n)=(p-1)(q-1)\), find \(d\) using the Extended Euclidean Algorithm and use this to find \(y\equiv(y^{e})^{d}\pmod{n}\).
Now using Lemmas 4.2, 4.3, 4.4, depending on the residues of \(p,q\) modulo \(8\), Eve can find the square roots of \(y\) mod \(p\) and mod \(q\). Then she can combine them to obtain all square roots mod \(n\) using the Chinese Remainder Theorem. Finally Eve solves for \(x^{-1}\pmod{\varphi(n)}\) to decrypt \(m\).

If Eve possesses a black box as described in Theorem C, then Eve can probably factor \(n\) (with the caveat introduced in Remark 3.5). If Eve has instead a black box or method to find square roots mod \(n\), then by Lemma 4.1 she can factor \(n\).

## 5. Similar cryptosystems

A search of the literature by the author yielded similar cryptosystems, and these are described briefly here. Any omissions are entirely accidental. To the author's knowledge this particular protocol has not been described before. For each protocol below the description is brief and only intended to serve as a reference or comparison.

### The Rabin Cryptosystem

This cryptosystem ([10]) is the closest in spirit to the method proposed here. Its security is tied to square roots and factoring. Alice finds primes \(p,q\), both congruent to \(3\bmod 4\), and computes the public modulus \(n=pq\); unlike RSA, no exponent pair \((e,d)\) is needed. Bob encrypts his message \(m\) as \(c\equiv m^{2}\pmod{n}\) and transmits \(c\). To decrypt, Alice computes the square root mod \(p\) and mod \(q\) by the method outlined in Lemma 4.2. Then she combines them using the Chinese Remainder Theorem to obtain all four square roots and examines which of these is intelligible. This latter disambiguation problem was addressed by Blum and Williams. The Rabin cryptosystem was shown to be vulnerable to a chosen ciphertext attack. This attack can be thwarted by adding redundancies to the message.

### D-RSA (Dependent RSA)

This cryptosystem, described in [11], was in response to finding a so-called _semantically secure cryptosystem as efficient as RSA_. 
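The Rabin-style decryption described above, i.e., roots mod \(p\) and \(q\) via Lemma 4.2 recombined with the Chinese Remainder Theorem, can be sketched as follows (a toy illustration with tiny primes; `pow(q, -1, p)` for the modular inverse requires Python 3.8+):

```python
def crt_roots(y: int, p: int, q: int):
    """All four square roots of y mod n = p*q for p, q = 3 mod 4,
    as in Rabin decryption: Lemma 4.2 mod each prime, then CRT."""
    n = p * q
    rp = pow(y, (p + 1) // 4, p)   # a root mod p (Lemma 4.2)
    rq = pow(y, (q + 1) // 4, q)   # a root mod q (Lemma 4.2)
    q_inv = pow(q, -1, p)          # q^{-1} mod p for CRT recombination
    roots = set()
    for a in (rp, p - rp):
        for b in (rq, q - rq):
            # x = a mod p and x = b mod q
            roots.add((b + q * ((a - b) * q_inv % p)) % n)
    return roots
```

For instance, with \(p=7\), \(q=11\), the four square roots of \(4\) mod \(77\) are \(\{2, 9, 68, 75\}\); only one of the four candidate plaintexts is typically intelligible, which is the disambiguation problem mentioned above.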
Similar to RSA, Alice has public key \((n,e)\). Then Bob picks a random \(k\in(\mathbf{Z}_{n})^{*}\) and computes \(A\equiv k^{e}\pmod{n}\) and \(B\equiv m\cdot(k+1)^{e}\pmod{n}\). Decryption is via computing \(k\equiv(k^{e})^{d}\pmod{n}\) and then \(B\cdot(k+1)^{-e}\pmod{n}\). This is also similar in spirit to what we have described in that Bob picks a random number during the encryption process. Among the theorems in the paper it is shown that a slight modification of this protocol is semantically secure against _adaptive chosen ciphertext attacks_ relative to the Decision D-RSA Problem in the random oracle model.

### RSA-CRT

This is a variant of RSA ([12]) in which the public key is the same, namely \((n,e)\), but the private key is split up as \(d_{p}\equiv d\pmod{p-1}\), \(d_{q}\equiv d\pmod{q-1}\), \(q_{inv}\equiv q^{-1}\pmod{p}\). The encrypted message \(c\equiv m^{e}\pmod{n}\) is then decrypted as \(m_{1}\equiv c^{d_{p}}\pmod{p},m_{2}\equiv c^{d_{q}}\pmod{q},h\equiv q_{inv}(m_{1}-m_{2})\pmod{p}\) and finally, \(m=m_{2}+hq\). This variant runs faster than RSA in practice, but it was found in 1996 to be vulnerable to a differential fault attack by researchers at Bellcore.

### Multi-prime RSA

In this variant more than two primes are used in the public modulus. The system yields some advantages in efficiency but may be more vulnerable to attacks to factor \(n\).

### Multi-power RSA

In this variant one considers public moduli of the form \(n=p^{r}q^{s}\), where \(\gcd(r,s)=1\) (when \(s=1\) this is also called Takagi's variant; [14]). The method offers advantages in speedier decryption using Hensel's lemma and the Chinese Remainder Theorem mod \(p^{r}\) and mod \(q^{s}\).

### More RSA variants

Several other variants exist: Dual RSA (using two distinct moduli that share the same \(d,e\)), Batch RSA (encrypting using two different small exponents; [10]), etc. Most RSA variants are vulnerable to small private exponent attacks first described by Boneh and Durfee; [1]. 
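The RSA-CRT decryption above can be sketched in a few lines (a toy illustration with tiny parameters; Garner's recombination formula is the \(h\) step):

```python
def rsa_crt_decrypt(c: int, p: int, q: int, d: int) -> int:
    """RSA-CRT decryption: exponentiate mod p and mod q separately,
    then recombine with Garner's formula m = m2 + h*q."""
    d_p, d_q = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)
    m1 = pow(c, d_p, p)
    m2 = pow(c, d_q, q)
    h = q_inv * (m1 - m2) % p
    return m2 + h * q
```

The two half-size exponentiations are what make this variant faster than a single exponentiation mod \(n\); the speedup is what the fault attack mentioned above targets.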
A fairly exhaustive survey of RSA variants and their cryptanalysis can be found in the book by Hinek; [17].

### Hofheinz-Kiltz-Shoup Cryptosystem

This more recent system is described as a so-called Key Encapsulation Mechanism (KEM); [14]. The precise details would take up a few pages, so an abbreviated version is outlined here.

To set up, Alice picks two _safe_ primes \(P=2p+1\), \(Q=2q+1\) (i.e., \(p,q\) are _Sophie Germain primes_). Moreover, \(P,Q\equiv 3\pmod{4}\). Then \(N=PQ\) is a so-called _Blum integer_. Let \(\mathbf{QR}_{N}\subseteq(\mathbf{Z}_{N})^{*}\) denote the subset of quadratic residues mod \(N\); this has \(pq\) elements. A _signed quadratic residue_ is an element of \(\mathbf{QR}_{N}\) which is normalized to lie in the set \([-\frac{(N-1)}{2},\frac{(N-1)}{2}]\); this normalized set is denoted \(\mathbf{QR}_{N}^{+}\). We also need a _target collision resistant function_ (also known as a universal one-way hash function), say \(T\). The desired key to be transmitted is \(K\) and the notation \(\ell_{K}\) denotes the number of bits in \(K\). Finally, for \(u\in\mathbf{Z}_{N}\) (thought of as an element in \([-\frac{(N-1)}{2},\frac{(N-1)}{2}]\)) define the function \(\mathrm{LBS}_{N}(u)=u\pmod{2}\) and then use this to define bits of the key using the Blum-Blum-Shub pseudorandom number generator ([1]):
\[\mathrm{BBS}_{N}(u):=(\mathrm{LBS}_{N}(u),\mathrm{LBS}_{N}(u^{2}),\ldots, \mathrm{LBS}_{N}(u^{2^{\ell_{K}-1}}))\in\{0,1\}^{\ell_{K}}\]

**Public and Private keys**: Given \(N\), Alice picks a signed quadratic residue \(g\in\mathbf{QR}_{N}^{+}\) and an element \(\alpha\in\{1,2,\ldots,\frac{(N-1)}{4}\}\). Let \(X\equiv g^{\alpha 2^{\ell_{K}+\ell_{T}}}\pmod{N}\). Then the public key is \((N,g,X)\) and the private key is \((N,g,\alpha)\). 
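The \(\mathrm{BBS}_{N}\) bit-extraction above can be sketched as follows. Note that this simplified sketch takes representatives in \([0,N)\) rather than the signed set \([-\frac{(N-1)}{2},\frac{(N-1)}{2}]\) used in the actual scheme, so it is only a toy illustration of the iterate-and-take-a-bit structure:

```python
def bbs_bits(u: int, N: int, ell_K: int):
    """ell_K output bits of the Blum-Blum-Shub generator seeded at u:
    the low bit of u, u^2, u^4, ... taken mod N.
    (Simplification: unsigned residues in [0, N), not the signed set.)"""
    bits = []
    u %= N
    for _ in range(ell_K):
        bits.append(u % 2)   # LBS of the current iterate
        u = u * u % N        # square mod N for the next bit
    return bits
```

For example, with the Blum integer \(N=209=11\cdot 19\) and seed \(u=3\), the iterates are \(3, 9, 81, 82\), giving the bits \(1, 1, 1, 0\).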
**Encapsulation**: Bob chooses \(r\in\{1,2,\ldots,\frac{(N-1)}{4}\}\) and computes
\[R\equiv g^{r2^{\ell_{K}+\ell_{T}}}\pmod{N},\quad t=T(R),\quad S\equiv(g^{t}X )^{r}\pmod{N}\]
The ciphertext is \(C=(R,S)\) and the key transmitted is \(K=\mathrm{BBS}_{N}(g^{r\cdot 2^{\ell_{T}}})\in\{0,1\}^{\ell_{K}}\).

**Decapsulation**: The receiver computes \(t=T(R)\) and verifies whether
\[S^{2^{\ell_{K}+\ell_{T}}}\stackrel{{?}}{{\equiv}}R^{t+\alpha 2^{ \ell_{K}+\ell_{T}}}\pmod{N}\]
Assuming this equality is satisfied, the receiver (Alice) computes integers \(a,b,c\) such that \(2^{c}=\gcd(t,2^{\ell_{K}+\ell_{T}})=at+b2^{\ell_{K}+\ell_{T}}\). Alice then derives
\[U\equiv(S^{a}R^{b-a\alpha})^{2^{\ell_{T}-c}}\pmod{N},\quad\implies\quad K= \mathrm{BBS}_{N}(U)\]
We don't add any more details; the reason for describing this algorithm is to point out that the ciphertext has the same flavor as the algorithm presented in this paper. Note that decapsulation here does not require knowing the factorization of \(N\).

## 6. Remarks

The method proposed is probably computationally equivalent to factoring. To be certain one would have to solve the following thought exercise: Suppose there is a black box which takes as input \((c,r,n,e)\) in the proposed algorithm and returns \(m\). Can one factor \(n\) given this information? A similar formulation can be made for RSA itself: Suppose there is a black box which takes input \((c,n,e)\) and outputs \(m\); can one factor \(n\)? This is similar in spirit to a _chosen ciphertext attack_ on either system. While Theorem C indicates this may be true, the caveat of Remark 3.4 suggests uncertainty.

## Acknowledgments

The author is grateful to Kimball Martin, Steven Miller and Larry Washington for their careful reading of early drafts and for providing valuable comments that helped greatly improve this paper.
2301.13686
Detecting Unknown Encrypted Malicious Traffic in Real Time via Flow Interaction Graph Analysis
Chuanpu Fu, Qi Li, Ke Xu
2023-01-31T15:03:28Z
http://arxiv.org/abs/2301.13686v1
# Detecting Unknown Encrypted Malicious Traffic in Real Time via Flow Interaction Graph Analysis

###### Abstract

Nowadays traffic on the Internet has been widely encrypted to protect its confidentiality and privacy. However, traffic encryption is always abused by attackers to conceal their malicious behaviors. Since the encrypted malicious traffic has similar features to benign flows, it can easily evade traditional detection methods. Particularly, the existing encrypted malicious traffic detection methods are supervised and they rely on the prior knowledge of known attacks (e.g., labeled datasets). Detecting unknown encrypted malicious traffic in real time, which does not require prior domain knowledge, is still an open problem. In this paper, we propose HyperVision, a realtime unsupervised machine learning (ML) based malicious traffic detection system. Particularly, HyperVision is able to detect unknown patterns of encrypted malicious traffic by utilizing a compact in-memory graph built upon the traffic patterns. The graph captures flow interaction patterns represented by the graph structural features, instead of the features of specific known attacks. We develop an unsupervised graph learning method to detect abnormal interaction patterns by analyzing the connectivity, sparsity, and statistical features of the graph, which allows HyperVision to detect various encrypted attack traffic without requiring any labeled datasets of known attacks. Moreover, we establish an information theory model to demonstrate that the information preserved by the graph approaches the ideal theoretical bound. We show the performance of HyperVision by real-world experiments with 92 datasets including 48 attacks with encrypted malicious traffic. The experimental results illustrate that HyperVision achieves at least 0.92 AUC and 0.86 F1, which significantly outperform the state-of-the-art methods. In particular, more than 50% attacks in our experiments can evade all these methods. 
Moreover, HyperVision achieves at least 80.6 Gb/s detection throughput with the average detection latency of 0.83s.

## I Introduction

Traffic encryption has been widely adopted to protect the information delivered on the Internet. Over 80% of websites adopted HTTPS to prevent data breaches in 2019 [16, 62]. However, the cipher-suite for such protection is double-edged. In particular, the encrypted traffic also allows attackers to conceal their malicious behaviors, e.g., malware campaigns [2], exploiting vulnerabilities [64], and data exfiltration [77]. The ratio of encrypted malicious traffic on the Internet is growing significantly [2, 3, 76] and exceeds 70% of the entire malicious traffic [16].

However, encrypted malicious traffic detection is not well addressed due to the low-rate and diverse traffic patterns [2, 39, 77]. The traditional signature based methods that leverage deep packet inspection (DPI) are invalid under attacks with encrypted payloads [34]. Table I compares the existing malicious traffic detection methods. Different from plain-text malicious traffic, the encrypted traffic has similar features to benign flows and thus can evade existing machine learning (ML) based detection systems as well [2, 3, 62]. Particularly, the existing encrypted traffic detection methods are supervised, i.e., relying on the prior knowledge of known attacks, and can only detect attacks with known traffic patterns. They extract features of specific known attacks and use labeled datasets of known malicious traffic for model training [2, 3, 76]. Thus, they are unable to detect a broad spectrum of attacks with encrypted traffic [39, 41, 64, 77], which are constructed with unknown patterns [22]. Besides, these methods are incapable of detecting both attacks constructed with and without encrypted traffic and unable to achieve generic detection, because the features of encrypted and non-encrypted attack traffic are significantly different [2, 3]. 
In a nutshell, the existing methods cannot achieve unsupervised detection and are unable to detect encrypted malicious traffic with unknown patterns. In particular, the encrypted malicious traffic has stealthy behaviors, which cannot be captured by these methods [2, 76] that detect attacks according to the patterns of a single flow. However, it is still feasible to detect such attack traffic, because these attacks involve multiple attack steps with different flow interactions among attackers and victims, which are distinct from benign flow interaction patterns [24, 26, 39, 46, 61]. For example, the encrypted flow interactions between spam bots and SMTP servers are significantly different from legitimate communications [61], even if a single flow of the attack is similar to a benign one. Thus, this paper explores utilizing interaction patterns among various flows for malicious traffic detection.

To this end, we propose HyperVision, a realtime detection system that aims to capture footprints of encrypted malicious traffic by analyzing interaction patterns among flows. In particular, it can detect encrypted malicious flows with unknown footprints by identifying abnormal flow interactions, i.e., interaction patterns that are distinct from benign ones. To achieve this, we build a compact graph to capture various flow interaction patterns so that HyperVision can perform detection on various encrypted traffic according to the graph. The graph allows us to detect attacks without accessing packet payloads, while retaining the ability of detecting traditional (known) attacks with plain-text traffic. Therefore, HyperVision can detect malicious traffic with unknown patterns by learning the graph structural features. Meanwhile, by learning the graph structural features, it realizes unsupervised detection, which does not require model training with labeled datasets. However, it is challenging to build the graph for realtime detection. 
We cannot simply use IP addresses as vertices and traditional four-tuples of flows [19, 36] as edges to construct the graph, because the resulting dense graph cannot maintain interaction patterns among various flows, e.g., it incurs the dependency explosion problem [87]. Inspired by the study of the flow size distribution [25, 84], i.e., most flows on the Internet are short while most packets are associated with long flows, we utilize two strategies to record different sizes of flows, and process the interaction patterns of short and long flows separately in the graph. Specifically, HyperVision aggregates the short flows based on the similarity of massive short flows on the Internet, which reduces the density of the graph, and performs distribution fitting for the long flows, which effectively preserves flow interaction information.

We design a four-step lightweight unsupervised graph learning approach to detect encrypted malicious traffic by utilizing the rich flow interaction information maintained on the graph. First, we analyze the connectivity of the graph by extracting the connected components and identify abnormal components by clustering their high-level statistical features. By excluding the benign components, we also significantly reduce the learning overhead. Second, we pre-cluster the edges according to the observed local adjacency in edge features. The pre-clustering operations significantly reduce the feature processing overhead and ensure realtime detection. Third, we extract critical vertices by solving a vertex cover problem using the Z3 SMT solver [55] to minimize the number of clusterings. Finally, we cluster each critical vertex according to its connected edges, which are in the centers of the clusters produced by the pre-clustering, and thus obtain the abnormal edges indicating encrypted malicious traffic. 
Moreover, to quantify the benefits of the graph based flow recording of HyperVision over the existing approaches, we develop a _flow recording entropy model_, an information theory based framework that theoretically analyzes the amount of information retained by the existing data sources of malicious traffic detection systems. By using this framework, we show that the existing sampling based and event based traffic data sources (e.g., NetFlow [19] and Zeek [86]) cannot retain high-fidelity traffic information. Thus, they are unable to record flow interaction information for the detection. By contrast, the graph in HyperVision captures near-optimal traffic information for the graph learning based detection, and the amount of information maintained in the graph approaches the theoretical upper bound of an idealized data source with infinite storage, according to the data processing inequality [85]. Also, the analysis results demonstrate that the graph in HyperVision achieves higher information density (i.e., amount of traffic information per unit of storage) than all existing data sources, which is the foundation of the accurate and efficient detection.

We prototype HyperVision1 with Intel's Data Plane Development Kit (DPDK) [37]. To extensively evaluate the performance of the prototype, we replayed 92 attack datasets including 80 new datasets collected in our virtual private cloud (VPC) with more than 1,500 instances. In the VPC, we collected 48 types of typical encrypted malicious traffic, including (i) encrypted flooding traffic, e.g., flooding target links [41]; (ii) web attacks, e.g., exploiting web vulnerabilities [64]; (iii) malware campaigns, including connectivity testing, dependency update, and downloading. In the presence of background traffic replayed from a backbone network [80], HyperVision achieves 13.9% \(\sim\) 36.1% accuracy improvements over five state-of-the-art methods. 
It detects all encrypted malicious traffic in an unsupervised manner with more than 0.92 AUC and 0.86 F1, where 44 of the real-world stealthy attacks cannot be identified by any of the baselines, e.g., an advanced side-channel attack exploiting CVE-2020-36516 [26] and many newly discovered cryptojacking attacks [7]. Moreover, HyperVision achieves on average more than 100 Gb/s detection throughput with the average detection latency of 0.83s.

Footnote 1: Source code and datasets: [https://github.com/fuchuanpu/HyperVision](https://github.com/fuchuanpu/HyperVision).

In summary, the contributions of our paper are five-fold:

* We propose HyperVision, the first realtime unsupervised detection for encrypted malicious traffic with unknown patterns by utilizing the flow interaction graph.
* We develop several algorithms to build the in-memory graph that allows us to accurately capture interaction patterns among various flows.
* We design a lightweight unsupervised graph learning method to detect encrypted traffic via graph features.
* We develop a theoretical analysis framework established by information theory to show that the graph captures near-optimal traffic interaction information.
* We prototype HyperVision and use the extensive experiments with various real-world encrypted malicious traffic to validate its accuracy and efficiency.

The rest of the paper is organized as follows: Section II introduces the threat model of HyperVision. Section III presents the high-level design of HyperVision. In Sections IV, V, and VI, we describe the detailed designs. In Section VII, we conduct the theoretical analysis. In Section VIII, we experimentally evaluate the performance. Section IX reviews related works and Section X concludes this paper. Finally, we present details in the Appendix.

* Existing multi-flow features can only represent the features of specific flows, which cannot be used to represent complicated interaction patterns among various flows. (Footnote to Table I.)
TABLE I: The comparison with the existing methods of malicious traffic detection.

## II Threat Model And Design Goals

We aim to develop a realtime system (i.e., HyperVision) to detect encrypted malicious traffic. It performs detection according to the traffic replicated by routers through port mirroring [17], which ensures that the system will not interfere with the traffic forwarding. After identifying encrypted malicious traffic, it can cooperate with the existing on-path malicious traffic defenses [48, 49, 88] to throttle the detected traffic. To perform detection on encrypted traffic, we cannot parse and analyze application layer headers and payloads. In this paper, we focus on detecting active attacks constructed with encrypted traffic. We do not consider passive attacks that do not generate traffic to victims, e.g., traffic eavesdropping [68] and passive traffic analysis [70]. According to the existing studies [10, 24, 29, 40, 46, 81], attackers utilize reconnaissance steps to probe the information of victims, e.g., the password of a victim [39], the TCP sequence number of a TLS connection [26, 27], and the randomized memory layout of a web server [75], which cannot be accessed directly by attackers due to lack of prior knowledge. Note that these attacks are normally constructed with many addresses owned or faked by attackers.

The design goals of HyperVision are as follows. First, it should be able to achieve generic detection, i.e., detect attacks constructed with encrypted or non-encrypted traffic, which ensures that the attacks cannot evade detection by traffic encryption [2, 77]. Second, it should be able to achieve realtime high-speed traffic processing, i.e., it can identify whether the encrypted traffic passing through is malicious, while incurring low detection latency. Third, the detection performed by HyperVision is unsupervised, which means that it does not require any prior knowledge of encrypted malicious traffic. 
That is, it should be able to deal with attacks with unknown patterns, i.e., zero-day attacks, which have not been disclosed [30]. Thus, we do not use any labeled traffic datasets for ML training. These issues cannot be well addressed by the existing detection methods [62].

## III Overview of HyperVision

In this section, we develop HyperVision, an unsupervised detection system to capture malicious traffic in real time, in particular, encrypted malicious traffic. Normally, the patterns of each flow in the encrypted malicious traffic, i.e., single-flow patterns, may be similar to those of benign flows, which allows them to evade the existing detection. However, the malicious behaviors appearing in the interaction patterns between the attackers and victims will be more distinct from the benign ones. Thus, in HyperVision, we construct a compact graph to maintain interaction patterns among various flows and detect abnormal interaction patterns by learning the features of the graph. HyperVision analyzes the graph structural features representing the interaction patterns without prior knowledge of known attack traffic and thus can achieve unsupervised detection against various attacks. It realizes generic detection by analyzing flows regardless of the traffic type and can detect encrypted and non-encrypted malicious traffic. Figure 1 shows the three key parts of HyperVision, i.e., graph construction, graph pre-processing, and abnormal interaction detection.

**Graph Construction.** HyperVision collects network flows for graph construction. Meanwhile, it classifies the flows into short and long ones and records their interaction patterns separately for the purpose of reducing the density of the graph. In the graph, it uses different addresses as vertices that connect the edges associated with short and long flows, respectively. 
It aggregates the massive similar short flows to construct one edge for a group of short flows, and thus reduces the overhead for maintaining flow interaction patterns. Moreover, it fits the distributions of the packet features in the long flows to construct the edges associated with long flows, which ensures high-fidelity recorded flow interaction patterns, while addressing the issue of coarse-grained flow features in the traditional methods [36]. We will detail how HyperVision maintains the high-fidelity flow interaction patterns in the in-memory graph in Section IV.

Fig. 1: The overview of HyperVision.

**Graph Pre-Processing.** We pre-process the built interaction graph to reduce the overhead of processing the graph: we extract connected components and cluster the components using high-level statistics. In particular, the clustering can accurately detect the components with only benign interaction patterns and thus filters out these benign components to reduce the scale of the graph. Moreover, we perform a pre-clustering and use the generated cluster centers to represent the edges in the identified clusters. We will detail the graph pre-processing in Section V.

**Malicious Traffic Detection Based on the Graph.** We achieve unsupervised encrypted malicious traffic detection by analyzing the graph features. We identify critical vertices in the graph by solving a vertex cover problem, which ensures that the clustering based graph learning processes all edges with the minimum number of clusterings. For each selected vertex, we cluster all connected edges according to their flow features and structural features that represent the flow interaction patterns. HyperVision can identify abnormal edges in real time by computing the loss function of the clustering. We will describe the details of graph learning based detection in Section VI. 
## IV Graph Construction

In this section, we present the design details of constructing the flow interaction graph that maintains interaction patterns among various flows. In particular, we classify different flows, i.e., short and long flows, aggregate the short flows, and perform the distribution fitting for the long flows, respectively, for efficient graph construction. In Section VII, we will show that the graph retains near-optimal information for detection.

### _Flow Classification_

In order to efficiently analyze flows captured on the Internet, we need to avoid the dependency explosion among flows during the graph construction. We classify the collected flows into short and long flows according to the flow size distribution [25] (see Figure 2), and then reduce the density of the graph (shown in Figure 3). Figure 2 shows the distribution of flow completion time (FCT) and flow length of the MAWI Internet traffic dataset [80] in Jan. 2020. For simplicity, we use the first \(13\times 10^{6}\) packets to plot the figure. According to the figure, we observe that only 5.52% of flows have FCT \(>\) 2.0s. However, 93.70% of the packets in the dataset belong to long flows, which account for only a 2.36% proportion of the flows. Inspired by this observation, we apply different flow collection strategies for the short and long flows.

We poll the per-packet information from a data-plane high-speed packet parsing engine and obtain the source and destination addresses, port numbers, and per-packet features, including protocols, lengths, and arrival intervals. These features can be extracted from both encrypted and plain-text traffic for generic detection. We develop a flow classification algorithm to classify the traffic (see Algorithm 1 in Appendix A). It maintains a timer time_now and a hash table that uses Hash(src, dst, src_port, dst_port) as the key and the collected flows, indicated by the sequences of their per-packet features, as the values. 
It traverses the hash table every judge_interval seconds according to time_now and judges a flow as complete when its last packet arrived more than pkt_timeout seconds before time_now. When the flows are completed, we classify them as long flows if they have more than flow_line packets. Otherwise, we classify them as short flows. As shown in Figure 2(b), we can accurately classify short and long flows. The definitions of the hyper-parameters can be found in Table VII (see Appendix A). Note that we poll the state-less per-packet information from the data-plane, while not maintaining flow states (e.g., a state machine [89]) on the data-plane, which prevents attackers from manipulating the states, e.g., via side-channel attacks [65], and from evading detection [79].

### _Short Flow Aggregation_

We need to reduce the density of the graph for analysis. As shown in Figure 3(a), the graph will be very dense if we use traditional four-tuple flows as edges, which is similar to the dependency explosion problem in provenance analysis [83, 87]. We observe that most short flows have almost the same per-packet feature sequences, for instance, the encrypted flows of repetitive SSH cracking attempts originating from specific attackers [39]. Thus, we perform the short flow aggregation to represent similar flows using one edge after the classification. We design an algorithm to aggregate short flows (see Algorithm 2 in Appendix A). A set of flows can be aggregated when all the following requirements are satisfied: (i) the flows have the same source and/or destination addresses, which implies similar behaviors generated from the addresses; (ii) the flows have the same protocol type; (iii) the number of the flows is large enough, i.e., the number of the short flows reaches the threshold agg_line, which ensures that the flows are repetitive enough. 
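The classification and aggregation rules above can be sketched as follows. This is a simplified sketch, not the paper's Algorithms 1 and 2: the hyper-parameter names (pkt_timeout, flow_line, agg_line) follow the paper, but their default values here are assumed, and the aggregation key is reduced to (source address, protocol) for brevity:

```python
from collections import defaultdict

def classify_flows(flows, time_now, pkt_timeout=10.0, flow_line=15):
    """Split completed flows into short and long ones.
    `flows` maps a four-tuple-plus-protocol key to a list of
    (timestamp, per-packet feature) pairs; a flow is judged complete
    when its last packet is older than pkt_timeout seconds."""
    short, long_ = [], []
    for key, pkts in flows.items():
        if time_now - pkts[-1][0] < pkt_timeout:
            continue  # flow still active, keep collecting
        (long_ if len(pkts) > flow_line else short).append((key, pkts))
    return short, long_

def aggregate_short_flows(short_flows, agg_line=20):
    """Group short flows sharing (source address, protocol); a group
    becomes one graph edge once it holds at least agg_line flows."""
    groups = defaultdict(list)
    for (src, dst, sport, dport, proto), pkts in short_flows:
        groups[(src, proto)].append(((src, dst, sport, dport), pkts))
    return {k: v for k, v in groups.items() if len(v) >= agg_line}
```

A repetitive scan, e.g., one source probing many destination ports, would collapse into a single aggregated edge here, which is exactly the density reduction the paper aims for.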
Next, we construct an edge for the short flows, which preserves one feature sequence (i.e., protocols, lengths, and arrival intervals) for all the flows, and their four-tuples. As a result, four types of edges associated with short flows exist on the graph, i.e., source address aggregated, destination address aggregated, both addresses aggregated, and without aggregation. Thus, a vertex connected to the edge can denote a group of addresses or a single address.

Figure 3 compares the graph using traditional flows as edges and our aggregated graph by using the real-world backbone traffic dataset, which is the same as that used in Figure 2. The diameter of a vertex indicates the number of addresses denoted by the vertex, and the depth of the color indicates the repeated edges. In Figure 3(b), we observe that the algorithm reduces 93.94% of vertices and 94.04% of edges. The edge highlighted in green indicates short flows (i.e., 2.38 Kpps, from PH) exploiting a vulnerability. Note that the flow aggregation reduces the storage overhead, which makes it feasible to maintain the in-memory graph for realtime detection.

Fig. 2: The real-world flow features distribution of short and long flows.

Fig. 3: HyperVision aggregates short flows to reduce the dense graph.

Fig. 4: The number and size of the buckets for feature distribution fitting.

### _Feature Distribution Fitting for Long Flows_

Now we use histograms to represent the per-packet feature distributions of a long flow, which avoids preserving their long per-packet feature sequences, since the features in long flows are centrally distributed. Specifically, we maintain a hash table to construct the histogram for each per-packet feature sequence in each long flow. According to our empirical study, we set the bucket widths for packet length and arrival interval as 10 bytes and 1 ms, respectively, to trade off between the fitting accuracy and overhead. 
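The bucketing just described, where each per-packet feature maps to a bucket by dividing it by the bucket width, can be sketched as follows (a minimal sketch for one long flow; the function name and the plain-dict counters are illustrative, not the paper's hash-table implementation):

```python
from collections import Counter

def fit_histograms(pkt_lengths, intervals_ms, len_bucket=10, itv_bucket=1.0):
    """Histogram fitting for one long flow: a 10-byte bucket width for
    packet lengths and a 1 ms bucket width for arrival intervals.
    The bucket index (hash code) is the feature value divided by the
    bucket width; the counters are the recorded histogram."""
    length_hist = Counter(l // len_bucket for l in pkt_lengths)
    interval_hist = Counter(int(i / itv_bucket) for i in intervals_ms)
    return length_hist, interval_hist
```

Because long-flow features are centrally distributed, only a handful of buckets end up non-empty, which is why the paper can store long flows compactly.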
We calculate the hash code by dividing the per-packet features by the bucket width and increase the counter indexed by the hash code. Finally, we record the hash codes and the associated counters as the histograms. Note that coarse-grained flow statistics, e.g., numbers of packets [36], are insufficient for encrypted malicious traffic detection [76] and also lose the flow interaction information [18]. Figure 4 shows the number of used buckets and the maximum bucket size for the long flows in the same dataset shown in Figure 2. We confirm the centralized feature distribution, i.e., most packets in the long flows have similar packet lengths and arrival intervals. Specifically, in Figure 4(a), we fit the distribution of packet length using only 11 buckets on average, and most of the buckets collect more than 200 packets (see Figure 4(b)), which demonstrates that the histogram based fitting is effective with low storage overhead. Similarly, the fitting for arrival interval uses 121 buckets on average and achieves a high utilization of 71 packets per bucket. Besides, we use the same method for the protocol feature: we use the mask of protocols as the hash code and use a smaller number of buckets for more efficient fitting due to the limited number of protocol types. Note that FlowLens [5] used a similar histogram to efficiently utilize hardware flow tables on P4 switches. Instead, we construct the histograms to accurately analyze long flows. ## V Graph Pre-Processing In this section, we pre-process the flow interaction graph to identify key components and pre-cluster the edges, which enables realtime graph learning based detection of encrypted malicious traffic with unknown patterns. ### _Connectivity Analysis_ To perform the connectivity analysis of the graph, we obtain the connected components by using depth-first search (DFS) and split the graph by the components.
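Splitting the graph by connected components, as described above, can be sketched with an iterative DFS over an adjacency list (illustrative Python; the prototype operates on its in-memory C++ graph):

```python
def connected_components(adj):
    """Split an undirected graph (adjacency-list dict) into components via DFS."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], []       # iterative DFS avoids recursion limits
        seen.add(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        components.append(sorted(comp))
    return components
```

Each returned component is then profiled and screened independently, as the following paragraphs describe.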
Figure 5(a) presents the size distribution of the identified components of the MAWI traffic dataset [80] collected in Jan. 2020. We observe that most components contain few edges with similar interaction patterns. Thus, we perform a clustering on the high-level statistics of the connected components and capture, as clustering outliers, the abnormal components whose clustering loss is over one order of magnitude higher than that of normal components. Specifically, we extract five features to profile the components: (i) the number of long flows; (ii) the number of short flows; (iii) the number of edges denoting short flows; (iv) the number of bytes in long flows; and (v) the number of bytes in short flows. We perform a min-max normalization and acquire the centers using density based clustering, i.e., DBSCAN [32]. For each component, we calculate the Euclidean distance to its nearest center. We detect a component as abnormal when its distance is over the \(99^{\rm th}\) percentile of all the distances, based on our empirical study. Figure 5(b) shows an instance of the clustering, where the diameters indicate the scale of the traffic on the components (in the unit of bytes). We observe that most components are small, and a high ratio of huge components is classified as abnormal. All edges associated with the normal components are labeled as benign traffic, and the edges associated with the abnormal components are further processed by the following steps. ### _Edge Pre-Clustering_ Now we further process and pre-cluster the graph for efficient detection. As shown in Figure 5, the abnormal components in the graph have massive numbers of vertices and edges. In particular, we cannot directly apply graph representation learning, e.g., graph neural networks (GNN), for realtime detection. Figure 6 shows the edges from the components in the graph structural feature space.
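The abnormal-component test in the connectivity analysis above combines min-max normalization, distance to the nearest cluster center, and a 99th-percentile cut. A stdlib-only sketch, assuming the centers are already given in the normalized feature space (in the prototype they come from mlpack's DBSCAN):

```python
# Screen components: normalize their statistics, measure the Euclidean
# distance to the nearest cluster center, and flag components whose
# distance exceeds the 99th percentile of all distances.
import math

def min_max_normalize(rows):
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1.0 for c in cols]   # avoid divide-by-zero
    return [[(v - lo[j]) / span[j] for j, v in enumerate(r)] for r in rows]

def nearest_center_dist(point, centers):
    return min(math.dist(point, c) for c in centers)

def abnormal_components(rows, centers, pct=0.99):
    """Return indices of components above the pct-percentile distance."""
    norm = min_max_normalize(rows)
    dists = [nearest_center_dist(p, centers) for p in norm]
    ranked = sorted(dists)
    threshold = ranked[min(int(pct * len(ranked)), len(ranked) - 1)]
    return [i for i, d in enumerate(dists) if d > threshold]
```

Here `rows` would hold the five per-component statistics listed above; the two-feature inputs in the usage below are purely for illustration.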
We observe that the distribution of the edges is sparse, i.e., most edges are adjacent to massive numbers of similar edges in the feature space. To utilize the sparsity, we perform a pre-clustering using DBSCAN [32], which leverages a KD-Tree for efficient local search, and select the centers of the identified clusters to represent all edges in each cluster, reducing the overhead of graph processing. Specifically, we extract eight and four graph structural features (see Table V in Appendix A) for the edges associated with short and long flows, respectively, e.g., the in-degree of the source vertex of an edge associated with a long flow. These degree features of malicious traffic are significantly distinct from the benign ones, e.g., the vertices denoting spam bots have higher out-degrees than benign clients due to their frequent interactions with servers. Then, we perform a min-max normalization of the features, and adopt a small search range \(\epsilon\) and a large minimum number of points for DBSCAN clustering (see Section VIII-A for the setting of hyper-parameters) to avoid including irrelevant edges in the clusters, which may incur false positives. Moreover, some edges cannot be clustered and are treated as outliers, which are processed as clusters with only one edge.

Fig. 5: The statistical features of the components. Fig. 6: The sparsity of edges in the graph feature space. Fig. 7: Critical vertices identification via solving the vertex cover problem.

## VI Malicious Traffic Detection In this section, we detect encrypted malicious traffic by identifying abnormal interaction patterns on the graph. In particular, we cluster the edges connected to the same critical vertex and detect outliers as malicious traffic (see Figure 7). ### _Identifying Critical Vertices_ To efficiently learn the interaction patterns of the traffic, we do not perform clustering over all edges directly but cluster the edges connected to critical vertices.
For each connected component, we select a subset of its vertices as the critical vertices according to the following conditions: (i) the source and/or destination vertex of each edge in the component is in the subset, which ensures that every edge is connected to at least one critical vertex and clustered at least once; and (ii) the number of selected vertices in the subset is minimized, which minimizes the number of clustering operations and thus reduces the overhead of graph learning. Finding such a subset of vertices is an optimization problem equivalent to the _vertex cover problem_ [33], which is NP-Complete (NPC). We take all edges and all vertices of each component and reformulate the problem as a Satisfiability Modulo Theories (SMT) problem, which can be effectively solved by the Z3 SMT solver [55]. Since we pre-cluster the massive edges and reduce the scale of the problem (see Section V-B), the NPC problem can be solved in real time. ### _Edge Feature Clustering for Detection_ Now we cluster the edges connected to each critical vertex to identify abnormal interaction patterns. In this step, we use the structural features in Section V-B, and the flow features extracted from the per-packet feature sequences of short flows or the fitted feature distributions of long flows. All features are shown in Table V (see Appendix A). We use the lightweight K-Means algorithm to cluster the edges associated with short and long flows, respectively, and calculate the clustering loss that indicates the degree of maliciousness for malicious flow detection.
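The critical-vertex selection described earlier in this section is exactly minimum vertex cover. The paper solves it via a Z3 SMT encoding on the pre-clustered (hence small) components; for a toy component, an exhaustive search makes the two conditions explicit:

```python
# Brute-force minimum vertex cover for a small component (illustrative
# stand-in for the paper's Z3 SMT formulation): condition (i) says every
# edge must touch a selected vertex; condition (ii) says the selection is
# as small as possible, which trying subset sizes in increasing order gives.
from itertools import combinations

def min_vertex_cover(vertices, edges):
    for k in range(len(vertices) + 1):          # smallest subset size first
        for subset in combinations(vertices, k):
            chosen = set(subset)
            # condition (i): each edge has an endpoint among critical vertices
            if all(u in chosen or v in chosen for u, v in edges):
                return chosen                   # condition (ii): k is minimal
    return set(vertices)
```

Exhaustive search is exponential, which is why the real system relies on pre-clustering to shrink the instance and hands it to an SMT solver.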
\[\mathsf{loss}_{\mathrm{center}}(\mathsf{edge}) =\min_{C_{i}\in\{C_{1},\ldots,C_{K}\}}||C_{i}-f(\mathsf{edge})||_{2}, \tag{1}\] \[\mathsf{loss}_{\mathrm{cluster}}(\mathsf{edge}) =\mathsf{TimeRange}(\mathcal{C}(\mathsf{edge})),\] (2) \[\mathsf{loss}_{\mathrm{count}}(\mathsf{edge}) =\log_{2}(\mathsf{Size}(\mathcal{C}(\mathsf{edge}))+1),\] (3) \[\mathsf{loss}(\mathsf{edge}) =\alpha\,\mathsf{loss}_{\mathrm{center}}(\mathsf{edge})-\beta\,\mathsf{loss}_{\mathrm{cluster}}(\mathsf{edge})+\gamma\,\mathsf{loss}_{\mathrm{count}}(\mathsf{edge}), \tag{4}\] where \(K\) is the number of obtained cluster centers, \(C_{i}\) is the \(i^{\mathrm{th}}\) center, \(f(\mathsf{edge})\) is the feature vector, \(\mathcal{C}(\mathsf{edge})\) contains all edges in the cluster of the edge produced by pre-clustering, and \(\mathsf{TimeRange}\) calculates the time range covered by the flows denoted by the edges. According to Equation (4), the loss has three parts: (i) \(\mathsf{loss}_{\mathrm{center}}\) in (1) is the Euclidean distance to the nearest cluster center, which indicates the difference from the other edges connected to the critical vertex; (ii) \(\mathsf{loss}_{\mathrm{cluster}}\) in (2) is the time range covered by the cluster identified by the pre-clustering in Section V-B, which captures that long lasting interaction patterns tend to be benign; (iii) \(\mathsf{loss}_{\mathrm{count}}\) in (3) is based on the number of flows denoted by the edges in the cluster, since a burst of massive flows implies malicious behaviors. Moreover, we use the weights \(\alpha,\beta,\gamma\) to balance the loss terms. Finally, HyperVision detects the associated flows as malicious when the loss of the edge is larger than a threshold. ## VII Theoretical Analysis In this section, we develop a theoretical analysis framework, i.e., the _flow recording entropy model_, to analyze the information preserved in the graph of HyperVision for graph learning based detection. The detailed analysis can be found in Appendix C.
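Equations (1)-(4) above translate almost one-to-one into code. A stdlib sketch with assumed, already-computed K-Means centers and illustrative weights (the cluster's flow timestamps stand in for both TimeRange and Size):

```python
# Direct transcription of the edge loss, Eq. (1)-(4): distance to the
# nearest K-Means center, minus the cluster's covered time range, plus a
# log-scaled cluster size. Weights are illustrative, not the paper's.
import math

ALPHA, BETA, GAMMA = 1.0, 1.0, 1.0   # illustrative balancing weights

def loss_center(feat, centers):                   # Eq. (1)
    return min(math.dist(feat, c) for c in centers)

def loss_cluster(cluster_timestamps):             # Eq. (2): TimeRange
    return max(cluster_timestamps) - min(cluster_timestamps)

def loss_count(cluster_size):                     # Eq. (3)
    return math.log2(cluster_size + 1)

def edge_loss(feat, centers, cluster_timestamps): # Eq. (4)
    return (ALPHA * loss_center(feat, centers)
            - BETA * loss_cluster(cluster_timestamps)
            + GAMMA * loss_count(len(cluster_timestamps)))
```

An edge whose loss exceeds the detection threshold is reported as malicious; long-lived clusters get a large negative `loss_cluster` term and are therefore pushed toward benign, matching point (ii) above.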
### _Information Entropy Based Analysis_ We develop the framework to quantitatively evaluate the information retained by the existing traffic recording modes, which decide the data representations for malicious traffic detection, using three metrics: (i) the amount of information, i.e., the average Shannon entropy obtained by recording one packet; (ii) the scale of data, i.e., the space used to store the information; (iii) the density of information, i.e., the amount of information on a unit of storage. Using this framework, we model the graph based traffic recording mode used by HyperVision as well as three typical flow recording modes: (i) the idealized mode that records and stores the whole per-packet feature sequence; (ii) the event based mode (e.g., Zeek) that records specific events [2, 20]; and (iii) the sampling based mode (e.g., NetFlow) that records coarse-grained flow information [8, 51]. We model a flow, i.e., a sequence of per-packet features, as a sequence of random variables represented by an aperiodic irreducible discrete-time Markov chain (DTMC). Let \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) denote the state diagram of the DTMC, where \(\mathcal{V}\) is the set of states (i.e., the values of the variables) and \(\mathcal{E}\) denotes the edges. We define \(s=|\mathcal{V}|\) as the number of different states and use \(\mathcal{W}=[w_{ij}]_{s\times s}\) to denote the weight matrix of \(\mathcal{G}\). All of the weights are equal and normalized: \[\forall\ \ 1\leq i,j,m,n\leq s,\ (w_{ij}=w_{mn})\vee(w_{ij}=0\lor w_{mn}=0),\] \[w_{i}=\sum_{j=1}^{s}w_{ij},\ \ 1=\sum_{i=1}^{s}w_{i}. \tag{5}\] The state transition is performed according to the weights, i.e., the transition probability matrix is \(P=[P_{ij}]\) with \(P_{ij}=w_{ij}/w_{i}\). Therefore, the DTMC has a stationary distribution \(\mu\): \[\begin{cases}\mu P=\mu,\\ 1=\sum_{j=1}^{s}\mu_{j}\end{cases}\quad\Rightarrow\quad\mu_{j}=w_{j},\quad\forall\ \ 1\leq j\leq s.
\tag{6}\] Assume that the stationary distribution is a binomial distribution with a parameter \(0.1\leq p\leq 0.9\), which approaches a Gaussian distribution with low skewness: \[\mu\sim B(s,p)\xrightarrow{\text{approx.}}\mathcal{N}(sp,sp(1-p)). \tag{7}\] Based on the distribution, we obtain the entropy rate of the DTMC, which is the expected Shannon entropy increase for each step of the state transition, i.e., the expected Shannon entropy of each random variable in the sequence (using _nat_ as the unit, _1 nat_ \(\approx\) 1.4 _bit_): \[\mathcal{H}[\mathcal{G}]=\sum_{i=1}^{s}\mu_{i}\sum_{j=1}^{s}p_{ij}\ln\frac{1}{p_{ij}}=-\sum_{i=1}^{s}\sum_{j=1}^{s}w_{ij}\ln w_{ij}+\sum_{j=1}^{s}w_{j}\ln w_{j}\] \[=\ln|\mathcal{E}|-\frac{1}{2}\ln\bigl(2\pi e\,sp(1-p)\bigr). \tag{8}\] Moreover, to match the real-world flow size distribution, we assume that the length of the sequence of random variables obeys a geometric distribution with high skewness, i.e., \(L\sim G(q)\) with a parameter \(0.5\leq q\leq 0.9\). \(\mathcal{H}\), \(\mathcal{L}\), and \(\mathcal{D}\) denote the expectations of the three metrics, i.e., the amount of information, the scale of data, and the density, respectively. **Idealized Recording Mode.** The idealized recording mode has infinite storage and captures traffic information with optimal fidelity by recording each random variable of the sequence without any processing. Thus, the obtained information entropy of the idealized mode grows at the entropy rate of the DTMC: \[\mathcal{H}_{\rm Ideal}=\mathrm{E}[L\,\mathcal{H}[\mathcal{G}]]=\frac{1}{q}\ln|\mathcal{E}|-\frac{1}{2q}\ln\bigl(2\pi e\,sp(1-p)\bigr). \tag{9}\] According to the data processing inequality [85], the information retained in the idealized recording mode reaches the optimal value. It implies that any processing of the observed per-packet features denoted by the random variables may incur information loss. In the following sections, we will show that the other modes incur information loss.
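The middle expression of Equation (8), which holds before the equal-weight simplification, can be evaluated directly from any weight matrix; for uniform transition weights it recovers the expected per-step entropy (a small numerical sketch, not part of the paper's tooling):

```python
# Entropy rate of the DTMC from its weight matrix W (row sums w_i, entries
# w_ij), in nats: H[G] = -sum_ij w_ij ln w_ij + sum_i w_i ln w_i.
import math

def entropy_rate(W):
    h = 0.0
    for row in W:
        wi = sum(row)
        for wij in row:
            if wij > 0:
                h -= wij * math.log(wij)   # -sum_ij w_ij ln w_ij
        if wi > 0:
            h += wi * math.log(wi)         # +sum_i w_i ln w_i
    return h
```

For a 2-state chain with all four weights equal to 0.25, every step is a fair binary choice, so the rate is ln 2 nats per step, consistent with the ln|E| term of Equation (8).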
We can obtain the scale of data and the density of information for the idealized recording mode as follows: \[\mathcal{L}_{\rm Ideal}=\mathrm{E}[L]=\frac{1}{q}. \tag{10}\] \[\mathcal{D}_{\rm Ideal}=\frac{\mathcal{H}_{\rm Ideal}}{\mathcal{L}_{\rm Ideal}}=\mathcal{H}[\mathcal{G}]. \tag{11}\] **Graph Based Recording Mode of HyperVision.** HyperVision applies different strategies to process short and long flows for the graph construction. Let \(K\) denote the threshold for classifying the flows. When \(L<K\), it collects all random variables of the sequence for short flows. Otherwise, it collects the histograms that fit the distributions for long flows. Then, we can obtain a lower bound to estimate the information entropy in the graph of HyperVision: \[\mathcal{H}_{\rm H.V.}=\frac{1-(Kq+1)(1-q)^{K}}{q}\mathcal{H}[\mathcal{G}]+\frac{1}{4}s(1-q)^{K}[(1+s)\ln ps+2\ln 2\pi e+2q\ln K-2s(1+p+\gamma)]. \tag{12}\] We can also obtain the expected data scale and the density: \[\mathcal{L}_{\rm H.V.}=s(1-q)^{K}+\frac{1-(Kq+1)(1-q)^{K}}{Cq}, \tag{13}\] where \(C\) is the average number of flows denoted by an edge associated with short flows. \[\mathcal{D}_{\rm H.V.}=\frac{\mathcal{H}_{\rm H.V.}}{\mathcal{L}_{\rm H.V.}}. \tag{14}\] **Sampling Based Recording Mode.** Similarly, the sampling based mode extracts and records flow statistics for the detection. We analyze the accumulative statistics (e.g., the total number of bytes) that are widely adopted [19, 36]. Let \(\langle s_{1},s_{2},\ldots,s_{L}\rangle\) denote the sequence of random variables; then \(X_{\rm Samp.}=\sum_{i=1}^{L}s_{i}\) is the flow statistic to be recorded. We can obtain a tight lower bound as an estimation of the amount of information, together with the other metrics, as follows: \[\mathcal{H}_{\rm Samp.}=\mathcal{H}[X_{\rm Samp.}]=\frac{1}{2}\ln\bigl(2\pi e\,sp(1-p)\bigr)+\frac{\ln 2}{2}q(1-q).
\tag{15}\] \[\mathcal{L}_{\rm Samp.}=1. \tag{16}\] \[\mathcal{D}_{\rm Samp.}=\mathcal{H}_{\rm Samp.}. \tag{17}\] **Event Based Recording Mode.** The event based recording mode inspects each random variable in the sequence and records events with a small probability. Based on the observation that event based methods do not generate repetitive events for a long flow with a larger \(s\), for simplicity, we assume that the probability is \(p^{s}\propto 1/s\). Then, we obtain concise closed-form solutions for the amount of information, the scale of data, and the density of information of the event based recording mode as follows: \[\mathcal{H}_{\rm Ev.}=-2\theta\ln\theta, \tag{18}\] where \(\theta=\frac{\zeta}{\eta}\), \(\zeta=q-qp^{s}\), and \(\eta=q-p^{s}(q-1)\). \[\mathcal{L}_{\rm Ev.}=\frac{p^{s}}{\eta}. \tag{19}\] \[\mathcal{D}_{\rm Ev.}=-\frac{2\zeta}{p^{s}}\ln\theta. \tag{20}\] ### _Analysis Results_ We perform numerical studies to compare the flow recording modes in a real-world setting. We select three per-packet features, i.e., protocol, length, and arrival interval (in ms), as instances of the DTMC; then we measure the parameters of the DTMC, i.e., \(|\mathcal{E}|\) and \(|\mathcal{V}|\), according to the first \(10^{6}\) packets of the MAWI dataset of Jan. 2020 [80]. We also measure \(K\) and \(C\), and estimate the parameter \(q\) of the geometric distribution via the second moment. We have the following three key results. _(1) HyperVision maintains more information using the graph than the existing methods._ Figure 8 shows the results on the feasible region (\(\mathcal{F}=\{0.1\leq p\leq 0.9,\ 0.5\leq q\leq 0.9\}\)). We observe that HyperVision maintains at least 2.37 and 1.34 times the information entropy of traditional flow sampling and event based flow recording, respectively. Thus, the traditional detection methods cannot retain high-fidelity flow interaction information. Actually, they only analyze the features of a single flow, which can be evaded by encrypted traffic.
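As a sanity check on the closed forms behind these numerical studies, the event-based amount of information, data scale, and density (with signs chosen so that all three quantities are positive, since \(0<\theta<1\)) must satisfy \(\mathcal{D}=\mathcal{H}/\mathcal{L}\). A quick sketch with illustrative parameter values:

```python
# Event-based recording metrics from the closed forms: entropy H, data
# scale L, and density D, with theta = zeta/eta, zeta = q - q p^s,
# eta = q - p^s (q - 1). Consistency requires D == H / L.
import math

def event_metrics(p, q, s):
    zeta = q - q * p**s
    eta = q - p**s * (q - 1)
    theta = zeta / eta                         # in (0, 1)
    H = -2 * theta * math.log(theta)           # amount of information
    L = p**s / eta                             # scale of data
    D = -2 * zeta / p**s * math.log(theta)     # density of information
    return H, L, D
```

The identity holds because \(\theta\eta=\zeta\), so dividing the entropy by the scale reproduces the density expression term by term.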
According to Figure 8(b), HyperVision has 69.69% of the data scale of the sampling based mode. It implies that the data scale is the key challenge preventing the existing methods from utilizing flow interaction patterns. We address this issue by using the compact graph to maintain the interactions among flows. _(2) HyperVision maintains near-optimal information using the graph._ According to Figure 8(a), we observe that the information maintained by the graph is almost equal to the theoretical optimum, with the difference ranging from \(4.6\times 10^{-9}\) to \(2.6\) _nat_. When the parameter of the geometric distribution of \(L\) approaches 0.9, the flow information loss is larger because of the increasing ratio of long flows, which incur more information loss. Figure 9 compares the information in HyperVision and the idealized system when \(q=0.59\) and \(p=0.8\). We have similar results: the gaps between the graph mode and the optimal mode are only 0.056 and 0.021. _(3) HyperVision has higher information density than the existing methods._ Figure 8(c) shows that HyperVision achieves 1.46, 1.54, and 2.39 times the information density of the existing methods, respectively. Although the idealized system achieves the optimal amount of traffic information, its density is only 78.55% of HyperVision's in the worst case, as shown in Figure 8(d). From Table II, we find that, for all kinds of per-packet features, HyperVision increases the density by between 35.51% and 47.27% due to the different recording strategies for short and long flows. In summary, the flow interaction graph provides high-fidelity and low-redundancy traffic information with obvious flow interaction patterns, which ensures that HyperVision achieves realtime and unsupervised detection, in particular detecting encrypted malicious traffic with unknown patterns. ## VIII Experimental Evaluation ### _Experiment Setup_ **Implementation.** We prototype HyperVision with more than 8,000 lines of code (LOC).
The prototype is compiled with gcc 9.3.0 and cmake 3.16.3. We use DPDK [37] version 19.11.9, encapsulated by libpcap++ [63] version 21.05, to implement the high-speed data-plane module. The graph construction module maintains the graph in memory for realtime detection. The graph learning module detects the encrypted malicious traffic on the interaction graph. It uses DBSCAN and K-Means from mlpack [57] (version 3.4.2) for clustering and the Z3 SMT solver [55] (version 4.8) to identify the critical vertices. **Testbed.** We deploy HyperVision on a testbed built upon DELL servers (PowerEdge R410, produced in 2012) with two Intel Xeon E5645 CPUs (2 \(\times\) 12 cores), Ubuntu 20.04.2 (Linux 5.11.0), Docker 20.10.7, 24GB memory, one Intel 82599ES 10 Gb/s NIC, and two Intel 850nm SFP+ laser ports for optical fiber connections. We configure 6GB huge page memory for DPDK (3GB per NUMA node) and bind 8 threads on 8 physical cores for 16 NIC RX queues to parse the per-packet features from high-speed traffic. We use 8 cores for in-memory graph construction and 7 cores for graph learning; the remaining core serves as the DPDK master core. **Datasets.** We use real-world backbone network traffic datasets from vantage-G of the WIDE MAWI project [80] in AS2500, Tokyo, Japan, Jan. \(\sim\) Jun. 2020 as background traffic. The vantage transits traffic from/to its BGP peers and providers using a 10 Gb/s fiber linked to its IXP (DIX-IE), and the traffic is collected using port mirroring, which is consistent with our threat model and the physical testbed described above. We remove the attack traffic with obvious patterns from the background traffic datasets according to the rules defined by existing studies [66, 22, 43], e.g., traffic is detected as scanning traffic if it has scanned over 10% of IPv4 addresses [22]. We generate the malicious traffic by constructing real attacks or replaying existing traces in our testbed.
Specifically, we collect malicious traffic in our virtual private cloud (VPC) with more than 1,500 instances.

\begin{table} \begin{tabular}{c|c c c} \hline \hline **Per-Packet Features** & **Packet Length** & **Time Interval** & **Protocol Type** \\ \hline \(\iint_{\mathcal{F}}\mathcal{D}_{\text{Ideal}}(p,q)\,\mathrm{d}p\,\mathrm{d}q\) & 1.011 (32.10\%) & 0.918 (32.00\%) & 0.705 (32.51\%) \\ \(\iint_{\mathcal{F}}\mathcal{D}_{\text{Samp.}}(p,q)\,\mathrm{d}p\,\mathrm{d}q\) & 0.965 (35.17\%) & 0.963 (32.66\%) & 0.800 (32.08\%) \\ \(\iint_{\mathcal{F}}\mathcal{D}_{\text{Ev.}}(p,q)\,\mathrm{d}p\,\mathrm{d}q\) & 0.588 (60.51\%) & 0.588 (60.44\%) & 0.588 (50.08\%) \\ \hline \(\iint_{\mathcal{F}}\mathcal{D}_{\text{H.V.}}(p,q)\,\mathrm{d}p\,\mathrm{d}q\) & 1.489 (47.27\%) & 1.350 (35.51\%) & 1.178 (48.18\%) \\ \hline \hline \end{tabular} \end{table} TABLE II: The integral of the density in the feasible region.

Fig. 8: The traffic information retained by different recording modes on the feasible region of the parameters. Fig. 9: HyperVision approaches the idealized flow recording mode on information entropy.

We manipulate the instances to perform attacks according to the real-world measurements [22, 24, 40, 42, 43, 54, 66] and the same settings as in the existing studies [11, 26, 41, 44]. We classify the 80 new datasets used in our experiments (see Table VI for details) into four groups, three of which contain encrypted malicious traffic: * _Traditional brute force attack_.
Although HyperVision focuses on encrypted traffic, we generate 28 kinds of traditional flooding attacks to verify its generic detection ability and the correctness of the baselines, including 18 high-rate and 10 low-rate attacks: (i) brute scanning with real packet rates [22]; (ii) source spoofing DDoS with various rates [40]; (iii) amplification attacks [43]; (iv) probing vulnerable applications [21, 22]. We collected the traffic in our VPC to avoid interference with real services. * _Encrypted flooding traffic_. Different from the brute force flooding, the encrypted flooding is generated by repetitive attack behaviors that target specific applications: (i) link flooding generates encrypted low-rate flows, e.g., the low-rate TCP attacks [44, 52] and the Crossfire attack [41], to congest links; (ii) encrypted flow injection exploits protocol vulnerabilities by flooding attack traffic and injecting packets into the channel [11, 26, 28]; (iii) password cracking performs slow attempts to hijack encrypted communication protocols [39, 50]. We perform SSH cracking in the VPC at the scale of the SSH servers in the ASes reachable from AS2500. * _Encrypted web malicious traffic_. Web malicious traffic is normally encrypted by HTTPS. We collect the traffic generated by seven widely used web attacks, including automatic vulnerability discovery (including XSS, CSRF, and various injections) [64], SSL vulnerability detection [53], and crawlers. We also collect the SMTP-over-TLS spam traffic that lures victims to visit phishing sites [61]. * _Malware generated encrypted traffic_. The traffic of malware campaigns is low-rate and encrypted, e.g., malware component update or delivery [9], command and control (C&C) channels [8], and data exfiltration [77]. We use the malware infection statistics published in 2020 [42] and the active addresses probed from the adopted vantage [23, 59] to estimate the number of visible victims.
We use the same number of instances to replay public malware traffic datasets [13, 73] to mimic malware campaigns, which is similar to the existing study [58]. The malicious traffic is replayed together with the background traffic datasets on the physical testbed according to their original packet rates [80], which is the same as the existing studies [30, 47, 51]. Specifically, each dataset contains 12\(\sim\)15 million packets; the replay lasts 45 s, and the first 75% of the time does not contain malicious traffic, which is used for collecting flow interactions and training the baselines. Note that the rates of the encrypted attack flows in our datasets are only 0.01 \(\sim\) 8.79 Kpps, consuming only 0.01% \(\sim\) 0.72% of the bandwidth. We will show that these stealthy attacks evade most baselines. To eliminate the impact of dataset bias, we also use 12 existing datasets, including the Kitsune datasets [56], the CIC-DDoS2019 datasets [14], and the CIC-IDS2017 datasets [15], which were collected in the real world. The detailed results can be found in Appendix B2. In particular, the traffic in the two CIC datasets [14, 15] lasts 6\(\sim\)8 hours under multiple attacks, which aims to verify the long-run performance of HyperVision (see Appendix B3). Moreover, we validate the robustness of HyperVision against evasion attacks with obfuscation techniques, which can be found in Appendix B4. **Baselines.** We use five state-of-the-art generic malicious traffic detection methods as baselines: * **Jaqen** (_sampling based recording and signature based detection_). Jaqen [51] uses sketches to obtain flow statistics and applies threshold based detection. We prototype Jaqen on the testbed and adjust the signatures for each statistic and each attack to obtain the best accuracy. * **FlowLens** (_sampling based recording and ML based detection_). FlowLens [5] uses sampled flow distributions and supervised learning, i.e., random forest.
We use the hyper-parameter setting with the best accuracy used in the paper to retrain the ML model. * **Whisper** (_flow-level features and ML based detection_). Whisper [30, 31] extracts the frequency domain features of flows and uses clustering to learn the features. We deploy Whisper on the physical testbed without modifications and then retrain the clustering model. * **Kitsune** (_packet-level features and DL based detection_). Kitsune [56] extracts per-packet features and uses autoencoders to learn the features, which is an unsupervised method. We use its default hyper-parameters and retrain the model. * **DeepLog** (_event based recording and DL based detection_). DeepLog is a general log analyzer using an LSTM RNN [20]. We use the logs of connections for detection and its original hyper-parameter setting to achieve the best accuracy. Note that, among the baselines above, we do not include DPI based encrypted malicious traffic detection because it is unable to inspect encrypted payloads [34]. Also, we do not compare with task-specific detection methods [3, 76] because they cannot achieve acceptable detection accuracy; the features in FlowLens, Kitsune, and Whisper are similar to theirs, e.g., flow features [3], packet header features [2], and time-series [76]. **Metrics.** We mainly use AUC and F1 score because they are the most widely used in the literature [8, 20, 30, 35, 56, 75, 91]. Also, we use six other metrics to validate the improvements of HyperVision, including precision, recall, F2, ACC, FPR, and EER.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline Method & Metric & Traditional Attacks & Flooding Enc. Traffic & Enc. Web Attacks & Malware Traffic & Overall \\ \hline \multirow{2}{*}{Jaqen} & AUC & 0.913 & 0.782 & NA\({}^{*}\) & NA\({}^{*}\) & 0.867 \\ & F1 & 0.819 & 0.495 & NA\({}^{*}\) & NA\({}^{*}\) & 0.705 \\ \hline \multirow{2}{*}{FlowLens} & AUC & 0.939 & 0.757 & 0.685 & 0.768 & 0.752 \\ & F1 & 0.799 & 0.651 & 0.384 & 0.411 & 0.451 \\ \hline \multirow{2}{*}{Whisper} & AUC & 0.951 & 0.932 & 0.958 & 0.648 & 0.752 \\ & F1 & 0.705 & 0.461 & 0.546 & 0.357 & 0.407 \\ \hline \multirow{2}{*}{Kitsune} & AUC & 0.748 & – & 0.759 & – & 0.751 \\ & F1 & 0.419 & – & 0.366 & – & 0.402 \\ \hline \multirow{2}{*}{DeepLog} & AUC & 0.716 & 0.621 & 0.767 & 0.653 & 0.666 \\ & F1 & 0.513 & 0.508 & 0.572 & 0.628 & 0.597 \\ \hline \multirow{2}{*}{H.V.} & AUC & 0.988 & 0.974 & 0.985 & 0.993 & 0.988 \\ & F1 & 0.978 & 0.927 & 0.957 & 0.970 & 0.960 \\ \hline \end{tabular} \end{table} TABLE III: Detection accuracy of HyperVision (H.V.) and the baselines on the four groups of datasets. \({}^{*}\) The results are NA because Jaqen is designed for the detection of volumetric attacks. – means that the average AUC is lower than 0.60, which is nearly the result of random guessing.

**Hyper-parameter Selection.** We conduct four-fold cross validation to avoid overfitting and hyper-parameter bias. Specifically, the datasets are equally partitioned into four subsets. Each subset is used once as a validation set to tune the hyper-parameters via the empirical study, while the remaining three subsets are used as testing sets. Finally, the four results are averaged to produce the final results. Moreover, our ablation study shows that different threshold settings incur at most 5.2% accuracy loss. Therefore, the hyper-parameter selection has limited impact on the detection results.
### _Accuracy Evaluation_

Table III summarizes the detection accuracy and the improvements of HyperVision over the existing methods. In general, HyperVision achieves average F1 between 0.927 and 0.978 and average AUC between 0.974 and 0.993 on the 80 datasets, which are 35% and 13% improvements over the best accuracy of the baselines. In 44 datasets, none of the baselines achieves an F1 higher than 0.80, which means that they are not effective at detecting those attacks. Due to page limits, we do not show the failed detection results of these baselines. **Traditional Brute Force Attacks.** First, we measure the performance of the baselines using the flooding attacks with short flows. Although HyperVision is designed for encrypted malicious traffic detection, we find that it can also detect traditional attacks accurately. The results are shown in Table IV. HyperVision achieves 0.992 \(\sim\) 0.999 AUC and 0.929 \(\sim\) 0.999 F1, i.e., at most 13.4% and 1.3% improvements in F1 and AUC over the best performance of the baselines. The ROC and PRC results are illustrated in Figure 10. According to Figure 10(a) and 10(b), we observe that HyperVision has fewer false positives while achieving similar accuracy. Figure 10(c) and Figure 10(d) show that the PRC of HyperVision is largely better than those of the baselines, which means that it has a higher precision when all methods reach the same recall. Second, by comparing HyperVision with Jaqen, we see that HyperVision realizes higher accuracy (i.e., a 19.4% F1 improvement) than Jaqen even with the best manually set thresholds. That is, the unsupervised method reduces manual design effort. Moreover, HyperVision has a 56.3% AUC improvement over the typical supervised ML based method (FlowLens). Note that we assume that HyperVision cannot acquire labeled datasets for training, which is more realistic. Also, it outperforms Whisper, an unsupervised detection method for high-speed networks, by 11.6% AUC.
We observe that Kitsune and DeepLog have lower accuracy because they cannot keep up with high-speed backbone traffic. Third, we measure the detection accuracy of probing vulnerable applications. As shown in Figure 11, HyperVision can detect the low-rate attacks with 0.920 \(\sim\) 0.994 F1 and 0.916 \(\sim\) 0.999 AUC under 6 \(\sim\) 268 attackers with 17.6 \(\sim\) 97.9 Kpps total bandwidth. It also achieves at most 46.8% F1 and 27.3% AUC improvements over the baselines, whose accuracy decreases more significantly than under the high-rate attacks. For example, FlowLens only achieves 0.684 F1 on average, which is only 77% of its F1 under the high-rate attacks. Although Jaqen can be deployed on programmable switches, its thresholds are invalidated by the low-rate attacks, and Whisper is unable to detect the attacks in two datasets. Moreover, Kitsune and DeepLog cannot detect the attacks because of the low rate of malicious packets (\(\leq\) 1.2%). The reason why HyperVision can detect the slow probing while maintaining accuracy similar to the high-rate attacks is that the graph preserves flow interaction patterns. Although the flows from a single attacker are slow, e.g., at least 244 pps, HyperVision can record and analyze their interaction patterns. Specifically, each flow in the stealthy attack traffic can be represented by an edge in the graph, while the vertices in the graph indicate the addresses generating the traffic. Thus, the traffic can be captured by identifying vertices with large out-degrees (i.e., a large number of edges). Moreover, the brute force attacks validate that our method is effective at capturing DDoS traffic because it utilizes the short flow aggregation to construct the edges associated with short flows and avoids inspecting each short spoofing flow.

Fig. 10: ROC and PRC of HyperVision and all the baselines.
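The out-degree criterion described above can be illustrated with a minimal sketch. The edge list and the fixed threshold here are assumptions for illustration only; HyperVision itself uses graph learning rather than a hand-set cutoff.

```python
from collections import Counter

# Each (src, dst) pair is one flow recorded as an edge of the interaction
# graph: one scanning source probing many targets, plus benign clients.
flows = [("attacker", f"victim{i}") for i in range(50)] + [
    ("client1", "web"), ("client2", "web"), ("client1", "dns"),
]

# Out-degree of a vertex = number of distinct outgoing edges it generates.
out_degree = Counter(src for src, _ in flows)

# Flag vertices whose out-degree far exceeds the rest (illustrative fixed
# threshold; the benign clients stay well below it).
THRESHOLD = 10
suspects = {v for v, d in out_degree.items() if d > THRESHOLD}
print(suspects)  # {'attacker'}
```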
Besides, the experiment results also show that the critical vertices denote the addresses of major active flows, e.g., web servers, DNS servers, and scanners. Note that we exclude the results of the baselines that cannot detect encrypted traffic with lower rates in the following sections due to page limits. **Encrypted Flooding Traffic.** Figure 12 shows the detection accuracy under flooding attacks using encrypted traffic. Generally, HyperVision achieves 0.856 \(\sim\) 0.981 F1 and 0.917 \(\sim\) 0.998 AUC, which are 58.7% and 25.3% accuracy improvements over the baselines that can detect such attacks. Specifically, as shown in Figure 12(a) and 12(b), we observe that HyperVision can accurately detect link flooding traffic consisting of various encrypted traffic with different parameters. For instance, it can detect the Crossfire attack using HTTPS web requests generated by botnets of different sizes [41] with at most 0.939 F1. The massive web traffic generated by bots, which is low-rate (\(\leq\) 4 Kbps) and encrypted, evades the detection of Whisper and FlowLens (F1 \(\leq\) 0.8). As shown in Figure 14(a), HyperVision can detect the attack efficiently by splitting the botnet clusters into a single connected component to exclude the interference from similar benign web traffic, where the inner layer denotes botnets and the outer denotes decoy servers. Moreover, we find that HyperVision can detect low-rate TCP DoS attacks that use bursts of encrypted video traffic with at most 0.995 AUC and 0.938 F1. Although Whisper has slightly better AUC in some cases, we find that it cannot achieve high accuracy in all scenarios. As a result, it has only 55.5% AUC in the worst case. Moreover, HyperVision can aggregate the short flows in the SSH connection injection attacks and achieves more than 0.95 F1. These attacks exploit protocol vulnerabilities to realize low-rate packet injection and evade the detection of FlowLens (i.e., AUC \(\leq\) 0.774, F1 \(\leq\) 0.513).
Figure 12(c) and 12(d) illustrate that HyperVision can identify slow and persistent password attempts on the channels with over 0.881 F1 and 0.917 AUC, which are 1.19 and 1.28 times improvements over FlowLens and Whisper. The reason is that HyperVision maintains the interaction patterns of attackers in the graph, e.g., the massive short flows for login attempts shown as red edges in Figure 14(b).

Fig. 11: Heatmap of accuracy for probing vulnerabilities.
Fig. 12: Detection accuracy of encrypted flooding traffic.
Fig. 13: Accuracy of encrypted web attack traffic detection.
Fig. 14: Subgraph with various encrypted malicious traffic.

**Encrypted Web Malicious Traffic.** Figure 13 presents the detection accuracy on the encrypted traffic generated by various web vulnerability discovery tools. HyperVision achieves 0.985 average AUC and 0.957 average F1 (i.e., 2.8% and 75.2% increases compared to Whisper). Flow based ML detection cannot detect encrypted web malicious traffic because the traffic has single-flow patterns that are almost the same as those of benign web access flows. HyperVision can accurately detect the encrypted web malicious traffic because, as shown in Figure 14(c), it captures the traffic from the frequent interactions as the edges associated with long flows, and identifies the malicious traffic (denoted by red edges) generated by the attacker (denoted by the green vertex) by clustering the edges associated with benign web traffic that are connected to the same critical vertex (denoted by the red solid vertex). **Encrypted Malware Traffic.** We show the detection accuracy on encrypted malware traffic in Figure 15. Note that the encrypted malware traffic is hard for the baselines to detect because it is slow and persistent. However, HyperVision accurately detects the malware campaigns with at least 0.964 AUC and 0.891 F1.
Specifically, it captures the C&C servers of spyware used for exfiltration as abnormal critical vertices that are connected to massive numbers of infected hosts in the graph. As a result, it detects the encrypted malicious traffic of the malware with at least 0.942 F1. For example, to detect the Sality P2P botnet shown in Figure 14(d), HyperVision collects the interactions among similar P2P bots, aggregates the encrypted short flows as edges, and finally clusters the edges with higher loss than benign interaction patterns. Similarly, it can capture the static servers of adware, malware component delivery servers, and infected miner pools as abnormal vertices. Note that the low-rate malicious flows (at least 0.814 pps) are represented as the edges associated with short flows connected to critical vertices. Meanwhile, the massive long flows with an almost 100% proportion of encrypted packets are represented as the edges associated with long flows to the vertices. Therefore, a critical vertex connected with such edges indicates a malware campaign and is significantly different from benign vertices with large degrees, e.g., benign websites.

### _Performance Results_

**Throughput.** We truncate the packets to the first 200 bytes on the physical testbed and increase the sending rates until the graph construction module reaches maximum throughput. Figure 16 shows the throughput of the graph construction and the detection. Figure 16(a) presents the distribution of the average throughput within a 1.0s time window. We observe that HyperVision constructs the graph for 28.21 Gb of traffic per second. Figure 16(b) presents the maximum throughput in each time window with all the backbone traffic datasets used in the experiments. HyperVision achieves 32.43 \(\sim\) 39.71 Gb/s peak throughput on average. Moreover, we measure the throughput of the graph learning module, which inspects flow interactions. According to Figure 16(c), we observe that it can analyze 121.14 Gb of traffic per second on average.
Note that the detection throughput is 4.2 times higher than the construction throughput, so the detection can analyze the recorded traffic iteratively to consider the past interaction information. We observe that the average throughput exhibits a bimodal distribution. The peak at low throughput (around 75 Gb/s) is caused by the lack of information on the graph for analysis during cold-start stages. Figure 16(d) illustrates the throughput when the performance of the system is stable. We observe that it achieves 80.6 \(\sim\) 148.9 Gb/s throughput. Note that the throughput on the Apr. and Jun. 2020 datasets is lower because of their low original traffic volume. **Latency.** We measure the latency caused by graph construction and detection. Figure 17(a) presents the PDF of the maximum latency for constructing each edge within a 1.0s window. We observe that HyperVision has 1.04s \(\sim\) 1.09s average construction latency with an upper bound of 1.93s. The distribution is distinctly bimodal because the receive side scaling (RSS) on the Intel NIC is unbalanced across the threads. The lightly loaded threads have only 0.75s latency. We analyze the composition of the latency in Figure 17(b) (where the error bars denote the \(10^{\rm th}\) and \(90^{\rm th}\) percentiles) and find that the flow classification, short flow aggregation, and long flow distribution fitting account for 50.95%, 35.03%, and 14.0% of the latency, respectively. We also measure the average detection latency. Figure 17(c) shows that the learning module has a 0.83s latency on average with a \(99^{\rm th}\) percentile of 4.48s. We also analyze the latency of each step (see Figure 17(d)). We see that 75.8% of the latency comes from pre-clustering (i.e., 0.66s on average).

Fig. 15: HyperVision can detect various encrypted malware traffic.
Fig. 16: Throughput of graph construction and detection.
Fig. 17: Latency of graph construction and detection.
Fig. 18: Hardware resource usages of HyperVision.
However, the pre-clustering step reduces the processing overhead of the subsequent steps, i.e., selecting critical vertices and clustering, to \(5.5\times 10^{-3}\)s (0.64%) and \(3.4\times 10^{-3}\)s (0.40%), respectively. **Resource Consumption.** Figure 18(a) presents the memory usage of HyperVision. Note that the DPDK huge pages require 6GB of memory, and thus we measure the consumption once the usage exceeds 6GB. We observe that the rate of memory growth for maintaining the graph is only 13.1 MB/s. Finally, HyperVision utilizes 1.78 GB of memory to maintain the flow interaction patterns extracted from 2.82 TB of ongoing traffic. HyperVision incurs low memory consumption because the feature distribution fitting for long flows and the short flow aggregation make the in-memory graph compact, which ensures low-latency detection and long-term recording. Moreover, the memory consumption of the learning algorithm is 1.452 \(\sim\) 1.619 GB. HyperVision can export the graph to disk for forensic analysis. Figure 18(b) shows the storage used for recording the first 45s of traffic of the MAWI dataset by different methods, i.e., HyperVision, event based network monitors (Suricata [74] and Zeek [86]), and raw packet headers. We observe that HyperVision achieves 8.99%, 55.7%, and 98.1% storage reductions over these baselines, respectively. Meanwhile, our analysis shows that HyperVision retains more traffic information than the existing tools (see Section VII). Thus, the graph based analysis is more efficient than these existing tools.

## IX Related Work

**Graph Based Anomaly Detection.** Graph based structures have been used for task-specific traffic detection. These methods heavily rely on DPI and thus cannot be applied to detect encrypted traffic [76]. Kwon _et al._ analyzed the download relationship graph to identify malware downloading [45], which is similar to WebWitness [60].
Eshete _et al._ constructed HTTP interaction graphs to detect malware static resources [24], and Invernizzi _et al._ used a graph constructed from plaintext traffic to identify malware infrastructures [38]. Different from these works, HyperVision constructs the interaction graph without parsing specific application layer headers and thus achieves task-agnostic encrypted traffic detection. Note that provenance graph based attack forensic analysis [83, 87] is orthogonal to our traffic detection. **DTMC Based Anomaly Detection.** Discrete-Time Markov Chains (DTMCs) have been used to model the behaviors of users and devices [1, 71, 72]. These methods aim to predict the behaviors of users and devices by utilizing DTMCs. For instance, Peek-a-Boo predicted user activities [1], Aegis predicted user behaviors for abnormal event detection [72], and 6thSense predicted sensor behaviors for detecting sensor-based attacks [71]. Different from these methods, our work utilizes a DTMC to quantify the benefits of building the compact graph for detecting various unknown attacks. **ML Based Malicious Traffic Detection.** ML based detection can detect zero-day attacks [12] and achieve higher accuracy than traditional signature based methods [89]. For example, Fu _et al._ leveraged frequency domain features to realize real-time detection [30]. Barradas _et al._ developed FlowLens to extract flow distribution features on the data plane and detect attacks by applying random forests [5]. Stealthwatch detected attacks by analyzing flow features extracted from NetFlow [16]. Mirsky _et al._ developed Kitsune to learn per-packet features by adopting autoencoders [56]. Among task-specific methods, Nelms _et al._[60], Invernizzi _et al._[38], and Bilge _et al._[8] detected traffic in different stages of malware campaigns by using statistical ML. Bartos _et al._[6] and Tang _et al._[75] detected malformed HTTP request traffic. Holland _et al._[35] developed an automatic pipeline for traffic detection.
All these methods cannot effectively detect attacks based on encrypted traffic. **Task-Specific Encrypted Traffic Detection.** The existing encrypted traffic detection methods rely on domain knowledge of short-term flow-level features [2, 16, 62]. For example, Zheng _et al._ leveraged SDN to achieve crossfire attack detection [90], and Xing _et al._ designed primitives for programmable switches to detect link flooding attacks [82]. For encrypted malware traffic, Bilge _et al._[8] leveraged the traffic history to detect C&C servers, and Tegeler _et al._ developed supervised learning using time-scale flow features extracted from malware binaries [76]. Anderson _et al._ studied the feasibility of detecting malware encrypted communication via malformed TLS headers [3]. To the best of our knowledge, HyperVision is the first system that enables unsupervised detection of encrypted traffic with unknown patterns. **Encrypted Traffic Classification.** HyperVision aims to identify malicious behaviors in encrypted traffic. It is different from encrypted traffic classification, which decides whether the traffic is generated by certain applications or users [69]. For instance, Rimmer _et al._ leveraged DL for web fingerprinting, which de-anonymizes Tor traffic by classifying encrypted web traffic [67]. Siby _et al._ showed that classifying encrypted DNS traffic can jeopardize user privacy [70]. Similarly, Bahramali _et al._ classified the encrypted traffic of instant messaging applications [4]. Ede _et al._ designed semi-supervised learning for mobile application fingerprinting [78]. All these classification methods are orthogonal to HyperVision.

## X Conclusion

In this paper, we present HyperVision, an ML based real-time detection system for encrypted malicious traffic with unknown patterns. HyperVision utilizes a compact in-memory graph to retain flow interaction patterns while not requiring prior knowledge of the traffic.
Specifically, HyperVision uses two different strategies to represent the interaction patterns generated by short and long flows and aggregates the information of these flows. We develop an unsupervised graph learning method to detect the traffic by utilizing the connectivity, sparsity, and statistical features of the graph. Moreover, we establish an information theory based analysis framework to demonstrate that HyperVision preserves near-optimal information of the flows for effective detection. The experiments with 92 real-world attack traffic datasets demonstrate that HyperVision achieves at least 0.86 F1 and 0.92 AUC with over 80.6 Gb/s detection throughput and an average detection latency of 0.83s. In particular, 44 of the attacks can evade all five state-of-the-art methods, which demonstrates the effectiveness of HyperVision.
2305.19884
Positivity in Linear Gaussian Structural Equation Models
We study a notion of positivity of Gaussian directed acyclic graphical models corresponding to a non-negativity constraint on the coefficients of the associated structural equation model. We prove that this constraint is equivalent to the distribution being conditionally increasing in sequence (CIS), a well-known subclass of positively associated random variables. These distributions require knowledge of a permutation, a CIS ordering, of the nodes for which the constraint of non-negativity holds. We provide an algorithm and prove in the noise-less setting that a CIS ordering can be recovered when it exists. We extend this result to the noisy setting and provide assumptions for recovering the CIS orderings. In addition, we provide a characterization of Markov equivalence for CIS DAG models. Further, we show that when a CIS ordering is known, the corresponding class of Gaussians lies in a family of distributions in which maximum likelihood estimation is a convex problem.
Asad Lodhia, Jan-Christian Hütter, Caroline Uhler, Piotr Zwiernik
2023-05-31T14:22:26Z
http://arxiv.org/abs/2305.19884v1
# Positivity in linear Gaussian structural equation models

###### Abstract.

We study a notion of positivity of Gaussian directed acyclic graphical models corresponding to a non-negativity constraint on the coefficients of the associated structural equation model. We prove that this constraint is equivalent to the distribution being conditionally increasing in sequence (CIS), a well-known subclass of positively associated random variables. These distributions require knowledge of a permutation, a CIS ordering, of the nodes for which the constraint of non-negativity holds. We provide an algorithm and prove in the noise-less setting that a CIS ordering can be recovered when it exists. We extend this result to the noisy setting and provide assumptions for recovering the CIS orderings. In addition, we provide a characterization of Markov equivalence for CIS DAG models. Further, we show that when a CIS ordering is known, the corresponding class of Gaussians lies in a family of distributions in which maximum likelihood estimation is a convex problem.

AL was supported by a fellowship from the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard. CU was partially supported by NCCIH/NIH (1DP2AT012345), ONR (N00014-22-1-2116), NSF (DMS-1651995), the MIT-IBM Watson AI Lab, and a Simons Investigator Award. PZ was supported by the NSERC Discovery Grant RGPIN-2023-03481.

## 1. Introduction

Many random systems exhibit some form of positive dependence. Examples in statistical physics include the ferromagnetic Ising model [9] as well as general classes of lattice gas models and percolation models [8]. In fields such as finance [2], psychometrics [25, 29, 21] and biology (see [18] and further discussion in [17, Section 1.1]), positive dependence naturally arises [3]. In recent years, there has been an increased interest in exploiting this and related notions of positive dependence in statistical modelling and in machine learning.
This research direction has been particularly fruitful in the context of Gaussian and related distributions. Well studied examples of positive dependence in Gaussian models include: positive association defined by nonnegativity of all correlations [23], totally positive distributions (also known as \(\mathrm{MTP}_{2}\) distributions) defined by nonnegativity of all partial correlations [27, 16], and mixtures of these two scenarios as discussed in [17]. Various methods have been developed for covariance matrix estimation in the Gaussian setting [5, 2, 28, 33]. In applications, where the assumption of positive dependence is appropriate, these methods perform extremely well with no need for explicit regularization [2, 24]. An important problem, which motivates our work, is that none of these notions of positive dependence are suitable in the context of directed acyclic graphical models, also known as Bayesian networks1. For a simple example, consider a Gaussian vector \(X=(X_{1},X_{2},X_{3})\) such that all partial correlations are non-negative. In other words, expressing partial correlations in terms of marginal correlation coefficients \(\rho_{ij}=\operatorname{corr}(X_{i},X_{j})\), we require that \[\rho_{12}-\rho_{13}\rho_{23}\geq 0,\quad\rho_{13}-\rho_{12}\rho_{23}\geq 0, \quad\rho_{23}-\rho_{12}\rho_{13}\geq 0.\] If \(X\) is Markov to the DAG \(2\to 1\gets 3\), or equivalently, if \(\rho_{23}=0\), then, from the last inequality, we necessarily have that \(\rho_{12}\rho_{13}=0\). In other words, a Bayesian network with a v-structure cannot have all partial correlations non-negative. Given that two DAGs are Markov equivalent if and only if they have the same skeleton and v-structures (see Theorem 4.1), adding the \(\operatorname{MTP}_{2}\) constraint would severely restrict the class of Bayesian networks. In this paper, we study a natural form of directional positive dependence that is suitable for Gaussian models on directed acyclic graphs (DAGs). 
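The v-structure obstruction above can be checked numerically. A minimal sketch, with correlation values chosen only for illustration: setting \(\rho_{23}=0\) and \(\rho_{12}=\rho_{13}=0.5\) makes \(\rho_{23}-\rho_{12}\rho_{13}=-0.25<0\), which shows up as a positive off-diagonal entry of the precision matrix, so not all partial correlations can be non-negative.

```python
import numpy as np

# Covariance of a Gaussian vector Markov to the v-structure 2 -> 1 <- 3:
# rho_23 = 0 while rho_12 = rho_13 = 0.5 are both positive.
Sigma = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.0],
                  [0.5, 0.0, 1.0]])
K = np.linalg.inv(Sigma)

# The partial correlation of X2 and X3 given X1 is -K_23 / sqrt(K_22 K_33),
# so K_23 > 0 means the third non-negativity constraint fails: the
# distribution cannot be MTP_2 (all partial correlations non-negative).
assert K[1, 2] > 0
```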
We introduce the model through its representation via linear structural equations [22]. If \(G\) is a DAG with \(m\) nodes representing the Gaussian vector \(X=(X_{1},\ldots,X_{m})\), then the distribution of \(X\) lies in the associated DAG model if it admits the stochastic representation \[X_{i}\;=\;\sum_{j\in\operatorname{Pa}(i)}\Lambda_{ij}X_{j}+\varepsilon_{i} \quad\text{for all }i=1,\ldots,m, \tag{1.1}\] where \(\operatorname{Pa}(i)\) denotes the set of parents of the node \(i\) in the DAG \(G\), \(\Lambda_{ij}\in\mathbb{R}\), and \(\varepsilon_{i}\sim N(0,\sigma_{i}^{2})\) are mutually independent. In matrix form, this can be written as \(X=\Lambda X+\varepsilon\), where \(\Lambda_{ij}=0\) unless \(j\to i\) in \(G\). **Remark 1.1**.: Let \(D\) be a diagonal matrix representing the covariance matrix of \(\varepsilon\). Denoting the covariance matrix of \(X\) by \(\Sigma\) (we assume throughout that it is full rank), then (1.1) implies \[\Sigma\;=\;(I-\Lambda)^{-1}D(I-\Lambda)^{-\top},\] which is equivalent to the following equality for the precision matrix \(K=\Sigma^{-1}\): \[K\;=\;(I-\Lambda)^{\top}D^{-1}(I-\Lambda).\] The following natural notion of positivity in Gaussian DAG models is the central theme of our paper. **Definition 1.2**.: A Gaussian vector \(X\) is _positively DAG dependent_ with respect to a DAG \(G\) if \(X\) admits the stochastic representation (1.1) with all \(\Lambda_{ij}\) nonnegative. We denote the subset of Gaussian DAG models \(\mathbb{M}(G)\) over \(G\) that satisfy this property by \(\mathbb{M}_{+}(G)\). Other approaches have been proposed to define positive dependence on DAGs; see, for example, [30, 32] and references therein. To explain the relationship between these different notions, we note that positive DAG dependence is closely related to the following classical notion of positive dependence. 
**Definition 1.3**.: A random vector \(X=(X_{1},\ldots,X_{m})\) is _conditionally increasing in sequence_ (_CIS_) if for every \(i\in[m]\) and every fixed \(x_{i}\in\mathbb{R}\), it holds that \[\mathbb{P}\big{(}\{X_{i}\geq x_{i}\}\big{|}(X_{j}=x_{j})_{j<i}\big{)}\] is a non-decreasing function in \((x_{1},\ldots,x_{i-1})\), when equipped with the coordinate-wise partial order. In the context of DAGs, the papers [30, 32] investigated a similar notion which they called a "weak monotonic effect" or "positive influence". If a parent \(k\in\mathrm{Pa}(i)\) of a particular vertex \(i\) has the property that \[\mathbb{P}\Big{(}\{X_{i}\geq x_{i}\}\Big{|}\bigcap_{j\in\mathrm{Pa}(i)}(X_{j}=x_{j})\Big{)}\] is a non-decreasing (non-increasing) function in \(x_{k}\), then \(k\) is said to have a weak monotonic positive (negative) effect on \(i\). Notably, this condition can be used to infer the presence or absence of certain edges in the graph. To provide a specific example, consider the DAG from [30, Example 5] with variables A denoting air pollution levels, E denoting antihistamine treatment, D denoting asthma incidence, and C denoting bronchial reactivity in the following figure. From context, it is reasonable to assume that the directed edges \((A,D)\), \((A,E)\) and \((A,C)\) are weak positive monotonic effects, and similarly the edges \((C,E)\) and \((C,D)\) are weak positive monotonic effects. The following argument can be used to test the causal relationship \(E\to D\): if the edge \(E\to D\) is absent, then [30, Theorem 4] implies that the covariance of \(E\) and \(D\) must be non-negative due to the weak positive monotonic effects of the other edges. Thus, if the observed covariance of \(E\) and \(D\) were negative (which is the desired medical objective), we would conclude the presence of the edge \(E\to D\) even without measuring the variables \(A\) and \(C\).
We will show that the notion of positive dependence of a DAG considered in this work (stated in Definition 2.6) is the same as assuming a weak positive monotonic effect of _every_ parent on its child. This example showing how positive dependence can be used to derive causal relationships motivates our study of Markov equivalence of these models. In this work, we link the class of CIS models (which a priori make no reference to any DAG structure) to positive DAG dependence. In particular, we will discuss the problem of ordering the variables in such a way that the resulting vector is CIS and identifying when there exists such an ordering. The resulting DAG could be used for causal inference. In Theorem 2.1 below, we show that a Gaussian vector \(X\) is CIS if and only if it is positively DAG dependent with respect to the full DAG with arrows \(i\to j\) for all \(i<j\). It follows that the Gaussian CIS condition has a simple algebraic formulation. Let \(K=UU^{\top}\) be the Cholesky decomposition of the inverse covariance matrix \(K=\Sigma^{-1}\) of \(X\), where \(U\) is upper triangular with positive diagonal entries. Then our notion of positive dependence restricts the signs of the off-diagonal entries of \(U\) to be non-positive. This constraint is convex, which makes computing the maximum likelihood estimator (MLE) particularly tractable. In practice, \(K\) may admit such a signed Cholesky factorization only after permuting its rows and columns. Thus, part of the problem is to recover a permutation matrix \(P\) that makes such a signed factorization possible. Maximizing the likelihood over all \(m!\) permutation matrices is infeasible. Instead, we propose a simple algorithm for learning such a permutation, and we provide statistical guarantees for the proposed algorithm.
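Remark 1.1 and the sign constraint on \(U\) can be verified numerically. In the sketch below (with \(\Lambda\) and \(D\) chosen arbitrarily for illustration), the precision matrix \(K=(I-\Lambda)^{\top}D^{-1}(I-\Lambda)\) built from a non-negative strictly lower-triangular \(\Lambda\) has an upper Cholesky factor whose off-diagonal entries are non-positive:

```python
import numpy as np

# Non-negative SEM coefficients Lambda (strictly lower triangular, so that
# j -> i only for j < i) and noise variances D; values are illustrative.
Lam = np.array([[0.0, 0.0, 0.0],
                [0.7, 0.0, 0.0],
                [0.2, 0.5, 0.0]])
D = np.diag([1.0, 0.5, 2.0])

A = np.eye(3) - Lam
K = A.T @ np.linalg.inv(D) @ A          # precision of X = Lam X + eps

# Upper Cholesky factor U with K = U U^T: numpy returns the lower factor,
# so conjugate by the index-reversal (exchange) permutation.
L0 = np.linalg.cholesky(K[::-1, ::-1])
U = L0[::-1, ::-1]

assert np.allclose(U @ U.T, K)          # K = U U^T
assert np.allclose(U, np.triu(U))       # U is upper triangular
# CIS sign constraint: all off-diagonal entries of U are non-positive.
assert (U - np.diag(np.diag(U)) <= 1e-12).all()
```

By uniqueness of the Cholesky factorization with positive diagonal, \(U=(I-\Lambda)^{\top}D^{-1/2}\), so non-negativity of \(\Lambda\) is exactly non-positivity of the off-diagonal entries of \(U\).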
We will often contrast the class of CIS Gaussian vectors \(X\) with the well-known and well-studied class of _multivariate totally positive distributions of order 2_ (\(\mathrm{MTP}_{2}\)), which requires that the density \(p\) on \(\mathbb{R}^{m}\) satisfies \[p(x)p(y)\;\leq\;p(x\lor y)p(x\wedge y)\qquad\text{for all }x,y\in\mathbb{R}^{m},\] where \(\vee\) is the componentwise maximum and \(\wedge\) is the componentwise minimum. This inequality appeared in [8], where it was shown to imply positive association for general distributions. In the Gaussian case, \(\mathrm{MTP}_{2}\) was shown to be equivalent to the precision matrix (inverse covariance matrix) being an M-matrix [12].

**Definition 1.4** (M-matrix).: A positive definite \(m\times m\) matrix \(A=[a_{i,j}]_{1\leq i,j\leq m}\) is an M-matrix if the entries satisfy \(a_{i,j}\leq 0\) for all \(i\neq j\). The space of symmetric, positive definite M-matrices of dimension \(m\times m\) is denoted \(\mathcal{M}_{m}(\mathbb{R})\).

### Outline

Section 2 expounds upon the relationship between CIS distributions and DAG models while also providing motivating examples both in the Gaussian and non-Gaussian settings. Section 3 provides examples that distinguish CIS distributions from \(\mathrm{MTP}_{2}\) and other positively associated distributions along with an illustration that CIS orderings may not provide sufficient information to recover the underlying Markov equivalence class. Section 4 dives deeper into Markov equivalence for CIS orderings. Section 5 shifts the focus to parameter estimation and fitting: Cholesky factor models are introduced for the purpose of characterizing the MLE of \(\Lambda\) and \(D\) of a CIS distributed vector assuming the underlying CIS ordering is known. Section 6 concerns recovering a CIS ordering, first in the population case and then proving consistency of a noisy version of our proposed algorithm under simple assumptions on \(\Lambda\).
In this section, we also prove results on what sorts of CIS orderings are possible for a distribution.

### Notation

For a DAG \(G=(V,E)\), we denote the set of parent nodes of a vertex \(i\) by \(\mathrm{Pa}(i)\) and the set of children nodes of a vertex \(i\) by \(\mathrm{Ch}(i)\). If there are several DAGs over the same vertex set \(V\) under consideration, we write \(\mathrm{Pa}_{G}(i)\) and \(\mathrm{Ch}_{G}(i)\) to indicate the dependence on the particular DAG \(G\). We will mostly use \(V=[m]=\{1,\ldots,m\}\) or subsets of \([m]\). When we say a function \(f:\mathbb{R}^{k}\to\mathbb{R}\) is increasing (non-decreasing) in \(\mathbb{R}^{k}\), we mean that \(f\) is increasing (non-decreasing) in each variable. Moreover, for a subset \(A\subset[k]\), if we write \((x_{j})_{j\in A}\), or equivalently, \(x_{A}\), we mean the tuple formed by taking the entries of \(x\) that are indexed by \(A\), keeping the original order. We denote the set of \(m\times m\) positive semidefinite matrices by \(\mathcal{S}_{m}(\mathbb{R})\) and the subset of positive definite matrices by \(\mathcal{S}_{m}^{+}(\mathbb{R})\). Further, \(I\) always denotes the identity matrix. When \(M\) is an \(s\times t\) matrix with \(A\subset[s]\) and \(B\subset[t]\), then \(M_{A,B}\) is the submatrix of size \(|A|\times|B|\) with entries \(M_{i,j}\) with \(i\in A\) and \(j\in B\). Following [14, Section 5.1.1], if a matrix operation appears with the subset indices, e.g., \(M_{A,A}^{-1}\), the matrix operation is performed first -- so \(M_{A,A}^{-1}\) is the submatrix of \(M^{-1}\) indexed by \(A\), whereas \((M_{A,A})^{-1}\) is the inverse of the submatrix of \(M\) indexed by \(A\). We will use the shorthand \(\backslash i\) for \([m]\backslash i\).
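A small numeric illustration of this indexing convention (the matrix and index set are arbitrary): \(M_{A,A}^{-1}\) takes the submatrix of the inverse, \((M_{A,A})^{-1}\) inverts the submatrix, and the two generally differ.

```python
import numpy as np

M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
A = [0, 2]            # subset of indices (0-based)
ix = np.ix_(A, A)

inv_then_sub = np.linalg.inv(M)[ix]   # M^{-1}_{A,A}: operation first
sub_then_inv = np.linalg.inv(M[ix])   # (M_{A,A})^{-1}: submatrix first

# The two orders of operations disagree whenever the off-block of M is
# nonzero (the inverse of M^{-1}_{A,A} is the Schur complement, not M_{A,A}).
assert not np.allclose(inv_then_sub, sub_then_inv)
```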
When we consider collections of permutations, we use one line notation and use parentheses around those elements that can be ordered in any way, so for instance \((123)45\) is the set of permutations for which \(\sigma(4)=4\) and \(\sigma(5)=5\) and \(1,2,3\) can be arbitrarily assigned to the values \(\sigma(1)\), \(\sigma(2)\) and \(\sigma(3)\), that is, \((123)45=\{12345,13245,21345,23145,31245,32145\}\). ## 2. Structure of Positive Dependence on a DAG ### Basic results and definitions We start by stating the main result of this section, which links the classical concept of CIS dependence and positive DAG dependence. **Theorem 2.1**.: _A Gaussian vector \(X\) is CIS if and only if it is positively DAG dependent with respect to the full DAG with \(i\to j\) for all \(i<j\)._ The proof relies on a lemma that we prove first. **Lemma 2.2**.: _Let \(Z=(Z_{1},\ldots,Z_{m})\sim\mathcal{N}_{m}(\mu,\Sigma)\) be a Gaussian random vector on \(\mathbb{R}^{m}\) with mean \(\mu\in\mathbb{R}^{m}\) and covariance \(\Sigma\in\mathcal{S}^{+}_{m}(\mathbb{R})\), let \(K=\Sigma^{-1}\) be the precision matrix. The function_ \[\mathbb{P}\Big{(}\{Z_{i}\geq x_{i}\}\Big{|}\bigcap_{j\neq i}\{Z_{j}=x_{j}\} \Big{)} \tag{2.1}\] _is non-decreasing in \((x_{j})_{j\neq i}\) if and only if \(K_{i,j}\leq 0\) for all \(j\neq i\). 
Moreover, this statement is equivalent to the following two statements:_ * \(\mathbb{E}[Z_{i}|Z_{\setminus i}]\) _is a non-decreasing function in_ \((Z_{j})_{j\neq i}\)_._ * \(Z_{i}=\sum_{j\neq i}\Lambda_{ij}Z_{j}+\varepsilon_{i}\) _with_ \(\Lambda_{ij}\geq 0\) _and_ \(\varepsilon_{i}\) _Gaussian and independent of_ \((Z_{j})_{j\neq i}\)_._ Proof.: It is a classic result [19, Theorem 1.2.11 (b)] that \[Z_{i}|Z_{\setminus i}\sim\mathcal{N}\Big{(}\mu_{i}+\Sigma_{i,\setminus i}( \Sigma_{\setminus i,\setminus i})^{-1}(Z_{\setminus i}-\mu_{\setminus i}), \Sigma_{i,i}-\Sigma_{i,\setminus i}(\Sigma_{\setminus i,\setminus i})^{-1} \Sigma_{i,\setminus i}^{\top}\Big{)},\] but note that by the Schur complement formula, \[K_{i,i} =\Big{(}\Sigma_{i,i}-\Sigma_{i,\setminus i}\big{(}\Sigma_{ \setminus i,\setminus i}\big{)}^{-1}\Sigma_{i,\setminus i}^{\top}\Big{)}^{-1},\] \[K_{i,\setminus i} =-K_{i,i}\Sigma_{i,\setminus i}\big{(}\Sigma_{\setminus i, \setminus i}\big{)}^{-1},\] and \(K_{i,i}>0\) by positive definiteness. Hence we may rewrite the mean of \(Z_{i}|Z_{\setminus i}\) as \[\mu_{i}-\frac{K_{i,\setminus i}}{K_{i,i}}(Z_{\setminus i}-\mu_{\setminus i}).\] Since the conditional variance does not depend on \(x_{\setminus i}\), it is then clear that the function in the statement of the lemma is non-decreasing in \(x_{\setminus i}\) if and only if the entries of \(K_{i,\setminus i}\) are all non-positive. Note that this is also the condition on the conditional mean in (a). Equivalence with (b) follows from the fact that \[\varepsilon_{i}\;:=\;Z_{i}\;-\;\mathbb{E}[Z_{i}|Z_{\setminus i}]\] is a mean zero Gaussian variable. Since \(\mathbb{E}[\varepsilon_{i}Z_{j}]=0\) for all \(j\neq i\), and all \((\varepsilon_{i},Z)\) are jointly Gaussian, it follows that \(\varepsilon_{i}\) is independent of \(Z_{\setminus i}\) as claimed. 
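The Schur-complement identity in the proof can be sanity-checked numerically. A small NumPy sketch, with an arbitrarily chosen positive definite precision matrix \(K\) (not from the text): the regression coefficients of \(Z_{i}\) on \(Z_{\setminus i}\) equal \(-K_{i,\setminus i}/K_{i,i}\), so they are non-negative exactly when row \(i\) of \(K\) has non-positive off-diagonal entries.

```python
import numpy as np

# Arbitrary positive definite precision matrix (symmetric, diagonally dominant).
K = np.array([[ 2.0,  0.3, -0.5],
              [ 0.3,  1.5, -0.4],
              [-0.5, -0.4,  1.0]])
Sigma = np.linalg.inv(K)

i, rest = 2, [0, 1]
# Coefficients of the conditional mean E[Z_i | Z_rest]:
beta = Sigma[i, rest] @ np.linalg.inv(Sigma[np.ix_(rest, rest)])

# Schur-complement identity from the proof of Lemma 2.2:
print(np.allclose(beta, -K[i, rest] / K[i, i]))  # → True
# Row i of K has non-positive off-diagonals, hence beta >= 0:
print(bool((beta >= 0).all()))  # → True
```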
Proof of Theorem 2.1.: Using Lemma 2.2(b) recursively starting with \(i=m\) we get that \(X\) is CIS if and only if \[X_{i}\;=\;\sum_{j=1}^{i-1}\Lambda_{ij}X_{j}+\varepsilon_{i}\quad\text{for all $i=1,\ldots,m$}\] with \(\Lambda_{ij}\geq 0\) and \(\varepsilon_{i}\) independent of \(X_{1},\ldots,X_{i-1}\). This is precisely (1.1) when applied to the full DAG with \(j\to i\) for all \(j<i\). Theorem 2.1 together with Remark 1.1 gives the following important algebraic characterization of Gaussian CIS distributions. **Corollary 2.3**.: _The vector \(X\sim\mathcal{N}_{m}(\mu,\Sigma)\) is CIS if and only if \(K=UU^{\top}\) with \(U\) upper triangular with positive diagonal and non-positive off-diagonal entries._ Note that the CIS property relies on the ordering of the variables in the vector \(X\). The following definition is a natural extension of the CIS property; see also [20]. **Definition 2.4**.: If there exists a permutation \(\sigma\) of \([m]\) such that \((X_{\sigma(1)},\ldots,X_{\sigma(m)})\) is CIS, then we say \(\sigma\) is a CIS ordering of \(X\). If _for every_ permutation \(\sigma\) of \([m]\) we have that the vector \((X_{\sigma(1)},\ldots,X_{\sigma(m)})\) is also CIS, then we say \(X\) is conditionally increasing (CI). Interestingly, in the Gaussian case CI equals \(\operatorname{MTP}_{2}\) (see Section 3). Next, let \(G=(V,E)\) be a DAG. A permutation \(\sigma\) of \(V\) is a _topological ordering_ if \(a\to b\) implies \(\sigma(a)<\sigma(b)\). It is well-known that if \(G\) is a DAG, there exists a permutation of \(V\) that is a topological ordering. In relation to the structural equation model, it is useful to recall that if a DAG is topologically ordered then Remark 1.1 takes on a particularly nice form with \(\Lambda\) lower triangular. Denote by \(\operatorname{CIS}_{\sigma}\) the set of all Gaussian distributions such that \((X_{\sigma(1)},\ldots,X_{\sigma(m)})\) is CIS. 
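Corollary 2.3 turns the CIS property into a finite test: factor \(K=UU^{\top}\) with \(U\) upper triangular and positive diagonal (a "reverse" Cholesky factorization, obtained by conjugating an ordinary Cholesky factorization with the order-reversing permutation) and inspect the signs of the off-diagonal entries. A minimal NumPy sketch; the illustrative \(K\) is built from the structural equations \(X_{1}=\varepsilon_{1}\), \(X_{2}=\varepsilon_{2}\), \(X_{3}=X_{1}+X_{2}+\varepsilon_{3}\), which give a distribution that is CIS but not \(\mathrm{MTP}_{2}\):

```python
import numpy as np

def cis_factor(K):
    """Unique upper-triangular U with positive diagonal such that K = U @ U.T."""
    J = np.eye(len(K))[::-1]           # order-reversing permutation matrix
    L = np.linalg.cholesky(J @ K @ J)  # ordinary lower Cholesky
    return J @ L @ J

def is_cis(K, tol=1e-9):
    """Corollary 2.3: the Gaussian with precision K is CIS (in the given order)
    iff all off-diagonal entries of U are non-positive."""
    U = cis_factor(K)
    return bool((U - np.diag(np.diag(U)) <= tol).all())

def is_m_matrix(K, tol=1e-9):
    return bool((K - np.diag(np.diag(K)) <= tol).all())

# Precision of X1 = e1, X2 = e2, X3 = X1 + X2 + e3:
U = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
K = U @ U.T
print(is_cis(K), is_m_matrix(K))  # → True False
```

The factorization step relies on `J @ K @ J` being positive definite whenever \(K\) is, since `J` is a permutation matrix.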
The following result gives an important characterization of Gaussian positive DAG dependent distributions \(\mathbb{M}_{+}(G)\). **Theorem 2.5**.: _For a DAG \(G\) it holds that_ \[\mathbb{M}_{+}(G) = \mathbb{M}(G)\cap\operatorname{CIS}_{\sigma},\] _where \(\sigma\) is any topological ordering of \(G\)._ Proof.: We first show \(\mathbb{M}_{+}(G)\subseteq\mathbb{M}(G)\cap\operatorname{CIS}_{\sigma}\). The inclusion \(\mathbb{M}_{+}(G)\subseteq\mathbb{M}(G)\) follows by definition. To argue that \(\mathbb{M}_{+}(G)\subseteq\operatorname{CIS}_{\sigma}\) let \(\widehat{G}\) be the complete DAG whose only topological ordering is \(\sigma\). It is clear that \(\mathbb{M}_{+}(G)\subseteq\mathbb{M}_{+}(\widehat{G})\) and \(\mathbb{M}_{+}(\widehat{G})=\operatorname{CIS}_{\sigma}\) by Theorem 2.1. Consequently, \(\mathbb{M}_{+}(G)\subseteq\mathbb{M}(G)\cap\operatorname{CIS}_{\sigma}\). To show the opposite inclusion, note that if \(X\) has distribution in \(\mathbb{M}(G)\) then the representation (1.1) holds. Since \(X\) is \(\operatorname{CIS}_{\sigma}\) and \(\sigma\) is a topological ordering, we get from Lemma 2.2(b) that the coefficients \(\Lambda_{ij}\) must be non-negative and so the distribution of \(X\) lies in \(\mathbb{M}_{+}(G)\). Although we focus in this paper on the Gaussian case, we note that Lemma 2.2 suggests a more general definition, which is in line with [30, 32]. Consider a random vector \(X\) with values in \(\mathcal{X}=\prod_{i=1}^{m}\mathcal{X}_{i}\) where \(\mathcal{X}_{i}\subseteq\mathbb{R}\). We always assume that \(X\) admits a density function with respect to some product measure on \(\mathcal{X}\). **Definition 2.6**.: Suppose that \(X\) is a distribution that is Markov to a directed acyclic graph \(G\). 
Then \(X\) is positively DAG dependent with respect to \(G\) if, for every \(i\), the conditional survival function \[\mathbb{P}\Big{(}\{X_{i}\geq x_{i}\}\Big{|}\bigcap_{j\in\mathrm{Pa}(i)}\{X_{j}= x_{j}\}\Big{)}\] is non-decreasing in \((x_{j})_{j\in\mathrm{Pa}(i)}\). We will use this more general definition to motivate some non-Gaussian examples in the following discussion. ### Motivating examples Positive DAG dependence is often present in small, well-designed studies. Some examples of datasets that are both well modeled by DAGs and have positively correlated variables can be found in educational research or medical psychology; see, e.g., [1, 13]. There are also two popular general datasets where positive DAG dependence appears naturally. These are fictitious datasets that were constructed to mimic real processes. The first dataset was introduced in [15]. It consists of sequences of "yes" and "no" responses from patients with suspected lung disease to the following questions: 1. Has shortness-of-breath (D) 2. Had a recent trip to Asia (A) 3. Has Lung Cancer (L) 4. Has Tuberculosis (T) 5. Either (T) or (L), or both, are true (E) 6. Has a chest X-ray with a positive test (X) 7. Is a smoker (S) 8. Has Bronchitis (B) In modeling the relationships of these variables, we take 1 to be the response "yes" and 0 to be "no" and use a binary-valued Bayesian network illustrated in Figure 1 below to encode the relationships between variables, following [15]. Figure 1. The node letters are the parenthetical letters in the list above. The variable E represents the logical statement “Tuberculosis (T) or Lung Cancer (L)”. 
In [15, Table 1], a ground truth joint distribution was defined for this example using the conditional probabilities \[\begin{array}{ccc}\mathbb{P}\big{(}\{\mathrm{A}=1\}\big{)}=.01&&\mathbb{P}\big{(}\{\mathrm{S}=1\}\big{)}=.50\\ \mathbb{P}\big{(}\{\mathrm{T}=1\}|\{\mathrm{A}=1\}\big{)}=.05&&\mathbb{P}\big{(}\{\mathrm{L}=1\}|\{\mathrm{S}=1\}\big{)}=.10\\ \mathbb{P}\big{(}\{\mathrm{T}=1\}|\{\mathrm{A}=0\}\big{)}=.01&&\mathbb{P}\big{(}\{\mathrm{L}=1\}|\{\mathrm{S}=0\}\big{)}=.01\\ \mathbb{P}\big{(}\{\mathrm{E}=1\}|\{\mathrm{L}=1\}\cap\{\mathrm{T}=1\}\big{)}=1&&\mathbb{P}\big{(}\{\mathrm{B}=1\}|\{\mathrm{S}=1\}\big{)}=.60\\ \mathbb{P}\big{(}\{\mathrm{E}=1\}|\{\mathrm{L}=1\}\cap\{\mathrm{T}=0\}\big{)}=1&&\mathbb{P}\big{(}\{\mathrm{B}=1\}|\{\mathrm{S}=0\}\big{)}=.30\\ \mathbb{P}\big{(}\{\mathrm{E}=1\}|\{\mathrm{L}=0\}\cap\{\mathrm{T}=1\}\big{)}=1&&\mathbb{P}\big{(}\{\mathrm{X}=1\}|\{\mathrm{E}=1\}\big{)}=.98\\ \mathbb{P}\big{(}\{\mathrm{E}=1\}|\{\mathrm{L}=0\}\cap\{\mathrm{T}=0\}\big{)}=0&&\mathbb{P}\big{(}\{\mathrm{X}=1\}|\{\mathrm{E}=0\}\big{)}=.05\\ \mathbb{P}\big{(}\{\mathrm{D}=1\}|\{\mathrm{E}=1\}\cap\{\mathrm{B}=1\}\big{)}=.90&&\mathbb{P}\big{(}\{\mathrm{D}=1\}|\{\mathrm{E}=1\}\cap\{\mathrm{B}=0\}\big{)}=.70\\ \mathbb{P}\big{(}\{\mathrm{D}=1\}|\{\mathrm{E}=0\}\cap\{\mathrm{B}=1\}\big{)}=.80&&\mathbb{P}\big{(}\{\mathrm{D}=1\}|\{\mathrm{E}=0\}\cap\{\mathrm{B}=0\}\big{)}=.10.\end{array}\] It is clear that the above model is positively DAG dependent with respect to the given DAG by inspecting the probabilities directly and checking that the condition in Definition 2.6 holds. Another dataset that is used in the context of Gaussian DAGs is the crop analysis dataset discussed in Section 2.1 in [26]. The underlying DAG and the node descriptions are given in Figure 2. 
The dataset assumes the following conditional node distributions: \[\begin{array}{rl}\mathrm{E}\sim&\mathcal{N}(50,100)\\ \mathrm{G}\sim&\mathcal{N}(50,100)\\ \mathrm{V}\mid\mathrm{G},\mathrm{E}\sim&\mathcal{N}(-10.36+0.5\mathrm{G}+0.77 \mathrm{E},25)\\ \mathrm{N}\mid\mathrm{V}\sim&\mathcal{N}(45+0.1\mathrm{V},99)\\ \mathrm{W}\mid\mathrm{V}\sim&\mathcal{N}(15+0.7\mathrm{V},51)\\ \mathrm{C}\mid\mathrm{N},\mathrm{W}\sim&\mathcal{N}(0.3\mathrm{N}+0.3\mathrm{ W},39.06)\end{array}\] Here, again, positive DAG dependence is part of the construction because all the conditional means depend positively on the conditioning variables. Figure 2. The DAG representing the crop dataset from [26]. The nodes are: E (environmental potential), G (genetic potential), V (vegetative organs), N (number of seeds), W (seeds mean weight), C (crop). ## 3. Illustrative theoretical examples Denote \(\mathrm{MTP}_{2}\) to be the set of all \(\mathrm{MTP}_{2}\) Gaussians, and \(\mathrm{PA}\) to be the set of all positively associated Gaussians; see [6] for a discussion of association. In [20] it is shown that for general distributions, the \(\mathrm{MTP}_{2}\) property implies CI which in turn implies CIS, and CI is equal to \(\mathrm{MTP}_{2}\) in the Gaussian case. Thus, in the Gaussian case, for every permutation \(\sigma\) we have: \[\mathrm{MTP}_{2}\quad=\quad\bigcap_{\tau}\mathrm{CIS}_{\tau}\quad\subset \quad\mathrm{CIS}_{\sigma}\quad\subset\quad\bigcup_{\tau}\mathrm{CIS}_{\tau} \quad\subset\quad\mathrm{PA}, \tag{3.1}\] where the intersection and the union are taken over all orderings. As we will see, even in the Gaussian case, all the inclusions are strict. We first give a simple example which is not \(\mathrm{MTP}_{2}\) but is CIS. **Example 3.1**.: Consider the upper triangular matrix \[U=\begin{bmatrix}1&0&-a\\ 0&1&-b\\ 0&0&1\end{bmatrix}\] with \(a,b>0\). If \(K=UU^{\top}\) is the precision matrix of a Gaussian \(X=(X_{1},X_{2},X_{3})\), then \(X\) is CIS by Corollary 2.3. 
However, \[K=\begin{bmatrix}1&0&-a\\ 0&1&-b\\ 0&0&1\end{bmatrix}\begin{bmatrix}1&0&0\\ 0&1&0\\ -a&-b&1\end{bmatrix}=\begin{bmatrix}1+a^{2}&ab&-a\\ ab&1+b^{2}&-b\\ -a&-b&1\end{bmatrix},\] which is not an M-matrix, and therefore \(X\) is not \(\mathrm{MTP}_{2}\). As a structural equation model, we may write \(X\) as \[X_{1} =\varepsilon_{1}\] \[X_{2} =\varepsilon_{2}\] \[X_{3} =aX_{1}+bX_{2}+\varepsilon_{3},\] where \((\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\) is a standard \(\mathcal{N}(0,I_{3})\) Gaussian. This is a DAG with a v-structure, \(1\to 3\gets 2\). Note that \(K_{12}>0\) and so \(123\) and \(213\) are the only possible CIS orderings. The above example is significant in that it shows that for Gaussian distributions, the class of CIS ordered graphical models is substantially larger than \(\mathrm{MTP}_{2}\) Gaussians. In particular, it is known that v-structures cannot occur for \(\mathrm{MTP}_{2}\) graphical models in a very general setting [7]. From this standpoint, it is quite appealing to be able to extend from \(\mathrm{MTP}_{2}\) distributions to CIS distributions, since v-structures are significant in determining Markov equivalence classes, which we discuss in the next section. Example 3.1 shows that a distribution that is CIS may not be CIS with respect to other orderings. In consequence, the inclusion \(\mathrm{CIS}_{\sigma}\subset\bigcup_{\tau}\mathrm{CIS}_{\tau}\) is also strict (unless \(m=2\)). As a demonstration that the last inclusion in (3.1) is strict, we give the following example, which is a positively associated Gaussian where _no reordering_ of \(X\) is CIS. **Example 3.2**.: Let \(X\) be a centered Gaussian with covariance \[\Sigma=\begin{bmatrix}5&4&7&8\\ 4&9&8&7\\ 7&8&11&11\\ 8&7&11&14\end{bmatrix}.\] Since all entries of \(\Sigma\) are positive, by [23], \(X\) is a positively associated Gaussian. 
However, \[K=\begin{bmatrix}94&25&-55&-23\\ 25&7&-15&-6\\ -55&-15&33&13\\ -23&-6&13&6\end{bmatrix},\] since each row of the above matrix has a positive off-diagonal entry it follows that there is no \(j\in[4]\) such that \(\mathbb{E}[X_{j}|X_{\setminus j}]\) is a non-decreasing function in \(X_{\setminus j}\), from which we conclude that there is no CIS ordering of \(X\). The next result studies the relation between \(\mathrm{CIS}_{\sigma}\) models. **Proposition 3.3**.: _Suppose \(X=(X_{1},\ldots,X_{m})\) has a Gaussian distribution. If \(m=2\) then \((X_{1},X_{2})\) is \(\mathrm{CIS}\) if and only if \((X_{2},X_{1})\) is \(\mathrm{CIS}\). If \(m\geq 3\) then \(\mathrm{CIS}_{\sigma}=\mathrm{CIS}_{\sigma^{\prime}}\) if and only if \(\sigma(k)=\sigma^{\prime}(k)\) for \(k=3,\ldots,m\)._ Proof.: The bivariate case follows because \((X_{1},X_{2})\) is CIS if and only if \(\mathrm{Cov}(X_{1},X_{2})\geq 0\), which is symmetric in \((X_{1},X_{2})\). Suppose \(m\geq 3\). The "if" implication follows directly from the definition and from the \(m=2\) case. For the "only if" implication assume with no loss in generality that \(\sigma^{\prime}=\mathrm{id}\). We construct a distribution that lies in \(\mathrm{CIS}_{\mathrm{id}}\) and show that it lies in \(\mathrm{CIS}_{\sigma}\) if and only if \(\sigma=\mathrm{id}\) or \(\sigma=(2,1,3,\ldots,m)\). Let \(U\) be an upper triangular matrix of the form \[U\;=\;\begin{bmatrix}1&0&-1&-2&\cdots&-(m-3)&-(m-2)\\ 0&1&-1&-1&\cdots&-1&-1\\ 0&0&1&-1&\cdots&-1&-1\\ \vdots&\vdots&&\ddots&\ddots&&\\ 0&0&0&0&\cdots&1&-1\\ 0&0&0&0&\cdots&0&1\end{bmatrix}.\] The distribution we construct has covariance \(\Sigma\) such that \(K=\Sigma^{-1}=UU^{\top}\). Since all the upper off-diagonal entries are non-positive, this distribution is \(\mathrm{CIS}_{\mathrm{id}}\). Denote the rows of \(U\) by \(U_{1},\ldots,U_{m}\). Note that \[U_{1}^{\top}U_{2}\;>\;\cdots\;>\;U_{1}^{\top}U_{m-1}\;=\;1\;>\;0,\] and so \(K_{1i}>0\) for all \(i=2,\ldots,m-1\). 
This shows that every CIS ordering of this random vector must have \(m\) as the last index. If \(m=3\), then we are done. If \(m\geq 4\), consider the marginal distribution over \(A=\{1,\ldots,m-1\}\). Because \(U\) is upper triangular, we get that \((\Sigma_{A,A})^{-1}=U_{A,A}U_{A,A}^{\top}\). Note that \(U_{A,A}\) has the same form as \(U\) but with \(m-1\) replacing \(m\). Thus, by the same argument as above, \[(\Sigma_{A,A})_{1i}^{-1}>0\quad\text{for all $i=2,\ldots,m-2$}.\] This shows that every CIS ordering of our constructed distribution must have \(m-1\) as the penultimate index. If \(m=4\), we are done. If \(m\geq 5\), take \(A\setminus\{m-1\}\) as the new \(A\) and proceed as above. In this way we show that for this distribution \(\sigma\) is a CIS ordering only if \(\sigma(k)=k\) for \(k=3,\ldots,m\). There are qualitative properties of CIS distributions that contrast with \(\text{MTP}_{2}\) distributions. It is known ([11, Proposition 3.2]) that if \(X\) is \(\text{MTP}_{2}\) distributed then any marginal distribution of \(X\) also satisfies the \(\text{MTP}_{2}\) property. The next example shows that a Gaussian CIS random vector does not satisfy this property. **Example 3.4**.: Let \(X=(X_{1},X_{2},X_{3},X_{4})\) be a centered Gaussian with covariance \[\Sigma=\begin{bmatrix}\frac{1}{4}&\frac{1}{4}&\frac{3}{4}&\frac{29}{16}\\ \frac{1}{4}&\frac{5}{4}&\frac{7}{4}&\frac{77}{16}\\ \frac{3}{4}&\frac{7}{4}&\frac{17}{4}&\frac{167}{16}\\ \frac{29}{16}&\frac{77}{16}&\frac{167}{16}&\frac{1737}{64}\end{bmatrix}.\] It can be checked directly that \((X_{1},X_{2},X_{3},X_{4})\) is CIS. However, the inverse of \(\Sigma_{134}\) is \[\begin{bmatrix}\frac{205}{24}&-\frac{23}{12}&\frac{1}{6}\\ -\frac{23}{12}&\frac{14}{3}&-\frac{5}{3}\\ \frac{1}{6}&-\frac{5}{3}&\frac{2}{3}\end{bmatrix}.\] Since the last row of this matrix has a positive off-diagonal entry, we conclude that \((X_{1},X_{3},X_{4})\) is not CIS. 
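Because the sign test of Corollary 2.3 is a finite computation, CIS orderings can be found by brute force over permutations. A NumPy sketch applied to the covariance of Example 3.2, confirming that no reordering is CIS even though the vector is positively associated:

```python
from itertools import permutations
import numpy as np

def is_cis_cov(Sigma, tol=1e-9):
    """Check the CIS property (in the given coordinate order) via Corollary 2.3."""
    K = np.linalg.inv(Sigma)
    J = np.eye(len(K))[::-1]
    U = J @ np.linalg.cholesky(J @ K @ J) @ J   # K = U U^T, U upper triangular
    return bool((U - np.diag(np.diag(U)) <= tol).all())

def cis_orderings(Sigma):
    m = len(Sigma)
    return [p for p in permutations(range(m)) if is_cis_cov(Sigma[np.ix_(p, p)])]

# Example 3.2: all covariances positive, yet no ordering is CIS.
Sigma = np.array([[5, 4, 7, 8],
                  [4, 9, 8, 7],
                  [7, 8, 11, 11],
                  [8, 7, 11, 14]], dtype=float)
print(cis_orderings(Sigma))  # → []
```

This matches the argument in the text: every row of \(K\) has a positive off-diagonal entry, so no coordinate can serve as the last element of a CIS ordering.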
However, the following result, which follows immediately from the definition, shows that certain conditionals and marginals preserve the CIS property. **Proposition 3.5**.: _Let \(X\) be a CIS distributed centered Gaussian. Then the following distributional properties hold:_ 1. _The conditional distribution of_ \((X_{k+1},\ldots,X_{m})\) _given_ \((X_{1},\ldots,X_{k})\) _is CIS._ 2. _The vector_ \((X_{1},\ldots,X_{k})\) _is CIS for every_ \(1\leq k\leq m\)_._ Theorem 2.5 shows a relation between CIS orderings and positive DAG dependence. The following example describes a complication that can arise. Namely, we consider a DAG whose possible topological orderings are \(1(23)4\) and for which the union of all CIS orderings over all Markov equivalent DAGs is \((123)4\), yet _for special_ distributions in the model it is possible that \(4321\) is a valid CIS ordering. **Example 3.6**.: Consider the DAG model defined by the upper triangular matrix \[U=\begin{bmatrix}1&-a&-b&0\\ 0&1&0&-c\\ 0&0&1&-d\\ 0&0&0&1\end{bmatrix}\] with \(a,b,c,d>0\). The Markov equivalence class of this DAG consists of the following three DAGs. The corresponding precision matrix is given by \[K=UU^{\top}=\begin{bmatrix}1+a^{2}+b^{2}&-a&-b&0\\ -a&1+c^{2}&cd&-c\\ -b&cd&1+d^{2}&-d\\ 0&-c&-d&1\end{bmatrix}\] Since \(K_{23}=cd>0\), it is clear that any CIS ordering has \(1\) or \(4\) as the last element, and \(1\) is actually possible. Since \((X_{1},X_{2},X_{3})\) is always CI, we conclude that all orderings \((123)4\) are CIS (recall that the notation \((123)\) stands for any permutation of these three). By direct computation we see that for \(\Sigma=K^{-1}\), 
\[(\Sigma_{234})^{-1}\;=\;\begin{bmatrix}\frac{(a^{2}+1)c^{2}+b^{2}(c^{2}+1)+1}{ 1+a^{2}+b^{2}}&\frac{a^{2}cd-ab+(b^{2}+1)cd}{1+a^{2}+b^{2}}&-c\\ \frac{a^{2}cd-ab+(b^{2}+1)cd}{1+a^{2}+b^{2}}&\frac{a^{2}(d^{2}+1)+(b^{2}+1)d^ {2}+1}{1+a^{2}+b^{2}}&-d\\ -c&-d&1\end{bmatrix}.\] In particular, if \(a,b\) are sufficiently large and \(c,d\) are sufficiently small such that \(a^{2}cd-ab+(b^{2}+1)cd\leq 0\), we also have that \((X_{2},X_{3},X_{4})\) is CI. In this case, each ordering \((234)1\) is also a CIS ordering. Thus the CIS orderings are of the form \(1(23)4\), \(4(23)1\) and \((23)(14)\). Note that only the DAG with topological ordering \(1(23)4\) is in the Markov equivalence class, while the DAGs with topological ordering \(4(23)1\) and \((23)(14)\) are not. This shows that the set of all CIS orderings contains only limited information about the underlying DAG. The situation is not always that complicated. In Proposition 4.5 we will show that there is a large class of interesting hierarchical networks for which the possible CIS orderings are exactly the topological orderings. Another property worth noting is that the space of \(\operatorname{MTP}_{2}\) Gaussian distributions amounts to the M-matrix constraint on \(K\), which is convex in \(K\). We can show that the space of \(K\) for which \(X\) is a CIS Gaussian is not convex in \(K\). 
**Example 3.7**.: Let \(K_{1}\) and \(K_{2}\) be the precision matrices \[K_{1}=\begin{bmatrix}1&-1&-1&-4\\ 0&1&0&0\\ 0&0&1&-3\\ 0&0&0&1\end{bmatrix}\begin{bmatrix}1&-1&-1&-4\\ 0&1&0&0\\ 0&0&1&-3\\ 0&0&0&1\end{bmatrix}^{\top}=\begin{bmatrix}19&-1&11&-4\\ -1&1&0&0\\ 11&0&10&-3\\ -4&0&-3&1\end{bmatrix},\] \[K_{2}=\begin{bmatrix}1&-1&0&0\\ 0&1&-1&0\\ 0&0&1&-1\\ 0&0&0&1\end{bmatrix}\begin{bmatrix}1&-1&0&0\\ 0&1&-1&0\\ 0&0&1&-1\\ 0&0&0&1\end{bmatrix}^{\top}=\begin{bmatrix}2&-1&0&0\\ -1&2&-1&0\\ 0&-1&2&-1\\ 0&0&-1&1\end{bmatrix}.\] Clearly, by Corollary 2.3, we must have that Gaussians with precision matrices \(K_{1}\) or \(K_{2}\) are CIS ordered. However, consider the sum \[K=K_{1}+K_{2}=\begin{bmatrix}21&-2&11&-4\\ -2&3&-1&0\\ 11&-1&12&-4\\ -4&0&-4&2\end{bmatrix}.\] Then if \(\Sigma=K^{-1}\), by the Schur Complement formula \[\left(\Sigma_{[3],[3]}\right)^{-1}=K_{[3],[3]}-\frac{K_{[3],4}K_{4,[3]}}{K_{4,4}}= \begin{bmatrix}13&-2&3\\ -2&3&-1\\ 3&-1&4\end{bmatrix},\] which means that if \(X\) has covariance \(\Sigma\), then \(\mathbb{E}[X_{3}|X_{1},X_{2}]\) is not a non-decreasing function of \(X_{1}\) and \(X_{2}\), since the third row of the above matrix has a positive off-diagonal entry. The same is true if we replace \(K\) by \(\frac{K}{2}\), which implies that the convex combination of \(K_{1}\) and \(K_{2}\) does not stay in the class of precision matrices of CIS Gaussians. The above example shows that even if we assume that a Gaussian is CIS ordered under a known permutation \(\sigma\), we do not have convexity in the space of precision matrices \(K\) that parameterize this model. In Section 5, we show that there is a broad class of models, including the Gaussians that are CIS under a known permutation \(\sigma\), for which computing the MLE is a convex optimization problem. 
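The non-convexity in Example 3.7 can be verified numerically with the sign test of Corollary 2.3, applied to \(K_{1}\), \(K_{2}\), and their midpoint:

```python
import numpy as np

def is_cis_precision(K, tol=1e-9):
    """Corollary 2.3: factor K = U U^T with U upper triangular and check signs."""
    J = np.eye(len(K))[::-1]
    U = J @ np.linalg.cholesky(J @ K @ J) @ J
    return bool((U - np.diag(np.diag(U)) <= tol).all())

U1 = np.array([[1, -1, -1, -4], [0, 1, 0, 0], [0, 0, 1, -3], [0, 0, 0, 1]], float)
U2 = np.array([[1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [0, 0, 0, 1]], float)
K1, K2 = U1 @ U1.T, U2 @ U2.T
mid = (K1 + K2) / 2   # a convex combination of two CIS precision matrices

print(is_cis_precision(K1), is_cis_precision(K2), is_cis_precision(mid))
# → True True False
```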
While the results of Section 5 may be familiar to many practitioners, we did not find a direct reference and thought it worthwhile to specify these models explicitly. Most importantly for us however, is that computationally, once a CIS ordering is known, calculating the MLE for a CIS Gaussian can be done with similar efficiency as restricting to the \(\operatorname{MTP}_{2}\) class. ## 4. Markov equivalence for CIS models One of the most fundamental limitations of Bayesian networks is that two different DAGs may represent the same conditional independence model, in which case we say that they are Markov equivalent. We recall the following classical result [31] that uses the concept of a skeleton, which for a DAG \(G\) is the undirected graph obtained from \(G\) by forgetting the directions of all arrows. **Theorem 4.1**.: _Two DAGs \(G\) and \(H\) are Markov equivalent if and only if they have the same skeleton and v-structures. For a Gaussian \(X\), we have \(\mathbb{M}(G)=\mathbb{M}(H)\) if and only if \(G\) and \(H\) are Markov equivalent._ If \(G\) is a DAG then by \([G]\) we denote the set of all DAGs Markov equivalent to \(G\). There is another useful characterization of Markov equivalence proposed in [4], which describes elementary operations on DAGs that transform a DAG into a Markov equivalent DAG in such a way that \(G\) can be transformed to any graph in \([G]\) by a sequence of these elementary operations. This elementary operation is given by flipping the arrow \(i\to j\) whenever the pair \(i,j\) satisfies \(\{i\}\cup\operatorname{Pa}(i)=\operatorname{Pa}(j)\). More specifically, given a DAG \(G\) over \(V\), we say that an arrow \(i\to j\) in \(G\) is _covered_ if the graph \(H\) obtained from \(G\) by replacing \(i\to j\) with \(j\to i\) is also acyclic, and also \(\operatorname{Pa}(i)=\operatorname{Pa}(j)\setminus\{i\}\). 
The result of [4] states: **Theorem 4.2**.: _We have \(H\in[G]\) if and only if \(H\) can be obtained from \(G\) by a sequence of flips of covered arrows._ We say that \(H\) is CIS-Markov equivalent to \(G\) if \(\mathbb{M}_{+}(G)=\mathbb{M}_{+}(H)\). We offer a similar characterization of CIS-Markov equivalence. An edge \(i\to j\) is _trivially covered_ if \(\operatorname{Pa}(i)\cup\{i\}=\operatorname{Pa}(j)=\{i\}\). **Theorem 4.3**.: _For a Gaussian \(X\), we have \(\mathbb{M}_{+}(G)=\mathbb{M}_{+}(H)\) if and only if \(H\) can be obtained from \(G\) by a sequence of flips of trivially covered arrows._ Note that when \(G\) is a complete DAG (with all possible \(\binom{m}{2}\) edges), then \(\mathbb{M}_{+}(G)=\mathrm{CIS}_{\sigma}\), where \(\sigma\) is the unique topological ordering of this DAG. This shows that Theorem 4.3 generalizes Proposition 3.3. Proof.: For the "if" part, it is enough to consider the case when \(H\) is obtained from \(G\) by a single flip of a trivially covered pair \(i\to j\). By Theorem 4.2, \(\mathbb{M}(G)=\mathbb{M}(H)\). Since \(i\) has no parents and it is the only parent of \(j\), there is a permutation \(\sigma\) with \(\sigma(1)=i\), \(\sigma(2)=j\) that forms a topological ordering of \(G\). Moreover, the permutation \(\sigma^{\prime}\) obtained from \(\sigma\) by swapping \(i\) and \(j\) is a topological ordering of \(H\). By Proposition 3.3, \(\mathrm{CIS}_{\sigma}=\mathrm{CIS}_{\sigma^{\prime}}\). By Theorem 2.5, \(\mathbb{M}_{+}(G)=\mathbb{M}_{+}(H)\). To show the "only if" part, first note that if \(\mathbb{M}_{+}(G)=\mathbb{M}_{+}(H)\) then necessarily \(\mathbb{M}(G)=\mathbb{M}(H)\). Via a contrapositive argument, suppose \(G\) and \(H\) are Markov equivalent but \(H\) is not obtained from \(G\) by a sequence of trivially covered edge flips. This means that there exists an arrow \(i\to j\) in \(G\) and \(k\) with \(k\in\mathrm{Pa}_{G}(i)\cap\mathrm{Pa}_{G}(j)\) such that \(i\gets j\) in \(H\). 
To get a contradiction, it is enough to construct a distribution in \(\mathbb{M}_{+}(G)\) such that in every CIS ordering \(j\) must come after \(i\). Let \(\sigma\) be a topological ordering of \(G\). Without loss of generality assume \(\sigma=\mathrm{id}\) and let \(i,j,k\) be as above. In particular, \(1\leq k<i<j\leq m\). Let \(U\) be upper triangular such that \(U_{ll}=1\) for all \(l=1,\ldots,m\), \(U_{ij}=-1\), \(U_{kj}=-1\) and \(U\) is zero otherwise. Note that by the above, this \(U\) corresponds to a distribution in \(\mathbb{M}_{+}(G)\) where some of the edges in \(G\) have zero coefficients. We will show that for any \(A\) containing \(\{i,j,k\}\), neither \(i\) nor \(k\) can be the last one in a CIS ordering. To show this, note that \(U_{A,A^{c}}=0\), \(U_{A^{c},A}=0\), and \(U_{A^{c},A^{c}}=I\). It follows that \[(\Sigma_{A,A})^{-1}\;=\;U_{A,A}U_{A,A}^{\top}\] and so \((\Sigma_{A,A})_{ik}^{-1}=1>0\) showing that neither \(i\) nor \(k\) can be the last element in any CIS ordering of \(X_{A}\). Using this recursively, starting from \(A=\{1,\ldots,m\}\), we conclude that \(j\) must appear after both \(i\) and \(k\) in every CIS ordering. In Gaussian Bayesian networks the crucial observation is that if the Markov equivalence classes \([G]\) and \([H]\) are not equal then the Gaussian models \(\mathbb{M}(G)\) and \(\mathbb{M}(H)\) intersect at a measure zero set (we think about the geometry of these models as embedded in the space of covariance matrices). This means that for almost all ground-truth models we can learn the equivalence classes from the data. The analogous statement is unfortunately not true for CIS-Markov equivalence classes. For example, if \(m=3\), the following two graphs lie in the same Markov equivalence class but in different CIS-Markov equivalence classes The intersection of \(\mathbb{M}_{+}(G)\) and \(\mathbb{M}_{+}(H)\) for these two graphs has full dimension and it contains the set of all inverse \(M\)-matrices. 
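The last claim can be illustrated numerically: if \(K\) is an M-matrix then the distribution is \(\mathrm{MTP}_{2}\), hence CIS under every permutation, and so it lies in \(\mathbb{M}_{+}\) of both graphs. A NumPy sketch with an arbitrary tridiagonal M-matrix (the matrix is an illustration, not from the text):

```python
from itertools import permutations
import numpy as np

def is_cis_precision(K, tol=1e-9):
    J = np.eye(len(K))[::-1]
    U = J @ np.linalg.cholesky(J @ K @ J) @ J   # K = U U^T, U upper triangular
    return bool((U - np.diag(np.diag(U)) <= tol).all())

# An M-matrix precision: positive definite with non-positive off-diagonals.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# Every permutation of the coordinates is a CIS ordering (MTP2 = CI).
ok = all(is_cis_precision(K[np.ix_(p, p)]) for p in permutations(range(3)))
print(ok)  # → True
```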
**Lemma 4.4**.: _Suppose the distribution of \(X\) lies in \(\mathbb{M}_{+}(G)\). Suppose that there exists \(k\) such that \(i\to k\gets j\) is a v-structure in \(G\) and suppose that \(K_{ij}\neq 0\) (this holds generically). Then no CIS ordering of \(X\) finishes with \(i\) or \(j\)._ Proof.: Without loss of generality assume that the trivial ordering \(1<2<\ldots<m\) is a topological ordering of \(G\). In this case the matrix \(\Lambda\) in (1.1) is lower triangular. Then let \(K=UU^{\top}\), with \(U\) upper triangular, be the precision matrix of \(X\). By Remark 1.1 we have \(U=(I-\Lambda)^{\top}D^{-1/2}\) and so for \(u\neq v\) we have \(U_{uv}\leq 0\) if \(u\to v\) in \(G\) and \(U_{uv}=0\) otherwise. We have \[K_{ij}\;=\;\sum_{l}U_{il}U_{jl}\;=\;\sum_{l\in\operatorname{Ch}(i)\cap \operatorname{Ch}(j)}U_{il}U_{jl}.\] This expresses \(K_{ij}\) as a sum of non-negative terms. Since this sum is non-zero by assumption, it must be strictly positive, and so neither \(i\) nor \(j\) can be the last ones in a CIS ordering. As a corollary we get the following result. **Proposition 4.5**.: _Consider a DAG \(G\) consisting of \(k\) layers \(V_{1},\ldots,V_{k}\) such that:_ 1. \(V=V_{1}\sqcup\cdots\sqcup V_{k}\)_,_ 2. _only arrows from_ \(V_{i}\) _to_ \(V_{i+1}\) _are allowed in_ \(G\)_,_ 3. \(|V_{i}|\geq 2\) _for all_ \(i=1,\ldots,k-1\) _(only the last layer may contain one node),_ 4. _every_ \(v\in V_{i}\) _for_ \(i=1,\ldots,k-1\) _is contained in a v-structure (as a parent)._ _If the distribution of \(X\) lies in \(\mathbb{M}_{+}(G)\) and \(K_{ij}\neq 0\) unless \(i,j\in V_{k}\) (this holds generically), then the only possible CIS orderings of \(X\) are \((V_{1})\cdots(V_{k})\), where the notation \((V_{i})\) means that the vertices in \(V_{i}\) can be ordered in an arbitrary way. In particular, any possible CIS ordering of \(X\) is a topological ordering of \(G\)._ ## 5. Maximum likelihood estimation in \(\mathbb{M}_{+}(G)\)
In this section we show that maximum likelihood estimation in the model \(\mathbb{M}_{+}(G)\) for a given \(G\) is straightforward and amounts to solving a convex optimization problem. Consider a Gaussian vector \(X\sim\mathcal{N}_{m}(0,\Sigma)\) and let \(K=\Sigma^{-1}\). Since \(K\) is positive definite, by [10, Corollary 3.5.6] we have that there exists a unique upper triangular matrix \(U\) whose diagonal entries are all \(1\), and a diagonal matrix \(D\) with strictly positive diagonal entries such that \(K=UDU^{\top}\). Moreover, the relation between \(K\) and the pair \((D,U)\) is one-to-one. Equivalently, we obtain the stochastic representation \[X=\Lambda X+\varepsilon, \tag{5.1}\] where \(\Lambda=(I_{m}-U)^{\top}\) is lower triangular with zero diagonal, and \(\varepsilon\sim\mathcal{N}_{m}(0,D^{-1})\). **Definition 5.1**.: Let \(\mathcal{L}_{i}\subseteq\mathbb{R}^{i}\) be sets for each \(i=1,\ldots,m-1\). A _Cholesky factor model_ consists of all Gaussian distributions such that the inverse-covariance matrix satisfies \(K=UDU^{\top}\) with \(D\) a diagonal matrix and \(U=(I_{m}-\Lambda)^{\top}\) with \[\Lambda_{i}:=(\Lambda_{i,1},\ldots,\Lambda_{i,i-1})\in\mathcal{L}_{i-1}\quad \text{for}\quad i=2,\ldots,m.\] **Remark 5.2**.: In the case that \(\mathcal{L}_{i}=[0,\infty)^{i}\), we recover the CIS model on \(X\). If \(\mathcal{L}_{i}=\mathbb{R}^{i}\) we simply have the space of all covariance matrices. **Remark 5.3**.: If the DAG \(G\) is known, we can always assume without loss of generality that the id permutation is a topological ordering of \(G\). In other words, the matrix \(\Lambda\) in Remark 1.1 is lower triangular. Thus \(\mathbb{M}(G)\) is a Cholesky factor model with the support of \(\Lambda_{i}\) equal to the parent set \(\operatorname{Pa}(i)\). The model \(\mathbb{M}_{+}(G)\) is obtained by additional non-negativity constraints. 
If we want to make the constraints on \(\Lambda\) explicit we denote the model by \(F(\mathcal{L}_{1},\ldots,\mathcal{L}_{m-1})\). Maximum likelihood estimation for such models links to the problem of least squares estimation in linear regression as follows. Given \(n\) independent observations of \(X\) from this model, we stack them in the matrix \(\mathbf{X}\in\mathbb{R}^{n\times m}\). We denote by \(\mathbf{x}_{1},\ldots,\mathbf{x}_{m}\) the columns of \(\mathbf{X}\) and by \(\mathbf{Z}_{i}:=\mathbf{X}_{[n],[i-1]}\) the \(\mathbb{R}^{n\times(i-1)}\) matrix obtained from the first \(i-1\) columns of \(\mathbf{X}\).

**Theorem 5.4**.: _If \((\hat{D},\hat{\Lambda})\) is the maximum likelihood estimator for a Cholesky factor model \(F(\mathcal{L}_{1},\ldots,\mathcal{L}_{m-1})\), then each \(\hat{\Lambda}_{i}\) for \(i=2,\ldots,m\) is given as a minimizer of the quadratic problem_ \[\mbox{minimize }\frac{1}{n}\|\mathbf{x}_{i}-\mathbf{Z}_{i}\Lambda_{i}^{\top}\|^{2}\qquad\mbox{subject to }\Lambda_{i}\in\mathcal{L}_{i-1}\subseteq\mathbb{R}^{i-1}.\] _Moreover,_ \[\hat{D}_{ii}\;=\;n\|\mathbf{x}_{i}-\mathbf{Z}_{i}\hat{\Lambda}_{i}^{\top}\|^{-2}\] _for all \(i=1,\ldots,m\)._

Proof.: We have \(K=(I_{m}-\Lambda)^{\top}D(I_{m}-\Lambda)\), where \(\Lambda\) is strictly lower triangular with \(\Lambda_{i}\in\mathcal{L}_{i-1}\) for \(i=2,\ldots,m\). As before, set \(U=(I_{m}-\Lambda)^{\top}\). Since \(\det(U)=1\) and \(D\) is diagonal, the corresponding log-likelihood function \(\log\det(K)-\frac{1}{n}\mbox{tr}(\mathbf{X}^{\top}\mathbf{X}K)\) can be written as \[\sum_{i=1}^{m}\log D_{ii}-\frac{1}{n}\sum_{i=1}^{m}D_{ii}((\mathbf{X}U)^{\top}\mathbf{X}U)_{ii}.
\tag{5.2}\] The expression \(((\mathbf{X}U)^{\top}\mathbf{X}U)_{ii}\) is simply the squared norm of the \(i\)-th column of \(\mathbf{X}U\), which is equal to \[\mathbf{x}_{i}-\sum_{j=1}^{i-1}\Lambda_{ij}\mathbf{x}_{j}\;=\;\mathbf{x}_{i}-\mathbf{Z}_{i}\Lambda_{i}^{\top}.\] Thus, maximizing (5.2) is equivalent to minimizing \[-\sum_{i=1}^{m}\log D_{ii}+\sum_{i=1}^{m}\frac{D_{ii}}{n}\|\mathbf{x}_{i}-\mathbf{Z}_{i}\Lambda_{i}^{\top}\|^{2}. \tag{5.3}\] The \(i\)-th squared term in (5.3) depends only on \(\Lambda_{i}\). This means that minimizing (5.3) in a Cholesky factor model can be done term by term. Once the optimizer for \(\Lambda\) is found, \(D\) can be handled in a straightforward way.

Theorem 5.4 also gives a simple condition on the existence of the MLE.

**Proposition 5.5**.: _The MLE in Theorem 5.4 exists if and only if each set \(\mathcal{L}_{i}\) is closed and for every \(i=1,\ldots,m-1\),_ \[\mathbf{x}_{i}\;\notin\;\{\mathbf{Z}_{i}\Lambda_{i}^{\top}:\;\Lambda_{i}\in\mathcal{L}_{i}\}.\] _In particular, if there are subsets \(A_{i}\subseteq[i-1]\) such that \(\mathcal{L}_{i}=\operatorname{span}\{\mathbf{x}_{j}:j\in A_{i}\}\), then the MLE exists with probability 1 as long as \(n\geq\max_{i}|A_{i}|\)._

It is now straightforward to compute the optimal value of the log-likelihood.

**Corollary 5.6**.: _If the MLE exists, then the optimal value of the log-likelihood is_ \[-\sum_{i=1}^{m}\log\left(\frac{1}{n}\|\mathbf{x}_{i}-\mathbf{Z}_{i}\hat{\Lambda}_{i}^{\top}\|^{2}\right)-m.\]

Recall that in the linear regression problem, given a vector \(\mathbf{x}_{i}\in\mathbb{R}^{n}\) and the matrix \(\mathbf{Z}_{i}\in\mathbb{R}^{n\times(i-1)}\), the least squares estimator is given precisely as the minimizer of \(\|\mathbf{x}_{i}-\mathbf{Z}_{i}\theta\|^{2}\) over \(\theta\in\mathbb{R}^{i-1}\). If this minimizer is unique, it is given by the well-known formula \[\hat{\theta}=(\mathbf{Z}_{i}^{\top}\mathbf{Z}_{i})^{-1}\mathbf{Z}_{i}^{\top}\mathbf{x}_{i}.
\tag{5.4}\] If \(\mathbf{Z}_{i}\) does not have full column rank, the optimum is obtained over an affine space. Replacing the inverse above with the pseudo-inverse gives the solution with the smallest norm. The following result follows almost immediately.

**Proposition 5.7**.: _If the constraints \(\mathcal{L}_{1},\ldots,\mathcal{L}_{m-1}\) are all linear, then the MLE \((\hat{D},\hat{\Lambda})\) in the Cholesky factor model \(F(\mathcal{L}_{1},\ldots,\mathcal{L}_{m-1})\) can be given in closed form._

## 6. Finding a CIS ordering

Having established that the MLE can be easily computed in \(\mathbb{M}_{+}(G)\) for any fixed \(G\), we now explore the harder problem of estimating \(\Sigma\) knowing only that the distribution of \(X\) lies in \(\mathbb{M}_{+}(G)\) for _some_ \(G\). By Theorem 2.5, \(\mathbb{M}_{+}(G)=\mathbb{M}(G)\cap\mathrm{CIS}_{\sigma}\) for any topological ordering \(\sigma\) of \(G\). Thus, if we know a topological ordering of \(G\), the problem can be solved by running regressions in the order given by \(\sigma\), adding a LASSO penalty to learn a sparse representation. Moreover, by the same fact, we do not need to search over all orderings but can restrict ourselves to CIS orderings of the underlying distribution. In this section, we show that these can be efficiently recovered.

### Recovering a CIS ordering in the population case

In the following, we provide an algorithm that, given \(K\), recovers a CIS ordering whenever such an ordering exists. The algorithm is based on the following lemma.

**Lemma 6.1**.: _Suppose \(X\) is a CIS \(m\)-variate Gaussian. Suppose there exists \(k\in[m-1]\) such that \(K_{k,\setminus k}\leq 0\).
Then \((X_{1},\ldots,X_{k-1},X_{k+1},\ldots,X_{m},X_{k})\) is CIS._

Proof.: Recalling Lemma 2.2, we have that \(X\) being a centered CIS ordered Gaussian is equivalent to \[\mathbb{E}[X_{j}|X_{[j-1]}]=-\frac{\left(\Sigma_{[j],[j]}\right)_{j,[j-1]}^{-1}}{\left(\Sigma_{[j],[j]}\right)_{j,j}^{-1}}X_{[j-1]}\] being a non-decreasing function in \((X_{1},\ldots,X_{j-1})\) for all \(j\in[m]\). We only need to check that the functions \[\mathbb{E}[X_{j}|X_{[j-1]\setminus\{k\}}]\qquad j=k+1,\ldots,m, \tag{6.1}\] \[\mathbb{E}[X_{k}|X_{\setminus k}] \tag{6.2}\] are non-decreasing in their arguments, which, at least for the second function, follows automatically by the assumption \(K_{k,\setminus k}\leq 0\). We now proceed by an induction argument, starting from \(j=m\) and working downward, to prove that the functions (6.1) are all non-decreasing in their arguments. We have \[\mathbb{E}\big{[}X_{m}|X_{[m-1]\setminus\{k\}}\big{]}=-\frac{\left(\Sigma_{\setminus k,\setminus k}\right)_{m,[m-1]\setminus\{k\}}^{-1}}{\left(\Sigma_{\setminus k,\setminus k}\right)_{m,m}^{-1}}X_{[m-1]\setminus\{k\}}.\] The Schur complement formula gives the following two identities: \[\left(\Sigma_{\setminus k,\setminus k}\right)^{-1} =K_{\setminus k,\setminus k}-\frac{K_{\setminus k,k}K_{k,\setminus k}}{K_{k,k}},\] \[\left(\Sigma_{[m-1],[m-1]}\right)^{-1} =K_{[m-1],[m-1]}-\frac{K_{[m-1],m}K_{m,[m-1]}}{K_{m,m}}.\] By our assumption \(K_{k,\setminus k}\leq 0\), we have that \(\frac{K_{\setminus k,k}K_{k,\setminus k}}{K_{k,k}}\) is a non-negative rank-one matrix. Similarly, since \(X_{m}\) is the last in a CIS ordering of \(X\), we have \(K_{m,\setminus m}\leq 0\) and \(\frac{K_{[m-1],m}K_{m,[m-1]}}{K_{m,m}}\) is a non-negative matrix.
It follows then that \[\left(\Sigma_{\setminus k,\setminus k}\right)_{m,[m-1]\setminus\left\{k\right\}}^{-1} \leq 0, \tag{6.3}\] \[\left(\Sigma_{[m-1],[m-1]}\right)_{k,[m-1]\setminus\left\{k\right\}}^{-1} \leq 0.\] The first inequality in (6.3) implies that the function in equation (6.1) for \(j=m\) is non-decreasing in its arguments. Our induction hypothesis is that for some \(j^{*}\geq k+1\) we have shown for every \(j=j^{*}+1,\ldots,m\), that \[\left(\Sigma_{[j]\setminus\left\{k\right\},[j]\setminus\left\{k\right\}}\right)_{j,[j-1]\setminus\left\{k\right\}}^{-1} \leq 0, \tag{6.4}\] \[\left(\Sigma_{[j-1],[j-1]}\right)_{k,[j-1]\setminus\left\{k\right\}}^{-1} \leq 0.\] We will now prove that both of these inequalities are true for \(j=j^{*}\) as well. By the second inequality in (6.4) (setting \(j=j^{*}+1\)), and the fact that \(X\) is CIS ordered, we have that \[\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{k,[j^{*}]\setminus\left\{k\right\}}^{-1} \leq 0, \tag{6.5}\] \[\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{j^{*},[j^{*}-1]}^{-1} \leq 0.\] The Schur complement formula implies the following two equalities \[\left(\Sigma_{[j^{*}]\setminus\left\{k\right\},[j^{*}]\setminus\left\{k\right\}}\right)^{-1}=\\ \left(\Sigma_{[j^{*}],[j^{*}]}\right)_{[j^{*}]\setminus\left\{k\right\},[j^{*}]\setminus\left\{k\right\}}^{-1}-\frac{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{[j^{*}]\setminus\left\{k\right\},k}^{-1}\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{k,[j^{*}]\setminus\left\{k\right\}}^{-1}}{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{k,k}^{-1}},\] and \[\left(\Sigma_{[j^{*}-1],[j^{*}-1]}\right)^{-1}=\\ \left(\Sigma_{[j^{*}],[j^{*}]}\right)_{[j^{*}-1],[j^{*}-1]}^{-1}-\frac{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{[j^{*}-1],j^{*}}^{-1}\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{j^{*},[j^{*}-1]}^{-1}}{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{j^{*},j^{*}}^{-1}}.\] By the inequality of equation (6.5), it follows that \[\frac{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{[j^{*}]\setminus\left\{k\right\},k}^{-1}\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{k,[j^{*}]\setminus\left\{k\right\}}^{-1}}{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{k,k}^{-1}} \geq 0,\] \[\frac{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{[j^{*}-1],j^{*}}^{-1}\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{j^{*},[j^{*}-1]}^{-1}}{\left(\Sigma_{[j^{*}],[j^{*}]}\right)_{j^{*},j^{*}}^{-1}} \geq 0,\] from which the inequalities in equation (6.4) are proven for \(j=j^{*}\). Given that the first inequality in equation (6.4) is equivalent to the function in (6.1) being non-decreasing in its arguments, we have proven the required result.

Lemma 6.1 allows us to find a row of the precision matrix \(K\) whose off-diagonal entries are non-positive and assume its index is the last element of a CIS ordering. This is the basis of our algorithm.

**Theorem 6.2**.: _Suppose \(X\) is a centered multivariate Gaussian for which there exists a CIS ordering. Then the following procedure produces a permutation \(\sigma\) such that \(X_{\sigma}\) is CIS._

1. _Initialize_ \(O^{(1)}=[m]\) _as the "leftover" set,_ \(K^{(1)}=K\) _as the current precision matrix, and_ \(C^{(1)}=\{j:K_{j,\setminus j}\leq 0\}\) _as the current candidate set._
2. _For_ \(i=1,\ldots,m\)_, take an element_ \(k\in C^{(i)}\) _and set_ \(\sigma(m-i+1)=k\)_. Compute_ \[O^{(i+1)} =O^{(i)}\backslash\{k\},\] \[K^{(i+1)} =\Bigl{(}\Sigma_{O^{(i+1)},O^{(i+1)}}\Bigr{)}^{-1},\] \[C^{(i+1)} =\{j:K^{(i+1)}_{j,\setminus j}\leq 0\}.\]

Proof.: We must simply show that at each step \(C^{(i)}\) is not empty, since at each step the condition \(K^{(i)}_{j,\setminus j}\leq 0\) is sufficient, by Lemma 2.2, for the conditional expectation of \(X_{j}\) given the variables \(X_{v}\) with \(v\in O^{(i)}\backslash\{j\}\) to be a non-decreasing function of its arguments. This follows by the existence of a CIS ordering along with Lemma 6.1. Indeed, if a CIS ordering exists, then \(C^{(1)}\neq\emptyset\), in which case an _arbitrary_ element of \(C^{(1)}\) can be taken to be \(\sigma(m)\).
A simple induction argument shows that this is true for each \(C^{(i)}\). We illustrate this algorithm with an example.

**Example 6.3**.: Consider the four dimensional Gaussian distribution with covariance and precision matrix \[\Sigma\;=\;\begin{bmatrix}1&0.75&0.50&0.14\\ 0.75&1&0.81&0.50\\ 0.50&0.81&1&0.75\\ 0.14&0.50&0.75&1\end{bmatrix},\qquad K=\begin{bmatrix}2.77&-2.51&0&0.88\\ -2.51&5.49&-3.2&0\\ 0&-3.2&5.49&-2.51\\ 0.88&0&-2.51&2.77\end{bmatrix}.\] The matrix \(K\) has two rows with only non-positive off-diagonal entries. We choose \(i_{1}=2\) and consider the marginal distribution over \(\{1,3,4\}\). The matrix \[(\Sigma_{134})^{-1}=\begin{bmatrix}1.61&-1.47&0.88\\ -1.47&3.62&-2.51\\ 0.88&-2.51&2.77\end{bmatrix}\] has one row with non-positive off-diagonal entries, so we take \(i_{2}=3\). This shows that both \((1,4,3,2)\) and \((4,1,3,2)\) are CIS orderings. Beginning with \(i_{1}=3\) shows that also \((1,4,2,3)\) and \((4,1,2,3)\) are CIS orderings, and there are no other CIS orderings of \(X\).

### Noisy CIS Recovery

In the noisy setting, we are given a matrix of observations \(\mathbf{X}\in\mathbb{R}^{n\times m}\) whose rows are i.i.d. and distributed according to \(\mathcal{N}_{m}(0,\Sigma)\), where \(\Sigma\) is such that the distribution admits a CIS ordering. As in Section 5, we let \(\mathbf{x}_{t}\) refer to the \(t\)-th column of \(\mathbf{X}\). For any \(i\in\{1,\ldots,m\}\) and any non-empty \(A\subseteq\{1,\ldots,m\}\setminus\{i\}\), denote by \(\beta^{(i,A)}\) the vector of coefficients of the linear regression of \(\mathbf{x}_{i}\) on \(\mathbf{X}_{[n],A}\). Then we have that \[\beta^{(i,A)}\;=\;\Sigma_{i,A}\Sigma_{A,A}^{-1}.\] When \(\beta^{(i,A)}\geq 0\), we say \(\mathbf{x}_{i}\) can be positively regressed on \(\mathbf{X}_{[n],A}\).
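The population procedure of Theorem 6.2 is straightforward to implement. The sketch below (ours, not the paper's code) uses a small numerical tolerance in the sign checks to absorb the rounding of the entries in Example 6.3, and recovers one of the four CIS orderings listed there.

```python
import numpy as np

def find_cis_ordering(Sigma, tol=0.1):
    """Greedy CIS-ordering recovery of Theorem 6.2 (population version)."""
    m = Sigma.shape[0]
    order = [None] * m
    leftover = list(range(m))                 # the leftover set O^(i)
    for i in range(m):
        K = np.linalg.inv(Sigma[np.ix_(leftover, leftover)])
        # candidate set C^(i): rows with (numerically) non-positive off-diagonals
        cand = [j for j in range(len(leftover))
                if all(K[j, l] <= tol for l in range(len(leftover)) if l != j)]
        if not cand:
            return None                       # no CIS ordering exists
        k = cand[0]
        order[m - i - 1] = leftover[k] + 1    # 1-based labels as in Example 6.3
        leftover.pop(k)
    return tuple(order)

Sigma = np.array([[1.00, 0.75, 0.50, 0.14],
                  [0.75, 1.00, 0.81, 0.50],
                  [0.50, 0.81, 1.00, 0.75],
                  [0.14, 0.50, 0.75, 1.00]])
order = find_cis_ordering(Sigma)
assert order in {(1, 4, 3, 2), (4, 1, 3, 2), (1, 4, 2, 3), (4, 1, 2, 3)}
```

Because the candidate set is scanned in index order, this run removes node 2 first and then node 3, reproducing the first branch of Example 6.3.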
For \(\alpha>0\), an estimator \(\hat{\beta}^{(i,A)}\) (we suppress the \(n\)-dependence for ease) of \(\beta^{(i,A)}\) is said to be \(n^{\alpha}\)-consistent if \[n^{\alpha}(\hat{\beta}^{(i,A)}-\beta^{(i,A)})\to 0\] in probability as \(n\to\infty\). Our noisy CIS recovery algorithm, presented in the theorem below, will mimic the method of the previous section by inspecting the entries of \(\hat{\beta}^{(i,A)}\) at each step, assuming a bound on the entries of \(\beta^{(i,A)}\).

**Theorem 6.4**.: _Assume that there exists a CIS ordering of the distribution \(\mathcal{N}_{m}(0,\Sigma)\) and there exists an \(\epsilon^{*}=\epsilon^{*}(\Sigma)>0\) such that for any \(i\in V\) and \(A\subseteq V\setminus\{i\}\), either \(\beta^{(i,A)}\) is a non-negative vector or \(\min_{j}\beta^{(i,A)}_{j}<-2\epsilon^{*}\). For an \(\alpha>0\), let \(\hat{\beta}^{(i,A)}\) be \(n^{\alpha}\)-consistent estimators of \(\beta^{(i,A)}\) and let \(\epsilon_{n}\) be a sequence such that \(\epsilon_{n}\to 0\) while \(n^{\alpha}\epsilon_{n}\to\infty\)._

_We define an estimator \(\hat{\sigma}\) through the following algorithm:_

1. _Initialize_ \(\mathcal{A}_{1}=[m]\) _to be the set of active variables and set_ \(t=1\)_._
2. _If_ \(t\leq m-2\)_, for each_ \(i\in\mathcal{A}_{t}\) _we compute_ \(\hat{\beta}^{(i,\mathcal{A}_{t}\setminus\{i\})}\)_. At the first instance of_ \(i^{*}\) _such that all entries of_ \(\hat{\beta}^{(i^{*},\mathcal{A}_{t}\setminus\{i^{*}\})}\) _are greater than_ \(-\epsilon_{n}\) _we define_ \[\hat{\sigma}(m-t+1)=i^{*}.\] _Define_ \(\mathcal{A}_{t+1}=\mathcal{A}_{t}\backslash\{i^{*}\}\)_, increment_ \(t\)_, and repeat this step until_ \(t=m-1\)_._ Footnote 3: In practice we could score different potential choices to further improve the power of the method.
3.
_When_ \(t=m-1\) _it must be that_ \(|\mathcal{A}_{t}|=2\)_, in which case, we take_ \(\hat{\sigma}(1)\) _and_ \(\hat{\sigma}(2)\) _to be arbitrary._

_As \(n\to\infty\), \(\hat{\sigma}\) will be a valid CIS ordering of \(\mathcal{N}_{m}(0,\Sigma)\) with probability going to 1._

Proof.: Depending on the sample size \(n\), consider the event \[\mathcal{E}^{(n)}\;=\;\bigcap_{i,A}\mathcal{E}^{(n)}_{i,A},\qquad\mathcal{E}^{(n)}_{i,A}:=\{\|\hat{\beta}^{(i,A)}-\beta^{(i,A)}\|_{\infty}<\epsilon_{n}\}. \tag{6.6}\] By \(n^{\alpha}\)-consistency of the estimators and the fact that \(n^{\alpha}\epsilon_{n}\to\infty\), \(\mathbb{P}(\mathcal{E}^{(n)})\to 1\) as \(n\to\infty\). Note that, by the definition of \(\epsilon^{*}\) and the fact that \(\epsilon_{n}<\epsilon^{*}\) for \(n\) sufficiently large, conditionally on \(\mathcal{E}^{(n)}\) the event that all entries of \(\hat{\beta}^{(i,A)}\) are greater than \(-\epsilon_{n}\) is equivalent to the fact that \(\mathbf{x}_{i}\) can be positively regressed on \(\mathbf{X}_{[n],A}\). More specifically, let \(R_{t}\), for \(t=1,\ldots,m-3\), be the event that at the \(t\)-th step of the algorithm: Footnote 4: Indeed if \(A_{n}\), \(B_{n}\) are sequences of events such that \(\mathbb{P}(A_{n})\to 1\) and \(\mathbb{P}(B_{n})\to 1\) then \(\mathbb{P}(A_{n}\cap B_{n})\to 1\) simply because \(\mathbb{P}((A_{n}\cap B_{n})^{c})=\mathbb{P}(A_{n}^{c}\cup B_{n}^{c})\leq\mathbb{P}(A_{n}^{c})+\mathbb{P}(B_{n}^{c})\to 0\).

(a) \(\mathbf{X}_{[n],\mathcal{A}_{t}}\) admits a CIS ordering,
(b) the algorithm correctly finds an \(\mathbf{x}_{i}\) that can be positively regressed on \(\mathbf{X}_{[n],\mathcal{A}_{t}\setminus\{i\}}\).

Note that (a) is automatically satisfied if \(t=1\). Similarly, for an arbitrary \(t\), (a) holds automatically conditionally on \(R_{1}\cap\ldots\cap R_{t-1}\), by Lemma 6.1.
The probability of recovering a CIS ordering is \(\mathbb{P}(R_{1}\cap\ldots\cap R_{m-3})\) and we have \[\mathbb{P}(R_{1}\cap\ldots\cap R_{m-3})\;=\;\mathbb{P}(R_{1})\mathbb{P}(R_{2}|R_{1})\cdots\mathbb{P}(R_{m-3}|R_{1}\cap\ldots\cap R_{m-4}).\] Denote \(\mathbb{P}^{(n)}(\cdot)=\mathbb{P}(\cdot|\mathcal{E}^{(n)})\). We also have \[\mathbb{P}^{(n)}(R_{1}\cap\ldots\cap R_{m-3})\;=\;\mathbb{P}^{(n)}(R_{1})\mathbb{P}^{(n)}(R_{2}|R_{1})\cdots\mathbb{P}^{(n)}(R_{m-3}|R_{1}\cap\ldots\cap R_{m-4}).\] As we said earlier, after conditioning on \(\mathcal{E}^{(n)}\), \(\mathbf{x}_{i}\) can be positively regressed on \(\mathbf{X}_{[n],A}\) if and only if all coefficients of \(\hat{\beta}^{(i,A)}\) are greater than \(-\epsilon_{n}\). This means that \[\mathbb{P}^{(n)}(R_{1})\;=\;\mathbb{P}^{(n)}(R_{2}|R_{1})\;=\;\cdots\;=\;\mathbb{P}^{(n)}(R_{m-3}|R_{1}\cap\ldots\cap R_{m-4})\;=\;1,\] implying that \(\mathbb{P}^{(n)}(R_{1}\cap\ldots\cap R_{m-3})=1\). This implies that \(\mathbb{P}(R_{1}\cap\ldots\cap R_{m-3})\to 1\) as \(n\to\infty\), which completes the proof. Footnote 5: Indeed, if \(A_{n},B_{n}\) are sequences of events such that \(\mathbb{P}(A_{n}|B_{n})=1\) and \(\mathbb{P}(B_{n})\to 1\) then \(\mathbb{P}(A_{n}\cap B_{n})=\mathbb{P}(B_{n})\to 1\). Since \(\mathbb{P}(A_{n})\geq\mathbb{P}(A_{n}\cap B_{n})\), then also \(\mathbb{P}(A_{n})\to 1\).

**Remark 6.5**.: The event \(\mathcal{E}^{(n)}\) in (6.6) may have small probability for finite sample sizes. However, for the proof it is not necessary to define \(\mathcal{E}^{(n)}\) as an intersection over all pairs \((i,A)\). For example, it is sufficient to include only the pairs \((i,A)\) such that \(\mathbf{X}_{[n],A\cup\{i\}}\) admits a CIS ordering but \(\mathbf{x}_{i}\) cannot be positively regressed on \(\mathbf{X}_{[n],A}\) (if \(\Sigma\) is an inverse M-matrix then there are no such pairs).
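A minimal sketch of the estimator in Theorem 6.4 (ours, not the paper's code) uses ordinary least squares for \(\hat{\beta}^{(i,A)}\) and a fixed threshold standing in for \(\epsilon_{n}\); the sample size, seed, and threshold below are illustrative choices. With data simulated from the covariance of Example 6.3, the recovered ordering should be one of the four CIS orderings listed there.

```python
import numpy as np

def noisy_cis_ordering(X, eps):
    """Sketch of Theorem 6.4: peel off the first variable whose OLS
    coefficients on the remaining active variables all exceed -eps."""
    n, m = X.shape
    active = list(range(m))
    sigma_rev = []                        # the ordering, last element first
    while len(active) > 2:
        for i in active:
            A = [j for j in active if j != i]
            beta = np.linalg.lstsq(X[:, A], X[:, i], rcond=None)[0]
            if beta.min() > -eps:
                sigma_rev.append(i)
                active.remove(i)
                break
        else:
            return None                   # no candidate found at this step
    return tuple(active) + tuple(reversed(sigma_rev))

rng = np.random.default_rng(2)
Sigma = np.array([[1.00, 0.75, 0.50, 0.14],
                  [0.75, 1.00, 0.81, 0.50],
                  [0.50, 0.81, 1.00, 0.75],
                  [0.14, 0.50, 0.75, 1.00]])
L = np.linalg.cholesky(Sigma)
X = rng.normal(size=(20000, 4)) @ L.T     # n i.i.d. samples from N(0, Sigma)

order = noisy_cis_ordering(X, eps=0.05)   # 0-based labels
assert order is not None
assert tuple(v + 1 for v in order) in {(1, 4, 3, 2), (4, 1, 3, 2),
                                       (1, 4, 2, 3), (4, 1, 2, 3)}
```

At this sample size the sampling error of the regression coefficients is far smaller than the gap between the threshold and the truly negative population coefficients, so the sign checks behave as in the population algorithm.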
2309.09942
OptiRoute: A Heuristic-assisted Deep Reinforcement Learning Framework for UAV-UGV Collaborative Route Planning
Unmanned aerial vehicles (UAVs) are capable of surveying expansive areas, but their operational range is constrained by limited battery capacity. The deployment of mobile recharging stations using unmanned ground vehicles (UGVs) significantly extends the endurance and effectiveness of UAVs. However, optimizing the routes of both UAVs and UGVs, known as the UAV-UGV cooperative routing problem, poses substantial challenges, particularly with respect to the selection of recharging locations. Here in this paper, we leverage reinforcement learning (RL) for the purpose of identifying optimal recharging locations while employing constraint programming to determine cooperative routes for the UAV and UGV. Our proposed framework is then benchmarked against a baseline solution that employs Genetic Algorithms (GA) to select rendezvous points. Our findings reveal that RL surpasses GA in terms of reducing overall mission time, minimizing UAV-UGV idle time, and mitigating energy consumption for both the UAV and UGV. These results underscore the efficacy of incorporating heuristics to assist RL, a method we refer to as heuristics-assisted RL, in generating high-quality solutions for intricate routing problems.
Md Safwan Mondal, Subramanian Ramasamy, Pranav Bhounsule
2023-09-18T17:01:17Z
http://arxiv.org/abs/2309.09942v1
OptiRoute: A Heuristic-assisted Deep Reinforcement Learning Framework for UAV-UGV Collaborative Route Planning

###### Abstract

Unmanned aerial vehicles (UAVs) are capable of surveying expansive areas, but their operational range is constrained by limited battery capacity. The deployment of mobile recharging stations using unmanned ground vehicles (UGVs) significantly extends the endurance and effectiveness of UAVs. However, optimizing the routes of both UAVs and UGVs, known as the UAV-UGV cooperative routing problem, poses substantial challenges, particularly with respect to the selection of recharging locations. Here in this paper, we leverage reinforcement learning (RL) for the purpose of identifying optimal recharging locations while employing constraint programming to determine cooperative routes for the UAV and UGV. Our proposed framework is then benchmarked against a baseline solution that employs Genetic Algorithms (GA) to select rendezvous points. Our findings reveal that RL surpasses GA in terms of reducing overall mission time, minimizing UAV-UGV idle time, and mitigating energy consumption for both the UAV and UGV. These results underscore the efficacy of incorporating heuristics to assist RL, a method we refer to as heuristics-assisted RL, in generating high-quality solutions for intricate routing problems.

## I Introduction

Unmanned aerial vehicles (UAVs) have rapidly evolved as an emerging technology, finding profound applications in both military and civilian sectors [1, 2, 3, 4]. They are critical for real-time sensing in scenarios like traffic monitoring [5], border security [6], disaster management [7] and forest fire surveillance [8], all of which demand continuous data transmission. However, a major limitation of UAVs in such persistent applications is their restricted operational time due to limited battery capacity.
This challenge can be mitigated by leveraging the synergies of multi-agent systems that combine UAVs with unmanned ground vehicles (UGVs). This collaborative approach can enhance the overall task efficacy and prolong the UAVs' operational longevity [9, 10]. In this work, we address a _cooperative routing problem_ involving a heterogeneous team of a UAV and a UGV that must visit a set of designated task nodes in the quickest possible time (see Fig. 1). The UAV operates under a limited battery life constraint and is supported by the UGV, which acts as a mobile recharging depot. The UGV is also restricted in terms of speed and can only travel on the road network. The recharging process of the UAV by the UGV is not instantaneous, which adds an additional layer of complexity to the problem. The challenge, therefore, is to meticulously strategize the routes of both entities, ensuring that the UAV can effectively get recharged from the UGV to execute the mission optimally. This necessitates a comprehensive cooperative routing framework to optimally plan UAV and UGV routes while synchronizing their rendezvous recharging instances.

### _Related works_

Extensive research has been carried out on different variants of the UAV-UGV cooperative routing problem [11, 12, 13, 14]. Traditional methodologies, such as Mixed Integer Linear Programming (MILP) [15], graph matching [16], and multi-echelon formulation [12], have previously been employed to tackle this type of combinatorial optimization (CO) problem. In most studies, the cooperative routing is modelled as a variant of the vehicle routing problem, which makes it an NP-hard problem. These methods often fail to scale well with the number of task points and lack practical applicability due to the intricate combinatorial nature of the problem. Hence, in recent years, learning based methodologies have gained popularity as a promising way to solve combinatorial optimization problems. Paul et al.
[17, 18] utilized an encoder-decoder based graph neural network to model the multi-robot task allocation problem (MRTA) and showed that learning based algorithms can produce quality solutions in significantly less computational time compared to non-learning based baselines. Li et al. [19] employed a deep reinforcement learning (DRL) method that leveraged attention mechanisms to tackle the heterogeneous capacitated vehicle routing problem. Their methodology outperformed traditional heuristic solutions in both quality and computational efficiency. Wu et al. [20] investigated the truck and drone based last mile delivery problem using a reinforcement learning (RL) paradigm. They split the optimization problem into customer clustering and routing components and applied an encoder-decoder framework with RL to resolve it. Fan et al. [21] employed a multi-head attention mechanism coupled with a DRL policy to design routes for an energy-limited UAV; however, they assumed a fixed set of recharging locations for the UAV. In our prior research [22, 23], we introduced a two-tiered optimization structure that streamlined the collaborative routing challenge. This framework determined the UGV route in the primary layer and the UAV route in the secondary layer using heuristic methods. We further demonstrated that optimizing the recharging instances between the UAV and UGV can enhance the quality of the cooperative routing solution [24]. In this paper, we couple heuristic strategy-based route planners with a learning framework that produces an optimal strategy for UAV-UGV recharging rendezvous planning to achieve the best cooperative route.

Fig. 1: Illustration of a fuel constrained UAV-UGV cooperative routing problem studied in this paper. DRL policy specifies rendezvous points, and heuristic planners outline optimal UAV-UGV paths to cover all task points.
Given the complexity of the task, we have harnessed the strengths of both heuristic and learning methods to navigate the scenario, and we have compared our solution with a non-learning baseline, showing substantial improvement in solution quality. To this end, we present the following novel contributions:

1. We have formulated the fuel constrained UAV-UGV cooperative routing problem as a Markov Decision Process (MDP) so that the recharging rendezvous problem can be learned using a policy gradient RL approach. To the best of our knowledge, this is one of the first works to address the fuel constrained UAV-UGV cooperative routing problem with RL.
2. We have adopted an encoder-decoder based transformer architecture within the learning policy network that can help in more efficient data extraction from the scenario to select the appropriate refuel stop locations.
3. We have integrated the advantages of heuristics with the RL framework by modelling the energy-constrained vehicle routing problem and solving it through constraint programming heuristics to get the UAV-UGV routes based on the recharging stop locations chosen by the RL policy.
4. The comparison with a non-learning baseline method shows the effectiveness and reliability of the proposed algorithm.

## II Problem Formulation

### _Problem overview_

Given a set of task points \(\mathcal{M}=\{m_{0},m_{1},...,m_{n}\}\) situated on a road network, the objective is to strategize a collaborative operation between a UAV \(A\) and a UGV \(G\) to optimally visit all the task points in the scenario. The vehicles have heterogeneous characteristics, as the UAV boasts a higher speed \(v^{a}\) but operates within a restricted battery capacity \(F^{a}\). It can, however, recharge its battery from the UGV, which travels at a slower pace \(v^{g}\) exclusively on the road network. For recharging, the UAV docks onto the UGV's landing platform and, once rejuvenated, resumes its flight by taking off from the UGV.
The duration of this recharging is contingent upon the UAV's battery level prior to docking. The UAV's energy consumption profile, \(\mathcal{P}^{a}=0.0461{v^{a}}^{3}-0.5834{v^{a}}^{2}-1.8761{v^{a}}+229.6\), which varies nonlinearly with its velocity \(v^{a}\), adds another layer of complexity to the problem, as hovering consumes more energy than transiting and is hence discouraged during the task. Considering these multifaceted challenges, which largely revolve around the recharging interplay between the UAV and UGV, our objective is to develop an efficient recharging strategy (_RendezvousPlanner_) which determines **where** and **when** the UAV can rendezvous with the UGV for recharging. Based on this rendezvous, routes for both vehicles \(\tau^{a}\in A,\tau^{g}\in G\) are constructed by the _UAVPlanner_ and _UGVPlanner_ to ensure comprehensive coverage of all task points in the shortest time span, while conserving energy and minimizing idle periods (UAV hovering, UGV waiting) during recharging sessions.

### _MDP formulation_

The rendezvous planning of this cooperative routing can be formulated as an MDP, since the refueling stop locations are chosen sequentially and their selection influences the state of the entire task scenario (see Fig. 2). The components of the MDP can be defined as a tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T}\rangle\), as follows:

1) **State Space \((\mathcal{S})\):** In our MDP at any decision making step, the state of the environment \(s_{t}\in\mathcal{S}\) is defined as \(s_{t}=(\{x_{t}^{a},y_{t}^{a}\}\in A,\;\{x_{t}^{g},y_{t}^{g}\}\in G,\;\{x_{t}^{i},y_{t}^{i},d_{t}^{i}\}\in\mathcal{M})\), where \(\{x_{t}^{a},y_{t}^{a}\}\) and \(\{x_{t}^{g},y_{t}^{g}\}\) represent the current coordinates of the UAV and UGV respectively, and \(\{x_{t}^{i},y_{t}^{i},d_{t}^{i}\}\) highlights the coordinates of the task points and their visitation status; \(d_{t}^{i}\) will be 1 if a task point is already visited or 0 if not.
2) **Action Space \((\mathcal{A})\):** Selection of a task node as the refuel stop \(m^{r}\) is considered to be the action \(a_{t}\in\mathcal{A}\). Specifically, \(a_{t}\) is defined as the index of the task node that is selected as the rendezvous location at that decision making step, i.e., \(a_{t}=i\) if the chosen rendezvous node is \(m_{i}\in\mathcal{M}\).

3) **Reward (\(\mathcal{R}\)):** In keeping with the objective of the cooperative routing problem to reduce the total task completion time, we set the reward \(r_{t}\in\mathcal{R}\) as the negative value of the UAV route time \(t_{route}^{a}\). This UAV route time is obtained by solving a vehicle routing problem (VRP) based on the rendezvous location \(m_{t}^{r}\) chosen in the action \(a_{t}\). The recharging time \(t_{r}^{a}\) is calculated based on the UAV's fuel consumption and subtracted from the reward function. Our reward function also deducts \(t_{idle}^{a}\) and \(t_{idle}^{g}\) to discourage the UAV's hovering and the UGV's waiting periods during rendezvous. For effective reward shaping, we provide a significant positive bonus, \(D\), upon task completion and impose a substantial penalty, \(P\), for task failure.

Fig. 2: MDP for heuristics-assisted UAV-UGV cooperative routing

So, the reward \(r_{t}\) can be written as: \(r_{t}=r(s_{t},a_{t})=-t_{route}^{a}-t_{r}^{a}-t_{idle}^{a}-t_{idle}^{g}+D-P\). Here, when the task is successfully completed, \(D=10000,P=0\). In the case of task failure, \(D=0,P=1000\). This reward structure promotes successful task completion and aids in achieving faster learning convergence.

4) **Transition \((\mathcal{T})\):** The next state \(s_{t+1}\) in the transition function \(\mathcal{T}(s_{t+1}|s_{t},a_{t})\) depends on the UAV route \(\tau_{t}^{a}\) and UGV route \(\tau_{t}^{g}\) obtained from the route planners _UAVPlanner_ and _UGVPlanner_ based on the selected action \(a_{t}\).
This is where we integrate routing heuristics (see Section IV) with the RL framework for solving the cooperative route. In the transition, the positions of the UAV and UGV are updated and the visitation state of the mission points is updated based on the UAV and UGV routes as follows: \(s_{t+1}=(\{x_{t+1}^{a},y_{t+1}^{a}\}\in A,\;\{x_{t+1}^{g},y_{t+1}^{g}\}\in G,\;\{x_{t+1}^{i},y_{t+1}^{i},d_{t+1}^{i}\}\in\mathcal{M})\), where \(d_{t+1}^{i}=1\) for previously unvisited mission points \(m_{i}\) covered by \(\tau_{t}^{a}\) or \(\tau_{t}^{g}\), while the remaining unvisited points keep \(d_{t+1}^{i}=0\). Since stochasticity is not assumed, the transition is considered to be deterministic.

## III Reinforcement learning framework

In our study, we have implemented an encoder-decoder based transformer architecture with an RL algorithm to parameterize and learn the optimal policy \(\pi_{\theta}\) with a trainable parameter \(\theta\). Starting from the initial state \(s_{0}\), the policy \(\pi_{\theta}=Pr(a_{t}|s_{t},\theta_{t}=\theta)\) takes action \(a_{t}\) to select appropriate refuel stops based on the scenario state \(s_{t}\) until the terminal state \(s_{\tau}\) is reached. The final solution of the policy network consists of a series of refuel stops chosen sequentially, which can be represented as a joint probability distribution as follows: \[Pr(s_{\tau}|s_{0})=\prod_{t=0}^{\tau-1}\pi_{\theta}(a_{t}|s_{t})Pr(s_{t+1}|s_{t},a_{t}) \tag{1}\]

### _Encoder-Decoder Transformer architecture_

Our policy network consists of two main components: an encoder and a decoder. The encoder translates the task point coordinates into a more nuanced, high-dimensional embedding for better feature extraction. On the other hand, the decoder in the policy network \(\pi_{\theta}\) discerns suitable rendezvous locations based on the encoder embedding and the contextual information extracted from the scenario's current state. A detailed description of the policy architecture (see Fig.
3) is explained here:

#### Iii-A1 Encoder

We utilize a multi-head attention (MHA) mechanism [25, 26] in the encoder for a higher-dimensional representation of the raw features of the problem instance. The encoder takes the normalized 2D coordinates of the task points as input, \(X=(o_{i}=\{x_{i},y_{i}\},\forall\;m_{i}\in\mathcal{M})\), and linearly projects them to a higher-dimensional space of dimension \(d_{h}=32\) to create the input embedding \(h_{i}^{0}\). This input embedding is subsequently transformed using a single-layer multi-head self-attention mechanism, yielding \(h_{i}^{L}\) and allowing a more detailed understanding of the relationships among the task points. The attention layer computes three vectors, _query_, _key_ and _value_, from the input node embedding \(h_{i}^{0}\), with dimensions \(d_{q}=d_{k}=d_{v}=\frac{d_{h}}{M}\), where \(M=4\) is the number of attention heads. For each head \(j\in\{1,2,\dots,M\}\), the attention scores \(Z_{j}\) are calculated between _query_ \(q_{i,j}\) and _key_ \(k_{i,j}\), and the head outputs are concatenated back to the original embedding dimension of \(h_{i}^{0}\). The calculations are shown here: \[q_{i,j}=h_{i}^{0}W_{j}^{q},\;k_{i,j}=h_{i}^{0}W_{j}^{k},\;v_{i,j}=h_{i}^{0}W_{j}^{v} \tag{2}\] \[Z_{j}=\text{softmax}\left(\frac{q_{i,j}{k_{i,j}}^{T}}{\sqrt{d_{k}}}\right)v_{i,j} \tag{3}\] \[h_{i}^{L}=MHA(h_{i}^{0})=\text{Concat}(Z_{1},Z_{2},...,Z_{M}) \tag{4}\] Here, \(q_{i,j},k_{i,j}\) and \(v_{i,j}\) are the _query_, _key_ and _value_ in head \(j\), and \(W_{j}^{q}\in\mathbb{R}^{d_{h}\times d_{q}},W_{j}^{k}\in\mathbb{R}^{d_{h}\times d_{k}}\) and \(W_{j}^{v}\in\mathbb{R}^{d_{h}\times d_{v}}\) are the trainable parameter matrices of the attention layer. The self-attention output \(h_{i}^{L}\) has a residual skip connection around it, followed by Layer-Normalization (LN).
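As an illustrative sketch (not the trained network), Eqs. (2)-(4) can be reproduced in a few lines of numpy; the random matrices stand in for the learned weights \(W_{j}^{q}\), \(W_{j}^{k}\), \(W_{j}^{v}\):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(h0, M=4, seed=0):
    """Single-layer MHA over node embeddings h0 of shape (n, d_h):
    per-head q/k/v projections (Eq. 2), scaled dot-product attention
    (Eq. 3), and concatenation of the M heads (Eq. 4)."""
    n, d_h = h0.shape
    d_k = d_h // M                       # d_q = d_k = d_v = d_h / M
    rng = np.random.default_rng(seed)    # random stand-ins for trained weights
    heads = []
    for _ in range(M):
        Wq, Wk, Wv = (0.1 * rng.standard_normal((d_h, d_k)) for _ in range(3))
        q, k, v = h0 @ Wq, h0 @ Wk, h0 @ Wv
        heads.append(softmax(q @ k.T / np.sqrt(d_k)) @ v)
    return np.concatenate(heads, axis=1)  # back to shape (n, d_h)
```

Each head attends over all nodes, and the concatenation restores the embedding dimension \(d_h\), exactly as Eq. (4) requires.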
Then, for a richer interpretation of the feature space, the output \(h_{i}^{r}\) is refined through a Feed Forward (FF) layer followed by another residual skip connection and Layer-Normalization, yielding the encoder embedding \(h_{i}^{f}\) of the problem instance, which is leveraged in the decoder. \[h_{i}^{r}=LN(h_{i}^{0}+MHA(h_{i}^{0})), \tag{5}\] \[h_{i}^{f}=LN(h_{i}^{r}+FF(\text{ReLU}(h_{i}^{r}))) \tag{6}\]

#### Iii-A2 Decoder

At each decision-making step, the decoder determines the probability of selecting each available node as an action, based on the encoder's node embedding \(h_{i}^{f}\) and a **context** vector that summarizes the current scenario state. The decoder employs a multi-head self-attention layer to extract the state features as the context \(H_{t}^{con}\): the current state \(s_{t}\) is linearly transformed into \(H_{t}^{0}\) and then processed through the multi-head self-attention layer, followed by a residual skip connection and Layer-Normalization, as shown in the following equations: \[H_{t}^{0}=\text{linear}(s_{t}),\;H_{t}^{con}=LN(H_{t}^{0}+MHA(H_{t}^{0})) \tag{7}\] This context vector \(H_{t}^{con}\) is then treated as the _Query_, and the encoder node embedding \(h_{i}^{f}\) as the _Key/Value_ pair, for a multi-head heterogeneous cross-attention layer that generates \(H_{t}^{attn}\). \[H_{t}^{attn}=MHA(H_{t}^{con}W_{j}^{Q},\;h_{i}^{f}W_{j}^{K},\;h_{i}^{f}W_{j}^{V}) \tag{8}\] Following the original transformer network [25, 26], the cross-attention score \(H_{t}^{attn}\) has a skip connection and Layer-Normalization. This is succeeded by a Feed Forward (FF) layer and another Layer-Normalization, further refining it to \(H_{t}^{f}\). At the final step, the attention score is linearly projected to the action space dimension and then averaged across the nodes.
\[H_{t}^{r}=LN(H_{t}^{attn}+H_{t}^{con}) \tag{9}\] \[H_{t}^{f}=LN(H_{t}^{r}+FF(\text{ReLU}(H_{t}^{r}))) \tag{10}\] \[H_{t}^{out}=\text{mean}(\text{linear}(H_{t}^{f})) \tag{11}\] An important step in decoding is to mask the invalid action nodes by setting their attention scores to negative infinity. In our case, any task point that is out of reach of the UAV and UGV, given their current positions and the UAV's fuel capacity, is considered invalid at that action step. Finally, a softmax layer outputs the probability distribution over the action nodes, \(Pr(a_{t}|s_{t},\theta_{t}=\theta)=\text{softmax}(H_{t}^{out})\), indicating the likelihood of each node being selected as the refueling point. During decoding, we sample the action node according to this probability distribution.

### _Training method_

The training of our policy network leverages the REINFORCE policy gradient method, a seminal approach in reinforcement learning. At its core, REINFORCE optimizes the policy by directly adjusting its parameters in the direction that improves expected rewards. The training process (see Algorithm 1) begins by initializing the policy parameters, often randomly. We execute an episode under the current policy, producing state, action, and reward sequences. Each step's return is calculated, and the policy parameters are adjusted based on the gradient of the log probability of actions, scaled by these returns. This iterative process makes actions with higher returns more likely in future episodes. The method's strength lies in its model-free nature, eliminating the need for explicit knowledge of the environment's dynamics. Future enhancements may include baseline networks and algorithms such as Proximal Policy Optimization (PPO) or Actor-Critic methods for better convergence.
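The per-step discounted return \(\sum_{t'=t}^{T}\gamma^{t'-t}r_{t'}\) that scales the log-probability gradient can be sketched as a simple backward pass (illustrative only):

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = sum over t' >= t of gamma^(t'-t) * r_t', computed right-to-left."""
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    return returns[::-1]

# e.g. discounted_returns([1.0, 0.0, 2.0], gamma=0.5) -> [1.5, 1.0, 2.0]
```

The backward recursion avoids recomputing the geometric sum at every step, giving \(O(T)\) total work per trajectory.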
```
Require: Policy network \(\pi_{\theta}\), epochs \(E\), batches \(B\), episode length \(T\), learning rate \(\alpha\), discount factor \(\gamma\)
Ensure: Trained policy network \(\pi_{\theta^{\prime}}\)
 1: Initialize policy \(\pi_{\theta}\) parameters \(\theta\)
 2: for epoch in \(1\dots E\) do
 3:   Initialize trajectory set \(\mathcal{E}=[\ ]\)
 4:   for instance in \(1\dots B\) do
 5:     Initialize \(s_{0}\) and \(t\gets 0\)
 6:     Initialize trajectory \(\tau=[\ ]\)
 7:     while \(t<T\) do
 8:       Get action \(a_{t}\sim\pi_{\theta}(a_{t}|s_{t})\)
 9:       Obtain \(r_{t}\) and \(s_{t+1}\)
10:       Store \((s_{t},a_{t},r_{t})\) in \(\tau\); \(t\gets t+1\)
11:     end while
12:     Append \(\tau\) to \(\mathcal{E}\)
13:   end for
14:   Compute gradient:
      \(\nabla_{\theta}J=\frac{1}{B}\sum_{\tau\in\mathcal{E}}\sum_{(s_{t},a_{t},r_{t})\in\tau}\nabla_{\theta}\log\pi_{\theta}(a_{t}|s_{t})\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r_{t^{\prime}}\)
15:   Update policy parameters: \(\theta\leftarrow\theta+\alpha\nabla_{\theta}J\)
16: end for
```
**Algorithm 1** Policy network training using REINFORCE

## IV Routing Heuristics

Based on the refuel stops selected by the policy network (_RendezvousPlanner_), we model a constrained vehicle routing problem to obtain the UGV and UAV routes, which in turn yield the reward of our MDP in the learning framework. Since the UGV is slower than the UAV, we follow the 'UGV first, UAV second' heuristic approach to cooperative routing: the UGV route is constructed as the initial step, and the UAV route is then constructed based on the spatio-temporal components of the UGV route.

#### Iv-B1 UGV route planner

The _UGVPlanner_ generates the UGV path by connecting the UGV's current position to the chosen refuel stops. Since the UGV is confined to the road network, the waypoints between its present position and the refuel stop provide the spatial aspect of its route. Given the UGV's constant speed, its temporal journey can also be determined.
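As a minimal illustration of that last step (with hypothetical waypoints; the real planner works on the site's road network), the UGV's arrival time at each waypoint follows from cumulative distance at constant speed:

```python
import math

def ugv_arrival_times(waypoints, speed=4.5):
    """Cumulative arrival time (s) at each road waypoint for a
    constant-speed UGV; waypoints are (x, y) positions in meters."""
    t, times = 0.0, [0.0]
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        t += math.hypot(x1 - x0, y1 - y0) / speed  # leg length / speed
        times.append(t)
    return times

# e.g. ugv_arrival_times([(0, 0), (0, 9), (12, 9)], speed=3.0) -> [0.0, 3.0, 7.0]
```

The final entry of this list is the UGV's arrival time at the refuel stop, which is what the _UAVPlanner_ consumes as a time window.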
To align with the UAV for refueling, the UGV may wait at the refuel stop if necessary; however, waiting at the refuel stops is discouraged to reduce idle time. The arrival time of the UGV at the refuel stop is fed to the _UAVPlanner_, which models the UAV routing as an energy-constrained vehicle routing problem with time windows (E-VRPTW).

#### Iv-B2 UAV route planner

The E-VRPTW formulation can be depicted using graph theory. Here, the task points act as the vertices \(V=\{S,0,1,...,D\}\), and \(E=\{(i,j)\mid i,j\in V,i\neq j\}\) denotes the edges connecting vertices \(i\) and \(j\). We assign a non-negative arc cost \(t_{ij}\) (traversal time) between vertices \(i\) and \(j\), and a decision variable \(x_{ij}\) that indicates whether a vehicle transits from \(i\) to \(j\). The UAV commences its journey at starting point \(S\), visits the task points and, when needed, terminates its route to recharge from the UGV at refuel stop \(D\), which is bounded by a time window due to the UGV's slower pace.

Fig. 3: Architecture of the proposed policy network. The encoder has one self-attention layer to generate the input scenario embedding, while the decoder contains one self-attention layer to yield the **context** and one cross-attention layer to generate the attention score.

The objective (Eq. 12) is to minimize the cumulative travel duration while dropping the fewest task points, penalizing each dropped task point with a large penalty \(P\); here \(y_{i}\) is a binary variable set to 1 if a point is visited and 0 otherwise. We establish energy constraints in Eqs. 13-14 to ensure that the UAV never runs out of fuel, with fuel consumption following the UAV's power consumption profile during traversal (Eq. 15). The time-window condition in Eq. 16 makes the UAV visit the UGV only after its arrival at refuel stop \(D\). Eq.
17 states that the cumulative arrival time at the \(j^{th}\) node equals the sum of the cumulative time at node \(i\), \(t_{i}\), and the travel time \(t_{ij}\) between them. In both Eq. 15 and Eq. 17 we apply the Miller-Tucker-Zemlin (MTZ) formulation [27], adding large constants \(L_{1},L_{2}\) for sub-tour elimination in the UAV route. The other generic constraints for the UAV route, such as flow conservation, are detailed in our previous work [28]. Objective: \[\min\sum_{i}\sum_{j}t_{ij}x_{ij}+P\sum_{i}(1-y_{i})\quad\forall i,j\in V \tag{12}\] Energy constraints: \[f_{j}^{a}=F^{a},\quad j\in D \tag{13}\] \[0\leq f_{j}^{a}\leq F^{a},\quad\forall j\in V\setminus\{S,D\} \tag{14}\] \[f_{j}^{a}\leq f_{i}^{a}-\left(\mathcal{P}^{a}(v^{a})t_{ij}x_{ij}\right)+L_{1}\left(1-x_{ij}\right),\quad\forall i\in V,j\in V\setminus\{S,D\} \tag{15}\] Time window constraints: \[t_{j,start}\leq t_{j}\leq t_{j,end},\quad\forall j\in D \tag{16}\] \[t_{j}\geq t_{i}+\left(t_{ij}x_{ij}\right)-L_{2}\left(1-x_{ij}\right),\quad\forall i\in V,j\in V \tag{17}\] The UAV route is calculated by the _UAVPlanner_ by solving the above E-VRPTW formulation with Google OR-Tools' CP-SAT solver [29], which uses constraint programming (CP); we leverage tabu search heuristics in the solver to avoid locally optimal solutions. Distinct from earlier cooperative routing studies [9, 12], our approach leverages mobile recharging, turning recharging duration into opportunities for mission point visitation. Once the UAV-UGV route is determined, the UAV's refueling time is computed based on its in-flight fuel consumption. This refueling time is projected as a distance along the road network over which the UGV transports the UAV, effectively recharging it while concurrently visiting task points along the road.
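To make the constraints concrete, a tiny pure-Python checker (a sketch, not the CP-SAT model) can verify Eqs. (13)-(17) along one candidate visit order, assuming a constant fuel burn rate and a time window on the final refuel node:

```python
def route_feasible(route, t_travel, burn_rate, F_a, window):
    """Check a fixed visit order: start with a full tank (Eq. 13),
    keep fuel non-negative along every leg (Eqs. 14-15), accumulate
    arrival time (Eq. 17), and hit the refuel node's window (Eq. 16)."""
    fuel, t = F_a, 0.0
    for i, j in zip(route, route[1:]):
        dt = t_travel[(i, j)]
        fuel -= burn_rate * dt   # Eq. 15 with x_ij = 1
        t += dt                  # Eq. 17
        if fuel < 0:             # Eq. 14 violated
            return False
    t_start, t_end = window
    return t_start <= t <= t_end  # Eq. 16 at refuel stop D

# e.g. a two-leg route S -> 1 -> D with 5 s per leg
legs = {("S", "1"): 5.0, ("1", "D"): 5.0}
# route_feasible(["S", "1", "D"], legs, burn_rate=1.0, F_a=20.0, window=(8.0, 15.0)) -> True
```

The solver searches over the binary \(x_{ij}\); this checker only evaluates one instantiation of them, which is what the MTZ constraints guarantee to be a single connected tour.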
The end location of the recharging process becomes the next take-off point of the UAV. The route time, recharging time, UAV hovering time, and UGV waiting time are fed back to the MDP reward function, and we move on to the next cycle of the MDP.

## V Simulation Results

In this section, we simulate the fuel-constrained cooperative routing of a UAV-UGV team across a 20 km \(\times\) 20 km real-world site. Starting from a depot, the team aims to visit the task points shown as black dots in Fig. 5. The UAV and UGV move at \(v^{a}=10\) m/s and \(v^{g}=4.5\) m/s respectively, and the UAV has a fuel capacity of 287.7 kJ. The recharging instances between the UAV and UGV are determined by the _RendezvousPlanner_, and their routes are determined by the _UAVPlanner_ and _UGVPlanner_ respectively, as previously discussed. The goal is to accomplish the task in the minimum possible time while ensuring that the UAV and UGV spend minimal idle time hovering or waiting during their recharging rendezvous instances. We evaluate the following metrics: 1) the total task completion time, as the main metric; 2) the percentage of idle time (idle time / task completion time) spent by the UAV and UGV during rendezvous; and 3) the UAV and UGV energy consumption over the entire task period. Given the unique complexity and specificity of our challenge, there are no standard benchmarks available, nor could an exact solution be discerned. Hence we compare our RL-based results with results derived from a Genetic Algorithm (GA) based non-learning baseline. The details of the GA implementation for this cooperative routing can be found in our previous study [24]. All computational procedures were executed using a Python simulation on a 3.7 GHz Intel Core i9 processor, equipped with 32 GB RAM, operating on a 64-bit system. The RL training is conducted over 500 episodes with different learning rates \(lr=0.001,0.005,0.01\). The learning rate significantly influences the RL algorithm's convergence speed.
As illustrated in Fig. 4, a lower \(lr=0.001\) results in prolonged convergence, whereas an aggressive \(lr=0.01\) converges swiftly but tends to overfit, compromising policy exploration. The intermediate \(lr=0.005\) strikes a balance between exploration and exploitation, emerging as the most effective for deriving an optimal policy. Consequently, we adopt the \(\pi_{\theta}\) policy trained with \(lr=0.005\) for coordinating the UAV-UGV rendezvous in our task scenario. Although both the GA and DRL frameworks are able to produce routing strategies that cover the entire task scenario without dropping any task nodes, Fig. 5 shows the difference between the cooperative routes they obtain. DRL optimally determines the recharging rendezvous between the UAV and UGV, ensuring even-handed utilization of both in the routing process. In contrast to the GA approach, DRL's chosen refuel stops are further from previous rendezvous points. Consequently, the UGV travels longer to rendezvous locations and hence does not endure prolonged wait times at refuel stops for the UAV to land.

Fig. 4: Mean episode reward during training

Table II reveals that, with strategic rendezvous planning, DRL requires only 4 recharging instances to complete the task, while GA demands 5. More recharging events add extra recharging and UAV detour times, leading to a prolonged overall task completion time (see Table I). Owing to fewer detours and less hovering, the UAV's total energy consumption is 15% lower with the DRL method. The impact of the rendezvous policy is clearly visible in Table II: under the DRL rendezvous policy, the UAV and UGV spend less time idle, whether hovering or waiting. In particular, the UGV idle time is 3 times lower with the DRL method, which fulfills the objective of this work.
Hence, the DRL method is able to learn the given environment and discover an optimal recharging policy between the UAV and UGV, which ultimately benefits the overall cooperative route by reducing task completion time, vehicle idleness, and energy consumption.

## VI Conclusion & Future work

In this study, we introduced a novel framework that combines heuristic guidance with deep reinforcement learning to address the fuel-constrained UAV-UGV cooperative routing problem. We demonstrated the effectiveness of using an encoder-decoder transformer architecture for making rendezvous decisions, while a heuristic planner efficiently devised routes for the UAV and UGV to minimize total mission time. By harnessing the decision-making capabilities of reinforcement learning and the route generation of heuristics, our approach aptly navigated the complexities of the problem. With our DRL-based framework, we achieved a 23% reduction in task completion time compared to the non-learning GA baseline. Additionally, through strategic rendezvous planning, we significantly minimized UAV-UGV idle periods, leading to a considerable decrease in their energy consumption. In the future, we plan to incorporate batch processing in RL to enhance training, aiming for a more robust solution and allowing us to make a broader claim about the viability of our approach. Furthermore, we intend to adapt our framework for dynamic routing to account for unpredictable variations.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Rendezvous Metrics** & **DRL method** & **GA method** \\ \hline
No. of recharge stops & 4 & 5 \\
Recharge time (min) & 55.87 & 73.00 \\
UAV idle time (\%) & 1.68 & 2.15 \\
UGV idle time (\%) & 10.64 & 32.79 \\ \hline \hline
\end{tabular}
\end{table} TABLE II: Comparison of Rendezvous metrics

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Route Metrics** & **DRL method** & **GA method** \\ \hline
Total task completion time (min) & 143.00 & 186.00 \\
Mission points dropped & 0 & 0 \\
UAV travel time (min) & 84.72 & 99 \\
UAV hovering time (min) & 2.41 & 4 \\
UAV hovering energy cost (kJ) & 33.26 & 55.20 \\
UAV total energy cost (kJ) & 1039.73 & 1231.32 \\
UGV travel time (min) & 71.92 & 52.00 \\
UGV waiting time (min) & 15.22 & 61.00 \\
UGV waiting energy cost (kJ) & 325.30 & 1304.06 \\
UGV total energy cost (kJ) & 19347.13 & 19911.56 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Comparison of Routing metrics

Fig. 5: Cooperative routes of UAV-UGV at recharging instances A) DRL method B) GA method. The route animation can be found at [http://tiny.cc/x55bvz](http://tiny.cc/x55bvz)
2301.00836
Kannudi -- A Reference Editor for Kannada
Kannudi is a reference editor for Kannada based on OPOK! and OHOK! principles, and domain knowledge. It introduces a method of input for Kannada, called OHOK!, that is, Ottu Haku Ottu Kodu! (apply pressure and give ottu). This is especially suited for pressure sensitive input devices, though the current online implementation uses the regular mechanical keyboard. OHOK! has three possible modes, namely, sva-ottu (self-conjunct), kandante (as you see), and andante (as you say). It may be noted that kandante mode does not follow the phonetic order. However, this mode may work well for those who are inclined to visualize as they type rather than vocalizing the sounds. Kannudi also demonstrates how domain knowledge can be effectively used to potentially increase speed, accuracy, and user friendliness. For example, selection of a default vowel, automatic shunyification, and arkification. Also implemented are four types Deletes that are necessary for phono-syllabic languages like Kannada.
Vishweshwar V. Dixit
2022-12-24T01:40:56Z
http://arxiv.org/abs/2301.00836v1
# Kannudi - A Reference Editor for Kannada

###### Abstract

Kannudi is a reference editor for Kannada based on OPOK! and OHOK! principles, and domain knowledge. It introduces a method of input for Kannada, called OHOK!, that is, _Ottu Haku Ottu Kodu!_ (apply pressure and give ottu). This is especially suited for pressure sensitive input devices, though the current online implementation uses the regular mechanical keyboard. OHOK! has three possible modes, namely, sva-ottu (self-conjunct), _kandante_ (as you see), and _andante_ (as you say). It may be noted that _kandante_ mode does not follow the phonetic order. However, this mode may work well for those who are inclined to visualize as they type rather than vocalizing the sounds. Kannudi also demonstrates how domain knowledge can be effectively used to potentially increase speed, accuracy, and user friendliness. For example, selection of a default vowel, automatic _shunyification_, and _arkification_. Also implemented are four types of Deletes that are necessary for phono-syllabic languages like Kannada. Kannudi can be accessed at [https://kannadakali.com/kannudi/kannudi.html](https://kannadakali.com/kannudi/kannudi.html)

## 1 Introduction

Many tools are available for digital inputting of Kannada and other Indian language text, such as Input Method Editors (IME) and real-time transliteration tools, free and commercial, online as well as offline. Most of these are generic in the sense that they are designed to address all Indian languages. That makes sense, as the scripts for these languages, all descended from the same Brahmi script, share many common features such as being alpha-syllabic. However, there are subtle differences in the writing styles of these languages. These language specifics can be used to "optimize" the editors and IMEs and make them more efficient and user friendly. An early implementation of one such editor attempting to use the 'domain knowledge' was described by Dixit [1][2][3].
It was an ambitious effort, as it aimed for a universal framework. It identified notions such as _Shunyification_ and _Non-initial Vowel_, and hinted at the _a-rule_ for voice input. However, the implementation was limited to Kannada and limited in the scope of its features. This was a DOS-based editor, and no versions were released later on Windows or other platforms. Another notable DOS editor was developed in Visual Basic by Rangachar [4]. Described in this article is a reference implementation introducing a new method of input for _ottaksharas_ (conjuncts) while incorporating previous ideas in Chitragupta and Unived [1][2].

## 2 Input Methods

**Keyboarding,** being the norm, is a required method of input. A phonetic input method generally assigns a single key to a single phoneme, and the keys are typed in the order of pronunciation. A strict one phoneme - one key (OPOK!) mapping of keys to phonemes may not be desirable for the sake of convenience. Some may assign multiple combinations of 1-3 keys to a single phoneme, for convenience, to resemble common writing using Latin script. For example, one may assign _B_, _bh_, _Bh_, or _BH_ for the _mahaprana b_. A mapping of letters to (ASCII) keys was developed by Kannada Ganaka Parishattu (KGP) and is used in its Nudi editor [5]. This has been designated as the official standard by the Government of Karnataka [6]. A context-sensitive dynamic keyboarding has been described by Joshi _et al._ [7]. **Handwriting**, using a stylus on a tablet or mobile screen, is another method of input. The complexity of graphics processing makes it slow. Variations in individual writing styles also introduce errors. Correcting these errors as one goes takes time and interrupts the thought process of the user, which reduces speed. **Voice** input is a promising method. However, voice recognition is not perfect. This is especially problematic in Kannada, where regional and other variations in pronunciation abound.
Additionally, current implementations are mostly dictionary based. They suffer from the same difficulties across all Indian languages, namely, variations in pronunciation, dictionary limitations, and indetermination between writing as pronounced and the dictionary entries. Current voice input implementations may make it more difficult to type: one needs to keep looking constantly at the words being entered, select among the choices, or correct manually. A user seems to spend significant time 'correcting' the dictionary words and ultimately, being frustrated, ends up using or taking the help of a basic keyboard layout such as Inscript. Thus, though voice input is convenient, current implementations fail the user in both accuracy and speed.

## 3 Improvement Opportunities

Three main considerations in any input method are accuracy, speed, and convenience. An appropriate tradeoff among the three is also of concern. Elimination or minimization of corrections (backspace and deletes) and minimization of required keystrokes become important parameters. Certainly, there is a need and room to improve the existing methods in these regards. Current implementations aim to cater to all Indian languages, implementing a set of common minimum features. They become minimally useful, as most users are not interested in 15+ languages. Hence, an implementation must be extensible so that domain knowledge specific to each language and script can be incorporated. Mobile platforms present interesting opportunities for novel methods using soft keyboards, dynamic context, and new input mechanisms such as swipes.

## 4 Letter Frequencies

Whereas earlier studies have found 36% _moola aksharas_ (vowel \(a\) or _C+a_) and 14% conjuncts, using a sample of articles from Kannada Wikipedia [8] and Kannada Kali [9], we found the letter frequencies shown in Table 1.

## 5 Kannudi

Kannudi [10] is a reference editor introducing several innovative and experimental features.
Currently, an online implementation, invoked via a web browser, is available. Some features in this implementation require a keyboard. The Kannudi input method follows the phonetic order, i.e., phonemes are entered in the order of their pronunciation. The major principles in Kannudi are OPOK!, OHOK!, user friendliness, and prevention/minimization of errors, illegal combinations, and malformed letters.

\begin{table}
\begin{tabular}{|l|l|}
\hline _moola akshara_ & 42.4\% \\
\hline _Gunitakshara_ & 39.3\% \\
\hline _anusvara / sonne / sunya_ & 5.6\% \\
\hline _Ottu and end virama_ & 18.2\% \\
\hline _Sajati/Dvitva (self-conjunct)_ & 8.7\% \\
\hline _Dvitva post-vowel_ & 1.1\% \\
\hline _Vijati (non-self-conjunct)_ & 8.8\% \\
\hline _End-virama_ & 0.7\% \\
\hline
\end{tabular}
\end{table} Table 1: Syllable Frequencies

### OPOK! Principle

The first major principle in Kannudi is the implementation of _One Phoneme One Key_ (OPOK!). Here, keys are assigned to phonemes, not graphemes; further, a one-one correspondence exists between a key and a phoneme. Key assignments adhere to the standard specified by the Government of Karnataka.

### Default Vowel - _null_ or _a_

A pure phonetic method of input would be to type every vowel and consonant in the order pronounced. This assumes no vowel is inherently attached to a consonant in the alphabet. Thus, with OPOK! in force, a consonant key by itself yields either the pure consonant (_null_ default) or the consonant with the inherent vowel _a_ (_a_ default).
In case of pressure sensitive (mobile) devices, OHOK! is called _Ottu Haku Ottu Kodu!_ (apply pressure and give ottu), though the current online implementation uses the regular mechanical keyboard. OHOK! has three possible modes: _sva-ottu_ or _dvitva_ (self-conjunct), _kandante_ (as you see), and _andante_ (as you say).
### Key Savings

Table 3 shows the key savings, with _a_-default, for various conjunct syllable types for the three modes of OHOK!. For example, a _dvitva_ requiring 3 keys normally can be produced with only 1 key in _dvitva_ mode (SO). Similarly, Table 4 lists the key savings with _null_-default. These tables also show the keys saved per 1000 syllables in a typical document, based on the frequency data in Table 1. Thus, under _a_-default, key savings of 174 can be achieved in OHOK! _dvitva_ mode (SO). It may be noted that most key savings occur in _a_-default mode, while the differences among the three OHOK! modes remain insignificant. **Another scheme**: when a key is pressed and held, the next key becomes an _ottu_; e.g., press-holding \(k\) and then typing \(r\) will produce the conjunct _kra_ in _a_-default mode. This scheme has physical limitations due to the positions of keys being fixed on a keyboard. It becomes necessary to cross hands or fingers, or to quickly decide which hand to use for a key. This is contrary to the trained typist, who expects to blindly use the same finger at the same physical location for a given key. It can be physically impossible or totally confusing. Hence this scheme is not implemented in Kannudi.

## 6 Rules of Convenience

Apart from introducing the novel input method, Kannudi implements several user-friendly features that are simply a matter of convenience or that eliminate or reduce errors. A few such features are described here.

### Backspace \(\leftarrow\)BS

During the normal course of typing, a user may have typed a key in error or pressed an adjacent key.
When the mistake is realized, the normal action is to press the _backspace_ key (\(\leftarrow\)BS). Kannudi provides four levels of deletion:

1. Phoneme Delete: delete the phoneme immediately left of the cursor, assigned to \(\leftarrow\)_BS_.
2. Character Delete: delete the (unicode) character immediately left of the cursor, assigned to _alt-BS_.
3. Syllable Delete: delete the syllable immediately left of the cursor, assigned to _shift-BS_.
4. Word Delete: delete the word immediately left of the cursor, assigned to _ctrl-BS_.

### Shunyification

_Sonne_ or _śūnya_ is used in writing to represent an _anusvāra_ before a consonant as in \(\mathfrak{sof}\), though it is not incorrect to write \(\mathfrak{ssa}\). Thus, _sonne_ before a classified consonant is pronounced as the _anunāsika_ of the same class. The process of automatic conversion of an _anunāsika_ before a consonant, classified or non-classified, to a _sonne_ is called _śūnyification_. For the sake of convenience, only the \(n\)/\(m\) keys are considered for _śūnyification_ in this implementation. _Śūnyification_ makes the input entry flexible by allowing both \(n\) and \(m\) to be automatically _śūnyified_. _Śūnyification_ is straightforward in the case of a classified consonant, but can be ambiguous in the case of a non-classified consonant, and exceptions occur. In most cases when _sonne_ precedes a non-classified consonant (_avargiya vyanjana_), _śūnyification_ allows the entry to be phonetic, corresponding to how most people pronounce.
For example, \(\mathfrak{sof}\mathfrak{sof}\) is pronounced as \(\mathfrak{ssa}\mathfrak{sof}\)-_ssanya_ and \(\mathfrak{sof}\mathfrak{s}\) as _sanskrta_ or _samskrta_ by many, _albeit_ all incorrectly. _Śūnyification_ does not save any keystrokes, but it does not require one to switch the flow between _andante_ (as pronounced) and _kandante_ (as seen). This is especially convenient for those who are mentally "spelling" the Kannada words in roman script as they type; and there are many such casual users.

### Arkification

Though _arkavottu_ \(\bar{\mathfrak{s}}\) is used mostly in place of \(\mathfrak{c}^{\sharp}\), there are a few situations where it is not desirable, as in \(\mathfrak{c}\mathfrak{c}\mathfrak{c}\mathfrak{s}\), \(\mathfrak{x}\mathfrak{c}\mathfrak{s}\). Kannudi automatically uses the correct form (_arkifies_) in such cases, allowing the user to type normally without hindrance.

## 7 Error Prevention Rules

Certain domain knowledge of the language can be used to prevent typos and warn the user. A few examples are described below.

### Non-initial Vowel

Kannada allows a standalone vowel only at the beginning of a word. Kannudi prevents such typing.

### Aspirated Ottu

It is not possible to pronounce an aspirated consonant (_mahaprana_) when followed by another aspirated consonant. As such, Kannada does not allow a _mahaprana ottu_ on another _mahaprana_. However, certain words can be exceptions due to common usage, _e.g._, \(\mathfrak{sof}\)_withhala_.

## 8 Exception Handling

Convenience and error prevention rules may not be perfect, and exceptions can be found as mentioned earlier. Hence it is necessary to ensure that there is a mechanism to override the normal behavior. Two ways to override are a) using ZWJ/ZWNJ characters and b) inputting the phonemes with a space character in between and then removing it.

## 9 Conclusions

Here we have introduced an input method called OHOK!
with three possible modes, namely, _sva-ottu_ (self-conjunct), _kandante_ (as seen), and _andante_ (as pronounced/said). It may be noted that _kandante_ mode (KO) does not follow the phonemic order. However, this mode may work well for those who are inclined to visualize as they type rather than vocalizing the sounds. OHOK! will work very well on mobile or any device where input pressure can be sensed; when implemented on pressure-sensitive devices, OHOK! can be called _Ottu Haku Ottu Kodu!_ (Apply Pressure and Give _Ottu_), and there it can be a real time saver. On mechanical keyboards, it saves keystrokes, though any time saved is dependent on keyboard settings. We have shown that domain knowledge can be used to improve user friendliness. Several convenience and error minimization rules such as _śūnyification_ and _arkavottu_ are described. Four types of deletions, namely phoneme, character, syllable, and word delete, are identified and assigned to the backspace key. Thus, domain knowledge is shown to be necessary and helpful to enhance user friendliness as well as input flow and speed. Further, one may consider incorporating these rules into open type font tables as attached language specific resources, eliminating the need for a separate editor application.
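The shunyification rule described earlier lends itself to a compact sketch. The following is a hypothetical illustration, not Kannudi's actual code: phonemes are romanized strings, the marker `M` stands in for the Kannada sonne sign, and an `n` or `m` typed before a consonant is rewritten as the sonne marker, while before a vowel it is left alone.

```python
# Hypothetical sketch of the shunyification rule; the romanized alphabet
# and the 'M' marker (standing for the sonne sign) are assumptions of
# this sketch, not part of Kannudi's implementation.
VOWELS = set("aeiouAEIOU")

def shunyify(phonemes):
    """Rewrite n/m as the sonne marker 'M' when a consonant follows."""
    out = []
    for i, p in enumerate(phonemes):
        nxt = phonemes[i + 1] if i + 1 < len(phonemes) else None
        if p in ("n", "m") and nxt is not None and nxt not in VOWELS:
            out.append("M")  # sonne (anusvara) before a consonant
        else:
            out.append(p)
    return out
```

On the running example of the paper, the phoneme sequence `s a n s k r t a` is rewritten with a sonne in place of the `n`, while `n`/`m` directly followed by a vowel (as in _namana_) are left untouched.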
2309.08490
Bessel Periods on $U(2,1) \times U(1,1)$, Relative Trace Formula and Non-Vanishing of Central $L$-values
In this paper we calculate the asymptotics of the second moment of the Bessel periods associated to certain holomorphic cuspidal representations $(\pi, \pi')$ of $U(2,1) \times U(1,1)$ of regular infinity type (averaged over $\pi$). Using these, we obtain quantitative non-vanishing results for the Rankin-Selberg central $L$-values $L(1/2, \pi \times \pi')$, which are of degree twelve over $\mathbb{Q}$, with concomitant difficulty in applying standard methods, especially since we are in a `conductor dropping' situation. We use the relative trace formula, and the orbital integrals are evaluated rather than compared with others. Besides their intrinsic interest, non-vanishing of these critical values also lead, by known results, to deducing certain associated Selmer groups have rank zero.
Philippe Michel, Dinakar Ramakrishnan, Liyang Yang
2023-09-15T15:49:59Z
http://arxiv.org/abs/2309.08490v4
Bessel periods on \(U(2,1)\times U(1,1)\), relative trace formula and non-vanishing of central L-values ###### Abstract. In this paper, we calculate the asymptotics of the second moment of Bessel periods associated to certain holomorphic cuspidal representations \((\pi,\pi^{\prime})\) on \(U(2,1)\times U(1,1)\) (averaged over \(\pi\)). Using these, we obtain quantitative non-vanishing results for the Rankin-Selberg central values \(L(1/2,\pi\times\pi^{\prime})\) which arise in the Gan-Gross-Prasad conjectures. ###### Contents * 1 Introduction * 2 Notations * 3 A Relative Trace Formula on \(U(2,1)\) * 4 Choice of local and global data * 5 The Spectral Side * 6 The Geometric Side * 7 The Identity Orbital Integral * 8 The Unipotent Orbital Integrals * 9 The Regular Orbital Integrals * 10 Bounds for the sum of global regular orbital integrals * 11 Twisted moments of Bessel periods * 12 Weighted Vertical Sato-Tate Distribution * 13 Averaging over forms of exact level \(N\) * 14 Amplification and Non-vanishing * A Explicit double coset decompositions ## 1. **Introduction** Let \(L(s,\pi)\) be an \(L\)-function admitting an Euler product factorisation \[L(s,\pi)=\prod_{p}L_{p}(s,\pi)\] constructed out of some automorphic datum \(\{\pi\}\); we assume that \(L(s,\pi)\) is analytically normalized, self-dual and even, so that it admits an analytic continuation to the whole \(s\)-plane with a functional equation relating \(L(s,\pi)\) to \(L(1-s,\pi)\), and that the root number is \(+1\). Under these hypotheses, the central value of the finite part, \(L(1/2,\pi)\) (which is expected to be non-negative), is of great importance both from the arithmetic and the analytic viewpoints.
A basic invariant of the \(L\)-function is its _analytic conductor_, \[C(\pi)=C_{f}(\pi)C_{\infty}(\pi).\] For the pairs \((\pi,\pi^{\prime})\) considered in this paper, the relevant \(L\)-function is the Rankin-Selberg \(L\)-function \(L(s,\pi_{E}\times\pi_{E}^{\prime})\), which satisfies a functional equation of the shape \[\Lambda(s,\pi_{E}\times\pi_{E}^{\prime})=\varepsilon(\pi_{E}\times\pi_{E}^{\prime})\,C_{f}(\pi_{E}\times\pi_{E}^{\prime})^{1/2-s}\,\Lambda(1-s,\pi_{E}\times\pi_{E}^{\prime}),\] where \[\Lambda(s,\pi_{E}\times\pi_{E}^{\prime})=L_{\infty}(s,\pi_{E}\times\pi_{E}^{\prime})L(s,\pi_{E}\times\pi_{E}^{\prime})\] is the "completed" \(L\)-function, \[L_{\infty}(s,\pi_{E}\times\pi_{E}^{\prime})=\prod_{w|\infty}L_{w}(s,\pi_{E}\times\pi_{E}^{\prime}),\] \(C_{f}(\pi_{E}\times\pi_{E}^{\prime})\geq 1\) is an integer (the arithmetic conductor) and \(\varepsilon(\pi_{E}\times\pi_{E}^{\prime})\in\{\pm 1\}\) is the root number. #### 1.1.1.
The main assumptions We will consider the families for which \(\pi^{\prime}\), the form on the smaller group, is _fixed_, while the form on the larger group, \(\pi\), is varying. More precisely (see §3 for greater details) let \(k\geq 0\) be an integer and let \(N,N^{\prime}\geq 1\) be integers either equal to \(1\) or to prime numbers unramified in \(E\). We assume that * \(k\) is even and sufficiently large: (1.2) \[k>32,\] * If \(N>1\), then \(N\) is _inert_ in \(E\), and (1.3) \[N^{\prime}\geq 3\] * If \(N^{\prime}>1\), then \(N^{\prime}\) is _split_ in \(E\) and (1.4) \[N^{\prime}\geq 10^{6}\] * The representation \[\pi^{\prime}\simeq\pi_{\infty}^{\prime}\otimes\bigotimes_{p}^{\prime}\pi_{p}^{\prime}\] is a cuspidal representation of \(G^{\prime}(\mathbb{A})\), with trivial central character, whose archimedean component \(\pi_{\infty}^{\prime}\) is a holomorphic discrete series of weight \(k\), which is unramified at every prime not dividing \(N^{\prime}\) and, if \(N^{\prime}\) is prime, such that \(\pi_{N^{\prime}}^{\prime}\) is the Steinberg representation. * The representations \[\pi\simeq\pi_{\infty}\otimes\bigotimes_{p}^{\prime}\pi_{p}\] are cuspidal automorphic representations of \(G(\mathbb{A})\), with trivial central character, whose archimedean component \(\pi_{\infty}\) is a holomorphic discrete series of weight \(\Lambda=(-2k,k)\) (cf. [20] and below) for the same value of \(k\) as above, which is unramified at every prime not dividing \(N\) and, if \(N\) is prime, such that \(\pi_{N}\) is either unramified or the Steinberg representation. We denote by \(\mathcal{A}_{k}(N)\) the finite set of all such automorphic representations \(\pi\) and we denote by \[\mathcal{A}_{k}^{n}(N)\subset\mathcal{A}_{k}(N)\] the subset of those representations which are ramified at \(N\): if \(N=1\), \[\mathcal{A}_{k}^{n}(1)=\mathcal{A}_{k}(1)\] and if \(N\) is prime, this is the set of \(\pi\) such that \(\pi_{N}\) is the Steinberg representation.
By a version of Weyl's law, one has1 Footnote 1: We would like to point out a seeming discrepancy in [20, Lemma 9.4] between the formula computing the formal degree of \(\pi_{\lambda}\) and the original (correct) formula from Harish-Chandra [10, Remark 5.5]; the former would lead to the asymptotic in the \(k\)-aspect \(|\mathcal{A}_{k}(N)|\asymp k^{2}N^{3}\), which is not correct; we are thankful to Paul Nelson for pointing out this error. \[|\mathcal{A}_{k}^{n}(N)|\asymp k^{3}N^{3}\text{ as }\ k+N\to\infty. \tag{1.5}\] _Remark 1.1_.: The conditions (1.2), (1.3) and (1.4) are made to avoid pathologies and technical difficulties in small weights or characteristics. They will ensure the absolute convergence, and in places the non-vanishing, of various local and global integrals in our argument. The conditions (1.4) and (1.3) can perhaps be improved with more intensive combinatorial effort, while allowing very small weights in (1.2) will certainly constitute a major technical challenge. #### 1.1.2. Upper and lower bounds for the first moment Regarding the \(L\)-function \(L(s,\pi_{E}\times\pi_{E}^{\prime})\), the assumptions above allow us to compute explicitly (see Propositions 5.8 and 5.9) the archimedean factor \(L_{\infty}(s,\pi_{E}\times\pi_{E}^{\prime})\), the arithmetic conductor \(C_{f}(\pi_{E}\times\pi_{E}^{\prime})\) and the root number, which equals \[\varepsilon(\pi_{E}\times\pi_{E}^{\prime})=+1.\] Moreover the central value \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\) is then non-negative (see below). To state our first main result, we need the _Adjoint_ \(L\)-functions of \(\pi_{E}\) and \(\pi_{E}^{\prime}\), \[L(s,\operatorname{Ad},\pi_{E}),\ L(s,\operatorname{Ad},\pi_{E}^{\prime}),\] whose analytic continuations around \(s=1\) are a consequence of Rankin-Selberg theory. **Theorem 1.1**.: _Let notations and assumptions be as in §1.1.1._
Given \(N^{\prime}\) there exists \(C(N^{\prime})\geq 1\) such that for any \(k,N\) as above (\(N\) inert, \(k>32\) even) and such that_ * \(k+N\geq C(N^{\prime})\) _and_ * _either_ \(N=1\) _or_ \(N\geq C(N^{\prime})\)_,_ _one has_ \[\frac{1}{|\mathcal{A}_{k}^{\mathrm{n}}(N)|}\sum_{\pi\in\mathcal{A}_{k}^{\mathrm{n}}(N)}\frac{L(1/2,\pi_{E}\times\pi_{E}^{\prime})}{L(1,\operatorname{Ad},\pi_{E})L(1,\operatorname{Ad},\pi_{E}^{\prime})}\asymp 1, \tag{1.6}\] _where the implicit constants depend on \(E\) and \(N^{\prime}\)._ _In particular for any such \((k,N,N^{\prime})\) there exists \(\pi\in\mathcal{A}_{k}^{\mathrm{n}}(N)\) such that_ \[L(1/2,\pi_{E}\times\pi_{E}^{\prime})\neq 0. \tag{1.7}\] _Remark 1.2_.: Given \(A,B:\mathcal{P}\to\mathbb{R}\) two real valued functions on a set \(\mathcal{P}\), we use the notation \[A\asymp B\] to mean that there exist positive constants \(0<c<C\) such that \[\forall p\in\mathcal{P},\ cA(p)\leq B(p)\leq CA(p).\] In particular, \(A\) and \(B\) have the same support and have the same sign (when non-zero). If \(\mathcal{P}\) and \(A,B\) belong to families of sets \((\mathcal{P}_{E})\) and functions \((A_{E},B_{E}:\mathcal{P}_{E}\to\mathbb{R})\) indexed by some parameter \(E\), we write \[A\asymp_{E}B\] to mean that there exist functions \(E\to c_{E},C_{E}\in\mathbb{R}_{>0}\) such that \[\forall E,\forall p\in\mathcal{P}_{E},\ c_{E}A_{E}(p)\leq B_{E}(p)\leq C_{E}A_{E}(p).\] _Remark 1.3_.: We have assumed that \(N\) and \(N^{\prime}\) are prime to simplify the proof and leave it to the interested reader to extend these results to more general odd squarefree integers. _Remark 1.4_.: Theorem 1.1 so far establishes the existence of at least _one_ non-vanishing central value \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\). In Theorem 1.3 below, we will establish a weak lower bound on the number of \(\pi\)'s such that (1.7) holds as \(k+N\to\infty\).
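The cubic growth in \(k\) in the Weyl-law asymptotic (1.5) can be cross-checked against the formal degree \(d_{\Lambda}=(2k-2)(k+2)(k-6)/3\) of \(\pi_{\infty}\) quoted in Theorem 1.2 below, which is a degree-\(3\) polynomial in \(k\). A minimal numerical sanity check (the function name is ours, not the paper's):

```python
def d_Lambda(k):
    # Formal degree of pi_infty, as quoted in Theorem 1.2: (2k-2)(k+2)(k-6)/3.
    return (2 * k - 2) * (k + 2) * (k - 6) / 3

# Cubic growth: doubling k multiplies d_Lambda by roughly 2**3 = 8,
# consistent with the k^3 factor in the asymptotic (1.5).
ratio = d_Lambda(2000) / d_Lambda(1000)
```

For large \(k\) the ratio above approaches \(8\), reflecting the leading term \(\tfrac{2}{3}k^{3}\) of \(d_{\Lambda}\).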
_Remark 1.5_.: The problem of evaluating moments of \(L\)-functions involving families of automorphic forms on groups of higher rank is difficult and there are not many positive results. One may think of the work of X. Li [111] involving \(\operatorname{GL}(3)\times\operatorname{GL}(2)\) Rankin-Selberg \(L\)-functions, as well as the works of Blomer-Khan [1] and Qi [20] in a similar context. However a common feature of these works is that the moments are on average over the automorphic forms of the smaller group \(\operatorname{GL}_{2}\). Closer to the spirit of this paper is the work of Blomer-Buttcane [20], who estimated the fourth moment of standard \(L\)-functions on average over families of \(\operatorname{GL}_{3}\)-automorphic representations with large archimedean parameters and obtained subconvex bounds. Another is the work of Nelson-Venkatesh [21], who built on their substantial development of microlocal calculus on Lie groups and obtained an asymptotic formula for the first moment of Rankin-Selberg \(L\)-functions \(L(1/2,\pi_{E}\times\pi^{\prime}_{E})\) associated with pairs of unitary groups \(U(n+1)\times U(n)\) (for any \(n\geq 2\)) on average over families of \(U(n)\)-automorphic forms with large archimedean parameters in generic position. In [11], these methods were expanded further, and Nelson succeeded in evaluating the first moment above, this time on average over suitable families of \(U(n+1)\)-automorphic forms; finally in [11], Nelson treated the degenerate case of _split_ unitary groups (relative to \(E=\mathbb{Q}\times\mathbb{Q}\), so that \(G\times G^{\prime}=\operatorname{GL}(n+1)\times\operatorname{GL}(n)\)) and with \(\pi^{\prime}\) being an Eisenstein series representation: this gave bounds for the \(n\)-th moment of standard \(\operatorname{GL}(n+1)\) \(L\)-functions \(L(1/2,\pi)\) with \(\pi\) having large archimedean parameters in generic position (so as to avoid the "conductor dropping phenomenon").
Further in that direction, very recently, Jana-Nunes and the last named author independently obtained asymptotic formulas for weighted moments of central values of products of \(\operatorname{GL}(n+1)\times\operatorname{GL}(n)\) Rankin-Selberg \(L\)-functions \(L(1/2,\pi\times\pi^{\prime}_{1})\overline{L(1/2,\pi\times\pi^{\prime}_{2})}\) for a pair of possibly varying cuspidal representations \(\pi^{\prime}_{1},\pi^{\prime}_{2}\) and on average over \(\pi\)'s whose spectral parameters are in generic position (avoiding the conductor dropping phenomenon) [23, 24]. Let us point out that the situation we consider here is very non-generic and indeed the conductor of our degree \(12\) \(L\)-functions drops significantly (see §5.4). ### Galois representations, Ramanujan & the Bloch-Kato conjectures The problem of exhibiting non-vanishing of central values within families of \(L\)-functions is a natural one in analytic number theory but also in arithmetic geometry, notably in the context of the Bloch-Kato conjectures. We explain these connections here, along with the fact that \(\pi\) and \(\pi^{\prime}\) are tempered. #### On temperedness It is classical and due to Deligne that for any prime \(\ell\) there is an \(\ell\)-adic Galois representation, \(V_{\ell}(\pi^{\prime})\), associated to \(\pi^{\prime}\) whose Frobenius eigenvalues at primes \(v\nmid\ell N^{\prime}\) equal (up to a twist) the Langlands parameters of \(\pi^{\prime}_{v}\). As \(V_{\ell}(\pi^{\prime})\) occurs in the cohomology of a certain Kuga-Sato modular variety, this implies, by Deligne's Weil II, the purity of the Frobenius \(\operatorname{Frob}_{v}\) and that \(\pi^{\prime}_{v}\) is tempered; varying \(v\), that \(\pi^{\prime}\) is tempered everywhere (the Ramanujan-Petersson conjecture).
For \(\pi\), which is regular cohomological, the association of an \(\ell\)-adic Galois representation \(V_{\ell}(\pi)\) is due to the works of Rogawski, Kottwitz et al., and the proofs are assembled in [10] (see Chap. 7, Thms A and B). If \(\pi\) is stable, the associated \(3\)-dimensional Galois representation \(V_{\ell}(\pi)\) occurs in the cohomology in degree \(2\) of a modular Picard surface with locally constant coefficients: by the work of Deligne, this implies the purity of the Frobenius \(\operatorname{Frob}_{v},\ v\nmid\ell N\), and eventually the temperedness of \(\pi\) at every place. On the other hand, when \(\pi\) is endoscopic, the Galois representation occurring in the cohomology need not be \(3\)-dimensional; however, due to our infinity type, the archimedean parameter forces it to come from a representation \(\pi_{1}\times\xi\) of \(U(1,1)\times U(1)\) with \(\xi\) unitary and \(\pi_{1}\) in the discrete series at infinity, in fact of the same weight \(2k\). So again \(\pi\) is tempered because \(\pi_{1}\) and \(\xi\) are. Note that regarding temperedness, \(\pi\) being cohomological is not sufficient and we need to use that \(\pi\) occurs in the middle degree cohomology of Picard modular surfaces. By contrast, those occurring in degree \(1\) are always non-tempered. #### On the Bloch-Kato conjecture Since \(\pi\) and \(\pi^{\prime}\) are cuspidal, the representations \(V_{\ell}(\pi)\), \(V_{\ell}(\pi^{\prime})\) are irreducible and even absolutely irreducible, as neither \(\pi\) nor \(\pi^{\prime}\) admits self twists (because of the Steinberg components at \(N^{\prime}\) and \(N\)). The same holds modulo \(\ell\) for \(\ell\) large enough. The tensor product \(V_{\ell}(\pi)\otimes V_{\ell}(\pi^{\prime})\) is also absolutely irreducible: again the Steinberg components at the distinct \(N\) and \(N^{\prime}\) prevent \(\pi\) from being a twist of the symmetric square of \(\pi^{\prime}\).
To the latter representation is associated a Bloch-Kato Selmer group \(H^{1}_{f}(V_{\ell}(\pi)\otimes V_{\ell}(\pi^{\prime})(*))\) (here \((*)\) is a suitable Tate twist depending on \(k\)) and the Bloch-Kato conjecture predicts that this Selmer group is zero if \(L(1/2,\pi_{E}\times\pi^{\prime}_{E})\) does not vanish. We expect this conjecture to follow from the work of Y. Liu, Y. Tian, L. Xiao, W. Zhang and X. Zhu [11]. Indeed their results established the Bloch-Kato conjecture when \(\pi\) and \(\pi^{\prime}\) are regular algebraic (as we have here) but cohomological with trivial coefficients, for appropriate admissible \(\ell\)'s, whenever \(V_{\ell}(\pi)\otimes V_{\ell}(\pi^{\prime})\) is absolutely irreducible and \(V_{\ell}(\pi)\), \(V_{\ell}(\pi^{\prime})\) are residually irreducible (they also need to assume the presence of Steinberg components, which we have, and of a supercuspidal component). In our situation the trivial coefficients condition forces \(k\) to be \(2\), which we do not consider, to avoid some technical difficulties (that may be serious). We are happy on the other hand to hear from X. Zhu that their results extend to non-trivial coefficients (and without requiring supercuspidal components): this will be discussed in a forthcoming work. ### \(L\)-functions and Bessel periods Our proof of Theorem 1.1 follows along the lines of the earlier work of the second author and J. Rogawski [10] but with substantially more complicated calculations; it is a consequence of the asymptotic evaluation, using the _Relative Trace Formula_, of sums of _Bessel periods_ of the shape \[\mathcal{P}(\varphi,\varphi^{\prime}):=\int_{G^{\prime}(\mathbb{Q})\setminus G^{\prime}(\mathbb{A})}\varphi(g^{\prime})\overline{\varphi^{\prime}}(g^{\prime})dg^{\prime}\] where \(\varphi\in\pi\) and \(\varphi^{\prime}\in\pi^{\prime}\) are suitable factorable automorphic forms (see [10] for a detailed discussion of these periods). #### 1.3.1.
From periods to \(L\)-functions The derivation of Theorem 1.1 from Theorem 1.2 follows from the Gan-Gross-Prasad conjectures, which postulate a precise relation between the square of the Bessel period \(|\mathcal{P}(\varphi,\varphi^{\prime})|^{2}\) and the central \(L\)-value \(L(1/2,\pi_{E}\times\pi^{\prime}_{E})\). For unitary pairs \(U(n)\times U(n-1)\) these conjectures have been established by R. Beuzart-Plessis, Y. Liu, W. Zhang and X. Zhu for tempered representations (which is the case by the discussion in §1.2) in the stable case [1]. More precisely, Theorem 1.9 of [1] gives \[\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{ \langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}= \frac{\Lambda(1,\eta)\Lambda(2,\eta^{2})\Lambda(3,\eta)}{2}\times\\ \frac{\Lambda(1/2,\pi_{E}\times\pi^{\prime}_{E})}{\Lambda(1,\pi_ {E},\mathrm{Ad})\Lambda(1,\pi^{\prime}_{E},\mathrm{Ad})}\cdot\prod_{v} \mathcal{P}^{\natural}_{v}(\varphi,\varphi^{\prime}), \tag{1.8}\] where \(\eta\) is the quadratic character associated to \(E/\mathbb{Q}\), \(\Lambda(s,\cdot)\) denotes the _completed \(L\)-function_ (with the archimedean factor included) and \(\prod_{v}\mathcal{P}^{\natural}_{v}(\varphi,\varphi^{\prime})\) is a finite product of local periods. In Section 5.8 we evaluate the local periods \(\mathcal{P}^{\natural}_{v}(\varphi,\varphi^{\prime})\) explicitly for a specific automorphic form \(\varphi^{\prime}\) (see §4.5.1) and for \(\varphi\) varying over an orthogonal family \(\mathcal{B}^{\widetilde{n}}_{k}(N)\) of factorable automorphic forms of level \(N\) and minimal weights \((-2k,k)\) belonging to the various representations in \(\mathcal{A}_{k}(N)\) (see §5.1 for precise definitions).
We show that for such \(\varphi^{\prime}\) and \(\varphi\), the local periods are non-negative and that \[\frac{L_{\infty}(1/2,\pi_{E}\times\pi^{\prime}_{E})}{L_{\infty}(1,\pi_{E}, \operatorname{Ad})L_{\infty}(1,\pi^{\prime}_{E},\operatorname{Ad})}\prod_{v} \mathcal{P}^{\natural}(\varphi,\varphi^{\prime})\asymp_{E}\frac{1}{{kNN^{ \prime}}^{2}} \tag{1.9}\] and by (1.8) one has \[\frac{1}{{kNN^{\prime}}^{2}}\frac{L(1/2,\pi_{E}\times\pi^{\prime}_{E})}{L(1, \operatorname{Ad},\pi_{E})L(1,\operatorname{Ad},\pi^{\prime}_{E})}\asymp_{E} \frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi, \varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}; \tag{1.10}\] the central values \(L(1/2,\pi_{E}\times\pi^{\prime}_{E})\) are thus non-negative. _Remark 1.6_.: The Ichino-Ikeda conjectures, relating central \(L\)-values to Bessel periods, have now been proved by Beuzart-Plessis for unitary groups in all cases [2]. Still we have to perform some local calculations for our specific test function (which is not of positive type) to deduce positivity. #### 1.3.2. Averages of squares of Bessel periods With (1.10) established, (1.6) is then a consequence (for \(\ell=1\)) of the following result, which evaluates the average of the square of the Bessel periods along the family \(\mathcal{B}^{\widetilde{n}}_{k}(N)\) (see Theorem 11.1 for a more precise version): **Theorem 1.2**.: _Let notations and assumptions be as in §1.1.1._
Let \(\varphi^{\prime}\in\pi^{\prime}\) be the (fixed) automorphic newform of level \(N^{\prime}\) and minimal weight \(k>32\), defined in §4.5.1, and let \(\mathcal{B}^{\widetilde{n}}_{k}(N)\) be the finite family of automorphic forms defined in §5.1._ _Given \(\ell\geq 1\) an integer coprime with \(N\) and divisible only by primes inert in \(E\), we denote by \(\lambda_{\varphi}(\ell)\) and \(\lambda_{\varphi^{\prime}}(\ell)\) the eigenvalues at \(\varphi\) and \(\varphi^{\prime}\) of the Hecke operators \(T(\ell)\) and \(T^{\prime}(\ell)\) described in §11.1._ _There is an absolute constant \(C\geq 1\) such that for any \(\delta>0\) and any quadruple \((k,\ell,N,N^{\prime})\) satisfying_ _either_ \[(\ell N^{\prime})^{2}\leq N^{1-\delta},\ N>16,k\geq C(1+1/\delta) \tag{1.11}\] _or_ \[(\ell N^{\prime})^{2}\leq k^{1-\delta},\ N\leq 2^{4k}, \tag{1.12}\] _we have as \(k+\ell+N+N^{\prime}\to\infty\),_ \[\sum_{\varphi\in\mathcal{B}^{\widetilde{n}}_{k}(N)}\lambda_{\varphi}(\ell) \frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi, \varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}=w_{E}\frac{d_{ \Lambda}}{d_{k}}(\frac{N}{N^{\prime}})^{2}\Psi(N)\mathfrak{S}(N^{\prime})\frac {\lambda_{\pi^{\prime}}(\ell)+o_{\delta,E}(1)}{\ell}. \tag{1.13}\] _Here \(w_{E}\) is the number of units of \(E\),_ \[\Psi(N)=\prod_{p\mid N}\left(1-\frac{1}{p}+\frac{1}{p^{2}}\right),\ \mathfrak{S}(N^{\prime})=\prod_{p\mid N^{ \prime}}(1-\frac{1}{p^{2}})^{-1}\] _(possibly equal to \(1\) if \(N\) or \(N^{\prime}\) is equal to \(1\)) and_ \[d_{\Lambda}=\frac{(2k-2)(k+2)(k-6)}{3},\ d_{k}=k-1\] _(the formal degrees of \(\pi_{\infty}\) and \(\pi^{\prime}_{\infty}\) respectively)._ _Remark 1.7_.: We call the conditions (1.11) and (1.12), on the relative sizes of \(N^{\prime}\), \(N\) and \(k\), the _stable ranges_.
As we will see, these conditions imply that the error term \(o_{\delta,E}(1)\) in (1.13) decays exponentially in \(k\) as \(k\to\infty\), or by a positive power of \(N\) (with an exponent linear in \(k\)) as \(N\to\infty\). The first stable range (1.11) is reminiscent of the stable range present in [11] and subsequently in [10]. ### Quantitative non-vanishing Theorems 1.1 (through 1.2) show that the set of \(\pi\)'s for which the corresponding central value \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\) does not vanish is non-empty. One may wonder about its size in terms of \(N\) or \(k\). Using the amplification method, we prove that the size of this set has polynomial growth as \(k+N\to\infty\). **Theorem 1.3**.: _Let notations and assumptions be as in Theorem 1.1._ _There exists an absolute constant \(\delta>0\) such that as \(k+N\to\infty\), we have_ \[|\{\pi\in\mathcal{A}_{k}^{\mathrm{n}}(N),\ L(1/2,\pi_{E}\times\pi_{E}^{\prime })\neq 0\}|\gg_{N^{\prime}}(kN)^{\delta}. \tag{1.14}\] The lower bound (1.14) is an immediate consequence of the following _pointwise_ upper bound, which we deduce from Theorem 1.2 using the _amplification method_: **Theorem 1.4**.: _Let notations be as above; there exists an absolute constant \(\delta>0\) such that for any \(\pi\in\mathcal{A}_{k}(N)\) one has_ \[\sum_{\varphi\in\mathcal{B}_{k,\pi}^{\mathrm{n}}(N)}\frac{\left|\mathcal{P}( \varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle \varphi^{\prime},\varphi^{\prime}\rangle}\ll_{N^{\prime}}(kN)^{2-\delta}. \tag{1.15}\] This bound, together with (see (1.13)) \[\sum_{\varphi\in\mathcal{B}_{k}^{\mathrm{n}}(N)}\frac{\left|\mathcal{P}( \varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle \varphi^{\prime},\varphi^{\prime}\rangle}\asymp(kN)^{2}\] and (1.10), implies (1.14). _Remark 1.8_.: As the proof will show, any fixed \(\delta\) in the interval \((0,1/82)\) would work. We have not tried to optimize this exponent as it is probably very far from the truth.
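For the record, the counting step behind (1.14) can be written out: by (1.10) only those \(\pi\) with \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\neq 0\) contribute to the total, and each such \(\pi\) contributes at most \(O_{N^{\prime}}((kN)^{2-\delta})\) by (1.15), so that

```latex
\[(kN)^{2}\asymp\sum_{\pi\in\mathcal{A}_{k}^{\mathrm{n}}(N)}\ \sum_{\varphi\in\mathcal{B}_{k,\pi}^{\mathrm{n}}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\ll_{N^{\prime}}(kN)^{2-\delta}\,\left|\{\pi\in\mathcal{A}_{k}^{\mathrm{n}}(N),\ L(1/2,\pi_{E}\times\pi_{E}^{\prime})\neq 0\}\right|;\]
```

dividing both sides by \((kN)^{2-\delta}\) yields \(|\{\pi\in\mathcal{A}_{k}^{\mathrm{n}}(N),\ L(1/2,\pi_{E}\times\pi_{E}^{\prime})\neq 0\}|\gg_{N^{\prime}}(kN)^{\delta}\), which is (1.14).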
_Remark 1.9_.: Notice that (1.15), (1.10) and the upper bound \[L(1,\mathrm{Ad},\pi_{E})\ll_{E}(kN)^{o(1)} \tag{1.16}\] imply the bound \[L(1/2,\pi_{E}\times\pi_{E}^{\prime})\ll(kN)^{3-\delta+o(1)}.\] This bound, however, is _weaker_ than the _convexity bound_ for \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\): as we will see below in (5.32), the analytic conductor of \(L(s,\pi_{E}\times\pi_{E}^{\prime})\) at \(s=1/2\) is \(\asymp_{N^{\prime}}k^{8}N^{4}\), so that the convexity bound reads \[L(1/2,\pi_{E}\times\pi_{E}^{\prime})\ll_{N^{\prime}}(N^{4}k^{8})^{1/4+o(1)}= (kN)^{o(1)}k^{2}N. \tag{1.17}\] Unfortunately, we are unable to turn the tables and use (1.17), because we do not know how to prove, unconditionally, a good _lower bound_ for \(L(1,\mathrm{Ad},\pi_{E})\): one would expect that \[L(1,\mathrm{Ad},\pi_{E})=(kN)^{o(1)} \tag{1.18}\] (see [11] for a general discussion about this problem), which would give by (1.10) \[\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi, \varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\ll_{N^{\prime}} (kN)^{o(1)}k \tag{1.19}\] and by (1.6) \[|\{\pi\in\mathcal{A}_{k}^{\mathrm{n}}(N),\ L(1/2,\pi_{E}\times\pi_{E}^{\prime}) \neq 0\}|\gg_{N^{\prime}}(kN)^{o(1)}kN^{2}\geq|\mathcal{A}_{k}^{\mathrm{n}}(N)|^{ 2/3-o(1)}.\] It would be interesting to obtain the "convexity" bound (1.19) by a direct geometric analysis of the period integral \(\mathcal{P}(\varphi,\varphi^{\prime})\). _Remark 1.10_.: Another possibility to improve the lower bound (1.14) would be to obtain "good" upper bounds for a second moment, for instance: \[\frac{1}{|\mathcal{A}_{k}(N)|}\sum_{\pi\in\mathcal{A}_{k}(N)}\frac{L(1/2,\pi_{E }\times\pi_{E}^{\prime})^{2}}{L(1,\mathrm{Ad},\pi_{E})L(1,\mathrm{Ad},\pi_{E}^ {\prime})}\ll(kN)^{o(1)}.\] This, however, should not be simple: it would yield a "Burgess type" subconvex bound in the \(k\)-aspect. _Remark 1.11_.: In view of the proof of the Bloch-Kato conjecture expected from the ongoing work of Y. 
Liu, Y. Tian, L. Xiao, W. Zhang and X. Zhu mentioned in §1.2, our result would imply the existence of infinitely many Selmer groups having rank \(0\). ### Idea of Proofs and Structure of the Paper The main ingredient of this work is the _relative trace formula of Jacquet and Rallis_ for the pair \((G,G^{\prime})\) with a suitable choice of test functions (described in §4). As in [20], our treatment differs from the traditional uses of the relative trace formula in functoriality (such as [1], where one compares the geometric sides of two instances of the RTF to deduce consequences for the spectral sides), as we evaluate the geometric side by direct arguments. Such an approach was initiated by Rogawski and the second author of this paper in [14] when they rederived, with a delineation of the underlying measure, the non-vanishing result of Duke mentioned above; the relative trace formula there was for the pair \((G,H)=(\mathrm{GL}_{2},T)\) for \(T\simeq\mathrm{Res}_{E/\mathbb{Q}}\,\mathbb{G}_{\mathrm{m}}/\mathbb{G}_{ \mathrm{m}}\) an embedded torus attached to an imaginary quadratic field. This connection was exploited in a subsequent paper by the first and second authors of this paper, as they established, by averaging the Gross-Zagier formula, a hybrid subconvexity bound for \(L(s,\pi_{E})\) as \(D\) and \(N\) vary in a controlled way; the key ingredient was the derivation of an _exact_ average formula in a _stable_ range [13]. A generalization to Hilbert modular forms and suitably general idele class characters \(\chi\) was achieved by B. Feigon and D. Whitehouse, using an anisotropic pair \((G,T)\), with \(G\) an inner form of \(\mathrm{GL}(2)\) [11]. The knowledgeable reader will have noted that the pair \((\mathrm{GL}(2),T)\) is not very far from the unitary group case \(U(1,1)\times U(1)\), and we observe both similarities and discrepancies when passing to the \(U(2,1)\times U(1,1)\) case. 
A similarity with [14] is that the main terms come from the contributions of the identity and unipotent cosets, while the regular coset contribution is an error term. It is worth noting, however, that in the present case the unipotent coset contribution is _significantly smaller_ than the main term: by a factor which is at least a positive power of the size of the family \(\mathcal{A}_{k}(N)\), while in [14] the difference is at most by a logarithmic factor. Another important difference is that the treatment of the regular orbital integrals is quite a bit more involved. We proceed by reducing the problem to bounding local integrals, which we do by splitting into many subcases. In the present paper, we evaluate the average of the product \(\mathcal{P}(\varphi,\varphi_{1}^{\prime})\overline{\mathcal{P}(\varphi, \varphi_{2}^{\prime})}\) for \(\varphi_{1}^{\prime}\) and \(\varphi_{2}^{\prime}\) belonging to the _same_ representation \(\pi^{\prime}\); it turns out that, most of the time, the identity contribution is non-zero and in fact dominates the unipotent contribution. As in [10], if \(\varphi_{1}^{\prime}\) and \(\varphi_{2}^{\prime}\) belong to distinct representations, one can check easily that the identity contribution vanishes (because \(\varphi_{1}^{\prime}\) and \(\varphi_{2}^{\prime}\) are orthogonal). As for the unipotent contribution, we expect it to become the dominant term (proportional to \(L(1,\pi_{1,E}^{\prime}\times\pi_{2,E}^{\prime})\)); this will lead to simultaneous non-vanishing results analogous to those of [10, 11, 12], namely the existence of \(\pi\) for which \[L(1/2,\pi_{E}\times\pi_{1,E}^{\prime})L(1/2,\pi_{E}\times\pi_{2,E}^{\prime}) \neq 0.\] We will come back to this question in a forthcoming work. Let us now provide a bit more detail. We will allow ourselves, in this introduction, to be at times imprecise and to write things which are only "morally" true. 
So the sketch should not be taken as a precise reflection of the details of our argument. Let \(\varphi^{\prime}\) be a primitive holomorphic cusp form on \(G^{\prime}(\mathbb{A})\) of weight \(k\), level \(N^{\prime}\) and trivial central character. Let \(\pi^{\prime}\) be the corresponding cuspidal representation. We consider Jacquet's relative trace formula, which takes the shape \[\text{Spectral Side}=\int_{[G^{\prime}]}\int_{[G^{\prime}]}\mathrm{K}(x,y) \varphi^{\prime}(x)\overline{\varphi^{\prime}(y)}dxdy=\text{Geometric Side}, \tag{1.20}\] where \(\mathrm{K}(x,y)=\mathrm{K}^{f}(x,y)\) is the kernel function of the Hecke operator \(R(f)\) associated to a test function \(f\); see Section 3 for details. Note that (1.20) can be thought of as a 'section' of the Jacquet-Rallis trace formula [10]. In Sec. 4 we construct an explicit test function \(f^{\mathfrak{n}}\) and use it in (1.20) to compute/estimate both sides. The spectral side of (1.20) is handled in Sec. 5. We show that the operator \(R(f^{\mathfrak{n}})\) eliminates the non-cuspidal spectrum, so that the spectral side (1.13) becomes a (finite) second moment of Bessel periods relative to specific holomorphic cusp forms on \(G\) and \(G^{\prime}\). For these automorphic forms, using the recent work of [1] and [1], we obtain an explicit Gan-Gross-Prasad formula of Ichino-Ikeda type for \(G\times G^{\prime}\) relating the central \(L\)-values \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\) to local and global period integrals. For this, we need to compute explicitly several integrals of local matrix coefficients; this is done in §A.1 in the Appendix, and the main result in this section is Proposition 5.4. With this, one can write the spectral side of (1.20) as a weighted sum of central \(L\)-values \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\). Next we evaluate the geometric side, which is a sum of orbital integrals indexed by the double quotient \(G^{\prime}(\mathbb{Q})\backslash G(\mathbb{Q})/G^{\prime}(\mathbb{Q})\). 
In Sec. 6 we decompose these orbital integrals into three subsets according to the properties of the classes in the quotient: the identity element, the unipotent type and the regular type; a priori there could also be a term associated with an element of the shape \(s.u\) with \(s,u\) non-trivial and respectively semisimple and unipotent but, luckily, with our choice of global double coset representatives (Proposition 6.4) such a term does not occur. So (1.20) becomes \[\text{Geometric Side}=\text{Identity Orb.}+\text{Unipotent Orb.}+\text{ Regular Orb.}, \tag{1.21}\] where 'Orb.' refers to orbital integrals. The first term consists of a single orbital integral, the second is a finite sum of unipotent orbital integrals, while the third term is an infinite sum of regular orbital integrals. They will be handled by different approaches in the subsequent sections. The identity orbital integral is calculated in Section 7. Its contribution provides the main term in Theorem 1.2. In Section 8, we estimate the unipotent orbital integral by local computations. The contribution from this orbital integral gives a second main term on the right hand side of (1.13) which decays exponentially fast as \(k\) grows. Lastly, the more involved regular orbital integrals are investigated in Sections 9 and 10. The main result in this part is Theorem 10.1, which provides an upper bound for the infinite sum of the regular orbital integrals. A particular feature of this bound is that in the stable range (1.11) the contribution of the regular orbital integrals again decays exponentially fast with \(k\). Gathering these estimates in Section 11, we then prove Theorem 1.2 in its more precise form, Theorem 11.1. In §12 we also interpret Theorem 11.1 as a horizontal Sato-Tate type equidistribution result for the Hecke eigenvalues of the \(\pi\) at a fixed finite set of inert primes, weighted by the periods \(|\mathcal{P}(\varphi,\varphi^{\prime})|^{2}\). 
This is inspired by the work of Royer [14], who obtained vertical Sato-Tate type equidistribution results for the Hecke eigenvalues of holomorphic modular forms of weight \(2\) and large level, weighted by the Hecke \(L\)-values \(L(1/2,f)\). Notice that Royer combined his results with a technique of Serre [13] to exhibit irreducible factors of \(\operatorname{Jac}(X_{0}(N))\) of dimension \(\gg\log\log N\) and rank \(0\) (or of rank equal to the dimension). We expect that the ongoing work of X. Zhu and his collaborators will make it possible to obtain results of a similar flavor. Combining Theorem 11.1 with Proposition 5.4, one can deduce Theorem 1.1; however we need to be able to average only over _new forms_ (if \(N>1\)). In §13 we show that the old forms contribution is indeed smaller. In §14 we prove Theorem 1.4 using Theorem 11.1 and the amplification method. Using again Proposition 5.4, we eventually prove Theorem 1.3. ## 2. **Notations** ### The quadratic field \(E\) Let \(E=\mathbb{Q}(\sqrt{-D})\hookrightarrow\mathbb{C}\) be an imaginary quadratic field; we denote by \(\eta\) the associated Legendre symbol, which we view indifferently as a quadratic Dirichlet character, a character on the group of idèles or a character of the Galois group of \(\mathbb{Q}\). We denote the Galois involution by \(\sigma\in\operatorname{Gal}(E/\mathbb{Q})\); it will also be useful to write \[\sigma(z)=\overline{z}.\] The trace and the norm are denoted by \[z\mapsto\operatorname{tr}_{E/\mathbb{Q}}(z)=z+\overline{z},\ z\mapsto \operatorname{Nr}_{E/\mathbb{Q}}(z)=z\overline{z}\] respectively. We denote by \(E^{\times}\) the multiplicative group of invertible elements and by \[E^{1}=\{z\in E^{\times},\ z\overline{z}=1\}\subset E^{\times}\] the subgroup of norm \(1\) elements; whenever useful we will denote in the same way the corresponding \(\mathbb{Q}\)-algebraic groups. #### 2.1.1. 
Integers Let \(\mathcal{O}_{E}\) be the ring of integers of \(E\) and \[\mathcal{O}_{E}^{\times}=\mathcal{O}_{E}^{1}=E^{1}\cap\mathcal{O}_{E}\] its group of units; set \(w_{E}:=\#\mathcal{O}_{E}^{1}\). Let \(D_{E}<0\) be the discriminant of \(\mathcal{O}_{E}\); we set \[\Delta=i|D_{E}|^{1/2}\in\mathcal{O}_{E}.\] The fractional ideal \(\mathcal{D}_{E}^{-1}=\Delta^{-1}\mathcal{O}_{E}\) is the different: the \(\mathbb{Z}\)-dual of \(\mathcal{O}_{E}\) with respect to the trace bilinear form \[(z,z^{\prime})\mapsto\operatorname{tr}_{E/\mathbb{Q}}(zz^{\prime}).\] ### The Hermitian space and its unitary group Let \(V\) be a \(3\)-dimensional vector space over \(E\), with basis \(\{e_{1},e_{0},e_{-1}\}\). Let \(\langle\cdot,\cdot\rangle_{J}\) be a Hermitian form on \(V\) whose matrix with respect to \(\{e_{1},e_{0},e_{-1}\}\) is \[J=\begin{pmatrix}&&1\\ &1&\\ 1&&\end{pmatrix}. \tag{2.1}\] We denote by \[G=U(V)\] the unitary group preserving the form \(\langle\cdot,\cdot\rangle_{J}\). This is an algebraic group defined over \(\mathbb{Q}\) and, for any \(\mathbb{Q}\)-algebra \(R\), the group of its \(R\)-points is \[G(R)=\big\{g\in\operatorname{GL}(V\otimes_{\mathbb{Q}}R),\ \overline{g}Jg=J\big\}.\] The center of \(G\) is denoted \(Z_{G}\) and consists of the scalar matrices \[Z_{G}(R)=\big\{\begin{pmatrix}z&&\\ &z&\\ &&z\end{pmatrix},\ z\in E^{1}(R)\big\},\] so that \[Z_{G}\simeq E^{1}=U(1).\] The special unitary subgroup is denoted \(\operatorname{SU}(V)\) and its \(R\)-points are given by \[\operatorname{SU}(V)(R)=\{g\in U(V)(R),\ \det g=1\}.\] For \(n=p+q\) with \(p,q\geq 0\) we denote by \(U(p,q)\) the unitary group of the space \(E^{n}\) equipped with the hermitian form \(\langle\cdot,\cdot\rangle_{p,q}\) with \(n\times n\) matrix \[J_{p,q}:=\begin{pmatrix}\operatorname{Id}_{p}&\\ &-\operatorname{Id}_{q}\end{pmatrix}. 
\tag{2.2}\] In other terms \[U(p,q)(R)=\big\{g\in\operatorname{GL}(E^{n}\otimes_{\mathbb{Q}}R),\ \overline{g}J_{p,q}g=J_{p,q}\big\}.\] We denote by \(\operatorname{SU}(p,q)\) its special subgroup of elements of determinant \(1\). As usual we write \(U(n)\) and \(SU(n)\) for \(U(n,0)\) and \(SU(n,0)\). In particular we have \[U(1)=E^{1}.\] In fact, in this paper, except for \((p,q)=(1,0)\), we will only need the \(\mathbb{R}\)-points of the groups \(U(p,q)\); to shorten notation, we will often write \[U(p,q)\text{ for }U(p,q)(\mathbb{R}).\] #### 2.2.1. The subgroup \(G^{\prime}\) Let \(G^{\prime}\leq G\) be the stabilizer of the anisotropic line spanned by \(e_{0}\). Then \(G^{\prime}\) also preserves \[W=\langle e_{0}\rangle^{\perp}=\langle e_{1},e_{-1}\rangle,\] the orthocomplement of \(e_{0}\). Note that \(W\) is a \(2\)-dimensional Hermitian space, whose Hermitian form has matrix \(\begin{pmatrix}&1\\ 1&\end{pmatrix}\) with respect to the basis \(\{e_{1},e_{-1}\}\). Hence we have an isomorphism of \(\mathbb{Q}\)-algebraic subgroups \(U(W)\simeq G^{\prime}\) via the embedding \[i:\quad\begin{pmatrix}a&b\\ c&d\end{pmatrix}\mapsto\begin{pmatrix}a&&b\\ &1&\\ c&&d\end{pmatrix},\quad\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in U(W\otimes_{\mathbb{Q}}R), \tag{2.3}\] for any \(\mathbb{Q}\)-algebra \(R\). We will identify \(U(W)\) with \(G^{\prime}\) henceforth. In particular we will sometimes represent an element of \(G^{\prime}\) as a \(2\times 2\) matrix (its matrix in the above basis). We also set \[H:=G^{\prime}\times G^{\prime}\subset G\times G. \tag{2.4}\] #### 2.2.2. Convention regarding the split places Let \(p\) be a finite prime. For any \(\mathbb{Q}\)-algebra \(R\) we denote by \(R_{p}\) its completion with respect to the \(p\)-adic valuation. If \(p\) is split, we have a decomposition \(p\mathcal{O}_{E}=\mathfrak{p}.\overline{\mathfrak{p}}\) into a product of distinct prime ideals of \(\mathcal{O}_{E}\). 
Let us choose such a prime, say \(\mathfrak{p}\). The injection \(\mathbb{Q}\hookrightarrow E\) of fields induces isomorphisms \[E_{\mathfrak{p}}\simeq\mathbb{Q}_{p},\ V_{\mathfrak{p}}=E_{\mathfrak{p}}.e_{ 1}\oplus E_{\mathfrak{p}}.e_{0}\oplus E_{\mathfrak{p}}.e_{-1}\simeq\mathbb{Q} _{p}.e_{1}\oplus\mathbb{Q}_{p}.e_{0}\oplus\mathbb{Q}_{p}.e_{-1}\] and an isomorphism of linear groups \[G(\mathbb{Q}_{p})=U(V)(\mathbb{Q}_{p})\simeq\operatorname{GL}(3,\mathbb{Q}_{ p}). \tag{2.5}\] For every split prime, we make such a choice, once and for all, and represent the elements of \(G(\mathbb{Q}_{p})\) as \(3\times 3\) matrices with coefficients in \(\mathbb{Q}_{p}\). Likewise we represent the elements of \(G^{\prime}(\mathbb{Q}_{p})\) either as \(2\times 2\) or \(3\times 3\) matrices with coefficients in \(\mathbb{Q}_{p}\) (with a \(1\) as central coefficient for the latter). ## 3. **A Relative Trace Formula on \(U(2,1)\)** ### Recollection of the general principles of the trace formula on \(U(V)\) Denote by \(\mathbb{A}\) the adele ring of \(\mathbb{Q}\). Let \(\mathcal{A}_{0}(G)\) be the space of cuspidal representations of \(G(\mathbb{A})\). In this section, we will introduce the framework of a relative trace formula on \(U(V)\) for general test functions. We give the coarse geometric side of the trace formula, setting aside convergence issues. In Sec. 4 we will specify our test function. Further careful analysis and computations towards the trace formula will be provided in the following sections. #### 3.1.1. Automorphic Kernel Let \(K_{\infty}\subset G(\mathbb{R})\) be a maximal compact subgroup (\(K_{\infty}\simeq U(2)(\mathbb{R})\times U(1)(\mathbb{R})\)). We consider a smooth function \(h\in C_{c}^{\infty}(G(\mathbb{A}))\) which is left and right \(K_{\infty}\)-finite and transforms by a unitary character \(\omega\) of \(Z_{G}\left(\mathbb{A}\right)\). Denote by \(\mathcal{H}(G(\mathbb{A}))\) the space of such functions. 
Then \(h\in\mathcal{H}(G(\mathbb{A}))\) defines a convolution operator \[R(h)\varphi=h*\varphi:x\mapsto\int_{G(\mathbb{A})}h(y)\varphi(xy)dy, \tag{3.1}\] on the space \(L^{2}\left(G(\mathbb{Q})\backslash G(\mathbb{A}),\omega^{-1}\right)\) of functions on \(G(\mathbb{Q})\backslash G(\mathbb{A})\) which transform under \(Z_{G}(\mathbb{A})\) by \(\omega^{-1}\) and are square integrable on \(G(\mathbb{Q})\backslash G(\mathbb{A})\). This operator is represented by the kernel function \[\operatorname{K}^{h}(x,y)=\sum_{\gamma\in G(\mathbb{Q})}h(x^{-1}\gamma y). \tag{3.2}\] When the test function \(h\) is clear or fixed, we simply write \(\operatorname{K}(x,y)\) for \(\operatorname{K}^{h}(x,y)\). It is well known that \(L^{2}\left(G(\mathbb{Q})\backslash G(\mathbb{A}),\omega^{-1}\right)\) decomposes into the direct sum of the space \(L_{0}^{2}\left(G(\mathbb{Q})\backslash G(\mathbb{A}),\omega^{-1}\right)\) of cusp forms and the spaces \(L_{\operatorname{Eis}}^{2}\left(G(\mathbb{Q})\backslash G(\mathbb{A}),\omega ^{-1}\right)\) and \(L_{\operatorname{Res}}^{2}\left(G(\mathbb{Q})\backslash G(\mathbb{A}),\omega ^{-1}\right)\) defined using Eisenstein series and residues of Eisenstein series respectively, and the operator \(\operatorname{K}\) splits up as: \[\operatorname{K}=\operatorname{K}_{0}+\operatorname{K}_{\operatorname{Eis}}+ \operatorname{K}_{\operatorname{Res}}.\] Let notation be as before. Given \(\varphi^{\prime}\) a cuspidal automorphic form on \(G^{\prime}(\mathbb{A})\), we consider the distribution \[h\mapsto J(h):=\int_{H(\mathbb{Q})\backslash H(\mathbb{A})}\operatorname{K}(x _{1},x_{2})\varphi^{\prime}(x_{1})\overline{\varphi}^{\prime}(x_{2})dx_{1}dx_{2}, \tag{3.3}\] where \(dx\) refers to the Tamagawa measure; more generally for \(*\in\{0,\operatorname{Eis},\operatorname{Res}\}\), we set \[h\mapsto J_{*}(h):=\int_{H(\mathbb{Q})\setminus H(\mathbb{A})}\operatorname{K}_{* }(x_{1},x_{2})\varphi^{\prime}(x_{1})\overline{\varphi}^{\prime}(x_{2})dx_{1}dx _{2}. 
\tag{3.4}\] Typically, because of convergence issues, one needs to introduce certain regularizations into (3.4) to make these expressions well defined, and then we have \[J(h)=J_{0}(h)+J_{\operatorname{Eis}}(h)+J_{\operatorname{Res}}(h).\] In our situation we have (§5.1) \[J_{\operatorname{Eis}}(h)=J_{\operatorname{Res}}(h)=0 \tag{3.5}\] so that \[J(h)=J_{0}(h).\] Since \(\operatorname{K}^{h}\) is the kernel of \(R(h)\), \(J(h)\) admits a spectral expansion, i.e. it is a weighted sum, over an orthogonal family \(\varphi\in\mathcal{B}^{\widehat{\mathfrak{n}}}_{k}(N)\) of cuspforms of level \(N\) and weight \(k\), of the periods squared \(|\mathcal{P}(\varphi,\varphi^{\prime})|^{2}\) (see Lemma 5.2). We refer to this expression as the spectral side of the relative trace formula. #### 3.1.2. Geometric Reduction Assume now that \(\omega=1\). Let \(\Phi\) be a set of representatives of the double quotient \(G^{\prime}(\mathbb{Q})\setminus G(\mathbb{Q})/G^{\prime}(\mathbb{Q})\). For each \(\gamma\in\Phi\), we denote by \[H_{\gamma}=\{(u,v)\in H=G^{\prime}\times G^{\prime},\;u^{-1}\gamma v=\gamma\}\] its stabilizer in \(G^{\prime}\times G^{\prime}\). Then one has (assuming that everything converges absolutely) \[J(h)= \int_{H(\mathbb{Q})\setminus H(\mathbb{A})}\sum_{\gamma\in G( \mathbb{Q})}h(x_{1}^{-1}\gamma x_{2})\varphi^{\prime}(x_{1})\overline{\varphi }^{\prime}(x_{2})dx_{1}dx_{2}\] \[= \int_{H(\mathbb{Q})\setminus H(\mathbb{A})}\sum_{\gamma\in \Phi}\sum_{\delta\in[\gamma]}h(x_{1}^{-1}\delta x_{2})\varphi^{\prime}(x_{1}) \overline{\varphi}^{\prime}(x_{2})dx_{1}dx_{2}. 
\tag{3.6}\] Therefore (assuming that everything converges absolutely) one can write \(J(h)\) as \[J(h)=\int_{H(\mathbb{Q})\setminus H(\mathbb{A})}\sum_{\gamma\in\Phi}\sum_{( \delta_{1},\delta_{2})\in H_{\gamma}(\mathbb{Q})\setminus H(\mathbb{Q})}h(x_ {1}^{-1}\delta_{1}^{-1}\gamma\delta_{2}x_{2})\varphi^{\prime}(x_{1})\overline {\varphi}^{\prime}(x_{2})dx_{1}dx_{2}.\] Then, switching the sums and using the automorphy of \(\varphi^{\prime}\), one obtains \[J(h)=\sum_{\gamma\in\Phi}\int_{H_{\gamma}(\mathbb{Q})\setminus H(\mathbb{A})} h(x_{1}^{-1}\gamma x_{2})\varphi^{\prime}(x_{1})\overline{\varphi}^{\prime}(x_{2} )dx_{1}dx_{2}. \tag{3.7}\] In Sec. 4.4 (see (4.31)) we will choose \(h=f^{\mathfrak{n}}\) precisely to make (3.6) converge absolutely, so that (3.7) holds rigorously. ## 4. **Choice of local and global data** In this section, we describe our choices of the test function \(h\) and of the automorphic form \(\varphi^{\prime}\), so that the relative trace formula captures the family of automorphic forms indicated above. The test function \(h\in\mathcal{C}^{\infty}_{c}(G(\mathbb{A}))\) will be a linear combination of factorable test functions of the shape \[f^{\mathfrak{n}}=f_{\infty}\otimes\otimes_{p}^{\prime}f^{\mathfrak{n}}_{p}. \tag{4.1}\] The non-archimedean components \(f^{\mathfrak{n}}_{p}\) are discussed in §4.2. As we will see, these components also depend (in addition to \(N\) and \(N^{\prime}\)) on an integer \(\ell\geq 1\) coprime to \(NN^{\prime}D\). The archimedean component \(f_{\infty}\) is discussed in the next subsections. It is obtained from matrix coefficients of holomorphic discrete series of \(U(2,1)\). ### Holomorphic Discrete Series Representation of \(U(2,1)\) Let us recall that there are three types of discrete series of \(U(2,1)\) which embed in the non-unitary principal series, namely the holomorphic, the antiholomorphic, and the nonholomorphic discrete series. 
A full description of these three discrete series and models for their respective representation spaces can be found in [26]. In this paper, we will focus on holomorphic discrete series. We recall that \(U(2,1)\) is the unitary group of the hermitian space \(\mathbb{C}^{3}\) with Hermitian form given by the matrix \[J_{2,1}:=\begin{pmatrix}1&&\\ &1&\\ &&-1\end{pmatrix}.\] Its maximal compact subgroup is \[U(2,1)\cap U(3)=\begin{pmatrix}U(2)&\\ &U(1)\end{pmatrix}\simeq U(2)\times U(1).\] This relates to the group \(G(\mathbb{R})\) as follows: let \[\mathrm{B}=\begin{pmatrix}1/\sqrt{2}&&1/\sqrt{2}\\ &1&\\ 1/\sqrt{2}&&-1/\sqrt{2}\end{pmatrix}. \tag{4.2}\] Then we have \(\mathrm{B}=\mathrm{B}^{-1}\), \(J_{2,1}=\mathrm{B}J\mathrm{B}^{-1}\) and we have an isomorphism \[\iota_{\mathrm{B}}:\ G(\mathbb{R})\xrightarrow{\sim}G_{J_{2,1}}(\mathbb{R}), \quad\ g\mapsto\mathrm{B}g\mathrm{B}^{-1}. \tag{4.3}\] Consequently the maximal compact subgroup of \(G(\mathbb{R})\) equals \[K_{\infty}=\mathrm{B}\begin{pmatrix}U(2)(\mathbb{R})&\\ &U(1)(\mathbb{R})\end{pmatrix}\mathrm{B}^{-1}. \tag{4.4}\] #### 4.1.1. Holomorphic Discrete Series on \(SU(2,1)\) Let \(SU(2,1)\subset U(2,1)\) be its special subgroup. Its maximal compact subgroup is denoted \[K_{\infty,2,1}=SU(2,1)\cap U(3)=\Big\{\begin{pmatrix}u&0\\ 0&(\det u)^{-1}\end{pmatrix}:\ u\in U(2)\Big\}\simeq U(2).\] Since \(\mathrm{rank}\,SU(2,1)=\mathrm{rank}\,K_{\infty,2,1}\), \(SU(2,1)\) has discrete series representations. In this section we recall the explicit description of holomorphic discrete series given by Wallach [26]. 
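The identities \(\mathrm{B}=\mathrm{B}^{-1}\) and \(J_{2,1}=\mathrm{B}J\mathrm{B}^{-1}\) claimed for the base-change matrix of (4.2) are easy to confirm numerically; the following snippet is only an illustrative check (the helper names are ours), with \(J\) the antidiagonal form of (2.1):

```python
import math

a = 1 / math.sqrt(2)
# base-change matrix B of (4.2), the form J of (2.1), and J_{2,1} = diag(1,1,-1)
B = [[a, 0, a], [0, 1, 0], [a, 0, -a]]
J = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
J21 = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def close(X, Y, eps=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < eps for i in range(3) for j in range(3))

I3 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
```

Since \(\mathrm{B}\) is its own inverse, \(\mathrm{B}J\mathrm{B}^{-1}\) is simply \(\mathrm{B}J\mathrm{B}\), which the check below compares to \(J_{2,1}\).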
Let \(S^{3}\) be the unit sphere \[S^{3}=\big\{z={}^{t}(z_{1},z_{2})\in\mathbb{C}^{2}:\ |z_{1}|^{2}+|z_{2}|^{2}=1\big\}.\] We have a homeomorphism \(S^{3}\simeq SU(2)(\mathbb{R})\) \[u:\ (z_{1},z_{2})\in S^{3}\mapsto u(z_{1},z_{2})=\begin{pmatrix}z_{1}&-\overline{z}_{2}\\ z_{2}&\overline{z}_{1}\end{pmatrix}\in SU(2). \tag{4.5}\] The group \(SU(2,1)(\mathbb{R})\) acts on \(S^{3}\) via \[g.z=\frac{Az+b}{\langle z,\overline{c}\rangle+d},\quad g=\begin{pmatrix}A&b\\ c&d\end{pmatrix}\in SU(2,1)(\mathbb{R}), \tag{4.6}\] (here \(\langle\cdot,\cdot\rangle\) is the usual hermitian product on \(\mathbb{C}^{2}\)). Let \(\alpha_{1}=(1,-1,0)\) and \(\alpha_{2}=(0,1,-1)\); these form a basis of simple roots in the root system of \(\mathfrak{sl}(3,\mathbb{C})\) relative to the diagonal \(\mathfrak{h}\). The basic highest weights for this order are \[\Lambda_{1}=(2/3,-1/3,-1/3)\ \text{and}\ \Lambda_{2}=(1/3,1/3,-2/3).\] For \((k_{1},k_{2})\in\mathbb{Z}^{2}\), let \[\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}.\] For \(h\in C^{\infty}(S^{3})\) we define \[(\pi_{\Lambda}(g)(h))(z):=a(g,z)^{k_{1}}\overline{a(g,z)}^{k_{2}}h(g^{-1}.z)\] where \[a(g,z)=\overline{d}-\langle z,b\rangle\] for \(z\) and \(g\) as in (4.6). Then \(\pi_{\Lambda}\) extends to a bounded operator on \(L^{2}(S^{3})\) and \((\pi_{\Lambda},L^{2}(S^{3}))\) defines a continuous representation of \(SU(2,1)(\mathbb{R})\). 
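As a side remark, one can verify directly that the map (4.5) really lands in \(SU(2)\): for \((z_{1},z_{2})\in S^{3}\) the matrix \(u(z_{1},z_{2})\) has determinant \(|z_{1}|^{2}+|z_{2}|^{2}=1\) and is unitary. A small numerical illustration (not part of the text; the helper names are ours):

```python
import cmath

def u(z1, z2):
    # the map (4.5) from S^3 to 2x2 complex matrices
    return [[z1, -z2.conjugate()], [z2, z1.conjugate()]]

def is_special_unitary(m, eps=1e-12):
    (p, q), (r, s) = m
    det = p * s - q * r
    # entries of m * m^adjoint, which should be the identity
    mm = [[p * p.conjugate() + q * q.conjugate(), p * r.conjugate() + q * s.conjugate()],
          [r * p.conjugate() + s * q.conjugate(), r * r.conjugate() + s * s.conjugate()]]
    return (abs(det - 1) < eps
            and abs(mm[0][0] - 1) < eps and abs(mm[1][1] - 1) < eps
            and abs(mm[0][1]) < eps and abs(mm[1][0]) < eps)

# a sample point of S^3: |z1|^2 + |z2|^2 = 0.36 + 0.64 = 1
z1 = cmath.exp(0.3j) * 0.6
z2 = cmath.exp(-1.1j) * 0.8
```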
This representation, restricted to the compact subgroup \(K_{0}\), decomposes into irreducibles as follows: let \(p,q\) be nonnegative integers and let \(\mathcal{H}^{p,q}\) be the space of polynomials \(h\in\mathbb{C}[z_{1},z_{2},\overline{z}_{1},\overline{z}_{2}]\) which are homogeneous of degree \(p\) in \(z_{1},z_{2}\), of degree \(q\) in \(\overline{z}_{1},\overline{z}_{2}\), and harmonic, namely, \[\Delta h=\left(\frac{\partial^{2}}{\partial z_{1}\partial\overline{z}_{1}}+ \frac{\partial^{2}}{\partial z_{2}\partial\overline{z}_{2}}\right)h\equiv 0.\] Denote by \(\mathscr{H}^{p,q}=\mathcal{H}^{p,q}\mid_{S^{3}}\). Then \((\pi_{\Lambda}\mid_{K_{0}},\mathscr{H}^{p,q})\) is irreducible and \[L^{2}(S^{3})=\oplus_{p\geq 0}\oplus_{q\geq 0}\mathscr{H}^{p,q}.\] To describe the holomorphic discrete series, we also assume that \(k_{1}<0\) and \(k_{2}\geq 0\). Let \(\rho\) be the half sum of the positive roots, i.e., \[\rho=(1,0,-1)=\Lambda_{1}+\Lambda_{2}.\] Given integers \(p\geq 0\) and \(0\leq q\leq k_{2}\), we set \[c_{p,q}(\Lambda)=\prod_{k=1}^{p}\frac{\langle\Lambda+(k+1)\rho,\alpha_{2} \rangle}{\langle-\Lambda+(k-1)\rho,\alpha_{1}\rangle}\cdot\prod_{j=1}^{q}\frac {\langle\Lambda+(j+1)\rho,\alpha_{1}\rangle}{\langle-\Lambda+(j-1)\rho,\alpha_ {2}\rangle},\] with each of the above products equal to \(1\) if \(p=0\) or \(q=0\) respectively. A straightforward calculation shows that \[c_{p,q}(\Lambda)=\prod_{k=1}^{p}\frac{k+k_{2}+1}{k-k_{1}-1}\cdot\prod_{j=1}^{q }\frac{j+k_{1}+1}{j-k_{2}-1}.\] In particular \(c_{p,q}(\Lambda)\) is well defined and, if \(k_{2}+k_{1}+1<0\), it is nonvanishing. 
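The "straightforward calculation" equating the two expressions for \(c_{p,q}(\Lambda)\) can be confirmed in exact rational arithmetic, using the coordinates of \(\Lambda_{1},\Lambda_{2},\rho,\alpha_{1},\alpha_{2}\) given above and the standard Euclidean pairing on \(\mathbb{R}^{3}\). The following sketch (function names are ours) compares the root-theoretic product with the closed form:

```python
from fractions import Fraction as F

L1 = (F(2, 3), F(-1, 3), F(-1, 3))   # basic weight Lambda_1
L2 = (F(1, 3), F(1, 3), F(-2, 3))    # basic weight Lambda_2
rho = (F(1), F(0), F(-1))            # half sum of positive roots
a1 = (F(1), F(-1), F(0))             # simple root alpha_1
a2 = (F(0), F(1), F(-1))             # simple root alpha_2

def dot(x, y):
    return sum(u * v for u, v in zip(x, y))

def lin(c1, v1, c2, v2):
    # the vector c1*v1 + c2*v2
    return tuple(c1 * x + c2 * y for x, y in zip(v1, v2))

def c_pq_roots(k1, k2, p, q):
    # the definition of c_{p,q}(Lambda) as a product of root pairings
    Lam = lin(k1, L1, k2, L2)
    out = F(1)
    for k in range(1, p + 1):
        out *= dot(lin(1, Lam, k + 1, rho), a2) / dot(lin(-1, Lam, k - 1, rho), a1)
    for j in range(1, q + 1):
        out *= dot(lin(1, Lam, j + 1, rho), a1) / dot(lin(-1, Lam, j - 1, rho), a2)
    return out

def c_pq_closed(k1, k2, p, q):
    # the claimed closed form of c_{p,q}(Lambda)
    out = F(1)
    for k in range(1, p + 1):
        out *= F(k + k2 + 1, k - k1 - 1)
    for j in range(1, q + 1):
        out *= F(j + k1 + 1, j - k2 - 1)
    return out
```

Behind this check are the pairings \(\langle\Lambda,\alpha_{1}\rangle=k_{1}\), \(\langle\Lambda,\alpha_{2}\rangle=k_{2}\) and \(\langle\rho,\alpha_{i}\rangle=1\).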
From these, we define an inner product \(\langle\cdot,\cdot\rangle_{\Lambda}\) with respect to \(\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}\) via the one on \(L^{2}(S^{3})\) as follows: given \(h_{1},h_{2}\in C^{\infty}(S^{3})\), by spectral decomposition we can write \[h_{1}=\sum h_{1,p,q},\quad h_{2}=\sum h_{2,p,q},\quad h_{1,p,q},\ h_{2,p,q} \in\mathscr{H}^{p,q}\] and set \[\langle h_{1},h_{2}\rangle_{\Lambda}:=\sum_{p}\sum_{q}c_{p,q}(\Lambda)\langle h _{1,p,q},h_{2,p,q}\rangle. \tag{4.7}\] The following parametrization of the holomorphic discrete series of \(SU(2,1)(\mathbb{R})\) is due to Wallach [10] (see p. 183): **Proposition 4.1**.: _Let \((k_{1},k_{2})\in\mathbb{Z}^{2}\) and let \(\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}\in\mathfrak{h}^{*}\). Assume_ \[\langle\Lambda+\rho,S_{1}S_{2}\alpha_{i}\rangle>0,\] _where \(1\leq i\leq 2\), and_ \[S_{1}:(x,y,z)\mapsto(y,x,z),\quad S_{2}:(x,y,z)\mapsto(x,z,y)\] _are the simple Weyl reflections. Let_ \[V_{k_{2}}^{+}:=\{h\in C^{\infty}(S^{3}):\ h_{p,q}=0\ \text{if}\ q>k_{2}\}.\] _Let \(V_{+}^{\Lambda}\) be the Hilbert space completion of \(V_{k_{2}}^{+}\) relative to the inner product (4.7). Then \(D_{\Lambda}^{+}:=\pi_{\Lambda}\mid_{V_{+}^{\Lambda}}\) is a unitary holomorphic discrete series representation of \(SU(2,1)(\mathbb{R})\). Moreover, the holomorphic discrete series representations of \(SU(2,1)(\mathbb{R})\) are all of the form \(D_{\Lambda}^{+}\)._ _Remark 4.1_.: Note that the inner product \(\langle\cdot,\cdot\rangle_{\Lambda}\) with respect to \(\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}\) is well defined on the space \(V_{k_{2}}^{+}\). Later we will need to compute explicitly some inner products \(\langle h_{1,p,q},h_{2,p,q}\rangle\): after decomposing \(h_{1,p,q}\) (resp. \(h_{2,p,q}\)) into a finite linear combination of monomials of the form \(z_{1}^{a}z_{2}^{b}\overline{z}_{1}^{c}\overline{z}_{2}^{d}\), one can use the following **Lemma 4.2**.: _Let notation be as above. Let \(a,b,c,d\) be nonnegative integers. 
Then_ \[\langle z_{1}^{a}z_{2}^{b},z_{1}^{c}z_{2}^{d}\rangle=\frac{\delta_{a,c}\delta_ {b,d}a!b!}{(a+b+1)!}. \tag{4.8}\] Proof.: The inner product is \[\langle z_{1}^{a}z_{2}^{b},z_{1}^{c}z_{2}^{d}\rangle=\int_{S^{3}}z_{1}^{a}z_{2 }^{b}\overline{z}_{1}^{c}\overline{z}_{2}^{d}d\mu(z_{1},z_{2}),\] where \(\mu(z_{1},z_{2})\) denotes the \(SU(2)\)-invariant probability measure on the sphere \(S^{3}\) (which is also the Haar measure on \(SU(2)(\mathbb{R})\) under (4.5)). Write \[z_{1}=e^{i(\alpha+\beta)}\cos\theta,\ z_{2}=e^{i(\alpha-\beta)}\sin\theta,\] where \(\theta\in[0,\pi/2]\), \(\alpha\in[-\pi,\pi]\), \(\beta\in[0,\pi]\). In this polar coordinate system we have \[d\mu(z,\overline{z})=\frac{\sin\theta\cos\theta}{\pi^{2}}d\theta d\alpha d\beta.\] Then the left hand side of (4.8) is equal to \[\frac{1}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\pi}\int_{-\pi}^{\pi}e^{(a -c)(\alpha+\beta)i}\cos^{a+c+1}\theta e^{(b-d)(\alpha-\beta)i}\sin^{b+d+1} \theta d\alpha d\beta d\theta. \tag{4.9}\] Appealing to orthogonality we then see that (4.9) is equal to \[2\delta_{a,c}\delta_{b,d}\int_{0}^{\frac{\pi}{2}}\cos^{2a+1}\theta\sin^{2b+1} \theta d\theta=\frac{\delta_{a,c}\delta_{b,d}a!b!}{(a+b+1)!}.\] Hence the formula (4.8) follows. #### 4.1.2. \(K\)-types Recall that the maximal compact subgroup \(K_{0}=SU_{J_{2,1}}\cap U(3)\) of \(SU_{J_{2,1}}(\mathbb{R})\) can be identified with \(U(2)\). Let \(K_{0c}\simeq U(1)\) be the central part of \(K_{0}\), and \(K_{0s}\simeq SU(2)\) be the semisimple part. 
An irreducible unitary representation of \(K_{0}\) is completely determined by its restriction to \(K_{0c}\) and \(K_{0s}\). Therefore, such representations are parameterized by two integers \(m,n\) such that \(n\geq 0\) and \(m-n\) is even: \(m\) determines the character of \(K_{0c}\) and \(n+1\) is the dimension of the irreducible representation of \(K_{0s}\). Write \(z={}^{t}(z_{1},z_{2})\). For each integer \(N\), the group \(K_{0}\) acts on \(\mathcal{H}^{p,q}\) via \[\tau_{p,q}^{N}\begin{pmatrix}u&\\ &(\det u)^{-1}\end{pmatrix}h(z,\overline{z})=(\det u)^{-N}h(uz,u^{-1} \overline{z}),\ u\in U(2).\] Let \(T\) denote the Cartan subgroup of \(SU_{J_{2,1}}(\mathbb{R})\): \[T=\Big\{\operatorname{diag}(z_{1},z_{2},z_{3}):\ |z_{1}|=|z_{2}|=|z_{3}|=1,\ z_{1}z_{2}z_ {3}=1\Big\}.\] When restricted to \(K_{0s}\) we can take \(\phi_{p,q}(z,\overline{z})=z_{1}^{p}\overline{z}_{2}^{q}\) as a highest weight vector in \(\mathcal{H}^{p,q}\). Observing that \[\tau_{p,q}^{N}\begin{pmatrix}e^{i\alpha}&&\\ &e^{i\beta}&\\ &&e^{-i(\alpha+\beta)}\end{pmatrix}\phi_{p,q}=e^{pi\alpha}e^{-qi\beta}e^{ -Ni(\alpha+\beta)}\phi_{p,q},\] we see that the parametrization of irreducible unitary representations of \(K_{0}\) becomes \((m,n)=(p-q-2N,p+q)\). A straightforward computation shows that the highest weight of the representation \(\tau_{p,q}^{N}\) is \((p+q)\Lambda_{1}-(q+N)\Lambda_{2}\). #### 4.1.3. Highest Weight Vector in Minimal \(K\)-types and Matrix Coefficients Let \[\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}\] be as in Proposition 4.1. In this section, we will find a minimal \(K\)-type of the discrete series \(D_{\Lambda}^{+}\). By definition and Theorem 9.20 in [14], we see that the minimal \(K\)-type we are seeking is given by the Blattner parameter \(\widetilde{\Lambda}\) of \(D_{\Lambda}^{+}\). **Definition 4.1**.: Let \(\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}\) be a weight in \(\mathfrak{h}^{*}\), with \(k_{1},k_{2}\in\mathbb{Z}\). 
We say \(\Lambda\) is _holomorphic_ if \(\langle\Lambda+\rho,w_{1}w_{2}\alpha_{i}\rangle>0\), where \(1\leq i\leq 2\), and \(w_{i}\) is the Weyl element. **Lemma 4.3**.: _Let notation be as before. Assume \(\Lambda\) is holomorphic. Then \(k_{2}\geq 0\), \(k_{1}+k_{2}+2<0\) and_ \[\widetilde{\Lambda}=k_{2}\Lambda_{1}+k_{1}\Lambda_{2}. \tag{4.10}\] Proof.: Let \(l=k_{2}-k_{1}\in\mathbb{Z}_{>0}\). By Lemma 7.9 in [13], we have \[(\pi_{\Lambda}\mid_{K},\mathcal{H}^{p,q})\equiv\tau_{p+q}^{2l+3(p-q)},\] which is \(K\)-equivariant. Hence, for \(p\geq 0\) and \(0\leq q\leq k_{2}\), \[(D_{\Lambda}^{+}\mid_{K},\mathcal{H}^{p,q})\equiv\tau_{p+q}^{2l+3(p-q)}.\] We can describe the corresponding highest weight in terms of coordinates in \(\mathbb{C}^{3}\). Choose the basis \(\alpha_{1}=(1,-1,0)\), \(\alpha_{2}=(0,1,-1)\) of simple roots in the root system of \(\mathfrak{su}(2,1)^{\mathbb{C}}=\mathfrak{sl}(3,\mathbb{C})\) relative to the diagonal \(\mathfrak{h}\). Under this choice, the basic weights are \(\Lambda_{1}=(2/3,-1/3,-1/3)\) and \(\Lambda_{2}=(1/3,1/3,-2/3)\). So the highest weight is \[H(p,q)=\left(\frac{3q-l}{3},\frac{-3p-l}{3},\frac{2l+3p-3q}{3}\right),\ p\geq 0,\ 0\leq q\leq k_{2}.\] Note that \(\alpha_{1}\) is the only positive compact root. Let \[G(p,q):=\|H(p,q)+\alpha_{1}\|^{2}.\] We then need to find pairs \((p,q)\) such that \(G(p,q)\) is minimal. Since \(\frac{\partial G}{\partial p}(p,q)>0\) for all \(p,q\geq 0\), we have \(G(p,q)\geq G(0,q)\). Note that \[G(0,q)=\frac{1}{9}\cdot\left[(3q-l+3)^{2}+(l+3)^{2}+(2l-3q)^{2}\right]=2\left( q-\frac{l-1}{2}\right)^{2}+\frac{(l+3)^{2}}{6}.\] On the other hand, we have \(\langle\Lambda+\rho,w_{1}w_{2}\alpha_{i}\rangle>0\), where \(1\leq i\leq 2\), and \(w_{i}\) is the Weyl element. 
By definition, for a weight \(\nu\), \[w_{i}\nu=\nu-\frac{2\langle\nu,\alpha_{i}\rangle}{\langle\alpha_{i},\alpha_{i}\rangle}\cdot\alpha_{i}=\nu-\langle\nu,\alpha_{i}\rangle\alpha_{i},\quad 1\leq i\leq 2.\] Hence \(\langle\Lambda+\rho,w_{1}w_{2}\alpha_{i}\rangle>0\) is equivalent to the conditions \(k_{2}\geq 0\) and \(k_{1}+k_{2}+2<0\). Therefore, \(k_{2}<(l-1)/2\), implying that \[G(p,q)\geq G(0,q)\geq G(0,k_{2})\] for all \(p\geq 0\) and \(0\leq q\leq k_{2}\). Then (4.10) follows. From Lemma 4.3 we have a highest weight vector \[\phi(z,\overline{z})=\overline{z}_{2}^{k_{2}} \tag{4.11}\] for the minimal \(K\)-type of \(D_{\Lambda}^{+}.\) We then compute the corresponding matrix coefficient in Proposition 4.5. To prepare for the proof, we need the following auxiliary computation: **Lemma 4.4**.: _Let \(A,B\in\mathbb{C}\) be such that \(|A|\neq|B|.\) Let \(m,n\in\mathbb{Z}\) with \(n>|m|.\) Then_ \[I_{m,n}(A,B)=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{e^{-mi\alpha}}{(A+Be^{i\alpha})^{n}}d\alpha=\begin{cases}\frac{(-B)^{m}}{A^{n+m}}\binom{n+m-1}{m},&\text{ if }|A|>|B|;\\ 0,&\text{ if }|A|<|B|.\end{cases}\] Proof.: Suppose \(|A|>|B|\geq 0.\) Let \(C=B/A.\) Expanding \((1+Ce^{i\alpha})^{-n}=\sum_{k\geq 0}C_{n,k}e^{ki\alpha}\) with \(C_{n,k}:=\binom{-n}{k}C^{k},\) we obtain \[\int_{0}^{2\pi}\frac{e^{-mi\alpha}}{(A+Be^{i\alpha})^{n}}d\alpha=\frac{1}{A^{n}}\int_{0}^{2\pi}\frac{e^{-mi\alpha}}{(1+Ce^{i\alpha})^{n}}d\alpha=\frac{1}{A^{n}}\sum_{k\geq 0}C_{n,k}\int_{0}^{2\pi}e^{(k-m)i\alpha}d\alpha,\] which vanishes if \(m<0.\) Suppose \(m\geq 0.\) Then \[I_{m,n}(A,B)=\frac{C_{n,m}}{A^{n}}=\frac{C^{m}}{A^{n}}\binom{-n}{m}=\frac{(-B)^{m}}{A^{n+m}}\binom{n+m-1}{m}.\] Now suppose \(|A|<|B|.\) Let \(D=A/B.\) Then, with \(D_{n,k}:=\binom{-n}{k}D^{k},\) \[I_{m,n}(A,B)=\frac{1}{2\pi B^{n}}\int_{0}^{2\pi}\frac{e^{-(m+n)i\alpha}}{(1+De^{-i\alpha})^{n}}d\alpha=\frac{1}{2\pi B^{n}}\sum_{k\geq 0}D_{n,k}\int_{0}^{2\pi}e^{-(k+m+n)i\alpha}d\alpha,\] which vanishes since \(m+n>0.\) **Proposition 4.5**.: _Let notation be as before.
Let \(g=(g_{i,j})_{1\leq i,j\leq 3}\in SU_{J_{2,1}}(\mathbb{R}).\) Let_ \[\phi_{\circ}=\phi/\langle\phi,\phi\rangle_{\Lambda}^{1/2} \tag{4.12}\] _(for \(\phi\) defined in (4.11)). Then_ \[\langle D_{+}^{\Lambda}(g)\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}=g_{22}^{k_{2}}\overline{g}_{33}^{k_{1}}. \tag{4.13}\] Proof.: By definition (see Proposition 4.1) \(D_{\Lambda}^{+}=\pi_{\Lambda}\left.\right|_{V_{+}^{\Lambda}}\) and \(\phi\in V_{+}^{\Lambda}.\) Therefore, \[\langle D_{+}^{\Lambda}(g)\phi,\phi\rangle_{\Lambda}= \langle\pi_{\Lambda}(g)\phi,\phi\rangle_{\Lambda}=\sum_{p}\sum_{q}c_{p,q}(\Lambda)\langle(\pi_{\Lambda}(g)\phi)_{p,q},\phi_{p,q}\rangle\] \[= c_{0,k_{2}}(\Lambda)\langle(\pi_{\Lambda}(g)\phi)_{0,k_{2}},\phi\rangle.\] Write \(g=(g_{ij})_{1\leq i,j\leq 3}\in SU_{J_{2,1}}\subset SL(3,\mathbb{C}).\) By definition \(^{\mathrm{t}}\overline{g}J_{2,1}g=J_{2,1}.\) So \[g^{-1}=J_{2,1}^{-1}\,^{\mathrm{t}}\overline{g}J_{2,1}=\left(\begin{array}{ccc}\overline{g}_{11}&\overline{g}_{21}&-\overline{g}_{31}\\ \overline{g}_{12}&\overline{g}_{22}&-\overline{g}_{32}\\ -\overline{g}_{13}&-\overline{g}_{23}&\overline{g}_{33}\end{array}\right).
\tag{4.14}\] According to the group action (4.6) we obtain \[g^{-1}.z=\overset{\mathrm{t}}{\left(\frac{\overline{g}_{11}z_{1}+\overline{g}_{21}z_{2}-\overline{g}_{31}}{-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2}+\overline{g}_{33}},\frac{\overline{g}_{12}z_{1}+\overline{g}_{22}z_{2}-\overline{g}_{32}}{-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2}+\overline{g}_{33}}\right)}\in S^{3}.\] Thus \(\pi_{\Lambda}(g)\phi(z,\overline{z})\) is equal to \[(\overline{g}_{33}-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2})^{k_{1}}(\overline{\overline{g}_{33}-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2}})^{k_{2}}\cdot\overline{\left(\frac{\overline{g}_{12}z_{1}+\overline{g}_{22}z_{2}-\overline{g}_{32}}{-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2}+\overline{g}_{33}}\right)}^{k_{2}},\] namely, \[\pi_{\Lambda}(g)\phi(z,\overline{z})=(\overline{g}_{33}-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2})^{k_{1}}\cdot(g_{12}\overline{z}_{1}+g_{22}\overline{z}_{2}-g_{32})^{k_{2}}.\] Since \[\phi_{\circ}=\frac{1}{c_{0,k_{2}}(\Lambda)^{1/2}}\frac{\phi}{\langle\phi,\phi\rangle^{1/2}},\] we have by Lemma 4.2 \[\langle D^{\Lambda}_{+}(g)\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}= \frac{c_{0,k_{2}}(\Lambda)\cdot\int_{S^{3}}\frac{(g_{12}\overline{z}_{1}+g_{22}\overline{z}_{2}-g_{32})^{k_{2}}}{(\overline{g}_{33}-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2})^{-k_{1}}}\cdot z_{2}^{k_{2}}d\mu(z,\overline{z})}{c_{0,k_{2}}(\Lambda)\cdot\int_{S^{3}}\overline{z}_{2}^{k_{2}}\cdot z_{2}^{k_{2}}d\mu(z,\overline{z})}\] \[= (k_{2}+1)\cdot\int_{S^{3}}\frac{(g_{12}\overline{z}_{1}+g_{22}\overline{z}_{2}-g_{32})^{k_{2}}}{(\overline{g}_{33}-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2})^{|k_{1}|}}\cdot z_{2}^{k_{2}}d\mu(z,\overline{z}).\] Since we have the parametrization \[z_{1}=e^{i(\alpha+\beta)}\cos\theta,\ z_{2}=e^{i(\alpha-\beta)}\sin\theta,\ \theta\in[0,\pi/2],\ \alpha\in[-\pi,\pi],\ \beta\in[0,\pi],\] and \[d\mu(z,\overline{z})=\frac{\sin\theta\cos\theta}{\pi^{2}}d\theta d\alpha
d\beta,\] we have \[\pi^{2}\int_{S^{3}}\frac{\left(g_{12}\overline{z}_{1}+g_{22}\overline{z}_{2}-g_{32}\right)^{k_{2}}}{(\overline{g}_{33}-\overline{g}_{13}z_{1}-\overline{g}_{23}z_{2})^{|k_{1}|}}\cdot z_{2}^{k_{2}}d\mu(z,\overline{z})\] \[=\int_{0}^{\frac{\pi}{2}}\int_{0}^{\pi}\int_{-\pi}^{\pi}\frac{\left(g_{12}e^{-2i\beta}\cos\theta+g_{22}\sin\theta-g_{32}e^{i(\alpha-\beta)}\right)^{k_{2}}}{(\overline{g}_{33}-\overline{g}_{13}e^{i(\alpha+\beta)}\cos\theta-\overline{g}_{23}e^{i(\alpha-\beta)}\sin\theta)^{|k_{1}|}}\cdot\sin^{k_{2}+1}\theta\cos\theta d\alpha d\beta d\theta.\] Applying the expression (4.14) for \(g^{-1}\) into \(g^{-1}g=\mathrm{Id}\) one has \[|g_{33}|^{2}=|g_{13}|^{2}+|g_{23}|^{2}+1.\] Then, in conjunction with the Cauchy-Schwarz inequality, we get \[|\overline{g}_{13}e^{i\beta}\cos\theta-\overline{g}_{23}e^{-i\beta}\sin\theta|\leq\sqrt{|g_{13}|^{2}+|g_{23}|^{2}}=\sqrt{|g_{33}|^{2}-1}<|g_{33}|.\] Therefore we can appeal to Lemma 4.4 to conclude that \[\frac{\langle D^{\Lambda}_{+}(g)\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}}{k_{2}+1}= \frac{2}{\pi\overline{g}_{33}^{|k_{1}|}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\pi}\left(g_{12}e^{-2i\beta}\cos\theta+g_{22}\sin\theta\right)^{k_{2}}\cdot\sin^{k_{2}+1}\theta\cos\theta d\beta d\theta\] \[= 2g_{22}^{k_{2}}\overline{g}_{33}^{k_{1}}\int_{0}^{\frac{\pi}{2}}\sin^{2k_{2}+1}\theta\cos\theta d\theta=\frac{g_{22}^{k_{2}}\overline{g}_{33}^{k_{1}}}{k_{2}+1}.\] #### 4.1.4.
Discrete Series on \(G(\mathbb{R})\) Recall that \(\mathbf{B}=\mathbf{B}^{-1}\) and \[G(\mathbb{R})=\mathbf{B}U(2,1)(\mathbb{R})\mathbf{B}^{-1}\] and therefore, setting \[G^{1}=\ker(\det:G\to\mathbb{G}_{\mathrm{m}})=SU(V)\] we have \[G^{1}(\mathbb{R})=\mathbf{B}SU(2,1)\mathbf{B}^{-1}=\mathbf{B}SU(2,1)\mathbf{B}.\] Setting for \(g\in G^{1}(\mathbb{R})\) \[D^{\Lambda}_{+,\mathrm{B}}(g):=D^{\Lambda}_{+}(\mathrm{B}g\mathrm{B}),\] we denote by \((D^{\Lambda}_{+,\mathrm{B}},V^{\Lambda}_{+})\) the discrete series representation on \(G^{1}(\mathbb{R})\). From the split exact sequence \[1\longrightarrow G^{1}(\mathbb{R})\longrightarrow G(\mathbb{R})\longrightarrow Z_{G}(\mathbb{R})\longrightarrow 1\] (with \(Z_{G}(\mathbb{R})\simeq U(1)(\mathbb{R})=\mathbb{C}^{1}\)), we have the decomposition \[G(\mathbb{R})=Z_{G}(\mathbb{R})^{+}G^{1}(\mathbb{R}), \tag{4.15}\] where \[Z_{G}^{+}(\mathbb{R})=\{\text{diag}(e^{i\theta},e^{i\theta},e^{i\theta}):\ -\pi/3<\theta\leq\pi/3\}.\] Using (4.15) we extend the \(G^{1}(\mathbb{R})\)-action \((D^{\Lambda}_{+,\mathrm{B}},V^{\Lambda}_{+})\) to \(G(\mathbb{R})\) by requiring \(Z_{G}(\mathbb{R})^{+}\) to act trivially. Let \(z=\text{diag}(e^{i\theta},e^{i\theta},e^{i\theta})\in Z_{G}(\mathbb{R})\), \(-\pi<\theta\leq\pi.\) Let \(\theta_{\circ}\in(-\pi/3,\pi/3]\) be such that \[\frac{3\theta}{2\pi}\equiv\frac{3\theta_{\circ}}{2\pi}\ (\text{mod}\ 1).
\tag{4.16}\] Such a \(\theta_{\circ}\) is uniquely determined by \(\theta.\) Set \[z_{\circ}=\text{diag}(e^{i\theta_{\circ}},e^{i\theta_{\circ}},e^{i\theta_{\circ}})\in Z_{G}(\mathbb{R})^{+}.\] Then by (4.16) there exists a unique \(k_{z}\in\{-1,0,1\}\) such that \(z=z_{\circ}e^{2\pi k_{z}i/3}.\) We have \[z\cdot z_{\circ}^{-1}=\text{diag}(e^{2\pi k_{z}i/3},e^{2\pi k_{z}i/3},e^{2\pi k_{z}i/3})\in Z_{G}(\mathbb{R})\cap G^{1}(\mathbb{R}).\] Let \(\rho(z):=e^{2\pi k_{z}i/3}.\) Therefore, \(z\) acts by the scalar \[\omega_{\Lambda}(z)=D^{\Lambda}_{+}(z\cdot z_{\circ}^{-1})=\overline{\rho(z)}^{k_{1}}\cdot\rho(z)^{k_{2}}=\rho(z)^{k_{2}-k_{1}}.\] However, \(\omega_{\Lambda}\) is typically not a homomorphism unless \(k_{2}\equiv k_{1}\ (\text{mod}\ 3),\) in which case \(\omega_{\Lambda}(z)\equiv 1.\) Under the assumption that \(k_{2}\equiv k_{1}\ (\text{mod}\ 3)\) we obtain a representation of \(G(\mathbb{R})\) on \(V^{\Lambda}_{+}\) with trivial central character, which we denote by \(D^{\Lambda}\). We have **Proposition 4.6**.: _Let \(\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}\) be holomorphic with \(k_{1}\equiv k_{2}\ (\text{mod}\ 3).\) Let \((D^{\Lambda},V^{\Lambda}_{+})\) be the representation of \(G(\mathbb{R})\) defined as above. Then \(D^{\Lambda}\) is irreducible and square-integrable. Furthermore, every square-integrable holomorphic representation of \(G(\mathbb{R})\) is of the form \(D^{\Lambda}\otimes\chi\) for some holomorphic \(\Lambda\) and a unitary character \(\chi.\)_ Now we compute the matrix coefficient of \((D^{\Lambda},V^{\Lambda}_{+}).\) For \(g\in G(\mathbb{R})\) we set \[\det g=e^{i\theta_{g}}\in U(1)\] with \(-\pi<\theta_{g}\leq\pi\) uniquely determined.
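The normalization \(\theta\mapsto(\theta_{\circ},k_{z})\) above is elementary but easy to get wrong. The following sketch (a numerical aside, not part of the argument; the function name is ours) computes \(\theta_{\circ}\in(-\pi/3,\pi/3]\) and \(k_{z}\in\{-1,0,1\}\) from \(\theta\in(-\pi,\pi]\) and checks the factorization \(z=z_{\circ}e^{2\pi k_{z}i/3}\) on the unit circle.

```python
import cmath
import math

def central_decomposition(theta):
    """Given z = diag(e^{i theta}, e^{i theta}, e^{i theta}) with
    -pi < theta <= pi, return (theta_0, k_z) with theta_0 in (-pi/3, pi/3]
    and k_z in {-1, 0, 1} such that 3*theta/(2*pi) = 3*theta_0/(2*pi) + k_z,
    i.e. z = z_0 * e^{2 pi i k_z / 3} as in (4.16)."""
    t = 3.0 * theta / (2.0 * math.pi)      # lies in (-3/2, 3/2]
    k_z = math.ceil(t - 0.5)               # unique integer with t - k_z in (-1/2, 1/2]
    theta_0 = theta - 2.0 * math.pi * k_z / 3.0
    return theta_0, k_z

for theta in [-3.0, -1.0, 0.0, 0.5, 2.5, math.pi]:
    theta_0, k_z = central_decomposition(theta)
    assert -math.pi / 3 < theta_0 <= math.pi / 3 + 1e-12
    assert k_z in (-1, 0, 1)
    # the factorization z = z_0 * e^{2 pi i k_z / 3} holds
    assert abs(cmath.exp(1j * theta)
               - cmath.exp(1j * theta_0) * cmath.exp(2j * math.pi * k_z / 3)) < 1e-12
```

In particular, \(k_{z}\) ranges over \(\{-1,0,1\}\) because \(3\theta/2\pi\in(-3/2,3/2]\) while \(3\theta_{\circ}/2\pi\in(-1/2,1/2]\).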
**Lemma 4.7**.: _Let \(g=(g_{ij})_{1\leq i,j\leq 3}\in G(\mathbb{R})\) and let \(\phi_{\circ}\) be as in (4.12). Then_ \[\langle D^{\Lambda}(g)\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}=\frac{(\overline{g}_{11}-\overline{g}_{13}-\overline{g}_{31}+\overline{g}_{33})^{k_{1}}g_{22}^{k_{2}}}{2^{k_{1}}\cdot(\det g)^{\frac{k_{2}-k_{1}}{3}}}. \tag{4.17}\] Proof.: Denote by \(z_{g}=\text{diag}(e^{i\theta_{g}/3},e^{i\theta_{g}/3},e^{i\theta_{g}/3})\in Z_{G}(\mathbb{R})^{+}.\) Then \(z_{g}^{-1}\cdot g\in G^{1}(\mathbb{R}).\) We have \[\langle D^{\Lambda}(g)\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}=\langle D^{\Lambda}_{+}(z_{g}^{-1}\cdot\text{B}g\text{B})\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}.\] Note that \[\text{B}\left(\begin{array}{ccc}g_{11}&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{array}\right)\text{B}=\left(\begin{array}{ccc}\frac{g_{11}+g_{13}+g_{31}+g_{33}}{2}&\frac{g_{12}+g_{32}}{\sqrt{2}}&\frac{g_{11}-g_{13}+g_{31}-g_{33}}{2}\\ \frac{g_{21}+g_{23}}{\sqrt{2}}&g_{22}&\frac{g_{21}-g_{23}}{\sqrt{2}}\\ \frac{g_{11}+g_{13}-g_{31}-g_{33}}{2}&\frac{g_{12}-g_{32}}{\sqrt{2}}&\frac{g_{11}-g_{13}-g_{31}+g_{33}}{2}\end{array}\right).\] Hence it follows from Proposition 4.5 that \[\langle D^{\Lambda}_{+}(z_{g}^{-1}\cdot\text{B}g\text{B})\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}=\frac{e^{-il\theta_{g}/3}(\overline{g}_{11}-\overline{g}_{13}-\overline{g}_{31}+\overline{g}_{33})^{k_{1}}g_{22}^{k_{2}}}{2^{k_{1}}}.\] Then (4.17) follows. #### 4.1.5. Choice of the archimedean component Recall that \(E=\mathbb{Q}(\sqrt{-D})\) is an imaginary quadratic extension. Let \(D_{E}\) be its fundamental discriminant.
Let \[g_{E}=\operatorname{diag}(|D_{E}|^{1/4},1,|D_{E}|^{-1/4})\in G(\mathbb{R});\] we set \[f_{\infty}(g):=\langle D^{\Lambda}(g_{E}^{-1}gg_{E})\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}=\langle D^{\Lambda}(g)D^{\Lambda}(g_{E})\phi_{\circ},D^{\Lambda}(g_{E})\phi_{\circ}\rangle_{\Lambda}, \tag{4.18}\] or more explicitly \[f_{\infty}(g)=\frac{e^{-il\theta_{g}/3}(\overline{g}_{11}-\overline{g}_{13}|D_{E}|^{-1/2}-\overline{g}_{31}|D_{E}|^{1/2}+\overline{g}_{33})^{k_{1}}g_{22}^{k_{2}}}{2^{k_{1}}}, \tag{4.19}\] where the notations are the same as those in Lemma 4.7 and \(l=k_{2}-k_{1}\). #### 4.1.6. Restriction of matrix coefficients As we explain below we have an isomorphism \(\operatorname{SL}(2,\mathbb{R})\xrightarrow{\sim}SU(W)(\mathbb{R})\) given by \[\iota_{E}:\ \begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}(2,\mathbb{R})\mapsto g_{E}^{-1}\begin{pmatrix}a&&-b\Delta\\ &1&\\ -c\Delta^{-1}&&d\end{pmatrix}g_{E}\in SU(W)(\mathbb{R}). \tag{4.20}\] where \(\Delta=i|D_{E}|^{1/2}\). Under \(\iota_{E}\), the maximal compact subgroup \(\operatorname{SO}_{2}(\mathbb{R})\) is mapped to the maximal compact subgroup \(K^{\prime}_{\infty}\) whose elements are given by \[\kappa_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\stackrel{{\iota_{E}}}{{\longmapsto}}\begin{pmatrix}\cos\theta&&i\sin\theta\\ &1&\\ i\sin\theta&&\cos\theta\end{pmatrix}=\operatorname{B}\begin{pmatrix}e^{i\theta}&&\\ &1&\\ &&e^{-i\theta}\end{pmatrix}\operatorname{B},\] where \(\operatorname{B}\) is defined in (4.2), and \(\theta\in[-\pi,\pi)\).
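As a quick sanity check on the weight (a numerical aside, not part of the argument; `F` below implements the classical matrix coefficient \(F_{k}(g)=2^{k}/(a+d-i(b-c))^{k}\), recalled as (4.21) in the proof of Lemma 4.8 below), the coefficient transforms under the rotation \(\kappa_{\theta}\) by \(e^{-ik\theta}\), which is exactly the weight-\(k\) transformation rule:

```python
import cmath
import math

def F(k, a, b, c, d):
    """Matrix coefficient of the normalized lowest weight vector of the
    weight-k holomorphic discrete series of SL(2, R):
    F_k(g) = 2^k / (a + d - i(b - c))^k."""
    return 2.0**k / (a + d - 1j * (b - c))**k

for k in (2, 3, 5):
    for theta in (-2.0, -0.3, 0.0, 1.1, 2.7):
        a, b, c, d = math.cos(theta), -math.sin(theta), math.sin(theta), math.cos(theta)
        # a + d - i(b - c) = 2 cos(theta) + 2i sin(theta) = 2 e^{i theta},
        # hence F_k(kappa_theta) = e^{-i k theta}.
        assert abs(F(k, a, b, c, d) - cmath.exp(-1j * k * theta)) < 1e-9
```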
**Lemma 4.8**.: _The restriction to \(SU(W)(\mathbb{R})\) of the matrix coefficients \(f_{\infty}(g)\) is, via the isomorphism (4.20), a matrix coefficient of the holomorphic discrete series on \(\operatorname{SL}(2,\mathbb{R})\) of weight \(-k_{1}\) (recall that \(-k_{1}\geq k_{2}+2\) and \(k_{2}\geq 0\))_ Proof.: Let \(\pi_{k}^{+}\) be the holomorphic discrete series of weight \(k>1\) on \(\operatorname{SL}(2,\mathbb{R}).\) Let \(F_{k}\) be the matrix coefficient of its normalized lowest weight vector \(v_{k}^{\circ}\). It is given explicitly for \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}(2,\mathbb{R})\) by (see [10, Prop. 14.1]) \[F_{k}(g):=\langle\pi_{k}^{+}(g)v_{k}^{\circ},v_{k}^{\circ}\rangle=\int_{ \mathbb{H}}\pi_{k}^{+}(g)v_{k}^{\circ}(z)\overline{v_{k}^{\circ}(z)}y^{k} \frac{dxdy}{y^{2}}=\frac{2^{k}}{(a+d-i(b-c))^{k}}, \tag{4.21}\] where \(z=x+iy\in\mathbb{H}\) the upper half plane. By Lemma 4.7 we have for \(g\in\operatorname{SL}(2,\mathbb{R})\) \[f_{\infty}(\iota_{E}(g))=F_{-k_{1}}(g)=\frac{2^{|k_{1}|}}{(a+d-ib+ic)^{|k_{1}| }}. \tag{4.22}\] ### Non-archimedean components Let \(V=Ee_{1}\oplus Ee_{0}\oplus Ee_{-1}.\) Note that \(V\) is a 3-dimensional Hermitian space with respect to \(J\) and \(U(V)=G(\mathbb{Q}).\) Let \[L=\mathcal{O}_{E}e_{1}\oplus\mathcal{O}_{E}e_{0}\oplus\mathcal{D}_{E}^{-1}e_{ -1}\subseteq V.\] Then \(L\) is a lattice of \(V\). Its \(\mathbb{Z}\)-dual lattice is \[L^{*}=\big{\{}v\in V:\ \operatorname{tr}_{E/\mathbb{Q}}\langle v,L\rangle_{J} \subset\mathbb{Z}\big{\}}=\mathcal{O}_{E}e_{1}\oplus\mathcal{D}_{E}^{-1}e_{ 0}\oplus\mathcal{D}_{E}^{-1}e_{-1}.\] One can verify that \(L\) is an \(\mathcal{O}_{E}\)-module of full rank equipped with \(\langle\cdot,\cdot\rangle_{J}.\) Since for \(v,v^{\prime}\in L\), \(\langle v,v^{\prime}\rangle_{J}\in\mathcal{D}_{E}^{-1}\), and \(\langle v,v\rangle_{J}\in\mathbb{Z}\), the lattice \(L\) is integral and even. 
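For concreteness, here is the computation behind the displayed formula for \(L^{*}\), under the assumption (suggested by the shapes of \(L\) and \(L^{*}\), but not spelled out above) that the Gram matrix of \(\langle\cdot,\cdot\rangle_{J}\) in the basis \((e_{1},e_{0},e_{-1})\) is the antidiagonal matrix with entries \(\langle e_{1},e_{-1}\rangle_{J}=\langle e_{0},e_{0}\rangle_{J}=\langle e_{-1},e_{1}\rangle_{J}=1\). For \(v=xe_{1}+ye_{0}+ze_{-1}\), the condition \(\operatorname{tr}_{E/\mathbb{Q}}\langle v,L\rangle_{J}\subset\mathbb{Z}\) then reads \[\operatorname{tr}_{E/\mathbb{Q}}(z\overline{a}),\ \operatorname{tr}_{E/\mathbb{Q}}(y\overline{b}),\ \operatorname{tr}_{E/\mathbb{Q}}(x\overline{c})\in\mathbb{Z}\quad\text{for all }a,b\in\mathcal{O}_{E},\ c\in\mathcal{D}_{E}^{-1}.\] Since the trace-dual of \(\mathcal{O}_{E}\) is \(\mathcal{D}_{E}^{-1}\) and that of \(\mathcal{D}_{E}^{-1}\) is \(\mathcal{O}_{E}\) (both being stable under conjugation), this gives \(x\in\mathcal{O}_{E}\) and \(y,z\in\mathcal{D}_{E}^{-1}\), recovering \(L^{*}=\mathcal{O}_{E}e_{1}\oplus\mathcal{D}_{E}^{-1}e_{0}\oplus\mathcal{D}_{E}^{-1}e_{-1}\).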
Let \[G(\mathbb{Z}):=U(L)\] be the group of isometries preserving \(L\). Then \(U(L)\) is an arithmetic subgroup of \(G(\mathbb{Q}).\) Explicitly, we have \[G(\mathbb{Z})=\{g\in G(\mathbb{Q}):\ g.L=L\}=G(\mathbb{Q})\cap\Bigg{\{}\left(\begin{matrix}\mathcal{O}_{E}&\mathcal{O}_{E}&\mathcal{D}_{E}\\ \mathcal{O}_{E}&\mathcal{O}_{E}&\mathcal{D}_{E}\\ \mathcal{D}_{E}^{-1}&\mathcal{D}_{E}^{-1}&\mathcal{O}_{E}\end{matrix}\right)\Bigg{\}}. \tag{4.23}\] Let \[G(\widehat{\mathbb{Z}}):=\big{\{}g\in G(\mathbb{A}_{f}):\ g.L\otimes_{\mathbb{Z}}\widehat{\mathbb{Z}}=L\otimes_{\mathbb{Z}}\widehat{\mathbb{Z}}\big{\}}=\prod_{p}G(\mathbb{Z}_{p}).\] Moreover, recall (cf. §2.2.2) that for any split prime \(p\), we have fixed a place \(\mathfrak{p}\) above \(p\) so that \[G(\mathbb{Q}_{p})\simeq\operatorname{GL}(3,\mathbb{Q}_{p}).\] Since \(\mathcal{O}_{E,\mathfrak{p}}\simeq\mathbb{Z}_{p}\) we have \[L_{\mathfrak{p}}\simeq\mathbb{Z}_{p}e_{1}\oplus\mathbb{Z}_{p}e_{0}\oplus\mathbb{Z}_{p}e_{-1}\] and under the above isomorphism we have \[G(\mathbb{Z}_{p})\simeq\operatorname{GL}_{3}(\mathbb{Z}_{p}).\] #### 4.2.1. The open compact \(K_{f}(N)\) Let \(N\) be either \(1\) or a positive odd prime that remains inert in \(E\). Let \[K_{f}(N):=\prod_{p<\infty}K_{p}(N)\subset G(\mathbb{A}_{\operatorname{fin}}),\] be the open compact subgroup whose local components \(K_{p}(N)\) are defined as follows: _The case \(p=2\)._ Let \(v\) be the place above \(2\) (when \(2\) is unramified, \(v=2\)).
Choose \(\lambda\in E_{v}\) such that \[\operatorname{tr}_{E_{v}/\mathbb{Q}_{2}}(\lambda)=1\text{ and }\|\lambda\|=\min_{\begin{subarray}{c}x\in E_{v}\\ \operatorname{tr}_{E_{v}/\mathbb{Q}_{2}}(x)=1\end{subarray}}\|x\|\] with \(\|x\|=2^{-\nu_{v}(x)}.\) Let \(K_{2}\) be the stabilizer in \(G(\mathbb{Q}_{2})\) of the lattice \[\big{\{}ae_{1}+be_{0}+ce_{-1},\ a,b,\lambda c\in\mathcal{O}_{E_{v}}\big{\}}.\] Then \(K_{2}\) is a special maximal compact subgroup according to [20] and we set \[K_{2}(N)=K_{2}.\] _The generic case._ If \(p\nmid 2N\) (that is, \(p\) is either ramified, split or inert, and does not divide \(N\)) we set \[K_{p}(N)=K_{p}:=G(\mathbb{Z}_{p}).\] _The case \(p=N\)._ The prime \(p\) is then inert and we set \[K_{p}(N):=I_{p}=\big{\{}g=(g_{i,j})\in G(\mathbb{Z}_{p}),\ g\equiv\begin{pmatrix}*&*&*\\ 0&*&*\\ 0&0&*\end{pmatrix}\,(\operatorname{mod}p)\big{\}} \tag{4.24}\] the Iwahori subgroup of \(K_{p}=G(\mathbb{Z}_{p})\): the inverse image of the Borel subgroup \(B(\mathbb{F}_{p})\subset G(\mathbb{F}_{p})\) under the reduction modulo \(p\) map. ### Definition of \(f^{\mathfrak{n}}\) We can now define the non-archimedean components \(f^{\mathfrak{n}}_{p}\) of \(f^{\mathfrak{n}}\) in (4.1). Let \[\ell=\prod_{p}p^{r_{p}}\geq 1 \tag{4.25}\] be an integer whose prime divisors (if any) \(p\) are inert primes and coprime with \(N\). #### 4.3.1. The generic case \(p\nmid\ell N\) We set \[f_{p}:=\frac{1}{\mu(K_{p})}\cdot\mathbf{1}_{K_{p}}; \tag{4.26}\] here \(\mu(K_{p})\) is the volume of \(K_{p}\) with respect to the Tamagawa measure. #### 4.3.2. The case \(p=N\) Recall that \(K_{p}(N)=I_{p}\) is the Iwahori subgroup and we choose the normalized characteristic function \[f_{p}=\frac{1}{\mu(I_{p})}\cdot\mathbf{1}_{I_{p}}.\] #### 4.3.3.
If \(p\mid\ell\) If \(p\mid\ell\) (in particular, \(p\) is inert and coprime with \(N\)) we take \(f_{p}\) to be the compactly supported bi-\(G(\mathbb{Z}_{p})\)-invariant characteristic function \[f_{p}=\mathbf{1}_{G(\mathbb{Z}_{p})A^{r_{p}}G(\mathbb{Z}_{p})} \tag{4.27}\] where for any integer \(r\geq 0\) we have set \[A^{r}:=\begin{pmatrix}p^{r}&&\\ &1&\\ &&p^{-r}\end{pmatrix}.\] #### 4.3.4. An extra twist for \(p=N^{\prime}\) For \(p=N^{\prime}\) a prime split in \(E\), we define \(\mathfrak{n}_{p}\in G(\mathbb{Q}_{p})\) to be the element corresponding to the matrix \[\mathfrak{n}_{p}\simeq\begin{pmatrix}1&&p^{-1}\\ &1&\\ &&1\end{pmatrix}\in\operatorname{GL}(3,\mathbb{Q}_{p})\] under the isomorphism (2.5). Let \(w^{\prime}\) be the Weyl element \(w^{\prime}=\begin{pmatrix}1&&\\ &&1\\ &1&\end{pmatrix}\) fixing \(e_{1}\) and switching \(e_{0}\) and \(e_{-1}\). We then define for \(p|N^{\prime}\) \[\widetilde{\mathfrak{n}}_{p}=w^{\prime}\mathfrak{n}_{p}w^{\prime}\simeq\begin{pmatrix}1&p^{-1}&\\ &1&\\ &&1\end{pmatrix}\] and set \[f_{p}^{\mathfrak{n}_{p}}:\ x\in G(\mathbb{Q}_{p})\mapsto f_{p}(\widetilde{\mathfrak{n}}_{p}^{-1}x\widetilde{\mathfrak{n}}_{p}). \tag{4.28}\] _Remark 4.2_.: The introduction of this extra twist is absolutely crucial: without it, the spectral side of the relative trace formula would select the spherical vector in \(\pi_{p}\), and the local period integral at \(p\) would vanish. See Remark 5.8. ### Choice of the global test function We then set \[f=f_{\infty}.\prod_{p}f_{p},\] \[f^{\mathfrak{n}}=f_{\infty}.\prod_{p|N^{\prime}}f_{p}^{\mathfrak{n}_{p}}.\prod_{p\nmid N^{\prime}}f_{p}.
\tag{4.29}\] Alternatively, if we set, for any place \(v\) \[\widetilde{\mathfrak{n}}_{v}=\begin{cases}w^{\prime}\mathfrak{n}_{p}w^{\prime}\simeq\begin{pmatrix}1&p^{-1}&\\ &1&\\ &&1\end{pmatrix}&\text{ if }v=p=N^{\prime},\\ 1&\text{ otherwise,}\end{cases} \tag{4.30}\] and \[\widetilde{\mathfrak{n}}=(\widetilde{\mathfrak{n}}_{v})_{v}\in G(\mathbb{A}),\] we have \[f^{\mathfrak{n}}:x\mapsto f(\widetilde{\mathfrak{n}}^{-1}x\widetilde{\mathfrak{n}}). \tag{4.31}\] ### Cusp Forms on \(U(W)\) In this section, we discuss the cuspidal representation \[\pi^{\prime}\simeq\pi^{\prime}_{\infty}\otimes\bigotimes_{p}^{\prime}\pi^{\prime}_{p}\] of \(G^{\prime}(\mathbb{A})=U(W)(\mathbb{A})\) and the associated cusp form \(\varphi^{\prime}\in\pi^{\prime}\). We recall that we want \(\pi^{\prime}\) to have trivial central character, with archimedean component \(\pi^{\prime}_{\infty}\) isomorphic to the holomorphic discrete series of weight \(k\); for every \(p|N^{\prime}\), its \(p\)-component \(\pi^{\prime}_{p}\) is isomorphic to the Steinberg representation \(\mathrm{St}_{p}\), and for any prime \(p\) not dividing \(N^{\prime}\), \(\pi^{\prime}_{p}\) is unramified. Using the trace formula, one can show that such \(\pi^{\prime}\) exists (see below); alternatively one can construct \(\pi^{\prime}\) "explicitly" by functoriality: let \(\pi_{1}\) be a cuspidal automorphic representation of \(\mathrm{GL}(2,\mathbb{A})\) with trivial central character such that * \(\pi_{1\infty}\simeq\pi^{+}_{k}\) is the holomorphic discrete series of weight \(k\), * for \(p=N^{\prime}\), \(\pi_{1p}\) is the Steinberg representation, * for \(p\nmid N^{\prime}\), \(\pi_{1p}\) is unramified. Let \(\pi_{1,E}\) be the base change of \(\pi_{1}\) to \(E\); it follows from the works of Flicker and Rogawski ([11, 12]) that \(\pi_{1,E}\) descends to an automorphic cuspidal representation of \(G^{\prime}(\mathbb{A})\) with trivial central character which has the required local properties.
Indeed, because \(\pi_{1}\) is self-dual and \(\pi_{1,E}\) is by construction invariant under the complex conjugation \(\sigma_{E}\), the representation \(\pi_{1,E}\) is conjugate dual: \[\pi^{\vee}_{1,E}\simeq\pi_{1,E}\simeq\pi_{1,E}\circ\sigma_{E}.\] Since \(W\) is even dimensional, it is sufficient to see that \(\pi_{1,E}\) is conjugate symplectic [10] or in other terms that the automorphic induction \(\mathrm{Ind}^{\mathbb{Q}}_{E}(\pi_{1,E})\) is symplectic. We can verify this by considering the global 2-dimensional Galois representation \(\rho^{\prime}\) attached to \(\pi^{\prime}\) and showing that the Galois representation \(\mathrm{Ind}^{\mathbb{Q}}_{E}(\mathrm{res}_{E}(\rho^{\prime}))\) is symplectic: its exterior square contains the trivial representation. We have \[\mathrm{Ind}^{\mathbb{Q}}_{E}(\mathrm{res}_{E}(\rho^{\prime}))\simeq\rho^{\prime}\oplus\rho^{\prime}.\eta\] (\(\eta=\eta_{E}\) is the quadratic character corresponding to \(E/\mathbb{Q}\)) so that \[\Lambda^{2}(\rho^{\prime}\oplus\rho^{\prime}.\eta)\simeq\Lambda^{2}(\rho^{\prime})\oplus\Lambda^{2}(\rho^{\prime}.\eta)\oplus(\rho^{\prime}\otimes\rho^{\prime}.\eta)=1\oplus 1\oplus\mathrm{sym}_{2}(\rho^{\prime}).\eta\oplus\eta\] since the determinant of \(\rho^{\prime}\) is trivial. Conversely one can show using [12] that any such representation \(\pi^{\prime}\) can be obtained in that way. #### 4.5.1. The automorphic form \(\varphi^{\prime}\) We choose \(\varphi^{\prime}\) to be the "newform" on \(U(W)(\mathbb{A})\) corresponding to a pure tensor \[\varphi^{\prime}\simeq\otimes_{v}\xi^{\prime}_{v}\in\bigotimes_{v}^{\prime}\pi^{\prime}_{v}\] where * \(\xi^{\prime}_{\infty}\in\pi^{\prime}_{\infty}\) is a vector of lowest weight \(k\) (i.e.
is multiplied by \(e^{-ik\theta}\) under the action of the matrix \(\kappa(\theta)=\mathrm{diag}(e^{i\theta},1,e^{-i\theta})\in SU(W)(\mathbb{R})\)), * for every prime \(p\), \(\xi^{\prime}_{p}\) is invariant under the open-compact subgroup \(K^{\prime}_{p}(N^{\prime})\) defined below. Regarding the last point we set \[G^{\prime}(\widehat{\mathbb{Z}})=\prod_{p}G^{\prime}(\mathbb{Z}_{p})=G^{\prime}(\mathbb{A}_{\mathrm{fin}})\cap G(\widehat{\mathbb{Z}})\] and define \[K^{\prime}_{f}(N^{\prime})=\prod_{p}K^{\prime}_{p}(N^{\prime})\subset G^{\prime}(\widehat{\mathbb{Z}}) \tag{4.32}\] where * If \(p\) splits in \(E\), (4.33) \[K^{\prime}_{p}(N^{\prime}):=\Big{\{}g=\begin{pmatrix}a&0&b\\ 0&1&0\\ c&0&d\end{pmatrix}\in G^{\prime}(\mathbb{Z}_{p}):\ c\in N^{\prime}\mathbb{Z}_{p}\Big{\}}\] * If \(p\) is inert in \(E\), (4.34) \[K^{\prime}_{p}(N^{\prime}):=\Big{\{}g=\begin{pmatrix}a&0&b\\ 0&1&0\\ c&0&d\end{pmatrix}\in G^{\prime}(\mathbb{Z}_{p}):\ c\in N^{\prime}\mathcal{O}_{E_{p}}\Big{\}}\] * If \(p\) is ramified in \(E\), \(K^{\prime}_{p}(N^{\prime})=G^{\prime}(\mathbb{Z}_{p})\). _Remark 4.3_.: The subgroup \(G^{\prime}(\widehat{\mathbb{Z}})\) is the stabilizer in \(G^{\prime}(\mathbb{A}_{\rm fin})\) of the hermitian \(\mathcal{O}_{E}\)-submodule generated by \(e_{1}\) and \(\Delta^{-1}e_{-1}\), \[L^{\prime}=\mathcal{O}_{E}e_{1}\oplus\mathcal{D}_{E}^{-1}e_{-1}\subseteq W.\] In other terms \(G^{\prime}(\widehat{\mathbb{Z}})\) is the closure in \(G^{\prime}(\mathbb{A}_{\rm fin})\) of \(U(L^{\prime})\), the stabilizer of \(L^{\prime}\) in \(U(W)(\mathbb{Q})\). #### 4.5.2. The exceptional isomorphism To be more concrete we recall that we have an exceptional isomorphism of \(\mathbb{Q}\)-algebraic groups \[\operatorname{SU}(W)\simeq\operatorname{SL}(2)\] given by \[\iota_{E}=\iota:\ \ \begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}(2)\mapsto\begin{pmatrix}a&0&-b\Delta\\ 0&1&0\\ -c\Delta^{-1}&0&d\end{pmatrix}\in SU(W).
\tag{4.35}\] Under this isomorphism, the image of the full modular group \(\operatorname{SL}(2,\mathbb{Z})\) is given by \[\iota(\operatorname{SL}(2,\mathbb{Z}))=SU(L^{\prime})=G^{\prime}(\widehat{\mathbb{Z}})\cap SU(W)(\mathbb{Q})\] and more generally, if \(N^{\prime}\) is a prime coprime with \(D_{E}\), the image of the usual congruence subgroup of level \(N^{\prime}\) \[\Gamma_{0}(N^{\prime})=\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL(2,\mathbb{Z}),\ N^{\prime}|c\},\] is \[\iota(\Gamma_{0}(N^{\prime}))=\big{\{}\begin{pmatrix}a&0&-b\Delta\\ 0&1&0\\ -c\Delta^{-1}&0&d\end{pmatrix},\ \begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL(2,\mathbb{Z}),\ N^{\prime}|c\big{\}}=K^{\prime}_{f}(N^{\prime})\cap SU(W)(\mathbb{Q}).\] Given our automorphic form \(\varphi^{\prime}:G^{\prime}(\mathbb{A})\to\mathbb{C}\) as above, let \(\phi^{\prime}:\mathbb{H}\to\mathbb{C}\) be the function on the upper half plane defined by \[\phi^{\prime}(z)=j(g_{\infty},i)^{k}\varphi^{\prime}(\iota(g_{\infty})) \tag{4.36}\] for \(g_{\infty}\in\operatorname{SL}(2,\mathbb{R})\) such that \(g_{\infty}.i=z\), where \[j(g,z)=(\det g)^{-1/2}(cz+d)\] is the usual automorphy factor on \(\operatorname{GL}(2,\mathbb{R})^{+}\times\mathbb{H}\). The invariance of \(\varphi^{\prime}\) along with the strong approximation property for \(\operatorname{SL}(2)\) imply that the function \(\phi^{\prime}\) is a well-defined holomorphic cusp form of weight \(k\) with trivial nebentypus and a newform of level \(N^{\prime}\). Now using again the strong approximation property for \(\operatorname{SL}(2)\) one can construct out of \(\phi^{\prime}\) an automorphic form which (abusing notation) we denote \[\varphi:\operatorname{GL}(2,\mathbb{Q})\backslash\operatorname{GL}(2,\mathbb{A})\to\mathbb{C}\] of weight \(k\), i.e.
which multiplies by the character \(e^{-ik\theta}\) under the left action of \[\kappa_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\in\operatorname{SO}_{2}(\mathbb{R})\] and is invariant under the center \(Z_{\operatorname{GL}(2)}(\mathbb{A})\) and under the open compact congruence subgroup of level \(N^{\prime}\) \[K_{0,f}(N^{\prime})=\{g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{GL}(2,\widehat{\mathbb{Z}}),\ c\in N^{\prime}\widehat{\mathbb{Z}}\}.\] The automorphic form \(\varphi^{\prime}\) generates the representation denoted (abusing notation) \(\pi^{\prime}\) and is a lowest weight newvector in it. We assume that \(\varphi^{\prime}\) is \(L^{2}\)-normalized: \[\langle\varphi^{\prime},\varphi^{\prime}\rangle=\int_{[G^{\prime}]}|\varphi^{\prime}(g^{\prime})|^{2}dg^{\prime}=1 \tag{4.37}\] where \(dg^{\prime}\) denotes the Tamagawa measure. With this normalization one has for the classical form \(\phi^{\prime}\) \[\langle\phi^{\prime},\phi^{\prime}\rangle:=\frac{1}{\operatorname{vol}(\Gamma_{0}(N^{\prime})\backslash\mathbb{H})}\int_{\Gamma_{0}(N^{\prime})\backslash\mathbb{H}}y^{k}|\phi^{\prime}(z)|^{2}\frac{dxdy}{y^{2}}=c_{E} \tag{4.38}\] where \[\operatorname{vol}(\Gamma_{0}(N^{\prime})\backslash\mathbb{H})=\int_{\Gamma_{0}(N^{\prime})\backslash\mathbb{H}}\frac{dxdy}{y^{2}}=\frac{\pi}{3}N^{\prime}\prod_{p|N^{\prime}}(1+\frac{1}{p})\] and \(c_{E}>0\) depends only on \(E\). Consider its Fourier expansion \[\phi^{\prime}(z)=\sum_{n\geq 1}a_{n}e(nz)\] where, for any \(\tau\in\mathbb{H}\), \[a_{n}=a_{n}(\phi^{\prime}):=e^{2\pi n\operatorname{Im}\tau}\int_{0}^{1}\phi^{\prime}(\tau+x)e^{-2\pi inx}dx.
\tag{4.39}\] Since \(\phi^{\prime}\) is a newform, we also have \[a_{n}=a_{1}.n^{\frac{k-1}{2}}\lambda_{\pi_{1}}(n); \tag{4.40}\] here for any integer \(n\), \(\lambda_{\pi_{1}}(n)\) denotes the \(n\)-th coefficient of the Hecke \(L\)-function \(L(\pi_{1},s)\) (normalized analytically); it satisfies Deligne's bound \[|\lambda_{\pi_{1}}(n)|\leq d(n)=n^{o(1)} \tag{4.41}\] (\(d(n)\) denotes the divisor function) and its first coefficient \(a_{1}\) satisfies (see [11, (7)]) \[|a_{1}|^{2}=c_{E}\frac{2\pi^{3}}{3}\prod_{p|N^{\prime}}(1+\frac{1}{p})\frac{(4\pi)^{k-1}}{\Gamma(k)L(1,\operatorname{Ad},\pi^{\prime})} \tag{4.42}\] where \(L(s,\operatorname{Ad},\pi^{\prime})\) is the adjoint \(L\)-function. We have therefore \[|a_{n}|^{2}=c_{E}\frac{2\pi^{3}}{3}\frac{\prod_{p|N^{\prime}}(1+1/p)}{L(1,\operatorname{Ad},\pi^{\prime})}\frac{(4\pi)^{k-1}n^{k-1}}{\Gamma(k)}|\lambda_{\pi_{1}}(n)|^{2}. \tag{4.43}\] Also, since \(N^{\prime}\) is squarefree, \(L(s,\operatorname{Ad},\pi^{\prime})\) does not have a Landau-Siegel zero (see [11]) and one has \[L(1,\operatorname{Ad},\pi^{\prime})=(kN^{\prime})^{o(1)} \tag{4.44}\] so that \[|a_{n}|^{2}\leq(kN^{\prime})^{o(1)}\frac{(4\pi)^{k-1}n^{k-1}}{\Gamma(k)}n^{o(1)}. \tag{4.45}\] ## 5. **The Spectral Side** Let \(E/\mathbb{Q}\) be a quadratic extension and let \(W\subset V\) be Hermitian spaces of dimensions \(n\) and \(n+1\) over \(E\) respectively. Let \(G=U(V)\) and \(G^{\prime}=U(W)\) be the corresponding unitary groups, where \(G^{\prime}\) is embedded into \(G\) in the obvious way (as the stabilizer of \(W^{\perp}\subset V\)). Let \((\pi,V_{\pi})\) (resp. \((\pi^{\prime},V_{\pi^{\prime}})\)) be cuspidal representations on \(G(\mathbb{A})\) (resp. \(G^{\prime}(\mathbb{A})\)).
Define the global Petersson pairing \(\langle\cdot,\cdot\rangle\) on \(G(\mathbb{A})\) by \[\langle\varphi_{1},\varphi_{2}\rangle=\int_{G(\mathbb{Q})\backslash G(\mathbb{ A})}\varphi_{1}(g)\overline{\varphi_{2}(g)}dg,\] where \(dg\) denotes the Tamagawa measure on \(G(\mathbb{Q})\backslash G(\mathbb{A})\); one defines \(\langle\varphi_{1}^{\prime},\varphi_{2}^{\prime}\rangle\) on \(G^{\prime}(\mathbb{A})\) similarly. For any place \(v\) set \(G_{v}=G(\mathbb{Q}_{v})\), \(G_{v}^{\prime}=G^{\prime}(\mathbb{Q}_{v})\), and let \(dg_{v},\ dg_{v}^{\prime}\) be local Haar measures on \(G_{v},\ G_{v}^{\prime}\) such that \(\prod_{v}dg_{v}=dg\) and \(\prod_{v}dg_{v}^{\prime}=dg^{\prime}.\) Under the decompositions \(\pi=\otimes_{v}^{\prime}\pi_{v}\) and \(\pi^{\prime}=\otimes_{v}^{\prime}\pi_{v}^{\prime}\) we fix a decomposition of either of the global inner products \(\langle\cdot,\cdot\rangle\) into local ones as \[\langle\cdot,\cdot\rangle=\prod_{v}\langle\cdot,\cdot\rangle_{v}.\] ### Spectral Expansion Let \(k_{1},k_{2}\) be integers such that \(k_{2}\geq 2\), \(k_{1}+k_{2}+2<0\) and \(k_{1}\equiv k_{2}\,(\operatorname{mod}3)\); let \[\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}\] as in §4.1 and let \(D^{\Lambda}\) be the corresponding holomorphic discrete series of \(G(\mathbb{R})\). Let \[\mathcal{A}_{\Lambda}(N)=\{\pi=\pi_{\infty}\otimes\pi_{f}\in\mathcal{A}(G),\ \omega_{\pi}=\mathbf{1},\ \pi_{\infty}\simeq D^{\Lambda},\ \pi_{f}^{K_{f}(N)}\neq\{0\}\}\] be the set of automorphic representations of \(G(\mathbb{A})\) having trivial central character, whose archimedean component \(\pi_{\infty}\) is isomorphic to \(D^{\Lambda}\) and whose non-archimedean component \(\pi_{f}\) admits non-trivial \(K_{f}(N)\)-invariant vectors. The set \(\mathcal{A}_{\Lambda}(N)\) is finite and contains only cuspidal representations. 
For any \(\pi\in\mathcal{A}_{\Lambda}(N)\), the subspace of lowest weight vectors of \(\pi_{\infty}\simeq D^{\Lambda}\) is one dimensional; let \(v_{\pi_{\infty}}\neq 0\) be a vector generating it. We define the finite dimensional vector spaces of automorphic forms \[\mathcal{V}_{\pi,\Lambda}(N):=\operatorname{Im}(\mathbb{C}v_{\pi_{\infty}} \otimes\pi_{f}^{K_{f}(N)}\hookrightarrow L^{2}([G]))\] \[\mathcal{V}_{\Lambda}(N):=\bigoplus_{\pi\in\mathcal{A}_{\Lambda}(N)}\mathcal{ V}_{\pi,\Lambda}(N).\] Let \(\widetilde{\mathfrak{n}}\) be the matrix \[\widetilde{\mathfrak{n}}=\prod_{p|N^{\prime}}\begin{pmatrix}1&p^{-1}\\ &1\\ &&1\end{pmatrix},\] which we view as an element in \(G(\mathbb{A})\) (cf. §4.2); let \[\mathcal{V}_{\pi,\Lambda}^{\widetilde{\mathfrak{n}}}(N):=\pi(\widetilde{ \mathfrak{n}}).\mathcal{V}_{\pi,\Lambda}(N)=\big{\{}\pi(\widetilde{\mathfrak{n}} )\varphi:\ \varphi\in\mathcal{V}_{\pi,\Lambda}(N)\big{\}}\] be their transforms under the action of \(\widetilde{\mathfrak{n}}\) and \[\mathcal{V}_{\Lambda}^{\widetilde{\mathfrak{n}}}(N):=\bigoplus_{\pi\in\mathcal{A} _{\Lambda}(N)}\mathcal{V}_{\pi,\Lambda}^{\widetilde{\mathfrak{n}}}(N).\] We fix orthonormal bases of the \(\mathcal{V}_{\pi,\Lambda}(N)\) and \(\mathcal{V}_{\pi,\Lambda}^{\widetilde{\mathfrak{n}}}(N)\), \[\mathcal{B}_{\pi,\Lambda}(N)\text{ and }\mathcal{B}_{\pi,\Lambda}^{ \widetilde{\mathfrak{n}}}(N)=\pi(\widetilde{\mathfrak{n}})\mathcal{B}_{\pi, \Lambda}(N),\] and finally let \[\mathcal{B}_{\Lambda}(N)\text{ and }\mathcal{B}_{\Lambda}^{\widetilde{ \mathfrak{n}}}(N)\] be their unions over the \(\pi\in\mathcal{A}_{\Lambda}(N).\) **Lemma 5.1**.: _Let notations be as above. Let \(\ell\geq 1\) be an integer whose prime divisors are all inert and coprime with \(N\), and let \(f^{\mathfrak{n}}\) be the function defined in §4.4, (4.29). 
The image of \(R(f^{\mathfrak{n}})\) is contained in \(\mathcal{V}_{\Lambda}^{\widetilde{\mathfrak{n}}}(N)\): if \(\tilde{\varphi}\) is an automorphic form on \(G(\mathbb{Q})\backslash G(\mathbb{A})\), then \(R(f^{\mathfrak{n}})\tilde{\varphi}=0\) unless \(\tilde{\varphi}\in\mathcal{V}_{\Lambda}^{\widetilde{\mathfrak{n}}}(N).\) Moreover, for \(\pi\in\mathcal{A}_{\Lambda}(N)\) and \(\tilde{\varphi}\in\mathcal{V}_{\pi,\Lambda}^{\widetilde{\mathfrak{n}}}(N),\) we have_ \[R(f^{\mathfrak{n}})\tilde{\varphi}=\frac{1}{d_{\Lambda}}\lambda_{\pi}(f_{\ell})\tilde{\varphi}, \tag{5.1}\] _where \(d_{\Lambda}\) is the formal degree of \(D^{\Lambda}\) and_ \[\lambda_{\pi}(f_{\ell})=\prod_{p\mid\ell}\lambda_{\pi_{p}}(f_{p})\in\mathbb{C}\] _is a scalar depending on \(\pi_{\ell}=\otimes_{p\mid\ell}\pi_{p}\) and on the test functions \((f_{p})_{p\mid\ell}\)._ Proof.: Set \(R^{\widetilde{\mathfrak{n}}}(f)=\pi(\widetilde{\mathfrak{n}})^{-1}R(f^{ \mathfrak{n}})\pi(\widetilde{\mathfrak{n}}).\) Then \[R^{\widetilde{\mathfrak{n}}}(f)\varphi(x)=\int_{\overline{G}(\mathbb{A})}f( \widetilde{\mathfrak{n}}^{-1}y\widetilde{\mathfrak{n}})\varphi(x\widetilde{ \mathfrak{n}}^{-1}y\widetilde{\mathfrak{n}})dy=\int_{\overline{G}(\mathbb{A})}f (y)\varphi(xy)dy=\pi(f)\varphi(x).\] From the definition of \(f\), \[f=f_{\infty}\otimes\bigotimes_{p}f_{p}\in C^{\infty}(G(\mathbb{A})),\] where \(f_{\infty}\) is a matrix coefficient of \(D^{\Lambda}\) and each \(f_{p}\) is right \(K_{p}(N)\)-invariant, we see that the image of \(R^{\widetilde{\mathfrak{n}}}(f)\) is contained in \(\mathcal{V}_{\Lambda}(N)\) (in particular \(R^{\widetilde{\mathfrak{n}}}(f)\) is zero on the Eisenstein and on the residual spectrum: this comes from the choice of \(f_{\infty}\) as a matrix coefficient of the holomorphic discrete series \(D^{\Lambda}\)). 
For the same reason we have \[\pi_{\infty}(f_{\infty}).v_{\pi_{\infty}}=\frac{1}{d_{\Lambda}}v_{\pi_{\infty }}.\] Moreover, for \(p\mid\ell\), \(\pi_{p}\) is unramified and \(\pi_{p}^{K_{p}(N)}\) is one dimensional; since \(f_{p}\) is bi-\(K_{p}(N)\)-invariant, for any \(v_{p}\in\pi_{p}^{K_{p}(N)}\) one has \[\pi_{p}(f_{p}).v_{p}=\lambda_{\pi_{p}}(f_{p}).v_{p}.\] It follows that \(\mathcal{V}_{\pi,\Lambda}(N)\) is one dimensional, made of factorable vectors, and that for any \(\varphi\in\mathcal{V}_{\Lambda}(N)\) one has \[R(f)\varphi=\frac{1}{d_{\Lambda}}\prod_{p\mid\ell}\lambda_{\pi_{p}}(f_{p}) \varphi=\frac{1}{d_{\Lambda}}\lambda_{\pi}(f_{\ell})\varphi\] and that \[R(f^{\mathfrak{n}})\pi(\widetilde{\mathfrak{n}})\varphi=\frac{1}{d_{\Lambda}} \lambda_{\pi}(f_{\ell})\pi(\widetilde{\mathfrak{n}})\varphi,\] proving (5.1). By the spectral decomposition of the automorphic kernel and the above lemma, one has \[\sum_{\gamma\in G(\mathbb{Q})}f^{\mathfrak{n}}(x^{-1}\gamma y)=\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{\Lambda}(N)}\lambda_{\pi}(f_{\ell})\sum_{\tilde{\varphi}\in\mathcal{B}^{\widetilde{\mathfrak{n}}}_{\pi,\Lambda}(N)}\frac{\tilde{\varphi}(x)\overline{\tilde{\varphi}(y)}}{\langle\tilde{\varphi},\tilde{\varphi}\rangle}. \tag{5.2}\] Applying the expansion (5.2) in \(J(f^{\mathfrak{n}})\) (see (3.3)) we then conclude that \[J(f^{\mathfrak{n}}) =\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{\Lambda}(N)} \lambda_{\pi}(f_{\ell})\sum_{\tilde{\varphi}\in\mathcal{B}^{\widetilde{ \mathfrak{n}}}_{\pi,\Lambda}(N)}\frac{\mathcal{P}(\tilde{\varphi},\varphi^{ \prime})\overline{\mathcal{P}(\tilde{\varphi},\varphi^{\prime})}}{\langle \tilde{\varphi},\tilde{\varphi}\rangle}\] \[=\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{\Lambda}(N)} \lambda_{\pi}(f_{\ell})\sum_{\tilde{\varphi}\in\mathcal{B}^{\widetilde{ \mathfrak{n}}}_{\pi,\Lambda}(N)}\frac{\big{|}\mathcal{P}(\tilde{\varphi}, \varphi^{\prime})\big{|}^{2}}{\langle\tilde{\varphi},\tilde{\varphi}\rangle}, \tag{5.3}\] where \(\mathcal{P}(\tilde{\varphi},\varphi^{\prime})\) denotes the automorphic period integral \[\mathcal{P}(\tilde{\varphi},\varphi^{\prime})=\int_{G^{\prime}(\mathbb{Q}) \backslash G^{\prime}(\mathbb{A})}\tilde{\varphi}(g)\varphi^{\prime}(g)dg. 
\tag{5.4}\] Since \(\tilde{\varphi}\) and \(\varphi^{\prime}\) are cusp forms, \(\mathcal{P}(\tilde{\varphi},\varphi^{\prime})\) converges absolutely, and since \(\mathcal{V}^{\widetilde{\mathfrak{n}}}_{\Lambda}(N)\) is finite dimensional, the right hand side of (5.3) is absolutely convergent. Setting \[\mathcal{B}^{\widetilde{\mathfrak{n}}}_{\Lambda}(N)=\bigsqcup_{\pi\in \mathcal{A}_{\Lambda}(N)}\mathcal{B}^{\widetilde{\mathfrak{n}}}_{\pi,\Lambda} (N)\] and writing \(\lambda_{\tilde{\varphi}}(f_{\ell})=\lambda_{\pi}(f_{\ell})\) for \(\tilde{\varphi}\in\mathcal{B}^{\widetilde{\mathfrak{n}}}_{\pi,\Lambda}(N)\), (5.3) becomes **Lemma 5.2**.: _Let notations be as above. We have_ \[J(f^{\mathfrak{n}})=\frac{1}{d_{\Lambda}}\sum_{\tilde{\varphi}\in\mathcal{B}^{ \widetilde{\mathfrak{n}}}_{\Lambda}(N)}\lambda_{\tilde{\varphi}}(f_{\ell}) \frac{\big{|}\mathcal{P}(\tilde{\varphi},\varphi^{\prime})\big{|}^{2}}{ \langle\tilde{\varphi},\tilde{\varphi}\rangle}. \tag{5.5}\] ### The Ichino-Ikeda Conjecture for Unitary Groups In this section we review the global Ichino-Ikeda conjecture for automorphic forms on \(G\times G^{\prime}=U(V)\times U(W)\) (e.g., see [14], Conjecture 1.3), which is a refinement of the Gan-Gross-Prasad conjecture [10] in that it gives an explicit formula relating the periods to central values of Rankin-Selberg \(L\)-functions. We now recall the definition of the local analogs of the global period integral (5.4) (see the beginning of §5 for the notations). We recall that \(\pi\simeq\otimes_{v}\pi_{v}\) and \(\pi^{\prime}\simeq\otimes_{v}\pi^{\prime}_{v}\) denote suitable automorphic representations of \(U(V)\) and \(U(W)\) which we assume are everywhere tempered. Their respective base changes to \(\mathrm{GL}_{3,E}\) and \(\mathrm{GL}_{2,E}\) are denoted \(\pi_{E}\simeq\otimes_{v}\pi_{E_{v}}\) and \(\pi^{\prime}_{E}\simeq\otimes_{v}\pi^{\prime}_{E_{v}}\). 
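For orientation, here is the degree bookkeeping behind the Euler product of \(L(s,\pi_{E}\times\pi^{\prime}_{E})\): since \(\pi_{E}\) and \(\pi^{\prime}_{E}\) live on \(\mathrm{GL}_{3,E}\) and \(\mathrm{GL}_{2,E}\), each local factor at a prime \(\mathfrak{p}\) of \(E\) has degree \(3\times 2=6\) in \(\operatorname{Nr}(\mathfrak{p})^{-s}\), so the factor at a rational prime \(p\) has degree \(12\) in \(p^{-s}\), whether \(p\) splits (two primes of norm \(p\)) or is inert (one prime of norm \(p^{2}\)). A minimal symbolic sketch of this count, with placeholder Satake parameters not tied to any particular \(\pi\):

```python
import sympy as sp

X = sp.symbols('X')      # X plays the role of p^{-s}
a = sp.symbols('a1:4')   # placeholder Satake parameters of pi_E at a prime of E
b = sp.symbols('b1:3')   # placeholder Satake parameters of pi'_E at the same prime

def inverse_local_factor(norm_exp):
    # inverse of the Rankin-Selberg factor at a prime frakp of E with
    # Nr(frakp) = p**norm_exp; its variable is Nr(frakp)^{-s} = X**norm_exp
    Y = X**norm_exp
    return sp.expand(sp.Mul(*[1 - ai*bj*Y for ai in a for bj in b]))

# split prime: two primes of norm p (here, for simplicity, with the same
# placeholder parameters -- only the degrees matter for this check)
split_factor = sp.expand(inverse_local_factor(1)**2)
# inert prime: a single prime of norm p^2
inert_factor = inverse_local_factor(2)

print(sp.degree(split_factor, X), sp.degree(inert_factor, X))  # 12 12
```

In both cases the factor at \(p\) has degree \(12\) in \(p^{-s}\), as expected of a Rankin-Selberg \(L\)-function of a pair whose base change to \(\mathbb{Q}\) has degree \(6\times 2=12\).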
We denote by \[L(s,\pi_{E}\times\pi^{\prime}_{E})=\prod_{p}L_{p}(s,\pi_{E}\times\pi^{\prime} _{E})=\prod_{p}\prod_{\mathfrak{p}|p}L_{\mathfrak{p}}(s,\pi_{E}\times\pi^{ \prime}_{E})\] the finite part of their Rankin-Selberg \(L\)-function and by \[\Lambda(s,\pi_{E}\times\pi^{\prime}_{E})=L_{\infty}(s,\pi_{E}\times\pi^{ \prime}_{E})L(s,\pi_{E}\times\pi^{\prime}_{E})\] its completed version (see Prop. 5.8 for the exact expression of \(L_{\infty}(s,\pi_{E}\times\pi^{\prime}_{E})\)). As recalled in the introduction, it admits analytic continuation to \(\mathbb{C}\) and a functional equation \[\Lambda(s,\pi_{E}\times\pi^{\prime}_{E})=\varepsilon(\pi_{E}\times\pi^{\prime }_{E})C_{f}(\pi_{E}\times\pi^{\prime}_{E})^{1/2-s}\Lambda(1-s,\pi_{E}\times\pi^{ \prime}_{E}). \tag{5.6}\] In Proposition 5.9 below we prove that \[C_{f}(\pi_{E}\times\pi^{\prime}_{E})=N^{4}{N^{\prime}}^{6}\] and that the root number equals \[\varepsilon(\pi_{E}\times\pi^{\prime}_{E})=+1.\] Let \[\Delta_{G}=\Lambda(M_{G}^{\vee}(1),0)\] be the special (complete) \(L\)-value where \(M_{G}^{\vee}(1)\) is the twisted dual of the motive \(M_{G}\) associated to \(G\) by Gross [10]. Locally, for any place \(v\), we set \[\Delta_{G,v}=L_{v}(M_{G}^{\vee}(1),0).\] Explicitly, let \(\eta=\prod_{v}\eta_{v}\) denote the quadratic character of \(\mathbb{Q}^{\times}\backslash\mathbb{A}^{\times}\) associated to \(E/\mathbb{Q}\) by class field theory. We have \[\Delta_{G,v}=\prod_{j=1}^{3}L_{v}(j,\eta^{j})=L_{v}(1,\eta)L_{v}(2,\mathbf{1} )L_{v}(3,\eta),\] and \[\Delta_{G}=\Lambda(1,\eta)\Lambda(2,\mathbf{1})\Lambda(3,\eta).\] We set \[\Lambda(\pi,\pi^{\prime}):=\Delta_{G}\frac{\Lambda(1/2,\pi_{E}\times\pi^{ \prime}_{E})}{\Lambda(1,\mathrm{Ad},\pi_{E})\Lambda(1,\mathrm{Ad},\pi^{ \prime}_{E})} \tag{5.7}\] and for any place \(v\) we set \[L_{v}(\pi_{v},\pi^{\prime}_{v}):=\Delta_{G,v}\frac{L_{v}(1/2,\pi_{E_{v}}\times \pi^{\prime}_{E_{v}})}{L_{v}(1,\mathrm{Ad},\pi_{E_{v}})L_{v}(1,\mathrm{Ad}, \pi^{\prime}_{E_{v}})}. 
\tag{5.8}\] Note that, by temperedness, we have for any prime \(p\) \[L_{p}(\pi_{p},\pi^{\prime}_{p})=1+O(p^{-1/2}),\] where the implicit constant is absolute; moreover there exists an absolute constant \(C\geq 1\) such that for any prime \(p\) \[C^{-1}\leq L_{p}(\pi_{p},\pi^{\prime}_{p})\leq C. \tag{5.9}\] We also denote by \[L(\pi,\pi^{\prime}):=\frac{\Lambda(\pi,\pi^{\prime})}{L_{\infty}(\pi_{\infty},\pi^{\prime}_{\infty})} \tag{5.10}\] the "finite part" of the complete Euler product \(\Lambda(\pi,\pi^{\prime})\). Given any place \(v\) of \(\mathbb{Q}\) and any tuple of local vectors \[(\xi_{1,v},\xi_{2,v},\xi^{\prime}_{1,v},\xi^{\prime}_{2,v})\in\pi_{v}\times \pi_{v}\times\pi^{\prime}_{v}\times\pi^{\prime}_{v},\] the local period is defined formally by \[\mathcal{P}_{v}(\xi_{1,v},\xi_{2,v};\xi^{\prime}_{1,v},\xi^{\prime}_{2,v}):= \int_{G^{\prime}_{v}}\langle\pi_{v}(g_{v})\xi_{1,v},\xi_{2,v}\rangle_{v}\cdot \overline{\langle\pi^{\prime}_{v}(g_{v})\xi^{\prime}_{1,v},\xi^{\prime}_{2,v} \rangle_{v}}dg_{v};\] by a result of Harris [12] the integral \(\mathcal{P}_{v}(\xi_{1,v},\xi_{2,v};\xi^{\prime}_{1,v},\xi^{\prime}_{2,v})\) converges absolutely when both \(\pi_{v}\) and \(\pi^{\prime}_{v}\) are tempered, and \[\mathcal{P}_{v}(\xi_{1,v},\xi_{1,v};\xi^{\prime}_{1,v},\xi^{\prime}_{1,v})\geq 0. \tag{5.11}\] One then defines the unitarily and arithmetically normalized local periods as \[\mathcal{P}_{v}^{*}(\xi_{1,v},\xi_{2,v};\xi_{1,v}^{\prime},\xi_{2,v}^ {\prime}):= \frac{\mathcal{P}_{v}(\xi_{1,v},\xi_{2,v};\xi_{1,v}^{\prime},\xi_ {2,v}^{\prime})}{\langle\xi_{1,v},\xi_{2,v}\rangle_{v}\langle\xi_{1,v}^{\prime },\xi_{2,v}^{\prime}\rangle_{v}}, \tag{5.12}\] \[\mathcal{P}_{v}^{\natural}(\xi_{1,v},\xi_{2,v};\xi_{1,v}^{\prime}, \xi_{2,v}^{\prime}):= \frac{\mathcal{P}_{v}^{*}(\xi_{1,v},\xi_{2,v};\xi_{1,v}^{\prime },\xi_{2,v}^{\prime})}{L_{v}(\pi_{v},\pi_{v}^{\prime})}. \tag{5.13}\] According to Theorem 2.12 in [1], we have \[\mathcal{P}_{v}^{\natural}=1\] for almost all places and \[\prod_{v}\mathcal{P}_{v}^{\natural}:\ (V_{\pi}\boxtimes V_{\pi})\otimes(V_{\pi^{ \prime}}\boxtimes V_{\pi^{\prime}})\longrightarrow\mathbb{C}\] is a well-defined \(G(\mathbb{A})\times G^{\prime}(\mathbb{A})\)-invariant functional. The global Ichino-Ikeda conjecture for the unitary groups \(G\times G^{\prime}\) then provides an explicit constant of proportionality between \(|\mathcal{P}|^{2}\) and \(\prod_{v}\mathcal{P}_{v}^{\natural}\). It is now a theorem, due to the recent work [1]: **Theorem 5.3** ([1], Theorem 1.9).: _Let notation be as before. Assume \(\pi\) and \(\pi^{\prime}\) are tempered. Let \(\varphi_{1},\varphi_{2}\in V_{\pi},\,\varphi_{1}^{\prime},\varphi_{2}^{ \prime}\in V_{\pi^{\prime}}\) be factorable vectors. We have_ \[\frac{\mathcal{P}(\varphi_{1},\varphi_{1}^{\prime})\overline{\mathcal{P}( \varphi_{2},\varphi_{2}^{\prime})}}{\langle\varphi_{1},\varphi_{2}\rangle \langle\varphi_{1}^{\prime},\varphi_{2}^{\prime}\rangle}=\frac{1}{2}\cdot \Lambda(\pi,\pi^{\prime})\cdot\prod_{v}\mathcal{P}_{v}^{\natural}, \tag{5.14}\] _where the \(\mathcal{P}_{v}^{\natural}\) are defined by (5.13)._ #### 5.2.1. Explicitation of the Ichino-Ikeda formula In the next subsections, we make the right-hand side of formula (5.14) explicit for the pairs \[(\varphi_{1},\varphi_{1}^{\prime})=(\varphi_{2},\varphi_{2}^{\prime})=( \tilde{\varphi},\varphi^{\prime})\in\pi\times\pi^{\prime}\] with \(\tilde{\varphi}=\pi(\widetilde{\mathfrak{n}})\varphi\) appearing in (5.5) and \(\varphi^{\prime}\) as discussed in §4. 
Let us recall that * the representation \(\pi=\otimes_{v\leq\infty}^{\prime}\pi_{v}\) is a cuspidal representation of \(G(\mathbb{A})\) with trivial central character, of level \(N\) equal to \(1\) or to a prime inert in \(E\), such that \(\pi_{\infty}\simeq D^{\Lambda}\), with \[\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2},\ k_{1}+k_{2}+2<0,\ k_{2}\geq 0\] and \(k_{1}\equiv k_{2}\,(\operatorname{mod}3)\); namely, \(D^{\Lambda}\) is a holomorphic discrete series. * the representation \(\pi^{\prime}=\otimes_{v\leq\infty}^{\prime}\pi_{v}^{\prime}\) is a cuspidal representation of \(G^{\prime}(\mathbb{A})\) with trivial central character, which at the finite places is everywhere unramified except possibly at one place \(N^{\prime}\), split in \(E\), where it is Steinberg, and whose archimedean component satisfies, under the isomorphism \(SU(W)(\mathbb{R})\simeq\operatorname{SL}_{2}(\mathbb{R})\), \[\pi_{\infty}^{\prime}\simeq\pi_{k}^{\prime},\] the holomorphic discrete series representation of even weight \(k\geq 2\). Let us now recall (this is a slight generalization of the discussion in §1.2) why the representations \(\pi\) and \(\pi^{\prime}\) are tempered. For \(\pi^{\prime}\) this is classical and due to Deligne ([1]). For \(\pi\), if \(N\) is a prime (in which case \(\pi_{N}\) is the Steinberg representation, whose parameter is \(3\)-dimensional and indecomposable), \(\pi\) cannot be globally endoscopic. If \(\pi\) has level \(1\), it could a priori be endoscopic, but it would then come from a representation \(\tau=\tau_{1}\otimes\tau_{2}\) of \(U(1,1)\times U(1)\); as the parameter of \(\tau_{\infty}\) must match the parameter of \(\pi_{\infty}\), \(\tau_{1,\infty}\) must be in the discrete series, hence \(\tau_{1}\) must be tempered by Deligne, and therefore \(\tau\) is tempered. By the works of Kottwitz, Milne, Rogawski et al., as assembled in [1] (see Theorems A p. 291 and B p. 
293), the \(L\)-function \(L(s-1,\pi_{E})\) is a factor of \(L^{(2)}(s,\widetilde{S}^{K},V_{\ell})\), the \(L\)-function defined by the \(\operatorname{Gal}(\overline{E}/E)\)-action on \(\operatorname{IH}^{2}_{et}(\overline{S}^{K}_{\overline{\mathbb{Q}}},V_{\ell})\), the intersection cohomology in degree \(2\) of the Baily-Borel-Satake compactification \(\overline{S}^{K}_{\overline{\mathbb{Q}}}\), with coefficients in a local system \(V\) depending on the weight of \(\pi_{\infty}\), of the associated Picard modular surface \(S^{K}\) for \(K=K_{0}(N)\). By the work of Gabber, one knows that the intersection etale cohomology is pure, and by the Beilinson-Bernstein-Deligne decomposition theorem, \(\operatorname{IH}^{*}_{et}(\overline{S}^{K}_{\overline{\mathbb{Q}}},V_{\ell})\) is a direct summand of \(H^{*}_{et}(\widetilde{S}^{K}_{\overline{\mathbb{Q}}},V_{\ell})\) for any smooth toroidal compactification \(\widetilde{S}^{K}\) of \(S^{K}\) relative to \(\widetilde{S}^{K}\to\overline{S}^{K}\). Now by Deligne's proof of the Weil conjectures [10], the eigenvalues of \(\operatorname{Frob}_{\mathfrak{p}}\), at any prime \(\mathfrak{p}\) above a prime \(p\nmid ND\), acting on \(H^{*}_{et}(\widetilde{S}^{K}_{\overline{\mathbb{Q}}},\mathbb{Q}_{\ell})\) have absolute value \(\operatorname{Nr}_{E/\mathbb{Q}}(\mathfrak{p})^{j/2}\) (for \(j\) depending on the degree and on \(V_{\ell}\)), from which it follows that \(\pi_{E,p}\) is tempered. _Remark 5.1_.: If \(\pi\) weren't stable, it could be that only a factor of \(L(s-1,\pi_{E})\) divides \(L^{(2)}(s,\widetilde{S}^{K},V_{\ell})\). Let us recall that the automorphic forms \(\varphi,\tilde{\varphi}=\pi(\widetilde{\mathfrak{n}})\varphi\) and \(\varphi^{\prime}\) are factorable vectors and correspond to pure tensors which we denote by \[\varphi\simeq\otimes_{v}^{\prime}\xi_{v},\ \tilde{\varphi}\simeq\otimes_{v}^{ \prime}\tilde{\xi}_{v},\ \varphi^{\prime}\simeq\otimes_{v}^{\prime}\xi_{v}^{\prime}. 
\tag{5.15}\] and that the local vectors \[\xi_{v},\ \tilde{\xi}_{v}=\pi_{v}(\widetilde{\mathfrak{n}}_{v})\xi_{v},\ \xi_{v}^{\prime}\] have the following properties and are uniquely defined up to scalars: * \(\tilde{\xi}_{v}=\xi_{v}\) unless \(v=p=N^{\prime}\). * If \(v=\infty\), \(\tilde{\xi}_{\infty}=\xi_{\infty}\) is a highest weight vector of the minimal \(K\)-type of \(D^{\Lambda}\) and \(\xi_{\infty}^{\prime}\) is of minimal weight \(k\) (see §4.5.1). * For every \(p\), \(\xi_{p}\) is \(K_{p}(N)\)-invariant and \(\xi_{p}^{\prime}\) is \(K_{p}^{\prime}(N^{\prime})\)-invariant. * In particular if \(p\) does not divide \(D_{E}NN^{\prime}\), then \(\xi_{p}=\tilde{\xi}_{p}\) and \(\xi_{p}^{\prime}\) are invariant under the maximal compact subgroups \(G(\mathbb{Z}_{p})\) and \(G^{\prime}(\mathbb{Z}_{p})\) respectively, and \(\pi_{p}\), \(\pi_{p}^{\prime}\) are unramified principal series representations. Since \(\pi^{\prime}\) and \(\pi\) are everywhere tempered, formula (5.14) holds, and in the next subsections we will evaluate the local period integrals and provide, for \[\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2},\ \ k_{1}=-k,\ \ k_{2}=k/2,\] an explicit approximation of the central value \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\) in terms of the square of the period \(|\mathcal{P}(\tilde{\varphi},\varphi^{\prime})|^{2}\). Our main objective in this section is the following **Proposition 5.4**.: _Let notations and assumptions be as in §1.1.1 and as above. 
We have_ \[\frac{\left|\mathcal{P}(\tilde{\varphi},\varphi^{\prime})\right|^{2}}{\langle \tilde{\varphi},\tilde{\varphi}\rangle\langle\varphi^{\prime},\varphi^{\prime }\rangle}\asymp\frac{1}{d_{k}{N}{N}^{\prime}{}^{2}}\frac{L(1/2,\pi_{E}\times \pi_{E}^{\prime})}{L(1,\pi_{E},\operatorname{Ad})L(1,\pi_{E}^{\prime}, \operatorname{Ad})} \tag{5.16}\] _where \(L(\cdot)\) refers to the finite part of the \(L\)-functions, and the implicit (positive) constants in \(\asymp\) depend at most on the absolute discriminant of \(E\)._ _Remark 5.2_.: In particular, this implies \[L(1/2,\pi_{E}\times\pi_{E}^{\prime})\geq 0 \tag{5.17}\] whenever the local periods \(\mathcal{P}_{v}^{\natural}\) are non-zero. This follows immediately from (5.14) and (5.11) and was likely known to experts. However, what we achieve here is an effective dependency between the sizes of the global period and the central \(L\)-value for our explicit test vectors. This will be crucial for our forthcoming argument (see the proof of Theorem 1.1 in §13.1). The proof is a consequence of Theorem 5.3 and of the following statement, which evaluates the local period (5.13) at each place \(v\): **Theorem 5.5**.: _Let \(\pi\) and \(\pi^{\prime}\) be the automorphic representations of \(G(\mathbb{A})\) and \(G^{\prime}(\mathbb{A})\) described in §5.2.1 with \(\Lambda=-k\Lambda_{1}+\frac{k}{2}\Lambda_{2}\), and let \(\varphi,\varphi^{\prime}\) be the factorable automorphic forms described in (5.15) and below. Let \(v\) be a place of \(\mathbb{Q}\). We have:_ _- Archimedean case: If \(v=\infty\) we have_ \[L_{\infty}(\pi_{\infty},\pi^{\prime}_{\infty})\mathcal{P}^{\natural}_{\infty} (\xi_{\infty},\xi_{\infty};\xi^{\prime}_{\infty},\xi^{\prime}_{\infty})=\frac {1}{d_{k}} \tag{5.18}\] _where_ \[d_{k}=k-1 \tag{5.19}\] _is the formal degree of \(\pi^{+}_{k}\)._ _- Unramified case: If \(v=p\) does not divide \(2D_{E}NN^{\prime}\), one has_ \[\mathcal{P}^{\natural}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})=1. 
\tag{5.20}\] _- The case \(v=p|D_{E}\): we have_ \[\Big{|}L_{p}(\pi_{p},\pi^{\prime}_{p})\mathcal{P}^{\natural}_{p}(\xi_{p},\xi_ {p};\xi^{\prime}_{p},\xi^{\prime}_{p})-1\Big{|}\leq\frac{71}{18}\frac{1}{p^{2}}. \tag{5.21}\] _In particular, we have_ \[C^{-1}\leq\mathcal{P}^{\natural}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p},\xi^{ \prime}_{p})\leq C, \tag{5.22}\] _for some absolute constant \(C\)._ _- The case \(v=p=N\): we have_ \[L_{p}(\pi_{p},\pi^{\prime}_{p})\mathcal{P}^{\natural}_{p}(\xi_{p},\xi_{p};\xi^{ \prime}_{p},\xi^{\prime}_{p})=\frac{1}{p}(1-\frac{1}{p})+\frac{O}{p^{2}} \tag{5.23}\] _with \(|O|\leq 4\). In particular (since \(p\geq 3\)) we have_ \[\frac{C^{-1}}{p}\leq\mathcal{P}^{\natural}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p },\xi^{\prime}_{p})\leq\frac{C}{p} \tag{5.24}\] _for some absolute constant \(C\geq 1\)._ _- The case \(v=p=N^{\prime}\): we have_ \[L_{p}(\pi_{p},\pi^{\prime}_{p})\mathcal{P}^{\natural}_{p}(\tilde{\xi}_{p}, \tilde{\xi}_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})=\frac{1}{p^{2}-1}(1+\frac{ O}{p}) \tag{5.25}\] _with \(|O|\leq 10^{6}\). In particular for \(p>10^{6}\) we have_ \[\frac{C^{-1}}{p^{2}}\leq\mathcal{P}^{\natural}_{p}(\tilde{\xi}_{p},\tilde{\xi }_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})\leq\frac{C}{p^{2}} \tag{5.26}\] _for some absolute constant \(C\geq 1\)._ ### The archimedean local period In this subsection we discuss (5.18). We start with the following proposition, which justifies our choice of \((k_{1},k_{2})\): **Proposition 5.6**.: _Let notations and assumptions be as above. 
The global period (5.4) vanishes unless \(k_{1}=-k\) and \(k_{2}=k/2\)._ Proof.: For \(\theta\in[-\pi,\pi)\), let \[z^{\prime}(\theta)=\mathrm{diag}(e^{i\theta},1,e^{i\theta})\in Z_{G^{\prime}} (\mathbb{R}),\ \kappa(\theta)=\mathrm{diag}(e^{i\theta},1,e^{-i\theta})\in G^{\prime}( \mathbb{R}),\] \[\widetilde{z}(\theta)=\mathrm{diag}(e^{i\theta/3},e^{-2i\theta/3},e^{i\theta /3})\in SU(V)(\mathbb{R}).\] Then an explicit computation (as in the proof of Proposition 4.5) shows that \[\forall g\in G^{\prime}(\mathbb{A}),\ \varphi(g\widetilde{z}(\theta))=e^{-(k_{1}+2k _{2})i\theta}\varphi(g).\] One has for any \(\theta\) \[\varphi^{\prime}(gz^{\prime}(\theta))=\varphi^{\prime}(g),\ \varphi^{\prime}(g \kappa(\theta))=e^{-ik\theta}\varphi^{\prime}(g). \tag{5.27}\] The first equality implies that \[\varphi(gz^{\prime}(\theta))=\varphi(g)\] (otherwise the period integral would be \(0\)). Since \[z^{\prime}(\theta)\widetilde{z}(\theta)^{-1}=\operatorname{diag}(e^{2i\theta/3},e^{2 i\theta/3},e^{2i\theta/3})\in Z_{G}(\mathbb{R})\] and \(\varphi\) is invariant under the center, one has \[\varphi(g)=\varphi(gz^{\prime}(\theta))=\varphi(g\widetilde{z}(\theta))=e^{-(k_{1 }+2k_{2})i\theta}\varphi(g)\] and hence \[k_{1}+2k_{2}=0.\] The computation in Lemma 9.6 shows that \[\varphi(g\kappa(\theta))=e^{ik_{1}\theta}\varphi(g),\] and by the second equality in (5.27) we must have \(k+k_{1}=0\), for otherwise \(\mathcal{P}(\varphi,\varphi^{\prime})=0\). _Remark 5.3_.: One could also obtain Proposition 5.6 through the relative trace formula by computing the geometric side, i.e., orbital integrals: as a consequence of Lemma 4.7 we have \(f_{\infty}(gz(\theta))=e^{-(k_{1}+2k_{2})i\theta}f_{\infty}(g)\) for all \(g\in G^{\prime}(\mathbb{R}).\) Hence, if \(k_{1}+2k_{2}\neq 0\), the geometric side vanishes; then the sum of the \(|\mathcal{P}(\varphi,\varphi^{\prime})|^{2}\) is zero, and since each term is nonnegative, each period vanishes. 
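The two linear constraints derived in the proof of Proposition 5.6 (\(k_{1}+2k_{2}=0\) from centrality, \(k+k_{1}=0\) from the weight of \(\varphi^{\prime}\)) determine the weight uniquely; a one-line symbolic check, purely for illustration:

```python
import sympy as sp

k, k1, k2 = sp.symbols('k k1 k2')
# the proof forces k1 + 2*k2 = 0 (centrality) and k + k1 = 0 (weight of phi')
sol = sp.solve([k1 + 2*k2, k + k1], [k1, k2], dict=True)[0]
print(sol)  # k1 = -k, k2 = k/2
```

This is exactly the choice \((k_{1},k_{2})=(-k,k/2)\) fixed in the sequel.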
_Remark 5.4_.: Due to Proposition 5.6, we will take from now on \((k_{1},k_{2})=(-k,k/2)\), i.e. \[\Lambda=-k\Lambda_{1}+\frac{k}{2}\Lambda_{2}. \tag{5.28}\] To ensure absolute convergence of various integrals later, we will moreover assume that \[k\geq 32\] is an even integer. In the sequel, to simplify notations and since \(\Lambda\) is defined in terms of \(k\), we will sometimes replace the index \(\Lambda\) by \(k\) and write \(\mathcal{V}_{k}(N)\) for \(\mathcal{V}_{\Lambda}(N)\), \(\mathcal{B}_{\pi,k}(N)\) for \(\mathcal{B}_{\pi,\Lambda}(N)\), etc. The next lemma provides the value of \(d_{\Lambda}\): **Lemma 5.7**.: _Let_ \[\Lambda=k_{1}\Lambda_{1}+k_{2}\Lambda_{2}=-k\Lambda_{1}+\frac{k}{2}\Lambda_{2}\] _be as defined in (5.28) and let \(D^{\Lambda}\) be the corresponding holomorphic discrete series. When \(dg\) is the Euler-Poincare measure, its formal degree equals_ \[d_{\Lambda}=\frac{(k_{1}+1)(k_{2}+1)(k_{1}+k_{2}+2)}{6}=\frac{(k-1)(k+2)(k-4 )}{24}\asymp\frac{k^{3}}{24}. \tag{5.29}\] Proof.: We recall that the simple positive roots in this case are \(e_{1}-e_{2}\) and \(e_{2}-e_{3}\), where the \(e_{i}\) are the standard basis vectors in \(\mathbb{R}^{3}\), and the root space is the hyperplane \(\{(x,y,z)\in\mathbb{R}^{3},x+y+z=0\}\). The \(\Lambda_{j},\ j=1,2\) are given by \[\Lambda_{1}=\frac{1}{3}(2,-1,-1),\ \Lambda_{2}=\frac{1}{3}(1,1,-2).\] Let \(\rho\) be half the sum of the positive roots; it is given by \[\rho=\frac{1}{2}(e_{1}-e_{2}+e_{2}-e_{3}+e_{1}-e_{3})=e_{1}-e_{3}=(1,0,-1).\] Consider the Weyl reflections \[S_{1}:(x,y,z)\mapsto(y,x,z),\ S_{2}:(x,y,z)\mapsto(x,z,y)\] and let \(\Lambda^{\prime}\) be such that \[k_{1}\Lambda_{1}+k_{2}\Lambda_{2}=S_{1}\circ S_{2}(\Lambda^{\prime}+\rho)-\rho.\] _Remark 5.5_.: In [10, Lemma 9.4], \(\langle\lambda+\rho,\alpha\rangle/\langle\alpha,\alpha\rangle\) should be \(\langle\lambda+\rho,\alpha\rangle/\langle\rho,\alpha\rangle\). 
Let us compute the Langlands parameter \(\Lambda^{\prime}=(a,b,c)\): \[S_{1}\circ S_{2}(\Lambda^{\prime}+\rho)-\rho=(c-2,a+1,b+1)=\frac{k_{1}}{3}(2,- 1,-1)+\frac{k_{2}}{3}(1,1,-2)\] so that \[\Lambda^{\prime}=(\frac{-k_{1}+k_{2}}{3}-1,-\frac{k_{1}+2k_{2}}{3}-1,\frac{2k _{1}+k_{2}}{3}+2). \tag{5.30}\] We have the Blattner parameter \[\Lambda^{\prime}+\rho=(\frac{-k_{1}+k_{2}}{3},-\frac{k_{1}+2k_{2}}{3}-1,\frac{ 2k_{1}+k_{2}}{3}+1)\] and \[\langle\Lambda^{\prime}+\rho,\alpha\rangle=\begin{cases}k_{2}+1&\alpha=e_{1}- e_{2}\\ -k_{1}-k_{2}-2&\alpha=e_{2}-e_{3}\\ -k_{1}-1&\alpha=e_{1}-e_{3}.\end{cases}\] Then (5.29) follows from Harish-Chandra's formula [10] that \[d(D^{\Lambda})=\frac{1}{3}\prod_{\alpha>0}\frac{\langle\Lambda^{\prime}+\rho, \alpha\rangle}{\langle\rho,\alpha\rangle},\] with the product over all the positive roots \(\alpha\in\{e_{1}-e_{2},e_{2}-e_{3},e_{1}-e_{3}\}\). We can now evaluate explicitly the archimedean local period integral: **Proposition 5.8**.: _Let \(\pi\) and \(\pi^{\prime}\) be the automorphic representations of \(G(\mathbb{A})\) and \(G^{\prime}(\mathbb{A})\) described in SS5.2.1. We have (with \(k_{1}=-k,\ k_{2}=k/2\))_ \[L_{\infty}(s,\pi_{E}\times\pi^{\prime}_{E})=\Gamma_{\mathbb{C}} \left(s+\frac{1}{2}\right)\Gamma_{\mathbb{C}}\left(s+\frac{3}{2}\right)\Gamma _{\mathbb{C}}\left(s+\frac{k}{2}-\frac{3}{2}\right)\\ \Gamma_{\mathbb{C}}\left(s+\frac{k}{2}+\frac{1}{2}\right)\Gamma_{ \mathbb{C}}\left(s+k-\frac{5}{2}\right)\Gamma_{\mathbb{C}}\left(s+k-\frac{3}{ 2}\right). 
\tag{5.31}\] _where_ \[\Gamma_{\mathbb{C}}(s)=2(2\pi)^{-s}\Gamma(s)=\Gamma_{\mathbb{R}}(s+1)\Gamma_ {\mathbb{R}}(s),\ \Gamma_{\mathbb{R}}(s)=\pi^{-s/2}\Gamma(s/2).\] Proof.: Recall that the Langlands parameter of the base change of the holomorphic discrete series of weight \(k\) is \((\frac{k-1}{2},-\frac{k-1}{2})\), i.e., \[z\mapsto\operatorname{diag}((\overline{z}/z)^{\frac{k-1}{2}},(\overline{z}/z )^{-\frac{k-1}{2}});\] in particular the archimedean parameter of the base changed representation \(\pi^{\prime}_{E}\) is \((\frac{k-1}{2},\frac{1-k}{2})\). Recall also that (cf. (5.30)) the Langlands parameter of \(\pi_{\infty}\) is \[\Lambda^{\prime}=(\frac{-k_{1}+k_{2}}{3}-1,-\frac{k_{1}+2k_{2}}{3}-1,\frac{2k _{1}+k_{2}}{3}+2)=(\frac{k}{2}-1,-1,-\frac{k}{2}+2).\] Let \(\pi_{\infty,\mathbb{C}}\) be the base change of \(\pi_{\infty}\) to \(\operatorname{GL}_{3}(\mathbb{C})\). Then the archimedean parameter of \(\pi_{\infty,\mathbb{C}}\otimes\pi^{\prime}_{\infty,\mathbb{C}}\) is given by \[(\frac{k}{2}-1,-1,-\frac{k}{2}+2)\otimes(\frac{k-1}{2},\frac{1-k}{2})=(k- \frac{3}{2},\frac{k}{2}-\frac{3}{2},\frac{3}{2},-\frac{1}{2},-\frac{k}{2}- \frac{1}{2},-k+\frac{5}{2}),\] and the result follows since for \(r\in\frac{1}{2}\mathbb{Z}\) \[L_{\infty}(z\mapsto(\overline{z}/z)^{r},s)=L_{\infty}((z\overline{z})^{s}( \overline{z}/z)^{r})=\Gamma_{\mathbb{C}}(s+|r|).\] #### 5.3.1. Proof of (5.18) By definition we have \[\mathcal{P}_{\infty}(\xi_{\infty},\xi_{\infty};\xi_{\infty}^{\prime},\xi_{ \infty}^{\prime})=\int_{G^{\prime}(\mathbb{R})}\langle\pi_{\infty}(g_{\infty}).\xi_{\infty },\xi_{\infty}\rangle_{\infty}\overline{\langle\pi_{\infty}^{\prime}(g_{ \infty})\xi_{\infty}^{\prime},\xi_{\infty}^{\prime}\rangle_{\infty}}dg_{ \infty}.\] Note that by Lemma 4.7, \[\frac{\langle\pi_{\infty}(g_{\infty})\xi_{\infty},\xi_{\infty}\rangle_{\infty }}{\langle\xi_{\infty},\xi_{\infty}\rangle_{\infty}}=f_{\infty}(g_{\infty}).\] Then by Lemma 4.8 and the Schur orthogonality relations we obtain \[\frac{\mathcal{P}_{\infty}(\xi_{\infty},\xi_{\infty};\xi_{\infty}^{\prime}, \xi_{\infty}^{\prime})}{\langle\xi_{\infty},\xi_{\infty}\rangle_{\infty} \langle\xi_{\infty}^{\prime},\xi_{\infty}^{\prime}\rangle_{\infty}}=\int_{G^{ \prime}(\mathbb{R})}f_{\infty}(g_{\infty})\frac{\langle\pi_{\infty}^{\prime}(g _{\infty})\xi_{\infty}^{\prime},\xi_{\infty}^{\prime}\rangle_{\infty}}{ \langle\xi_{\infty}^{\prime},\xi_{\infty}^{\prime}\rangle_{\infty}}dg_{ \infty}=\frac{1}{d_{\pi_{\infty}^{\prime}}},\] where \(d_{\pi_{\infty}^{\prime}}=d_{k}\) is the formal degree of \(\pi_{k}^{+}\). Then (5.18) follows. ### The root number and the conductor of \(L(s,\pi_{E}\times\pi_{E}^{\prime})\) In this section we compute the root number and the arithmetic conductor of \(L(s,\pi_{E}\times\pi_{E}^{\prime})\). First we observe that \(\pi_{E}\) and \(\pi_{E}^{\prime}\) are conjugate self-dual, i.e., if \(c\in\operatorname{Aut}_{\mathbb{Q}}(E)\) denotes the non-trivial automorphism, we have \[\pi_{E}\circ c\simeq\pi_{E}^{\vee}\simeq\overline{\pi}_{E},\ \pi^{\prime}_{E} \circ c\simeq{\pi^{\prime}_{E}}^{\vee}\simeq\overline{\pi^{\prime}}_{E}\] (and, since the representations are unitary, \(\pi_{E}^{\vee}\simeq\overline{\pi}_{E}\) and \({\pi^{\prime}_{E}}^{\vee}\simeq\overline{\pi^{\prime}}_{E}\)). 
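Before turning to the conductor, the elementary but error-prone weight computations of §5.3 can be verified symbolically: the closed formula for \(d_{\Lambda}\) in Lemma 5.7 against Harish-Chandra's product over positive roots, and the tensor of archimedean parameters against the \(\Gamma_{\mathbb{C}}\)-shifts of (5.31). This is a consistency check only, not part of the argument:

```python
import sympy as sp

k = sp.symbols('k', positive=True)
k1, k2 = -k, k/2  # the weight choice (5.28)

# formal degree: closed formula of Lemma 5.7 ...
d_formula = (k1 + 1)*(k2 + 1)*(k1 + k2 + 2)/6
# ... versus Harish-Chandra's product over the positive roots, using
# <Lambda'+rho,alpha> = k2+1, -k1-k2-2, -k1-1 and <rho,alpha> = 1, 1, 2
d_hc = sp.Rational(1, 3) * (k2 + 1) * (-k1 - k2 - 2) * (-k1 - 1) / 2
assert sp.simplify(d_formula - d_hc) == 0

# archimedean parameters: tensor those of pi_{infty,C} with those of
# pi'_{infty,C}, then compare |r| with the Gamma_C-shifts in (5.31)
half = sp.Rational(1, 2)
pi_pars = [k/2 - 1, -1, -k/2 + 2]
pip_pars = [(k - 1)/2, (1 - k)/2]
kv = {k: 32}  # any even k >= 32 will do for the numerical comparison
sums = sorted((a + b).subs(kv) for a in pi_pars for b in pip_pars)
shifts = sorted(abs(x) for x in sums)
expected = sorted(e.subs(kv) for e in
                  [half, 3*half, k/2 - 3*half, k/2 + half, k - 5*half, k - 3*half])
assert shifts == expected
print("formal degree and Gamma-factor shifts are consistent")
```

Both assertions pass, matching the identities stated in Lemma 5.7 and Proposition 5.8.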
In particular, the functional equation indeed relates \(\Lambda(s,\pi_{E}\times\pi_{E}^{\prime})\) to \(\Lambda(1-s,\pi_{E}\times\pi_{E}^{\prime})\) and \(\varepsilon(\pi_{E}\times\pi_{E}^{\prime})\in\{\pm 1\}\). **Proposition 5.9**.: _Let \(\pi\) and \(\pi^{\prime}\) be the automorphic representations of \(G(\mathbb{A})\) and \(G^{\prime}(\mathbb{A})\) described in §5.2.1; let \(\pi_{E}\), \(\pi_{E}^{\prime}\) be the corresponding base changes to \(\operatorname{GL}(3,\mathbb{A}_{E})\) and \(\operatorname{GL}(2,\mathbb{A}_{E})\) and \(L(s,\pi_{E}\times\pi_{E}^{\prime})\) be (the finite part of) the associated Rankin-Selberg \(L\)-function. Its arithmetic conductor equals_ \[C_{f}(\pi_{E}\times\pi_{E}^{\prime})=N^{4}{N^{\prime}}^{6}\] _and its root number equals_ \[\varepsilon(\pi_{E}\times\pi_{E}^{\prime})=+1.\] _Consequently, its analytic conductor \(C(s,\pi_{E}\times\pi_{E}^{\prime})\) satisfies (for \(\operatorname{Re}s=1/2\))_ \[C(s,\pi_{E}\times\pi_{E}^{\prime})\asymp N^{4}{N^{\prime}}^{6}|s|^{4}(|s|+k)^{8}. \tag{5.32}\] _In particular for \(s=1/2\) one has the convexity bound_ \[L(1/2,\pi_{E}\times\pi_{E}^{\prime})\ll(kNN^{\prime})^{o(1)}C(1/2,\pi_{E}\times\pi_{E}^{\prime})^{1/4}\ll(kNN^{\prime})^{o(1)}k^{3}N{N^{\prime}}^{3/2}.\] Proof.: Since \((N,N^{\prime})=1\), the arithmetic conductor of \(L(s,\pi_{E}\times\pi_{E}^{\prime})\) is simply \[C_{f}(\pi_{E}\times\pi_{E}^{\prime})=(N^{2})^{2}({N^{\prime}}^{2})^{3}=N^{4}{N^{\prime}}^{6};\] indeed for \(N\) a prime inert in \(E\), the conductor of the Steinberg representation for \(\operatorname{GL}_{3}(E_{N})\) is \(N^{2}\) (the norm of the ideal \(N\mathcal{O}_{E_{N}}\)) and for \(N^{\prime}\) a prime split in \(E\), \(\operatorname{GL}_{2}(E_{N^{\prime}})\simeq\operatorname{GL}_{2}(\mathbb{Q}_{N^{\prime}})^{2}\) and \(\pi_{N^{\prime}}\simeq\operatorname{St}_{N^{\prime}}\otimes\operatorname{St}_{N^{\prime}}\) has conductor \({N^{\prime}}^{2}\).
By (5.31) the archimedean conductor is, for \(\operatorname{Re}s=1/2\), \[C_{\infty}(s,\pi_{E}\times\pi_{E}^{\prime})\asymp|s+1/2|^{2}|s+3/2|^{2}|s+k/2-3/2|^{2}\times|s+k/2+1/2|^{2}|s+k-5/2|^{2}|s+k-3/2|^{2}\asymp|s|^{4}(|s|+k)^{8},\] and the analytic conductor \[C(s,\pi_{E}\times\pi^{\prime}_{E})=C_{\infty}(s,\pi_{E}\times\pi^{\prime}_{E})C_{f}(\pi_{E}\times\pi^{\prime}_{E})\] satisfies (5.32). The convexity bound follows by applying the approximate functional equation for \(L(1/2,\pi_{E}\times\pi^{\prime}_{E})\) and from the following bound for the coefficients of \(L(s,\pi_{E}\times\pi^{\prime}_{E})\): \[\lambda_{\pi_{E}\times\pi^{\prime}_{E}}(n)\ll_{\varepsilon}n^{\varepsilon}\] for any \(\varepsilon>0\) (the latter is a consequence of the temperedness of \(\pi\) and \(\pi^{\prime}\)). Let us turn to the computation of the root number: it decomposes as a product of local root numbers (along the places of \(\mathbb{Q}\) and, say, relative to the usual unramified additive character) \[\varepsilon(\pi_{E}\times\pi^{\prime}_{E})=\varepsilon_{\infty}(\pi_{E}\times\pi^{\prime}_{E})\prod_{p}\varepsilon_{p}(\pi_{E}\times\pi^{\prime}_{E}),\] and for any such place \(v\) we have the further factorisation \[\varepsilon_{v}(\pi_{E}\times\pi^{\prime}_{E})=\prod_{w|v}\varepsilon_{w}(\pi_{E}\times\pi^{\prime}_{E}).\] If \(v=p\nmid NN^{\prime}\) then \(\pi_{E}\) and \(\pi^{\prime}_{E}\) are unramified at any place \(w|p\) and \[\varepsilon_{w}(\pi_{E}\times\pi^{\prime}_{E})=1.\] If \(p=N\), which is inert in \(E\), then \(\pi^{\prime}_{E}\) is unramified and \[\varepsilon_{p}(\pi_{E}\times\pi^{\prime}_{E})=\varepsilon_{p}(\pi_{E})^{2}=1\] (\(\pi_{E,p}=\operatorname{St}_{p}\) and \(\varepsilon_{p}(\pi_{E})=\pm 1\)).
If \(p|N^{\prime}\) then \[\varepsilon_{p}(\pi_{E}\times\pi^{\prime}_{E})=\varepsilon_{\mathfrak{p}}(\pi_{E}\times\pi^{\prime}_{E})\varepsilon_{\overline{\mathfrak{p}}}(\pi_{E}\times\pi^{\prime}_{E})=(\varepsilon_{\mathfrak{p}}(\pi^{\prime}_{E})\varepsilon_{\overline{\mathfrak{p}}}(\pi^{\prime}_{E}))^{3}\] since \(\pi_{E}\) is unramified at these places. Also, since \(\pi^{\prime}_{E_{\mathfrak{p}}}\) is the base change of the Steinberg representation, \(\pi^{\prime}_{E_{\mathfrak{p}}}\simeq\pi^{\prime}_{E_{\overline{\mathfrak{p}}}}\) and \(\varepsilon_{\mathfrak{p}}(\pi^{\prime}_{E})\varepsilon_{\overline{\mathfrak{p}}}(\pi^{\prime}_{E})=1\). Finally, for \(v=\infty\), the unique archimedean place, we have seen above that \(\pi_{E}\times\pi^{\prime}_{E}\) has parameters \[(k-\frac{3}{2},\frac{k}{2}-\frac{3}{2},\frac{3}{2},-\frac{1}{2},-\frac{k}{2}-\frac{1}{2},-k+\frac{5}{2}).\] By [14, §3.2], for \(r\in\frac{1}{2}\mathbb{Z}\), \[\varepsilon_{\infty}(z\mapsto(\overline{z}/z)^{r})=i^{2r},\] and in this case we have \[k-\frac{3}{2}+\frac{k}{2}-\frac{3}{2}+\frac{3}{2}-\frac{1}{2}-\frac{k}{2}-\frac{1}{2}-k+\frac{5}{2}=0,\] hence \[\varepsilon_{\infty}(\pi_{E}\times\pi^{\prime}_{E})=1.\] _Remark 5.6_.: Notice that by [12], \(\pi_{E}\) is conjugate orthogonal and \(\pi^{\prime}_{E}\) conjugate symplectic, so from this information alone the sign \(\varepsilon(\pi_{E}\times\pi^{\prime}_{E})\) could be either \(+1\) or \(-1\). ### The local periods at unramified primes Let now \(v=p\) be a prime that does not divide \(NN^{\prime}D_{E}\); then (see the discussion in §5.2.1) \(\xi_{p},\xi^{\prime}_{p}\) are unramified vectors and the \(7\) conditions on page 308 of [14] (see also [11], (U1)-(U6), p. 5) are satisfied for \(\pi\) and \(\pi^{\prime}\). Consequently, by Theorem 2.12 of [14], \[\mathcal{P}^{\natural}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})=1,\] which is (5.20). ### The local periods at primes ramified in \(E/\mathbb{Q}\) In this section we establish (5.21).
Let \(p\) be a prime ramified in \(E\), with \(p\mathcal{O}_{E}=\mathfrak{p}^{2}\), and let \(\varpi\) be a uniformizer of \(\mathfrak{p}\). Let \[A_{n}=\operatorname{diag}(\varpi^{n},1,\varpi^{-n}),\ n\geq 0.\] Since \(\pi_{p}\) and \(\pi_{p}^{\prime}\) are tempered principal series, we may assume \(\pi_{p}=\operatorname{Ind}\chi_{p}\) and \(\pi_{p}^{\prime}=\operatorname{Ind}\chi_{p}^{\prime}\), for some unramified unitary characters \(\chi_{p}\) and \(\chi_{p}^{\prime}\) of the respective diagonal tori. Set \(\gamma_{p}=\chi_{p}(A_{1})\) and \(\gamma_{p}^{\prime}=\chi_{p}^{\prime}(A_{1}^{\prime})\). By Macdonald's formula (cf. [10] or [11]) we have, for \(n\geq 0\), \[\frac{\langle\pi_{p}^{\prime}(A_{n})\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}{\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}=\frac{(1-p^{-1}\gamma_{p}^{\prime-1})\gamma_{p}^{\prime n}-(1-p^{-1}\gamma_{p}^{\prime})\gamma_{p}^{\prime-n-1}}{p^{n}\big{[}(1-p^{-1}\gamma_{p}^{\prime-1})-(1-p^{-1}\gamma_{p}^{\prime})\gamma_{p}^{\prime-1}\big{]}}, \tag{5.33}\] \[\frac{\langle\pi_{p}(A_{n})\xi_{p},\xi_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}}=\frac{1}{p^{2n}}\Big{[}\frac{(\gamma_{p}-p^{-2})(1+p^{-1}\gamma_{p}^{-1})\gamma_{p}^{n}-(\gamma_{p}^{-1}-p^{-2})(1+p^{-1}\gamma_{p})\gamma_{p}^{-n}}{(\gamma_{p}-p^{-2})(1+p^{-1}\gamma_{p}^{-1})\gamma_{p}-(\gamma_{p}^{-1}-p^{-2})(1+p^{-1}\gamma_{p})\gamma_{p}^{-1}}\Big{]} \tag{5.34}\] and these inner products vanish for \(n<0\).
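The passage from (5.33) to the absolute-value expression used in the estimates that follow is elementary but fiddly. As a sanity check, outside the argument proper, one can verify numerically that \(|(5.33)|=\big|1-\gamma^{\prime 2n+1}_{p}-p^{-1}\gamma^{\prime}_{p}(1-\gamma^{\prime 2n-1}_{p})\big|/\big((1+p^{-1})|1-\gamma^{\prime}_{p}|\,p^{n}\big)\), sampling \(\gamma^{\prime}_{p}\) on the unit circle as temperedness dictates:

```python
import cmath

def macdonald(p: int, g: complex, n: int) -> complex:
    # right-hand side of (5.33), with gamma'_p = g on the unit circle
    num = (1 - g**(-1) / p) * g**n - (1 - g / p) * g**(-n - 1)
    den = p**n * ((1 - g**(-1) / p) - (1 - g / p) * g**(-1))
    return num / den

# normalization: the matrix coefficient at n = 0 equals 1
assert abs(macdonald(3, cmath.exp(0.5j), 0) - 1) < 1e-9

# |(5.33)| agrees with the rewritten absolute-value form
for p in (2, 3, 5):
    for t in (0.3, 1.1, 2.7):
        g = cmath.exp(1j * t)
        for n in range(1, 6):
            lhs = abs(macdonald(p, g, n))
            rhs = abs(1 - g**(2 * n + 1) - g * (1 - g**(2 * n - 1)) / p) \
                  / ((1 + 1 / p) * abs(1 - g) * p**n)
            assert abs(lhs - rhs) < 1e-9
```

The rewriting amounts to multiplying numerator and denominator by \(\gamma^{\prime n+1}_{p}\) and using \(|1-\gamma^{\prime-1}_{p}|=|1-\gamma^{\prime}_{p}|\) on the unit circle.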
Therefore, if the Haar measure on \(G^{\prime}(\mathbb{Q}_{p})\) is such that \(G^{\prime}(\mathbb{Z}_{p})\) has measure \(1\), by the Cartan decomposition we have \[\int\!\!\frac{\langle\pi_{p}(g_{p})\xi_{p},\xi_{p}\rangle_{p}\langle\pi_{p}^{\prime}(g_{p})\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}dg_{p}=\sum_{n\geq 0}\frac{\langle\pi_{p}(A_{n})\xi_{p},\xi_{p}\rangle_{p}\langle\pi_{p}^{\prime}(A_{n})\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}, \tag{5.35}\] where the integral on the left-hand side is taken over \(G^{\prime}(\mathbb{Q}_{p})\). A straightforward calculation shows that \[\left|\frac{\langle\pi_{p}^{\prime}(A_{n})\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}{\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}\right|=\left|\frac{1-\gamma_{p}^{\prime 2n+1}-p^{-1}\gamma_{p}^{\prime}(1-\gamma_{p}^{\prime 2n-1})}{(1+p^{-1})(1-\gamma_{p}^{\prime})p^{n}}\right|.\] Then expanding the fractions into geometric series and appealing to the triangle inequality, we obtain, when \(n\geq 1\), that \[L_{n}^{\prime}:=\left|\frac{\langle\pi_{p}^{\prime}(A_{n})\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}{\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}\right|\leq\frac{2n+1+p^{-1}(2n-1)}{(1+p^{-1})p^{n}}.
\tag{5.36}\] Similarly, after a straightforward calculation we have \[L_{n}:=\left|\frac{\langle\pi_{p}(A_{n})\xi_{p},\xi_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}}\right|=\left|\frac{\widetilde{\gamma}_{p}^{n+1}+p^{-1}(1-p^{-1})\widetilde{\gamma}_{p}^{n}-p^{-3}\widetilde{\gamma}_{p}^{n-1}}{p^{2n}(1+p^{-3})(\gamma_{p}-\gamma_{p}^{-1})}\right|,\] where \[\widetilde{\gamma}_{p}^{m}:=\gamma_{p}^{m}-\gamma_{p}^{-m},\ m\in\mathbb{Z}.\] Expanding the fractions into geometric series and appealing to the triangle inequality, we then obtain \[L_{n}\leq\frac{2+(1-p^{-3})\big{[}2\lfloor\frac{n-1}{2}\rfloor+\frac{(-1)^{n}+1}{2}\big{]}+p^{-1}(1-p^{-1})\big{[}2\lfloor\frac{n}{2}\rfloor+\frac{(-1)^{n-1}+1}{2}\big{]}}{(1+p^{-3})p^{2n}}. \tag{5.37}\] Therefore, we have from (5.36) and (5.37) that \[\left|\frac{\langle\pi_{p}(A_{1})\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\pi_{p}^{\prime}(A_{1}^{\prime})\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}\right|\leq\frac{3+p^{-1}}{p+1}\cdot\frac{2+p^{-1}(1-p^{-1})}{(1+p^{-3})p^{2}}<\frac{6-p^{-1}}{p^{3}+1}; \tag{5.38}\] and \[\left|\frac{\langle\pi_{p}(A_{2})\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\pi^{\prime}_{p}(A^{\prime}_{2})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}\right|\leq\frac{(5+3p^{-1})(3+2p^{-1})}{(1+p^{-1})p^{6}}; \tag{5.39}\] moreover, when \(n\geq 3,\) \[\left|\frac{\langle\pi_{p}(A_{n})\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\pi^{\prime}_{p}(A^{\prime}_{n})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}\right|<\frac{2n^{2}+5n+2}{p^{3n}}.
\tag{5.40}\] Combining (5.38), (5.39) with (5.40) we then conclude that \[\sum_{n\geq 1}L^{\prime}_{n}L_{n}\leq\frac{6-p^{-1}}{p^{3}+1}+\frac{(5+3p^{-1})(3+2p^{-1})}{(1+p^{-1})p^{6}}+\sum_{n\geq 3}\frac{2n^{2}+5n+2}{p^{3n}}. \tag{5.41}\] By induction one has \(2n^{2}+5n+2\leq 5\cdot 2^{n}\) for \(n\geq 2\). Therefore, substituting this estimate into (5.41) and computing the geometric series we then obtain \[\sum_{n\geq 1}\left|\frac{\langle\pi_{p}(A_{n})\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\pi^{\prime}_{p}(A^{\prime}_{n})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}\right|\leq H(p), \tag{5.42}\] where the auxiliary arithmetic function \(H(\cdot)\) is defined by \[H(n):=\frac{6-n^{-1}}{n^{3}+1}+\frac{(5+3n^{-1})(3+2n^{-1})}{(1+n^{-1})n^{6}}+\frac{40}{n^{6}}\cdot\frac{1}{n^{3}-2},\ n\geq 2.\] For \(n\geq 2\) one has \(n^{2}H(n)\geq(n+1)^{2}H(n+1)\), so that \(p^{2}H(p)\leq 2^{2}H(2)\), i.e. \(H(p)\leq\frac{4H(2)}{p^{2}}=\frac{71}{18p^{2}}\). Hence, combining (5.35) with (5.42) we then obtain \[\left|\int_{G^{\prime}(\mathbb{Q}_{p})}\frac{\langle\pi_{p}(g_{p})\xi_{p},\xi_{p}\rangle_{p}\langle\pi^{\prime}_{p}(g_{p})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}dg_{p}-1\right|\leq\frac{4H(2)}{p^{2}}=\frac{71}{18p^{2}},\] and (5.21) follows. ### The matrix coefficient of the Steinberg representation for \(U(V)\) We now assume that \(p\) is inert in \(E\). Let \(\mu=\mu_{p}\) be a Haar measure on \(G(\mathbb{Q}_{p})\). Set \(W_{0}=\big{\{}\mathbf{1}_{p},J\big{\}}\). For \(n\geq 1\), set \[W_{n}=\big{\{}A_{n},JA_{n},A_{n}J,JA_{n}J\big{\}}.\] Let \[W:=\bigsqcup_{n\geq 0}W_{n}.\] By Lemma A.6 (see the Appendix) and the Cartan decomposition we then obtain \[G(\mathbb{Q}_{p})=\bigsqcup_{n\geq 0}G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=\bigsqcup_{w\in W}I_{p}wI_{p}.
\tag{5.43}\] By Lemma A.4 one has \(\mu(I_{p}JI_{p})=p^{3}\mu(I_{p})\). Moreover, by Lemma A.5, for \(n\geq 1\), \[\mu(I_{p}A_{n}I_{p})=p^{4n}\mu(I_{p}),\ \mu(I_{p}JA_{n}I_{p})=p^{4n-3}\mu(I_{p}),\] \[\mu(I_{p}A_{n}JI_{p})=p^{4n+3}\mu(I_{p}),\ \mu(I_{p}JA_{n}JI_{p})=p^{4n}\mu(I_{p}).\] Then for \(w\in W\) there exists a unique integer \(\lambda(w)\in\mathbb{Z}_{\geq 0}\) such that \[\mu(I_{p}wI_{p})=p^{\lambda(w)}\mu(I_{p}).\] In particular, \(\lambda(\mathbf{1}_{p})=0\), \(\lambda(J)=3\); and for \(n\geq 1\), \[\lambda(A_{n})=4n,\ \lambda(JA_{n})=4n-3,\ \lambda(A_{n}J)=4n+3\ \text{and}\ \lambda(JA_{n}J)=4n.\] Thus, the Poincaré series \[\sum_{w\in W}p^{-2\lambda(w)}=1+p^{-6}+(2+p^{6}+p^{-6})\cdot\sum_{n\geq 1}p^{-8n} \tag{5.44}\] converges absolutely. Let \(\Xi_{p}\) be the \(I_{p}\)-bi-invariant function on \(G(\mathbb{Q}_{p})\) defined by \[\Xi_{p}(w)=(-p)^{-\lambda(w)}\mu(I_{p}),\ w\in W.\] **Lemma 5.10**.: _Let \(f\) be a function on \(G(\mathbb{Q}_{p})\) defined by_ \[f(pk)=\delta_{P}(p)\Xi_{p}(k)\] _for \(p\in P(\mathbb{Q}_{p})\) and \(k\in G(\mathbb{Z}_{p})\), where \(\delta_{P}\) is the modulus character of the Borel subgroup \(P\). Then_ \[\varphi_{f}(g):=\int_{I_{p}}f(\kappa g)d\kappa=\Xi_{p}(g)\Xi_{p}(\mathbf{1}_{p}),\quad\forall g\in G(\mathbb{Q}_{p}). \tag{5.45}\] Proof.: Let \(g=pk\) be the Iwasawa decomposition; then by the Iwahori decomposition \(G(\mathbb{Z}_{p})=I_{p}\bigsqcup I_{p}JI_{p}\) we obtain \[\int_{I_{p}\bigsqcup I_{p}JI_{p}}f(g\kappa)d\kappa=\int_{G(\mathbb{Z}_{p})}f(p\kappa)d\kappa=\delta_{P}(p)\int_{G(\mathbb{Z}_{p})}\Xi_{p}(\kappa)d\kappa=\delta_{P}(p)(1+p^{3}\Xi_{p}(J))\mu(I_{p}),\] where the last equality comes from Lemma A.4. Notice that \(1+p^{3}\Xi_{p}(J)=0\), so that \[\int_{I_{p}\bigsqcup I_{p}JI_{p}}f(g\kappa)d\kappa=0,\quad\forall g\in G(\mathbb{Q}_{p}).
\tag{5.46}\] Let \(g=pk\) with \(k\in I_{p}.\) Then we have by definition \[\int_{I_{p}\bigsqcup I_{p}JA_{1}I_{p}}f(g\kappa)d\kappa= \delta_{P}(p)\cdot\bigg{[}\int_{I_{p}}\Xi_{p}(\kappa)d\kappa+\int _{I_{p}JA_{1}I_{p}}\Xi_{p}(\kappa)d\kappa\bigg{]}\] \[= \delta_{P}(p)\cdot(\mu(I_{p})+\Xi_{p}(JA_{1})\mu(I_{p}JA_{1}I_{p }))=0.\] When \(g=pk\) and \(k\in I_{p}JI_{p},\) we can write \(k=n(\delta,\tau)J\kappa_{1}\) by (A.11). So \[\int_{I_{p}\bigsqcup I_{p}JA_{1}I_{p}}f(g\kappa)d\kappa= \int_{I_{p}}f(pn(\delta,\tau)J\kappa)d\kappa+\int_{I_{p}JA_{1}I_ {p}}f(pn(\delta,\tau)J\kappa)d\kappa\] \[= \delta_{P}(p)\cdot\bigg{[}\int_{I_{p}}\Xi_{p}(J\kappa)d\kappa+ \int_{I_{p}JA_{1}I_{p}}\Xi_{p}(J\kappa)d\kappa\bigg{]}.\] By Lemma A.5 we have \[\int_{I_{p}}\Xi_{p}(J\kappa)d\kappa+\int_{I_{p}JA_{1}I_{p}}\Xi_{p }(J\kappa)d\kappa= \Big{[}\Xi_{p}(J)+\sum_{\tau\in p\mathcal{O}_{p}/p^{2}\mathcal{O} _{p}\atop\tau+\overline{\tau}=0}\Xi_{p}(n(0,\tau)A_{1})\Big{]}\mu(I_{p})\] \[= \Xi_{p}(J)\mu(I_{p})+p\Xi_{p}(A_{1})\mu(I_{p}).\] Note that \(\Xi_{p}(J)+p\Xi_{p}(A_{1})=-p^{-3}+p\cdot(-p)^{-4}=0.\) Hence, we have \[\int_{I_{p}\bigsqcup I_{p}JA_{1}I_{p}}f(g\kappa)d\kappa=0,\quad\forall g\in G (\mathbb{Q}_{p}). \tag{5.47}\] Note that the function \(\varphi_{f}\) is \(I_{p}\)-bi-invariant. So we only need to verify (5.45) for all \(g\in W.\) Clearly, (5.45) holds for \(g=\textbf{1}_{p}\in W.\) Moreover, by (5.46) and (5.47), (5.45) holds for \(g\in\{J,JA_{1}\}\subset W.\) Hence, we have \[\int_{I_{p}\bigsqcup I_{p}wI_{p}}\varphi_{f}(g\kappa)d\kappa=0,\ \forall w\in\big{\{}J,JA_{1}\big{\}},\ g\in G( \mathbb{Q}_{p}). 
\tag{5.48}\] Hence, expanding (5.48) we then see that \(\varphi_{f}(w)=\Xi_{p}(w)\) holds for all \(w\in W\) with \(\lambda(w)\leq 3\). Let \(n\geq 3\). Suppose \(\varphi_{f}(w)=\Xi_{p}(w)\) holds for all \(w\in W\) such that \(\lambda(w)\leq n\). Let \(w^{\prime}\in W\) be such that \(\lambda(w^{\prime})=n+1\). Then there exist \(w_{1}\in\big{\{}J,JA_{1}\big{\}}\) and \(w_{2}\in W-\big{\{}\mathbf{1}_{p}\big{\}}\) with \(w_{2}w_{1}=w^{\prime}\) and \(\lambda(w_{2})+\lambda(w_{1})=n+1\). Explicitly, suppose \(w^{\prime}\in W_{m}\), \(m\geq 1\). When \(w^{\prime}=A_{m}J\), then \(w_{1}=J\) and \(w_{2}=A_{m}\); when \(w^{\prime}=JA_{m}J\), then \(w_{1}=J\) and \(w_{2}=JA_{m}\); when \(w^{\prime}=A_{m}\), then \(w_{1}=JA_{1}\) and \(w_{2}=A_{m-1}J\); when \(w^{\prime}=JA_{m}\), then \(w_{1}=JA_{1}\) and \(w_{2}=JA_{m-1}J\). We have \(1\leq\lambda(w_{1})\leq 3\) and \(\lambda(w_{2})\leq n\). Also, by Lemma A.5 one has \[I_{p}w_{2}I_{p}w_{1}I_{p}=I_{p}w^{\prime}I_{p}.\] Hence, applying (5.48) with \(w=w_{1}\) and \(g=w_{2}\), one then has \[\int_{I_{p}}\varphi_{f}(w_{2}\kappa)d\kappa+\int_{I_{p}w_{1}I_{p}}\varphi_{f}(w_{2}\kappa)d\kappa=\int_{I_{p}\bigsqcup I_{p}w_{1}I_{p}}\varphi_{f}(w_{2}\kappa)d\kappa=0. \tag{5.49}\] Since \(\varphi_{f}\) is bi-invariant under \(I_{p}\) and \(I_{p}w_{2}I_{p}w_{1}I_{p}=I_{p}w^{\prime}I_{p}\), (5.49) then becomes \[\varphi_{f}(w_{2})+\frac{\mu(I_{p}w_{1}I_{p})}{\mu(I_{p})}\varphi_{f}(w^{\prime})=0,\ i.e.,\ \varphi_{f}(w_{2})+p^{\lambda(w_{1})}\varphi_{f}(w^{\prime})=0. \tag{5.50}\] By assumption, \(\varphi_{f}(w_{2})=\Xi_{p}(w_{2})\). Hence by (5.50), \[\varphi_{f}(w^{\prime})=-p^{-\lambda(w_{1})}\Xi_{p}(w_{2})=(-p)^{-\lambda(w_{1})-\lambda(w_{2})}\mu(I_{p})=\Xi_{p}(w^{\prime}).\] Thus (5.45) follows by induction.
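The induction above rests on two pieces of bookkeeping: the additivity \(\lambda(w_{2})+\lambda(w_{1})=\lambda(w^{\prime})\) in each of the four case distinctions, and the closed form behind the convergence of the Poincaré series (5.44). Both can be checked mechanically (the \(\lambda\)-values are those listed after (5.43); this is a verification aid, not part of the proof):

```python
# lambda on the Iwahori-cell representatives, as listed after (5.43)
def lam(word):
    kind, m = word  # encodes 1, J, A_m, J A_m, A_m J, or J A_m J
    return {"1": 0, "J": 3, "A": 4 * m, "JA": 4 * m - 3,
            "AJ": 4 * m + 3, "JAJ": 4 * m}[kind]

# the four cases of the induction step: w' = w2 . w1 with lambda additive
for m in range(1, 50):
    cases = [(("AJ", m), ("A", m), ("J", 0)),        # A_m J   = A_m . J
             (("JAJ", m), ("JA", m), ("J", 0)),      # J A_m J = J A_m . J
             (("A", m), ("AJ", m - 1), ("JA", 1)),   # A_m     = A_{m-1} J . J A_1
             (("JA", m), ("JAJ", m - 1), ("JA", 1))] # J A_m   = J A_{m-1} J . J A_1
    for w, w2, w1 in cases:
        assert lam(w) == lam(w2) + lam(w1)

# truncation of the Poincare series (5.44) matches its closed form
p = 3.0
lhs = 1 + p**-6 + sum(2 * p**(-8 * n) + p**(-8 * n + 6) + p**(-8 * n - 6)
                      for n in range(1, 60))
rhs = 1 + p**-6 + (2 + p**6 + p**-6) * p**-8 / (1 - p**-8)
assert abs(lhs - rhs) < 1e-12
```

The four cell types in \(W_{n}\) contribute \(p^{-8n}+p^{-8n+6}+p^{-8n-6}+p^{-8n}\), which is exactly the factor \((2+p^{6}+p^{-6})p^{-8n}\) in (5.44).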
**Proposition 5.11**.: _Let \(p\) be a prime inert in \(E\) and \(\mathrm{St}_{p}\) be the Steinberg representation of \(G(\mathbb{Q}_{p})\); the function \(\Xi_{p}\) is a matrix coefficient of \(\mathrm{St}_{p}\). Precisely, let \(\xi_{p}\neq 0\) be a local new vector in \(\mathrm{St}_{p}\) (a generator of the one-dimensional space of \(I_{p}\)-invariant vectors); then_ \[\frac{\langle\mathrm{St}_{p}(g)\xi_{p},\xi_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}}=\frac{\Xi_{p}(g)}{\Xi_{p}(\mathbf{1}_{p})},\quad\forall\ g\in G(\mathbb{Q}_{p}). \tag{5.51}\] Proof.: Let \(\mathcal{V}\) be the vector space spanned by the right translates of the function \(\Xi_{p}\). Then \(\mathcal{V}\) is a smooth representation of \(G(\mathbb{Q}_{p})\), which we denote by \((\pi_{p},\mathcal{V})\). Suppose that \(\pi_{p}\) is reducible: then there exist some nonzero \(G(\mathbb{Q}_{p})\)-invariant subspace \(\mathcal{V}^{\prime}\subsetneq\mathcal{V}\) and some \(g_{0}\in G(\mathbb{Q}_{p})\) such that \(\pi_{p}(g_{0})\Xi_{p}\in\mathcal{V}^{\prime}\) and \[\Xi_{p}(g_{0})=\Xi_{p}(\mathbf{1}_{p}\cdot g_{0})=\pi_{p}(g_{0})\Xi_{p}(\mathbf{1}_{p})\neq 0.\] Let \(g_{1}\in G(\mathbb{Q}_{p})\). Set \[\varphi(g):=\int_{I_{p}}\Xi_{p}(g_{1}\kappa g)d\kappa. \tag{5.52}\] Then \(\varphi\) is a function of \(g\in G(\mathbb{Q}_{p})\) bi-invariant under \(I_{p}\). Moreover, a computation similar to those in (5.46) and (5.47) shows that \[\int_{I_{p}\bigsqcup I_{p}wI_{p}}\Xi_{p}(g^{\prime}\kappa)d\kappa=0,\ \forall w\in\big{\{}J,JA_{1}\big{\}},\ g^{\prime}\in G(\mathbb{Q}_{p}).
\tag{5.53}\] Taking \(g^{\prime}\) of the form \(g_{1}\kappa^{\prime}g\) in (5.53) and integrating over \(\kappa^{\prime}\in I_{p}\), one then has \[\int_{I_{p}\bigsqcup I_{p}wI_{p}}\varphi(g\kappa)d\kappa=\int_{I_{p}\bigsqcup I_{p}wI_{p}}\int_{I_{p}}\Xi_{p}(g_{1}\kappa^{\prime}g\kappa)d\kappa^{\prime}d\kappa=0,\ \forall w\in\big{\{}J,JA_{1}\big{\}}.\] Then one can apply an induction argument similar to that in the proof of Lemma 5.10 to deduce that \[\varphi(g)=\Xi_{p}(g_{1})\Xi_{p}(g). \tag{5.54}\] We then take \(g=g_{0}\) and let \(g_{1}\) vary, obtaining \[\Xi_{p}(g_{0})\Xi_{p}(g_{1})=\varphi(g_{0})=\int_{I_{p}}\Xi_{p}(g_{1}\kappa g_{0})d\kappa=\int_{I_{p}}(\pi_{p}(\kappa g_{0})\Xi_{p})(g_{1})d\kappa. \tag{5.55}\] Note that by our assumption \(\Xi_{p}(g_{0})\neq 0\). Hence, from (5.55) we obtain \[\Xi_{p}(\bullet)=\frac{1}{\Xi_{p}(g_{0})}\int_{I_{p}}\pi_{p}(\kappa g_{0})\Xi_{p}(\bullet)d\kappa\in\mathcal{V}^{\prime}.\] Therefore \(\mathcal{V}\subseteq\mathcal{V}^{\prime}\), a contradiction! So \((\pi_{p},\mathcal{V})\) is irreducible. Let \[\Pi_{p}=\mathrm{Ind}_{P(\mathbb{Q}_{p})}^{G(\mathbb{Q}_{p})}(|\cdot|_{p}^{1},1,|\cdot|_{p}^{-1})\] be the induced representation. Then the functions \(f\) on \(G(\mathbb{Q}_{p})\) which belong to the space of \(\Pi_{p}\) are precisely the functions, \(G(\mathbb{Z}_{p})\)-finite on the right, which satisfy \[f(pg)=\delta_{P}(p)f(g).\] Hence the function \(\varphi_{f}\) defined in Lemma 5.10 belongs to the space of \(\Pi_{p}\) and therefore \(\pi_{p}\) is an irreducible component of \(\Pi_{p}\).
The classification of irreducible admissible representations of \(G(\mathbb{Q}_{p})\) implies that \(\pi_{p}\) is the Steinberg representation: \(\pi_{p}\simeq\mathrm{St}_{p}\). Since \(\Xi_{p}\) is \(I_{p}\)-invariant and the space of \(I_{p}\)-invariant vectors (the space of local new-vectors) in the Steinberg representation has dimension \(1\), there exists a nonzero constant \(\lambda\) such that \(\xi_{p}=\lambda\cdot\Xi_{p}\). It remains to compute the matrix coefficient of \(\Xi_{p}\). Let \(g\in G(\mathbb{Q}_{p})\). Then \[\langle\pi_{p}(g)\Xi_{p},\Xi_{p}\rangle_{p}=\int_{\overline{G}(\mathbb{Q}_{p})}\Xi_{p}(g_{1}g)\overline{\Xi}_{p}(g_{1})dg_{1}=\frac{1}{\Xi_{p}(\mathbf{1}_{p})}\int_{\overline{G}(\mathbb{Q}_{p})}\overline{\Xi}_{p}(g_{1})dg_{1}\int_{I_{p}}\Xi_{p}(g_{1}\kappa g)d\kappa.\] Then by (5.52) and (5.54) (noting that \(\Xi_{p}\) is \(L^{2}\)-integrable by (5.44)), we deduce that \[\langle\pi_{p}(g)\Xi_{p},\Xi_{p}\rangle_{p}=\frac{\Xi_{p}(g)}{\Xi_{p}(\mathbf{1}_{p})}\int_{\overline{G}(\mathbb{Q}_{p})}\Xi_{p}(g_{1})\overline{\Xi}_{p}(g_{1})dg_{1}=\frac{\langle\Xi_{p},\Xi_{p}\rangle_{p}}{\Xi_{p}(\mathbf{1}_{p})}\cdot\Xi_{p}(g), \tag{5.56}\] which proves (5.51). ### The local period at \(p=N\) In this section we use the results of the previous section to establish (5.23). We recall that \(p=N\) is inert in \(E\), that \(\pi_{p}\) is the Steinberg representation and that \(\pi_{p}^{\prime}\) is a (tempered) unramified principal series representation. Let \(\xi_{p}\) and \(\xi_{p}^{\prime}\) be local new vectors of \(\pi_{p}\) and \(\pi_{p}^{\prime}\), respectively. We will show that \[\left|L_{p}(\pi_{p},\pi_{p}^{\prime})\cdot\mathcal{P}_{p}^{\natural}(\xi_{p},\xi_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})-\frac{p-1}{p^{2}}\right|\leq\frac{3(1-p^{-2})}{p}\cdot\frac{p-1}{p^{2}}.
\tag{5.57}\] This implies (5.23) as well as (5.24), since the local factors at \(1/2\) and \(1\) do not vanish (see (5.9)) and are of the shape \(1+o(1)\) as \(p\) becomes large. Write \(\pi^{\prime}_{p}=\operatorname{Ind}\chi^{\prime}_{p}\) and let \[\gamma^{\prime}_{p}=\chi^{\prime}_{p}(A_{1}),\] where \[A_{n}=\operatorname{diag}(p^{n},p^{-n}),\ n\geq 0.\] By Macdonald's formula (cf. [10] or [11]) and Lemma A.1, we have \[\frac{\langle\pi^{\prime}_{p}(A_{n})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}=\frac{(1-p^{-1}\gamma^{\prime-1}_{p})\gamma^{\prime n}_{p}-(1-p^{-1}\gamma^{\prime}_{p})\gamma^{\prime-n-1}_{p}}{p^{n}\big{[}(1-p^{-1}\gamma^{\prime-1}_{p})-(1-p^{-1}\gamma^{\prime}_{p})\gamma^{\prime-1}_{p}\big{]}}. \tag{5.58}\] By Lemma A.5, Lemma A.6, Proposition 5.11, Lemma A.2 and Lemma A.3, \[\frac{\mathcal{P}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}=\sum_{n\geq 0}\sum_{w^{\prime}\in W^{\prime}_{n}}(-p)^{-\lambda^{\prime}(i(w^{\prime}))}p^{\lambda^{\prime}(w^{\prime})}\cdot\frac{\langle\pi^{\prime}_{p}(A_{n})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}\cdot\mu(I^{\prime}_{p})=\frac{1-p^{-2}}{p+1}+\sum_{n\geq 1}\frac{\langle\pi^{\prime}_{p}(A_{n})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}\cdot\frac{2p^{-2n}-p^{-2n+2}-p^{-2n-2}}{p+1},\] where the last equality follows from the fact that \(\xi^{\prime}_{p}\) is spherical.
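Passing from the last display to (5.59) uses the elementary identity \(\frac{2p^{-2n}-p^{-2n+2}-p^{-2n-2}}{p+1}=-\frac{(p-1)^{2}(p+1)}{p^{2n+2}}\), i.e. \(2p^{2}-p^{4}-1=-(p^{2}-1)^{2}\). A quick exact check over several \(p\) and \(n\) (a verification aid only):

```python
from fractions import Fraction as F

for p in (2, 3, 5, 7, 11):
    for n in range(1, 8):
        # coefficient appearing in the spherical-sum display
        lhs = (2 * F(1, p**(2 * n)) - F(1, p**(2 * n - 2))
               - F(1, p**(2 * n + 2))) / (p + 1)
        # coefficient appearing in (5.59), with the minus sign
        rhs = -F((p - 1)**2 * (p + 1), p**(2 * n + 2))
        assert lhs == rhs
```

Factoring out \(p^{-2n-2}\) reduces the identity to \(2p^{2}-p^{4}-1=-(p^{2}-1)^{2}\), which holds identically.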
Therefore, \[\frac{\mathcal{P}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}=\frac{p-1}{p^{2}}-\sum_{n\geq 1}\frac{\langle\pi^{\prime}_{p}(A_{n})\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}\cdot\frac{(p-1)^{2}(p+1)}{p^{2n+2}}. \tag{5.59}\] Since \(\pi^{\prime}_{p}\) is tempered, \(|\gamma^{\prime}_{p}|=1\). Hence, we obtain \[\sum_{n\geq 1}\frac{(1-p^{-1}\overline{\gamma^{\prime}_{p}})\gamma^{\prime n}_{p}-(1-p^{-1}\gamma^{\prime}_{p})\overline{\gamma^{\prime}_{p}}^{n+1}}{p^{3n}\big{[}(1-p^{-1}\overline{\gamma^{\prime}_{p}})-(1-p^{-1}\gamma^{\prime}_{p})\gamma^{\prime-1}_{p}\big{]}}=\frac{\frac{(1-p^{-1}\gamma^{\prime-1}_{p})\gamma^{\prime}_{p}}{p^{3}-\gamma^{\prime}_{p}}-\frac{(1-p^{-1}\gamma^{\prime}_{p})\overline{\gamma^{\prime}_{p}}^{2}}{p^{3}-\gamma^{\prime-1}_{p}}}{(1+p^{-1})(1-\gamma^{\prime-1}_{p})}. \tag{5.60}\] Substituting (5.58) and (5.60) into (5.59) one then obtains \[\frac{\mathcal{P}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}=\frac{p-1}{p^{2}}-\frac{(p-1)^{2}(p+1)}{p^{2}}\cdot\frac{\frac{(1-p^{-1}\gamma^{\prime-1}_{p})\gamma^{\prime}_{p}}{p^{3}-\gamma^{\prime}_{p}}-\frac{(1-p^{-1}\gamma^{\prime}_{p})\gamma^{\prime-2}_{p}}{p^{3}-\gamma^{\prime-1}_{p}}}{(1+p^{-1})(1-\gamma^{\prime-1}_{p})}.\] A straightforward simplification shows that \[\frac{\frac{(1-p^{-1}\gamma^{\prime-1}_{p})\gamma^{\prime}_{p}}{p^{3}-\gamma^{\prime}_{p}}-\frac{(1-p^{-1}\gamma^{\prime}_{p})\gamma^{\prime-2}_{p}}{p^{3}-\gamma^{\prime-1}_{p}}}{(1+p^{-1})(1-\gamma^{\prime-1}_{p})}=\frac{p^{3}\gamma^{\prime}_{p}(1+\gamma^{\prime-1}_{p}+\gamma^{\prime-2}_{p})-p^{2}-1-p^{-1}}{(p^{3}-\gamma^{\prime}_{p})(p^{3}-\gamma^{\prime-1}_{p})(1+p^{-1})}.\] In conjunction with \(|\gamma^{\prime}_{p}|=1\) we then conclude, when
\(p\geq 3\), that \[\left|\frac{\frac{(1-p^{-1}\gamma^{\prime-1}_{p})\gamma^{\prime}_{p}}{p^{3}-\gamma^{\prime}_{p}}-\frac{(1-p^{-1}\gamma^{\prime}_{p})\gamma^{\prime-2}_{p}}{p^{3}-\gamma^{\prime-1}_{p}}}{(1+p^{-1})(1-\gamma^{\prime-1}_{p})}\right|\leq\frac{3p^{3}+p^{2}+1+p^{-1}}{(p^{3}-1)^{2}(1+p^{-1})}\leq\frac{3}{p^{3}}. \tag{5.61}\] Therefore, we have by (5.61) that \[\left|\frac{\mathcal{P}_{p}(\xi_{p},\xi_{p};\xi^{\prime}_{p},\xi^{\prime}_{p})}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}-\frac{p-1}{p^{2}}\right|\leq\frac{3(p-1)^{2}(p+1)}{p^{5}}=\frac{3(1-p^{-2})}{p}\cdot\frac{p-1}{p^{2}}.\] Thus, (5.57) follows. ### The matrix coefficient of the Steinberg representation for \(U(W)\) In this section, we assume that \(p=N^{\prime}\) is _split_. Let \(\mu^{\prime}\) be a Haar measure on \(G^{\prime}(\mathbb{Q}_{p})\). Set \(W^{\prime}_{0}=\left\{\mathbf{1}^{\prime}_{p},J^{\prime}\right\}\), where \(\mathbf{1}^{\prime}_{p}\) is the identity in \(G^{\prime}(\mathbb{Q}_{p})\). For \(n\geq 1\), set \[W^{\prime}_{n}=\big{\{}A_{n},J^{\prime}A_{n},A_{n}J^{\prime},J^{\prime}A_{n}J^{\prime}\big{\}}.\] Let \[W^{\prime}:=\bigsqcup_{n\geq 0}W^{\prime}_{n}.\] By Lemma A.3 and the Cartan decomposition we have \[G^{\prime}(\mathbb{Q}_{p})=\bigsqcup_{n\geq 0}G^{\prime}(\mathbb{Z}_{p})A_{n}G^{\prime}(\mathbb{Z}_{p})=\bigsqcup_{w^{\prime}\in W^{\prime}}I^{\prime}_{p}w^{\prime}I^{\prime}_{p}.\] By Lemma A.1 one has \[\mu^{\prime}(I^{\prime}_{p}J^{\prime}I^{\prime}_{p})=p\mu^{\prime}(I^{\prime}_{p}).\] Moreover, by Lemma A.2, for \(n\geq 1\), one has \[\mu^{\prime}(I^{\prime}_{p}A_{n}I^{\prime}_{p})=p^{2n}\mu^{\prime}(I^{\prime}_{p}),\ \mu^{\prime}(I^{\prime}_{p}J^{\prime}A_{n}I^{\prime}_{p})=p^{2n-1}\mu^{\prime}(I^{\prime}_{p}),\] \[\mu^{\prime}(I^{\prime}_{p}A_{n}J^{\prime}I^{\prime}_{p})=p^{2n+1}\mu^{\prime}(I^{\prime}_{p}),\ \text{and}\ \mu^{\prime}(I^{\prime}_{p}J^{\prime}A_{n}J^{\prime}I^{\prime}_{p})=p^{2n}
\mu^{\prime}(I^{\prime}_{p}).\] Then for \(w^{\prime}\in W^{\prime}\) there exists a unique integer \(\lambda^{\prime}(w^{\prime})\in\mathbb{Z}_{\geq 0}\) such that \[\mu^{\prime}(I^{\prime}_{p}w^{\prime}I^{\prime}_{p})=p^{\lambda^{\prime}(w^{\prime})}\mu^{\prime}(I^{\prime}_{p}).\] In particular, \(\lambda^{\prime}(\mathbf{1}^{\prime}_{p})=0\), \(\lambda^{\prime}(J^{\prime})=1\); and for \(n\geq 1\), \[\lambda^{\prime}(A_{n})=2n,\ \lambda^{\prime}(J^{\prime}A_{n})=2n-1,\ \lambda^{\prime}(A_{n}J^{\prime})=2n+1\ \text{and}\ \lambda^{\prime}(J^{\prime}A_{n}J^{\prime})=2n.\] Let \(\Xi^{\prime}_{p}\) be the \(I^{\prime}_{p}\)-bi-invariant function on \(G^{\prime}(\mathbb{Q}_{p})\) defined by \[\Xi^{\prime}_{p}(w^{\prime})=(-p)^{-\lambda^{\prime}(w^{\prime})}\mu^{\prime}(I^{\prime}_{p}),\ w^{\prime}\in W^{\prime}.\] Then, by an analysis similar to that in §5.7, we have a counterpart of Proposition 5.11: **Proposition 5.12**.: _Let notation be as before. Let \(\mathrm{St}^{\prime}_{p}\) be the Steinberg representation of \(G^{\prime}(\mathbb{Q}_{p})\). The function \(\Xi^{\prime}_{p}\) is a matrix coefficient of \(\mathrm{St}^{\prime}_{p}\). Precisely, let \(\xi^{\prime}_{p}\neq 0\) be a local new vector in \(\mathrm{St}^{\prime}_{p}\); then_ \[\frac{\langle\mathrm{St}^{\prime}_{p}(g)\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}{\langle\xi^{\prime}_{p},\xi^{\prime}_{p}\rangle_{p}}=\frac{\Xi^{\prime}_{p}(g)}{\Xi^{\prime}_{p}(\mathbf{1}^{\prime}_{p})},\quad\forall\ g\in G^{\prime}(\mathbb{Q}_{p}). \tag{5.62}\] ### The local period at \(p=N^{\prime}\) In this section we deal with the case where \(v=p=N^{\prime}\) is a split prime and establish (5.25) and (5.26).
We recall that in this case we have the identifications \[G(\mathbb{Q}_{p})\simeq\mathrm{GL}_{3}(\mathbb{Q}_{p}),\ K_{p}\simeq\mathrm{GL}_{3}(\mathbb{Z}_{p})\] and \[G^{\prime}(\mathbb{Q}_{p})\simeq\mathrm{GL}_{2}(\mathbb{Q}_{p}),\ K^{\prime}_{p}\simeq\mathrm{GL}_{2}(\mathbb{Z}_{p}).\] Moreover, the subgroup \(G^{\prime}\subset G\) is identified with the subgroup of \(\mathrm{GL}_{3}\) leaving invariant the second element of the canonical basis: \[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\mapsto\begin{pmatrix}a&&b\\ &1&\\ c&&d\end{pmatrix}. \tag{5.63}\] We recall that \(\pi^{\prime}_{p}\simeq\mathrm{St}^{\prime}_{p}\) is the Steinberg representation and we denote its new vector by \(\xi^{\prime}_{p}\); the representation \[\pi_{p}\simeq\mathrm{Ind}\,\chi_{p}\] is a tempered unramified principal series induced from a unitary character \(\chi_{p}=\chi\) of the diagonal torus. We denote by \(\xi_{p}\) a nonzero spherical vector and by \(\Xi_{p}\) its associated matrix coefficient: \[\frac{\langle\pi_{p}(g_{p})\xi_{p},\xi_{p}\rangle}{\langle\xi_{p},\xi_{p}\rangle}=\frac{\Xi_{p}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})}.\] Finally we set \[\tilde{\xi}_{p}:=\pi_{p}(\tilde{\mathfrak{n}}_{p})\xi_{p},\] where we recall that \[\tilde{\mathfrak{n}}_{p}\simeq\begin{pmatrix}1&p^{-1}&\\ &1&\\ &&1\end{pmatrix}=w^{\prime}.\mathfrak{n}_{p}.w^{\prime},\ \mathfrak{n}_{p}=\begin{pmatrix}1&&p^{-1}\\ &1&\\ &&1\end{pmatrix},\ w^{\prime}=\begin{pmatrix}1&&\\ &&1\\ &1&\end{pmatrix}.\] Our aim is to compute the normalized period \[\mathcal{P}^{*}(\widetilde{\xi}_{p},\widetilde{\xi}_{p};\xi_{p}^{\prime},\xi_{p}^{\prime}):=\int_{G^{\prime}(\mathbb{Q}_{p})}\frac{\langle\pi_{p}(g_{p})\widetilde{\xi}_{p},\widetilde{\xi}_{p}\rangle_{p}\langle\pi_{p}^{\prime}(g_{p})\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}{\langle\widetilde{\xi}_{p},\widetilde{\xi}_{p}\rangle_{p}\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}dg_{p}. \tag{5.64}\] We have
\[\frac{\langle\pi_{p}(g_{p})\widetilde{\xi}_{p},\widetilde{\xi}_{p}\rangle}{\langle\widetilde{\xi}_{p},\widetilde{\xi}_{p}\rangle}=\frac{\langle\pi_{p}(\tilde{\mathfrak{n}}_{p}^{-1}g_{p}\tilde{\mathfrak{n}}_{p})\xi_{p},\xi_{p}\rangle}{\langle\xi_{p},\xi_{p}\rangle}=\frac{\Xi_{p}(\tilde{\mathfrak{n}}_{p}^{-1}g_{p}\tilde{\mathfrak{n}}_{p})}{\Xi_{p}(\mathbf{1}_{p})},\] so that \[\mathcal{P}^{*}(\widetilde{\xi}_{p},\widetilde{\xi}_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})=\int_{G^{\prime}(\mathbb{Q}_{p})}\frac{\Xi_{p}(\tilde{\mathfrak{n}}_{p}^{-1}g_{p}\tilde{\mathfrak{n}}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p} \tag{5.65}\] by Proposition 5.12. _Remark 5.7_.: Observe that since \(w^{\prime}\in K_{p}\) we have \[\Xi_{p}(\tilde{\mathfrak{n}}_{p}^{-1}g_{p}\tilde{\mathfrak{n}}_{p})=\Xi_{p}(w^{\prime}.\mathfrak{n}_{p}^{-1}.w^{\prime}.g_{p}.w^{\prime}.\mathfrak{n}_{p}.w^{\prime})=\Xi_{p}(\mathfrak{n}_{p}^{-1}.w^{\prime}.g_{p}.w^{\prime}.\mathfrak{n}_{p}),\] and for \(g_{p}\in G^{\prime}(\mathbb{Q}_{p})\) \[w^{\prime}.g_{p}.w^{\prime}=g_{p}^{\prime}=\begin{pmatrix}a&b&\\ c&d&\\ &&1\end{pmatrix},\] which is the usual embedding of \(\mathrm{GL}_{2}\hookrightarrow\mathrm{GL}_{3}\). For the rest of this section, and to simplify notation, we will use this latter embedding in place of (5.63). This will allow us to replace \(\tilde{\mathfrak{n}}_{p}\) by \(\mathfrak{n}_{p}\) in all our forthcoming computations. Let \[w=\begin{pmatrix}&1\\ 1&\end{pmatrix}\in K_{p}^{\prime}\ \text{and}\ I_{p}^{\prime}\simeq\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{GL}_{2}(\mathbb{Z}_{p}),\ c\in p\mathbb{Z}_{p}\right\}\] be the Iwahori subgroup of \(G^{\prime}(\mathbb{Q}_{p})\).
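The two identities in Remark 5.7 (conjugation by \(w^{\prime}\) turns the embedding (5.63) into the usual upper-left embedding of \(\mathrm{GL}_{2}\) in \(\mathrm{GL}_{3}\), and \(\tilde{\mathfrak{n}}_{p}=w^{\prime}.\mathfrak{n}_{p}.w^{\prime}\)) are pure matrix algebra. A check with exact rational matrices; the sample entries \(a,b,c,d\) and the choice \(p=5\) are arbitrary:

```python
from fractions import Fraction as F

def mul(X, Y):
    # 3x3 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

p = 5
w_prime = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]    # w' from the text; w'^2 = 1
n_p = [[1, 0, F(1, p)], [0, 1, 0], [0, 0, 1]]  # the unipotent n_p
a, b, c, d = F(2), F(3), F(1), F(4)            # a sample GL_2 element

emb_563 = [[a, 0, b], [0, 1, 0], [c, 0, d]]    # embedding (5.63)
emb_ul = [[a, b, 0], [c, d, 0], [0, 0, 1]]     # usual GL_2 -> GL_3 embedding

# w' . g . w' converts (5.63) into the upper-left embedding
assert mul(mul(w_prime, emb_563), w_prime) == emb_ul

# w' . n_p . w' moves the entry p^{-1} from position (1,3) to (1,2)
n_tilde = [[1, F(1, p), 0], [0, 1, 0], [0, 0, 1]]
assert mul(mul(w_prime, n_p), w_prime) == n_tilde
```

Since \(w^{\prime}\) is an involution lying in \(K_{p}\), both conjugations leave the bi-\(K_{p}\)-invariant function \(\Xi_{p}\) unchanged, which is exactly what the remark exploits.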
For \((m,n)\in\mathbb{Z}^{2}\), we set \[A_{m,n}=\begin{pmatrix}p^{m}&\\ &p^{n}\end{pmatrix}\in G^{\prime}(\mathbb{Q}_{p}).\] By the Iwahori-Cartan decomposition one has \[G^{\prime}(\mathbb{Q}_{p})=G_{1}^{\prime}\bigsqcup G_{2}^{\prime},\] where \[G_{1}^{\prime}:=\bigsqcup_{n\in\mathbb{Z}}\left(I_{p}^{\prime}A_{n,n}\bigsqcup I_{p}^{\prime}A_{n,n}wI_{p}^{\prime}\right) \tag{5.66}\] and \[G_{2}^{\prime}:=\bigsqcup_{m\geq n+1}\left(I_{p}^{\prime}A_{m,n}I_{p}^{\prime}\bigsqcup I_{p}^{\prime}wA_{m,n}I_{p}^{\prime}\bigsqcup I_{p}^{\prime}A_{m,n}wI_{p}^{\prime}\bigsqcup I_{p}^{\prime}wA_{m,n}wI_{p}^{\prime}\right). \tag{5.67}\] From (5.64), we have \[\mathcal{P}^{*}(\widetilde{\xi}_{p},\widetilde{\xi}_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})=\sum_{j=1}^{2}\int_{G_{j}^{\prime}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}. \tag{5.68}\] Let \[I_{p}^{\prime}(1)\simeq\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{GL}_{2}(\mathbb{Z}_{p}):\ c,a-1\in p\mathbb{Z}_{p}\right\}\subset I_{p}^{\prime}. \tag{5.69}\] We have \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}a&b&\\ c&d&\\ &&1\end{pmatrix}\mathfrak{n}_{p}=\begin{pmatrix}a&b&(a-1)/p\\ c&d&c/p\\ &&1\end{pmatrix}\] so that \[\mathfrak{n}_{p}^{-1}I_{p}^{\prime}(1)\mathfrak{n}_{p}\subseteq K_{p}.\] It follows that \[\int_{I_{p}^{\prime}A_{n,n}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\frac{\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{\delta}\Xi_{p}\left(\mathfrak{n}_{p}^{-1}A_{n,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\right),\] where \(\delta\) runs over \((\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}\). We will compute the above integral depending on \(n\): for this we notice that 1.
For \(n\geq 1\), we have \[\mathfrak{n}_{p}^{-1}A_{n,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}=\begin{pmatrix}\delta p^{n}&&(\delta p^{n}-1)/p\\ &p^{n}&\\ &&1\end{pmatrix}\in K_{p}\begin{pmatrix}p^{n+1}&&\\ &p^{n}&\\ &&p^{-1}\end{pmatrix}K_{p}.\] 2. For \(n\leq-1\) we have similarly \[\mathfrak{n}_{p}^{-1}A_{n,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &p^{n}&\\ &&p^{n-1}\end{pmatrix}K_{p}.\] 3. For \(n=0\) and \(\delta\neq 1\in(\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}\) we have \[\mathfrak{n}_{p}^{-1}A_{n,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{-1}\end{pmatrix}K_{p}.\] From the above discussion we obtain that \[\int_{I_{p}^{\prime}A_{n,n}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\Sigma_{01}+\Sigma_{02}+\Sigma_{03}, \tag{5.70}\] where \[\Sigma_{01}:= \frac{\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\Bigg{[}\Xi_{p}(\mathbf{1}_{p})+(p-2)\Xi_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{-1}\end{pmatrix}\Bigg{]},\] \[\Sigma_{02}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\geq 1}\Xi_{p}\begin{pmatrix}p^{n+1}&&\\ &p^{n}&\\ &&p^{-1}\end{pmatrix},\] \[\Sigma_{03}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\Xi_{p}\begin{pmatrix}p&&\\ &p^{n}&\\ &&p^{n-1}\end{pmatrix}.\] Regarding the integral along \(I_{p}^{\prime}A_{n,n}wI_{p}^{\prime}\) we have \[\int_{I_{p}^{\prime}A_{n,n}wI_{p}^{\prime}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=-\frac{\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{\delta}\Xi_{p}\left(\mathfrak{n}_{p}^{-1}A_{n,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\right),\] where
\(\delta\) runs over \((\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}\). We will compute the above integral depending on the value of \(n\): for this we observe that 1. For \(n\geq 1\), we have \[\mathfrak{n}_{p}^{-1}A_{n,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}=\begin{pmatrix}\delta p^{n}&&\delta p^{n-1}\\ &p^{n}&-p^{-1}\\ &&1\end{pmatrix}\in K_{p}\begin{pmatrix}p^{n+1}&&\\ &p^{n}&\\ &&p^{-1}\end{pmatrix}K_{p}.\] 2. For \(n\leq-1\) we have \[\mathfrak{n}_{p}^{-1}A_{n,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &p^{n}&\\ &&p^{n-1}\end{pmatrix}K_{p}.\] 3. For \(n=0\) we have \[\mathfrak{n}_{p}^{-1}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{-1}\end{pmatrix}K_{p}.\] Thus from the above discussion we then obtain \[\int_{I_{p}^{\prime}A_{n,n}wI_{p}^{\prime}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\Sigma_{01}^{\prime}+\Sigma_{02}^{\prime}+\Sigma_{03}^{\prime}, \tag{5.71}\] where \[\Sigma_{01}^{\prime}:=-\frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\Xi_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{-1}\end{pmatrix},\ \Sigma_{02}^{\prime}:=-\Sigma_{02},\ \Sigma_{03}^{\prime}:=-\Sigma_{03}.\] Then we have by (5.70) and (5.71) that \[\int_{G_{1}^{\prime}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\mu(I_{p}^{\prime}(1))-\frac{\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\Xi_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{-1}\end{pmatrix}. \tag{5.72}\] Let \(n\in\mathbb{Z}.\) Let \(m\geq n+1.\) We have, similar to Lemma A.2, that \[I_{p}^{\prime}A_{m,n}I_{p}^{\prime}=\bigsqcup_{\tau\in\mathbb{Z}_{p}/p^{m-n}\mathbb{Z}_{p}}\begin{pmatrix}1&\tau\\ &1\end{pmatrix}A_{m,n}I_{p}^{\prime}.\] A straightforward calculation shows that 1.
Suppose \(n\geq 0.\) Then \(m\geq n+1\geq 1.\) Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m+1}&&\\ &p^{n}&\\ &&p^{-1}\end{pmatrix}K_{p}.\] 2. Suppose \(n\leq-1\) and \(m\geq 1.\) Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m+1}&&\\ &p^{-1}&\\ &&p^{n}\end{pmatrix}K_{p}.\] 3. Suppose \(n\leq-1\) and \(m=0.\) If \(\delta\neq 1\in\mathbb{F}_{p},\) then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m+1}&&\\ &p^{-1}&\\ &&p^{n}\end{pmatrix}K_{p}.\] 4. Suppose \(n\leq-1\) and \(m\leq-1.\) Note that \(m-1\geq n.\) Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &p^{m-1}&\\ &&p^{n}\end{pmatrix}K_{p}.\] Therefore, similar to (5.70) and (5.71) we have that \[\Sigma_{1}:= \sum_{n\in\mathbb{Z}}\sum_{m\geq n+1}\int_{I_{p}^{\prime}A_{m,n}I_{p}^{\prime}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\Sigma_{11}+\Sigma_{12}+\Sigma_{13}+\Sigma_{14}, \tag{5.73}\] where \[\Sigma_{11}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\geq 0}\sum_{m\geq n+1}\Xi_{p}\begin{pmatrix}p^{m+1}&&\\ &p^{n}&\\ &&p^{-1}\end{pmatrix},\] \[\Sigma_{12}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\begin{pmatrix}p^{m+1}&&\\ &p^{-1}&\\ &&p^{n}\end{pmatrix},\] \[\Sigma_{13}:= \frac{\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\Bigg{[}\Xi_{p}\begin{pmatrix}1&&\\ &1&\\ &&p^{n}\end{pmatrix}+(p-2)\Xi_{p}\begin{pmatrix}p&&\\ &p^{-1}&\\ &&p^{n}\end{pmatrix}\Bigg{]},\] \[\Sigma_{14}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-2}\sum_{n+1\leq m\leq-1}\Xi_{p}\begin{pmatrix}p&&\\ &p^{m-1}&\\ &&p^{n}\end{pmatrix}.\] Let \(m\geq n+1\). We have, similar to Lemma A.2, that \[I_{p}^{\prime}wA_{m,n}I_{p}^{\prime}=\bigsqcup_{\tau\in p\mathbb{Z}_{p}/p^{m-n}\mathbb{Z}_{p}}\begin{pmatrix}1&\\ \tau&1\end{pmatrix}wA_{m,n}I_{p}^{\prime}.\] We have \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&p^{m-1}\\ &p^{n}&-p^{-1}\\ &&1\end{pmatrix}K_{p}.\] 1. Suppose \(n\geq 0\). Then \(m\geq n+1\geq 1\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&\\ &p^{n+1}&\\ &&p^{-1}\end{pmatrix}K_{p}.\] 2. Suppose \(n\leq-1\) and \(m\geq 1\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&\\ &1&\\ &&p^{n}\end{pmatrix}K_{p}.\] 3. Suppose \(n\leq-1\) and \(m=0\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &p^{-1}&\\ &&p^{n}\end{pmatrix}K_{p}.\] 4. Suppose \(n\leq-1\) and \(m\leq-1\). Note that \(m-1\geq n\).
Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &p^{m-1}&\\ &&p^{n}\end{pmatrix}K_{p}.\] Therefore, similar to (5.70) and (5.73) we have that \[\Sigma_{2}:=\sum_{n\in\mathbb{Z}}\sum_{m\geq n+1}\int_{I_{p}^{\prime}wA_{m,n}I_{p}^{\prime}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\Sigma_{21}+\Sigma_{22}+\Sigma_{23}+\Sigma_{24}, \tag{5.74}\] where \[\Sigma_{21}:=-\frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\geq 0}\sum_{m\geq n+1}\Xi_{p}\begin{pmatrix}p^{m}&&\\ &p^{n+1}&\\ &&p^{-1}\end{pmatrix},\] \[\Sigma_{22}:=-\frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\begin{pmatrix}p^{m}&&\\ &1&\\ &&p^{n}\end{pmatrix},\] \[\Sigma_{23}:=-\frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\Xi_{p}\begin{pmatrix}p&&\\ &p^{-1}&\\ &&p^{n}\end{pmatrix},\] \[\Sigma_{24}:=-\frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-2}\sum_{n+1\leq m\leq-1}\Xi_{p}\begin{pmatrix}p&&\\ &p^{m-1}&\\ &&p^{n}\end{pmatrix}.\] Let \(m\geq n+1\). We have, similar to Lemma A.2, that \[I_{p}^{\prime}A_{m,n}wI_{p}^{\prime}=\bigsqcup_{\tau\in\mathbb{Z}_{p}/p^{m-n+1}\mathbb{Z}_{p}}\begin{pmatrix}1&\tau\\ &1\end{pmatrix}A_{m,n}wI_{p}^{\prime}.\] We have \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&-p^{-1}\\ &p^{n}&p^{n-1}\\ &&1\end{pmatrix}K_{p}.\] 1. Suppose \(n\geq 0\). Then \(m\geq n+1\geq 1\).
Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m+1}&&\\ &p^{n}&\\ &&p^{-1}\end{pmatrix}K_{p}.\] 2. Suppose \(n\leq-1\) and \(m\geq 1\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m+1}&&\\ &1&\\ &&p^{n-1}\end{pmatrix}K_{p}.\] 3. Suppose \(n\leq-1\) and \(m=0\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{n-1}\end{pmatrix}K_{p}.\] 4. Suppose \(n\leq-1\) and \(m\leq-1\). Note that \(m-1\geq n\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&\tau&\\ &1&\\ &&1\end{pmatrix}A_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &p^{m}&\\ &&p^{n-1}\end{pmatrix}K_{p}.\] Therefore, similar to (5.70) and (5.74) we have that \[\Sigma_{3}:=\sum_{n\in\mathbb{Z}}\sum_{m\geq n+1}\int_{I_{p}^{\prime}A_{m,n}wI_{p}^{\prime}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\Sigma_{31}+\Sigma_{32}+\Sigma_{33}+\Sigma_{34}, \tag{5.75}\] where \[\Sigma_{31}:=-\frac{(p-1)\mu(I^{\prime}_{p}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\geq 0}\sum_{m\geq n+1}\Xi_{p}\begin{pmatrix}p^{m+1}&&\\ &p^{n}&\\ &&p^{-1}\end{pmatrix},\] \[\Sigma_{32}:=-\frac{(p-1)\mu(I^{\prime}_{p}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\begin{pmatrix}p^{m+1}&&\\ &1&\\ &&p^{n-1}\end{pmatrix},\] \[\Sigma_{33}:=-\frac{(p-1)\mu(I^{\prime}_{p}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\Xi_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{n-1}\end{pmatrix},\] \[\Sigma_{34}
:=-\frac{(p-1)\mu(I^{\prime}_{p}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-2}\sum_{n+1\leq m\leq-1}\Xi_{p}\begin{pmatrix}p&&\\ &p^{m}&\\ &&p^{n-1}\end{pmatrix}.\] Let \(m\geq n+1\). We have, similar to Lemma A.2, that \[I_{p}^{\prime}wA_{m,n}wI_{p}^{\prime}=\bigsqcup_{\tau\in\mathbb{Z}_{p}/p^{m-n+1}\mathbb{Z}_{p}}\begin{pmatrix}1&\\ \tau&1\end{pmatrix}wA_{m,n}wI_{p}^{\prime}.\] We have \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&\\ &p^{n}&(\delta p^{n}-1)p^{-1}\\ &&1\end{pmatrix}K_{p}.\] 1. Suppose \(n\geq 1\). Then \(m\geq n+1\geq 2\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&\\ &p^{n+1}&\\ &&p^{-1}\end{pmatrix}K_{p}.\] 2. Suppose \(n=0\) and \(\delta\neq 1\in\mathbb{F}_{p}\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&\\ &p&\\ &&p^{-1}\end{pmatrix}K_{p}.\] 3. Suppose \(n\leq-1\) and \(m\geq 1\). Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p^{m}&&\\ &p&\\ &&p^{n-1}\end{pmatrix}K_{p}.\] 4. Suppose \(n\leq-1\) and \(m\leq 0\).
Then \[\mathfrak{n}_{p}^{-1}\begin{pmatrix}1&&\\ \tau&1&\\ &&1\end{pmatrix}wA_{m,n}w\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\mathfrak{n}_{p}\in K_{p}\begin{pmatrix}p&&\\ &p^{m}&\\ &&p^{n-1}\end{pmatrix}K_{p}.\] Therefore, similar to (5.70) and (5.75) we have that \[\Sigma_{4}:=\sum_{n\in\mathbb{Z}}\sum_{m\geq n+1}\int_{X_{m,n}}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}=\Sigma_{41}+\Sigma_{42}+\Sigma_{43}+\Sigma_{44}, \tag{5.76}\] where \[X_{m,n}=I_{p}^{\prime}wA_{m,n}wI_{p}^{\prime}\] and \[\Sigma_{41}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\geq 1}\sum_{m\geq n+1}\Xi_{p}\begin{pmatrix}p^{m}&&\\ &p^{n+1}&\\ &&p^{-1}\end{pmatrix},\] \[\Sigma_{42}:= \frac{\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{m\geq 1}\Bigg{[}\Xi_{p}\begin{pmatrix}p^{m}&&\\ &1&\\ &&1\end{pmatrix}+(p-2)\Xi_{p}\begin{pmatrix}p^{m}&&\\ &p&\\ &&p^{-1}\end{pmatrix}\Bigg{]},\] \[\Sigma_{43}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\begin{pmatrix}p^{m}&&\\ &p&\\ &&p^{n-1}\end{pmatrix},\] \[\Sigma_{44}:= \frac{(p-1)\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\sum_{n\leq-1}\sum_{n+1\leq m\leq 0}\Xi_{p}\begin{pmatrix}p&&\\ &p^{m}&\\ &&p^{n-1}\end{pmatrix}.\] Recall that \(\mathcal{P}^{*}(\widetilde{\xi}_{p},\widetilde{\xi}_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})\) was defined in (5.64). Then combining (5.72), (5.73), (5.74), (5.75) with (5.76) we obtain \[\mathcal{P}^{*}(\widetilde{\xi}_{p},\widetilde{\xi}_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})-\mu(I_{p}^{\prime}(1))=-\frac{\mu(I_{p}^{\prime}(1))}{\Xi_{p}(\mathbf{1}_{p})}\Xi_{p}\begin{pmatrix}p&&\\ &1&\\ &&p^{-1}\end{pmatrix}+\sum_{i=1}^{4}\sum_{j=1}^{4}\Sigma_{ij}. \tag{5.77}\] Denote by RHS the right hand side of (5.77).
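The bookkeeping behind this combination can be spot-checked on finite truncations by treating \(\Xi_{p}(\operatorname{diag}(p^{e_{1}},p^{e_{2}},p^{e_{3}}))\) as an abstract symbol \(\mathrm{Xi}(e_{1},e_{2},e_{3})\) and dropping the common factor \(\mu(I_{p}^{\prime}(1))/\Xi_{p}(\mathbf{1}_{p})\): the pairs \(\Sigma_{14},\Sigma_{24}\), and the triple \(\Sigma_{33},\Sigma_{34},\Sigma_{44}\), cancel term by term in \(n\), which is why these terms are absent from the final expansion. A sketch of this check (an illustration, not the authors' computation):

```python
import sympy as sp

Xi = sp.Function('Xi')      # Xi(e1, e2, e3) stands for Xi_p(diag(p^e1, p^e2, p^e3))
p = sp.symbols('p')

for n in range(-6, -1):
    # Sigma_14 and Sigma_24: identical arguments, opposite signs
    s14 = (p - 1) * sum(Xi(1, m - 1, n) for m in range(n + 1, 0))
    s24 = -(p - 1) * sum(Xi(1, m - 1, n) for m in range(n + 1, 0))
    assert sp.expand(s14 + s24) == 0

for n in range(-6, 0):
    # Sigma_33 + Sigma_34 + Sigma_44 cancels for each fixed n <= -1:
    # the extra m = 0 term of Sigma_44 is absorbed by Sigma_33
    s33 = -(p - 1) * Xi(1, 0, n - 1)
    s34 = -(p - 1) * sum(Xi(1, m, n - 1) for m in range(n + 1, 0))
    s44 = (p - 1) * sum(Xi(1, m, n - 1) for m in range(n + 1, 1))
    assert sp.expand(s33 + s34 + s44) == 0
```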
Then substituting the definitions of the \(\Sigma_{ij}\)'s one finds that \(\Xi_{p}(\mathbf{1}_{p})\cdot\mu(I_{p}^{\prime}(1))^{-1}\cdot\text{RHS}\) is equal to \[-\Xi_{p}\left(\begin{matrix}p&&\\ &1&\\ &&p^{-1}\end{matrix}\right)+(p-1)\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\left(\begin{matrix}p^{m+1}&&\\ &p^{-1}&\\ &&p^{n}\end{matrix}\right)\] \[+\sum_{n\leq-1}\left[\Xi_{p}\left(\begin{matrix}1&&\\ &1&\\ &&p^{n}\end{matrix}\right)-\Xi_{p}\left(\begin{matrix}p&&\\ &p^{-1}&\\ &&p^{n}\end{matrix}\right)\right]\] \[-\sum_{m\geq 1}\Xi_{p}\left(\begin{matrix}p^{m}&&\\ &p&\\ &&p^{-1}\end{matrix}\right)-(p-1)\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\left(\begin{matrix}p^{m}&&\\ &1&\\ &&p^{n}\end{matrix}\right)\] \[-(p-1)\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\left(\begin{matrix}p^{m+1}&&\\ &1&\\ &&p^{n-1}\end{matrix}\right)+\sum_{m\geq 1}\Xi_{p}\left(\begin{matrix}p^{m}&&\\ &1&\\ &&1\end{matrix}\right)\] \[+(p-1)\sum_{n\leq-1}\sum_{m\geq 1}\Xi_{p}\left(\begin{matrix}p^{m}&&\\ &p&\\ &&p^{n-1}\end{matrix}\right).\] To bound this last sum, we recall Macdonald's formula for \(\operatorname{GL}(3,\mathbb{Q}_{p})\) (cf. [10]). Let \(\boldsymbol{\lambda}=(\lambda_{1},\lambda_{2},\lambda_{3})\) be a dominant coweight (i.e. \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\)) and \[p^{\boldsymbol{\lambda}}:=\operatorname{diag}(p^{\lambda_{1}},p^{\lambda_{2}},p^{\lambda_{3}}).\] Let \(\mathcal{C}_{\rho}\) be the set of weights of the irreducible representation of highest weight \(\rho\). By [12, Theorem 5.5, p.
31], \[\frac{\Xi_{p}(p^{\boldsymbol{\lambda}})}{\Xi_{p}(\mathbf{1}_{p})}=\frac{\delta_{B}(p^{\boldsymbol{\lambda}})^{\frac{1}{2}}}{|K_{p}p^{\boldsymbol{\lambda}}K_{p}/K_{p}|\sum_{w}q^{-l(w)}}\sum_{w\in W_{\boldsymbol{\lambda}}}\operatorname{sgn}(w)\sum_{\boldsymbol{\mu}\in[W_{\boldsymbol{\lambda}}\setminus\mathcal{C}_{\rho}]}\sum_{\begin{subarray}{c}S\subseteq\Sigma^{+}\\ \sum_{\gamma\in S}\gamma=\rho-w\boldsymbol{\mu}\end{subarray}}(-1)^{|S|}q^{-|S|}\tau_{\boldsymbol{\lambda}+\boldsymbol{\mu}-\rho}(\chi),\] where \(W_{\boldsymbol{\lambda}}\) is the group generated by the simple reflections \(s_{\alpha}\) for the simple roots \(\alpha\) such that \(\langle\boldsymbol{\lambda},\alpha^{\vee}\rangle=0\), \(\Sigma^{+}\) is the set of positive roots, and \(\tau_{\boldsymbol{\lambda}+\boldsymbol{\mu}-\rho}\) is the character of the irreducible representation \(\sigma_{\boldsymbol{\lambda}+\boldsymbol{\mu}-\rho}\) of \(\operatorname{GL}_{3}(\mathbb{C})\) with highest weight \(\boldsymbol{\lambda}+\boldsymbol{\mu}-\rho\). Note that \[|\mathcal{C}_{\rho}|=\sum_{S\subseteq\Sigma^{+}}1=2^{3}=8.\] So \[\left|\frac{\Xi_{p}(p^{\boldsymbol{\lambda}})}{\Xi_{p}(\mathbf{1}_{p})}\right|\leq\frac{\delta_{B}(p^{\boldsymbol{\lambda}})^{\frac{1}{2}}}{|K_{p}p^{\boldsymbol{\lambda}}K_{p}/K_{p}|\sum_{w}q^{-l(w)}}\sum_{w}\sum_{\boldsymbol{\mu}}\sum_{\begin{subarray}{c}S\subseteq\Sigma^{+}\\ \sum_{\gamma\in S}\gamma=\rho-w\boldsymbol{\mu}\end{subarray}}q^{-|S|}\cdot\dim\sigma_{\boldsymbol{\lambda}+\boldsymbol{\mu}-\rho}\leq\frac{\delta_{B}(p^{\boldsymbol{\lambda}})^{\frac{1}{2}}|\mathcal{C}_{\rho}|}{|K_{p}p^{\boldsymbol{\lambda}}K_{p}/K_{p}|}\cdot\max_{\boldsymbol{\mu}}\dim\sigma_{\boldsymbol{\lambda}+\boldsymbol{\mu}-\rho}.\] By definition, as \(\boldsymbol{\mu}\) ranges through \(\mathcal{C}_{\rho}\), \(\boldsymbol{\mu}-\rho\) ranges over negative roots. Let \(\Lambda_{1}\) and \(\Lambda_{2}\) be the fundamental weights.
Then the possible values of \(\boldsymbol{\mu}-\rho\) are \[0,\ -2\Lambda_{1}+\Lambda_{2},\ \Lambda_{1}-2\Lambda_{2},\ -\Lambda_{1}-\Lambda_{2},\ -3\Lambda_{1},\ -3\Lambda_{2},\ -2\Lambda_{1}-2\Lambda_{2}.\] Note that \(\boldsymbol{\lambda}\equiv(\lambda_{1}-\lambda_{2})\Lambda_{1}+(\lambda_{2}-\lambda_{3})\Lambda_{2}\) modulo the center. Hence, by [11, Example 10.23, p. 288], we have \[\max_{\boldsymbol{\mu}}\dim\sigma_{\boldsymbol{\lambda}+\boldsymbol{\mu}-\rho}\leq\frac{(\lambda_{1}-\lambda_{2}+4)(\lambda_{2}-\lambda_{3}+4)(\lambda_{1}-\lambda_{3}+6)}{2}.\] Therefore, \[\left|\frac{\Xi_{p}(p^{\boldsymbol{\lambda}})}{\Xi_{p}(\mathbf{1}_{p})}\right|\leq\frac{4\delta_{B}(p^{\boldsymbol{\lambda}})^{\frac{1}{2}}}{|K_{p}p^{\boldsymbol{\lambda}}K_{p}/K_{p}|}\cdot(\lambda_{1}-\lambda_{2}+4)(\lambda_{2}-\lambda_{3}+4)(\lambda_{1}-\lambda_{3}+6).\] The absolute value of the right hand side of (5.77) is thus \[\leq 4\cdot 4\cdot 10^{3}\cdot\mu(I_{p}^{\prime}(1))\cdot\left(\frac{1}{p^{2}}+p\sum_{n\geq 1}\sum_{m\geq 1}\frac{m+n+2}{p^{m+n}}+\sum_{n\geq 1}\frac{n+1}{p^{n}}\right)\leq\frac{10^{6}\mu(I_{p}^{\prime}(1))}{p}.\] As a consequence, we have \[\left|\int_{G^{\prime}(\mathbb{Q}_{p})}\frac{\Xi_{p}(\mathfrak{n}_{p}^{-1}g_{p}\mathfrak{n}_{p})\Xi_{p}^{\prime}(g_{p})}{\Xi_{p}(\mathbf{1}_{p})\Xi_{p}^{\prime}(\mathbf{1}_{p}^{\prime})}dg_{p}-\mu(I_{p}^{\prime}(1))\right|\leq\frac{10^{6}\mu(I_{p}^{\prime}(1))}{p}. \tag{5.78}\] Now (5.25) follows from (5.78), the identity \[\mu(I_{p}^{\prime}(1))=\frac{1}{(p-1)(p+1)}=\frac{1}{p^{2}-1} \tag{5.79}\] and our assumption that if \(N^{\prime}=p>1\) then \(p>10^{6}\). _Remark 5.8_.: When one works with a spherical vector \(\xi_{p}\) without the translation by \(\mathfrak{n}_{p}\), the period \(\mathcal{P}_{p}^{\natural}(\xi_{p},\xi_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})\) vanishes.
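The numerology in the estimate just obtained can be sanity-checked numerically: the dimension bound for \(\operatorname{GL}_{3}\) over the listed shifts, the index computation behind (5.79), and the final series bound. A sketch with truncated sums (an illustration only; the shift list and constants are taken from the displays above):

```python
from fractions import Fraction

# Weyl dimension formula for GL_3: highest weight a*Lambda_1 + b*Lambda_2
def dim_gl3(a, b):
    return Fraction((a + 1) * (b + 1) * (a + b + 2), 2)

# the possible shifts mu - rho in (Lambda_1, Lambda_2)-coordinates
shifts = [(0, 0), (-2, 1), (1, -2), (-1, -1), (-3, 0), (0, -3), (-2, -2)]
for a in range(12):              # a = lambda_1 - lambda_2
    for b in range(12):          # b = lambda_2 - lambda_3
        bound = Fraction((a + 4) * (b + 4) * (a + b + 6), 2)
        for s1, s2 in shifts:
            if a + s1 >= 0 and b + s2 >= 0:      # shifted weight still dominant
                assert dim_gl3(a + s1, b + s2) <= bound

# (5.79): the index of I'_p(1) in GL_2(Z_p) is p^2 - 1 (brute force mod p)
def check_index(p):
    gl2 = sum(1 for a in range(p) for b in range(p)
              for c in range(p) for d in range(p) if (a * d - b * c) % p)
    ip1 = p * (p - 1)            # matrices (1, b; 0, d) mod p with d != 0
    assert gl2 == (p * p - 1) * ip1

for q in (3, 5, 7):
    check_index(q)

# the final bound: 4*4*10^3 * (1/p^2 + p*S1 + S2) <= 10^6 / p (truncated sums)
for p in (7, 11, 101):
    S1 = sum(Fraction(m + n + 2, p ** (m + n))
             for m in range(1, 40) for n in range(1, 40))
    S2 = sum(Fraction(n + 1, p ** n) for n in range(1, 40))
    assert 16000 * (Fraction(1, p * p) + p * S1 + S2) <= Fraction(10 ** 6, p)
```

The margin is in fact large: the parenthesized quantity is of size \(6/p+O(p^{-2})\), so the factor \(10^{6}\) is far from sharp.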
In fact, applying the Iwahori-Cartan decomposition, we obtain by Lemma A.2, Lemma A.3 and Proposition 5.12, that \[\frac{\mathcal{P}_{p}(\xi_{p},\xi_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})}{\langle\xi_{p},\xi_{p}\rangle_{p}\cdot\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}=\sum_{n\geq 0}\sum_{w^{\prime}\in W_{n}^{\prime}}\frac{\langle\pi_{p}(w^{\prime})\xi_{p},\xi_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}}\cdot\frac{(-1)^{\lambda^{\prime}(w^{\prime})}}{p+1}, \tag{5.80}\] which converges absolutely. Therefore, one can switch the sums on the right hand side of (5.80), obtaining \[\frac{\mathcal{P}_{p}(\xi_{p},\xi_{p};\xi_{p}^{\prime},\xi_{p}^{\prime})}{\langle\xi_{p},\xi_{p}\rangle\langle\xi_{p}^{\prime},\xi_{p}^{\prime}\rangle_{p}}=\sum_{n\geq 0}\frac{\langle\pi_{p}(A_{n})\xi_{p},\xi_{p}\rangle_{p}}{\langle\xi_{p},\xi_{p}\rangle_{p}}\cdot\mu(I_{p}^{\prime})\sum_{w^{\prime}\in W_{n}^{\prime}}\frac{(-1)^{\lambda^{\prime}(w^{\prime})}}{p+1}=0.\] ## 6. **The Geometric Side** In this section and the next three sections, we compute the terms on the right hand side of (3.6) with the choices of \(f^{\mathfrak{n}}\) and \(\varphi^{\prime}\) that have been described in the previous sections. In particular this will show the absolute convergence of the sums/integrals appearing in (3.6) and (3.7) so that (3.7) is completely justified. ### Basic Decomposition In this subsection we briefly recall some basic decompositions related to \(G=U(V)\). These results will be used to calculate representatives of double cosets and estimate regular orbital integrals.
Let \(P\) be the parabolic subgroup stabilizing the isotropic line through \(e_{-1}.\) Explicitly, \(P=MN,\) with \[M=\Bigg{\{}m(\alpha,\beta):=\left(\begin{array}{ccc}\alpha&&\\ &\beta&\\ &&\overline{\alpha}^{-1}\end{array}\right):\ \alpha\in E^{\times},\ \beta\in E^{1} \Bigg{\}};\] and the unipotent radical \[N=\Bigg{\{}n(b,z):=\left(\begin{array}{ccc}1&b&z\\ &1&-\overline{b}\\ &&1\end{array}\right):\ z,b\in E,\ z+\overline{z}=-b\overline{b}\Bigg{\}}.\] As algebraic groups defined over \(\mathbb{Q}\), \(M\) is a \(3\)-dimensional torus with split rank \(1\), and \(N\) is a \(3\)-dimensional unipotent group. The center of \(G\) is \[Z_{G}=\{m(\beta,\beta)=\operatorname{diag}(\beta,\beta,\beta):\ \beta\in E^{1}\}\subseteq M\] so that \(Z_{G}\simeq E^{1}\); given \(\beta\in E^{1}\), we set \[\gamma_{\beta}:=\operatorname{diag}(\beta,\beta,\beta)\in Z_{G}(\mathbb{Q}). \tag{6.1}\] **Lemma 6.1** (Bruhat decomposition).: _Let notation be as before and \(J\) given in (2.1). Then_ \[G=P\sqcup PJP, \tag{6.2}\] _and \(PJP=NJP=PJN.\) The expression of an element from the cell \(PJP\) as \(nJp\) or \(pJn\) with \(n\in N\) and \(p\in P\) is unique._ Proof.: Suppose \(g\notin P.\) So \(ge_{-1}\notin\langle e_{-1}\rangle.\) Suppose \(ge_{-1}=c_{-1}e_{-1}+c_{0}e_{0}.\) Then \((ge_{-1},ge_{-1})_{V}=c_{0}\overline{c_{0}}.\) On the other hand, \((ge_{-1},ge_{-1})_{V}=(e_{-1},e_{-1})_{V}=0.\) So we get a contradiction. Thus \(ge_{-1}\) must involve the line \(\langle e_{1}\rangle.\) One can write \(ge_{-1}=c_{-1}e_{-1}+c_{0}e_{0}+c_{1}e_{1}\) with \(c_{1}\neq 0.\) Then a straightforward computation using linear algebra shows that one can find some \(p\in P\) satisfying \(e_{1}=pge_{-1}.\) Hence from \((pge_{0},pge_{-1})_{V}=0\) we deduce that \((pge_{0},e_{1})_{V}=0,\) namely, \(pge_{0}\in\langle e_{1}\rangle^{\perp}=\langle e_{0},e_{1}\rangle.\) Therefore, \[pg=\left(\begin{array}{ccc}&&*\\ &*&*\\ *&*&*\end{array}\right)\in JP.\] Thus \(g\in PJP,\) proving (6.2). 
The remaining part of this lemma is similar. _Remark 6.1_.: Note that (6.2) is not a decomposition as algebraic groups, since there are \(6\) Bruhat cells required to cover \(G(E)=\operatorname{GL}(3,E).\) ### Representatives of \(G^{\prime}(\mathbb{Q})\backslash G(\mathbb{Q})/G^{\prime}(\mathbb{Q})\) To deal with the geometric side of the relative trace formula, we need to describe the double coset \(G^{\prime}(\mathbb{Q})\backslash G(\mathbb{Q})/G^{\prime}(\mathbb{Q}).\) Our main tool is the Bruhat decomposition (6.2). **Lemma 6.2**.: _Let \(c\in E\) be such that \(c+\overline{c}=-1.\) Then_ \[J\left(\begin{array}{ccc}1&1&c\\ &1&-1\\ &&1\end{array}\right)J=\left(\begin{array}{ccc}1&-\frac{1}{1+c}&-\frac{1}{1+\overline{c}}\\ &1&\frac{1}{1+\overline{c}}\\ &&1\end{array}\right)J\left(\begin{array}{ccc}c&1&1\\ &\frac{\overline{c}}{1+\overline{c}}&-\frac{1}{1+\overline{c}}\\ &&-\frac{1}{1+c}\end{array}\right). \tag{6.3}\] Proof.: By making the proof of Lemma 6.1 explicit we find \[\left(\begin{array}{ccc}1&\frac{1}{1+c}&-\frac{1}{1+c}\\ &1&-\frac{1}{1+\overline{c}}\\ &&1\end{array}\right)\left(\begin{array}{ccc}1&&\\ -1&1&\\ c&1&1\end{array}\right)=\left(\begin{array}{ccc}&&-\frac{1}{1+c}\\ &\frac{\overline{c}}{1+\overline{c}}&-\frac{1}{1+\overline{c}}\\ c&1&1\end{array}\right). \tag{6.4}\] Note also that the second matrix in (6.4) equals the left hand side of (6.3).
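The identity (6.4), and hence (6.3), can be confirmed by a short symbolic computation; a sketch, writing \(\overline{c}=-1-c\) and taking \(J\) to be the antidiagonal matrix (our reading of (2.1), which is not reproduced in this section):

```python
import sympy as sp

c = sp.symbols('c')
cb = -1 - c                     # stands for \bar{c}, using c + \bar{c} = -1

J = sp.Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
A = sp.Matrix([[1, 1/(1 + c), -1/(1 + c)], [0, 1, -1/(1 + cb)], [0, 0, 1]])
L = J * sp.Matrix([[1, 1, c], [0, 1, -1], [0, 0, 1]]) * J   # LHS of (6.3)
R = sp.Matrix([[0, 0, -1/(1 + c)], [0, cb/(1 + cb), -1/(1 + cb)], [c, 1, 1]])

zero = sp.zeros(3, 3)
# (6.4): A times the lower-triangular matrix (which equals L) gives R
assert (A * L - R).applyfunc(sp.cancel) == zero
# (6.3): L = A^{-1} J M with M the last matrix of the lemma, since J*M = R
M = sp.Matrix([[c, 1, 1], [0, cb/(1 + cb), -1/(1 + cb)], [0, 0, -1/(1 + c)]])
assert (J * M - R).applyfunc(sp.cancel) == zero
assert (L - A.inv() * J * M).applyfunc(sp.cancel) == zero
```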
Hence \[J\left(\begin{array}{ccc}1&1&c\\ &1&-1\\ &&1\end{array}\right)J=\left(\begin{array}{ccc}1&\frac{1}{1+c}&-\frac{1}{1+c}\\ &1&-\frac{1}{1+\overline{c}}\\ &&1\end{array}\right)^{-1}\left(\begin{array}{ccc}&&-\frac{1}{1+c}\\ &\frac{\overline{c}}{1+\overline{c}}&-\frac{1}{1+\overline{c}}\\ c&1&1\end{array}\right)=\left(\begin{array}{ccc}1&\frac{1}{1+c}&-\frac{1}{1+c}\\ &1&-\frac{1}{1+\overline{c}}\\ &&1\end{array}\right)^{-1}J\left(\begin{array}{ccc}c&1&1\\ &\frac{\overline{c}}{1+\overline{c}}&-\frac{1}{1+\overline{c}}\\ &&-\frac{1}{1+c}\end{array}\right).\] Then (6.3) follows from computing the inverse of the first matrix in the last line. For \(x\in E\) we set \[\gamma(x)=\left(\begin{array}{ccc}\frac{x\overline{x}+3\overline{x}-x+1}{4}&\frac{1+x}{2}&-\frac{1}{2}\\ \frac{(x+1)(\overline{x}-1)}{2}&x&-1\\ -\frac{(1-x)(1-\overline{x})}{2}&1-x&1\end{array}\right). \tag{6.5}\] **Lemma 6.3**.: _Let \(x_{1},x_{2}\in E\), \(\alpha\in E^{1}-\{1\}\) be such that \(x_{1}=\alpha x_{2}\). Then there exist \(g_{1},g_{2}\in G^{\prime}(\mathbb{Q})\) such that_ \[\gamma(x_{1})=\alpha g_{1}\gamma(x_{2})g_{2}. \tag{6.6}\] Proof.: A straightforward calculation shows that \[\gamma(x_{j})J=\left(\begin{array}{ccc}1&1&-\frac{1}{2}\\ &1&-1\\ &&1\end{array}\right)J\left(\begin{array}{ccc}1&1-x_{j}&-\frac{(1-x_{j})(1-\overline{x}_{j})}{2}\\ &1&\overline{x}_{j}-1\\ &&1\end{array}\right),\ \ j=1,2.
\tag{6.7}\] By Hilbert's Theorem 90, there exists \(\beta=a^{\prime}+b^{\prime}\sqrt{-D}\in E^{\times}\) such that \(\alpha=\beta\overline{\beta}^{-1},\) with \(a^{\prime},b^{\prime}\in\mathbb{Q}.\) Since \(\alpha\neq 1,\) then \(b^{\prime}\neq 0.\) Let \(v^{\prime}=a^{\prime}/(2b^{\prime})\in\mathbb{Q}.\) Then one has \[\alpha=\frac{a^{\prime}+b^{\prime}\sqrt{-D}}{a^{\prime}-b^{\prime}\sqrt{-D}}=-\frac{-1/2+v^{\prime}\sqrt{-D}}{-1/2-v^{\prime}\sqrt{-D}}.\] Let \(v=v^{\prime}\sqrt{-D}\in E.\) Then \(v+\overline{v}=0.\) Let \(c=-1/2+v.\) By (6.3) we have \[J\left(\begin{array}{ccc}1&1&c\\ &1&-1\\ &&1\end{array}\right)J=\left(\begin{array}{ccc}\overline{c}^{-1}&-c^{-1}&1\\ &-\overline{c}c^{-1}&-1\\ &&c\end{array}\right)J\left(\begin{array}{ccc}1&c^{-1}&c^{-1}\\ &1&-\overline{c}^{-1}\\ &&1\end{array}\right), \tag{6.8}\] since \(c+\overline{c}+1=0.\) Let \(u,d\in E\) be such that \(u+\overline{u}=d+\overline{d}=0.\) Let \(a,b\in E^{\times}.\) Appealing to the identities (6.7) and (6.8) we then obtain \[\left(\begin{array}{ccc}a&&au\\ &\alpha&\frac{a(1+uc)}{b}\\ &&\frac{a(1+uc)}{b}\\ &&-\frac{\alpha\overline{c}}{c}&-\frac{\alpha}{b}\\ &&\frac{\overline{c}}{ab}\end{array}\right)J\left(\begin{array}{ccc}1&\frac{1-x_{2}+c^{-1}}{b}&\frac{-(1-x_{2})(1-\overline{x}_{2})+\overline{x}_{2}c^{-1}}{b\overline{b}}+d\\ &1&\frac{\overline{x}_{2}-1-\overline{c}^{-1}}{b}\\ &&1\end{array}\right).\] To compare the right hand side of this equality with \(\gamma(x_{1})J,\) we consider \[\begin{cases}ab=\overline{c},\ a=-c,\ \alpha=-c\overline{c}^{-1}\\ a\overline{b}^{-1}+auc\overline{b}^{-1}=-1/2,\ \frac{1-x_{2}+c^{-1}}{b}=1-x_{1}\\ \frac{-(1-x_{2})(1-\overline{x}_{2})+\overline{x}_{2}c^{-1}}{b\overline{b}}+d=-\frac{(1-x_{1})(1-\overline{x}_{1})}{2}\\ d+\overline{d}=u+\overline{u}=0.\end{cases} \tag{6.9}\] Solving the system of equations (6.9) we obtain \[\begin{cases}a=-c,\ b=-\overline{c}c^{-1},\ \alpha=-c\overline{c}^{-1},\ u=-\frac{1/2+\overline{c}}{c\overline{c}},\ x_{1}=\alpha x_{2}\\
d=-\frac{(1-x_{1})(1-\overline{x}_{1})}{2}+\frac{(1-x_{2})(1-\overline{x}_{2})}{2}-\overline{x}_{2}c^{-1}=\frac{(\alpha-1)x_{2}+(\overline{\alpha}-1)\overline{x}_{2}}{2}-\overline{x}_{2}c^{-1}.\end{cases} \tag{6.10}\] Let \(a,b,c,d,u,v,\alpha\in E\) be as in (6.10). Then we have \[\gamma(x_{1})J=\left(\begin{array}{ccc}a&&au\\ &\alpha&\\ &&\overline{a}^{-1}\end{array}\right)J\left(\begin{array}{ccc}1&&v\\ &1&\\ &&1\end{array}\right)\gamma(x_{2})J\left(\begin{array}{ccc}b&&bd\\ &1&\\ &&\overline{b}^{-1}\end{array}\right). \tag{6.11}\] Then (6.6) follows from (6.11) by setting \[g_{1}=\left(\begin{array}{ccc}\alpha^{-1}a&&\alpha^{-1}au\\ &1&\\ &&\alpha^{-1}\overline{a}^{-1}\end{array}\right)J\left(\begin{array}{ccc}1&&v\\ &1&\\ &&1\end{array}\right),\ \ \ g_{2}=J\left(\begin{array}{ccc}b&&bd\\ &1&\\ &&\overline{b}^{-1}\end{array}\right)J.\] One verifies that \(g_{1},g_{2}\in G^{\prime}(\mathbb{Q}).\) Hence Lemma 6.3 follows. **Proposition 6.4**.: _For \(\gamma_{\beta}\) and \(\gamma(x)\) defined in (6.1) and (6.5), the set_ \[\Phi=\left\{\gamma_{\beta},\ \gamma(x),\ \beta\in E^{1},\ x\in E\right\}\] _forms a complete set of representatives for the double quotient \(G^{\prime}(\mathbb{Q})\backslash G(\mathbb{Q})/G^{\prime}(\mathbb{Q}).\)_ Proof.: Let \(g\in P(\mathbb{Q}).\) We can write \(g=\beta m(\alpha,1)n(b,z)\) for some \(\alpha\in E^{\times},\)\(\beta\in E^{1}\) and \(b,z\in E\). We then have 1. If \(b=0,\) then \(G^{\prime}(\mathbb{Q})gG^{\prime}(\mathbb{Q})=G^{\prime}(\mathbb{Q})\gamma_{\beta}G^{\prime}(\mathbb{Q}),\) because the semisimple part \(\operatorname{diag}(\alpha,1,\overline{\alpha}^{-1})\in G^{\prime}(\mathbb{Q}).\) 2.
Suppose \(b\neq 0.\) Note that for any \(z\) satisfying \(z+\overline{z}=0,\) one has \(n(0,z)\in G^{\prime}(\mathbb{Q}).\) Hence \[G^{\prime}(\mathbb{Q})gG^{\prime}(\mathbb{Q})=G^{\prime}(\mathbb{Q})\left(\begin{array}{ccc}1&1&\frac{z}{b\overline{b}}\\ &1&-1\\ &&1\end{array}\right)G^{\prime}(\mathbb{Q})=G^{\prime}(\mathbb{Q})\gamma(1)G^{\prime}(\mathbb{Q}).\] Let \(g\in P(\mathbb{Q})JP(\mathbb{Q}).\) We can write \(g=m(\alpha,\beta)n(b,z)Jn(b^{\prime},z^{\prime})\) for some \(\alpha\in E^{\times},\)\(\beta\in E^{1}\) and \(b,b^{\prime},z,z^{\prime}\in E\). We then have \[G^{\prime}(\mathbb{Q})gG^{\prime}(\mathbb{Q})=G^{\prime}(\mathbb{Q})\left(\begin{array}{ccc}\alpha&&\\ &\beta&\\ &&\overline{\alpha}^{-1}\end{array}\right)\left(\begin{array}{ccc}1&b&z\\ &1&-\overline{b}\\ &&1\end{array}\right)J\left(\begin{array}{ccc}1&b^{\prime}&z^{\prime}\\ &1&-\overline{b}^{\prime}\\ &&1\end{array}\right)G^{\prime}(\mathbb{Q})\] 3. Suppose \(b=0.\) Then \(z+\overline{z}=-b\overline{b}=0,\) implying that \(m(\alpha,1)n(b,z)\in G^{\prime}(\mathbb{Q}).\) Note that \(J\in G^{\prime}(\mathbb{Q}).\) Hence this situation will boil down to case (i) or (ii), namely, we obtain, under the hypothesis \(b=0,\) that \[G^{\prime}(\mathbb{Q})gG^{\prime}(\mathbb{Q})\subseteq G^{\prime}(\mathbb{Q})\gamma_{\beta}G^{\prime}(\mathbb{Q})\cup G^{\prime}(\mathbb{Q})\gamma_{\beta}\gamma(1)G^{\prime}(\mathbb{Q}).\] 4. Suppose \(b^{\prime}=0.\) Then similarly we obtain that \[G^{\prime}(\mathbb{Q})gG^{\prime}(\mathbb{Q})\subseteq G^{\prime}(\mathbb{Q})\gamma_{\beta}G^{\prime}(\mathbb{Q})\cup G^{\prime}(\mathbb{Q})\gamma_{\beta}\gamma(1)G^{\prime}(\mathbb{Q}).\] 5.
Suppose \(b\neq 0\) and \(b^{\prime}\neq 0.\) Let \(x=1-\overline{b}b^{\prime}\neq 1\) and \(\widetilde{z}=b\overline{b}z^{\prime}.\) Then \[G^{\prime}(\mathbb{Q})gG^{\prime}(\mathbb{Q})=G^{\prime}(\mathbb{Q})\gamma_{\beta}\left(\begin{array}{ccc}1&1&\frac{z}{b\overline{b}}\\ &1&-1\\ &&1\end{array}\right)J\left(\begin{array}{ccc}1&1-x&\widetilde{z}\\ &1&\overline{x}-1\\ &&1\end{array}\right)G^{\prime}(\mathbb{Q}).\] Using the fact that \(n(0,z)\in G^{\prime}(\mathbb{Q})\) when \(z+\overline{z}=0,\) we can further deduce \[G^{\prime}(\mathbb{Q})gG^{\prime}(\mathbb{Q})=G^{\prime}(\mathbb{Q})\gamma_{\beta}\left(\begin{array}{ccc}1&1&-1/2\\ &1&-1\\ &&1\end{array}\right)J\left(\begin{array}{ccc}1&1-x&-\frac{(1-x)(1-\overline{x})}{2}\\ &1&\overline{x}-1\\ &&1\end{array}\right)G^{\prime}(\mathbb{Q}).\] Then it follows from Lemma 6.1 and the identity \[\left(\begin{array}{ccc}1&1&-\frac{1}{2}\\ &1&-1\\ &&1\end{array}\right)J\left(\begin{array}{ccc}1&1-x&-\frac{(1-x)(1-\overline{x})}{2}\\ &1&\overline{x}-1\\ &&1\end{array}\right)=\left(\begin{array}{ccc}-\frac{1}{2}&\frac{1+x}{2}&\frac{x\overline{x}+3\overline{x}-x+1}{4}\\ -1&x&\frac{(x+1)(\overline{x}-1)}{2}\\ 1&1-x&-\frac{(1-x)(1-\overline{x})}{2}\end{array}\right)\] that \[G(\mathbb{Q})=\bigcup_{\gamma\in\Phi}Z_{G}(\mathbb{Q})G^{\prime}(\mathbb{Q})\gamma G^{\prime}(\mathbb{Q}), \tag{6.12}\] where \(Z_{G}\) is the center of \(G\). By Lemma 6.3 we can write (6.12) as \[G(\mathbb{Q})=\bigcup_{\gamma\in\Phi}G^{\prime}(\mathbb{Q})\gamma G^{\prime}(\mathbb{Q}). \tag{6.13}\] Now we show the union in (6.13) is actually disjoint. 
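The \(3\times 3\) identity displayed above can be checked symbolically. The short SymPy sketch below treats \(x\) and \(\overline{x}\) as independent commuting symbols and assumes that \(J\) is the antidiagonal matrix \(\operatorname{antidiag}(1,1,1)\) — an assumption on our part, but the one consistent with every entry of the product:

```python
import sympy as sp

x, xb = sp.symbols('x xbar')  # x and its conjugate, treated as independent symbols

M1 = sp.Matrix([[1, 1, sp.Rational(-1, 2)], [0, 1, -1], [0, 0, 1]])
J = sp.Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])  # assumed form of J
M2 = sp.Matrix([[1, 1 - x, -(1 - x)*(1 - xb)/2], [0, 1, xb - 1], [0, 0, 1]])

P = sp.expand(M1 * J * M2)

# the (2,2) entry of the product is x; this is what pins down gamma(x)
assert P[1, 1] == x
assert P[1, 0] == -1 and P[2, 0] == 1 and P[2, 1] == 1 - x
assert sp.simplify(P[0, 1] - (1 + x)/2) == 0
assert sp.simplify(P[1, 2] - (x + 1)*(xb - 1)/2) == 0
assert sp.simplify(P[2, 2] + (1 - x)*(1 - xb)/2) == 0
assert sp.simplify(P[0, 2] - (x*xb + 3*xb - x + 1)/4) == 0
```

In particular the \((2,2)\) entry of the product is \(x\), the fact used in the disjointness argument.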
Let \(\gamma(x_{1}),\gamma(x_{2})\in\Phi.\) Suppose \(\gamma(x_{i})\) (\(1\leq i\leq 2\)) are such that \[G^{\prime}(\mathbb{Q})\gamma(x_{1})G^{\prime}(\mathbb{Q})=G^{\prime}(\mathbb{Q})\gamma(x_{2})G^{\prime}(\mathbb{Q}).\] Combining the definition of \(\gamma(x_{1}),\gamma(x_{2})\) in (6.5) and the identity \[\left(\begin{array}{ccc}*&&*\\ &1&\\ *&&*\end{array}\right)\left(\begin{array}{ccc}*&*&*\\ *&x&*\\ *&*&*\end{array}\right)\left(\begin{array}{ccc}*&&*\\ &1&\\ *&&*\end{array}\right)=\left(\begin{array}{ccc}*&*&*\\ *&x&*\\ *&*&*\end{array}\right),\] we deduce (by comparing the \((2,2)\) entries) that \(x_{1}=x_{2}.\) By the Bruhat decomposition, the orbit \(G^{\prime}(\mathbb{Q})\gamma_{\beta}G^{\prime}(\mathbb{Q})\) does not intersect the orbit \(G^{\prime}(\mathbb{Q})\gamma(x)G^{\prime}(\mathbb{Q})\) for any \(\beta\in E^{1}\) and \(x\in E.\) In conclusion, (6.13) is a disjoint union. Given \(\gamma\in\Phi\), we denote by \(H_{\gamma}\subset G^{\prime}\times G^{\prime}\) the stabilizer of \(\gamma\), namely, \[H_{\gamma}=\big{\{}(u,v)\in G^{\prime}\times G^{\prime}:\ u^{-1}\gamma v=\gamma\big{\}}.\] Let \(\varphi^{\prime}\) be an automorphic form on \(G^{\prime}(\mathbb{Q})\backslash G^{\prime}(\mathbb{A})\). We define (at least formally) the orbital integral \[\mathcal{O}_{\gamma}(f^{\mathfrak{n}},\varphi^{\prime})=\int_{H_{\gamma}(\mathbb{Q})\backslash(G^{\prime}\times G^{\prime})(\mathbb{A})}f^{\mathfrak{n}}(u^{-1}\gamma v)\varphi^{\prime}(u)\overline{\varphi}^{\prime}(v)dudv. \tag{6.14}\] By Proposition 6.4 we can rewrite \(J(f^{\mathfrak{n}},\varphi^{\prime})\) (at least formally) as \[J(f^{\mathfrak{n}},\varphi^{\prime})=\sum_{\beta\in E^{1}}\mathcal{O}_{\gamma_{\beta}}(f^{\mathfrak{n}},\varphi^{\prime})+\sum_{x\in E}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime}). \tag{6.15}\] The analysis of these integrals of course depends heavily on the structure of the stabilizer \(H_{\gamma}\) and we give here a roadmap of what is to come. 
* For \(\beta\in E^{1}\), \(\gamma_{\beta}\in Z_{G}(\mathbb{Q})\) and the stabilizer \(H_{\gamma_{\beta}}\) is the diagonal subgroup \[H_{\gamma_{\beta}}=\Delta G^{\prime}\subset G^{\prime}\times G^{\prime}.\] * We will see in Sec. 8 that the stabilizer \(H_{\gamma(1)}\) (and more generally \(H_{\gamma(x)}\) for \(x\in E^{1}\)) is isomorphic to the unipotent radical of the Borel subgroup of \(G^{\prime}\). * Finally for \(x\in E-E^{1}\) we will see that \(H_{\gamma(x)}\) is a torus isomorphic to the unitary group \(U(1)\). Moreover we can use the support and invariance properties of \(f^{\mathfrak{n}}\) to infer further restrictions on the \(\gamma_{\beta}\) and \(\gamma(x)\) whose orbital integral is non-zero. Regarding the former, notice that for \(u,v\in G^{\prime}(\mathbb{A})\), \(u^{-1}\gamma_{\beta}v\) is of the form \(\begin{pmatrix}*&&*\\ &\beta&\\ *&&*\end{pmatrix}.\) Therefore, by definition of \(f^{\mathfrak{n}}\) one has \[f^{\mathfrak{n}}(u^{-1}\gamma_{\beta}v)=0\] unless \[\beta\in\mathcal{O}_{E}^{1}=E^{1}\cap\mathcal{O}_{E}.\] In this case, since \(f^{\mathfrak{n}}\) is invariant under \(Z_{G}(\mathcal{O}_{E}^{1})\), by Lemma 6.3 we have \[\mathcal{O}_{\gamma_{\beta}}(f^{\mathfrak{n}},\varphi^{\prime})=\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}},\varphi^{\prime}),\ \ \beta\in\mathcal{O}_{E}^{1},\] therefore \[\sum_{\beta\in E^{1}}\mathcal{O}_{\gamma_{\beta}}(f^{\mathfrak{n}},\varphi^{\prime})=w_{E}\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}},\varphi^{\prime})\] where \(w_{E}=\#\mathcal{O}_{E}^{1}\) is finite (as \(E\) is an imaginary quadratic field). Regarding the orbital integrals \(\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime}),\;x\in E^{1},\) we will see in §8 that these converge absolutely. Moreover, using Lemma 6.3 we will show, in Proposition 8.13 in Sec. 
8.6, that for \(x\in E^{1},\) one has \[\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})=\begin{cases}\mathcal{O}_{\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime}),\;\text{if}\;x\in\mathcal{O}_{E}^{1};\\ 0,\;\text{otherwise}.\end{cases} \tag{6.16}\] This implies that \[\sum_{x\in E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})=w_{E}\mathcal{O}_{\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime}).\] Finally, for the (more complicated) orbital integrals \(\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime}),\;x\in E-E^{1},\) we will see in §10 that these orbital integrals as well as their sum converge absolutely. From (6.16) we then obtain \[J(f^{\mathfrak{n}},\varphi^{\prime})=w_{E}\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}},\varphi^{\prime})+w_{E}\mathcal{O}_{\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime})+\sum_{x\in E-E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime}). \tag{6.17}\] In the sequel, we will call * \(\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}},\varphi^{\prime})\) the _identity orbital integral_; * \(\mathcal{O}_{\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime})\) and more generally the integrals \(\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime}),\;x\in E^{1},\) the _unipotent orbital integrals_; * \(\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime}),\;x\in E-E^{1},\) the _regular orbital integrals_ (as these \(\gamma(x)\)'s are regular). ## 7. 
**The Identity Orbital Integral** In this section, we deal with the identity orbital integral (associated to the identity element \(\gamma_{1}=\operatorname{Id}_{3}\)): \[\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}},\varphi^{\prime})=\int_{H_{\gamma_{1}}(\mathbb{Q})\backslash H(\mathbb{A})}f^{\mathfrak{n}}(x^{-1}y)\overline{\varphi}^{\prime}(x)\varphi^{\prime}(y)dxdy.\] Let \(\nu(f)\) be the set of primes \(p\) such that \[f_{p}=1_{G(\mathbb{Z}_{p})A_{p^{r_{p}}}G(\mathbb{Z}_{p})}\] for some integer \(r_{p}\geq 1.\) By construction, for \(p\in\nu(f)\), \(p\) is inert and \((p,NN^{\prime})=1;\) in particular \(\pi_{p}^{\prime}\) is unramified. The matrix \(A_{p^{r_{p}}}\) also belongs to \(G^{\prime}(\mathbb{Q}_{p})\) and convolution by the function \[1_{G^{\prime}(\mathbb{Z}_{p})A_{p^{r_{p}}}G^{\prime}(\mathbb{Z}_{p})}\] is a Hecke operator; the space of \(G^{\prime}(\mathbb{Z}_{p})\)-invariant vectors \(\pi_{p}^{\prime\;G^{\prime}(\mathbb{Z}_{p})}\) is an eigenspace with eigenvalue \[p^{r_{p}}\lambda_{\pi_{p}^{\prime}}(p^{r_{p}}).\] Moreover since \(\pi^{\prime}\) is tempered one has \(|\lambda_{\pi_{p}^{\prime}}(p^{r_{p}})|\leq 2\) (and for \(r_{p}=0\), \(\lambda_{\pi_{p}^{\prime}}(1)=1\)). **Proposition 7.1**.: _Let notation be as before. Then \(\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}},\varphi^{\prime})\) is equal to_ \[\frac{\langle\varphi^{\prime},\varphi^{\prime}\rangle}{d_{k}}\frac{N^{2}}{N^{\prime 2}}\Psi(N)\mathfrak{S}(N^{\prime})\prod_{p\in\nu(f)}p^{r_{p}}\lambda_{\pi^{\prime}}(p^{r_{p}}). 
\tag{7.1}\] _where_ \[d_{k}=k-1,\;\Psi(N)=\prod_{p|N}\left(1-\frac{1}{p}+\frac{1}{p^{2}}\right),\; \mathfrak{S}(N^{\prime})=\prod_{p|N^{\prime}}\frac{1}{1-p^{-2}}.\] Proof.: Note that \(H_{\gamma_{1}}=\Delta G^{\prime},\) the diagonal embedding of \(G^{\prime}\) into \(G^{\prime}\times G^{\prime}.\) Hence we can change variables to obtain \[\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}})= \int_{\Delta G^{\prime}(\mathbb{A})\setminus G^{\prime}(\mathbb{A })\times G^{\prime}(\mathbb{A})}f(\widetilde{\mathfrak{n}}^{-1}x^{-1}y \widetilde{\mathfrak{n}})\int_{G^{\prime}(\mathbb{Q})\setminus G^{\prime}( \mathbb{A})}\overline{\varphi}^{\prime}(hx)\varphi^{\prime}(hy)dhdxdy.\] We can write it as the Petersson inner product of cusp forms: \[\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}})=\int_{G^{\prime}(\mathbb{Q}) \setminus G^{\prime}(\mathbb{A})}\overline{\varphi}^{\prime}(x)\pi^{\prime} (f^{\mathfrak{n}})\varphi^{\prime}(x)dx=\langle\pi^{\prime}(f^{\mathfrak{n}} )\varphi^{\prime},\varphi^{\prime}\rangle. \tag{7.2}\] Since \(\varphi^{\prime}\) is rapidly decreasing, each integral above is absolutely convergent. 
Write \[(\pi^{\prime},V)=(\pi^{\prime}_{\infty},V_{\infty})\otimes(\pi^{\prime}_{\rm fin},V_{\rm fin});\] we have \(\varphi^{\prime}\simeq\xi^{\prime}_{\infty}\otimes\xi^{\prime}_{\rm fin}\in V_{\infty}\otimes V_{\rm fin}\) for some local new vector \(\xi^{\prime}_{\rm fin}\in V_{\rm fin}.\) Let us also recall that through the isomorphism \(\iota\) (see (4.35)) we have \[\pi^{\prime}\simeq\pi^{+}_{k},\] the cuspidal representation of \(\mathrm{SL}(2,\mathbb{A})\) of level \(1\) whose archimedean component is the holomorphic discrete series of weight \(k\), and we set \(\xi^{\prime}_{\infty}=v^{\circ}_{k}\circ\iota.\) Writing \(f=f_{\infty}\times f_{\rm fin}\) we have \[\pi^{\prime}(f^{\mathfrak{n}})(\xi^{\prime}_{\infty}\otimes\xi^{\prime}_{\rm fin})= \int_{G^{\prime}(\mathbb{R})}\int_{G^{\prime}(\mathbb{A}_{\rm fin})}f_{\infty}(y_{\infty})\pi^{\prime}_{\infty}(y_{\infty})\xi^{\prime}_{\infty}\otimes f^{\mathfrak{n}}_{\rm fin}(y_{\rm fin})\pi^{\prime}_{f}(y_{\rm fin})\xi^{\prime}_{\rm fin}dy,\] where \(dy=dy_{\infty}dy_{\rm fin}\) with \(y_{\rm fin}=\otimes_{p<\infty}y_{p}\in G^{\prime}(\mathbb{A}_{\rm fin})\) and we have \[\pi^{\prime}(f^{\mathfrak{n}})\varphi^{\prime}=\pi^{\prime}(f^{\mathfrak{n}})(\xi^{\prime}_{\infty}\otimes\xi^{\prime}_{\rm fin})=\pi^{\prime}_{\infty}(f_{\infty})\xi^{\prime}_{\infty}\bigotimes_{p<\infty}\pi^{\prime}_{p}(f^{\mathfrak{n}}_{p})\xi^{\prime}_{p}.\] We recall that \(\xi^{\prime}_{\rm fin}\) is unique up to scalar. Specifically, at \(p\nmid N^{\prime},\) \(\xi^{\prime}_{p}\) is spherical; and at \(p\mid N^{\prime},\) \(\xi^{\prime}_{p}\) is a nonzero Iwahori-fixed vector. 
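For later reference, the local computations below produce the factor \(p^{2}-p+1\) at \(p\mid N\) and the factor \(1/(p^{2}-1)\) at \(p\mid N^{\prime}\) (see (7.4) and (7.5)); that these assemble into \(N^{2}\Psi(N)\) and \(N^{\prime-2}\mathfrak{S}(N^{\prime})\) in (7.6) rests on two elementary identities, checked symbolically here (SymPy sketch):

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# factor at p | N from (7.4): p^2 - p + 1 = p^2 * (1 - 1/p + 1/p^2), a local factor of Psi(N)
assert sp.simplify((p**2 - p + 1) - p**2*(1 - 1/p + 1/p**2)) == 0

# factor at p | N' from (7.5): 1/(p^2 - 1) = p^{-2} * 1/(1 - p^{-2}), a local factor of S(N')
assert sp.simplify(1/(p**2 - 1) - p**(-2)/(1 - p**(-2))) == 0
```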
Note that by (4.22) we can regard \(f_{\infty}\) as the matrix coefficient \(F_{-k_{1}}.\) Since \(SU(1,1;\mathbb{R})\) is unimodular and the central characters in the above representations are trivial, we can apply Schur orthogonality relations to conclude that \[\pi^{\prime}_{\infty}(f_{\infty})\xi^{\prime}_{\infty}=0\] if \(k\neq-k_{1};\) and when \(k=-k_{1},\) we have \[\pi^{\prime}_{\infty}(f_{\infty})\xi^{\prime}_{\infty}=\frac{1}{k-1}\xi^{ \prime}_{\infty}. \tag{7.3}\] 1. Suppose \(p\mid N.\) Then \(\xi^{\prime}_{p}\) is \(I^{\prime}_{p}\)-fixed, \(f^{\mathfrak{n}_{p}}_{p}=f_{p}\) and \(G^{\prime}(\mathbb{Q}_{p})\cap\operatorname{supp}f_{p}=I^{\prime}_{p}.\) Therefore, we have (7.4) \[\pi^{\prime}_{p}(f^{\mathfrak{n}_{p}}_{p})\xi^{\prime}_{p} =\pi^{\prime}_{p}(f_{p})\xi^{\prime}_{p}=\int_{G^{\prime}( \mathbb{Q}_{p})}f_{p}(y_{p})\pi^{\prime}_{p}(y_{p})\xi^{\prime}_{p}dy_{p}\] \[=\frac{\mu(I^{\prime}_{p})}{\mu(K_{p}(N))}\cdot\xi^{\prime}_{p}=( p^{2}-p+1)\xi^{\prime}_{p}\] by Lemma A.4 and Lemma A.1 (see the Appendix). 2. 
Suppose \(p\mid N^{\prime}.\) In that case \(f^{\mathfrak{n}_{p}}_{p}\neq f_{p}\) and we then write \(y_{p}=\begin{pmatrix}a&&b\\ &1&\\ c&&d\end{pmatrix}.\) By definition, the non-vanishing \[f_{p}(\widetilde{\mathfrak{n}}_{p}^{-1}y_{p}\widetilde{\mathfrak{n}}_{p})\neq 0\] amounts to \[\mathfrak{n}^{-1}y\mathfrak{n}=\mathfrak{n}_{p}^{-1}z_{p}\begin{pmatrix}a&b&\\ c&d&\\ &&1\end{pmatrix}\mathfrak{n}_{p}=\begin{pmatrix}a&b&(a-1)p^{-1}\\ c&d&cp^{-1}\\ &&1\end{pmatrix}\in K_{p},\] which is equivalent to \[y_{p}=\begin{pmatrix}a&&b\\ &1&\\ c&&d\end{pmatrix}\in I_{p}^{\prime}(1)=\Big{\{}\begin{pmatrix}g_{11}&&g_{12}\\ &1&\\ g_{21}&&g_{22}\end{pmatrix}\in I_{p}^{\prime}:\ g_{11}\in 1+p\mathbb{Z}_{p}\Big{\}}.\] Therefore, we have by (5.79) that (7.5) \[\pi_{p}^{\prime}(f_{p}^{\mathfrak{n}_{p}})\xi_{p}^{\prime}=\frac{1}{\mu(K_{p})}\int_{I_{p}^{\prime}(1)}\pi^{\prime}(y_{p})\xi_{p}^{\prime}dy_{p}=\frac{\mu(I_{p}^{\prime}(1))}{\mu(K_{p})}\xi_{p}^{\prime}=\frac{1}{p^{2}-1}\xi_{p}^{\prime}.\] 3. Suppose \(p\) is inert and \(p\nmid NN^{\prime}\), and \[f_{p}^{\mathfrak{n}_{p}}=f_{p}=1_{G(\mathbb{Z}_{p})A_{p^{r_{p}}}G(\mathbb{Z}_{p})}\] for some \(r_{p}\geq 0.\) Since \(\xi_{p}^{\prime}\) is spherical and \[G(\mathbb{Z}_{p})A_{p^{r_{p}}}G(\mathbb{Z}_{p})\cap G^{\prime}(\mathbb{Q}_{p})=G^{\prime}(\mathbb{Z}_{p})A_{p^{r_{p}}}G^{\prime}(\mathbb{Z}_{p}),\] we have \[\pi_{p}^{\prime}(f_{p}^{\mathfrak{n}_{p}})\xi_{p}^{\prime}=p^{r_{p}}\lambda_{\pi^{\prime}}(p^{r_{p}})\xi_{p}^{\prime}.\] Combining (7.3) with (7.4) and (7.5) we obtain (7.6) \[\pi^{\prime}(f)\varphi^{\prime}=\frac{1}{k-1}\frac{N^{2}}{{N^{\prime}}^{2}}\Psi(N)\mathfrak{S}(N^{\prime})\prod_{p\in\nu(f)}p^{r_{p}}\lambda_{\pi^{\prime}}(p^{r_{p}})\cdot\varphi^{\prime}.\] Substituting (7.6) into (7.2), (7.1) follows. ## 8. 
**The Unipotent Orbital Integrals** In this section, we will deal with the orbital integral with respect to \(\gamma(x)\) when \(x\in E^{1}.\) We will start with the special case that \(x=1;\) as we will see from Proposition 8.13, the general case reduces to this special case. Recall that (see (6.5)) \[\gamma(1)=\left(\begin{array}{ccc}1&1&-1/2\\ &1&-1\\ &&1\end{array}\right).\] Let \(H_{\gamma(1)}\) be the stabilizer of \(\gamma(1)\) and define (at least formally) the following integral \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})=\int_{H_{\gamma(1)}(\mathbb{Q})\setminus H(\mathbb{A})}f(x^{-1}\gamma(1)y)\overline{\varphi}^{\prime}(x)\varphi^{\prime}(y)dxdy,\] where \(f\) is the function denoted \(f^{\mathfrak{n}}\) in (4.31). We will show below that \(\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})\) converges absolutely so that this integral is well defined. ### Factorization of the Unipotent Orbital Integral The orbital integral \(\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})\) is not factorable into a product of local components over \(p\leq\infty\); instead, we will apply a Fourier expansion to it, which provides an infinite sum of factorable integrals over \(\mathbb{A}\), from which a sharp upper bound will be deduced. 
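Since \(\gamma(1)\) is completely explicit, the stabilizer computation in Lemma 8.1 below can be reproduced mechanically: solving \(h\gamma(1)=\gamma(1)h^{\prime}\) over matrices of the shape appearing in the proof forces \(a=d=a^{\prime}=d^{\prime}=1\), \(c=c^{\prime}=0\) and \(b=b^{\prime}\), with \(b\) free. A SymPy sketch (variable names are ours):

```python
import sympy as sp

a, b, c, d, ap, bp, cp, dp = sp.symbols('a b c d ap bp cp dp')
g1 = sp.Matrix([[1, 1, sp.Rational(-1, 2)], [0, 1, -1], [0, 0, 1]])  # gamma(1)

L = sp.Matrix([[a, 0, b], [0, 1, 0], [c, 0, d]]) * g1
R = g1 * sp.Matrix([[ap, 0, bp], [0, 1, 0], [cp, 0, dp]])

# solve the 9 entrywise equations, leaving bp as the free parameter
sol = sp.solve(list(L - R), [a, b, c, d, ap, cp, dp], dict=True)
assert sol == [{a: 1, b: bp, c: 0, d: 1, ap: 1, cp: 0, dp: 1}]
```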
We start with the following explicit expression: **Lemma 8.1**.: _Let notation be as before and let \(N^{\prime}\) be the unipotent radical of the standard parabolic subgroup of \(G^{\prime}=U(W),\) i.e., for any \(\mathbb{Q}\)-algebra \(R\),_ Footnote 3: Hopefully this will not create a confusion with the conductor of \(\pi^{\prime}\). \[N^{\prime}(R)=\Bigg{\{}n(0,b):=\left(\begin{array}{ccc}1&&b\\ &1&\\ &&1\end{array}\right)\in\mathrm{GL}(3,E\otimes_{\mathbb{Q}}R):\ b+\overline{b}=0\Bigg{\}}.\] _Then_ \[H_{\gamma(1)}=\Delta N^{\prime}\subset G^{\prime}\times G^{\prime}\] _and_ \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})=\int_{N^{\prime}(\mathbb{A})\backslash G^{\prime}(\mathbb{A})}\int_{G^{\prime}(\mathbb{A})}f\left(x^{-1}\gamma(1)y\right)\int_{[N^{\prime}]}\overline{\varphi}^{\prime}(vx)\varphi^{\prime}(vy)dvdxdy. \tag{8.1}\] Proof.: To compute the stabilizer \(H_{\gamma(1)},\) we consider the equation \[\left(\begin{array}{ccc}a&&b\\ &1&\\ c&&d\end{array}\right)\left(\begin{array}{ccc}1&1&-1/2\\ &1&-1\\ &&1\end{array}\right)=\left(\begin{array}{ccc}1&1&-1/2\\ &1&-1\\ &&1\end{array}\right)\left(\begin{array}{ccc}a^{\prime}&&b^{\prime}\\ &1&\\ c^{\prime}&&d^{\prime}\end{array}\right).\] The solution is \(a=d=a^{\prime}=d^{\prime}=1,\) \(c=c^{\prime}=0\) and \(b=b^{\prime}.\) In other terms, \(H_{\gamma(1)}=\Delta N^{\prime},\) the image of the diagonal embedding \[\Delta:N^{\prime}\hookrightarrow N^{\prime}\times N^{\prime}.\] We have \[\mathcal{O}_{\gamma(1)}(f)=\int_{\Delta N^{\prime}(\mathbb{Q})\backslash H(\mathbb{A})}f(x^{-1}\gamma(1)y)\overline{\varphi}^{\prime}(x)\varphi^{\prime}(y)dxdy. 
\tag{8.2}\] Let \[H_{1}=N^{\prime}(\mathbb{A})^{2}\backslash G^{\prime}(\mathbb{A})^{2},\ H_{2}=\Delta N^{\prime}(\mathbb{Q})\backslash N^{\prime}(\mathbb{A})^{2},\ H_{3}=N^{\prime}(\mathbb{A})\backslash G^{\prime}(\mathbb{A}).\] Then the right hand side is equal to \[\int_{H_{1}}\int_{H_{2}}f(x^{-1}n(0,\overline{b})\gamma(1)n(0,b^{\prime})y)\overline{\varphi}^{\prime}(n(0,b)x)\varphi^{\prime}(n(0,b^{\prime})y)dbdb^{\prime}dxdy\] \[= \int_{H_{1}}\int_{H_{2}}f\left(x^{-1}\left(\begin{array}{ccc}1&1&-\frac{1}{2}+b^{\prime}-b\\ &1&-1\\ &&1\end{array}\right)y\right)\overline{\varphi}^{\prime}(n(0,b)x)\varphi^{\prime}(n(0,b^{\prime})y)dbdb^{\prime}dxdy\] \[= \int_{H_{3}}\int_{H_{3}}\int_{H_{2}}f\left(x^{-1}\left(\begin{array}{ccc}1&1&-\frac{1}{2}+b^{\prime}\\ &1&-1\\ &&1\end{array}\right)y\right)\overline{\varphi}^{\prime}(n(0,b)x)\varphi^{\prime}(n(0,b+b^{\prime})y)dbdb^{\prime}dxdy\] \[= \int_{H_{3}}\int_{[N^{\prime}]}\int_{G^{\prime}(\mathbb{A})}f\left(x^{-1}\left(\begin{array}{ccc}1&1&-\frac{1}{2}\\ &1&-1\\ &&1\end{array}\right)y\right)\overline{\varphi}^{\prime}(n(0,b)x)\varphi^{\prime}(n(0,b)y)dbdxdy.\] Therefore, we obtain \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})=\int_{N^{\prime}(\mathbb{A})\backslash G^{\prime}(\mathbb{A})}\int_{G^{\prime}(\mathbb{A})}f\left(x^{-1}\gamma(1)y\right)\int_{[N^{\prime}]}\overline{\varphi}^{\prime}(vx)\varphi^{\prime}(vy)dvdxdy,\] which proves Lemma 8.1. Let us recall (see §4.5) that the automorphic forms \(\varphi^{\prime}\) on \(G^{\prime}(\mathbb{A})\) correspond to a \(\mathrm{GL}_{2}(\mathbb{A})\)-cusp form \(\varphi_{1}\). The latter admits a Fourier expansion which translates to a corresponding expansion for \(\varphi^{\prime}\). We spell this out below. Let \[\theta=\theta_{\infty}.\theta_{f}=\theta_{\infty}\prod_{p}\theta_{p}\] be the usual unramified additive character of \(\mathbb{A}/\mathbb{Q}\): i.e. 
\(\theta_{\infty}(x):=e^{-2\pi ix}\), and for \(p\) a prime, \(\theta_{p}(x)=e^{2\pi ir_{p}(x)}\), where \(r_{p}(x)\) is the principal part of \(x\in\mathbb{Q}_{p}.\) For \(n\in\mathbb{A}\) and \(x\in\mathbb{A}\), we define \[\theta_{n}(x):=\theta(nx).\] The additive character \(\theta_{n}\) defines a character on \(N^{\prime}(\mathbb{Q})\backslash N^{\prime}(\mathbb{A})\) by setting \[\psi_{n}(u):=\theta_{n}(x)\] for \[u=u(x)=\begin{pmatrix}1&0&x\Delta\\ 0&1&0\\ 0&0&1\end{pmatrix}\in N^{\prime}(\mathbb{A}). \tag{8.3}\] We also set for \(R\) a commutative \(\mathbb{Q}\)-algebra and \(v,w\in R^{\times}\) \[a(v,w):=\begin{pmatrix}v&&\\ &1&\\ &&w\end{pmatrix},\ a(v):=a(v,1)=\begin{pmatrix}v&&\\ &1&\\ &&1\end{pmatrix}.\] We denote by \[W_{n}(g;\varphi^{\prime}):=\int_{N^{\prime}(\mathbb{Q})\backslash N^{\prime}(\mathbb{A})}\varphi^{\prime}(ug)\overline{\psi_{n}(u)}du\] the \(n\)-Whittaker function of \(\varphi^{\prime}\). By \(G^{\prime}(\mathbb{Q})\)-invariance of \(\varphi^{\prime}\) and the identity \[\iota^{-1}(u(x))=\begin{pmatrix}1&-x\\ 0&1\end{pmatrix},\] for \(\iota:\mathrm{SL}_{2}\simeq SU(W)\) the exceptional isomorphism discussed in §4.5.2, we have \[W_{n}(g;\varphi^{\prime})=W_{1}(a(n)g;\varphi_{1}),\] the Whittaker function associated to \(\varphi_{1}\) relative to the character \(\theta\), and the Fourier expansion \[\varphi^{\prime}\left(ug\right)=\sum_{n\in\mathbb{Q}^{\times}}W_{n}(g;\varphi^{\prime})\psi_{n}(u). 
\tag{8.4}\] Substituting (8.4) into the expression (8.1) of \(\mathcal{O}_{\gamma(1)}(f)\) we then get \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})= \int_{N^{\prime}(\mathbb{A})\backslash G^{\prime}(\mathbb{A})}\int_{G^{\prime}(\mathbb{A})}f\left(x^{-1}\gamma(1)y\right)\sum_{n\in\mathbb{Q}^{\times}}\overline{W_{n}}(x;\varphi^{\prime})W_{n}(y;\varphi^{\prime})dxdy\] \[= \sum_{n\in\mathbb{Q}^{\times}}\int_{N^{\prime}(\mathbb{A})\backslash G^{\prime}(\mathbb{A})}\int_{G^{\prime}(\mathbb{A})}f\left(x^{-1}\gamma(1)y\right)\overline{W_{n}}(x;\varphi^{\prime})W_{n}(y;\varphi^{\prime})dxdy.\] Since \(\phi^{\prime}\) has been chosen to be a primitive cusp form, \(\varphi^{\prime}\) is decomposable. Let \[\varphi^{\prime}\simeq\otimes_{v}\xi^{\prime}_{v}\in\otimes^{\prime}_{v}V_{\pi^{\prime}_{v}}.\] Decomposing \(\psi_{n}\) into local characters \[\psi_{n}=\prod_{v}\psi_{n,v}\] yields a decomposition of the corresponding Whittaker function (\(g=(g_{v})_{v}\in G^{\prime}(\mathbb{A})\)) \[W_{n}(g;\varphi^{\prime})=\prod_{v}W_{n,v}(g_{v};\xi^{\prime}_{v})=\prod_{v}W_{1,v}(a(n)g_{v};\xi^{\prime}_{v}), \tag{8.5}\] where the \(W_{n,v}(g_{v};\xi^{\prime}_{v})=W_{1,v}(a(n)g_{v};\xi^{\prime}_{v})\) are the local Whittaker functions which satisfy \[W_{n,v}(u_{v}g_{v};\xi^{\prime}_{v})=\psi_{n,v}(u_{v})W_{n,v}(g_{v};\xi^{\prime}_{v})\] for all \(u_{v}\in N^{\prime}(\mathbb{Q}_{v}),\) \(g_{v}\in G^{\prime}(\mathbb{Q}_{v}).\) Also to simplify notation we write in the rest of this section \[W_{n,v}(g_{v}):=W_{n,v}(g_{v};\xi_{v}^{\prime})=W_{1,v}(a(n)g_{v};\xi_{v}^{\prime}). \tag{8.6}\] For \(p\) a prime, let \(\mathbf{1}_{p}\) be the identity element in \(G^{\prime}(\mathbb{Q}_{p})\) and \(\mathbf{1}_{f}\) the identity element of \(G^{\prime}(\mathbb{A}_{\mathrm{fin}})\). 
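As a brief aside on the character \(\theta\) used throughout: its triviality on \(\mathbb{Q}\) (which makes \(\theta\) a character of \(\mathbb{A}/\mathbb{Q}\)) comes down to the fact that the \(p\)-adic principal parts of a rational number sum to it modulo \(\mathbb{Z}\), since \(\theta_{\infty}(x)=e^{-2\pi ix}\) while \(\theta_{p}(x)=e^{2\pi ir_{p}(x)}\). A small Python sketch (the helper `principal_part` is our own illustration, not notation from the text):

```python
from fractions import Fraction
from sympy import factorint, mod_inverse

def principal_part(x: Fraction, p: int) -> Fraction:
    """r_p(x): a rational in [0,1) with x - r_p(x) a p-adic integer."""
    e = factorint(x.denominator).get(p, 0)
    if e == 0:
        return Fraction(0)
    pe = p**e
    m = x.denominator // pe               # prime-to-p part of the denominator
    return Fraction((x.numerator * mod_inverse(m, pe)) % pe, pe)

x = Fraction(7, 360)                      # a sample rational
total = sum(principal_part(x, p) for p in factorint(x.denominator))
# theta(x) = theta_oo(x) * prod_p theta_p(x) = e^{2 pi i (sum_p r_p(x) - x)},
# and the exponent is an integer, so theta(x) = 1:
assert (total - x).denominator == 1
```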
For all primes \(p\), we choose \(\xi_{p}^{\prime}\) to be the local new vector normalized so that \[W_{1,p}(\mathbf{1}_{p};\xi_{p}^{\prime})=1.\] We use this normalization of the \(\xi_{p}^{\prime}\) henceforth. Using this decomposition of Whittaker functions we can write \(\mathcal{O}_{\gamma(1)}(f)\) as a sum of products of local orbital integrals. **Lemma 8.2**.: _Let notation be as before. Then_ \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})=\sum_{n\in\mathbb{Q}^{\times}}\mathcal{O}(f;n)=\sum_{n\in\mathbb{Q}^{\times}}\prod_{v}\mathcal{O}_{v}(f;n), \tag{8.7}\] _where_ \[\mathcal{O}(f;n)=\prod_{v}\mathcal{O}_{v}(f;n)\] _and \(v\) runs through all the places of \(\mathbb{Q}\) and_ \[\mathcal{O}_{v}(f;n):=\int_{N^{\prime}(\mathbb{Q}_{v})\setminus G^{\prime}(\mathbb{Q}_{v})}\int_{G^{\prime}(\mathbb{Q}_{v})}\overline{W_{n,v}}(x_{v};\xi_{v}^{\prime})W_{n,v}(y_{v};\xi_{v}^{\prime})f_{v}\left(x_{v}^{-1}\gamma(1)y_{v}\right)dx_{v}dy_{v}. \tag{8.8}\] Proof.: We have \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})=\sum_{n\in\mathbb{Q}^{\times}}\int_{N^{\prime}(\mathbb{A})\setminus G^{\prime}(\mathbb{A})}\int_{G^{\prime}(\mathbb{A})}f\left(x^{-1}\gamma(1)y\right)\overline{W_{n}}(x;\varphi^{\prime})W_{n}(y;\varphi^{\prime})dxdy.\] Then (8.7) follows from the factorization of Whittaker functions. _Remark 8.1_.: As we will see below, \(n\) is in fact a non-zero integer. ### Computation of \(\mathcal{O}_{\infty}(f;n)\) In this section, we compute the local orbital integral \(\mathcal{O}_{\infty}(f;n)\). For this we compute explicitly the archimedean Whittaker functions \(W_{n,\infty}(g_{\infty})\) and \(f_{\infty}(x_{\infty}^{-1}\gamma(1)y_{\infty})\). Identifying \(g_{\infty}\in G^{\prime}(\mathbb{R})\) with \(g_{\infty}.\mathbf{1}_{f}\in G^{\prime}(\mathbb{A})\) we have \[W_{n}(g_{\infty};\varphi^{\prime})=W_{n,\infty}(g_{\infty})\prod_{p}W_{n,p}(\mathbf{1}_{p}). \tag{8.9}\] #### 8.2.1. 
Whittaker Functions Let \(g_{\infty}\in G^{\prime}(\mathbb{R}).\) Write \(g_{\infty}\) in its Iwasawa form: \[g_{\infty}=n_{\infty}a_{\infty}k_{\infty}=\left(\begin{array}{ccc}1&&-\Delta t\\ &1&\\ &&1\end{array}\right)\left(\begin{array}{ccc}a&&\\ &1&\\ &&\overline{a}^{-1}\end{array}\right)\left(\begin{array}{ccc}\cos\alpha&&\Delta\sin\alpha\\ &1&\\ -\Delta^{-1}\sin\alpha&&\cos\alpha\end{array}\right),\] where \(\alpha\in[-\pi,\pi);\) \(a\in\mathbb{C}^{\times}\) and \(t\in\mathbb{R}\) and \[\Delta=i\sqrt{|D_{E}|}\in i\mathbb{R}_{>0}.\] We also write \[a_{\infty}=z_{\infty}.a_{\infty}^{1}=\left(\begin{array}{ccc}(a/\overline{a})^{1/2}&&\\ &1&\\ &&(a/\overline{a})^{1/2}\end{array}\right)\left(\begin{array}{ccc}(a\overline{a})^{1/2}&&\\ &1&\\ &&(a\overline{a})^{-1/2}\end{array}\right)\] where \(z_{\infty}\) is in the center \(Z_{G^{\prime}}(\mathbb{R})\) and \(a_{\infty}^{1}\) has determinant \(1\). **Lemma 8.3**.: _Let notation be as before. Then_ \[W_{n,\infty}(g_{\infty})\cdot\prod_{p<\infty}W_{n,p}(\textbf{1}_{p})=e^{-ik\alpha}e^{-2\pi na\overline{a}+2\pi nit}\cdot(a\overline{a})^{k/2}\,a_{n}. \tag{8.10}\] _where \(a_{n}\) denotes the \(n\)-th Fourier coefficient of the classical form \(\phi^{\prime}\) (cf. 
(4.39))._ Proof.: By (4.36), we have (since \(\varphi^{\prime}\) is invariant by \(Z_{G^{\prime}}(\mathbb{R})\)) \[W_{n}(g_{\infty};\varphi^{\prime})= \psi_{n,\infty}(n_{\infty})\int_{N^{\prime}(\mathbb{Z})\setminus N ^{\prime}(\mathbb{R})}\,\varphi^{\prime}(u_{\infty}a_{\infty}^{1}k_{\infty}) \overline{\psi_{n,\infty}(u_{\infty})}du_{\infty}\] \[= \psi_{n,\infty}(n_{\infty})\int_{N^{\prime}(\mathbb{Z})\setminus N ^{\prime}(\mathbb{R})}\frac{\phi^{\prime}(\iota^{-1}(u_{\infty}a_{\infty}^{1} k_{\infty}).i)}{j(\iota^{-1}(u_{\infty}a_{\infty}^{1}k_{\infty}),i)^{k}}\overline{ \psi_{n,\infty}(u_{\infty})}du_{\infty}\] \[= \frac{\psi_{n,\infty}(n_{\infty})}{j(\iota^{-1}(a_{\infty}^{1}), i)^{k}j(\iota^{-1}(k_{\infty}),i)^{k}}\int_{N^{\prime}(\mathbb{Z})\setminus N ^{\prime}(\mathbb{R})}\phi^{\prime}(\iota^{-1}(u_{\infty}a_{\infty}^{1}).i) \overline{\psi_{n,\infty}(u_{\infty})}du_{\infty}\] \[= \frac{\psi_{n,\infty}(n_{\infty})}{j(\iota^{-1}(a_{\infty}^{1}), i)^{k}j(\iota^{-1}(k_{\infty}),i)^{k}}\int_{0}^{1}\phi^{\prime}(a\overline{a}i+t)e^{ -2\pi nit}dt\] \[= \frac{\psi_{n,\infty}(n_{\infty})}{(a\overline{a})^{-k/2}e^{ik \alpha}}\int_{0}^{1}\phi^{\prime}(a\overline{a}i+t)e^{-2\pi nit}dt.\] Then (8.10) follows from (8.9), (4.39) and the equality \(\psi_{n,\infty}(n_{\infty})=e^{2\pi nit}\). #### 8.2.2. 
The archimedean test function Let \(x_{\infty},y_{\infty}\in G^{\prime}(\mathbb{R})\) be written in their Iwasawa forms: \[x_{\infty}= \left(\begin{array}{ccc}e^{i\alpha_{1}}&&\\ &1&\\ &&e^{i\alpha_{1}}\end{array}\right)\left(\begin{array}{ccc}a_{1}&&\\ &1&\\ &&a_{1}^{-1}\end{array}\right)\left(\begin{array}{ccc}\cos\alpha&&\Delta\sin\alpha\\ &1&\\ -\Delta^{-1}\sin\alpha&&\cos\alpha\end{array}\right);\] \[y_{\infty}= \left(\begin{array}{ccc}e^{i\alpha_{2}}&&\\ &1&\\ &&e^{i\alpha_{2}}\end{array}\right)\left(\begin{array}{ccc}a_{2}&&-a_{2}^{-1}\Delta t\\ &1&\\ &&a_{2}^{-1}\end{array}\right)\left(\begin{array}{ccc}\cos\beta&&\Delta\sin\beta\\ &1&\\ -\Delta^{-1}\sin\beta&&\cos\beta\end{array}\right),\] where \(\alpha_{1},\alpha_{2},\alpha,\beta\in[-\pi,\pi)\); \(a_{1},a_{2}\in\mathbb{R}^{\times}\) and \(t\in\mathbb{R}\). **Lemma 8.4**.: _With the notations above, we have_ \[f_{\infty}(x_{\infty}^{-1}\gamma(1)y_{\infty})=2^{k}\big{[}(a_{1}^{-1}a_{2}+a_{1}a_{2}^{-1})-a_{1}^{-1}a_{2}^{-1}i(t+\Delta^{-1}/2)\big{]}^{-k}\cdot e^{ik(\alpha-\beta)}.\] Proof.: It follows from the definition that \(f_{\infty}(x_{\infty}^{-1}\gamma(1)y_{\infty})\) does not depend on \(\alpha_{i},\ i=1,2\), so we may assume that \(\alpha_{1}=\alpha_{2}=0\). 
Let \(t^{\prime}=t-\Delta^{-1}/2.\) Computing the matrices one obtains \[x_{\infty}= \left(\begin{array}{ccc}a_{1}\cos\alpha&&a_{1}\Delta\sin\alpha\\ &1&\\ -a_{1}^{-1}\Delta^{-1}\sin\alpha&&a_{1}^{-1}\cos\alpha\end{array}\right),\] \[y_{\infty}= \left(\begin{array}{ccc}a_{2}\cos\beta+a_{2}^{-1}t\sin\beta&&a_{2}\Delta\sin\beta-a_{2}^{-1}\Delta t\cos\beta\\ &1&\\ -a_{2}^{-1}\Delta^{-1}\sin\beta&&a_{2}^{-1}\cos\beta\end{array}\right).\] We set \[a=a_{1}\cos\alpha,\ b=a_{1}\Delta\sin\alpha,\ c=-a_{1}^{-1}\Delta^{-1}\sin\alpha,\ d=a_{1}^{-1}\cos\alpha;\] and \[a^{\prime}=a_{2}\cos\beta+a_{2}^{-1}t\sin\beta,\ b^{\prime}=a_{2}\Delta\sin\beta-a_{2}^{-1}\Delta t\cos\beta,\] \[c^{\prime}=-a_{2}^{-1}\Delta^{-1}\sin\beta,\ d^{\prime}=a_{2}^{-1}\cos\beta.\] With these notations we have that \(x_{\infty}^{-1}\gamma(1)y_{\infty}\) is equal to \[\left(\begin{array}{ccc}\overline{d}&&\overline{b}\\ &1&\\ \overline{c}&&\overline{a}\end{array}\right)\gamma(1)\left(\begin{array}{ccc}a^{\prime}&&b^{\prime}\\ &1&\\ c^{\prime}&&d^{\prime}\end{array}\right)= \left(\begin{array}{ccc}d&&-b\\ &1&\\ -c&&a\end{array}\right)\gamma(1)\left(\begin{array}{ccc}a^{\prime}&&b^{\prime}\\ &1&\\ c^{\prime}&&d^{\prime}\end{array}\right)\] \[= \left(\begin{array}{ccc}d(a^{\prime}-\frac{c^{\prime}}{2})-bc^{\prime}&d&d(b^{\prime}-\frac{d^{\prime}}{2})-bd^{\prime}\\ -c^{\prime}&1&-d^{\prime}\\ -c(a^{\prime}-\frac{c^{\prime}}{2})+ac^{\prime}&-c&-c(b^{\prime}-\frac{d^{\prime}}{2})+ad^{\prime}\end{array}\right).\] Then by definition (4.19) we have (recall that \(k_{1}=-k,\ k_{2}=k/2\)) \[f_{\infty}(x_{\infty}^{-1}\gamma(1)y_{\infty})=2^{k}\cdot(\overline{A}-\overline{B})^{-k}, \tag{8.11}\] where \[A=d(a^{\prime}-\frac{c^{\prime}}{2})-bc^{\prime}-c(b^{\prime}-\frac{d^{\prime}}{2})+ad^{\prime}\] and \[B=\left[d(b^{\prime}-\frac{d^{\prime}}{2})-bd^{\prime}\right]\cdot|D_{E}|^{-1/2}+\big{[}-c(a^{\prime}-\frac{c^{\prime}}{2})+ac^{\prime}\big{]}\cdot|D_{E}|^{1/2}.\] Substituting expressions of \(a,b,c,d\) and 
\(a^{\prime},b^{\prime},c^{\prime},d^{\prime}\) we then have \[A=(a_{1}^{-1}a_{2}+a_{1}a_{2}^{-1})\cos(\alpha-\beta)-a_{1}^{-1}a_{2}^{-1}t^{\prime}\sin(\alpha-\beta). \tag{8.12}\] Similarly, \[B=-i\Big{[}(a_{1}^{-1}a_{2}+a_{1}a_{2}^{-1})\sin(\alpha-\beta)+a_{1}^{-1}a_{2}^{-1}t^{\prime}\cos(\alpha-\beta)\Big{]}, \tag{8.13}\] so that \[A-B=\big{[}(a_{1}^{-1}a_{2}+a_{1}a_{2}^{-1})+a_{1}^{-1}a_{2}^{-1}it^{\prime}\big{]}\cdot e^{i(\alpha-\beta)}.\] Hence the Lemma follows from (8.11), (8.12), (8.13) and this last identity. #### 8.2.3. Orbital Integrals In this subsection, we will combine Lemma 8.3 and Lemma 8.4 to compute the archimedean unipotent orbital integral. We set \[|W_{n,f}(\textbf{1})|^{2}:=\prod_{p<\infty}|W_{n,p}(\textbf{1}_{p})|^{2}. \tag{8.14}\] **Proposition 8.5**.: _Let notation be as before. If \(|W_{n,f}(\textbf{1})|^{2}=0\) we have_ \[\mathcal{O}_{\infty}(f;n)=0.\] _Otherwise we have_ \[\frac{\mathcal{O}_{\infty}(f;n)}{|W_{n,f}(\textbf{1})|^{2}} =\frac{2^{3}\pi^{2}}{2^{4(k-1)}}\frac{\Gamma(k-1)^{2}}{\Gamma(k)}\frac{|a_{n}|^{2}}{(4\pi n)^{k-1}}e^{-\pi n|D_{E}|^{-1/2}} \tag{8.15}\] \[=\frac{2^{4}\pi^{5}}{3\cdot 2^{4(k-1)}}\frac{1}{(k-1)^{2}}\prod_{p|N^{\prime}}(1+\frac{1}{p})\frac{|\lambda_{\pi^{\prime}}(n)|^{2}}{L(\pi^{\prime},\operatorname{Ad},1)}e^{-\pi n|D_{E}|^{-1/2}}.\] _In particular we have_ \[\mathcal{O}_{\infty}(f;n)\leq\frac{(kN^{\prime}n)^{o(1)}}{2^{4k}k^{2}}e^{-\pi n|D_{E}|^{-1/2}}\prod_{p<\infty}|W_{n,p}(\textbf{1}_{p})|^{2}. \tag{8.16}\] Proof.: Suppose \(|W_{n,f}(\textbf{1})|^{2}\neq 0\). 
We recall that \[\frac{\mathcal{O}_{\infty}(f;n)}{|W_{n,f}(\textbf{1})|^{2}}=\int_{N^{\prime}(\mathbb{R})\setminus G^{\prime}(\mathbb{R})}\int_{G^{\prime}(\mathbb{R})}\frac{\overline{W_{n,\infty}}(x)W_{n,\infty}(y)}{|W_{n,f}(\textbf{1})|^{2}}f_{\infty}\left(x^{-1}\gamma(1)y\right)dxdy.\] By Lemma 8.3 and Lemma 8.4 we have \[\frac{\mathcal{O}_{\infty}(f;n)}{|W_{n,f}(\mathbf{1})|^{2}}= 2^{k}|a_{n}|^{2}\int_{\mathbb{R}}\int_{\mathbb{R}_{+}}\int_{\mathbb{R}_{+}}\int_{0}^{2\pi}\int_{0}^{2\pi}e^{-2\pi n(a_{1}^{2}+a_{2}^{2})-2\pi nit}\] \[\frac{a_{1}^{k}a_{2}^{k}}{\big{[}(a_{1}^{-1}a_{2}+a_{1}a_{2}^{-1})-a_{1}^{-1}a_{2}^{-1}i(t+\Delta^{-1}/2)\big{]}^{k}}\frac{d\alpha d\beta}{4\pi^{2}}\frac{da_{1}}{a_{1}^{3}}\frac{da_{2}}{a_{2}^{3}}dt.\] Since \(k=|k_{1}|\geq 8\), the integral \(\mathcal{O}_{\infty}(f;n)\) converges absolutely. After passing to polar coordinates \(a_{1}=r\cos\theta\), \(a_{2}=r\sin\theta\), we obtain \[\frac{\mathcal{O}_{\infty}(f;n)}{|W_{n,f}(\mathbf{1})|^{2}}= 2^{k}|a_{n}|^{2}\cdot\int_{\mathbb{R}_{+}}\int_{\mathbb{R}_{+}}\int_{\mathbb{R}}\frac{e^{-2\pi n(a_{1}^{2}+a_{2}^{2})-2\pi nit}\cdot(a_{1}a_{2})^{2k}}{\big{[}a_{1}^{2}+a_{2}^{2}-i(t+\Delta^{-1}/2)\big{]}^{k}}dt\frac{da_{1}}{a_{1}^{3}}\frac{da_{2}}{a_{2}^{3}}\] \[= 2^{k}|a_{n}|^{2}e^{\pi in\Delta^{-1}}\iint_{\mathbb{R}_{+}^{2}}\int_{\mathbb{R}}\frac{e^{-2\pi n(a_{1}^{2}+a_{2}^{2})-2\pi nit}\cdot(a_{1}a_{2})^{2k}}{(a_{1}^{2}+a_{2}^{2}-it)^{k}}dt\frac{da_{1}}{a_{1}^{3}}\frac{da_{2}}{a_{2}^{3}}\] \[= 2^{k-2k+3}|a_{n}|^{2}e^{\pi in\Delta^{-1}}\int_{0}^{\infty}\int_{0}^{\frac{\pi}{2}}\int_{\mathbb{R}}\frac{e^{-2\pi nr^{2}-2\pi nit}\cdot(r^{2}\cos\theta\sin\theta)^{2k-3}}{(r^{2}-it)^{k}}dtrdrd\theta\] \[= \frac{2}{2^{k-1}}|a_{n}|^{2}e^{\pi in\Delta^{-1}}\int_{0}^{\pi}(\sin\theta)^{2k-3}d\theta\cdot\int_{0}^{\infty}\int_{\mathbb{R}}\frac{e^{-2\pi nr-2\pi nit}\cdot r^{2k-3}}{(r-it)^{k}}dtdr,\] after making the changes of variable \(2\theta\leftrightarrow\theta\), 
\(r^{2}\leftrightarrow r\). We have \[\int_{0}^{\pi}\sin^{2k-3}\theta d\theta=\pi^{1/2}\frac{\Gamma(k-1)}{\Gamma(k-1/2)}.\] Appealing to the Cauchy integral formula we then obtain \[\frac{1}{e^{\pi in\Delta^{-1}}}\frac{\mathcal{O}_{\infty}(f;n)}{|W_{n,f}(\mathbf{1})|^{2}}= \frac{\pi^{1/2}}{2^{k-2}}|a_{n}|^{2}\frac{\Gamma(k-1)}{\Gamma(k-1/2)}\frac{(-1)^{k}}{i}\int_{0}^{\infty}\int_{i\mathbb{R}}\frac{e^{-2\pi nr-2\pi nz}\cdot r^{2k-3}}{(z-r)^{k}}dzdr\] \[= \frac{2\pi^{1/2}}{2^{k-1}}|a_{n}|^{2}\frac{\Gamma(k-1)}{\Gamma(k-1/2)}\cdot\frac{2\pi(-1)^{k-1}}{(k-1)!}\int_{0}^{\infty}\frac{r^{2k-3}}{e^{2\pi nr}}\bigg{[}\frac{d^{k-1}e^{-2\pi nz}}{dz^{k-1}}\bigg{]}_{z=r}dr\] \[= \frac{2\pi^{1/2}}{2^{k-1}}|a_{n}|^{2}\frac{\Gamma(k-1)}{\Gamma(k-1/2)}\frac{2\pi(-1)^{k-1}}{\Gamma(k)}(-2\pi n)^{k-1}\cdot\int_{0}^{\infty}e^{-4\pi nr}\cdot r^{2k-2}\frac{dr}{r}\] \[= \frac{2\pi^{1/2}}{2^{k-1}}|a_{n}|^{2}\frac{\Gamma(k-1)}{\Gamma(k-1/2)}\frac{2\pi}{\Gamma(k)}\frac{(2\pi n)^{k-1}}{(4\pi n)^{2(k-1)}}\cdot\Gamma(2(k-1))\] \[= \frac{2\pi^{1/2}}{2^{k-1}}|a_{n}|^{2}\frac{\Gamma(k-1)}{\Gamma(k-1/2)}\frac{2\pi}{\Gamma(k)}\frac{(2\pi n)^{k-1}}{(4\pi n)^{2(k-1)}}\pi^{1/2}2^{1-2(k-1)}\Gamma(k-1)\Gamma(k-1/2)\] \[= \frac{2^{3}\pi^{2}}{2^{4(k-1)}}\frac{\Gamma(k-1)^{2}}{\Gamma(k)}\frac{|a_{n}|^{2}}{(4\pi n)^{k-1}}\] on using the duplication formula \[\Gamma(2(k-1))=\pi^{1/2}2^{1-2(k-1)}\Gamma(k-1)\Gamma(k-1/2).\] Then formula (8.15) follows from (4.40) and the bound (8.16) results from (4.41) and (4.44). _Remark 8.2_.: The reason for this normalization by the factor \[|W_{n,f}(\mathbf{1})|^{2}:=\prod_{p<\infty}|W_{n,p}(\mathbf{1}_{p})|^{2}\] is that, as we will see in the forthcoming lemma, for any \(n\), the local orbital integrals \(\mathcal{O}_{p}(f;n)\) appearing in Lemma 8.2 are equal to \(|W_{n,p}(\mathbf{1}_{p})|^{2}\) for almost every \(p\).
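Two classical integral evaluations are invoked in the computation above. The snippet below is an independent numerical sanity check (our own; the quadrature routine and the sample value \(k=8\) are not from the text): it compares a midpoint-rule approximation against the closed forms \(\int_{0}^{\pi}\sin^{2k-3}\theta\,d\theta=\pi^{1/2}\Gamma(k-1)/\Gamma(k-1/2)\) and \(\int_{0}^{\infty}e^{-4\pi nr}r^{2k-2}\,\frac{dr}{r}=\Gamma(2k-2)/(4\pi n)^{2k-2}\).

```python
import math

def midpoint(f, a, b, n=400_000):
    # Composite midpoint rule; ample accuracy for these smooth integrands.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

k, n = 8, 1  # sample values; the paper assumes k = |k_1| >= 8

# Sine-power integral used after the polar-coordinate substitution.
sine_num = midpoint(lambda t: math.sin(t) ** (2 * k - 3), 0.0, math.pi)
sine_closed = math.sqrt(math.pi) * math.gamma(k - 1) / math.gamma(k - 0.5)

# Gamma integral appearing in the Cauchy-formula step (tail beyond r = 6 is negligible).
gamma_num = midpoint(lambda r: math.exp(-4 * math.pi * n * r) * r ** (2 * k - 3), 0.0, 6.0)
gamma_closed = math.gamma(2 * k - 2) / (4 * math.pi * n) ** (2 * k - 2)

assert abs(sine_num - sine_closed) < 1e-6
assert abs(gamma_num / gamma_closed - 1) < 1e-6
```

Both identities agree with the quadrature to well within the stated tolerances; the check deliberately stops short of the final constant in (8.15), which depends on normalizations fixed earlier in the paper.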
### Computation of \(\mathcal{O}_{p}(f;n)\) when \(\pi^{\prime}_{p}\) is unramified

In this section and the next ones, we compute the local orbital integrals over the nonarchimedean places. Let \(p\) be a rational prime and denote by \(\nu\) the usual \(p\)-adic valuation on \(\mathbb{Q}_{p}\). In this subsection we will assume \(p\nmid N^{\prime}\), so that \(\xi^{\prime}_{p}\) is a spherical vector. The next lemmas consider this situation when

* \(p\) splits in \(E\);
* \(p\) is inert in \(E\) and \(\pi_{p}\) is unramified (i.e. \(p\nmid N\));
* \(p\) is inert in \(E\) and \(\pi_{p}\) is ramified (i.e. \(p\mid N\));
* \(p\) is ramified in \(E\).

**Lemma 8.6**.: _Let notation be as before. Let \(p\) be a prime split in \(E\) and coprime to \(N\). Under the isomorphism_ \[G^{\prime}(\mathbb{Q}_{p})\simeq\operatorname{GL}(2,\mathbb{Q}_{p}),\] \(\pi^{\prime}_{p}\) _is isomorphic to a principal series representation_ \[\pi^{\prime}_{p}\simeq\operatorname{Ind}(\chi\otimes\overline{\chi})\] _for some unramified unitary character \(\chi\). We have_ \[\mathcal{O}_{p}(f;n)=|W_{n,p}(\boldsymbol{1}_{p})|^{2}\frac{e^{2\pi inr_{p}(-\frac{1}{2\Delta})}}{p^{\nu(n)}}\cdot\sum_{l=0}^{\nu(n)}(l+1)\cdot\left|\frac{\chi(p)^{\nu(n)-l+1}-\overline{\chi}(p)^{\nu(n)-l+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\right|^{2}. \tag{8.17}\] _In particular, (8.17) equals zero if \(\nu(n)<0\) and equals \(1\) if \(\nu(n)=0\)._ Proof.: Let \(g_{p}\in G^{\prime}(\mathbb{Q}_{p})\) have Iwasawa decomposition \(g_{p}=u_{p}a_{p}\kappa_{p}\). Then \[W_{n,p}(g_{p})=|n|_{p}^{-1/2}\psi_{n,p}(u_{p})\overline{\chi}(n)W_{1,p}\left(\left(\begin{matrix}n&&\\ &1&\\ &&1\end{matrix}\right)a_{p}\right).
\tag{8.18}\] Write \(a_{p}=\operatorname{diag}(p^{j_{1}},1,p^{j_{2}})\), where \((j_{1},j_{2})\in\mathbb{Z}^{2}\). Then it follows from (8.18) that \(W_{n,p}(g_{p})=0\) unless \(j_{1}\geq j_{2}-\nu(n)\). Let \(x_{p}\in N^{\prime}(\mathbb{Q}_{p})\backslash G^{\prime}(\mathbb{Q}_{p})\) and \(y_{p}\in G^{\prime}(\mathbb{Q}_{p})\). We can write them in their Iwasawa coordinates: \[x_{p}=\left(\begin{matrix}p^{i_{1}}&&\\ &1&\\ &&p^{i_{2}}\end{matrix}\right)\kappa_{1},\quad y_{p}=\left(\begin{matrix}1&&\Delta t\\ &1&\\ &&1\end{matrix}\right)\left(\begin{matrix}p^{j_{1}}&&\\ &1&\\ &&p^{j_{2}}\end{matrix}\right)\kappa_{2},\] where \((i_{1},i_{2})\in\mathbb{Z}^{2}\), \((j_{1},j_{2})\in\mathbb{Z}^{2}\); \(t\in\mathbb{Q}_{p}\) and \(\kappa_{1},\kappa_{2}\in G^{\prime}(\mathbb{Z}_{p})\). Since \(p\nmid N\), \(\iota(\kappa_{1}),\iota(\kappa_{2})\in K_{p}=\operatorname{GL}(3,\mathbb{Z}_{p})\) via the natural embedding (2.3), and in this case we have \[f_{p}=\boldsymbol{1}_{\operatorname{GL}(3,\mathbb{Z}_{p})}.\] Hence \(f_{p}(x_{p}^{-1}\gamma(1)y_{p})\neq 0\) if and only if \[\left(\begin{matrix}p^{-i_{1}}&&\\ &1&\\ &&p^{-i_{2}}\end{matrix}\right)\left(\begin{matrix}1&1&-1/2\\ &1&-1\\ &&1\end{matrix}\right)\left(\begin{matrix}p^{j_{1}}&&p^{j_{2}}\Delta t\\ &1&\\ &&p^{j_{2}}\end{matrix}\right)\in\operatorname{GL}(3,\mathbb{Z}_{p}),\] which, by a direct computation, can be further reduced to \[\left(\begin{matrix}p^{j_{1}-i_{1}}&p^{-i_{1}}&p^{j_{2}-i_{1}}(\Delta t-\frac{1}{2})\\ &1&-p^{j_{2}}\\ &&p^{j_{2}-i_{2}}\end{matrix}\right)\in\operatorname{GL}(3,\mathbb{Z}_{p}).\] Hence \(j_{1}=i_{1}\leq 0\) and \(j_{2}=i_{2}\geq 0\). By the support of Whittaker functions we have \[\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})=0\] unless \(i_{1}\geq i_{2}-\nu(n)\) and \(j_{1}\geq j_{2}-\nu(n)\). Hence \(\mathcal{O}_{p}(f;n)=0\) unless \[0\geq i_{1}\geq i_{2}-\nu(n)\geq-\nu(n).\] In particular we have \(\nu(n)\geq 0\). Since \(\operatorname{vol}(G^{\prime}(\mathbb{Z}_{p}))=1\), we therefore have
\[\mathcal{O}_{p}(f;n)= \int_{N^{\prime}(\mathbb{Q}_{p})\setminus G^{\prime}(\mathbb{Q}_{p})}\int_{G^{\prime}(\mathbb{Q}_{p})}f_{p}(x_{p}^{-1}\gamma(1)y_{p})\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})dx_{p}dy_{p}\] \[= |n|_{p}^{-1}\sum_{i_{1}=-\nu(n)}^{0}\sum_{i_{2}=0}^{i_{1}+\nu(n)}p^{2i_{1}-2i_{2}}I_{p}(i_{1},i_{2})\Big{|}W_{1,p}\begin{pmatrix}np^{i_{1}}&&\\ &1&\\ &&p^{i_{2}}\end{pmatrix}\Big{|}^{2},\] where \[I_{p}(i_{1},i_{2}):=\int_{\mathbb{Q}_{p}}\mathbf{1}_{\mathbb{Z}_{p}}(p^{i_{2}-i_{1}}(\Delta t-1/2))\theta_{n,p}(t)dt.\] Since \(\Delta\in\mathbb{Z}_{p}^{\times}\) and \(i_{2}-i_{1}\leq\nu(n)\), we have \[I_{p}(i_{1},i_{2})=e^{2\pi inr_{p}(-\Delta^{-1}/2)}p^{i_{2}-i_{1}}.\] Hence we have \[\mathcal{O}_{p}(f;n)= \frac{e^{2\pi inr_{p}(-\Delta^{-1}/2)}}{|n|_{p}}\cdot\sum_{l=0}^{\nu(n)}(\nu(n)-l+1)p^{l-\nu(n)}\Bigg{|}W_{1,p}\left(\iota\begin{pmatrix}p^{l}&\\ &1\end{pmatrix}\right)\Bigg{|}^{2}.\] Substituting the Casselman-Shalika formula (cf. [10]) into the above computation we then obtain \[\frac{\mathcal{O}_{p}(f;n)}{e^{2\pi inr_{p}(-\frac{1}{2\Delta})}}= \frac{|W_{1,p}(\mathbf{1}_{p})|^{2}}{|n|_{p}}\cdot\sum_{l=0}^{\nu(n)}(\nu(n)-l+1)p^{l-\nu(n)}\cdot\Bigg{|}\frac{p^{-\frac{l}{2}}(\chi(p)^{l+1}-\overline{\chi}(p)^{l+1})}{\chi(p)-\overline{\chi}(p)}\Bigg{|}^{2}\] \[= |W_{n,p}(\mathbf{1}_{p})|^{2}\cdot\sum_{l=0}^{\nu(n)}(l+1)p^{-\nu(n)}\cdot\Bigg{|}\frac{\chi(p)^{\nu(n)-l+1}-\overline{\chi}(p)^{\nu(n)-l+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\Bigg{|}^{2}.\] Then (8.17) follows. **Lemma 8.7**.: _Let notation be as before. Let \(p\) be a rational prime inert in \(E\) such that \(p\nmid N\)._ _For_ \[f_{p}=1_{G(\mathbb{Z}_{p})},\] _we have_ \[\mathcal{O}_{p}(f;n)=|W_{n,p}(\textbf{1}_{p})|^{2}\sum_{-\nu(n)/2\leq i\leq 0}\Bigg{|}\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\Bigg{|}^{2}.
\tag{8.19}\] _In particular, (8.19) equals \(0\) if \(\nu(n)<0\) and equals \(1\) if \(\nu(n)=0\)._ _For_ \[f_{p}=1_{G(\mathbb{Z}_{p})A_{r}G(\mathbb{Z}_{p})},\] _for \(r\geq 1\) (cf. (A.12) in the Appendix), we have_ \[\mathcal{O}_{p}(f;n)\ll(r+\nu(n))^{4}p^{r}|W_{n,p}(\textbf{1}_{p})|^{2}, \tag{8.20}\] _where the implied constant is absolute._ Proof.: By our assumptions \[K^{\prime}_{p}(N)=G^{\prime}(\mathbb{Z}_{p})=U(L\otimes_{\mathbb{Q}}\mathbb{Q}_{p}).\] Let \(g_{p}\in G^{\prime}(\mathbb{Q}_{p})\) and write \(g_{p}=u_{p}a_{p}k_{p}\) for its Iwasawa decomposition. Then one has \[W_{n,p}(g_{p})=W_{n,p}(u_{p}a_{p}k_{p})=\psi_{n,p}(u_{p})W_{n,p}(a_{p}). \tag{8.21}\] For \(a_{p}=\operatorname{diag}(p^{i},1,p^{-i})\), where \(i\in\mathbb{Z}\), we have, by the Casselman-Shalika formula, that \[W_{n,p}\begin{pmatrix}p^{i}&&\\ &1&\\ &&p^{-i}\end{pmatrix}=\frac{\mathbf{1}_{\mathcal{O}_{E_{p}}}(np^{2i})}{p^{i}}\cdot\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\cdot W_{n,p}(\mathbf{1}_{p}). \tag{8.22}\] Let \(x_{p}\in N^{\prime}(\mathbb{Q}_{p})\backslash G^{\prime}(\mathbb{Q}_{p})\) and \(y_{p}\in G^{\prime}(\mathbb{Q}_{p})\). We can write them in their Iwasawa coordinates: \[x_{p}=\begin{pmatrix}p^{i}&&\\ &1&\\ &&p^{-i}\end{pmatrix}\kappa_{1},\quad y_{p}=\begin{pmatrix}1&&\Delta t\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}p^{j}&&\\ &1&\\ &&p^{-j}\end{pmatrix}\kappa_{2},\] for \(i,j\in\mathbb{Z}\); \(t\in\mathbb{Q}_{p}\) and \(\kappa_{1},\kappa_{2}\in K_{p}^{\prime}(N)\). By definition, \(f_{p}=\mathbf{1}_{K_{p}}\). Since \(p\nmid N\), \(\kappa_{1},\kappa_{2}\in K_{p}\) via the natural embedding (2.3).
Hence \(f_{p}(x_{p}^{-1}\gamma(1)y_{p})\neq 0\) if and only if \[\begin{pmatrix}p^{-i}&&\\ &1&\\ &&p^{i}\end{pmatrix}\begin{pmatrix}1&1&-1/2\\ &1&-1\\ &&1\end{pmatrix}\begin{pmatrix}1&&\Delta t\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}p^{j}&&\\ &1&\\ &&p^{-j}\end{pmatrix}\in G(\mathbb{Z}_{p}),\] which could be further reduced to \[\left(\begin{array}{ccc}p^{j-i}&p^{-i}&p^{-j-i}(\Delta t-\frac{1}{2})\\ &1&-p^{-j}\\ &&p^{-j+i}\end{array}\right)\in G(\mathbb{Z}_{p}).\] Hence \(j=i\leq 0\). By the support of Whittaker functions we have \[\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})=0\] unless \[\nu(n)+2i\geq 0,\ \nu(n)+2j\geq 0.\] In particular \(\nu(n)\geq 0\). It follows that \(\mathcal{O}_{p}(f;n)=0\) unless \[i=j\geq-\nu(n)/2.\] Therefore, we have \[\mathcal{O}_{p}(f;n)= \int_{N^{\prime}(\mathbb{Q}_{p})\backslash G^{\prime}(\mathbb{Q}_{p})}\int_{G^{\prime}(\mathbb{Q}_{p})}f_{p}(x_{p}^{-1}\gamma(1)y_{p})\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})dx_{p}dy_{p}\] \[= \sum_{-\nu(n)/2\leq i\leq 0}p^{4i}\int_{\mathbb{Q}_{p}}\mathbf{1}_{\mathcal{O}_{E_{p}}}(p^{-2i}(\Delta t-1/2))\theta_{n,p}(t)dt\,\Bigg{|}W_{n,p}\begin{pmatrix}p^{i}&&\\ &1&\\ &&p^{-i}\end{pmatrix}\Bigg{|}^{2}.\] The condition \(p^{-2i}(\Delta t-1/2)\in\mathcal{O}_{E_{p}}\) is equivalent to \(t\in p^{2i}\mathbb{Z}_{p}\) and since \(-2i\leq\nu(n)\), we have \[\int_{\mathbb{Q}_{p}}\mathbf{1}_{\mathcal{O}_{E_{p}}}(p^{-2i}(\Delta t-1/2))\theta_{n,p}(t)dt=p^{-2i}.\] This in conjunction with formula (8.22) implies that \[\frac{\mathcal{O}_{p}(f;n)}{|W_{n,p}(\mathbf{1}_{p})|^{2}}=\sum_{-\nu(n)/2\leq i\leq 0}\left|\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\right|^{2},\] proving (8.19).
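Both closed forms just obtained are finite sums in the Satake parameter, so their boundary behavior is easy to test numerically. The sketch below (our own illustration; the helper names and the sample parameter \(\chi(p)=e^{0.6i}\) are assumptions, not the paper's notation) evaluates the sums in (8.17) and (8.19) and confirms the value \(1\) at \(\nu(n)=0\) noted after each lemma.

```python
import cmath

def u(chi, m):
    # (chi^{m+1} - chibar^{m+1}) / (chi - chibar); well defined off the real axis.
    return (chi ** (m + 1) - chi.conjugate() ** (m + 1)) / (chi - chi.conjugate())

def factor_split(chi, p, nu):
    # Finite sum in (8.17), split case; the common (chi - chibar) cancels.
    return sum((l + 1) * abs(u(chi, nu - l) / u(chi, nu)) ** 2
               for l in range(nu + 1)) / p ** nu

def factor_inert(chi, nu):
    # Finite sum in (8.19), inert unramified case; i runs over -nu/2 <= i <= 0.
    lo = -(nu // 2)
    return sum(abs(u(chi, nu + 2 * i) / u(chi, nu)) ** 2 for i in range(lo, 1))

chi = cmath.exp(0.6j)  # sample unramified unitary parameter
assert abs(factor_split(chi, 5, 0) - 1.0) < 1e-12
assert abs(factor_inert(chi, 0) - 1.0) < 1e-12
```

For \(\nu(n)=0\) each sum collapses to its single \(l=0\) (resp. \(i=0\)) term, which is identically \(1\), matching the remarks after (8.17) and (8.19).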
Now let's take \(f_{p}=1_{G(\mathbb{Z}_{p})A_{r}G(\mathbb{Z}_{p})}\), for \(r\geq 1.\) Then \(f_{p}(x_{p}^{-1}\gamma(1)y_{p})\neq 0\) if and only if \[\begin{pmatrix}p^{-i}&&\\ &1&\\ &&p^{i}\end{pmatrix}\begin{pmatrix}1&1&-1/2\\ &1&-1\\ &&1\end{pmatrix}\begin{pmatrix}1&&\Delta t\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}p^{j}&&\\ &1&\\ &&p^{-j}\end{pmatrix}\in G(\mathbb{Z}_{p})A_{r}G(\mathbb{Z}_{p}),\] which could be further reduced to \(t\in\mathcal{E}(i,j;r).\) Here \(\mathcal{E}(i,j;r)\) is defined by \[\Bigg{\{}t\in\mathbb{Q}_{p}:\ \left(\begin{array}{ccc}p^{j-i}&p^{-i}&p^{-j-i}( \Delta t-\frac{1}{2})\\ &1&-p^{-j}\\ &&p^{-j+i}\end{array}\right)\in G(\mathbb{Z}_{p})\begin{pmatrix}p^{r}&&\\ &1&\\ &&p^{-r}\end{pmatrix}G(\mathbb{Z}_{p})\Bigg{\}}.\] Note that \(\mathcal{E}(i,j;r)\) is empty unless \(i\leq r\), \(j\leq r\), and \(\Delta t-1/2\in p^{i+j-r}\mathbb{Z}_{p}\). Considering the support of Whittaker functions as above, it follows that \[\mathcal{O}_{p}(f;n)= \int_{N^{\prime}(\mathbb{Q}_{p})\backslash G^{\prime}(\mathbb{Q }_{p})}\int_{G^{\prime}(\mathbb{Q}_{p})}f_{p}(x_{p}^{-1}\gamma(1)y_{p}) \overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})dx_{p}dy_{p}\] is equal to \[\sum_{\begin{subarray}{c}-\frac{\nu(n)}{2}\leq i\leq r\\ -\frac{\nu(n)}{2}\leq j\leq r\end{subarray}}p^{2(i+j)}\int_{\mathbb{Q}_{p}} \mathbf{1}_{\mathcal{E}(i,j;r)}(t)\theta_{n,p}(t)dtW_{n,p}\begin{pmatrix}p^{i} &&\\ &1&\\ &&p^{-i}\end{pmatrix}\overline{W_{n,p}\begin{pmatrix}p^{j}&&\\ &1&\\ &&p^{-j}\end{pmatrix}}.\] Appealing to the triangle inequality, \(|\mathcal{O}_{p}(f;n)|\) is \[\leq\sum_{\begin{subarray}{c}-\frac{\nu(n)}{2}\leq i\leq r\\ -\frac{\nu(n)}{2}\leq j\leq r\end{subarray}}p^{2(i+j)}\int_{\mathbb{Q}_{p}} \mathbf{1}_{\mathcal{E}(i,j;r)}(t)dt\Bigg{|}W_{n,p}\begin{pmatrix}p^{i}&&\\ &1&\\ &&p^{-i}\end{pmatrix}\overline{W_{n,p}\begin{pmatrix}p^{j}&&\\ &1&\\ &&p^{-j}\end{pmatrix}}\Bigg{|}\] \[\leq\sum_{\begin{subarray}{c}-\frac{\nu(n)}{2}\leq i\leq r\\ -\frac{\nu(n)}{2}\leq j\leq 
r\end{subarray}}p^{r}\Bigg{|}\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\cdot\frac{\chi(p)^{\nu(n)+2j+1}-\overline{\chi}(p)^{\nu(n)+2j+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\Bigg{|}\cdot|W_{n,p}(\mathbf{1}_{p})|^{2}.\] Hence (8.20) follows. If \(p\mid N\) then \(p\) is inert in \(E\) and \(\pi_{p}\simeq\mathrm{St}_{p}\) is the Steinberg representation. From (4.23) we have \[G(\mathbb{Z}_{p})=U(\Gamma\otimes_{\mathbb{Q}}\mathbb{Q}_{p})\text{ and }G^{\prime}(\mathbb{Z}_{p})=U(L\otimes_{\mathbb{Q}}\mathbb{Q}_{p}).\] Explicitly, \[G(\mathbb{Z}_{p}) =\{g\in G(\mathbb{Q}_{p}):\ g.\Gamma\otimes_{\mathbb{Q}}\mathbb{Q}_{p}=\Gamma\otimes_{\mathbb{Q}}\mathbb{Q}_{p}\}=G(\mathbb{Q}_{p})\cap\mathrm{GL}(3,\mathcal{O}_{E_{p}}),\] \[G^{\prime}(\mathbb{Z}_{p}) =\{g\in G^{\prime}(\mathbb{Q}_{p}):\ g.L\otimes_{\mathbb{Q}}\mathbb{Q}_{p}=L\otimes_{\mathbb{Q}}\mathbb{Q}_{p}\}=G^{\prime}(\mathbb{Q}_{p})\cap\mathrm{GL}(2,\mathcal{O}_{E_{p}}).\] **Lemma 8.8**.: _Let notation be as before. Suppose that \(p\mid N\) (in particular \(p\) is inert) and \((p,n)=1\). Then_ \[\mathcal{O}_{p}(f;n)=|W_{n,p}(\mathbf{1}_{p})|^{2}\frac{\mu(I_{p}^{\prime})^{2}}{\mu(I_{p})}. \tag{8.23}\] Proof.: Let \(x_{p}\in N^{\prime}(\mathbb{Q}_{p})\backslash G^{\prime}(\mathbb{Q}_{p})\) and \(y_{p}\in G^{\prime}(\mathbb{Q}_{p})\). By Lemma A.1 and the Iwasawa decomposition, we can write \[x_{p}=\begin{pmatrix}p^{i}&&\\ &1&\\ &&p^{-i}\end{pmatrix}\mu_{1}\kappa_{1}^{\prime},\ y_{p}=\begin{pmatrix}1&&t\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}p^{j}&&\\ &1&\\ &&p^{-j}\end{pmatrix}\mu_{2}\kappa_{2}^{\prime},\] where \(\kappa_{1}^{\prime},\kappa_{2}^{\prime}\in I_{p}^{\prime}\); \(i,j\in\mathbb{Z}\); and \(\mu_{1}\), \(\mu_{2}\) are either trivial or of the form \(\begin{pmatrix}\delta&&1\\ &1&\\ 1&&\end{pmatrix}\) as in Lemma A.1.
By definition, \(f_{p}(x_{p}^{-1}\gamma(1)y_{p})\neq 0\) is equivalent to \[x_{p}^{-1}\gamma(1)y_{p}=\mu_{1}^{-1}\begin{pmatrix}p^{-i}&&\\ &1&\\ &&p^{i}\end{pmatrix}\begin{pmatrix}1&1&-1/2\\ &1&-1\\ &&1\end{pmatrix}\begin{pmatrix}1&&\Delta t\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}p^{j}&&\\ &1&\\ &&p^{-j}\end{pmatrix}\mu_{2}\in I_{p},\] which could be further simplified to \[\mu_{1}^{-1}\left(\begin{array}{ccc}p^{j-i}&p^{-i}&p^{-j-i}(\Delta t-\frac{1}{2})\\ &1&-p^{-j}\\ &&p^{-j+i}\end{array}\right)\mu_{2}\in I_{p}\subseteq G(\mathbb{Z}_{p}). \tag{8.24}\] Hence \(j=i\leq 0\). However, when \(i<0\), from the properties of Whittaker functions (since \((n,p)=1\)) we have \(W_{n,p}(x_{p})=W_{n,p}(\mathrm{diag}(p^{i},1,p^{-i}))=0\). Then \(\mathcal{O}_{p}(f;n)=0\) in this case. So \(i=j=0\) and (8.24) becomes \[\mu_{1}^{-1}\left(\begin{array}{ccc}1&1&\Delta t-\frac{1}{2}\\ &1&-1\\ &&1\end{array}\right)\mu_{2}\in I_{p}. \tag{8.25}\] If \(\mu_{1}\neq\mathbf{1}_{p}\) and \(\mu_{2}\neq\mathbf{1}_{p}\), we may write \(\mu_{1}=\begin{pmatrix}\delta_{1}&&1\\ &1&\\ 1&&\end{pmatrix}\), \(\mu_{2}=\begin{pmatrix}\delta_{2}&&1\\ &1&\\ 1&&\end{pmatrix}\), with \(\delta_{j}+\overline{\delta}_{j}=0\), \(j=1,2\). Since \[\begin{pmatrix}\delta_{1}&&1\\ &1&\\ 1&&\end{pmatrix}^{-1}\left(\begin{array}{ccc}1&1&\Delta t-\frac{1}{2}\\ &1&-1\\ &&1\end{array}\right)\begin{pmatrix}\delta_{2}&&1\\ &1&\\ 1&&\end{pmatrix}=\begin{pmatrix}1&0&0\\ -1&1&0\\ \Delta t-\frac{1}{2}+\delta_{2}-\delta_{1}&1&1\end{pmatrix}\] does not belong to \(I_{p}\), (8.25) cannot hold, a contradiction. Hence at least one of \(\mu_{1}\) and \(\mu_{2}\) equals \(\mathbf{1}_{p}\).
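The displayed matrix product can be verified directly. The following sketch (our own check, with arbitrary numerical stand-ins for \(\Delta t-\tfrac{1}{2}\), \(\delta_{1}\), \(\delta_{2}\)) multiplies the three matrices and confirms that the result is lower triangular unipotent with \(-1\) in the \((2,1)\) entry, hence not in the Iwahori subgroup \(I_{p}\).

```python
# Verify: mu1^{-1} * U * mu2 = [[1,0,0], [-1,1,0], [s + d2 - d1, 1, 1]],
# where U is the unipotent matrix in (8.25), s = Delta*t - 1/2, and
# mu_j = [[d_j,0,1],[0,1,0],[1,0,0]] as in Lemma A.1.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

s, d1, d2 = 2.5, 1.25, -0.75  # arbitrary sample values (exact in binary)
U = [[1, 1, s], [0, 1, -1], [0, 0, 1]]
mu1_inv = [[0, 0, 1], [0, 1, 0], [1, 0, -d1]]  # inverse of [[d1,0,1],[0,1,0],[1,0,0]]
mu2 = [[d2, 0, 1], [0, 1, 0], [1, 0, 0]]

result = matmul(matmul(mu1_inv, U), mu2)
expected = [[1, 0, 0], [-1, 1, 0], [s + d2 - d1, 1, 1]]
assert result == expected
```

Since the \((2,1)\) entry is \(-1\), a unit, the product cannot be congruent to an upper triangular matrix modulo \(p\), confirming that it lies outside \(I_{p}\).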
Let \(b\in\mathcal{O}_{E_{p}}^{\times}\). From the following computations \[\begin{pmatrix}1&1&\Delta t-\frac{1}{2}\\ &1&-1\\ &&1\end{pmatrix}\begin{pmatrix}\delta_{2}&&1\\ &1&\\ 1&&\end{pmatrix}=\left(\begin{array}{ccc}\Delta t-\frac{1}{2}+\delta_{2}&1&1\\ -1&1&\\ 1&&\end{array}\right)\not\in I_{p},\] \[\begin{pmatrix}\delta_{1}&&1\\ &1&\\ 1&&\end{pmatrix}^{-1}\left(\begin{array}{ccc}1&1&\Delta t-\frac{1}{2}\\ &1&-1\\ &&1\end{array}\right)=\left(\begin{array}{ccc}&&1\\ &1&-1\\ 1&1&\Delta t-\frac{1}{2}-\delta_{1}\end{array}\right)\not\in I_{p},\] we see that in order for (8.25) to hold, one must have \(\mu_{1}=\mu_{2}=\mathbf{1}_{p}\). Therefore, \[\frac{\mathcal{O}_{p}(f;n)}{|W_{n,p}(\mathbf{1}_{p})|^{2}}= \int_{N^{\prime}(\mathbb{Q}_{p})\setminus G^{\prime}(\mathbb{Q}_{p})}\int_{G^{\prime}(\mathbb{Q}_{p})}\frac{f_{p}(x_{p}^{-1}\gamma(1)y_{p})\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})}{|W_{n,p}(\mathbf{1}_{p})|^{2}}dx_{p}dy_{p}\] \[= \frac{\mu(I_{p}^{\prime})^{2}}{\mu(I_{p})}\int_{\mathbb{Q}_{p}}\mathbf{1}_{\mathcal{O}_{E_{p}}}(\Delta t-\frac{1}{2})\psi_{p}(nt)dt=\frac{\mu(I_{p}^{\prime})^{2}}{\mu(I_{p})}.\] Thus (8.23) follows. **Lemma 8.9**.: _Let notation be as before. Suppose that \(p\mid(n,N)\). Then we have_ \[\frac{\mathcal{O}_{p}(f;n)}{|W_{n,p}(\mathbf{1}_{p})|^{2}}=\frac{\mu(I_{p}^{\prime})^{2}}{\mu(I_{p})}\cdot\Bigg{\{}1+2p\sum_{-\nu(n)/2\leq i\leq-1}\left|\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\right|^{2}\Bigg{\}}.
\tag{8.26}\] _In particular (8.26) equals \(0\) if \(\nu(n)\leq 0\)._ Proof.: By the proof of Lemma 8.8 (we use the same notation here), \(f_{p}(x_{p}^{-1}\gamma(1)y_{p})\neq 0\) is equivalent to (8.24), i.e., \[\mu_{1}^{-1}\left(\begin{array}{ccc}p^{j-i}&p^{-i}&p^{-j-i}(\Delta t-\frac{1}{2})\\ &1&-p^{-j}\\ &&p^{-j+i}\end{array}\right)\mu_{2}\in I_{p}\subseteq G(\mathbb{Z}_{p}).\] If \(\mu_{1}\neq\mathbf{1}_{p}\) and \(\mu_{2}\neq\mathbf{1}_{p}\), we may write \(\mu_{1}=\begin{pmatrix}\delta_{1}&&1\\ &1&\\ 1&&\end{pmatrix},\ \mu_{2}=\begin{pmatrix}\delta_{2}&&1\\ &1&\\ 1&&\end{pmatrix},\) with \(\delta_{j}+\overline{\delta}_{j}=0\), \(j=1,2\). Then the condition (8.24) becomes \[\left(\begin{array}{ccc}1&&\\ p^{-i}&1&\\ p^{-2i}(\Delta t-\frac{1}{2})+\delta_{2}-\delta_{1}&p^{-i}&1\end{array}\right)\in I_{p}\subseteq G(\mathbb{Z}_{p}),\] which is equivalent to \[i\leq-1,\;p^{-2i}(\Delta t-\frac{1}{2})\in\mathcal{O}_{E_{p}},\;\text{and}\;\delta_{1}-\delta_{2}\equiv p^{-2i}(\Delta t-\frac{1}{2})\pmod{N}.\] On the other hand, we have \[\left(\begin{array}{ccc}1&p^{-i}&p^{-2i}(\Delta t-\frac{1}{2})\\ &1&-p^{-i}\\ &&1\end{array}\right)\begin{pmatrix}\delta_{2}&&1\\ &1&\\ 1&&\end{pmatrix}=\left(\begin{array}{ccc}\frac{\Delta t-1/2}{p^{2i}}+\delta_{2}&p^{-i}&1\\ -p^{-i}&1&\\ 1&&\end{array}\right)\not\in I_{p},\] \[\begin{pmatrix}\delta_{1}&&1\\ &1&\\ 1&&\end{pmatrix}^{-1}\left(\begin{array}{ccc}1&p^{-i}&p^{-2i}(\Delta t-\frac{1}{2})\\ &1&-p^{-i}\\ &&1\end{array}\right)=\left(\begin{array}{ccc}&&1\\ &1&-p^{-i}\\ 1&p^{-i}&\frac{\Delta t-1/2}{p^{2i}}-\delta_{1}\end{array}\right)\not\in I_{p},\] therefore to make (8.24) hold, one must have either \(\mu_{1}=\mu_{2}=\mathbf{1}_{p}\) or \[\mu_{1}=\begin{pmatrix}\delta_{1}&&1\\ &1&\\ 1&&\end{pmatrix},\;\mu_{2}=\begin{pmatrix}\delta_{2}&&1\\ &1&\\ 1&&\end{pmatrix},\] for some \(\delta_{j}\in\mathcal{O}_{E_{p}}/p\mathcal{O}_{E_{p}}\), \(\delta_{j}+\overline{\delta}_{j}=0\), \(j=1,2\). By (8.22) we obtain
\[\frac{\mathcal{O}_{p}(f;n)}{\mu(I_{p}^{\prime})^{2}/\mu(I_{p})}= \iint f_{p}(x_{p}^{-1}\gamma(1)y_{p})\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})dx_{p}dy_{p}\] \[= \sum_{\begin{subarray}{c}i\geq-\nu(n)/2\\ i\leq 0\end{subarray}}p^{4i}\int_{\mathbb{Q}_{p}}\mathbf{1}_{\mathcal{O}_{E_{p}}}(p^{-2i}(\Delta t-1/2))\theta_{n,p}(t)dt\Bigg{|}W_{n,p}\begin{pmatrix}p^{i}&&\\ &1&\\ &&p^{-i}\end{pmatrix}\Bigg{|}^{2}\] \[+\sum_{-\nu(n)/2\leq i\leq-1}p^{4i}\Bigg{|}W_{n,p}\begin{pmatrix}p^{i}&&\\ &1&\\ &&p^{-i}\end{pmatrix}\Bigg{|}^{2}\int_{\mathbb{Q}_{p}}\sum_{\begin{subarray}{c}\delta_{1},\delta_{2}\in\mathcal{O}_{E_{p}}/N\mathcal{O}_{E_{p}}\\ \delta_{1}+\overline{\delta}_{1}=0,\ \delta_{2}+\overline{\delta}_{2}=0\\ \delta_{1}-\delta_{2}\equiv p^{-2i}(\Delta t-\frac{1}{2})\pmod{N}\end{subarray}}\mathbf{1}_{\mathcal{O}_{E_{p}}}(p^{-2i}(\Delta t-1/2))\theta_{n,p}(t)dt\] \[= |W_{n,p}(\mathbf{1}_{p})|^{2}\Bigg{\{}\sum_{-\nu(n)/2\leq i\leq 0}\Bigg{|}\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\Bigg{|}^{2}\] \[+\sum_{-\nu(n)/2\leq i\leq-1}p\Bigg{|}\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\Bigg{|}^{2}\Bigg{\}}.\] Then Lemma 8.9 follows. **Lemma 8.10**.: _Let notation be as before. Let \(p\) be a prime ramified in \(E\). We have_ \[\frac{\mathcal{O}_{p}(f;n)}{|W_{n,p}(\mathbf{1}_{p})|^{2}}=\sum_{-\nu(n)+1\leq i\leq-2\nu(D_{E})}\Bigg{|}\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\Bigg{|}^{2}. \tag{8.27}\] _In particular, \(\mathcal{O}_{p}(f;n)=0\) unless \(\nu(n)\geq 2\nu(D_{E})+1\)._ Proof.: Suppose \(p\) is ramified in \(E\), say \((p)=\mathfrak{p}^{2}\), and let \(\nu_{\mathfrak{p}}\) be the corresponding valuation (\(\nu_{\mathfrak{p}}=2\nu\) on \(\mathbb{Q}_{p}\)). Then \(p\mid D_{E}\). Let \(\varpi\) be a uniformizer of \(\mathfrak{p}\).
Writing \(g_{p}=u_{p}a_{p}k_{p}\) for the Iwasawa decomposition of \(g_{p}\in G^{\prime}(\mathbb{Q}_{p})\), we have again \[W_{n,p}(g_{p})=W_{n,p}(u_{p}a_{p}k_{p})=\psi_{n,p}(u_{p})W_{n,p}(a_{p}). \tag{8.28}\] Let \(a_{p}=\mathrm{diag}(\varpi^{j},1,\varpi^{-j})\), where \(j\in\mathbb{Z}\). From the support properties of the Whittaker function, we have \(W_{n,p}(g_{p})=0\) unless \(2j+\nu(n)\geq 0\). Let \(x_{p}\in N^{\prime}(\mathbb{Q}_{p})\backslash G^{\prime}(\mathbb{Q}_{p})\) and \(y_{p}\in G^{\prime}(\mathbb{Q}_{p})\). We can write them in their Iwasawa coordinates: \[x_{p}=\begin{pmatrix}\varpi^{i}&&\\ &1&\\ &&\overline{\varpi}^{-i}\end{pmatrix}\kappa_{1},\quad y_{p}=\begin{pmatrix}1&&\Delta t\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}\varpi^{j}&&\\ &1&\\ &&\overline{\varpi}^{-j}\end{pmatrix}\kappa_{2},\] where \(i,j\in\mathbb{Z}\), \(t\in\mathbb{Q}_{p}\) and \(\kappa_{1},\kappa_{2}\in K_{p}^{\prime}(N)\). By definition, \(f_{p}=\mathbf{1}_{K_{p}}\). Since \(p\nmid N\), we have \(\kappa_{1},\kappa_{2}\in K_{p}\) and \(f_{p}(x_{p}^{-1}\gamma(1)y_{p})\neq 0\) if and only if \[\begin{pmatrix}\varpi^{-i}&&\\ &1&\\ &&\overline{\varpi}^{i}\end{pmatrix}\begin{pmatrix}1&1&-1/2\\ &1&-1\\ &&1\end{pmatrix}\begin{pmatrix}1&&\Delta t\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}\varpi^{j}&&\\ &1&\\ &&\overline{\varpi}^{-j}\end{pmatrix}\in G(\mathbb{Z}_{p}),\] which could be further reduced to \[\left(\begin{array}{ccc}\varpi^{j-i}&\varpi^{-i}&\varpi^{-j}\overline{\varpi}^{-i}(\Delta t-\frac{1}{2})\\ &1&-\overline{\varpi}^{-j}\\ &&\overline{\varpi}^{-j+i}\end{array}\right)\in G(\mathbb{Z}_{p}).\] Hence \(j=i\leq-2\nu(D_{E})\). By the support of Whittaker functions we have \(\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})=0\) unless \(2i+\nu_{\mathfrak{p}}(n)\geq 0\). Hence \(\mathcal{O}_{p}(f;n)=0\) unless \[-\nu(n)\leq i=j\leq-2\nu(D_{E}).\] Therefore, we have \[\mathcal{O}_{p}(f;n)= \int_{N^{\prime}(\mathbb{Q}_{p})\setminus G^{\prime}(\mathbb{Q}_{
p})}\int_{G^{\prime}(\mathbb{Q}_{p})}f_{p}(x_{p}^{-1}\gamma(1)y_{p})\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})dx_{p}dy_{p}\] \[= \sum_{\begin{subarray}{c}i\geq-\nu(n)\\ i\leq-2\nu(D_{E})\end{subarray}}p^{2i}\int_{E_{\mathfrak{p}}}\mathbf{1}_{\mathcal{O}_{E_{p}}}(p^{-i}(\Delta t-1/2))\theta_{n,p}(t)dt\cdot\left|W_{n,p}\begin{pmatrix}\varpi^{i}&&\\ &1&\\ &&\overline{\varpi}^{-i}\end{pmatrix}\right|^{2}\] \[= |W_{n,p}(\mathbf{1}_{p})|^{2}\cdot\sum_{-\nu(n)+\nu(D_{E})/2\leq i\leq-2\nu(D_{E})}\left|\frac{\chi(p)^{\nu(n)+2i+1}-\overline{\chi}(p)^{\nu(n)+2i+1}}{\chi(p)^{\nu(n)+1}-\overline{\chi}(p)^{\nu(n)+1}}\right|^{2}.\] We then conclude (8.27).

### Computation of \(\mathcal{O}_{p}(f;n)\) when \(\pi_{p}^{\prime}\) is ramified

In this subsection we deal with the case \(p=N^{\prime}\). In particular \(p\) is split and \(\pi_{p}^{\prime}\simeq\mathrm{St}_{p}^{\prime}\) is the Steinberg representation. We continue to denote by \(\nu\) the usual valuation on \(\mathbb{Q}_{p}\). **Lemma 8.11**.: _Suppose that \(p=N^{\prime}\)._ _For \(\nu(n)<0\), we have_ \[\mathcal{O}_{p}(f^{\mathfrak{n}_{p}};n)=0.\] _For \(\nu(n)=0\) we have_ \[\frac{\mathcal{O}_{p}(f^{\mathfrak{n}_{p}};n)}{|W_{n,p}(\mathbf{1}_{p})|^{2}}=\frac{(p-2)\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}. \tag{8.29}\] _For \(\nu(n)\geq 1\) we have_ \[\frac{\mathcal{O}_{p}(f^{\mathfrak{n}_{p}};n)}{|W_{n,p}(\mathbf{1}_{p})|^{2}}\ll\frac{\nu(n)^{2}}{p\mu(K_{p})}, \tag{8.30}\] _where the implied constant is absolute.
We recall that \(I_{p}^{\prime}(1)\) is defined in (5.69) (when we identify it with a subgroup of \(\mathrm{GL}_{2}(\mathbb{Z}_{p})\))._ Proof.: By definition of the test function \(f_{p}\), the orbital integral \[\mathcal{O}_{p}(f;n)= \int_{N^{\prime}(\mathbb{Q}_{p})\setminus G^{\prime}(\mathbb{Q}_{p})}\int_{G^{\prime}(\mathbb{Q}_{p})}f_{p}(\mathfrak{n}_{p}^{-1}x_{p}^{-1}\gamma(1)y_{p}\mathfrak{n}_{p})\overline{W_{n,p}}(x_{p})W_{n,p}(y_{p})dx_{p}dy_{p},\] is zero unless \[\mathfrak{n}_{p}^{-1}\left(\begin{array}{ccc}a&b&\\ c&d&\\ &&1\end{array}\right)^{-1}\left(\begin{array}{ccc}1&-\frac{1}{2}&1\\ &1&\\ &-1&1\end{array}\right)\left(\begin{array}{ccc}a^{\prime}&b^{\prime}&\\ c^{\prime}&d^{\prime}&\\ &&1\end{array}\right)\mathfrak{n}_{p}\in K_{p}. \tag{8.31}\] By the Iwasawa decomposition for \(G(\mathbb{Q}_{p})\) and the Iwahori decomposition for \(K_{p}^{\prime}\), we write \[\left(\begin{array}{ccc}a^{\prime}&b^{\prime}&\\ c^{\prime}&d^{\prime}&\\ &&1\end{array}\right)\in\left(\begin{array}{ccc}p^{j^{\prime}}&&\\ &p^{k^{\prime}}&\\ &&1\end{array}\right)\left(\begin{array}{ccc}1&b^{\prime}&\\ &1&\\ &&1\end{array}\right)\left(\begin{array}{ccc}\delta&&\\ &1&\\ &&1\end{array}\right)I_{p}^{\prime}(1) \tag{8.32}\] or \[\left(\begin{array}{ccc}a^{\prime}&b^{\prime}&\\ c^{\prime}&d^{\prime}&\\ &&1\end{array}\right)\in\left(\begin{array}{ccc}p^{j^{\prime}}&b^{\prime}p^{k^{\prime}}&\\ &p^{k^{\prime}}&\\ &&1\end{array}\right)\left(\begin{array}{ccc}\mu_{2}&1&\\ 1&&\\ &&1\end{array}\right)\left(\begin{array}{ccc}\delta&&\\ &1&\\ &&1\end{array}\right)I_{p}^{\prime}(1). \tag{8.33}\] (1)
Suppose \(x_{p}\) and \(y_{p}\) are both of the form in (8.32), namely, suppose \[x_{p}= \begin{pmatrix}p^{j}&&\\ &p^{k}&\\ &&1\end{pmatrix}\begin{pmatrix}\tau&&\\ &1&\\ &&1\end{pmatrix}\gamma_{1},\] \[y_{p}= \begin{pmatrix}p^{j^{\prime}}&&\\ &p^{k^{\prime}}&\\ &&1\end{pmatrix}\begin{pmatrix}1&b^{\prime}&\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\gamma_{2},\] where \(\gamma_{1},\gamma_{2}\in I^{\prime}_{p}(1)\). Denote by \(\mathcal{O}^{(1)}_{p}(f;n)\) the contribution of \(x_{p},y_{p}\) in the above forms. Then (8.31) is equivalent to \[\mathfrak{n}_{p}^{-1}\left(\begin{array}{ccc}\tau p^{j}&&\\ &p^{k}&\\ &&1\end{array}\right)^{-1}\left(\begin{array}{ccc}1&-1/2&1\\ &1&\\ &-1&1\end{array}\right)\left(\begin{array}{ccc}p^{j^{\prime}}&&\\ &p^{k^{\prime}}&\\ &&1\end{array}\right)\left(\begin{array}{ccc}\delta&b^{\prime}&\\ &1&\\ &&1\end{array}\right)\mathfrak{n}_{p}\in K_{p}.\] A direct calculation shows that \[\left(\begin{array}{ccc}p^{j^{\prime}-j}&p^{j^{\prime}-j}b^{\prime}-2^{-1}p^{k^{\prime}-j}+\tau p^{k-1}&\delta p^{j^{\prime}-j-1}+p^{-j}-\tau p^{-1}\\ &p^{k^{\prime}-k}&\\ &-p^{k^{\prime}}&1\end{array}\right)\in K_{p}.\] Similarly, taking the inverse, we have \[\left(\begin{array}{ccc}p^{-j^{\prime}}&-b^{\prime}p^{-k^{\prime}}&-\delta p^{-1}\\ &p^{-k^{\prime}}&\\ &&1\end{array}\right)\left(\begin{array}{ccc}1&-1/2&-1\\ &1&\\ &1&1\end{array}\right)\left(\begin{array}{ccc}p^{j}&&\tau p^{j-1}\\ &p^{k}&\\ &&1\end{array}\right)\in K_{p}.\] Expanding the matrices we then obtain \[\left(\begin{array}{ccc}p^{j-j^{\prime}}&-2^{-1}p^{k-j^{\prime}}-b^{\prime}p^{k-k^{\prime}}-\delta p^{k-1}&\tau p^{k-j^{\prime}-1}-p^{-j^{\prime}}-\delta p^{-1}\\ 0&p^{k-k^{\prime}}&0\\ 0&p^{k}&1\end{array}\right)\in K_{p}.\] So \(j-j^{\prime}=k-k^{\prime}\) and \(k=k^{\prime}\geq 0\), therefore \(j=j^{\prime}\). Hence the above constraints reduce to the following: \[\left(\begin{array}{ccc}1&b^{\prime}-2^{-1}p^{k-j}&(\delta-\tau)p^{-1}+p^{-j}\\
0&1&0\\ 0&-p^{k}&1\end{array}\right)\in K_{p}.\] \[\left(\begin{array}{ccc}1&-2^{-1}p^{k-j}-b^{\prime}-\delta p^{k-1}&\tau p^{k-j-1}-p^{-j}-\delta p^{-1}\\ 0&1&0\\ 0&p^{k}&1\end{array}\right)\in K_{p}.\] Since \(\tau p^{k-j-1}-p^{-j}-\delta^{-1}p^{-1}\in\mathbb{Z}_{p}\) and \(k\geq 0\), we have \(0\leq j\leq 1\). Also, \(p^{k-j}+\delta p^{k-1}\in\mathbb{Z}_{p}\). So \(j\neq 0\), forcing \(j=1\). From \[\tau p^{k-j-1}-p^{-j}-\delta^{-1}p^{-1}\in\mathbb{Z}_{p}\] we then see that \(k\geq 1\). Also, by the support properties of Whittaker functions, we have \[\nu(n)+1\geq k\geq 1.\] If \(\nu(n)=0\) we thus have \(k=1\) and \[\mathcal{O}_{p}^{(1)}(f;n)= \frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{\begin{subarray}{c}\delta,\tau\\ (\delta-\tau)p^{-1}+p^{-1}\in\mathbb{Z}_{p}\end{subarray}}\left|W_{n,p}\left(\mathbf{1}_{p}\right)\right|^{2}\] \[= \frac{(p-2)\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}|W_{n,p}(\mathbf{1}_{p})|^{2}.\] Assume now that \(\nu(n)\geq 1\). If \(k\geq 2\), then there is only one choice for the pair \((\delta,\tau)\) and \[\mathcal{O}_{p}^{(1)}(f;n)= \frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{1\leq k\leq\nu(n)}p^{1-k}\Big{|}W_{n,p}\left(\begin{matrix}p&&\\ &1&\\ &&p^{k}\end{matrix}\right)\Big{|}^{2}\ll\frac{\mu(I_{p}^{\prime}(1))^{2}\nu(n)}{\mu(K_{p})}|W_{n,p}(\mathbf{1}_{p})|^{2}.\] (2)
Suppose \(x_{p}\) and \(y_{p}\) are both of the form in (8.33), namely, suppose \[x_{p}= \begin{pmatrix}p^{j}&&\\ &p^{k}&\\ &&1\end{pmatrix}\begin{pmatrix}\mu_{1}&1&\\ 1&&\\ &&1\end{pmatrix}\begin{pmatrix}\tau^{-1}&&\\ &1&\\ &&1\end{pmatrix}\gamma_{1},\] \[y_{p}= \begin{pmatrix}p^{j^{\prime}}&&\\ &p^{k^{\prime}}&\\ &&1\end{pmatrix}\begin{pmatrix}1&b^{\prime}&\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}\mu_{2}&1&\\ 1&&\\ &&1\end{pmatrix}\begin{pmatrix}\delta^{-1}&&\\ &1&\\ &&1\end{pmatrix}\gamma_{2},\] where \(\gamma_{1},\gamma_{2}\in I_{p}^{\prime}(1)\). Then (8.31) is equivalent to \[\begin{pmatrix}p^{-j}&&\\ &p^{-k}&-\tau p^{-1}\\ &&1\end{pmatrix}\begin{pmatrix}1&-1/2&1\\ &1&\\ &-1&1\end{pmatrix}\begin{pmatrix}p^{j^{\prime}}&bp^{j^{\prime}}&\delta^{-1}bp^{j^{\prime}-1}\\ &p^{k^{\prime}}&\delta^{-1}p^{k^{\prime}-1}\\ &&1\end{pmatrix}\in K_{p},\] where \(b=b^{\prime}+\mu_{2}-\mu_{1}p^{j-k+k^{\prime}-j^{\prime}}\). Expanding the left hand side we obtain \[\begin{pmatrix}p^{j^{\prime}-j}&bp^{j^{\prime}-j}-2^{-1}p^{k^{\prime}-j}&z\\ 0&p^{k^{\prime}-k}+\tau p^{k^{\prime}-1}&\delta^{-1}p^{k^{\prime}-1-k}-\tau p^{-1}+\tau\delta^{-1}p^{k^{\prime}-2}\\ 0&-p^{k^{\prime}}&1-\delta^{-1}p^{k^{\prime}-1}\end{pmatrix}\in K_{p},\] where \(z=\delta^{-1}bp^{j^{\prime}-j-1}-2^{-1}\delta^{-1}p^{k^{\prime}-1-j}+p^{-j}\). Taking the inverse of the above matrix we then obtain \[\begin{pmatrix}p^{j-j^{\prime}}&-2^{-1}p^{k-j^{\prime}}-bp^{k-k^{\prime}}&z^{\prime}\\ 0&p^{k-k^{\prime}}-\delta p^{k-1}&\tau^{-1}p^{k-k^{\prime}-1}-\delta p^{-1}-\delta\tau^{-1}p^{k-2}\\ 0&p^{k}&1+\tau^{-1}p^{k-1}\end{pmatrix}\in K_{p},\] where \[z^{\prime}=-2^{-1}\tau^{-1}p^{k-j^{\prime}-1}-p^{-j^{\prime}}-b\tau^{-1}p^{k-k^{\prime}-1}.\] In particular we have \[k\geq 1,\ k^{\prime}\geq 1,\ j-j^{\prime}=k-k^{\prime}.\] If \(k>k^{\prime}\) then \(k\geq 2\), which contradicts the condition \[\tau^{-1}p^{k-k^{\prime}-1}-\delta p^{-1}-\delta\tau^{-1}p^{k-2}\in\mathbb{Z}_{p}.\] Hence \(k\leq k^{\prime}.\)
Likewise, \(k^{\prime}\leq k.\) So we must have \(k=k^{\prime}\) and therefore \(j=j^{\prime}\). Eventually, the above constraints reduce to \[\begin{pmatrix}1&b-2^{-1}p^{k-j}+\mu_{1}p^{k-1}&z_{1}\\ 0&1+\tau p^{k-1}&\delta^{-1}p^{-1}-\tau p^{-1}+\tau\delta^{-1}p^{k-2}\\ 0&-p^{k}&1-\delta^{-1}p^{k-1}\end{pmatrix}\in K_{p},\] where \(z_{1}=\delta^{-1}bp^{-1}-2^{-1}\delta^{-1}p^{-1-j}+p^{-j}.\) From \[bp^{j^{\prime}-j}-2^{-1}p^{k^{\prime}-j}\in\mathbb{Z}_{p},\ \ -2^{-1}p^{k-j^{\prime}}-bp^{k-k^{\prime}}\in \mathbb{Z}_{p}\] one has \(k\geq j\) and \(b\in\mathbb{Z}_{p}.\) On the other hand, from the support of Whittaker functions we have necessarily that \(\nu(n)+j\geq k\geq 1.\) We have the following cases. (a) Suppose \(\nu(n)=0.\) Then \(k=j\geq 1.\) Since \(b\in\mathbb{Z}_{p},\) then from \[z^{\prime}=-2^{-1}\tau^{-1}p^{k-j^{\prime}-1}-p^{-j^{\prime}}-b\tau^{-1}p^{k-k^{ \prime}-1}\in\mathbb{Z}_{p}\] one concludes that \(-j\geq-1,\) i.e., \(j\leq 1.\) Hence \(j=k=1.\) Then it follows from (8.34) \[\delta^{-1}p^{k^{\prime}-1-k}-\tau p^{-1}+\tau\delta^{-1}p^{k^{ \prime}-2},\ \tau^{-1}p^{k-k^{\prime}-1}-\delta p^{-1}-\delta\tau^{-1}p^{k-2}\in\mathbb{Z}_{p}\] that \(\tau\equiv-\delta\pmod{p}\) and \(1+\tau+\tau^{2}\equiv 0\pmod{p}.\) From \(\tau z^{\prime}+\delta z\in\mathbb{Z}_{p}\) we have \(1+2\tau\equiv 0\pmod{p},\) which in conjunction with \(1+\tau+\tau^{2}\equiv 0\pmod{p}\) implies \(p\mid 3.\) However, since we have assumed that \(p\geq 5,\) we encounter a contradiction if \(\nu(n)=0.\) (b) Therefore \(\nu(n)\geq 1.\) Note that from (8.34) \(\delta\) and \(\tau\) should satisfy either \(\delta\tau\equiv 1\pmod{p}\) or \(\tau^{2}+\tau+1\equiv 0\pmod{p}\) and \(\tau+\delta\equiv 0\pmod{p}.\) Hence, in conjunction with \[W_{n,p}(x_{p})=\theta_{n,p}(p^{j-k}\mu_{1})W_{n,p}\begin{pmatrix}p^{j-k}&&\\ &1&\\ &&1\end{pmatrix}\] \[W_{n,p}(y_{p})=\theta_{n,p}(p^{j-k}\mu_{2})\theta_{n,p}(p^{j-k}b^{ \prime})W_{n,p}\begin{pmatrix}p^{j-k}&&\\ &1&\\ &&1\end{pmatrix},\] we conclude 
that \[\mathcal{O}_{p}^{(2)}(f;n)\leq p^{3}\mu(I_{p}^{\prime}(1))^{2}\sum_{k=1}^{\nu(n)+1}\sum_{j=k- \nu(n)}^{1}p^{j-k}\bigg{|}W_{n,p}\begin{pmatrix}p^{j-k}&&\\ &1&\\ &&1\end{pmatrix}\bigg{|}^{2}\ll\frac{\nu(n)^{2}|W_{n,p}(\mathbf{1}_{p})|^{2}}{ p\mu(K_{p})}.\] 3. Suppose \(x_{p}\) is of the form in (8.32) and \(y_{p}\) is of the form in (8.33), namely, suppose \[x_{p}= \begin{pmatrix}p^{j}&&\\ &p^{k}&\\ &&1\end{pmatrix}\begin{pmatrix}\tau&&\\ &1&\\ &&1\end{pmatrix}\gamma_{1},\] \[y_{p}= \begin{pmatrix}p^{j^{\prime}}&&\\ &p^{k^{\prime}}&\\ &&1\end{pmatrix}\begin{pmatrix}1&b^{\prime}&\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}\mu_{2}&1&\\ 1&&\\ &&1\end{pmatrix}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\gamma_{2},\] where \(\gamma_{1},\gamma_{2}\in I_{p}^{\prime}(1).\) Denote by \(\mathcal{O}_{p}^{(3)}(f;n)\) the contribution of \(x_{p},y_{p}\) in the above forms. Then (8.31) is equivalent to \[\begin{pmatrix}\tau^{-1}p^{-j}&&-p^{-1}\\ &p^{-k}&\\ &&1\end{pmatrix}\begin{pmatrix}1&-1/2&1\\ &1&\\ &-1&1\end{pmatrix}\begin{pmatrix}p^{j^{\prime}}&bp^{j^{\prime}}&\delta^{-1}bp^{j^ {\prime}-1}\\ &p^{k^{\prime}}&\delta^{-1}p^{k^{\prime}-1}\\ &&1\end{pmatrix}\in K_{p},\] where \(b=b^{\prime}+\mu_{2}.\) Expanding the left hand side, the above constraint becomes \[\begin{pmatrix}p^{j^{\prime}-j}&bp^{j^{\prime}-j}-2^{-1}p^{k^{\prime}-j}+\tau p ^{k^{\prime}-1}&z_{2}\\ 0&p^{k^{\prime}-k}&\delta^{-1}p^{k^{\prime}-k-1}\\ 0&-p^{k^{\prime}}&1-\delta^{-1}p^{k^{\prime}-1}\end{pmatrix}\in K_{p},\] where \(z_{2}:=\delta^{-1}bp^{j^{\prime}-j-1}-2^{-1}\delta^{-1}p^{k^{\prime}-j-1}+p^{-j}- \tau p^{-1}+\tau\delta^{-1}p^{k^{\prime}-2}\). 
Considering the inverse as before, we obtain \[\begin{pmatrix}p^{-j^{\prime}}&-bp^{-k^{\prime}}&\\ &p^{-k^{\prime}}&-\delta p^{-1}\\ &&1\end{pmatrix}\begin{pmatrix}p^{j}&-2^{-1}p^{k}&\tau^{-1}p^{j-1}-1\\ &p^{k}&\\ &p^{k}&1\end{pmatrix}\in K_{p},\] which amounts to the following condition: \[\begin{pmatrix}p^{j-j^{\prime}}&-2^{-1}p^{k-j^{\prime}}-bp^{k-k^{\prime}}& \tau^{-1}p^{j-j^{\prime}-1}-p^{-j^{\prime}}\\ 0&p^{k-k^{\prime}}-\delta p^{k-1}&-\delta p^{-1}\\ 0&p^{k}&1\end{pmatrix}\in K_{p}.\] Then we get a contradiction. So in this case, one has \(\mathcal{O}_{p}^{(3)}(f;n)=0\). 4. Suppose \(x_{p}\) is of the form in (8.33) and \(y_{p}\) is of the form in (8.32), namely, suppose \[x_{p}= \begin{pmatrix}p^{j}&&\\ &p^{k}&\\ &&1\end{pmatrix}\begin{pmatrix}\mu_{1}&1&\\ 1&&\\ &&1\end{pmatrix}\begin{pmatrix}\tau&&\\ &1&\\ &&1\end{pmatrix}\gamma_{1},\] \[y_{p}= \begin{pmatrix}p^{j^{\prime}}&&\\ &p^{k^{\prime}}&\\ &&1\end{pmatrix}\begin{pmatrix}1&b^{\prime}&\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\gamma_{2},\] where \(\gamma_{1},\gamma_{2}\in I_{p}^{\prime}(1).\) Denote by \(\mathcal{O}_{p}^{(4)}(f;n)\) the contribution of \(x_{p},y_{p}\) in the above forms. 
Then (8.31) is equivalent to \[\begin{pmatrix}p^{-j}&&-\mu_{1}p^{-1}\\ &p^{-k}&-\tau p^{-1}\\ &&1\end{pmatrix}\begin{pmatrix}p^{j^{\prime}}&-2^{-1}p^{k^{\prime}}&1\\ &p^{k^{\prime}}&\\ &-p^{k^{\prime}}&1\end{pmatrix}\begin{pmatrix}1&b^{\prime}&\delta^{-1}p^{-1 }\\ &1&\\ &&1\end{pmatrix}\in K_{p}.\] Expanding the left hand side this becomes \[\begin{pmatrix}p^{j^{\prime}-j}&p^{j^{\prime}-j}b^{\prime}-2^{-1}p^{k^{\prime }-j}+\mu_{1}p^{k^{\prime}-1}&\delta^{-1}p^{j^{\prime}-j-1}+p^{-j}-\mu_{1}p^{-1 }\\ 0&p^{k^{\prime}-k}+\tau p^{k^{\prime}-1}&-\tau p^{-1}\\ 0&-p^{k^{\prime}}&1\end{pmatrix}\in K_{p}.\] Taking the inverse we then obtain \[\begin{pmatrix}p^{-j^{\prime}}&-bp^{-k^{\prime}}&-\delta p^{-1}\\ &p^{-k^{\prime}}&\\ &&1\end{pmatrix}\begin{pmatrix}p^{j}&-2^{-1}p^{k}&-1\\ &p^{k}&\\ &p^{k}&1\end{pmatrix}\begin{pmatrix}1&\mu_{1}&\mu_{1}\tau^{-1}p^{-1}\\ &1&\tau^{-1}p^{-1}\\ &&1\end{pmatrix}\in K_{p}.\] By a calculation, this is equivalent to \[\begin{pmatrix}p^{j-j^{\prime}}&-2^{-1}p^{k-j^{\prime}}-bp^{k-k^{\prime}}- \delta p^{k-1}&z_{2}^{\prime}\\ 0&p^{k-k^{\prime}}&\tau^{-1}p^{k-k^{\prime}-1}\\ 0&p^{k}&1+\tau^{-1}p^{k-1}\end{pmatrix}\in K_{p},\] where \[z_{2}^{\prime}=\mu_{1}\tau^{-1}p^{j-j^{\prime}-1}-2^{-1}\tau^{-1}p^{k-j^{ \prime}-1}-p^{-j^{\prime}}-\tau^{-1}bp^{k-k^{\prime}-1}-\delta p^{-1}-\tau^{-1 }\delta p^{k-2}.\] So \(j=j^{\prime}\) and \(k=k^{\prime}\geq 1.\) Thus, \[\left(\begin{array}{ccc}1&b^{\prime}-2^{-1}p^{k-j}+\mu_{1}p^{k-1}&\delta^{-1}p ^{-1}+p^{-j}-\mu_{1}p^{-1}\\ 0&1+\tau p^{k-1}&-\tau p^{-1}\\ 0&-p^{k}&1\end{array}\right)\in K_{p}.\] Then we get a contradiction and conclude that \(\mathcal{O}_{p}^{(4)}(f;n)=0.\) In conclusion, (8.29) and (8.30) hold, giving Lemma 8.11. 
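The elementary congruence step in Case 2(a) above — that \(1+2\tau\equiv 0\pmod p\) together with \(1+\tau+\tau^{2}\equiv 0\pmod p\) forces \(p\mid 3\) — rests on the identity \(4(1+\tau+\tau^{2})=(2\tau+1)^{2}+3\). The following brute-force script is only a numerical sanity check of that step, not part of the proof:

```python
def solvable(p):
    """Is there tau mod p with 1 + 2*tau == 0 and 1 + tau + tau^2 == 0 (mod p)?"""
    return any((1 + 2 * t) % p == 0 and (1 + t + t * t) % p == 0 for t in range(p))

# Since 4*(1 + t + t^2) = (2*t + 1)^2 + 3, both congruences force p | 3,
# so among small primes only p = 3 admits a simultaneous solution.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert [p for p in primes if solvable(p)] == [3]
```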
### An upper bound for the global orbital integral \(\mathcal{O}_{\gamma(1)}(f;\varphi^{\prime})\) In this section, we combine results from previous sections to deduce an upper bound for \(\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime}).\) Let \(\nu(f)\) be the set of (inert) primes \(p\) (coprime with \(NN^{\prime}D\)) such that \[f_{p}=1_{G(\mathbb{Z}_{p})A_{r_{p}}G(\mathbb{Z}_{p})}\] for some \(r_{p}\geq 1.\) **Proposition 8.12**.: _Let notation be as before. We have_ \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})\leq\frac{(\ell N^{\prime})^{o(1)} }{2^{4k}k^{2}}\frac{N}{{N^{\prime}}^{3}}\prod_{p\in\nu(f)}p^{r_{p}}. \tag{8.35}\] _where the implicit constants depend on \(E\) and \(\pi^{\prime}\) (via \(L(\pi^{\prime},\operatorname{Ad},1)\))._ Proof.: It follows from (8.7), (8.15), and Lemmas 8.6, 8.7, 8.8, 8.9, 8.10, 8.11 that \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})=\eta(\Delta)\frac{C_{k}C_{N,N^{ \prime}}}{L(\pi^{\prime},\operatorname{Ad},1)}\sum_{n\geq 1}|\lambda_{\pi^{ \prime}}(n)|^{2}e^{-\frac{\pi n}{|D_{E}|^{1/2}}}\prod_{p<\infty}I_{p}(n,\varphi ^{\prime}), \tag{8.36}\] where \(\eta(\Delta)\) is a complex number of modulus \(1,\) \[C_{k}=\frac{2^{4}\pi^{5}}{3\cdot 2^{4(k-1)}}\frac{1}{(k-1)^{2}},\] \[C_{N,N^{\prime}} =\prod_{p|N}\frac{\mu(I_{p}^{\prime})^{2}}{\mu(K_{p})}\prod_{p\mid N ^{\prime}}(1+\frac{1}{p})(p-2)\mu(I_{p}^{\prime}(1))^{2}\] \[=\prod_{p|N}\frac{p^{2}-p+1}{p+1}\prod_{p|N^{\prime}}\frac{(p-2)( p+1)}{p(p^{2}-1)^{2}}\ll\frac{N}{{N^{\prime}}^{3}}. \tag{8.37}\] and where \(I_{p}(n,\varphi^{\prime})=1\) if \((n,p\ell)=1\) and in general satisfies \[\begin{cases}I_{p}(n,\varphi^{\prime})\leq(n,p)^{1+o(1)},&\text{if }p\notin\nu(f)\\ I_{p}(n,\varphi^{\prime})\ll(r_{p}+1)^{4}(n,p)^{1+o(1)}p^{r_{p}},&\text{if }p \in\nu(f).\end{cases} \tag{8.38}\] Hence, (8.36), Deligne's bound (4.41) and (4.44) yield \[\mathcal{O}_{\gamma(1)}(f,\varphi^{\prime})\ll\frac{(\ell N^{\prime})^{o(1)}} {2^{4k}k^{2}}\frac{N}{{N^{\prime}}^{3}}\cdot\prod_{p\in\nu(f)}p^{r_{p}}. 
\tag{8.39}\] ### Computation of the remaining unipotent orbital integrals In this subsection, we prove the claim (6.16) and reduce the computation of \(\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})\) for \(x\in E^{1}\) to \(\mathcal{O}_{\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime})\). Our result is the following: **Proposition 8.13**.: _Let notation be as before. Let \(x\in E^{1}.\) Then_ \[\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})=\begin{cases} \mathcal{O}_{\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime}),\text{ if }x\in\mathcal{O}_{E}^{1};\\ 0,\text{ otherwise.}\end{cases}\] Proof.: Let \(x\neq 1.\) By Lemma 6.3 we have \[\gamma(x)=g_{1}.x\gamma(1).g_{2},\ g_{1},g_{2}\in G^{\prime}(\mathbb{Q})\] and by \(G^{\prime}(\mathbb{Q})\) invariance of \(\varphi^{\prime},\) we have \[\begin{split}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{ \prime})&=\int_{H_{\gamma(x)}(\mathbb{Q})\backslash G^{\prime}( \mathbb{A})^{2}}f^{\mathfrak{n}}(u^{-1}g_{1}.x\gamma(1).g_{2}v)\overline{ \varphi}^{\prime}(u)\varphi^{\prime}(v)dudv\\ &=\mathcal{O}_{x\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime}). \end{split} \tag{8.40}\] Similar to the proof of Lemma 8.1, the orbital integral \(\mathcal{O}_{x\gamma(1)}(f^{\mathfrak{n}},\varphi^{\prime})\) is equal to \[\int_{N^{\prime}(\mathbb{A})\backslash G^{\prime}(\mathbb{A})}\int_{G^{\prime }(\mathbb{A})}f^{\mathfrak{n}}\left(y_{1}^{-1}x\gamma(1)y_{2}\right)\int_{[N^ {\prime}]}\overline{\varphi}^{\prime}(vy_{1})\varphi^{\prime}(vy_{2})dvdy_{1 }dy_{2}, \tag{8.41}\] which is absolutely convergent by Proposition 8.12. 
Note that \(f^{\mathfrak{n}}\left(y_{1}^{-1}x\gamma(1)y_{2}\right)=0\) unless \[\widetilde{\mathfrak{n}}^{-1}y_{1}^{-1}x\gamma(1)y_{2}\widetilde{\mathfrak{n }}\in\prod_{\begin{subarray}{c}p<\infty\\ p\notin\nu(f)\end{subarray}}K_{p}(N)\prod_{p\in\nu(f)}G(\mathbb{Z}_{p})A_{r_{p}}G(\mathbb{Z}_{p}).\] Also, \[\left(\begin{array}{ccc}a&&b\\ &1&\\ c&&d\end{array}\right)\left(\begin{array}{ccc}1&1&-1/2\\ &1&-1\\ &&1\end{array}\right)\left(\begin{array}{ccc}a^{\prime}&&b^{\prime}\\ &1&\\ c^{\prime}&&d^{\prime}\end{array}\right)=\left(\begin{array}{ccc}*&*&*\\ *&1&*\\ *&*&*\end{array}\right).\] Hence \(f^{\mathfrak{n}}\left(y_{1}^{-1}x\gamma(1)y_{2}\right)=0\) unless \(\nu_{v}(x)\geq 0\) (i.e., \(x\in\mathcal{O}_{E_{v}}\)) for all places \(v\) in \(E\) such that \(v\) is not above \(N^{\prime}\) and \(v\notin\nu(f).\) Let \(p\in\nu(f).\) Then \(f_{p}^{\mathfrak{n}_{p}}\left(y_{1}^{-1}x\gamma(1)y_{2}\right)=0\) unless \[\widetilde{\mathfrak{n}}_{p}^{-1}y_{1,p}^{-1}\gamma(1)y_{2,p}\widetilde{ \mathfrak{n}}_{p}\in x^{-1}G(\mathbb{Z}_{p})A_{r_{p}}G(\mathbb{Z}_{p}). \tag{8.42}\] Taking the determinant on both sides of (8.42) we then derive that \(|x|_{p}=1,\) implying that \(x\in\mathcal{O}_{E_{p}}^{\times}.\) Let \(p\mid N^{\prime}.\) Then \(f_{p}^{\mathfrak{n}_{p}}\left(y_{1}^{-1}x\gamma(1)y_{2}\right)=0\) unless \[\widetilde{\mathfrak{n}}_{p}^{-1}y_{1,p}^{-1}\gamma(1)y_{2,p}\widetilde{ \mathfrak{n}}_{p}\in x^{-1}K_{p}. \tag{8.43}\] We then apply the manipulation as in the proof of Lemma 8.11 to show that, if (8.43) holds for some \(x\in E^{1},\) then necessarily \(x\in\mathcal{O}_{E_{p}}^{\times}.\) Since \(p|N^{\prime}\) we may identify \(G(\mathbb{Q}_{p})\) (resp. \(G^{\prime}(\mathbb{Q}_{p})\)) with \(\operatorname{GL}(3,\mathbb{Q}_{p})\) (resp. \(\operatorname{GL}(2,\mathbb{Q}_{p})\)); suppose to the contrary that \(\nu(x)\neq 0.\) In the notations of the proof of Lemma 8.11 we have four cases to consider. 
In Case 1, (8.43) implies that \[\left(\begin{array}{ccc}p^{j-j^{\prime}}&-2^{-1}p^{k-j^{\prime}}-b^{\prime}p^ {k-k^{\prime}}-\delta p^{k-1}&\tau p^{k-j^{\prime}-1}-p^{-j^{\prime}}-\delta p ^{-1}\\ 0&p^{k-k^{\prime}}&0\\ 0&p^{k}&1\end{array}\right)\in x^{-1}K_{p},\] which contradicts the assumption that \(\nu(x)<0.\) Similarly, taking the inverse, we see that \(\nu(x)>0\) cannot happen as well. In Case 3, (8.43) implies that \[x\begin{pmatrix}p^{j^{\prime}-j}&bp^{j^{\prime}-j}-2^{-1}p^{k^{\prime}-j}+\tau p^{ k^{\prime}-1}&z_{2}\\ &p^{k^{\prime}-k}&\delta^{-1}p^{k^{\prime}-k-1}\\ &-p^{k^{\prime}}&1-\delta^{-1}p^{k^{\prime}-1}\end{pmatrix}\in K_{p}, \tag{8.44}\] where \[z_{2}:=\delta^{-1}bp^{j^{\prime}-j-1}-2^{-1}\delta^{-1}p^{k^{\prime}-j-1}+p^{- j}-\tau p^{-1}+\tau\delta^{-1}p^{k^{\prime}-2}.\] Considering the inverse as before, we then obtain \[x^{-1}\begin{pmatrix}p^{j-j^{\prime}}&-2^{-1}p^{k-j^{\prime}}-bp^{k-k^{\prime} }&\tau^{-1}p^{j-j^{\prime}-1}-p^{-j^{\prime}}\\ 0&p^{k-k^{\prime}}-\delta p^{k-1}&-\delta p^{-1}\\ 0&p^{k}&1\end{pmatrix}\in K_{p}. \tag{8.45}\] By (8.45) one has \(\nu(x)\leq-1\) and \(j-j^{\prime}=\nu(x).\) Computing the determinant of (8.44) we then have \[j-j^{\prime}+k-k^{\prime}=3\nu(x),\ i.e.,\ k-k^{\prime}=2\nu(x).\] Therefore \[j=1,j^{\prime}=2,k=-1,k^{\prime}=1,\ \text{and}\ \nu(x)=-1.\] Applying inversion in (8.45) we then get \(\nu(x)-1\geq 0\) by considering its \((2,3)\) entry, and therefore \(\nu(x)\geq 1,\) a contradiction! In conclusion, Case 3 does not contribute to the orbital integral, just as in Lemma 8.11. Similarly the contribution of Case 4 is also zero. 
Finally we consider Case 2, where (8.43) implies that \[\begin{pmatrix}p^{j^{\prime}-j}&bp^{j^{\prime}-j}-2^{-1}p^{k^{\prime}-j}+\mu_{ 1}p^{k^{\prime}-1}&z\\ &p^{k^{\prime}-k}+\tau p^{k^{\prime}-1}&\delta^{-1}p^{k^{\prime}-1-k}-\tau p^{ -1}+\tau\delta^{-1}p^{k^{\prime}-2}\\ &-p^{k^{\prime}}&1-\delta^{-1}p^{k^{\prime}-1}\end{pmatrix}\in x^{-1}K_{p},\] where \[z=\delta^{-1}bp^{j^{\prime}-j-1}-2^{-1}\delta^{-1}p^{-1-j}+p^{-j}-\mu_{1}p^{-1 }+\mu_{1}\delta^{-1}p^{k^{\prime}-2}.\] Suppose \(\nu(x)\leq-1.\) Then \(k^{\prime}=1=k\) and \(j^{\prime}-j=-\nu(x)\) by analyzing the diagonals. However, taking the determinant we then obtain \[j^{\prime}-j+k^{\prime}-k=-3\nu(x),\] a contradiction! Suppose \(\nu(x)\geq 1.\) Taking the inverse of the above matrix we then obtain \[\begin{pmatrix}p^{j-j^{\prime}}&\mu_{1}p^{j-j^{\prime}}-2^{-1}p^{k-j^{\prime}} -bp^{k-k^{\prime}}&z^{\prime}\\ &p^{k-k^{\prime}}-\delta p^{k-1}&\tau^{-1}p^{k-k^{\prime}-1}-\delta p^{-1}- \delta\tau^{-1}p^{k-2}\\ &p^{k}&1+\tau^{-1}p^{k-1}\end{pmatrix}\in xK_{p},\] where \[z^{\prime}=\mu_{1}\tau^{-1}p^{j-j^{\prime}-1}-2^{-1}\tau^{-1}p^{k-j^{\prime}- 1}-p^{-j^{\prime}}-b\tau^{-1}p^{k-k^{\prime}-1}.\] So \(k=k^{\prime}=1\) and \(j-j^{\prime}=1.\) Again, we encounter a contradiction by taking the determinant. Hence, \(\nu(x)=0,\) i.e., \(x\in\mathcal{O}_{E_{p}}^{\times}.\) In summary, we have shown that \(\nu(x)\geq 0\) at all finite places. So \(x\in\mathcal{O}_{E}.\) Since \(x\overline{x}=1\) we have \(x\in\mathcal{O}_{E}^{1}.\) Since the test function \(f\) is \(Z_{G}(\mathcal{O}_{E}^{1})\)-invariant, Proposition 8.13 follows. As a consequence of Propositions 8.12 and 8.13 we have **Corollary 8.14**.: _The sum of the unipotent orbital integrals satisfies the bound_ \[\sum_{x\in E^{1}}\mathcal{O}_{\gamma(x)}(f,\varphi^{\prime})\ll\frac{(\ell N^{ \prime})^{o(1)}}{2^{4k}k^{2}}\frac{N}{{N^{\prime}}^{3}}\prod_{p\in\nu(f)}p^{r_{ p}}\] _where the implicit constant depends on \(E\) and \(\pi^{\prime}\)._ ## 9. 
**The Regular Orbital Integrals** In this section and the next we handle the contribution to (6.17) of the regular orbital integrals, i.e., the \(\mathcal{O}_{\gamma(x)}(f,\varphi^{\prime})\) when \(x\in E^{\times}-E^{1}\). In the case of \(\operatorname{GL}(2)\times\operatorname{GL}(1)\) this was handled in [10, §2.6]; here the geometric structure of \(U(2,1)\times U(1,1)\) is much more complicated. In this section, we provide bounds for some local integrals which we combine in Section 10 to prove Theorem 10.1. ### The Stabilizer \(H_{\gamma(x)}\) We start by computing the stabilizer \(H_{\gamma(x)}\) associated to \(\gamma(x)\) when \(x\in E^{\times}-E^{1}\). Recall that for any \(\mathbb{Q}\)-algebra \(R\) \[H_{\gamma(x)}(R):=\big{\{}(g_{1},g_{2})\in G^{\prime}(R):\ g_{1}^{-1}\gamma(x )g_{2}=\gamma(x)\big{\}}.\] To save space we will represent matrices in \(G^{\prime}\) either as \(2\times 2\) or as \(3\times 3\) matrices (in the standard bases \(\{e_{-1},e_{1}\}\) or \(\{e_{-1},e_{0},e_{1}\}\)), that is, either \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) or \(\begin{pmatrix}a&&b\\ &1&\\ c&&d\end{pmatrix}.\) Of course whenever matrices from \(G\) are involved we will use the \(3\times 3\) notation. 
**Lemma 9.1**.: _Let notation be as above, for any \(\mathbb{Q}\)-algebra \(R\), \(H_{\gamma(x)}(R)\) is equal to_ \[\left\{\begin{pmatrix}1+\frac{y(x-1)(\overline{x}+1)}{2}&\frac{y(1+x)(1+ \overline{x})}{4}\\ y(1-x)(1-\overline{x})&1+\frac{y(x+1)(\overline{x}-1)}{2}\end{pmatrix}\times \begin{pmatrix}1+\frac{y(x-1)(\overline{x}+1)}{2}&y\\ \frac{y(1+x)(1+\overline{x})(1-x)(1-\overline{x})}{4}&1+\frac{y(x+1)( \overline{x}-1)}{2}\end{pmatrix}\right.\] \[\left.\in G^{\prime}(R)\times G^{\prime}(R):y\in E^{\times}(R)\text{ and }y+\overline{y}+y\overline{y}(x\overline{x}-1)=0\right\}.\] Proof.: Let \[g=\begin{pmatrix}a&b\\ c&d\end{pmatrix},g^{\prime}=\begin{pmatrix}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{pmatrix}\in G^{\prime}(R).\] Assume \[g\left(\begin{array}{ccc}\frac{x\overline{x}+3\overline{x}-x+1}{4}&\frac{1+ x}{2}&-\frac{1}{2}\\ \frac{(x+1)(\overline{x}-1)}{2}&x&-1\\ -\frac{(1-x)(1-\overline{x})}{2}&1-x&1\end{array}\right)=\left(\begin{array}[] {ccc}\frac{x\overline{x}+3\overline{x}-x+1}{4}&\frac{1+x}{2}&-\frac{1}{2}\\ \frac{(x+1)(\overline{x}-1)}{2}&x&-1\\ -\frac{(1-x)(1-\overline{x})}{2}&1-x&1\end{array}\right)g^{\prime}. 
\tag{9.1}\] Expanding both sides of equation (9.1) we then obtain an equality between the matrix \[\left(\begin{array}{ccc}a^{\prime}(\overline{x}+\frac{(1-x)(1-\overline{x} )}{4})-\frac{c^{\prime}}{2}&\frac{1+x}{2}&b^{\prime}(\overline{x}+\frac{(1-x)( 1-\overline{x})}{4})-\frac{d^{\prime}}{2}\\ \frac{a^{\prime}(x+1)(\overline{x}-1)}{2}-c^{\prime}&x&\frac{b^{\prime}(x+1)( \overline{x}-1)}{2}-d^{\prime}\\ -\frac{a^{\prime}(1-x)(1-\overline{x})}{2}+c^{\prime}&1-x&-\frac{b^{\prime}(1- x)(1-\overline{x})}{2}+d^{\prime}\end{array}\right)\] and the matrix \[\left(\begin{array}{ccc}a(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})- \frac{b(1-x)(1-\overline{x})}{2}&\frac{a(1+x)}{2}+b(1-x)&-\frac{a}{2}+b\\ \frac{(x+1)(\overline{x}-1)}{2}&x&-1\\ c(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-\frac{d(1-x)(1-\overline{x})}{2}& \frac{c(1+x)}{2}+d(1-x)&-\frac{c}{2}+d\end{array}\right),\] which is equivalent to the system of equations \[\begin{cases}-\frac{a}{2}+b=b^{\prime}(\overline{x}+\frac{(1-x)(1-\overline{x})}{4 })-\frac{d^{\prime}}{2}\\ \frac{a(1+x)}{2}+b(1-x)=\frac{1+x}{2}\\ a(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-\frac{b(1-x)(1-\overline{x})}{ 2}=a^{\prime}(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-\frac{c^{\prime}} {2}\\ \frac{c(1+x)}{2}+d(1-x)=1-x\\ -\frac{c}{2}+d=-\frac{b^{\prime}(1-x)(1-\overline{x})}{2}+d^{\prime}\\ c(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-\frac{d(1-x)(1-\overline{x})}{ 2}=-\frac{a^{\prime}(1-x)(1-\overline{x})}{2}+c^{\prime}\\ -1=\frac{b^{\prime}(x+1)(\overline{x}-1)}{2}-d^{\prime}\\ \frac{(x+1)(\overline{x}-1)}{2}=\frac{a^{\prime}(x+1)(\overline{x}-1)}{2}-c^{ \prime}\end{cases} \tag{9.2}\] Now we need to solve (9.2) to find the group \(H_{\gamma(x)}(R).\) Note that \[(-\frac{1}{2},1)\frac{(1-x)(1+\overline{x})}{2}+(\frac{1+x}{2},1-x)\overline{ x}=(\frac{x\overline{x}+3\overline{x}-x-1}{4},-\frac{(1-x)(1-\overline{x})}{2}).\] Hence we obtain from (9.2) the following equations on \((a^{\prime},b^{\prime},c^{\prime},d^{\prime}):\) \[\begin{cases}(\frac{b^{\prime}(x\overline{x}+3\overline{x}-x-1)}{4}-\frac{d^{ 
\prime}(x-1)(1+\overline{x})}{4}+\frac{(1+x)\overline{x}}{2}=\frac{a^{\prime}( x\overline{x}+3\overline{x}-x-1)}{4}-\frac{c^{\prime}}{2}\\ (-\frac{b^{\prime}(1-x)(1-\overline{x})}{2}+d^{\prime})\frac{(x-1)(1+\overline {x})}{2}+\overline{x}(1-x)=-\frac{a^{\prime}(1-x)(1-\overline{x})}{2}+c^{ \prime}\\ -1=\frac{b^{\prime}(x+1)(\overline{x}-1)}{2}-d^{\prime}\\ \frac{(x+1)(\overline{x}-1)}{2}=\frac{a^{\prime}(x+1)(\overline{x}-1)}{2}-c^{ \prime}.\end{cases} \tag{9.3}\] A computation shows that the coefficient matrix of this linear system is singular. By Gaussian elimination, we can further simplify the system of equations (9.3) to get \[\begin{cases}c^{\prime}=\frac{b^{\prime}(x+1)(\overline{x}-1)(x-1)(\overline{ x}+1)}{4}\\ d^{\prime}=\frac{b^{\prime}(x+1)(\overline{x}-1)}{2}+1\\ c^{\prime}=\frac{(a^{\prime}-1)(x+1)(\overline{x}-1)}{2}.\end{cases} \tag{9.4}\] Set \(y=\frac{(x-1)(\overline{x}+1)}{2}.\) Since \(\begin{pmatrix}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{pmatrix}\in G^{\prime}(R),\) we have \[\begin{pmatrix}\overline{b}^{\prime}y+1&\overline{b}^{\prime}\\ \overline{b}^{\prime}y\overline{y}&\overline{b}^{\prime}\overline{y}+1\end{pmatrix} \begin{pmatrix}b^{\prime}y+1&b^{\prime}\\ b^{\prime}y\overline{y}&b^{\prime}\overline{y}+1\end{pmatrix}=\text{Id},\] which turns out to be equivalent to \(b^{\prime}\overline{b}^{\prime}(y+\overline{y})+b^{\prime}+\overline{b}^{ \prime}=0,\) i.e., \[b^{\prime}\overline{b}^{\prime}(x\overline{x}-1)+b^{\prime}+\overline{b}^{ \prime}=0. 
\tag{9.5}\] Substituting (9.4) into (9.2) we then obtain \[\begin{cases}-\frac{a}{2}+b=b^{\prime}(\overline{x}+\frac{(1-x)(1-\overline{x} )}{4})-b^{\prime}(\frac{(1-x)(1-\overline{x})}{4}-\frac{1-\overline{x}}{2})- \frac{1}{2}=\frac{b^{\prime}(1+\overline{x})}{2}-\frac{1}{2}\\ \frac{a(1+\overline{x})}{2}+b(1-x)=\frac{1+x}{2}\\ a(\overline{x}+\frac{(1-x)(1-\overline{x})}{2})-\frac{b(1-x)(1-\overline{x})}{2}=a ^{\prime}(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-(a^{\prime}-1)( \frac{(1-x)(1-\overline{x})}{2}-\frac{1-\overline{x}}{2})\\ -\frac{c}{2}+d=-\frac{b^{\prime}(1-x)(1-\overline{x})}{2}+d^{\prime}=b^{\prime }(\frac{(1-x)(1-\overline{x})}{2}+\overline{x}-1)+1-\frac{b^{\prime}(1-x)(1- \overline{x})}{2}\\ \frac{c(1+x)}{2}+d(1-x)=1-x\\ c(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-\frac{d(1-x)(1-\overline{x})}{2} =-\frac{a^{\prime}(1-x)(1-\overline{x})}{2}+(a^{\prime}-1)(\frac{(1-x)(1- \overline{x})}{2}-1+\overline{x}).\end{cases}\] Using Gaussian elimination method we then see the above system becomes \[\begin{cases}a=1-b^{\prime}(1-x)\frac{1+\overline{x}}{2}\\ b=\frac{b^{\prime}(1+\overline{x})}{2}-\frac{b^{\prime}(1-x)}{2}(1-\frac{ \overline{x}}{2})=\frac{b^{\prime}(1+x)(1+\overline{x})}{4}\\ c=b^{\prime}(1-x)(1-\overline{x})\\ d=1-b^{\prime}(1-\overline{x})+\frac{b^{\prime}(1-x)(1-\overline{x})}{2}\\ a(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-\frac{b(1-x)(1-\overline{x})} {2}=\frac{a^{\prime}(1+\overline{x})}{2})+(\frac{(1-x)(1-\overline{x})}{4}- \frac{(1-\overline{x})}{2})\\ c(\overline{x}+\frac{(1-x)(1-\overline{x})}{4})-\frac{d(1-x)(1-\overline{x})} {2}=-a^{\prime}(1-\overline{x})-(\frac{(1-x)(1-\overline{x})}{2}-1+\overline{ x}).\end{cases} \tag{9.6}\] A calculation shows that the last two equations in (9.6) are redundant. 
That is, (9.6) is equivalent to \[\begin{cases}a=1-\frac{b^{\prime}(1-x)(1+\overline{x})}{2}\\ b=\frac{b^{\prime}(1+\overline{x})}{2}-\frac{b^{\prime}(1-x)(1+\overline{x})}{4 }=\frac{b^{\prime}(1+x)(1+\overline{x})}{4}\\ c=b^{\prime}(1-x)(1-\overline{x})\\ d=1-\frac{b^{\prime}(1+x)(1-\overline{x})}{2}.\end{cases} \tag{9.7}\] Since \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in G^{\prime}(\mathbb{Q})\), we then have \[\begin{pmatrix}\overline{d}&\overline{b}\\ \overline{c}&\overline{a}\end{pmatrix}\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\operatorname{Id}. \tag{9.8}\] Substituting (9.7) into (9.8), we conclude together with (9.5) that \[\begin{cases}[b^{\prime}+\overline{b}^{\prime}+b^{\prime}\overline{b}^{\prime }(x\overline{x}-1)]\frac{(1-x)(1+\overline{x})}{2}=0\\ b^{\prime}+\overline{b}^{\prime}+b^{\prime}\overline{b}^{\prime}(x \overline{x}-1)=0.\end{cases} \tag{9.9}\] Then it follows from (9.9) that \(b^{\prime}+\overline{b}^{\prime}+b^{\prime}\overline{b}^{\prime}(x\overline{x}-1)=0.\) Hence Lemma 9.1 follows. We will now use that \(x\notin E^{1}\), that is \(x\overline{x}-1\neq 0\). 
It turns out that the stabilizer of \(\gamma(x)J\) (which is conjugate to that of \(\gamma(x)\) since \(J\in G^{\prime}(\mathbb{Q})\)) has a nice parametrization and we will use it instead: **Corollary 9.2**.: _Let notation be as before; for \(x\in E^{\times}-E^{1}\) set_ \[s(x):=\frac{(x-1)(\overline{x}+1)}{2}\in E^{\times}.\] _For \(R\) any \(\mathbb{Q}\)-algebra, the stabilizer of \(\gamma(x)J\) is given by_ \[H_{\gamma(x)J}(R)=\big{\{}(h_{1}(w),h_{2}(w))\in G^{\prime}(R)\times G^{\prime }(R):\ w\in E^{\times}(R),\ w\overline{w}=1\big{\}},\] _where_ \[\begin{cases}h_{1}(w):=&\begin{pmatrix}1-\frac{(1-w)s(x)}{s(x)+s(\overline{x} )}&-\frac{(1-w)s(x)s(\overline{x})}{(s(x)+s(\overline{x}))\overline{x}}\\ -\frac{(1-w)x}{s(x)+s(\overline{x})}&1-\frac{(1-w)s(x)}{s(x)+s(\overline{x})} \end{pmatrix}\\ h_{2}(w):=&\begin{pmatrix}1-\frac{(1-w)s(\overline{x})}{s(x)+s(\overline{x})}&- \frac{(1-w)s(x)s(\overline{x})}{s(x)+s(\overline{x})}\\ -\frac{(1-w)}{s(x)+s(\overline{x})}&1-\frac{(1-w)s(x)}{s(x)+s(\overline{x})} \end{pmatrix}.\end{cases} \tag{9.10}\] _In particular, \(h_{1}(w)\) and \(h_{2}(w)\) are determined uniquely by their determinant:_ \[\det h_{1}(w)=\det h_{2}(w)=w.\] Proof.: Since \(x\not\in E^{1},\) we have \(x\overline{x}-1\neq 0.\) Now we consider the equation \[y+\overline{y}+y\overline{y}(x\overline{x}-1)=0,\] whose locus is a conic (since \(x\overline{x}-1\neq 0\)); by the intersection method, its \(R\)-rational points are parametrized by \(\mathbb{P}^{1}(R)\). 
Explicitly, write \(y=a+at\sqrt{-D},\) where \(a,t\in R.\) Then \[a(1+t^{2}D)(x\overline{x}-1)+2=0,\] and we obtain \[y=-\frac{2(1+t\sqrt{-D})}{(1+t^{2}D)(x\overline{x}-1)}=-\frac{2}{(1-t\sqrt{-D} )(x\overline{x}-1)},\quad t\in\mathbb{P}^{1}(R).\] Then the parametrization (9.10) follows from Lemma 9.1 and the fractional linear transform replacing \(t\) with \(w\) such that \((1-w)/2=(1-t\sqrt{-D})^{-1}.\) We have again used the fact that \(J\in G^{\prime}(\mathbb{Q}).\) _Remark 9.1_.: For \(x_{0}\in E^{1},\) the above computation of \(H_{\gamma(x)J}\) is consistent with what we obtained before for \(H_{\gamma(x_{0})}\) after taking the limit \(x\to x_{0}\) in (9.10). We will henceforth write \(H_{x}\) for the image of \(H_{\gamma(x)J}\subset G^{\prime}\times G^{\prime}\) under the projection to the first component, i.e., \[H_{x}(R)=\{h_{1}(u),\ u\in E^{\times}(R),\ u\overline{u}=1\}\] for \(h_{1}(u)\) defined in (9.10). As a consequence of Corollary 9.2, we obtain **Corollary 9.3**.: _Let \(x\in E^{\times}-E^{1}.\) Then \(H_{x}\) is isomorphic to \(U(1),\) the unitary group of rank \(1.\) In particular, \(H_{x}(\mathbb{Q})\backslash H_{x}(\mathbb{A})\) has volume \(2\) for the Tamagawa measure._ Proof.: A direct computation shows that \[h_{1}(u_{1})h_{1}(u_{2})=h_{1}(u_{1}u_{2}).\] This gives the group law of \(H_{x}(R),\) where \(R\) is a \(\mathbb{Q}\)-algebra. 
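As a quick numerical sanity check (not part of the argument), one can verify that both forms of the parametrization of the conic used in the proof of Corollary 9.2 — \(y=-2/((1-t\sqrt{-D})(x\overline{x}-1))\), and the equivalent form \(y=(w-1)/(x\overline{x}-1)\) with \(w\overline{w}=1\), obtained from the substitution \((1-w)/2=(1-t\sqrt{-D})^{-1}\) — satisfy \(y+\overline{y}+y\overline{y}(x\overline{x}-1)=0\); the values of \(D\), \(x\), \(t\) below are arbitrary test data.

```python
import cmath

D = 5.0                # test value for the quadratic field parameter
x = 1.3 + 0.7j         # arbitrary x with x*conj(x) != 1
c = abs(x) ** 2 - 1.0  # c = x*xbar - 1, a nonzero real number

def conic(y):
    # evaluates y + ybar + y*ybar*(x*xbar - 1), which should vanish on the conic
    return y + y.conjugate() + abs(y) ** 2 * c

# parametrization from the proof of Corollary 9.2, for several test values of t
for t in (0.0, 0.4, -2.3, 7.1):
    y = -2.0 / ((1 - t * 1j * D ** 0.5) * c)
    assert abs(conic(y)) < 1e-9

# equivalent form with w on the unit circle (w is the determinant of h_1(w))
for theta in (0.1, 1.0, 2.5):
    w = cmath.exp(1j * theta)
    assert abs(conic((w - 1) / c)) < 1e-9
```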
In particular, we have a natural isomorphism with the unitary group \[H_{x}\simeq U(1)=E^{1}.\] ### Reduction to factorable integrals To simplify notation we will write \[\mathcal{O}_{x}(f^{\mathfrak{n}},\varphi^{\prime}):=\mathcal{O}_{\gamma(x)}(f ^{\mathfrak{n}},\varphi^{\prime}).\] Observe that since \(J\in G^{\prime}(\mathbb{Q})\) and \(\varphi^{\prime}\) is \(G^{\prime}(\mathbb{Q})\)-invariant, we have by suitable changes of variables that for \(x\in E^{\times}-E^{1}\) \[\mathcal{O}_{x}(f^{\mathfrak{n}},\varphi^{\prime})=\mathcal{O}_{\gamma(x)}(f^ {\mathfrak{n}},\varphi^{\prime})=\mathcal{O}_{\gamma(x)J}(f^{\mathfrak{n}}, \varphi^{\prime})\] \[=\int_{H_{\gamma(x)J}(\mathbb{Q})\backslash G^{\prime}(\mathbb{A})\times G^{ \prime}(\mathbb{A})}f^{\mathfrak{n}}(y_{1}^{-1}\gamma(x)Jy_{2})\overline{ \varphi}^{\prime}(y_{1})\varphi^{\prime}(y_{2})dy_{1}dy_{2}\] \[=\int_{H_{x}(\mathbb{A})\backslash G^{\prime}(\mathbb{A})\times G^{\prime}( \mathbb{A})}f^{\mathfrak{n}}(y_{1}^{-1}\gamma(x)Jy_{2})\int_{[H_{\gamma(x)J}]} \overline{\varphi}^{\prime}(h_{1}y_{1})\varphi^{\prime}(h_{2}y_{2})dudy_{1}dy _{2}\] where \[[H_{\gamma(x)J}]:=H_{\gamma(x)J}(\mathbb{Q})\backslash H_{\gamma(x)J}( \mathbb{A}),\] and \[(h_{1},h_{2})=(h_{1}(u),h_{2}(u))\in[H_{\gamma(x)J}].\] For any \((y_{1},y_{2})\), the \(u\)-integral can be evaluated as an (infinite) sum of Waldspurger's period integrals (see [20]). However, we will be much more coarse and will content ourselves with the upper bound \[\int_{[H_{\gamma(x)J}]}\varphi^{\prime}(h_{1}y_{1})\overline{\varphi}^{\prime} (h_{2}y_{2})du\ll\|\varphi^{\prime}\|_{\infty}^{2}. \tag{9.11}\] Therefore we obtain \[|\mathcal{O}_{x}(f^{\mathfrak{n}},\varphi^{\prime})|\ll\|\varphi^{\prime}\|_ {\infty}^{2}\mathcal{I}(f^{\mathfrak{n}},x) \tag{9.12}\] where \[\mathcal{I}(f^{\mathfrak{n}},x)=\int_{H_{\gamma(x)J}(\mathbb{A})\setminus G^{ \prime}(\mathbb{A})^{2}}|f^{\mathfrak{n}}(u^{-1}\gamma(x)Jv)|dudv,\] say. 
The main benefit of this rather crude treatment is that the resulting integral is factorable: we have \[\mathcal{I}(f^{\mathfrak{n}},x)=\prod_{v}\mathcal{I}_{v}(f_{v}^{\mathfrak{n} },x)\] where \[\mathcal{I}_{v}(f_{v}^{\mathfrak{n}},x):=\int_{H_{\gamma(x)J}(\mathbb{Q}_{v} )\setminus(G^{\prime}\times G^{\prime})(\mathbb{Q}_{v})}|f_{v}^{\mathfrak{n} }(u^{-1}\gamma(x)Jv)|dudv; \tag{9.13}\] indeed, as we will verify in the subsections below, for any given \(x\in E^{\times}-E^{1}\), we have \[\mathcal{I}_{p}(f_{p}^{\mathfrak{n}},x)=1\] for all but finitely many \(p\). In the subsequent subsections we analyse these local integrals \(\mathcal{I}_{v}(f_{v}^{\mathfrak{n}},x)\), give criteria for vanishing, and provide upper bounds. In the sequel, to simplify notation, we will write \(\mathcal{I}(x),\mathcal{I}_{v}(x)\) in place of \(\mathcal{I}(f^{\mathfrak{n}},x)\) and \(\mathcal{I}_{v}(f_{v}^{\mathfrak{n}},x)\). ### A vanishing criterion for the non-archimedean local integrals Let \(p\) be a prime, \(\mathfrak{p}\) the place above \(p\) we have fixed (we take \(\varpi_{p}=p\) if \(p\) is inert) and let \(\nu\) be the associated valuation in the local field \(E_{\mathfrak{p}}\). **Proposition 9.4**.: _Let \(x\in E^{\times}\!-E^{1}\) be a non-zero regular element. The following hold:_ * _If_ \(p\) _is ramified, then_ \(\mathcal{I}_{p}(x)=0\) _unless_ \[x\in\mathcal{O}_{E,p}.\] * _If_ \(p\) _is inert, then_ \(\mathcal{I}_{p}(x)=0\) _unless_ \[x\in p^{-\nu(\ell)}\mathcal{O}_{E,p}\ \text{ and }x\overline{x}\equiv 1 \,(\mathrm{mod}\,(N,p)).\] * _If_ \(p\) _is split then_ \(\mathcal{I}_{p}(x)=0\) _unless_ \[\nu(N^{\prime}x),\nu(N^{\prime}\overline{x})\geq 0.\] From this we deduce immediately a global vanishing criterion: **Theorem 9.5**.: _Notations being as above, let_ \[\mathfrak{X}(N,N^{\prime},\ell)=\big{\{}x\in E^{\times}\!-E^{1},\ x\in(\ell N ^{\prime})^{-1}\mathcal{O}_{E}:\ x\overline{x}\equiv 1\ (\mathrm{mod}\ N)\big{\}}. 
\tag{9.14}\] _For \(x\in E^{\times}\!-E^{1}\) not contained in \(\mathfrak{X}(N,N^{\prime},\ell)\), we have_ \[\mathcal{I}(f^{\mathfrak{n}},x)=0\] _and by (9.12) we have_ \[\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})=0. \tag{9.15}\] _Remark 9.2_.: This criterion is similar to the phenomenon encountered in the case of the forms of \(\operatorname{GL}(2)\times\operatorname{GL}(1)\) in [10, §2.6] and [11]. We will prove this through a case by case analysis: see Propositions 9.6, 9.7, 9.8. In fact, we will also provide upper bounds for \(\mathcal{I}_{p}(x)\). The non-split case is discussed in §9.4 and the following subsection, and the split case in §9.5. For any \(\mathbb{Q}\)-algebra \(R\) let \[SG^{\prime}(R):=SU(W)=\{g\in G^{\prime}(R),\ \det g=1\}.\] We start with the observation that a fundamental domain for the quotient \(H_{\gamma(x)J}(\mathbb{Q}_{p})\backslash(G^{\prime}\times G^{\prime})( \mathbb{Q}_{p})\) is the subgroup \[SG^{\prime}(\mathbb{Q}_{p})\times G^{\prime}(\mathbb{Q}_{p}).\] Indeed any \((u,v)\in(G^{\prime}\times G^{\prime})(\mathbb{Q}_{p})\) can be written \[(u,v)=(g_{1}(w)u^{\prime},g_{2}(w)v^{\prime})\] with \[w=\det(u)=\det(g_{1}(w)),\ u^{\prime}=g_{1}(w)^{-1}u\in SG^{\prime}(\mathbb{Q }_{p}),\ v^{\prime}=g_{2}(w)^{-1}v.\] Moreover for \(u_{1},u_{1}^{\prime}\in SG^{\prime}(\mathbb{Q}_{p})\) and \(v,v^{\prime}\in G^{\prime}(\mathbb{Q}_{p})\) we have \[(u_{1}^{\prime},v^{\prime})=(g_{1}(w)u_{1},g_{2}(w)v)\Longrightarrow w=1,\ g_{1}(w)=g_{2}(w)= \operatorname{Id}_{3},\ u_{1}=u_{1}^{\prime},\ v^{\prime}=v.\] We have therefore \[\mathcal{I}_{p}(x):=\int_{SG^{\prime}(\mathbb{Q}_{p})\times G^{\prime}( \mathbb{Q}_{p})}|f_{v}^{\mathfrak{n}}(u^{-1}\gamma(x)Jv)|dudv.\] ### The non-split case In this section we evaluate the local integral \(\mathcal{I}_{p}(x)\) when \(p\) is non-split: this implies that \(\overline{\varpi}_{p}=\pm\varpi_{p}\), that \(\mathfrak{n}_{p}=\operatorname{Id}_{3}\) and that \(f_{p}^{\mathfrak{n}}=f_{p}\) is a 
scalar times the characteristic function of either \(K_{p}(N)\) or of \[G(\mathbb{Z}_{p})A_{r}G(\mathbb{Z}_{p})\] for \(A_{r},\ r\geq 0\) the diagonal matrix defined in the Appendix (A.12). We use the Iwasawa decomposition for \(u,v\in SG^{\prime}(\mathbb{Q}_{p})\times G^{\prime}(\mathbb{Q}_{p})\). We have \(\det u=1\) and, from the description of \(G^{\prime}(\mathbb{Q}_{p})\), \(\det v\) has valuation \(0\), so that (here we represent \(u\) and \(v\) as \(2\times 2\) matrices) \[u=\begin{pmatrix}\varpi_{p}^{i}&\\ &\overline{\varpi}_{p}{}^{-i}\end{pmatrix}\begin{pmatrix}1&b\\ &1\end{pmatrix}k_{1}=u^{\prime}k_{1}\kappa_{1}^{i},\quad v=\begin{pmatrix}\varpi _{p}^{j}&\\ &\overline{\varpi}_{p}{}^{-j}\end{pmatrix}\begin{pmatrix}1&b^{\prime}\\ &1\end{pmatrix}k_{2}=v^{\prime}k_{2}, \tag{9.16}\] with \(i,j\in\mathbb{Z}\), \(b,b^{\prime}\in E_{p}^{0}\), \(k_{1},k_{2}\in G^{\prime}(\mathbb{Z}_{p})\), \(\det k_{1}=1\) and \(\kappa_{1}=\begin{pmatrix}\overline{\varpi}_{p}&\\ &\varpi_{p}\end{pmatrix}\). 
By definition, the function \(f_{p}\) is bi-\(K_{p}(N)\)-invariant so that \[f_{p}(u^{-1}\gamma(x)Jv)=f_{p}(k_{1}^{-1}{u^{\prime}}^{-1}\gamma(x)Jv^{ \prime}k_{2}).\] We have \[\gamma(x)J=\left(\begin{array}{ccc}-\frac{1}{2}&\frac{1+x}{2}&\frac{x\overline {x}+3\overline{x}-x+1}{2}\\ -1&x&\frac{(x+1)(\overline{x}-1)}{2}\\ 1&1-x&-\frac{(1-x)(1-\overline{x})}{2}\end{array}\right)\] and \[{u^{\prime}}^{-1}\gamma(x)Jv^{\prime}=\begin{pmatrix}-\frac{\varpi_{p}^{j-i}} {2}-\overline{\varpi}_{p}^{i}\varpi_{p}^{j}b&e&z\\ -\varpi_{p}^{j}&x&f\\ \varpi_{p}^{i+j}&\overline{\varpi}_{p}^{i}(1-x)&g\end{pmatrix}, \tag{9.17}\] where \[e=\frac{\varpi_{p}^{-i}(1+x)}{2}-\overline{\varpi}_{p}^{i}b(1-x),\] \[f=-\varpi_{p}^{j}b^{\prime}+\overline{\varpi}_{p}^{-j}\frac{(x+1)( \overline{x}-1)}{2}, \tag{9.18}\] \[g=\overline{\varpi}_{p}^{i}\varpi_{p}^{j}b^{\prime}-\overline{\varpi}_{p}^{i-j} \frac{(x-1)(\overline{x}-1)}{2},\] and \[z=-\frac{1}{2}\varpi_{p}^{j-i}b^{\prime}+\varpi_{p}^{-i}\overline{ \varpi}_{p}^{-j}\frac{x\overline{x}+3\overline{x}-x+1}{4}-\overline{\varpi}_{p }^{i}\varpi_{p}^{j}bb^{\prime}+\overline{\varpi}_{p}^{i-j}b\frac{(x-1)( \overline{x}-1)}{2}\] \[=\frac{1-x\overline{x}}{1-x}\varpi_{p}^{-i}\overline{\varpi}_{p} ^{-j}+\frac{f}{1-x}\varpi_{p}^{-i}-\frac{1-\overline{x}}{1-x}e\overline{\varpi }_{p}^{-j}-\frac{ef}{1-x} \tag{9.19}\] #### 9.4.1. The non-split case \(f_{p}=1_{K_{p}(N)}\) **Proposition 9.6**.: _We assume here that \(p\) is non-split and that \(f_{p}=1_{K_{p}(N)}\). The following hold:_ * _If_ \(\nu(x)<0\) _we have_ \(\mathcal{I}_{p}(x)=0\)_._ _Assume that \(\nu(x)\geq 0\) (i.e. 
\(x\in\mathcal{O}_{E,p}\))_ * _if_ \(p\nmid 2D_{E}N\) _and_ \(\nu(x\overline{x}-1)=0\)_, we have_ \[\mathcal{I}_{p}(x)=1.\] * _if_ \(p\mid N,\) _and_ \(\nu(x\overline{x}-1)\leq 0\) _we have_ \(f_{p}(u^{-1}\gamma(x)Jv)=0.\)__ * _In general, we have the bound_ (9.20) \[\mathcal{I}_{p}(x)\ll e_{p}(x)(N,p)N(\varpi_{p})^{3\nu(x\overline{x}-1)};\] _here we have set_ \[e_{p}(x)=(1+\nu(x-1))^{2}(1+\nu(x\overline{x}-1)),\] \[N(\varpi_{p})=|\mathcal{O}_{E_{p}}/\varpi_{p}\mathcal{O}_{E_{p}}|=p^{f_{p}}, \ f_{p}=\begin{cases}2&\text{ if $p$ is inert}\\ 1&\text{ if $p$ is ramified}\end{cases}\] _and the implicit constant is absolute._ Proof.: As noted previously \(f_{p}(u^{-1}\gamma(x)Jv)\) is non-zero if and only if \[u^{\prime-1}\gamma(x)Jv^{\prime}\in k_{1}K_{p}(N)k_{2}^{-1}\subset G(\mathbb{ Z}_{p}). \tag{9.21}\] This implies that \[\begin{cases}\nu(x)\geq 0,\ \nu(z)\geq 0\\ j\geq 0\\ i+\nu(1-x)\geq 0\\ e,f,g,z,-\frac{\varpi_{p}^{j-i}}{2}-\overline{\varpi}_{p}^{i}\varpi_{p}^{j}b \in\mathcal{O}_{E_{p}}\end{cases} \tag{9.22}\] In particular we have \(f_{p}(u^{-1}\gamma(x)Jv)=0\) unless \(\nu(x)\geq 0\). This proves the first part of Proposition 9.6. Since \(j\geq 0\), it follows from the equality \[\varpi_{p}^{j}e-[-\frac{1}{2}\varpi_{p}^{j-i}-\overline{\varpi}_{p}^{i}\varpi _{p}^{j}b](1-x)=\varpi_{p}^{j-i}\] that \(j\geq i\). Note that \[\overline{\varpi}_{p}^{i}f+g=\overline{\varpi}_{p}^{i-j}(\overline{x}-1).\] Hence \[i-j+\nu(x-1)\geq\min\{0,i\}. \tag{9.23}\] From the condition \(e,f,z\in\mathcal{O}_{E,p}\), the equality (9.19) and the fact that \(\nu(1-x)=\nu(1-\overline{x})\), we have \[-i-j-\nu(x-1)+\nu(x\overline{x}-1)\geq\min\{-i-\nu(x-1),-j\}. \tag{9.24}\] Suppose now that \(\nu(x\overline{x}-1)=0\); this implies that \(\nu(x-1)=0\) and therefore by (9.22) \(i\geq 0\), and then \(i\geq j\) by (9.23); therefore \(i=j\geq 0\). 
By (9.24) we have \(-2i\geq-i\) and therefore \[i=j=0\] and from (9.17) we conclude that \[b,b^{\prime}\in\mathcal{O}_{E_{p}}.\] If \(p\) does not divide \(2N\) we have \(K_{p}(N)=G(\mathbb{Z}_{p})\) so that if \(\nu(x\overline{x}-1)=0\) we have \[|f_{p}(u^{-1}\gamma(x)Jv)|=1\] precisely if \[i=j=0,\ k_{1}\in SK_{p},\ k_{2}\in K_{p},\ b,b^{\prime}\in\mathcal{O}_{E_{p}}\] and otherwise it is zero. It follows that \[\int_{H_{\gamma(x)J}(\mathbb{Q}_{p})\backslash(G^{\prime}\times G^{\prime})(\mathbb{Q}_{p})}\big{|}f_{p}( u^{-1}\gamma(x)Jv)\big{|}dudv=\mu(\mathcal{O}_{E_{p}})^{2}=1.\] This proves the generic part of Proposition 9.6. We return to the general case: let \(i,j\) be two integers such that \(u,v\) have Iwasawa decompositions given by (9.16) and such that \(f_{p}(u^{-1}\gamma(x)Jv)\neq 0\). From the previous discussion we see that \[b=\frac{\varpi_{p}^{-i}\overline{\varpi}_{p}^{-i}(1+x)}{2(1-x)}-\frac{e}{ \overline{\varpi}_{p}^{i}(1-x)}\in B_{i},\] \[b^{\prime}=\varpi_{p}^{-j}\overline{\varpi}_{p}^{-j}\frac{(x+1)(\overline{x}-1 )}{2}-\varpi_{p}^{-j}f\in B^{\prime}_{j}\] where \[B_{i}:=\frac{\varpi_{p}^{-i}\overline{\varpi}_{p}^{-i}(1+x)}{2(1-x)}-\frac{1 }{\overline{\varpi}_{p}^{i}(1-x)}\mathcal{O}_{E_{p}},\] \[B^{\prime}_{j}:=\varpi_{p}^{-j}\overline{\varpi}_{p}^{-j}\frac{(x+1)(\overline{ x}-1)}{2}-\varpi_{p}^{-j}\mathcal{O}_{E_{p}}.\] We have \[\int_{B_{i}}db\int_{B^{\prime}_{j}}db^{\prime}\ll N(\varpi_{p })^{i+\nu(1-x)+j}. \tag{9.25}\] Suppose \(i\leq 0\). Then by (9.23) we have \(j\leq\nu(x-1)\) and \[-\nu(x-1)\leq i\leq 0\leq j\leq\nu(x-1).\] Then by (9.25) \[\underset{-\nu(x-1)\leq i\leq 0\leq j\leq\nu(x-1)}{\sum}\int_{B_{i}}db\int_{B^{ \prime}_{j}}db^{\prime}\leq(1+\nu(x-1))^{2}N(\varpi_{p})^{2\nu(x-1)}. \tag{9.26}\] Suppose \(i\geq 1\). Then similar arguments as before show that \(j-\nu(x-1)\leq i\leq j\) and \(1\leq j\leq\nu(x\overline{x}-1)\). 
So in this range we have \[\underset{i,j\cdots}{\sum}\int_{B_{i}}db\int_{B^{\prime}_{j}}db^{\prime}\leq e _{p}(x)N(\varpi_{p})^{2\nu(x\overline{x}-1)+\nu(x-1)}. \tag{9.27}\] Then (9.20) follows from (9.26) and (9.27). When \(p=2\), a similar argument also applies with some worse (but absolute) implied constant. This proves the last part of Proposition 9.6. Suppose now that \(p\mid N\). Then \(K_{p}(N)=K_{p}(p)\subset K_{p}\) is the Iwahori subgroup and its intersection with \(G^{\prime}(\mathbb{Z}_{p})\) is \(K_{p}^{\prime}(N)=K_{p}^{\prime}(p)\), the Iwahori subgroup of \(G^{\prime}(\mathbb{Z}_{p})\). It follows that the function on \(G^{\prime}(\mathbb{Z}_{p})\times G^{\prime}(\mathbb{Z}_{p})\) \[(u,v)\mapsto f_{p}(u^{-1}\gamma(x)Jv)\] is bi-\(K_{p}^{\prime}(p)\)-invariant. Because of this we will evaluate the integral \(\mathcal{I}_{p}(x)\) using the Iwasawa decompositions (9.16) of \(u\) and \(v\), and in particular decompose the two integrals over the \(k_{1}\in SG^{\prime}(\mathbb{Z}_{p})\) and \(k_{2}\in G^{\prime}(\mathbb{Z}_{p})\) variables into a sum of \((p+1)^{2}\) integrals supported along \(SK_{p}^{\prime}(p)\times K_{p}^{\prime}(p)\)-cosets using Lemma A.1. 
Given \(u,v\) such that \(f_{p}(u^{-1}\gamma(x)Jv)\neq 0\) and whose Iwasawa decompositions are given by (9.16), let \[g:=\begin{pmatrix}1&&-b\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}\varpi_{p}^{-i}&&\\ &1&\\ &&\overline{\varpi}_{p}^{i}\end{pmatrix}\gamma(x)J\begin{pmatrix}\varpi_{p}^{j} &&\\ &1&\\ &&\overline{\varpi}_{p}^{-j}\end{pmatrix}\begin{pmatrix}1&&b^{\prime}\\ &1&\\ &&1\end{pmatrix}.\] Since \(f_{p}\) is supported on \(K_{p}(p)\), for the integrand in one of these to be non-zero one of the following holds: \[g\in K_{p}(p),\text{ or }J\begin{pmatrix}1&&-\delta\\ &1&\\ &&1\end{pmatrix}g\in K_{p}(p)\text{ or }g\begin{pmatrix}1&&\delta^{\prime}\\ &1&\\ &&1\end{pmatrix}J\in K_{p}(p)\] \[\text{ or }J\begin{pmatrix}1&&-\delta\\ &1&\\ &&1\end{pmatrix}g\begin{pmatrix}1&&\delta^{\prime}\\ &1&\\ &&1\end{pmatrix}J\in K_{p}(p)\text{ for some }\delta,\delta^{\prime}\in\mathbb{Z}_{p} /p\mathbb{Z}_{p}.\] We will show in each case that \(\nu(x\overline{x}-1)\geq 1\). If we assume instead that \(\nu(x\overline{x}-1)=0\), we have \[\nu(x-1)=\nu(\overline{x}+1)=\nu(1-\overline{x})=0.\] We will obtain a contradiction for each of the \(1+2p+p^{2}\) possible locations of \(g\): 1. Assume \(g\in K_{p}(p)\). Then by (9.17) we have \(i\geq 1\), \(j\geq 1\) and \(e,f,z\in\mathcal{O}_{E_{p}}\). But the algebraic relation (9.19) then forces \(z\not\in\mathcal{O}_{E_{p}}\), a contradiction; therefore \(\nu(x\overline{x}-1)\geq 1\). 2. Suppose \(J\begin{pmatrix}1&&-\delta\\ &1&\\ &&1\end{pmatrix}g\in K_{p}(p)\). 
This implies that \[\begin{pmatrix}-\frac{\varpi_{p}^{j-i}}{2}-\overline{\varpi}_{p}^{i}\varpi_{ p}^{j}b&e&z\\ -\varpi_{p}^{j}&x&f\\ \varpi_{p}^{i+j}&\overline{\varpi}_{p}^{i}(1-x)&\overline{\varpi}_{p}^{i} \varpi_{p}^{j}b^{\prime}-\overline{\varpi}_{p}^{i-j}\frac{(x-1)(\overline{x} -1)}{2}\end{pmatrix}\in JK_{p}(p). \tag{9.28}\] Here we have made the change of variable \[b\mapsto b+\delta.\] Hence \(i\geq 0\), \(j\geq 1\), \(\nu(e)\geq 1\), \(\nu(f)\geq 0\) and \(\nu(z)\geq 0\). However, this contradicts (9.19) as well. Notice then that for \(\nu(x\overline{x}-1)\geq 1\), (9.28) leads to \[-\frac{\varpi_{p}^{j-i}}{2}-\overline{\varpi}_{p}^{i}\varpi_{p}^{j}b\in\varpi _{p}\mathcal{O}_{E_{p}}\] and \(\varpi_{p}^{i+j}\in\mathcal{O}_{E_{p}}^{\times}\). So there is only one possible choice for \(b\mod\varpi_{p}\). 3. Suppose that \(g\begin{pmatrix}1&&\delta^{\prime}\\ &1&\\ &&1\end{pmatrix}J\in K_{p}(p).\) This is similar to the preceding case. Again we must have \(\nu(x\overline{x}-1)\geq 1\) and there is only one choice for \(b^{\prime}\mod\varpi_{p}.\) 4. Suppose finally that \(J\begin{pmatrix}1&&-\delta\\ &1&\\ &&1\end{pmatrix}g\begin{pmatrix}1&&\delta^{\prime}\\ &1&\\ &&1\end{pmatrix}J\in K_{p}(p).\) We then have again \[\begin{pmatrix}-\frac{\varpi_{p}^{j-i}}{2}-\overline{\varpi}_{p}^{i}\varpi_{p }^{j}b&e&z\\ -\varpi_{p}^{j}&x&f\\ \varpi_{p}^{i+j}&\overline{\varpi}_{p}^{i}(1-x)&\overline{\varpi}_{p}^{i} \varpi_{p}^{j}b^{\prime}-\overline{\varpi}_{p}^{i-j}\frac{(x-1)(\overline{x}-1) }{2}\end{pmatrix}\in JK_{p}(p);\] where this time we have made the change of variables \[b\mapsto b+\delta,\ b^{\prime}\mapsto b^{\prime}+\delta^{\prime}.\] We have therefore \(i\geq 0\), \(j\geq 0\), \(\nu(e)\geq 1\), \(\nu(f)\geq 1\) and \(\nu(z)\geq 1\), which contradicts (9.19), and necessarily \(\nu(x\overline{x}-1)\geq 1\). 
Since \(\nu(e)\geq 1\) and \(\nu(f)\geq 1\), this implies that \[b\in C_{i}:=\frac{\varpi_{p}^{-i}\overline{\varpi}_{p}^{-i}(1+x)}{2(1-x)}- \frac{1}{\overline{\varpi}_{p}^{i}(1-x)}\varpi_{p}\mathcal{O}_{E_{p}},\] \[b^{\prime}\in C^{\prime}_{j}:=\varpi_{p}^{-j}\overline{\varpi}_{p}^{-j}\frac {(x+1)(\overline{x}-1)}{2}-\varpi_{p}^{1-j}\mathcal{O}_{E_{p}}.\] Therefore, \[\int_{C_{i}}db\int_{C^{\prime}_{j}}db^{\prime}\ll N(\varpi_{p})^{i+\nu(x-1)+j -2}.\] Notice the extra saving \(p^{-2}\) (compared with (9.25)): it compensates for the contribution of the \(p^{2}\) choices of \(\delta\) and \(\delta^{\prime}\) in \(\mathbb{Z}_{p}/p\mathbb{Z}_{p}\). In all cases we have \(\nu(x\overline{x}-1)\geq 1\). Integrating over these \(SK^{\prime}_{p}(p)\times K^{\prime}_{p}(p)\) cosets, we conclude that when \(p\mid N,\) one has \[\mathcal{I}_{p}(x)\ll\frac{e_{p}(x)N(\varpi_{p})^{3\nu(x\overline{x}-1)}\mu( SK^{\prime}_{p}(p))\mu(K^{\prime}_{p}(p))}{\mu(K_{p}(N))}\ll\frac{(1/p)^{2}}{1/p^{3}} e_{p}(x)N(\varpi_{p})^{3\nu(x\overline{x}-1)}.\] This completes the proof of the Proposition when \(p|N\). #### 9.4.2. The inert Hecke case We now evaluate \(\mathcal{I}_{p}(x)\) for \(p\) inert in the last remaining case. **Proposition 9.7**.: _Let \(p\) be an inert prime, \(r\geq 0\) and suppose that_ \[f_{p}=1_{G(\mathbb{Z}_{p})A_{p^{r}}G(\mathbb{Z}_{p})}.\] _We have_ \[\mathcal{I}_{p}(x)=0\] _unless \(\nu(x)\geq-r\), in which case we have_ \[\mathcal{I}_{p}(x)\ll(r+|\nu(1-x\overline{x})|+|\nu(1-x)|)(p^{3r+2\nu(1-x)}+p ^{7r+2\nu(1-x\overline{x})}). \tag{9.29}\] _Here the implied constant is absolute._ Proof.: We use again the notations of §9.4. Let \(u,v\) be such that \[f_{p}(u^{-1}\gamma(x)Jv)=f_{p}({u^{\prime}}^{-1}\gamma(x)Jv^{\prime})\neq 0,\] which, by (9.17), amounts to \[{u^{\prime}}^{-1}\gamma(x)Jv^{\prime}=\begin{pmatrix}-\frac{p^{j-i}}{2}-p^{j+i}b&e&z \\ -p^{j}&x&f\\ p^{i+j}&p^{i}(1-x)&g\end{pmatrix}\in G(\mathbb{Z}_{p})A_{r}G(\mathbb{Z}_{p}). 
\tag{9.30}\] where \[\begin{split}& e=\frac{p^{-i}(1+x)}{2}-p^{i}b(1-x),\\ & f=-p^{j}b^{\prime}+p^{-j}\frac{(x+1)(\overline{x}-1)}{2},\\ & g=p^{i+j}b^{\prime}-p^{i-j}\frac{(x-1)(\overline{x}-1)}{2},\end{split} \tag{9.31}\] and \[z=\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{x} }{1-x}ep^{-j}-\frac{ef}{1-x}.\] Analyzing the entries in (9.30) we then derive that \(z,e,f\in p^{-r}\mathbb{Z}_{p}.\) Hence, \[\begin{cases}\nu(x)\geq-r,\ \nu(g)\geq-r,\\ j\geq-r,\ i\geq-r-\nu(1-x),\\ f=-p^{j}b^{\prime}+p^{-j}\frac{(x+1)(\overline{x}-1)}{2}\in p^{-r}\mathbb{Z} _{p},\\ e=\frac{p^{-i}(1+x)}{2}-p^{i}b(1-x)\in p^{-r}\mathbb{Z}_{p},\\ z=\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{x} }{1-x}ep^{-j}-\frac{ef}{1-x}\in p^{-r}\mathbb{Z}_{p},\end{cases} \tag{9.32}\] where the last constraint yields \[-i-j+\nu(1-x\overline{x})\geq\min\big{\{}-r-i,-r-j+\nu(1-x),-2r,-r+\nu(1-x) \big{\}}. \tag{9.33}\] Since \(p^{i}f+g=p^{i-j}(\overline{x}-1),\) we have \[i-j+\nu(1-x)=\min\{i+\nu(f),\nu(g)\}\geq\min\{i-r,-r\}. \tag{9.34}\] Since \(p^{j}e-(1-x)a=p^{j-i},\) we have \[j-i=\min\{j+\nu(e),\nu(1-x)+\nu(a)\}\geq\min\{j-r,\nu(1-x)-r\}. \tag{9.35}\] We now separate the cases to derive the ranges of \(i\) and \(j\) as follows. 1. Suppose \(i\leq 0.\) Then by (9.34) we obtain that \(i-j+\nu(1-x)\geq i-r,\) implying that \(j\leq r+\nu(1-x).\) Thus in this case we have \(-r\leq i\leq 0\) and \(-r\leq j\leq r+\nu(1-x).\) 2. 
Suppose that \(i\geq 1.\) Then (9.34) gives \[i-j+\nu(1-x)\geq-r. \tag{9.36}\] * Suppose further that \(j\leq\nu(1-x).\) Then it follows from (9.35) that \(j-i\geq j-r,\) i.e., \(i\leq r.\) Hence, in this case we have \(1\leq i\leq r\) and \(-r\leq j\leq\nu(1-x).\) * Suppose that \(j>\nu(1-x).\) Then by (9.35) we have \(j-i\geq\nu(1-x)-r,\) i.e., \(-r-i\geq-2r-j+\nu(1-x).\) Substituting this into (9.33), we obtain \[-i-j+\nu(1-x\overline{x})\geq \min\big{\{}-2r-j+\nu(1-x),-2r,-r+\nu(1-x)\big{\}}\] \[= \min\big{\{}-2r-j+\nu(1-x),-2r\big{\}}=-2r-j+\nu(1-x).\] Here we use the facts that \(-2r\leq-r+\nu(1-x)\) and \(j>\nu(1-x).\) Therefore, we have \(1\leq i\leq 2r+\nu(1-x\overline{x})-\nu(1-x).\) By (9.36), \[\nu(1-x)<j\leq i+r+\nu(1-x)\leq 3r+\nu(1-x\overline{x}).\] Denote by \(\mathcal{S}\) the support of \((i,j)\in\mathbb{Z}^{2}\) determined by (9.32). From the above discussion we see that \(\#\mathcal{S}\ll r+|\nu(1-x\overline{x})|+|\nu(1-x)|\) and for \((i,j)\in\mathcal{S}\), \[i+j\leq\max\{r+\nu(1-x),5r+2\nu(1-x\overline{x})-\nu(1-x)\}.\] Note that (9.32) also implies that \(b\) (resp. \(b^{\prime}\)) ranges over a translate of \(p^{-i-r-\nu(1-x)}\mathbb{Z}_{p}\) (resp. \(p^{-j-r}\mathbb{Z}_{p}\)). Therefore, \[\mathcal{I}_{p}(x)\ll\sum_{(i,j)\in\mathcal{S}}\int_{p^{-i-r-\nu(1-x)}\mathbb{ Z}_{p}}db\int_{p^{-j-r}\mathbb{Z}_{p}}db^{\prime}\ll\sum_{(i,j)\in\mathcal{S}}p^{i+ j}\cdot p^{\nu(1-x)+2r}.\] Hence, (9.29) follows. ### The split case Let \(p\) be a prime split in \(E\); the corresponding prime ideal of \(E\) decomposes as \(\mathfrak{p}\overline{\mathfrak{p}}=(p)\) and we have \(E\otimes_{\mathbb{Q}}\mathbb{Q}_{p}\simeq E_{\mathfrak{p}}\times E_{\overline {\mathfrak{p}}}\simeq\mathbb{Q}_{p}\times\mathbb{Q}_{p}\). Let \(\varpi\in E_{\mathfrak{p}}\simeq\mathbb{Q}_{p}\) be a uniformizer. 
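Throughout the split case, \(\nu\) is just the usual \(p\)-adic valuation on \(\mathbb{Q}_{p}\simeq E_{\mathfrak{p}}\). For concreteness, here is a minimal helper (the name `nu` is ours, not from the text) computing it exactly on nonzero rationals:

```python
from fractions import Fraction

def nu(q, p):
    """p-adic valuation of a nonzero rational number q."""
    q = Fraction(q)
    if q == 0:
        raise ValueError("nu is not defined at 0")
    v, n, d = 0, abs(q.numerator), q.denominator
    while n % p == 0:      # count factors of p in the numerator
        n //= p
        v += 1
    while d % p == 0:      # subtract factors of p in the denominator
        d //= p
        v -= 1
    return v

# e.g. nu(Fraction(9, 2), 2) == -1 and nu(12, 2) == 2
```

The multiplicativity \(\nu(ab)=\nu(a)+\nu(b)\) used repeatedly in the estimates below is immediate from this description.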
In the split case, we have \(G^{\prime}(\mathbb{Q}_{p})\simeq\operatorname{GL}(2,\mathbb{Q}_{p})\) and by Corollary 9.2 one has \[H_{x}(\mathbb{Q}_{p})\simeq\operatorname{GL}(1,\mathbb{Q}_{p}).\] Given \(u,v\in G^{\prime}(\mathbb{Q}_{p})\), we write them in Iwasawa coordinates: \[u=\begin{pmatrix}p^{i_{1}}&\\ &p^{i_{2}}\end{pmatrix}\begin{pmatrix}1&b\\ &1\end{pmatrix}k_{1},\ \ v=\begin{pmatrix}p^{j_{1}}&\\ &p^{j_{2}}\end{pmatrix}\begin{pmatrix}1&b^{\prime}\\ &1\end{pmatrix}k_{2}, \tag{9.37}\] where \(i_{1},i_{2},j_{1},j_{2}\in\mathbb{Z}\), \(b,b^{\prime}\in\mathbb{Q}_{p}\) and \(k_{1},k_{2}\in G^{\prime}(\mathbb{Z}_{p})\). In particular if \(\nu(\det u)=0\) we have \(i_{2}=-i_{1}\). Applying this decomposition to the variable \(u\) in the integral \(\mathcal{I}_{p}(x)\) we have \[\mathcal{I}_{p}(x)=\sum_{i\in\mathbb{Z}}\int_{SG^{\prime}(\mathbb{Z}_{p})}\int _{\mathbb{Q}_{p}}\int_{G^{\prime}(\mathbb{Q}_{p})}\left|f_{p}^{\mathfrak{n}} \left(k_{1}^{-1}\begin{pmatrix}p^{-i}&&-p^{i}b\\ &1&\\ &&p^{i}\end{pmatrix}\gamma(x)Jv\right)\right|dbdk_{1}dv.\] **Proposition 9.8**.: _Let \(p\) be a prime, split in \(E\), and \(x\in E^{\times}-E^{1}\)._ 1. _If_ \(p\nmid N^{\prime}\) _we have_ \[\mathcal{I}_{p}(x)=0\] _unless_ \(\nu(x),\ \nu(\overline{x})\geq 0\)_; in that case we have the bound_ (9.38) \[\mathcal{I}_{p}(x)\ll(1+\nu(P(x,\overline{x})))p^{3\nu(x\overline{x}-1)},\] _where_ \(P(X,Y)\in\mathbb{Z}[X,Y]\) _is a polynomial independent of_ \(p\) _whose degree and coefficients are absolutely bounded; moreover the implicit constant is absolute._ _- In addition, if_ \(p\neq 2\) _and_ \[\nu(x\overline{x}-1)=\nu(1-x)=\nu(1-\overline{x})=0\] _we have_ \(\mathcal{I}_{p}(x)=1\)_._ 2. 
_If_ \(p|N^{\prime}\)_, we have_ (9.39) \[\mathcal{I}_{p}(x)=0\] _unless_ \(x\in p^{-1}\mathcal{O}_{E_{p}}\)_, i.e.,_ \(\nu(x)\geq-1\) _and_ \(\nu(\overline{x})\geq-1\)_._ - Moreover if_ \[\nu(x),\nu(\overline{x})\geq 0,\] _one has_ (9.40) \[\mathcal{I}_{p}(x)\ll(1+\nu(P(x,\overline{x})))^{2}\\ \left(p^{\nu(x\overline{x}(1-x)^{2}(1-\overline{x})^{2})}+p^{\nu(1-x \overline{x})-1}+p^{2\nu(1-x\overline{x})-2}+p^{2\nu(1-x\overline{x})-1}\right)\] _where_ \(P(X,Y)\in\mathbb{Z}[X,Y]\) _is a polynomial independent of_ \(p\) _whose degree and coefficients are absolutely bounded; moreover the implicit constant is absolute._ Proof.: We start with the case \(p\nmid N^{\prime}\). We have \(f_{p}^{\mathfrak{n}}=f_{p}\) and \[f_{p}(u^{-1}\gamma(x)Jv)\neq 0\] if and only if \(u^{-1}\gamma(x)Jv\in G(\mathbb{Z}_{p})\). Since the determinant of \(v\) has valuation \(0\), its Iwasawa coordinates (9.37) satisfy \(j_{1}+j_{2}=0\). Hence \(\mathcal{I}_{p}(x)\) is bounded from above by \[\sum_{i\in\mathbb{Z}}\sum_{j\in\mathbb{Z}}\int_{\mathbb{Q}_{p}}\int_{\mathbb{Q} _{p}}|f_{p}\left(\begin{pmatrix}p^{i}&&p^{i}b\\ &1&\\ &&p^{-i}\end{pmatrix}^{-1}\gamma(x)J\begin{pmatrix}p^{j}&&p^{j}b^{\prime}\\ &1&\\ &&p^{-j}\end{pmatrix}\right)|dbdb^{\prime}.\] The proof then follows the same lines as for the non-split case considered in Proposition 9.6. We also note that at a split place \(p\), we always have \(G^{\prime}(\mathbb{Z}_{p})\subset G(\mathbb{Z}_{p})\) if \(p\nmid N^{\prime}\). 
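To make the Iwasawa coordinates (9.37) concrete, here is a minimal sketch (all function names are ours, not from the text) that decomposes a rational matrix \(g\in\operatorname{GL}(2,\mathbb{Q})\subset\operatorname{GL}(2,\mathbb{Q}_{p})\) as \(g=t\cdot n\cdot k\) with \(t\) diagonal with \(p\)-power entries, \(n\) upper unipotent and \(k\in\operatorname{GL}(2,\mathbb{Z}_{p})\), using exact rational arithmetic:

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation of a nonzero rational."""
    q = Fraction(q)
    v, n, d = 0, abs(q.numerator), q.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def mul(A, B):
    """Product of 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def iwasawa_gl2(g, p):
    """Write g in GL(2, Q) as g = t*n*k with t = diag(p^j1, p^j2),
    n upper unipotent and k in GL(2, Z_p)."""
    (a, b), (c, d) = [[Fraction(e) for e in row] for row in g]
    k0 = [[1, 0], [0, 1]]
    # arrange v(d) <= v(c) by a column swap (the swap matrix lies in GL(2, Z_p))
    if d == 0 or (c != 0 and vp(c, p) < vp(d, p)):
        a, b, c, d = b, a, d, c
        k0 = [[0, 1], [1, 0]]
    r = c / d                      # r lies in Z_p by the choice above
    A = a - b * r                  # g*k0*[[1,0],[-r,1]] = [[A, b], [0, d]]
    j1, j2 = vp(A, p), vp(d, p)
    t = [[Fraction(p) ** j1, 0], [0, Fraction(p) ** j2]]
    n = [[1, b * Fraction(p) ** (j2 - j1) / d], [0, 1]]
    # k = k2 * k1^{-1} * k0^{-1} with k2 = diag(A p^{-j1}, d p^{-j2}) a unit diagonal
    k2 = [[A * Fraction(p) ** (-j1), 0], [0, d * Fraction(p) ** (-j2)]]
    k = mul(mul(k2, [[1, 0], [r, 1]]), k0)
    return t, n, k
```

One checks on any example that \(t\cdot n\cdot k\) reconstructs \(g\), that the entries of \(k\) are \(p\)-integral and that \(\det k\) is a \(p\)-adic unit, which is exactly the shape of decomposition used in the integrals above.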
We now consider the case of a split prime \(p\mid N^{\prime}.\) Let \(b,b^{\prime}\in\mathbb{Q}_{p}\) and \(i,j\in\mathbb{Z}.\) We have \[\begin{pmatrix}p^{i}&&p^{i}b\\ &1&\\ &&p^{-i}\end{pmatrix}^{-1}\gamma(x)J\begin{pmatrix}p^{j}&&p^{j}b^{\prime}\\ &1&\\ &&p^{-j}\end{pmatrix}=\begin{pmatrix}a&e&z\\ -p^{j}&x&f\\ p^{i+j}&p^{i}(1-x)&g\end{pmatrix}, \tag{9.41}\] where \[\begin{cases}a=-\frac{1}{2}p^{j-i}-p^{i+j}b\\ e=\frac{p^{-i}(1+x)}{2}-p^{i}b(1-x)\\ f=-p^{j}b^{\prime}+p^{-j}\frac{(x+1)(\overline{x}-1)}{2}\\ g=p^{i+j}b^{\prime}-\frac{(x-1)(\overline{x}-1)}{2}p^{i-j}\\ z=-\frac{1}{2}p^{j-i}b^{\prime}+p^{-i-j}y-p^{i+j}bb^{\prime}+p^{i-j}b\frac{(x-1 )(\overline{x}-1)}{2}\end{cases} \tag{9.42}\] where \[y=\frac{x\overline{x}+3\overline{x}-x+1}{4}.\] Then one has the explicit algebraic relation \[z=\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{ x}}{1-x}ep^{-j}-\frac{ef}{1-x}. \tag{9.43}\] Taking the inverse of (9.41), we obtain \[\begin{pmatrix}p^{j}&&p^{j}b^{\prime}\\ &1&\\ &&p^{-j}\end{pmatrix}^{-1}J\gamma(x)^{-1}\begin{pmatrix}p^{i}&&p^{i}b\\ &1&\\ &&p^{-i}\end{pmatrix}=\begin{pmatrix}a^{\prime}&e^{\prime}&z^{\prime}\\ p^{i}(1-\overline{x})&\overline{x}&f^{\prime}\\ p^{i+j}&-p^{j}&g^{\prime}\end{pmatrix}, \tag{9.44}\] where \[\begin{cases}a^{\prime}=-\frac{(1-x)(1-\overline{x})}{2}p^{i-j}-p^{i+j}b^{ \prime}\\ e^{\prime}=\frac{(x-1)(\overline{x}+1)}{2}p^{-j}+p^{j}b^{\prime}\\ f^{\prime}=(1-\overline{x})p^{i}b+\frac{1+\overline{x}}{2}p^{-i}\\ g^{\prime}=p^{i+j}b-\frac{1}{2}p^{j-i}\\ z^{\prime}=-p^{i-j}b\frac{(x-1)(\overline{x}-1)}{2}+p^{-i-j}\overline{y}-p^{i+j }bb^{\prime}+\frac{1}{2}p^{j-i}b^{\prime}\end{cases} \tag{9.45}\] and one notes the algebraic relation \[z^{\prime}=\frac{1-x\overline{x}}{1-\overline{x}}\,p^{-i-j}+\frac{e^{\prime}} {1-\overline{x}}p^{-i}-\frac{1-x}{1-\overline{x}}f^{\prime}p^{-j}-\frac{e^{ \prime}f^{\prime}}{1-\overline{x}}. 
\tag{9.46}\] We also note that \[a+g^{\prime}=-p^{j-i},\ a^{\prime}+g=-(1-x)(1-\overline{x})p^{i- j},\] \[p^{i}f+g=p^{i-j}(\overline{x}-1),\ p^{j}f^{\prime}-(1-\overline{ x})g^{\prime}=p^{j-i}. \tag{9.47}\] By definition (9.13), we have \[\mathcal{I}_{p}(x)=\int_{SG^{\prime}(\mathbb{Q}_{p})\times G^{\prime}(\mathbb{Q}_ {p})}\big{|}f_{p}(\widetilde{\mathfrak{n}}_{p}^{-1}u^{-1}\gamma(x)Jv\widetilde{ \mathfrak{n}}_{p})\big{|}dudv,\] where we recall that \(f_{p}=\mathbf{1}_{K_{p}}/\mu(K_{p}),\ K_{p}=G(\mathbb{Z}_{p})\) and \[\widetilde{\mathfrak{n}}_{p}=w^{\prime}\mathfrak{n}_{p}w^{\prime}=\begin{pmatrix} 1&p^{-1}&\\ &1&\\ &&1\end{pmatrix}.\] We will apply the Iwasawa decomposition as at the beginning of the proof of Proposition 9.8. Due to the conjugation by \(\widetilde{\mathfrak{n}}_{p}\), the function \[(u,v)\mapsto f_{p}(\widetilde{\mathfrak{n}}_{p}^{-1}u^{-1}\gamma(x)Jv \widetilde{\mathfrak{n}}_{p})\] is bi-\(I^{\prime}_{p}(1)\)-invariant (since \(\widetilde{\mathfrak{n}}_{p}I^{\prime}_{p}(1)\widetilde{\mathfrak{n}}_{p}^{- 1}\subset K_{p}\)), so we shall further decompose \(G^{\prime}(\mathbb{Z}_{p})\) into a union of disjoint cosets of \(I^{\prime}_{p}(1)\): we start with the Iwahori decomposition for \(G^{\prime}\) (see (5.66)) \[G^{\prime}(\mathbb{Z}_{p})=I^{\prime}_{p}\sqcup I^{\prime}_{p}JI^{\prime}_{p} =I^{\prime}_{p}\sqcup\bigsqcup_{\delta\in(\mathbb{Z}/p\mathbb{Z})^{\times}}I^ {\prime}_{p}J\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}I^{\prime}_{p}(1).\] We then check four cases as in the proof of Proposition 9.6. 
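The algebraic relation (9.43) can be verified directly from the formulas (9.42). Below is a minimal check with exact rational arithmetic at sample points, treating \(x\) and \(\overline{x}\) as independent rational parameters and writing \(s=p^{i}\), \(t=p^{j}\); note that the first term of \(z\) carries the exponent \(j-i\) (the helper name is ours):

```python
from fractions import Fraction as F

def relation_9_43(x, xb, b, bp, s, t):
    """Return (z, rhs): z as in (9.42) and the right-hand side of (9.43);
    the relation asserts z == rhs. Here s = p**i and t = p**j."""
    e = (1 + x) / (2 * s) - s * b * (1 - x)
    f = -t * bp + (x + 1) * (xb - 1) / (2 * t)
    z = (-(t / s) * bp / 2 + (x * xb + 3 * xb - x + 1) / (4 * s * t)
         - s * t * b * bp + (s / t) * b * (x - 1) * (xb - 1) / 2)
    rhs = ((1 - x * xb) / ((1 - x) * s * t) + f / ((1 - x) * s)
           - (1 - xb) * e / ((1 - x) * t) - e * f / (1 - x))
    return z, rhs

z, rhs = relation_9_43(F(2), F(3), F(1), F(1), F(5), F(7))
assert z == rhs  # both equal -1221/35 at this sample point
```

Since the relation is a polynomial identity in \(x,\overline{x},b,b^{\prime},s^{\pm 1},t^{\pm 1}\), agreement at generic sample points is a meaningful sanity check; the analogous relation (9.46) for \(z^{\prime}\) can be checked the same way.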
**Case I.** We start with the most complicated case: \[u= \begin{pmatrix}p^{i}&&p^{i}b\\ &1&\\ &&p^{-i}\end{pmatrix}\begin{pmatrix}1&&\mu\\ &1&\\ &&1\end{pmatrix}J\begin{pmatrix}\tau^{-1}&&\\ &1&\\ &&1\end{pmatrix}k_{1},\] \[v= \begin{pmatrix}p^{j}&&p^{j}b^{\prime}\\ &1&\\ &&p^{-j}\end{pmatrix}\begin{pmatrix}1&&\mu^{\prime}\\ &1&\\ &&1\end{pmatrix}J\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}k_{2},\] where \(\mu,\mu^{\prime}\in\mathbb{Z}_{p}\) run over representatives of \(\mathbb{Z}_{p}/p\mathbb{Z}_{p}\) and \(\tau,\delta\in\mathbb{Z}_{p}^{\times}\) run over representatives of \((\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}\), and \(k_{1},k_{2}\in I^{\prime}_{p}(1)\). Let \(\mathcal{I}_{p}(x;1)\) be the contribution to \(\mathcal{I}_{p}(x)\) of all the \(u,v\) whose Iwasawa decomposition is of the above form. We first look for some necessary condition for \(\mathcal{I}_{p}(x;1)\) to be non-zero. For \(\delta\in\mathbb{Z}_{p}^{\times}\), we denote by \[\mathfrak{u}_{\delta}=\begin{pmatrix}1&&\\ &1&\\ &\delta p^{-1}&1\end{pmatrix}=J\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\widetilde{\mathfrak{n}}_{p}J.\] Then, since \(J\in K_{p}\), \(f_{p}(\widetilde{\mathfrak{n}}_{p}^{-1}u^{-1}\gamma(x)Jv\widetilde{ \mathfrak{n}}_{p})\neq 0\) if and only if \[\mathfrak{u}_{\tau}^{-1}\begin{pmatrix}p^{i}&&p^{i}(b+\mu)\\ &1&\\ &&p^{-i}\end{pmatrix}^{-1}\gamma(x)J\begin{pmatrix}p^{j}&&p^{j}(b^{\prime}+ \mu^{\prime})\\ &1&\\ &&p^{-j}\end{pmatrix}\mathfrak{u}_{\delta}\in K_{p} \tag{9.48}\] or equivalently \[\begin{pmatrix}a&e+\frac{\delta}{p}z&z\\ -p^{j}&x+\frac{\delta}{p}f&f\\ p^{i+j}+\tau p^{j-1}&p^{i}(1-x)+\frac{\delta}{p}g-\frac{\tau}{p}(x+\delta p^{-1 }f)&g-\tau p^{-1}f\end{pmatrix}\in K_{p}, \tag{9.49}\] and taking the inverse of (9.49) we also obtain the condition \[\begin{pmatrix}a^{\prime}&e^{\prime}+\frac{\tau}{p}z^{\prime}&z^{\prime}\\ p^{i}(1-\overline{x})&\overline{x}+\frac{\tau}{p}f^{\prime}&f^{\prime}\\ p^{i+j}-\delta 
p^{i-1}(1-\overline{x})&-p^{j}+\frac{\tau}{p}g^{\prime}-\frac{ \delta}{p}(\overline{x}+\frac{\tau}{p}f^{\prime})&g^{\prime}-\frac{\delta}{p} f^{\prime}\end{pmatrix}\in K_{p}. \tag{9.50}\] Here \(a,e,f,g,z\) are defined in (9.42) and \(a^{\prime},e^{\prime},f^{\prime},g^{\prime},z^{\prime}\) as defined in (9.45). These conditions imply that \[\nu(f),\ \nu(f^{\prime}),\ \nu(x+\delta p^{-1}f),\ \nu(\overline{x}+\tau p^{- 1}f^{\prime})\geq 0\] which in turn imply that \[\nu(x),\ \nu(\overline{x})\geq-1.\] This proves Proposition 9.8 in Case I. Note also that (9.49) and (9.50) imply that \[i^{\prime}:=i+\nu(1-\overline{x})\geq 0,\ j\geq 0,\] \[\nu(z),\ \nu(z^{\prime})\geq 0,\ \nu(e),\ \nu(e^{\prime})\geq-1. \tag{9.51}\] We have \[\mathcal{I}_{p}(x;1)\leq\mu(I^{\prime}_{p}(1))^{2}p^{2}\sum_{i\geq-\nu(1- \overline{x})}\sum_{j\geq 0}\int_{\mathbb{Q}_{p}}\int_{\mathbb{Q}_{p}}\sum_{ \tau}\sum_{\delta}\mathbf{1}_{K_{p}}(\cdots)dbdb^{\prime}, \tag{9.52}\] where the "\(\cdots\)" in the parenthesis indicates the left hand side of (9.49) with \[\mu=\mu^{\prime}=0.\] Indeed, up to changing variables in the \(b\) and \(b^{\prime}\) integrals, we may assume this is the case. Define \[\mathcal{I}_{p}^{0}(x;1) :=\mu(I^{\prime}_{p}(1))^{2}p^{2}\sum_{i\geq-\nu(1-\overline{x})} \sum_{j=0}\int_{\mathbb{Q}_{p}}\int_{\mathbb{Q}_{p}}\sum_{\tau}\sum_{\delta} \mathbf{1}_{K_{p}}(\cdots)dbdb^{\prime},\] \[\mathcal{I}_{p}^{>0}(x;1) :=\mu(I^{\prime}_{p}(1))^{2}p^{2}\sum_{i\geq-\nu(1-\overline{x})} \sum_{j>0}\int_{\mathbb{Q}_{p}}\int_{\mathbb{Q}_{p}}\sum_{\tau}\sum_{\delta} \mathbf{1}_{K_{p}}(\cdots)dbdb^{\prime}.\] We now prove Proposition 9.8 in Case I, assuming further that \[\nu(x),\ \nu(\overline{x})\geq 0\] and therefore \[\nu(1\pm x),\ \nu(1\pm\overline{x})\geq 0.\] By (9.49) and (9.50), we have \[\nu(f)\geq 1,\ \nu(f^{\prime})\geq 1\] \[g=p^{i+j}b^{\prime}-\frac{(x-1)(\overline{x}-1)}{2}p^{i-j}\in \mathbb{Z}_{p},\ g^{\prime}=-\frac{p^{j-i}}{2}-p^{i+j}b\in\mathbb{Z}_{p}. 
\tag{9.53}\] Also since (see (9.47)) \[-p^{j-i}=a+g^{\prime}\in\mathbb{Z}_{p}\ \text{and}\ -(x-1)(\overline{x}-1)p^{i-j}=a^ {\prime}+g\in\mathbb{Z}_{p}\] we have \[0\leq j-i\leq\ \nu(1-x)+\nu(1-\overline{x}). \tag{9.54}\] From \(\nu(e^{\prime})\geq-1\) and \(\nu(f)\geq 1\) we then have \[j\leq 1+\nu(x(\overline{x}-1)) \tag{9.55}\] and likewise \[j\leq 1+\nu(\overline{x}(x-1))\] so that \[j\leq 1+\frac{\nu(x\overline{x}(x-1)(\overline{x}-1))}{2}. \tag{9.56}\] _Localization of \(b,b^{\prime}\)._ Another general observation in Case I is that, since \[f=-p^{j}b^{\prime}+p^{-j}\frac{(x+1)(\overline{x}-1)}{2}\in p\mathbb{Z}_{p}, \ f^{\prime}=(1-\overline{x})p^{i}b+\frac{1+\overline{x}}{2}p^{-i}\in p\mathbb{ Z}_{p}, \tag{9.57}\] \(b\) and \(b^{\prime}\) are contained, respectively, in translates of \(p^{-i-\nu(1-\overline{x})+1}\mathbb{Z}_{p}\) and \(p^{-j+1}\mathbb{Z}_{p}\) (depending only on \(x\)) whose volumes are \(p^{i+\nu(1-\overline{x})-1}\) and \(p^{j-1}\). Similarly since \[g=p^{i+j}b^{\prime}-\frac{(x-1)(\overline{x}-1)}{2}p^{i-j}\in\mathbb{Z}_{p}, \ g^{\prime}=-\frac{p^{j-i}}{2}-p^{i+j}b\in\mathbb{Z}_{p}, \tag{9.58}\] \(b\) and \(b^{\prime}\) are contained in translates of \(p^{-(i+j)}\mathbb{Z}_{p}\) (depending only on \(x\)) whose volumes are \(p^{i+j}\). We will use one or the other of these estimates depending on the values of \[\min(i+\nu(1-\overline{x})-1,i+j)\text{ and }\min(j-1,i+j).\] For this we need to split the discussion into further cases: _The case \(j=0\)._ Since \(p^{i}+\tau p^{-1}\in\mathbb{Z}_{p}\) we have \(i=-1\) and \(\tau\equiv-1\pmod{p}\). Since \(i+j=-1\), \(b\) and \(b^{\prime}\) each belong to a translate of \(p\mathbb{Z}_{p}\). Hence we have \[\mathcal{I}_{p}^{0}(x;1)\leq\frac{\mu(I_{p}^{\prime}(1))^{2}p^{2}(p-1)}{\mu(K _{p})}\int_{p\mathbb{Z}_{p}}\int_{p\mathbb{Z}_{p}}dbdb^{\prime}\leq\frac{1}{p^ {2}(p-1)}. 
\tag{9.59}\] _The case \(j\geq 1\): \(\nu(x)=\nu(\overline{x}-1)=0\)._ Suppose first that \(\nu(x)=\nu(\overline{x}-1)=0.\) From (9.51), (9.55) and (9.54), we have \[0\leq i\leq j=1.\] Finally, since \[p^{i+1}-\delta p^{i-1}(1-\overline{x})\in\mathbb{Z}_{p}\] we conclude that \[i=j=1.\] In particular by (9.57), \(b\) and \(b^{\prime}\) are contained in single translates of \(\mathbb{Z}_{p}\). We need to split the discussion into two further cases: - Suppose that, given \(x,e,f,z\), there exists at most one \(\delta\pmod{p}\) satisfying (9.49) and (9.50) we then have (under the assumption that \(\nu(x)=\nu(\overline{x}-1)=0\)) \[\mathcal{I}_{p}^{>0}(x;1)\leq\frac{\mu(I_{p}^{\prime}(1))^{2}(p-1)}{\mu(K_{p} )}\int_{\mathbb{Z}_{p}}\int_{\mathbb{Z}_{p}}dbdb^{\prime}\leq\frac{1}{p^{2}(p -1)}. \tag{9.60}\] - Suppose instead that, given \(x,e,f,z\), there exists \(\delta_{1}\not\equiv\delta_{2}\pmod{p}\) satisfying (9.49) and (9.50); we then have \((\delta_{1}-\delta_{2})p^{-1}z\in\mathbb{Z}_{p}\); this implies that \(\nu(e)\geq 0\) and \[(1-x\overline{x})p^{-2}+fp^{-1}-(1-\overline{x})ep^{-1}-ef\in(1-x)p\mathbb{Z} _{p}.\] Since the last three terms belong to \(p^{-1}\mathbb{Z}_{p}\) we must have \[\nu(x\overline{x}-1)\geq 1.\] In all cases, we conclude that \[\mathcal{I}_{p}^{>0}(x;1)\leq\frac{\mu(I_{p}^{\prime}(1))^{2}(p-1)^{2}}{\mu(K _{p})}\int_{\mathbb{Z}_{p}}\int_{\mathbb{Z}_{p}}dbdb^{\prime}\leq\frac{1}{p^{2 }}\leq\frac{p^{\nu(x\overline{x}-1)}}{p^{2}(p-1)}. 
\tag{9.61}\] _The case \(j\geq 1\): \(\nu(x)+\nu(\overline{x}-1)\geq 1\)._ Suppose now that \(\nu(\overline{x}-1)+\nu(x)\geq 1.\) From (9.51), (9.54) and (9.56) we find that \(\mathcal{I}_{p}^{>0}(x;1)\) is bounded by \[\frac{\mu(I_{p}^{\prime}(1))^{2}p^{2}(p-1)^{2}}{\mu(K_{p})}\sum_{ j=1}^{\frac{\nu(x\overline{x}(x-1)(\overline{x}-1))}{2}+1}\sum_{i=-\nu(x-1)}^{ j}\int_{p^{-(i+\nu(1-\overline{x})-1)}\mathbb{Z}_{p}}\int_{p^{-(j-1)} \mathbb{Z}_{p}}dbdb^{\prime}\] \[\ll\sum_{j=1}^{\frac{\nu(x\overline{x}(x-1)(\overline{x}-1))}{2}+ 1}\sum_{i=-\nu(x-1)}^{j}p^{i+j+\nu(1-\overline{x})-2}\] \[\ll(1+\nu(x\overline{x}(x-1)(\overline{x}-1)))^{2}p^{\nu(x \overline{x}(x-1)(\overline{x}-1))+\nu(1-\overline{x})}\] Since \(\nu(1-x)\geq 0\) we can replace the exponent in \(p\) above by the more symmetric expression \[\nu(x\overline{x})+2\nu((x-1)(\overline{x}-1)),\] so that combining this with (9.59), (9.60), (9.61) \[\mathcal{I}_{p}(x;1)\ll(1+\nu(x\overline{x}(x-1)(\overline{x}-1)))^{2}p^{\nu( x\overline{x})+2\nu((x-1)(\overline{x}-1))}+\frac{p^{\nu(1-x\overline{x})}}{p^{3}}. \tag{9.62}\] **Case II.** We assume now that the Iwasawa decomposition of \(u\) and \(v\) are of the form \[u=\begin{pmatrix}p^{i}&p^{i}b\\ &1&\\ &&p^{-i}\end{pmatrix}\begin{pmatrix}\tau^{-1}&&\\ &&1&\\ &&1\end{pmatrix}k_{1},\ v=\begin{pmatrix}p^{j}&p^{j}b^{\prime}\\ &&1&\\ &&p^{-j}\end{pmatrix}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}k_{2},\] where \(\tau,\delta\in(\mathbb{Z}/p\mathbb{Z})^{\times},\) and \(k_{1},k_{2}\in I_{p}^{\prime}(1).\) Let \(\mathcal{I}_{p}(x;2)\) be the contribution of \(u,v\) of the above forms to \(\mathcal{I}_{p}(x).\) We first look for some necessary condition for \(\mathcal{I}_{p}(x;2)\) to be non zero. 
Computing \(u^{-1}\gamma(x)Jv\) we see that \(f_{p}^{\mathfrak{n}}(u^{-1}\gamma(x)Jv)\neq 0\) if and only if \[\begin{pmatrix}a+\tau p^{j-1}&e+\delta p^{-1}a-\tau p^{-1}(x-\delta p^{j-1}) &z-\tau p^{-1}f\\ -p^{j}&x-\delta p^{j-1}&f\\ p^{i+j}&p^{i}(1-x)+\delta p^{i+j-1}&g\end{pmatrix}\in K_{p}. \tag{9.63}\] Taking the inverse, we obtain \[\begin{pmatrix}a^{\prime}-\delta p^{i-1}(1-\overline{x})&e^{\prime}+\frac{ \tau}{p}a^{\prime}-\frac{\delta}{p}(\overline{x}+\tau p^{i-1}(1-\overline{x} ))&z^{\prime}-\frac{\delta}{p}f^{\prime}\\ p^{i}(1-\overline{x})&\overline{x}+\tau p^{i-1}(1-\overline{x})&f^{\prime}\\ p^{i+j}&-p^{j}+\tau p^{i+j-1}&g^{\prime}\end{pmatrix}\in K_{p}. \tag{9.64}\] Here, as in Case I, \(a,e,f,g,z\) are defined in (9.42) and \(a^{\prime},e^{\prime},f^{\prime},g^{\prime},z^{\prime}\) as defined in (9.45). These conditions imply in particular that \[i+\nu(1-\overline{x})\geq 0,\ j\geq 0,\ i+j\geq 0 \tag{9.65}\] which together with \(\nu(x-\delta p^{j-1}),\ \nu(\overline{x}+\tau p^{i-1}(1-\overline{x}))\geq 0\) implies that \[\nu(x),\ \nu(\overline{x})\geq-1.\] This proves Proposition 9.8 in Case II. We also note that since \[f,f^{\prime},g,g^{\prime},\ z-\tau p^{-1}f,\ z^{\prime}-\delta p^{-1}f^{ \prime}\in\mathbb{Z}_{p}\] we have \[\nu(z),\nu(z^{\prime})\geq-1\] and \[\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{x} }{1-x}ep^{-j}-\frac{ef}{1-x}=z\in p^{-1}\mathbb{Z}_{p}. 
\tag{9.66}\] We consider Case II when \(p\mid N^{\prime}\) and assume that \[\nu(x),\ \nu(\overline{x})\geq 0.\] In particular \(\nu(1\pm x),\ \nu(1\pm\overline{x})\geq 0.\) A few general remarks: * Since \(\nu(x)\geq 0\) and \(\nu(x-\delta p^{j-1})\geq 0\) we have \(j\geq 1\) and therefore, since \(a+\tau p^{j-1}\in\mathbb{Z}_{p}\), we have \(a\in\mathbb{Z}_{p}.\) * This together with \(e+\delta p^{-1}a-\tau p^{-1}(x-\delta p^{j-1})\in\mathbb{Z}_{p}\) implies that \[\nu(e)\geq-1.\] * Since \(f,f^{\prime},g,g^{\prime}\in\mathbb{Z}_{p},\) we see that \(b,b^{\prime}\) are contained in translates of respectively \[p^{-\min(i+\nu(1-\overline{x}),i+j)}\mathbb{Z}_{p}\text{ and }p^{-\min(j,i+j)}\mathbb{Z}_{p}\] which have volumes \[p^{\min(i+\nu(1-\overline{x}),i+j)}\text{ and }p^{\min(j,i+j)}.\] _The case \(\nu(\overline{x}-1)=0\)._ Since \(\overline{x}+\tau p^{i-1}(1-\overline{x})\in\mathbb{Z}_{p}\) we have \(i\geq 1\) and since \[p^{i}f+p^{i-j}(1-\overline{x})=-g\in\mathbb{Z}_{p},\ p^{j}f^{\prime}-(1- \overline{x})g^{\prime}=p^{j-i}\in\mathbb{Z}_{p} \tag{9.67}\] we have \[j=i\geq 1.\] Since \[x-\delta p^{j-1},a,e+\delta p^{-1}a-\tau p^{-1}(x-\delta p^{j-1})\in\mathbb{Z} _{p}\] we have \(\nu(e)\geq-1.\) We now look at the variable \(z\): under our current assumptions (9.66) becomes \[\frac{1-x\overline{x}}{1-x}p^{-2i}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{x}}{ 1-x}ep^{-i}-\frac{ef}{1-x}=z\in p^{-1}\mathbb{Z}_{p}.\] The valuation of the first term is at least the minimum of the valuations of the three other terms and of \(z\): \[\nu(1-x\overline{x})-2i\geq\min(-i+\nu(f),\nu(1-\overline{x})+\nu(e)-i,\nu(e) +\nu(f),\nu(z)+\nu(1-x))\] which yields (since \(\nu(1-\overline{x}),\nu(f)\geq 0\) and \(\nu(e),\ \nu(z)\geq-1\)) \[\nu(1-x\overline{x})-2i\geq-i-1\] so that \[1\leq i=j\leq\nu(1-x\overline{x})+1.\] Notice also that if \(\nu(f)=0\) or \(\nu(f^{\prime})=0\) (which is the generic case) the relations \[z-\tau p^{-1}f\in\mathbb{Z}_{p},\ 
z^{\prime}-\delta p^{-1}f^{\prime}\in\mathbb{Z}_{p}\] uniquely determine \(\delta\left(\operatorname{mod}p\right)\) and \(\tau\left(\operatorname{mod}p\right)\). On the other hand, if either \(f\in p\mathbb{Z}_{p}\) or \(f^{\prime}\in p\mathbb{Z}_{p}\), then \(b^{\prime}\) or \(b\) belongs to a fixed translate of \(p^{-i+1}\mathbb{Z}_{p}\) (whose volume is smaller by a factor \(p\)). Therefore, we have \[\mathcal{I}_{p}(x;2)\ll \ \ \frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{1\leq i\leq 1+ \nu(1-x\overline{x})}\iint_{b\in p^{-i+1}\mathbb{Z}_{p},b^{\prime}\in p^{-i+1} \mathbb{Z}_{p}}\sum_{\delta,\tau\,(\operatorname{mod}p)}dbdb^{\prime}\] \[\ \ +2\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{1\leq i \leq 1+\nu(1-x\overline{x})}\iint_{b\in p^{-i}\mathbb{Z}_{p},b^{\prime}\in p^{-i+1 }\mathbb{Z}_{p}}\sum_{\tau\,(\operatorname{mod}p)}dbdb^{\prime}\] \[\ \ +\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{1\leq i \leq 1+\nu(1-x\overline{x})}\iint_{b\in p^{-i}\mathbb{Z}_{p},b^{\prime}\in p^{-i }\mathbb{Z}_{p}}dbdb^{\prime} \tag{9.68}\] \[\ll \ \frac{1}{p^{4}}(1+\nu(1-x\overline{x}))p^{2\nu(1-x\overline{x })+2}\ll(1+\nu(1-x\overline{x}))\frac{p^{2\nu(1-x\overline{x})}}{p^{2}}\] if \(\nu(1-\overline{x})=0\). _The case \(\nu(1-\overline{x})\geq 1\)._ Since \(\overline{x}+\tau p^{i-1}(1-\overline{x})\in\mathbb{Z}_{p}\), \(j\geq 1\), \(f^{\prime},g^{\prime}\in\mathbb{Z}_{p}\) and \(p^{j-i}=p^{j}f^{\prime}-(1-\overline{x})g^{\prime}\) we have \[i\geq 1-\nu(1-\overline{x}),\ j\geq i+1.\] By (9.66) we have (since \(\nu(f),\nu(1-x)\geq 0\) and \(\nu(e),\nu(z)\geq-1\)) \[\nu(1-x\overline{x})-i-j\geq\min(-i,-j+\nu(1-\overline{x})-1,-1)\] or equivalently \[i+j\leq\nu(1-x\overline{x})+\max(i,j+1-\nu(1-\overline{x}),1). \tag{9.69}\] Also since \(p^{i}f+p^{i-j}(1-\overline{x})=-g\in\mathbb{Z}_{p}\) we have \(i-j+\nu(1-\overline{x})\geq\min(i,0)\) or equivalently \[j\leq\max(i,0)+\nu(1-\overline{x}). 
\tag{9.70}\] If \(i\leq 0\), this gives \(j\leq\nu(1-\overline{x})\) and by (9.69) \[i+j\leq\nu(1-x\overline{x})+1. \tag{9.71}\] We have therefore \[1-\nu(1-\overline{x})\leq i\leq 0,\ 1\leq j\leq\nu(1-x\overline{x})+1-i.\] The contribution of this configuration is bounded by \[\ll \ \frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{\begin{subarray} {c}1-\nu(1-\overline{x})\leq i\leq 0\\ 1\leq j\leq\nu(1-x\overline{x})+1-i\end{subarray}}\iint_{\begin{subarray}{c}b\in p^{-i-\nu(1- \overline{x})+1}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j+1}\mathbb{Z}_{p}\end{subarray}}\sum_{\delta,\tau\,(\operatorname{mod}p)}1dbdb ^{\prime}\] \[\ \ +\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{ \begin{subarray}{c}1-\nu(1-\overline{x})\leq i\leq 0\\ 1\leq j\leq\nu(1-x\overline{x})+1-i\end{subarray}}\iint_{\begin{subarray}{c}b\in p^{-i-\nu(1- \overline{x})+1}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j}\mathbb{Z}_{p}\end{subarray}}\sum_{\tau\,(\operatorname{mod}p)}1dbdb^{\prime}\] \[\ \ +\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{ \begin{subarray}{c}1-\nu(1-\overline{x})\leq i\leq 0\\ 1\leq j\leq\nu(1-x\overline{x})+1-i\end{subarray}}\iint_{\begin{subarray}{c}b\in p^{-i-\nu(1- \overline{x})}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j+1}\mathbb{Z}_{p}\end{subarray}}\sum_{\delta\,(\operatorname{mod}p)}1dbdb^{\prime}\] \[\ \ +\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{ \begin{subarray}{c}1-\nu(1-\overline{x})\leq i\leq 0\\ 1\leq j\leq\nu(1-x\overline{x})+1-i\end{subarray}}\iint_{\begin{subarray}{c}b\in p^{-i-\nu(1- \overline{x})}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j}\mathbb{Z}_{p}\end{subarray}}dbdb^{\prime}\] and using (9.71) this is bounded by \[\ll(1+\nu(1-x\overline{x}))(1+\nu(1-\overline{x}))\frac{p^{\nu(1-x\overline{x })+\nu(1-\overline{x})}}{p^{3}}. 
\tag{9.72}\] If \(i\geq 1\) then by (9.70) we have \(j\leq\nu(1-\overline{x})+i\) and by (9.69) \[j\leq\nu(1-x\overline{x})+1\text{ and }i\leq j-1\leq\nu(1-x\overline{x}).\] The contribution of this configuration is bounded by \[\ll\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{\begin{subarray} {c}1\leq i\leq\nu(1-x\overline{x})\\ i+1\leq j\leq\nu(1-x\overline{x})+1\end{subarray}}\iint_{\begin{subarray}{c}b \in p^{-i-\nu(1-\overline{x})+1}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j+1}\mathbb{Z}_{p}\end{subarray}}\sum_{\delta,\tau\,( \operatorname{mod}p)}1dbdb^{\prime}\] \[\quad+\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{ \begin{subarray}{c}1\leq i\leq\nu(1-x\overline{x})\\ i+1\leq j\leq\nu(1-x\overline{x})+1\end{subarray}}\iint_{\begin{subarray}{c}b \in p^{-i-\nu(1-\overline{x})+1}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j}\mathbb{Z}_{p}\end{subarray}}\sum_{\tau\,(\operatorname{mod }p)}1dbdb^{\prime}\] \[\quad+\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{ \begin{subarray}{c}1\leq i\leq\nu(1-x\overline{x})\\ i+1\leq j\leq\nu(1-x\overline{x})+1\end{subarray}}\iint_{\begin{subarray}{c}b \in p^{-i-\nu(1-\overline{x})}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j+1}\mathbb{Z}_{p}\end{subarray}}\sum_{\delta\,(\operatorname{ mod}p)}1dbdb^{\prime}\] \[\quad+\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{ \begin{subarray}{c}1\leq i\leq\nu(1-x\overline{x})\\ i+1\leq j\leq\nu(1-x\overline{x})+1\end{subarray}}\iint_{\begin{subarray}{c}b \in p^{-i-\nu(1-\overline{x})}\mathbb{Z}_{p}\\ b^{\prime}\in p^{-j}\mathbb{Z}_{p}\end{subarray}}dbdb^{\prime}\] which is bounded by \[\ll(1+\nu(1-x\overline{x}))^{2}\frac{p^{2\nu(1-x\overline{x})+\nu(1- \overline{x})}}{p^{3}}. \tag{9.73}\] Combining (9.68) with the bounds (9.72) and (9.73) for \(\nu(1-\overline{x})\geq 1\) we obtain \[\mathcal{I}_{p}(x;2)\ll(1+\nu(1-x\overline{x}))(1+\nu(1-x\overline{x})+\nu(1 -\overline{x}))\frac{p^{2\nu(1-x\overline{x})+\max(1,\nu(1-\overline{x}))}}{p ^{3}}. 
\tag{9.74}\] **Case III.** We assume now that the Iwasawa decompositions of \(u\) and \(v\) are of the form \[u=\begin{pmatrix}p^{i}&&p^{i}b\\ &1&\\ &&p^{-i}\end{pmatrix}\begin{pmatrix}1&&\mu\\ &1&\\ &&1\end{pmatrix}J\begin{pmatrix}\tau&&\\ &1&\\ &&1\end{pmatrix}\gamma_{1},\] \[v=\begin{pmatrix}p^{j}&&p^{j}b^{\prime}\\ &1&\\ &&p^{-j}\end{pmatrix}\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\gamma_{2},\] where \(\tau,\delta\in(\mathbb{Z}/p\mathbb{Z})^{\times}\), \(\mu\in\mathbb{Z}/p\mathbb{Z}\), and \(\gamma_{1},\gamma_{2}\in I_{p}^{\prime}(1).\) Let \(\mathcal{I}_{p}(x;3)\) be the contribution of \(u,v\) of the above forms to \(\mathcal{I}_{p}(x).\) We first look for some necessary condition for \(\mathcal{I}_{p}(x;3)\) to be non zero. We have \(f_{p}^{\mathfrak{n}_{p}}(u^{-1}\gamma(x)Jv)\neq 0\) if and only if \[\begin{pmatrix}a&e+\delta p^{-1}a&z\\ -p^{j}&x-\delta p^{j-1}&f\\ p^{i+j}-\tau p^{j-1}&p^{i}(1-x)+\delta p^{i+j-1}-\frac{\tau}{p}(x-\delta p^{j- 1})&g-\tau p^{-1}f\end{pmatrix}\in K_{p}. \tag{9.75}\] Taking the inverse of (9.75) we then obtain that \[\begin{pmatrix}a^{\prime}-\delta p^{i-1}(1-\overline{x})&e^{\prime}-\frac{ \delta}{p}\overline{x}+\frac{\tau}{p}(z^{\prime}-\delta p^{-1}f^{\prime})&z^{ \prime}-\delta p^{-1}f^{\prime}\\ p^{i}(1-\overline{x})&\overline{x}+\tau p^{-1}f^{\prime}&f^{\prime}\\ p^{i+j}&-p^{j}+\tau p^{-1}g^{\prime}&g^{\prime}\end{pmatrix}\in K_{p}. \tag{9.76}\] These conditions imply in particular that \[i+j\geq 0,\ j\geq 1,\ i+\nu(1-\overline{x})\geq 0 \tag{9.77}\] and therefore \(\nu(x)\geq 0\). Also \(f^{\prime},\overline{x}+\tau p^{-1}f^{\prime}\in\mathbb{Z}_{p}\) implies that \(\nu(\overline{x})\geq-1\). This proves Proposition 9.8 (in a stronger form) in Case III. We assume now that \(x,\overline{x}\in\mathbb{Z}_{p}\). 
We have \[f^{\prime},g^{\prime}\in p\mathbb{Z}_{p},\] so \(j\geq 1\), \(\nu(f^{\prime})\geq 1\) and \(\nu(g^{\prime})\geq 1.\) Consequently, since \[p^{j-i}=p^{j}f^{\prime}-(1-\overline{x})g^{\prime}\in p\mathbb{Z}_{p}\] (see (9.47)) we have \[i\leq j-1 \tag{9.78}\] and \(a\in p\mathbb{Z}_{p}\), which implies that \(e\in\mathbb{Z}_{p}\). Since \(z\in\mathbb{Z}_{p}\) by (9.43) we have \[\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{x}} {1-x}ep^{-j}-\frac{ef}{1-x}=z\in\mathbb{Z}_{p}\] and since \(f,e\in\mathbb{Z}_{p}\) we have \[\nu(1-x\overline{x})-i-j\geq\min\{\nu(1-x),-i,-j+\nu(1-\overline{x}),0\}=\min \{-i,-j+\nu(1-\overline{x}),0\}\] or \[i+j\leq\nu(1-x\overline{x})+\max\{i,j-\nu(1-\overline{x}),0\}. \tag{9.79}\] Also, we have \(\nu(g)\geq-1\) so from the relation \(p^{i}f+g=p^{i-j}(1-\overline{x})\) we conclude that \[j-i-\nu(1-\overline{x})\leq\max\{1,-i\} \tag{9.80}\] and subtracting \(i\) from (9.79) we obtain \[1\leq j\leq\nu(1-x\overline{x})+\max\{1,-i\}. \tag{9.81}\] _Localisation of \(b\) and \(b^{\prime}\)._ Finally we observe that since \(f\in\mathbb{Z}_{p},\ f^{\prime}\in p\mathbb{Z}_{p}\), we see from the expressions of \(f\) and \(f^{\prime}\) in (9.42) and (9.45) that \(b\) and \(b^{\prime}\) are contained in translates of \[p^{1-i-\nu(1-\overline{x})}\mathbb{Z}_{p}\ \text{and}\ p^{-j}\mathbb{Z}_{p}\ \text{ respectively}\] which have volumes \[p^{i+\nu(1-\overline{x})-1}\ \text{and}\ p^{j}.\] Moreover we notice that if \(\nu(f)=0\) then, since \(g-\tau p^{-1}f\in\mathbb{Z}_{p}\), the congruence class \(\tau\) is uniquely determined by \(g\) and \(f\) (which depend on \(x\) and \(b^{\prime}\)), while for \(\nu(f)\geq 1\) there is no constraint on \(\tau\) but \(b^{\prime}\) varies over a translate of \(p^{-j+1}\mathbb{Z}_{p}\). 
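The counting behind a bound such as (9.68) is elementary: over the admissible range \(1\leq i\leq 1+\nu(1-x\overline{x})\), one sums the product of the Haar volumes of the \(b\) and \(b^{\prime}\) ranges (with \(\mu(p^{-k}\mathbb{Z}_{p})=p^{k}\)) times the number of free classes \(\delta,\tau\,(\operatorname{mod}p)\). A small Python sketch (purely illustrative; the function name and the constant \(4\) are ours, not from the text) checks the resulting inequality numerically:

```python
def case_II_sum(p, nu):
    """Sum of the three terms of (9.68) over 1 <= i <= nu + 1, where
    vol(p^{-k} Z_p) = p^k and each free class mod p contributes a factor p:
      p^2 * p^(i-1) * p^(i-1)   (delta and tau both free)
      2p  * p^i     * p^(i-1)   (only tau free)
      1   * p^i     * p^i       (delta and tau determined)
    """
    total = 0
    for i in range(1, nu + 2):
        total += p**2 * p**(i - 1) * p**(i - 1)
        total += 2 * p * p**i * p**(i - 1)
        total += p**i * p**i
    return total

# Each i contributes exactly 4 p^{2i}, so the whole sum is at most
# 4 (1 + nu) p^{2 nu + 2}, which after dividing by the factor of order p^4
# gives the shape (1 + nu) p^{2 nu} / p^2 stated after (9.68).
for p in (2, 3, 5, 7):
    for nu in range(6):
        assert case_II_sum(p, nu) <= 4 * (1 + nu) * p**(2 * nu + 2)
```

The same pattern (volumes of the \(b,b^{\prime}\) translates times the count of free congruence classes) underlies every contribution estimate in this section.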
_The case \(\nu(1-\overline{x})=0\)._ By (9.80) we have \(j\leq\max\{i+1,0\}\) and by (9.78) \(i+1\leq j\); since \(j\geq 1\) this forces \(i\geq 0\) and \[j=i+1.\] Therefore, the contribution from this case to \(\mathcal{I}_{p}(x;3)\) is bounded by \[\frac{\mu(I^{\prime}_{p}(1))^{2}}{\mu(K_{p})}\sum_{\mu\in\mathbb{Z }/p\mathbb{Z}}\sum_{0\leq i\leq\nu(1-x\overline{x})}\sum_{\tau}\sum_{\delta} \int_{p^{1-i}\mathbb{Z}_{p}}db\int_{p^{-i-1}\mathbb{Z}_{p}}db^{\prime}\] \[\ll p\mu(I^{\prime}_{p}(1))^{2}(1+\nu(1-x\overline{x}))p^{2\nu(1-x \overline{x})}\ll(1+\nu(1-x\overline{x}))\frac{p^{2\nu(1-x\overline{x})}}{p^{ 3}}.\] _The case \(\nu(1-\overline{x})\geq 1\)._ We have \(j-\nu(1-\overline{x})\leq j-1\) and (since \(i\leq j-1\)) we have \[i+j\leq\nu(1-x\overline{x})+j-1\Longleftrightarrow i\leq\nu(1-x\overline{x})-1.\] - If \(i\geq 0\) we have by (9.81) \[j\leq\nu(1-x\overline{x})+1\] and by (9.78) \[i+j\leq 2\nu(1-x\overline{x})+1.\] - If \(i\leq-1\) we have by (9.80) \[-i,j\leq\nu(1-\overline{x})\] and \[i+j\leq\nu(1-x\overline{x}).\] We conclude that the contribution from the case \(\nu(1-\overline{x})\geq 1\) to \(\mathcal{I}_{p}(x;3)\) is bounded by (see the paragraph on the localisation of \(b\) and \(b^{\prime}\)) \[\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{\mu\in\mathbb{Z}/p \mathbb{Z}}\sum_{-\nu(1-\overline{x})\leq i\leq j-1\leq\nu(1-x\overline{x})+1 }\int_{p^{1-i-\nu(1-\overline{x})}\mathbb{Z}_{p}}\int_{p^{-j+1}\mathbb{Z}_{p} }\sum_{\tau}\sum_{\delta}1dbdb^{\prime}\] \[+\frac{\mu(I_{p}^{\prime}(1))^{2}}{\mu(K_{p})}\sum_{\mu\in \mathbb{Z}/p\mathbb{Z}}\sum_{-\nu(1-\overline{x})\leq i\leq j-1\leq\nu(1-x \overline{x})+1}\int_{p^{1-i-\nu(1-\overline{x})}\mathbb{Z}_{p}}\int_{p^{-j} \mathbb{Z}_{p}}\sum_{\delta}1dbdb^{\prime}\] \[\ll(1+\nu(1-\overline{x})+\nu(1-x\overline{x}))^{2}\frac{p^{2\nu (1-x\overline{x})+\nu(1-\overline{x})}}{p^{2}}.\] Putting the above discussion together we then obtain \[\mathcal{I}_{p}(x;3)\ll(1+\nu(1-\overline{x})+\nu(1-x\overline{x}))^{2}\frac{ 
p^{2\nu(1-x\overline{x})+\nu(1-\overline{x})}}{p^{2}}. \tag{9.82}\] **Case IV.** Consider \(u\) and \(v\) in their Iwasawa forms: \[u=\begin{pmatrix}p^{i}&&p^{i}b\\ &1&\\ &&p^{-i}\end{pmatrix}\begin{pmatrix}\tau&&\\ &1&\\ &&1\end{pmatrix}\gamma_{1},\] \[v=\begin{pmatrix}p^{j}&&p^{j}b^{\prime}\\ &1&\\ &&p^{-j}\end{pmatrix}\begin{pmatrix}1&&\mu\\ &1&\\ &&1\end{pmatrix}J\begin{pmatrix}\delta&&\\ &1&\\ &&1\end{pmatrix}\gamma_{2},\] where \(\tau,\delta\in(\mathbb{Z}/p\mathbb{Z})^{\times}\), \(\mu\in\mathbb{Z}/p\mathbb{Z}\), and \(\gamma_{1},\gamma_{2}\in I_{p}^{\prime}(1)\). Let \(\mathcal{I}_{p}(x;4)\) be the contribution of \(u,v\) of the above forms to \(\mathcal{I}_{p}(x)\). We first look for some necessary condition for \(\mathcal{I}_{p}(x;4)\) to be non zero. Then \(f_{p}^{\mathfrak{n}_{p}}(u^{-1}\gamma(x)Jv)\neq 0\) if and only if \[\begin{pmatrix}a+\tau p^{j-1}&e+\frac{\delta}{p}z-\frac{\tau}{p}(x+\frac{ \delta}{p}f)&z-\frac{\tau}{p}f\\ -p^{j}&x+\frac{\delta}{p}f&f\\ p^{i+j}&p^{i}(1-x)+\frac{\delta}{p}g&g\end{pmatrix}\in K_{p}. \tag{9.83}\] Taking the inverse of (9.83) we then obtain that \[\begin{pmatrix}a^{\prime}&e^{\prime}+\frac{\tau}{p}a^{\prime}&z^{\prime}\\ p^{i}(1-\overline{x})&\overline{x}+\tau p^{i-1}(1-\overline{x})&f^{\prime}\\ p^{i+j}-\frac{\delta}{p}p^{i}(1-\overline{x})&-p^{j}+p^{i+j}\frac{\tau}{p}- \frac{\delta}{p}(\overline{x}+\frac{\tau}{p}p^{i}(1-\overline{x}))&g^{\prime} -\frac{\delta}{p}f^{\prime}\end{pmatrix}\in K_{p}. \tag{9.84}\] These conditions imply in particular that \[j\geq 0,\ i+j\geq 0,\ i+\nu(1-\overline{x})\geq 1.\] Moreover since \(f,x+\delta f/p\in\mathbb{Z}_{p}\) we have \(\nu(x)\geq-1\). Since \[p^{i}(1-\overline{x})\in p\mathbb{Z}_{p},\ \overline{x}+\tau p^{i}(1-\overline{x} )/p\in\mathbb{Z}_{p}\] we have \(\nu(\overline{x})\geq 0\). This proves Proposition 9.8 in Case IV (in a stronger form). We assume now that \(x,\overline{x}\in\mathbb{Z}_{p}\). 
We have \[f\in p\mathbb{Z}_{p},\ z\in\mathbb{Z}_{p}.\] Since \(z\in\mathbb{Z}_{p}\) by (9.43) we have \[\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{x}} {1-x}ep^{-j}-\frac{ef}{1-x}=z\in\mathbb{Z}_{p}\] and since \(f\in p\mathbb{Z}_{p},\ e\in p^{-1}\mathbb{Z}_{p}\) we have \[\nu(1-x\overline{x})-i-j\geq\min\{\nu(1-x),-i+1,-j+\nu(1-\overline{x})-1,0\} =\min\{-i+1,-j+\nu(1-\overline{x})-1,0\}\] or \[i+j\leq\nu(1-x\overline{x})+\max\{i-1,j-\nu(1-\overline{x})+1,0\}.\] Also from \(g\in\mathbb{Z}_{p}\) and \(p^{i}f+g=p^{i-j}(1-\overline{x})\) we have \[j-\nu(1-\overline{x})-i\leq\max(-i-1,0)\] which implies that \[j-\nu(1-\overline{x})\leq\max(-1,i)\] and \[i+j\leq\nu(1-x\overline{x})+\max\{i+1,0\}.\] We also have \(f^{\prime}\in\mathbb{Z}_{p},g^{\prime}\in p^{-1}\mathbb{Z}_{p}\) and since \[p^{j-i}=p^{j}f^{\prime}-(1-\overline{x})g^{\prime}\] we have \[j-i\geq\min(j,\nu(1-\overline{x})-1)\] and therefore \[1-\nu(1-\overline{x})\leq i\leq\max(0,j+1-\nu(1-\overline{x})).\] _The case \(i\geq 0\)._ In that case we have \[j-\nu(1-\overline{x})\leq i,\ j\leq\nu(1-x\overline{x})+1\] and therefore \[0\leq i\leq\nu(1-x\overline{x})+2\] so that \[i+j\leq 2\nu(1-x\overline{x})+3.\] _The case \(i<0\)._ We have \[1-\nu(1-\overline{x})\leq i\leq 0\leq j\leq\nu(1-\overline{x})-1\] and \[i+j\leq\nu(1-x\overline{x})\leq 2\nu(1-x\overline{x})+3.\] _Localisation of \(b\) and \(b^{\prime}\)._ We observe that since \(f\in p\mathbb{Z}_{p},\ f^{\prime}\in\mathbb{Z}_{p}\), we see from the expressions of \(f\) and \(f^{\prime}\) in (9.42) and (9.45) that \(b\) and \(b^{\prime}\) are contained in translates of \[p^{-i-\nu(1-\overline{x})}\mathbb{Z}_{p}\text{ and }p^{-j+1}\mathbb{Z}_{p} \text{ respectively}\] which have volumes \[p^{i+\nu(1-\overline{x})}\text{ and }p^{j-1}.\] Moreover we notice that if \(\nu(f^{\prime})=0\), since \(g^{\prime}-\frac{\delta}{p}f^{\prime}\in\mathbb{Z}_{p}\) the congruence class \(\delta\) is uniquely determined by \(g^{\prime}\) and 
\(f^{\prime}\) (which depend on \(x\) and \(b\)) while for \(\nu(f^{\prime})\geq 1\) there is no constraint on \(\delta\) but \(b\) varies over a translate of \(p^{-i-\nu(1-\overline{x})+1}\mathbb{Z}_{p}\) (whose volume is smaller by a factor \(p\)). Arguing as before, we deduce that \[\mathcal{I}_{p}(x;4) \ll(1+\nu(1-\overline{x})+\nu(1-x\overline{x}))^{2}\frac{p}{p^{ 4}}p^{2\nu(1-x\overline{x})+3+\nu(1-\overline{x})-1+1}\] \[\ll(1+\nu(1-\overline{x})+\nu(1-x\overline{x}))^{2}p^{2\nu(1-x \overline{x})+\nu(1-\overline{x})}.\] Putting the above discussion together we then obtain \[\mathcal{I}_{p}(x;4)\leq\mu(I^{\prime}_{p}(1))\big{[}4\nu(1-\overline{x})+\nu (1-x\overline{x})^{2}\big{]}p^{2\nu(1-x\overline{x})}. \tag{9.85}\] Combining (9.62), (9.74), (9.82) with (9.85) we then obtain \[\mathcal{I}_{p}(x)\leq \frac{2+2p^{2\nu(1-x\overline{x})}}{p}+\nu(x\overline{x}(x-1)( \overline{x}-1))^{2}p^{\nu(x\overline{x})+2\nu((\overline{x}-1)(x-1))}\] \[+\frac{8\nu((1-x\overline{x})(1-x\overline{x}))^{2}p^{2\nu(1-x \overline{x})}}{p^{2}}+\frac{4\nu(1-\overline{x})\nu(1-x\overline{x})^{2}p^{2 \nu(1-x\overline{x})}}{p}.\] Consequently, (9.40) follows readily. ### Analysis of the non-integral cases We now deal with the remaining cases when \(x\) or \(\overline{x}\) has negative valuation. **Proposition 9.9**.: _Let notation be as before. Let \(p\) be a prime divisor of \(N^{\prime}.\) Let \(x\in E^{\times}-E^{1}\) be such that_ \[\nu(x),\nu(\overline{x})\geq-1,\text{ and }\nu(x)\text{ or }\nu(\overline{x})=-1.\] _Then_ 1. _we have_ \(\mathcal{I}_{p}^{>0}(x;1)=0\) _unless_ \(\nu(x)=-1\) _and_ \(\nu(1-\overline{x})\geq 1\)_, in which case_ \[\mathcal{I}_{p}(x;1)\ll\begin{cases}p^{-4},&\text{if }\nu(x)=-1\text{ and }\nu(1-\overline{x})=1,\\ p^{-2},&\text{if }\nu(x)=-1\text{ and }\nu(1-\overline{x})\geq 2.\end{cases}\] 2. 
_we have_ \(\mathcal{I}_{p}(x;2)=0\) _unless_ \(\nu(x)=\nu(\overline{x})=-1\) _or_ \(\nu(x)=-1\), \(\nu(\overline{x})=\nu(1-\overline{x})=0\)_, in which case_ \[\mathcal{I}_{p}(x;2)\ll\begin{cases}p^{-4},&\text{if }\nu(x)=-1\text{ and }\nu( \overline{x})=-1,\\ p^{-3},&\text{if }\nu(x)=-1\text{ and }\nu(\overline{x})=\nu(1-\overline{x})=0.\end{cases}\] 3. _we have_ \(\mathcal{I}_{p}(x;3)=0\) _unless_ \(\nu(x)\geq 0\) _and_ \(\nu(\overline{x})=-1\)_, in which case_ \[\mathcal{I}_{p}(x;3)\ll\begin{cases}p^{-3},&\text{if }\nu(x)=0\text{ and }\nu( \overline{x})=-1,\\ p^{2\nu(1-x\overline{x})-1},&\text{if }\nu(x)\geq 1\text{ and }\nu(\overline{x})=-1.\end{cases}\] 4. _we have_ \(\mathcal{I}_{p}(x;4)=0\) _unless_ \(\nu(x)=-1\) _and_ \(\nu(\overline{x})\geq 0\)_, in which case_ \[\mathcal{I}_{p}(x;4)\ll\begin{cases}p^{-3},&\text{if $\nu(x)=-1$ and $\nu( \overline{x})=\nu(1-\overline{x})=0$};\\ p^{2\nu(1-x\overline{x})-1},&\text{if $\nu(x)=-1$ and $\nu(\overline{x})\geq 1$};\\ p^{-3},&\text{if $\nu(x)=-1$ and $\nu(1-\overline{x})=1$};\\ \nu(1-\overline{x})p^{2\nu(1-\overline{x})-5},&\text{if $\nu(x)=-1$ and $\nu(1-\overline{x})\geq 2$}. \end{cases}\] Proof.: We shall keep the notation in the proof of Proposition 9.8 and investigate the four cases therein. **Case I.** We start by recalling the integrality conditions in that case: \[\begin{pmatrix}a&e+\frac{\delta}{p}z&z\\ -p^{j}&x+\frac{\delta}{p}f&f\\ p^{i+j}+\tau p^{j-1}&p^{i}(1-x)+\frac{\delta}{p}g-\frac{\tau}{p}(x+\delta p^{-1 }f)&g-\tau p^{-1}f\end{pmatrix}\in K_{p}, \tag{9.86}\] \[\begin{pmatrix}a^{\prime}&e^{\prime}+\frac{\tau}{p}z^{\prime}&z^{ \prime}\\ p^{i}(1-\overline{x})&\overline{x}+\frac{\tau}{p}f^{\prime}&f^{\prime}\\ p^{i+j}-\delta p^{i-1}(1-\overline{x})&-p^{j}+\frac{\tau}{p}g^{\prime}-\frac{ \delta}{p}(\overline{x}+\frac{\tau}{p}f^{\prime})&g^{\prime}-\frac{\delta}{p} f^{\prime}\end{pmatrix}\in K_{p}. \tag{9.87}\] Suppose that \(\nu(\overline{x})=-1\). 
The inclusions \[f^{\prime},\overline{x}+\tau p^{-1}f^{\prime}\in\mathbb{Z}_{p}\] imply that \(\nu(f^{\prime})=0\) and, since \(g^{\prime}-\delta p^{-1}f^{\prime}\in\mathbb{Z}_{p}\), we have \(\nu(g^{\prime})=-1\), which in turn would imply (since \(j\geq 0\)) \[\nu(-p^{j}+\tau p^{-1}g^{\prime}-\delta p^{-1}(\overline{x}+\tau p^{-1}f^{ \prime}))=\nu(\tau p^{-1}g^{\prime})=-2,\] a contradiction. We have therefore \(\nu(\overline{x})\geq 0\) and \(\nu(x)=-1\). It follows from \[\nu(x+\delta p^{-1}f),\ \nu(f)\geq 0\] that \(\nu(f)=0\) and since \(\nu(g-\tau p^{-1}f)\geq 0\) we then have \(\nu(g)=-1\). We have also \[p^{i}(1-x)+\delta p^{-1}g-\tau p^{-1}(x+\delta p^{-1}f)\in\mathbb{Z}_{p}.\] In the above expression, the three terms on the lefthand side have respective valuations \(i-1,\ -2,\geq-1\); this forces \(i=-1\) and since \(p^{i}(1-\overline{x})\in\mathbb{Z}_{p}\) we have \[\nu(1-\overline{x})\geq 1;\] this implies that \(\nu(\overline{x})=0\) and \(\nu(x\overline{x}-1)=-1\). This implies also that \(\nu(f^{\prime})\geq 1\) and \(\nu(g^{\prime})\geq 0\). If \(\nu(1-\overline{x})=1\), then since \[p^{-1+j}-\delta p^{-2}(1-\overline{x})\in\mathbb{Z}_{p}\] and the second term has valuation \(-1\) we must have \(j=0\). If \(\nu(1-\overline{x})\geq 2\) then since \[(1-x\overline{x})p^{1-j}+fp-(1-\overline{x})ep^{-j}-ef=(1-x)z\in p^{-1} \mathbb{Z}_{p}\] and the terms above have respective valuations \[=-j,\ \geq 1,\ \geq 1-j,\ \geq-1\] we conclude that \(0\leq j\leq 1\). Moreover, looking at the \((3,1)\)-th entry of (9.87), we derive that \(p^{-1+j}\in\mathbb{Z}_{p}\), implying that \(j\geq 1.\) So the assumption that \(\nu(1-\overline{x})\geq 2\) forces \(j=1\). 
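The case analysis above consists entirely of bookkeeping with the valuation \(\nu\): multiplicativity, the ultrametric inequality, and its equality case when the two valuations differ. For concreteness, a minimal Python sketch of \(\nu\) on nonzero rationals (the function name `nu` is ours, not from the text) reproduces these identities:

```python
from fractions import Fraction

def nu(r, p):
    """p-adic valuation of a nonzero rational r."""
    r = Fraction(r)
    v, num, den = 0, r.numerator, r.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p = 5
r, s = Fraction(1, 5), Fraction(7, 25)  # nu(r) = -1, nu(s) = -2
assert nu(r * s, p) == nu(r, p) + nu(s, p)
# ultrametric inequality, an equality when the two valuations differ:
assert nu(r + s, p) == min(nu(r, p), nu(s, p))
```

This equality case is exactly what is used above: when one term in a sum has strictly smaller valuation than the others, the sum inherits that valuation, which is how conditions such as \(i=-1\) are forced.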
_Localisation of \(b\) and \(b^{\prime}\)._ Since \(g^{\prime}\in\mathbb{Z}_{p}\), we see that \(b\) belongs to a translate of \(p^{1-j}\mathbb{Z}_{p}\) and since \(a^{\prime}\in\mathbb{Z}_{p}\) we see that \(b^{\prime}\) belongs to a translate of \(p^{-i-j}\mathbb{Z}_{p}=p^{1-j}\mathbb{Z}_{p}\). In particular, the translates depend _only_ on \(x\). _Localisation of \(\delta\) and \(\tau\)._ For fixed \(b\) and \(b^{\prime}\), we show that \(\delta\) and \(\tau\) are determined uniquely. Since \(\nu(x)=-1\) we have \(\nu(f)=0\), and \(\delta\,(\operatorname{mod}p)\) is determined by \(f\). If \(j=0\) then since \(p^{-1}+\tau p^{-1}\in\mathbb{Z}_{p}\) we have \(\tau\equiv-1\,(\operatorname{mod}p)\). Suppose now that \(j=1\); we have, by considering the \((3,2)\)-th entry of (9.87), that \[\tau(g^{\prime}-\delta p^{-1}f^{\prime})-\delta\overline{x}\in p\mathbb{Z}_{p}\] so if \(\nu(g^{\prime}-\delta p^{-1}f^{\prime})=0\), \(\tau\,(\operatorname{mod}p)\) is determined. Otherwise \(\nu(z^{\prime})=0\) because the last column of (9.87) cannot be divisible by \(p\) and the last two entries are. In that case, the condition \(e^{\prime}+\frac{\tau}{p}z^{\prime}\in\mathbb{Z}_{p}\) determines \(\tau\,(\operatorname{mod}p)\) in terms of \(e^{\prime}\) and \(z^{\prime}\). 
Hence the corresponding contribution in the case \(\nu(1-\overline{x})=1\) to \(\mathcal{I}_{p}(x;1)\) is \[\ll\frac{1}{p^{4}}\sum_{\mu\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\sum_{\mu^{ \prime}\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\int_{p\mathbb{Z}_{p}}\int_{p\mathbb{Z}_{p}}db^{\prime}db \ll p^{-4}\] and the corresponding contribution in the case \(\nu(1-\overline{x})\geq 2\) to \(\mathcal{I}_{p}(x;1)\) is \[\ll\frac{1}{p^{4}}\sum_{\mu\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\sum_{\mu^{ \prime}\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\int_{\mathbb{Z}_{p}}\int_{\mathbb{Z}_{p}}db^{\prime}db \ll p^{-2}.\] In conclusion we obtain that \[\mathcal{I}_{p}(x;1)\ll\delta_{\nu(1-\overline{x})=1}p^{-4}+\delta_{\nu(1- \overline{x})\geq 2}p^{-2}.\] **Case II.** We suppose we are in Case II and assume that \(\nu(x)\geq 0\) and \(\nu(\overline{x})=-1\). Considering (9.63) and (9.64) we see that \[i=1,\ j\geq 1.\] Also from the above equations, we have \[p^{i}f+g=(\overline{x}-1)p^{i-j},\ f,g\in\mathbb{Z}_{p},\] so \(i-j\geq 1\) and \(j\leq 0\): a contradiction. Hence, we only have the following two possible cases: * \(\nu(x)=\nu(\overline{x})=-1\). We then have \(j=0\), \(i=1\), \(\tau\equiv 1\,(\operatorname{mod}p)\) and \(\delta\equiv px\,(\operatorname{mod}p)\). Moreover, since \(f,f^{\prime}\in\mathbb{Z}_{p}\) we see that \(b,b^{\prime}\) belong to translates of \(\mathbb{Z}_{p}\) determined by \(x\) and \(\overline{x}\). We then have \[\mathcal{I}_{p}(x;2)\ll p^{-4}.\] * \(\nu(x)=-1\) and \(\nu(\overline{x})\geq 0\). We have \(j=0\) and \(\delta\equiv px\,(\operatorname{mod}p)\). Also, since \(f\in\mathbb{Z}_{p}\) we see that \(b^{\prime}\) belongs to a translate of \(\mathbb{Z}_{p}\) determined by \(x\) and \(\overline{x}\). Since \[-p^{j}+\tau p^{i+j-1}\in\mathbb{Z}_{p}\] we obtain \(i\geq 1\), and since \[a\in p^{-1}\mathbb{Z}_{p},\ g^{\prime}\in\mathbb{Z}_{p}\text{ and }a+g^{\prime}=-p^{j-i}=-p^{-i}\in p^{-1}\mathbb{Z}_{p},\] we have \(i\leq 1\) and therefore \(i=1\). 
Since \(a(1-\overline{x})+f^{\prime}=\overline{x}p^{-1}\) and \(a+\tau p^{-1}\in\mathbb{Z}_{p}\) we have \[f^{\prime}\in\overline{x}p^{-1}+\tau(1-\overline{x})p^{-1}+\mathbb{Z}_{p};\] but \(f^{\prime}\in\mathbb{Z}_{p}\), therefore \[\overline{x}p^{-1}+\tau(1-\overline{x})p^{-1}\in\mathbb{Z}_{p},\] which implies that \(\nu(\overline{x})=\nu(1-\overline{x})=0\) and \(\tau\;(\operatorname{mod}p)\) is uniquely determined by \(\overline{x}.\) Finally, since \(g^{\prime}\in\mathbb{Z}_{p},\) one has \(b\in\frac{1}{2}p^{-2}+p^{-1}\mathbb{Z}_{p}.\) It follows that in this case we have \[\mathcal{I}_{p}(x;2)\ll\frac{1}{p^{4}}\int_{\mathbb{Z}_{p}}db^{\prime}\int_{p^{- 1}\mathbb{Z}_{p}}db=\frac{1}{p^{3}}.\] **Case III.** Consider now Case III; we recall the two integrality conditions (9.75) and (9.76): \[\begin{pmatrix}a&e+\delta p^{-1}a&z\\ -p^{j}&x-\delta p^{j-1}&f\\ p^{i+j}-\tau p^{j-1}&p^{i}(1-x)+\delta p^{i+j-1}-\frac{\tau}{p}(x-\delta p^{j- 1})&g-\tau p^{-1}f\end{pmatrix}\in K_{p}. \tag{9.88}\] \[\begin{pmatrix}a^{\prime}-\delta p^{i-1}(1-\overline{x})&e^{\prime}-\frac{ \delta}{p}\overline{x}+\frac{\tau}{p}(z^{\prime}-\delta p^{-1}f^{\prime})&z^{ \prime}-\delta p^{-1}f^{\prime}\\ p^{i}(1-\overline{x})&\overline{x}+\tau p^{-1}f^{\prime}&f^{\prime}\\ p^{i+j}&-p^{j}+\tau p^{-1}g^{\prime}&g^{\prime}\end{pmatrix}\in K_{p}. \tag{9.89}\] We also recall the expressions (9.42): \[\begin{cases}a=-\frac{1}{2}p^{j-i}-p^{i+j}b\\ e=\frac{p^{-i}(1+x)}{2}-p^{i}b(1-x)\\ f=-p^{j}b^{\prime}+p^{-j}\frac{(x+1)(\overline{x}-1)}{2}\\ g=p^{i+j}b^{\prime}-\frac{(x-1)(\overline{x}-1)}{2}p^{i-j}\\ z=-\frac{1}{2}p^{j-i}b^{\prime}+p^{-i-j}y-p^{i+j}bb^{\prime}+p^{i-j}b\frac{(x- 1)(\overline{x}-1)}{2}\end{cases}\] where \[y=\frac{x\overline{x}+3\overline{x}-x+1}{4}.\] Then one has an explicit algebraic relation \[z=\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{ x}}{1-x}ep^{-j}-\frac{ef}{1-x}. 
\tag{9.90}\] Similarly we recall (9.45): \[\begin{cases}a^{\prime}=-\frac{(1-x)(1-\overline{x})}{2}p^{i-j}-p^{i+j}b^{ \prime}\\ e^{\prime}=\frac{(x-1)(\overline{x}+1)}{2}p^{-j}+p^{j}b^{\prime}\\ f^{\prime}=(1-\overline{x})p^{i}b+\frac{1+\overline{x}}{2}p^{-i}\\ g^{\prime}=p^{i+j}b-\frac{1}{2}p^{j-i}\\ z^{\prime}=-p^{i-j}b\frac{(x-1)(\overline{x}-1)}{2}+p^{-i-j}\overline{y}-p^{i+ j}bb^{\prime}+\frac{1}{2}p^{j-i}b^{\prime}\end{cases} \tag{9.91}\] and one notes the algebraic relation \[z^{\prime}=\frac{1-x\overline{x}}{1-\overline{x}}p^{-i-j}+\frac{e^{\prime}}{ 1-\overline{x}}p^{-i}-\frac{1-x}{1-\overline{x}}f^{\prime}p^{-j}-\frac{e^{ \prime}f^{\prime}}{1-\overline{x}}.\] We also recall that \[a+g^{\prime}=-p^{j-i},\ a^{\prime}+g=-(1-x)(1-\overline{x})p^{i-j},\] \[p^{i}f+g=p^{i-j}(\overline{x}-1),\ p^{j}f^{\prime}-(1-\overline{x})g^{\prime }=p^{j-i}.\] Suppose that \(\nu(x)=-1.\) We have \(j=0\) and since \(p^{i+j}-\tau p^{j-1}\in\mathbb{Z}_{p}\) we have \(i=-1,\) which contradicts the condition \(p^{i+j}\in\mathbb{Z}_{p}.\) So we must have \(\nu(x)\geq 0\) and therefore \(\nu(\overline{x})=-1.\) This implies that \(i,j\geq 1\) and since \[p^{i-j}(\overline{x}-1)=p^{i}f+g\in p^{-1}\mathbb{Z}_{p}\] we have \(i\geq j\geq 1.\) We also have \(f^{\prime},p^{-1}g^{\prime}\in\mathbb{Z}_{p}\) and since \[p^{j-i}=p^{j}f^{\prime}-(1-\overline{x})g^{\prime}\in\mathbb{Z}_{p}\] we have \(j\geq i\) and \[i=j\geq 1.\] _Localisation of \(b\) and \(b^{\prime}\)._ We observe that the conditions \(\nu(\overline{x})=-1\) and \(\overline{x}+\tau f^{\prime}/p\in\mathbb{Z}_{p}\) imply that \(f^{\prime}\in\mathbb{Z}_{p}^{\times}\) and that \[f^{\prime}\in\tau^{-1}p\overline{x}+p\mathbb{Z}_{p}.\] We also note that since \[f^{\prime}=(1-\overline{x})p^{i}b+\frac{1+\overline{x}}{2}p^{-i}\] we have \[(1-\overline{x})p^{i}b+\frac{1+\overline{x}}{2}p^{-i}-\tau^{-1}p\overline{x} \in p\mathbb{Z}_{p}\] which implies that \(b\) belongs to a translate (depending on \(x\) and \(\tau\)) of 
\[(1-\overline{x})^{-1}p^{1-i}\mathbb{Z}_{p}=p^{2-i}\mathbb{Z}_{p}.\] We also have \[p^{i}f+g=\overline{x}-1,\;g-\tau p^{-1}f\in\mathbb{Z}_{p}\] so that \[(\tau p^{-1}+p^{i})f+\overline{x}\in\mathbb{Z}_{p}.\] Since \[f=-p^{i}b^{\prime}+p^{-i}\frac{(x+1)(\overline{x}-1)}{2}\] we conclude that \(b^{\prime}\) belongs to a translate (depending on \(x\) and \(\tau\)) of \[(\tau p^{-1}+p^{i})^{-1}p^{-i}\mathbb{Z}_{p}=p^{1-i}\mathbb{Z}_{p}.\] We now consider the possible values of \(i=j\geq 1\). _The case \(i=1\)._ Suppose that \(i=j=1\). The inclusion \[p(1-x)+\delta p-\frac{\tau}{p}(x-\delta)\in\mathbb{Z}_{p}\] implies that \(x-\delta\in p\mathbb{Z}_{p}\). This implies that \(\nu(x)=0\) and that the congruence class \(\delta\left(\operatorname{mod}p\right)\) is determined by \(x\). Remembering that for \(i=j=1\), \(b\) and \(b^{\prime}\) belong respectively to additive translates of \(p\mathbb{Z}_{p}\) and \(\mathbb{Z}_{p}\), we conclude that for \(\nu(\overline{x})=-1\) and \(\nu(x)=0\), the contribution to \(\mathcal{I}_{p}(x;3)\) of the case \(i=1\) is bounded by \[\mathcal{I}_{p}^{i=1}(x;3)\ll\frac{1}{p^{4}}p^{1+1-1+0}\leq\frac{1}{p^{3}}.\] _The case \(i\geq 2\)._ Suppose that \(i=j\geq 2\). We observe that \(a\) is a unit because the two other terms in the first column of (9.88) are divisible by \(p\); this implies that \(\nu(e)=-1\) and that the congruence class \(\delta\left(\operatorname{mod}p\right)\) is determined by \(e\) and \(a\) (so by \(x\) and \(b\)). By (9.90) we have \[(1-x\overline{x})+fp^{i}-(1-\overline{x})ep^{i}-efp^{2i}=(1-x)zp^{2i}\in p^{2i} \mathbb{Z}_{p}, \tag{9.92}\] and since \(\nu(f)\geq 0\), \(\nu(e)=-1\) we obtain that \[\nu((1-\overline{x})ep^{i})=i-2.\] Since the second and last term of (9.92) have valuation \(>i-2\) we have \[\nu(1-x\overline{x})=i-2\geq 0.\] This is only possible if \(\nu(x)\geq 1\). 
Therefore, the contribution of this case to \(\mathcal{I}_{p}(x;3)\) is bounded by \[\mathcal{I}_{p}^{i\geq 2}(x;3)\ll\frac{1}{p^{4}}\sum_{\mu\in\mathbb{Z}_{p}/p \mathbb{Z}_{p}}\sum_{\tau}\int_{p^{2-i}\mathbb{Z}_{p}}\int_{p^{1-i}\mathbb{Z}_{p }}db^{\prime}db\ll p^{-4+2+2i-3}=p^{2\nu(1-x\overline{x})-1}\] and when \(\nu(\overline{x})=-1,\ \nu(x)\geq 0\) we have \[\mathcal{I}_{p}(x;3)\ll p^{2\nu(1-x\overline{x})}.\] **Case IV.** Consider Case IV, and recall (9.83) and (9.84): \[\begin{pmatrix}a+\tau p^{j-1}&e+\frac{\delta}{p}z-\frac{\tau}{p}(x+\frac{\delta }{p}f)&z-\frac{\tau}{p}f\\ -p^{j}&x+\frac{\delta}{p}f&f\\ p^{i+j}&p^{i}(1-x)+\frac{\delta}{p}g&g\end{pmatrix}\in K_{p}. \tag{9.93}\] \[\begin{pmatrix}a^{\prime}&e^{\prime}+\frac{\tau}{p}a^{\prime}&z^{\prime}\\ p^{i}(1-\overline{x})&\overline{x}+\tau p^{i-1}(1-\overline{x})&f^{\prime}\\ p^{i+j}-\frac{\delta}{p}p^{i}(1-\overline{x})&-p^{j}+p^{i+j}\frac{\tau}{p}- \frac{\delta}{p}(\overline{x}+\frac{\tau}{p}p^{i}(1-\overline{x}))&g^{\prime} -\frac{\delta}{p}f^{\prime}\end{pmatrix}\in K_{p}. \tag{9.94}\] We also recall the expressions (9.42): \[\begin{cases}a=-\frac{1}{2}p^{j-i}-p^{i+j}b\\ e=\frac{p^{-i}(1+x)}{2}-p^{i}b(1-x)\\ f=-p^{j}b^{\prime}+p^{-j}\frac{(x+1)(\overline{x}-1)}{2}\\ g=p^{i+j}b^{\prime}-\frac{(x-1)(\overline{x}-1)}{2}p^{i-j}\\ z=-\frac{1}{2}p^{j-i}b^{\prime}+p^{-i-j}y-p^{i+j}bb^{\prime}+p^{i-j}b\frac{(x- 1)(\overline{x}-1)}{2}\end{cases}\] where \[y=\frac{x\overline{x}+3\overline{x}-x+1}{4}.\] Then one has an explicit algebraic relation \[z=\frac{1-x\overline{x}}{1-x}p^{-i-j}+\frac{f}{1-x}p^{-i}-\frac{1-\overline{ x}}{1-x}ep^{-j}-\frac{ef}{1-x}. 
\tag{9.95}\] \[\begin{cases}a^{\prime}=-\frac{(1-x)(1-\overline{x})}{2}p^{i-j}-p^{i+j}b^{\prime}\\ e^{\prime}=\frac{(x-1)(\overline{x}+1)}{2}p^{-j}+p^{j}b^{\prime}\\ f^{\prime}=(1-\overline{x})p^{i}b+\frac{1+\overline{x}}{2}p^{-i}\\ g^{\prime}=p^{i+j}b-\frac{1}{2}p^{j-i}\\ z^{\prime}=-p^{i-j}b\frac{(x-1)(\overline{x}-1)}{2}+p^{-i-j}\overline{y}-p^{i+j}bb^{\prime}+\frac{1}{2}p^{j-i}b^{\prime}\end{cases} \tag{9.96}\] and one notes the algebraic relation \[z^{\prime}=\frac{1-x\overline{x}}{1-\overline{x}}p^{-i-j}+\frac{e^{\prime}}{1-\overline{x}}p^{-i}-\frac{1-x}{1-\overline{x}}f^{\prime}p^{-j}-\frac{e^{\prime}f^{\prime}}{1-\overline{x}}. \tag{9.97}\] We also note that \[a+g^{\prime}=-p^{j-i},\ a^{\prime}+g=-(1-x)(1-\overline{x})p^{i-j},\] \[p^{i}.f+g=p^{i-j}(\overline{x}-1),\ p^{j}f^{\prime}-(1-\overline{x})g^{\prime}=p^{j-i}.\] We assume now that \(\nu(x),\nu(\overline{x})\geq-1\) and that one of the two equals \(-1\). These conditions imply first that \[j\geq 0,\ i+j\geq 0,\ i+\nu(1-\overline{x})\geq 0.\] Suppose \(\nu(\overline{x})=-1.\) Then \(i=1\) (since \(\overline{x}+\tau p^{i-1}(1-\overline{x})\in\mathbb{Z}_{p}\)) and we get a contradiction from \(p^{i+j}-\frac{\delta}{p}p^{i}(1-\overline{x})\in\mathbb{Z}_{p}.\) So we must have \(\nu(\overline{x})\geq 0\) and \(\nu(x)=-1.\) Since \(p^{i}(1-x)+\frac{\delta}{p}g\in\mathbb{Z}_{p}\) we have \(i\geq 0.\) We now distinguish two subcases. 
**The case \(\nu(1-\overline{x})=0\).** Note that \[-(1-x)(1-\overline{x})p^{i-j}=a^{\prime}+g\in\mathbb{Z}_{p},\] which implies that \(i-j\geq 1.\) Since \(j\geq 0,\) it follows that \(i\geq 1.\) _The case \(i=1.\)_ Suppose \(i=1.\) Then \(j=0.\) Since the \((3,2)\)-th entry of (9.94) is in \(\mathbb{Z}_{p}\) we see that the \((2,2)\)-th entry of (9.94) lies in \(p\mathbb{Z}_{p},\) which implies that \(\tau\) is determined by \(\overline{x}.\) From the \((2,2)\)-th entry of (9.93) we see that \(f+\delta^{-1}px\in p\mathbb{Z}_{p}.\) So \(b^{\prime}\) lies in a translate of \(p\mathbb{Z}_{p}.\) The identity \(g^{\prime}=(1-\overline{x})^{-1}f^{\prime}+(1-\overline{x})^{-1}p^{-1}\) together with \(g^{\prime}-\delta p^{-1}f^{\prime}\in\mathbb{Z}_{p}\) implies that \(f^{\prime}\) belongs to a translate of \(p\mathbb{Z}_{p}\) and that \(b\) belongs to a translate of \(\mathbb{Z}_{p}\) (depending on \(x\) and \(\delta\)). Therefore, the contribution in this case to \(\mathcal{I}_{p}(x;4)\) is \[\ll\frac{1}{p^{4}}\sum_{\mu\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\sum_{\delta\in(\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}}\int_{\mathbb{Z}_{p}}db\int_{p\mathbb{Z}_{p}}db^{\prime}\ll p^{-3}.\] _The case \(i\geq 2\)_. Now we suppose \(i\geq 2.\) Looking at the first column of (9.94) we see that \(\nu(a^{\prime})=0\) (because the whole column cannot be \(0\) modulo \(p\)). 
This also implies that \(\nu(e^{\prime})=-1.\) Since the \((3,2)\)-th entry of (9.93) is integral, we conclude that \(\nu(g)\geq 1\) and that \(a^{\prime}+g=-(1-x)(1-\overline{x})p^{i-j}\) is a unit so that \[i-j=1\] and \(j\geq 1.\) Next we recall that \[(1-x\overline{x})+e^{\prime}p^{j}-(1-x)f^{\prime}p^{i}-e^{\prime}f^{\prime}p^{i+j}=(1-\overline{x})z^{\prime}p^{i+j}\in p^{i+j}\mathbb{Z}_{p}.\] Observe that \[\nu(e^{\prime}p^{j})=i-2,\ \nu((1-x)f^{\prime}p^{i})\geq i-1,\ \nu(e^{\prime}f^{\prime}p^{i+j})\geq 2i-2;\] the first valuation is therefore the smallest as \(i\geq 2;\) from this we conclude that \[\nu(1-x\overline{x})=i-2\geq 0.\] In particular \(\nu(\overline{x})\geq 1.\) _Localization of \(b,b^{\prime}\)._ Finally since \(x+\delta p^{-1}f\in\mathbb{Z}_{p}\) we see that \(b^{\prime}\) belongs to a translate of \(p^{1-j}\mathbb{Z}_{p}=p^{2-i}\mathbb{Z}_{p}.\) The identity \(g^{\prime}=(1-\overline{x})^{-1}p^{i-1}f^{\prime}+(1-\overline{x})^{-1}p^{-1}\) together with \(g^{\prime}-\delta p^{-1}f^{\prime}\in\mathbb{Z}_{p}\) implies that \(f^{\prime}\) belongs to a translate of \(p\mathbb{Z}_{p}\) and that \(b\) belongs to a translate of \(p^{1-i}\mathbb{Z}_{p}\) (depending on \(x\) and \(\delta\)). _Localization of \(\tau\)._ Comparing valuations on both sides of (9.95) we derive that \[\frac{1-x\overline{x}}{1-x}p^{-i-j}-\frac{1-\overline{x}}{1-x}ep^{-j}\in p^{-i+1}\mathbb{Z}_{p}.\] As a consequence, we have \[(1-x\overline{x})p^{-i}-(1-\overline{x})e\in p^{-1}\mathbb{Z}_{p}. 
\tag{9.98}\] On the other hand, considering the \((1,2)\), \((1,3)\) and \((2,2)\)-th entries of (9.93), one has \[e+\delta\tau p^{-2}f\in p^{-1}\mathbb{Z}_{p},\ \ z-\tau p^{-1}f\in\mathbb{Z}_{p},\ \ x+\delta p^{-1}f\in\mathbb{Z}_{p}.\] Hence \(e-\tau p^{-1}x\in p^{-1}\mathbb{Z}_{p}.\) In conjunction with (9.98) we deduce that \[(1-x\overline{x})p^{-i}-\tau p^{-1}x(1-\overline{x})\in p^{-1}\mathbb{Z}_{p}.\] So \(\tau\) is determined by \(x.\) Hence, in this case, i.e., \(i\geq 2\), the corresponding contribution to \(\mathcal{I}_{p}(x;4)\) is \[\ll\frac{1}{p^{4}}\sum_{\mu\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\sum_{\delta\in(\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}}\int_{p^{1-i}\mathbb{Z}_{p}}db\int_{p^{2-i}\mathbb{Z}_{p}}db^{\prime}\ll p^{2\nu(1-x\overline{x})-1}.\] In conclusion, for \(\nu(x)=-1\) and \(\nu(1-\overline{x})=0\) we have \[\mathcal{I}_{p}(x;4)\ll\delta_{\nu(\overline{x})=\nu(1-\overline{x})=0}p^{-3}+\delta_{\nu(\overline{x})\geq 1}p^{2\nu(1-x\overline{x})}.\] **Case \(\nu(1-\overline{x})\geq 1\).** We then have \(\nu(\overline{x})=0\) and \(\nu(1-x\overline{x})=-1\). Suppose that \(i\geq 1\); then, since \(j\geq 0\), \(i+j-1\geq 0\), \(i+\nu(1-\overline{x})-1\geq 1\) and \(\overline{x}\) is a unit, the condition \[-p^{j}+p^{i+j}\frac{\tau}{p}-\frac{\delta}{p}(\overline{x}+\frac{\tau}{p}p^{i}(1-\overline{x}))\in\mathbb{Z}_{p}\] leads to \(\delta\overline{x}p^{-1}\in\mathbb{Z}_{p}\), a contradiction. Therefore \(i\leq 0\). Moreover since \(\nu(g)\geq 0\) and \(\nu(1-x)=-1\) the condition \[p^{i}(1-x)+\frac{\delta}{p}g\in\mathbb{Z}_{p}\] forces \(i\geq 0\) so we have \[i=0.\] The condition \[-(1-x)(1-\overline{x})p^{-j}=a^{\prime}+g\in\mathbb{Z}_{p}\] gives the bound \[0\leq j\leq\nu(1-\overline{x})-1.\] _The case \(\nu(\overline{x}-1)=1\)._ Suppose \(\nu(\overline{x}-1)=1\). Then \(j=0\). From \(x+\delta p^{-1}f\in\mathbb{Z}_{p}\) we derive that \(b^{\prime}\) is in a translate of \(p\mathbb{Z}_{p}\). 
Looking at the \((1,1)\)-th entry of (9.93), we obtain that \(a+\tau p^{-1}=-b-1/2+\tau p^{-1}\in\mathbb{Z}_{p}\). So \(b\) belongs to a translate of \(\mathbb{Z}_{p}\). Considering the \((3,2)\)-th entry of (9.94), we obtain \[\tau-\delta(\overline{x}+\tau p^{-1}(1-\overline{x}))\in p\mathbb{Z}_{p}.\] Hence \(\delta\) is determined by \(\tau\). Therefore, the contribution in this case to \(\mathcal{I}_{p}(x;4)\) is \[\ll\frac{1}{p^{4}}\sum_{\mu\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\sum_{\tau\in(\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}}\int_{\mathbb{Z}_{p}}db\int_{p\mathbb{Z}_{p}}db^{\prime}\ll p^{-3}.\] _The case \(\nu(\overline{x}-1)\geq 2\)._ Now we consider the situation where \(\nu(\overline{x}-1)\geq 2\). _Localization of \(\delta,\tau\)._ Since \(\nu(x)=-1\), we see that \(f\) is a unit and that \(\delta\left(\operatorname{mod}p\right)\) is determined by \(x\) and \(f\) (which depends on \(b^{\prime}\)). In addition we see that \(\nu(z)=-1\) and \(\nu(\delta z/p)=-2\), and since \(\nu(\frac{\tau}{p}(x+\frac{\delta}{p}f))\geq-1\) we must have \(\nu(e)=-2\). Since \[e+\frac{\delta}{p}z-\frac{\tau}{p}(x+\frac{\delta}{p}f)\in\mathbb{Z}_{p}\] we see that \(e+\frac{\delta}{p}z\) has valuation \(-1\), which implies that \(\delta\left(\operatorname{mod}p\right)\) is determined by \(e\) and \(z\), which depend on \(b\) and \(b^{\prime}\). 
_Localization of \(b,b^{\prime}.\)_ Since \(x+\delta p^{-1}f\in\mathbb{Z}_{p}\) we derive that \(b^{\prime}\) is in a translate of \(p^{1-j}\mathbb{Z}_{p},\) and since \(a+\tau p^{j-1}\in\mathbb{Z}_{p},\) \(b\) belongs to a translate of \(p^{-j}\mathbb{Z}_{p}.\) Therefore, the contribution in this case to \(\mathcal{I}_{p}(x;4)\) is \[\ll\frac{1}{p^{4}}\sum_{0\leq j\leq\nu(1-\overline{x})-1}\sum_{\mu\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\sum_{\tau\in(\mathbb{Z}_{p}/p\mathbb{Z}_{p})^{\times}}\int_{p^{-j}\mathbb{Z}_{p}}db\int_{p^{1-j}\mathbb{Z}_{p}}db^{\prime}\ll\nu(1-\overline{x})p^{2\nu(1-\overline{x})-5}.\] In conclusion, for \(\nu(x)=-1,\ \nu(\overline{x})\geq 0\) and \(\nu(1-\overline{x})\geq 1\) we have \[\mathcal{I}_{p}(x;4)\ll\delta_{\nu(1-\overline{x})=1}p^{-3}+\delta_{\nu(1-\overline{x})\geq 2}\nu(1-\overline{x})p^{2\nu(1-\overline{x})-5}.\] This concludes the proof of Proposition 9.9. #### 9.6.3. The archimedean place In this subsection we study the archimedean local orbital integral \(\mathcal{I}_{\infty}(x)\). Recall the definition (given in (9.13)): \[\mathcal{I}_{\infty}(x)=\int_{H_{x}(\mathbb{R})\setminus G^{\prime}(\mathbb{R})\times G^{\prime}(\mathbb{R})}\big{|}f_{\infty}(y_{1}^{-1}\gamma(x)Jy_{2})\big{|}dy_{1}dy_{2},\ x\in E^{\times}-E^{1}.\] **Proposition 9.10**.: _Let notation be as before. 
Let \(x\in E^{\times}-E^{1}.\) Define_ \[\langle x\rangle:=\begin{cases}|x|^{2}+1&\text{if }|x|<1,\\ |x|^{2}&\text{if }|x|>1.\end{cases} \tag{9.99}\] _Then for \(k\geq 32\)_ \[\mathcal{I}_{\infty}(x)\ll\frac{1}{k\langle x\rangle^{\frac{k}{4}-2}(|x|^{2}-1)^{2}}, \tag{9.100}\] _where the implied constant is absolute, and the absolute value \(|\cdot|\) is the usual norm in \(\mathbb{C}.\)_ Before engaging with the proof we will need two elementary lemmas. **Lemma 9.11**.: _Let \(A,B,C>0.\) Let \(m\geq 2.\) Then_ \[\int_{0}^{\infty}\frac{1}{\big{[}A+(Ba-Ca^{-1})^{2}\big{]}^{m}}\frac{da}{a^{2}}\ll\frac{1}{A^{m-\frac{1}{2}}C}, \tag{9.101}\] _where the implied constant is absolute._ Proof.: Denote by LHS the left hand side of (9.101). Then LHS \[= \int_{\sqrt{\frac{C}{B}}}^{\infty}\frac{1}{\big{[}A+(Ba-Ca^{-1})^{2}\big{]}^{m}}\frac{da}{a^{2}}+\int_{0}^{\sqrt{\frac{C}{B}}}\frac{1}{\big{[}A+(Ba-Ca^{-1})^{2}\big{]}^{m}}\frac{da}{a^{2}}\] \[\leq \int_{\sqrt{\frac{C}{B}}}^{\infty}\frac{1}{\big{[}A+(Ba-\sqrt{BC})^{2}\big{]}^{m}}\frac{da}{a^{2}}+\int_{0}^{\sqrt{\frac{C}{B}}}\frac{1}{\big{[}A+(Ca^{-1}-\sqrt{BC})^{2}\big{]}^{m}}\frac{da}{a^{2}}\] \[= \int_{0}^{\infty}\frac{da}{(a+\sqrt{CB^{-1}})^{2}(A+B^{2}a^{2})^{m}}+\int_{0}^{\infty}\frac{1}{(A+C^{2}a^{2})^{m}}da\] \[\leq \int_{0}^{\infty}\frac{da}{(\sqrt{CB^{-1}})^{2}(A+B^{2}a^{2})^{m}}+\frac{1}{A^{m-\frac{1}{2}}C}\ll\frac{1}{A^{m-\frac{1}{2}}C},\] where the implied constant is absolute. Then (9.101) holds. _Remark 9.3_.: One has \[\int_{0}^{\infty}\frac{da}{(1+a^{2})^{m}}=\frac{2\pi}{2^{2m}}\frac{(2m-2)!}{(m-1)!^{2}}=\frac{2\pi}{2^{2m}}\frac{2^{2(m-1)}}{(\pi m)^{1/2}}(1+o(1))\ll\frac{1}{m^{1/2}} \tag{9.102}\] from which one can extract slightly better bounds. 
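The closed form in (9.102) is a classical Wallis-type integral. As an editorial sanity check (not part of the argument), the following Python sketch compares a crude midpoint quadrature of \(\int_0^\infty da/(1+a^2)^m\) with the stated closed form, and verifies numerically that \(m^{1/2}\) times the integral stays bounded, consistent with the \(\ll m^{-1/2}\) assertion.

```python
import math

def wallis_closed_form(m: int) -> float:
    # Right-hand side of (9.102): 2*pi/2^(2m) * (2m-2)!/((m-1)!)^2
    return 2.0 * math.pi / 4**m * math.comb(2 * m - 2, m - 1)

def wallis_numeric(m: int, n: int = 100_000) -> float:
    # Midpoint rule for int_0^infty da/(1+a^2)^m after the
    # compactifying substitution a = t/(1-t), da = dt/(1-t)^2.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        a = t / (1.0 - t)
        total += h / ((1.0 + a * a) ** m * (1.0 - t) ** 2)
    return total
```

For \(m=2\) the closed form reduces to \(\pi/4\), the familiar value of \(\int_0^\infty da/(1+a^2)^2\).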
Similarly we have **Lemma 9.12**.: _Let \(A,B,C>0.\) Let \(m\geq 2.\) Then_ \[\int_{0}^{\infty}\frac{1}{\left[A+(Ba+Ca^{-1})^{2}\right]^{m}}\frac{da}{a^{2}}\ll\frac{1}{(A+2BC)^{m-\frac{1}{2}}C}, \tag{9.103}\] _where the implied constant is absolute (independent of \(m\))._ Proof.: Denote by LHS the left hand side of (9.103). Then \[\begin{split}\text{LHS}=&\int_{\sqrt{\frac{C}{B}}}^{\infty}\frac{1}{\left[A+(Ba+Ca^{-1})^{2}\right]^{m}}\frac{da}{a^{2}}+\int_{0}^{\sqrt{\frac{C}{B}}}\frac{1}{\left[A+(Ba+Ca^{-1})^{2}\right]^{m}}\frac{da}{a^{2}}\\ \leq&\int_{\sqrt{\frac{C}{B}}}^{\infty}\frac{1}{\left[A+2BC+(Ba)^{2}\right]^{m}}\frac{da}{a^{2}}+\int_{\sqrt{\frac{B}{C}}}^{\infty}\frac{1}{\left[A+2BC+(Ca)^{2}\right]^{m}}da.\end{split}\] For the first term, we note that in the range of integration we have \[(Ba)^{2}\geq(Ba-C/a)^{2}\] so that the first integral is bounded using Lemma 9.11 and the second is bounded using a linear change of variable. This yields (9.103). Proof.: (of Proposition 9.10) Recall from §4.1.5 the notation \[g_{E}=\text{diag}(|D_{E}|^{1/4},1,|D_{E}|^{-1/4}).\] Write \(y_{1,\infty}\) and \(y_{2,\infty}\) in their Iwasawa coordinates: \[y_{1,\infty}=g_{E}\begin{pmatrix}a&\\ &a^{-1}\end{pmatrix}\begin{pmatrix}1&-ib\\ &1\end{pmatrix}k_{1}g_{E}^{-1},\;y_{2,\infty}=g_{E}\begin{pmatrix}a^{\prime}&\\ &a^{\prime-1}\end{pmatrix}\begin{pmatrix}1&-ib^{\prime}\\ &1\end{pmatrix}k_{2}g_{E}^{-1},\] where \(a,a^{\prime}\in\mathbb{R}_{+}^{\times}\) and \(k_{1},k_{2}\) lie in the maximal compact subgroup. 
Then \(g_{E}^{-1}y_{1,\infty}^{-1}\gamma(x)y_{2,\infty}g_{E}\) is equal to \[k_{1}^{-1}\begin{pmatrix}a^{-1}&\quad iab\\ &1&\\ &&a\end{pmatrix}g_{E}^{-1}\gamma(x)Jg_{E}\begin{pmatrix}a^{\prime}&-ia^{\prime }b^{\prime}\\ &1&\\ &&a^{\prime-1}\end{pmatrix}k_{2}.\] Noting the \(K\)-type, we then obtain \[|f_{\infty}(y_{1,\infty}^{-1}\gamma(x)y_{2,\infty})|=\bigg{|}M\left(\begin{pmatrix} a^{-1}&\quad iab\\ &1&\\ &&a\end{pmatrix}g_{E}^{-1}\gamma(x)Jg_{E}\begin{pmatrix}a^{\prime}&\quad ia^{ \prime}b^{\prime}\\ &1&\\ &&a^{\prime-1}\end{pmatrix}\right)\bigg{|},\] where \(M(g):=\langle D^{\Lambda}(g)\phi_{\circ},\phi_{\circ}\rangle_{\Lambda}\) is defined in (10.7) (cf. Lemma 4.7). A direct computation shows that \(\begin{pmatrix}a^{-1}&\quad iab\\ &1&\\ &&a\end{pmatrix}g_{E}^{-1}\gamma(x)Jg_{E}\begin{pmatrix}a^{\prime}&\quad ia^{ \prime}b^{\prime}\\ &1&\\ &&a^{\prime-1}\end{pmatrix}\) is equal to \[\begin{pmatrix}*&\frac{1+x}{2a|D_{E}|^{\frac{1}{4}}}+iab(1-x)|D_{E}|^{\frac{1} {4}}&*\\ -a^{\prime}|D_{E}|^{\frac{1}{4}}&x&ia^{\prime}b^{\prime}|D_{E}|^{\frac{1}{4}}+ \frac{(x+1)(\overline{x}-1)}{2a^{\prime}|D_{E}|^{\frac{1}{4}}}\\ aa^{\prime}&(1-x)a|D_{E}|^{\frac{1}{4}}&*\end{pmatrix}. \tag{9.104}\] Denote by \((g_{ij})\) the matrix given in (9.104). Then by Lemma 4.7 \[|f_{\infty}(y_{1,\infty}^{-1}\gamma(x)y_{2,\infty})|=|M((g_{ij}))|=\frac{2^{k }|g_{22}|^{k/2}}{|g_{11}-g_{13}-g_{31}+g_{33}|^{k}}. 
\tag{9.105}\] Since \(g=(g_{ij})\) is unitary, i.e., \({}^{t}\overline{g}Jg=J,\) its conjugate by \(\mathbf{B}\) (defined in (4.2)) satisfies \[g^{\prime}=(g^{\prime}_{ij})=\mathbf{B}^{-1}g\mathbf{B}\in G_{J^{\prime}}(\mathbb{R}),\] where \(J^{\prime}=\text{diag}(1,1,-1).\) Since \({}^{t}\overline{g^{\prime}}J^{\prime}g^{\prime}=J^{\prime}\) we have \[|g^{\prime}_{33}|^{2}=|g^{\prime}_{22}|^{2}+|g^{\prime}_{31}|^{2}+|g^{\prime}_{12}|^{2}=|g^{\prime}_{22}|^{2}+|g^{\prime}_{13}|^{2}+|g^{\prime}_{21}|^{2}\] and \[|g^{\prime}_{33}|^{2}=|g^{\prime}_{22}|^{2}+\frac{|g^{\prime}_{13}|^{2}+|g^{\prime}_{31}|^{2}+|g^{\prime}_{12}|^{2}+|g^{\prime}_{21}|^{2}}{2}\geq|g^{\prime}_{22}|^{2}+\frac{|g^{\prime}_{12}|^{2}+|g^{\prime}_{21}|^{2}}{2}. \tag{9.106}\] By \[\mathbf{B}\left(\begin{array}{ccc}g_{11}&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{array}\right)\mathbf{B}=\left(\begin{array}{ccc}\frac{g_{11}+g_{13}+g_{31}+g_{33}}{2}&\frac{g_{12}+g_{32}}{\sqrt{2}}&\frac{g_{11}-g_{13}+g_{31}-g_{33}}{2}\\ \frac{g_{21}+g_{23}}{\sqrt{2}}&\frac{g_{22}}{\sqrt{2}}&\frac{g_{21}-g_{23}}{\sqrt{2}}\\ \frac{g_{11}+g_{13}-g_{31}-g_{33}}{2}&\frac{g_{12}-g_{32}}{\sqrt{2}}&\frac{g_{11}-g_{13}-g_{31}+g_{33}}{2}\end{array}\right)\] we then have from (9.106) that \[|g_{11}-g_{13}-g_{31}+g_{33}|^{2}\geq 4|g_{22}|^{2}+2|g_{12}+g_{32}|^{2}+2|g_{21}+g_{23}|^{2}. \tag{9.107}\] Substituting (9.107) into (9.105) we then get \[|f_{\infty}(y_{1,\infty}^{-1}\gamma(x)y_{2,\infty})|\leq\left(\frac{2|x|}{2|g_{22}|^{2}+|g_{12}+g_{32}|^{2}+|g_{21}+g_{23}|^{2}}\right)^{k/2}. \tag{9.108}\] We can write \(x=m+ni\sqrt{|D_{E}|}\) with \(m,n\in\mathbb{Q}\). 
Plugging (9.104) into the right hand side of (9.108), we have \[2|g_{22}|^{2}+|g_{12}+g_{32}|^{2}+|g_{21}+g_{23}|^{2}\] \[=2|x|^{2}+h_{1}(a|D_{E}|^{1/4},b,x)^{2}+h_{2}(|D_{E}|^{1/4}a,b,x)^{2}\] \[\quad+h_{1}^{\prime}(|D_{E}|^{1/4}a^{\prime},b^{\prime},x)^{2}+h_{2}^{\prime}(|D_{E}|^{1/4}a^{\prime},b^{\prime},x)^{2}\] where \[\begin{cases}h_{1}(a,b,x)=\frac{m+1}{2a}+abn|D_{E}|^{\frac{1}{2}}-(m-1)a\\ h_{2}(a,b,x)=ab(m-1)+an|D_{E}|^{\frac{1}{2}}-\frac{n|D_{E}|^{\frac{1}{2}}}{2a}\\ h_{1}^{\prime}(a^{\prime},b^{\prime},x)=a^{\prime}-\frac{m^{2}+n^{2}|D_{E}|-1}{2a^{\prime}}\\ h_{2}^{\prime}(a^{\prime},b^{\prime},x)=a^{\prime}b^{\prime}-\frac{n|D_{E}|^{\frac{1}{2}}}{a^{\prime}}.\end{cases} \tag{9.109}\] Then after the change of variables \[a|D_{E}|^{1/4}\longleftrightarrow a,\ a^{\prime}|D_{E}|^{1/4}\longleftrightarrow a^{\prime},\] \(\mathcal{I}_{\infty}(x)\) is bounded by \[\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}}\int_{\mathbb{R}}\left[\frac{2|x|}{2|x|^{2}+\sum_{j=1}^{2}\left[h_{j}(a,b,x)^{2}+h_{j}^{\prime}(a^{\prime},b^{\prime},x)^{2}\right]}\right]^{\frac{k}{2}}\frac{db\,db^{\prime}\,da\,da^{\prime}}{aa^{\prime}}. \tag{9.110}\] Note that \(x\not\in E^{1}\), i.e., \(|x|^{2}=m^{2}+n^{2}|D_{E}|\neq 1\). So \((m,n)\neq(1,0)\) and if furthermore \(|D_{E}|=1\), then \((m,n)\neq(\pm 1,0)\) or \((0,\pm 1)\). Suppose first that \(|x|^{2}>1\). 1. Suppose \(n\neq 0\). 
Then we make the linear change of variables \[h_{1}(a,b,x)\longleftrightarrow b,\ h_{2}^{\prime}(a^{\prime},b^{\prime},x)\longleftrightarrow b^{\prime}\] and find that (9.110) is bounded by (9.111) \[\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}}\int_{\mathbb{R}}\left[\frac{2|x|}{2|x|^{2}+b^{2}+h(a,b,x)^{2}+b^{\prime 2}+h^{\prime}(a^{\prime},x)^{2}}\right]^{\frac{k}{2}}\frac{db\,db^{\prime}\,da\,da^{\prime}}{|n||D_{E}|^{1/2}a^{2}a^{\prime 2}},\] where \[h^{\prime}(a^{\prime},x)=h_{1}^{\prime}(a^{\prime},b^{\prime},x)=a^{\prime}-\frac{|x|^{2}-1}{2}\frac{1}{a^{\prime}}\] is defined in (9.109) and \[h(a,b,x)=\frac{m-1}{n|D_{E}|^{\frac{1}{2}}}b+\frac{|x-1|^{2}}{n|D_{E}|^{\frac{1}{2}}}a-\frac{|x|^{2}-1}{2n|D_{E}|^{\frac{1}{2}}}\frac{1}{a}=\alpha.b+\beta,\] say. Making a linear change of variable \[(1+\alpha^{2})^{1/2}.b+\beta\longleftrightarrow b,\] and noting that \[1+\alpha^{2}=\frac{|x-1|^{2}}{n^{2}|D_{E}|},\] (9.111) becomes (9.112) \[\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}}\int_{\mathbb{R}}\left[\frac{2|x|}{2|x|^{2}+b^{2}+h(a,x)^{2}+b^{\prime 2}+h^{\prime}(a^{\prime},x)^{2}}\right]^{\frac{k}{2}}\frac{db\,db^{\prime}\,da\,da^{\prime}}{|x-1|a^{2}a^{\prime 2}},\] where \[h(a,x)=\frac{\beta}{(1+\alpha^{2})^{1/2}}=|x-1|a-\frac{|x|^{2}-1}{2|x-1|}\frac{1}{a}.\] By two changes of variables, (9.112) is equal to (9.113) \[T_{k}.T_{k-1}\frac{(2|x|)^{k/2}}{|x-1|}\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}_{>0}}\left[\frac{1}{2|x|^{2}+h(a,x)^{2}+h^{\prime}(a^{\prime},x)^{2}}\right]^{\frac{k}{2}-1}\frac{da\,da^{\prime}}{a^{2}a^{\prime 2}},\] where \[T_{k}=\int_{-\infty}^{\infty}\frac{db}{(1+b^{2})^{k/2}}\ll\frac{1}{k^{1/2}}\] by (9.102). 
Applying twice the computational Lemma 9.11 above, with \[A=2|x|^{2}+h^{\prime}(a^{\prime},x)^{2},\ C=\frac{|x|^{2}-1}{2|x-1|}>0,\ A^{\prime}=2|x|^{2},C^{\prime}=\frac{|x|^{2}-1}{2},\] we have \[\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}_{>0}}\left[\frac{1}{2|x|^{2}+h(a,x)^{2}+h^{\prime}(a^{\prime},x)^{2}}\right]^{\frac{k}{2}-1}\frac{dada^{\prime}}{a^{2}a^{\prime 2}}\ll\frac{1}{k}\frac{|x-1|}{(2|x|^{2})^{k/2-2}(|x|^{2}-1)^{2}}\] where the implicit constant is absolute. Therefore, (9.114) \[\mathcal{I}_{\infty}(x)\ll\frac{1}{k}\frac{1}{|x|^{\frac{k}{2}-4}(|x|^{2}-1)^{2}},\] where the implicit constant is absolute. 2. Suppose \(n=0\). We then have \(x=m\neq 1\) and \(\mathcal{I}_{\infty}(x)\) is bounded by (9.110) with (9.115) \[\begin{cases}h_{1}(a,b,x)=(m-1)a-\frac{m+1}{2}\frac{1}{a}\\ h_{2}(a,b,x)=a(m-1)b\\ h_{1}^{\prime}(a^{\prime},b^{\prime},x)=a^{\prime}-\frac{m^{2}-1}{2}\frac{1}{a^{\prime}}\\ h_{2}^{\prime}(a^{\prime},b^{\prime},x)=a^{\prime}b^{\prime}.\end{cases}\] In particular both \(h_{1}(a,b,x)\) and \(h_{1}^{\prime}(a^{\prime},b^{\prime},x)\) do not depend on \(b\) and \(b^{\prime}\), and we denote them \(h(a,x)\) and \(h^{\prime}(a^{\prime},x)\) respectively. Changing variables, this expression equals (9.116) \[\frac{1}{|x-1|}\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}}\int_{\mathbb{R}}\left[\frac{2|x|}{2|x|^{2}+b^{2}+h(a,x)^{2}+b^{\prime 2}+h^{\prime}(a^{\prime},x)^{2}}\right]^{\frac{k}{2}}\frac{db\,db^{\prime}\,da\,da^{\prime}}{a^{2}a^{\prime 2}}.\] By Lemma 9.11, (9.116) is majorized by \[T_{k}.T_{k-1}.\frac{(2|x|)^{k/2}}{|x-1|}\int_{0}^{\infty}\int_{0}^{\infty}\left[\frac{1}{2|x|^{2}+h(a,x)^{2}+h^{\prime}(a^{\prime},x)^{2}}\right]^{\frac{k}{2}-1}\frac{da\,da^{\prime}}{a^{2}a^{\prime 2}}\ll\frac{1}{k}\frac{1}{|x|^{\frac{k}{2}-4}(|x|^{2}-1)^{2}},\] where the implied constant is absolute. 
We have also used that \[m^{2}-1=|x|^{2}-1.\] Therefore, assuming that \(|x|>1,\) it follows from (9.114) and the above estimates that \[\mathcal{I}_{\infty}(x)\ll\frac{1}{k}\frac{1}{|x|^{\frac{k}{2}-4}(|x|^{2}-1)^{2}}.\] Assume now that \(|x|<1,\) i.e., \(1-|x|^{2}>0.\) Then, arguing as above (using Lemma 9.12 instead of Lemma 9.11), we find \[\mathcal{I}_{\infty}(x)\ll\frac{1}{k}\frac{|x|^{\frac{k}{2}}}{(|x|^{2}+1)^{\frac{k}{2}-2}(|x|^{2}-1)^{2}}\ll\frac{1}{k}\frac{1}{(|x|^{2}+1)^{\frac{k}{4}-2}(|x|^{2}-1)^{2}}.\] Thus (9.100) holds. _Remark 9.4_.: Recall that in the regular orbital integral case we have \(x\notin E^{1},\) so the integral (9.110) does not diverge. That is why the bound (9.11) was too coarse to deal with the unipotent orbital integrals, and we have had to make explicit computations instead. ## 10. **Bounds for the sum of global regular orbital integrals** In this section we collect the local bounds from the previous section to bound the sum of global regular orbital integrals in (6.17) \[\sum_{x\in E^{\times}-E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime}) \tag{10.1}\] (and in particular establish absolute convergence when \(k\) is large enough). We recall that to the test function \(f^{\mathfrak{n}}\) is associated an integer \(\ell\geq 1\) given by (4.25) and that in Theorem 9.5 we have introduced the set \[\mathfrak{X}(N,N^{\prime},\ell)=\big{\{}x\in E^{\times}-E^{1},\ x\in(\ell N^{\prime})^{-1}\mathcal{O}_{E},\ x\overline{x}\equiv 1\ (\text{mod}\ N)\big{\}}\] and have proved that for \(x\in E^{\times}-E^{1}\) not contained in \(\mathfrak{X}(N,N^{\prime},\ell)\) we have \[\mathcal{I}(f^{\mathfrak{n}};x)=\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})=0.\] The main result of this section is the following upper bound for the sum (10.1): **Theorem 10.1**.: _Let notations and assumptions be as before. 
Let \(\varphi^{\prime}\) be the newform of weight \(k\geq 32\), level \(N^{\prime}\) described in §4.5, subject to the normalization (4.38), and set_ \[\kappa=\frac{k}{4}-2. \tag{10.2}\] _We have_ \[\sum_{x\in E^{\times}-E^{1}}\frac{\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})}{\langle\varphi^{\prime},\varphi^{\prime}\rangle}\ll_{E}(k\ell NN^{\prime})^{o(1)}k^{-\frac{1}{2}}\ell^{17}{N^{\prime}}^{\frac{14}{3}}N^{2}(1+\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{2}\mathcal{E} \tag{10.3}\] _where_ \[\mathcal{E}:=e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}.\] _Moreover if we assume that_ \[\ell^{2}{N^{\prime}}^{2}<N \tag{10.4}\] _we have_ \[\sum_{x\in E^{\times}-E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})\ll(k\ell NN^{\prime})^{o(1)}k^{-\frac{1}{2}}\ell^{7}N^{4}{N^{\prime}}^{2+2/3}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}. \tag{10.5}\] _Here the implicit constants depend only on \(E\)._ _Remark 10.1_.: The above estimate could be improved by carefully analyzing each situation determined by Proposition 9.9. ### Decomposition of Regular Orbital Integrals Let us recall that we have made the following reductions in §9.2: \[\big{|}\sum_{x\in E^{\times}-E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})\big{|} \leq\sum_{x\in E^{\times}-E^{1}}|\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})|\] \[\ll\|\varphi^{\prime}\|_{\infty}^{2}\sum_{x\in\mathfrak{X}(N,N^{\prime},\ell)}\mathcal{I}(x)\] \[\ll\|\varphi^{\prime}\|_{\infty}^{2}\sum_{x\in\mathfrak{X}(N,N^{\prime},\ell)}\mathcal{I}_{\infty}(x)\prod_{p}\mathcal{I}_{p}(x)\] \[\leq(kN^{\prime})^{o(1)}k^{1/2}{N^{\prime}}^{2/3}\sum_{x\in\mathfrak{X}(N,N^{\prime},\ell)}\mathcal{I}_{\infty}(x)\prod_{p}\mathcal{I}_{p}(x) \tag{10.6}\] here for the last step we have used the following bound from [11, Thm. 
1] for \(\varphi^{\prime}\) satisfying (4.38) \[\|\varphi^{\prime}\|_{\infty}^{2}\leq(kN^{\prime})^{o(1)}k^{1/2}{N^{\prime}}^ {\frac{2}{3}}\langle\varphi^{\prime},\varphi^{\prime}\rangle.\] Set \[S(N,N^{\prime},\ell):=\sum_{x\in\mathfrak{X}(N,N^{\prime},\ell)}\mathcal{I}_{ \infty}(x)\prod_{p}\mathcal{I}_{p}(x).\] By Proposition 9.10 we have \[\mathcal{I}_{\infty}(x)\ll\frac{1}{k}\frac{1}{\langle x\rangle^{\kappa}(|x|^{ 2}-1)^{2}}, \tag{10.7}\] where \[\langle x\rangle=\begin{cases}|x|^{2}+1&\text{ if }|x|<1,\\ |x|^{2}&\text{ if }|x|>1.\end{cases}\] We consider the decomposition \[\prod_{p}\mathcal{I}_{p}(x)=\mathcal{I}_{n-sp}(x)\mathcal{I}_{sp}(x)\mathcal{ I}_{N^{\prime}}(x)\mathcal{I}_{\ell}(x)\] where \(\mathcal{I}_{n-sp}(x)\), \(\mathcal{I}_{sp}(x)\), \(\mathcal{I}_{N^{\prime}}(x)\) and \(\mathcal{I}_{\ell}(x)\) denote respectively the product of the \(\mathcal{I}_{p}(x)\) over all the non-split primes (coprime to \(\ell\)), over the split primes (coprime to \(N^{\prime}\)) and over the primes dividing \(N^{\prime}\) (if any) and \(\ell\). These integrals have been bounded in SS9.4 and in SS9.5. To implement these bounds, we need some extra notation: for any \(z\in E^{\times}\) and \(p\) a prime we set \[\operatorname{Nr}(z)_{p}=\prod_{\mathfrak{p}\mid p}p^{e_{p}f_{p}\nu_{ \mathfrak{p}}(z)}\] where \(e_{p},f_{p}\) and \(\nu_{\mathfrak{p}}\) are respectively the ramification index, residual degree and valuation at \(\mathfrak{p}\); for \(S\) a subset of prime numbers we set \[\operatorname{Nr}(z)_{S}:=\prod_{p\in S}\operatorname{Nr}(z)_{p}.\] We have \[\mathrm{Nr}(z)=z\overline{z}=|z|^{2}=\prod_{p}\mathrm{Nr}(z)_{p}=\mathrm{Nr}(z)_{ \ell N^{\prime}}\mathrm{Nr}(z)_{sp}\mathrm{Nr}(z)_{n-sp} \tag{10.8}\] where \(sp\) (resp. \(n-sp\)) denote the product over the split (resp. non-split) primes. 
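As a concrete illustration of the factorization (10.8), the following editorial Python sketch treats the hypothetical special case \(E=\mathbb{Q}(i)\) (not the general setting of the paper), where an odd prime \(p\) splits if and only if \(p\equiv 1\ (\mathrm{mod}\ 4)\) and \(2\) ramifies; the split and non-split parts of \(\mathrm{Nr}(z)\) are then read off from the factorization of \(|z|^{2}\).

```python
def prime_factorization(n: int) -> dict:
    # trial division; adequate for small illustrative inputs
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def norm_parts(re: int, im: int) -> tuple:
    # Nr(z) = z * zbar = re^2 + im^2 for z = re + im*i in Z[i].
    # Split Nr(z) = Nr(z)_sp * Nr(z)_nsp as in (10.8): for Q(i) the
    # split primes are exactly the odd primes p = 1 (mod 4).
    sp = nsp = 1
    for p, e in prime_factorization(re * re + im * im).items():
        if p % 4 == 1:
            sp *= p**e
        else:            # inert primes (p = 3 mod 4) and the ramified prime 2
            nsp *= p**e
    return sp, nsp
```

For instance \(z=3+4i\) has \(\mathrm{Nr}(z)=25=5^{2}\), entirely supported on the split prime \(5\).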
By Propositions 9.6 (for the non-split case) and 9.8 (for the split case) we have for any \(\varepsilon>0\) and \(x\in\mathfrak{X}(N,N^{\prime},\ell)\) (in particular \(x\not\in E^{1}\)) \[\mathcal{I}_{n-sp}(x)\ll_{\varepsilon}\delta_{x\overline{x}\equiv 1\,(\mathrm{mod}\,N)}N\mathrm{Nr}(x\overline{x}-1)_{n-sp}^{3+\varepsilon}, \tag{10.9}\] \[\mathcal{I}_{sp}(x)\ll_{\varepsilon}\mathrm{Nr}(x\overline{x}-1)_{sp}^{3/2+\varepsilon}\leq\mathrm{Nr}(x\overline{x}-1)_{sp}^{3+\varepsilon} \tag{10.10}\] (the last inequality because \(x\overline{x}-1\) has non-negative valuation at every prime \(p\nmid N^{\prime}\ell\)). To bound \(\mathcal{I}_{\ell}(x)=\prod_{p\mid\ell}\mathcal{I}_{p}(x)\) we apply Proposition 9.7: for \(p\mid\ell\) and \(r=\nu_{p}(\ell)\), we have \[\mathcal{I}_{p}(x)\ll\delta_{\nu(x)\geq-r}(1+|\nu(1-x)|+|\nu(1-x\overline{x})|)p^{7r}\big{(}p^{2\nu(1-x)}+p^{2\nu(1-x\overline{x})}\big{)}. \tag{10.11}\] To bound \(\mathcal{I}_{N^{\prime}}(x)\) (granted \(N^{\prime}>1\)) we use Propositions 9.8 and 9.9, which depend on the valuations \(\nu(x)\), \(\nu(\overline{x})\), \(\nu(1-\overline{x})\) at the prime \(N^{\prime}\). Define \[\mathfrak{X}(N,N^{\prime},\ell)_{0}:= \big{\{}x\in\mathfrak{X}(N,N^{\prime},\ell):\ \nu(x)\geq 0,\ \nu(\overline{x})\geq 0\big{\}},\] \[\mathfrak{X}(N,N^{\prime},\ell)_{1}:= \big{\{}x\in\mathfrak{X}(N,N^{\prime},\ell):\ \nu(x)=-1,\ \nu(1-\overline{x})\geq 1\big{\}},\] \[\mathfrak{X}(N,N^{\prime},\ell)_{2}:= \big{\{}x\in\mathfrak{X}(N,N^{\prime},\ell):\ \nu(x)=-1,\ \nu(\overline{x})\geq-1\big{\}},\] \[\mathfrak{X}(N,N^{\prime},\ell)_{3}:= \big{\{}x\in\mathfrak{X}(N,N^{\prime},\ell):\ \nu(x)\geq 0,\ \nu(\overline{x})=-1\big{\}},\] \[\mathfrak{X}(N,N^{\prime},\ell)_{4}:= \big{\{}x\in\mathfrak{X}(N,N^{\prime},\ell):\ \nu(x)=-1,\ \nu(\overline{x})\geq 0\big{\}}. 
\tag{10.12}\] Then (10.7), together with Propositions 9.8 and 9.9, implies that \[\sum_{x\in E^{\times}-E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})\ll(kN^{\prime})^{o(1)}k^{-1/2}{N^{\prime}}^{2/3}\sum_{i=0}^{4}S_{i}(N,N^{\prime},\ell), \tag{10.13}\] where \[S_{i}(N,N^{\prime},\ell):=\sum_{x\in\mathfrak{X}(N,N^{\prime},\ell)_{i}}\frac{1}{\langle x\rangle^{\kappa}(|x|^{2}-1)^{2}}\prod_{p}\mathcal{I}_{p}(x)\] and where we recall that \(\kappa=k/4-2\). By (10.9), (10.10) and (10.11), for \(0\leq i\leq 4\), \(S_{i}(N,N^{\prime},\ell)\) is majorized by \[\ell^{7}\sum_{x\in\mathfrak{X}(N,N^{\prime},\ell)_{i}}\frac{N\mathrm{Nr}(x\overline{x}-1)_{n-sp}^{3+\varepsilon}\mathrm{Nr}(x\overline{x}-1)_{sp}^{3+\varepsilon}\mathcal{I}_{N^{\prime}}(x)}{\langle x\rangle^{\kappa}(|x|^{2}-1)^{2}}\prod_{p\mid\ell}(\mathrm{Nr}(1-x)_{p}+\mathrm{Nr}(1-x\overline{x})_{p})^{1+\varepsilon}.\] ### Bounding \(S_{0}(N,N^{\prime},\ell)\) We first bound the simplest term \(S_{0}(N,N^{\prime},\ell)\), which is the "generic" case if \(N^{\prime}\) is not too large. In fact, since we are in the stable range, one can conceptually take \(N^{\prime}=1\), in which case \(S_{i}(N,N^{\prime},\ell)=0\) for \(1\leq i\leq 4\). **Lemma 10.2**.: _Let notations be as before. 
Assuming that \(\kappa>6,\) we have for all \(\varepsilon>0\) and \(\ell\geq 1\)_ \[S_{0}(N,N^{\prime},\ell)\ll(\ell NN^{\prime})^{\varepsilon}\ell^{11}N^{2}(1+ \frac{\ell^{2}{N^{\prime}}^{2}}{N})^{2}\left(e^{-\frac{\kappa}{\ell^{2}+1}}+2 ^{-\kappa}\right).\] _If in addition we have_ \[\ell^{2}<N \tag{10.14}\] _we have_ \[S_{0}(N,N^{\prime},\ell)\ll(\ell N)^{\varepsilon}(\frac{\ell^{2}}{N})^{\kappa-4}.\] Proof.: By Proposition 9.8, for \(x\in\mathfrak{X}(N,N^{\prime},\ell)_{0}\), we have \[\mathcal{I}_{N^{\prime}}(x)\ll_{\varepsilon}\operatorname{Nr}(x\overline{x} )^{\varepsilon}\big{(}\operatorname{Nr}(X)_{N^{\prime}}+\frac{\operatorname{ Nr}(x\overline{x}-1)_{N^{\prime}}}{N^{\prime}}\big{)} \tag{10.15}\] where \(X=X(x)=x\overline{x}(1-x)(1-\overline{x})\). For \(x\in\mathfrak{X}(N,N^{\prime},\ell)_{0}\), we may write \[x=z\ell^{-1},\ z\in\mathcal{O}_{E}-\{0\},\ \operatorname{Nr}(z)\neq\ell^{2},\ z \overline{z}\equiv\ell^{2}\ (\text{mod}\ N).\] We have therefore \[|z|^{2}=qN+\ell^{2}>0,\ q\in\mathbb{Z}-\{0\}. \tag{10.16}\] Then \(S_{0}(N,N^{\prime},\ell)\) is majorized by \[\sum_{q>-\ell^{2}N^{-1}}\frac{r(q)\ell^{2}N(|q|N)^{3+\varepsilon} \ell^{7}}{\langle x\rangle^{\kappa}q^{2}N^{2}}\\ \times\Bigg{[}\left((qN+\ell^{2})\big{|}1-z/\ell\big{|}^{4}+ \frac{qN+\ell^{2}}{N^{\prime}}\right)\left(\big{|}1-z/\ell\big{|}^{2}+\frac{ qN}{\ell^{2}}\right)\Bigg{]}^{1+\varepsilon},\] where \[r(q)=|\{z\in\mathcal{O}_{E}-\{0\},\ z\overline{z}=Nq+\ell^{2}\}|\ll_{ \varepsilon}(N\ell)^{\varepsilon}\] for any \(\varepsilon>0\). 
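The divisor-type bound on \(r(q)\) can be illustrated numerically. In the hypothetical special case \(\mathcal{O}_{E}=\mathbb{Z}[i]\) (an editorial illustration, not the general setting of the proof), \(r\) becomes the classical count of representations of an integer as a sum of two squares, which grows slower than any positive power of the argument:

```python
import math

def r_count(n: int) -> int:
    # number of nonzero z = a + b*i in Z[i] with z*zbar = a^2 + b^2 = n
    # (illustration of r(q) for the hypothetical case O_E = Z[i])
    count = 0
    for a in range(-math.isqrt(n), math.isqrt(n) + 1):
        rem = n - a * a
        b = math.isqrt(rem)
        if b * b == rem:
            count += 1 if b == 0 else 2  # b = 0, or the pair (b, -b)
    return count

# the maximum of r(n) over n <= 2000 is tiny, consistent with the
# sub-polynomial bound r(q) <<_eps (N*ell)^eps
max_r = max(r_count(n) for n in range(1, 2001))
```

For example \(n=25\) has the \(12\) representations coming from \(\pm 3\pm 4i\), \(\pm 4\pm 3i\), \(\pm 5\), \(\pm 5i\).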
By the triangle inequality, \[\big{|}1-z/\ell\big{|}^{2}\leq 2(1+z\overline{z}/\ell^{2})=2(2+qN\ell^{-2}).\] Hence, \(S_{0}(N,N^{\prime},\ell)\) is bounded by \[\ll(\ell N)^{\varepsilon}\sum_{q>-\ell^{2}/N}\frac{r(q)\ell^{2}|q|^{1+\varepsilon}N^{2}\ell^{7}}{\langle x\rangle^{\kappa}}\\ \times\left(1+\frac{qN}{\ell^{2}}\right)\left((qN+\ell^{2})(1+\frac{qN}{\ell^{2}})^{2}+\frac{qN+\ell^{2}}{N^{\prime}}\right)\\ \ll(\ell N)^{\varepsilon}\ell^{11}N^{2}\sum_{q>-\ell^{2}/N}\frac{|q|^{1+\varepsilon}}{\langle x\rangle^{\kappa}}(1+\frac{qN}{\ell^{2}})^{4+\varepsilon}\ll(\ell N)^{\varepsilon}\ell^{11}N^{2}(S_{01}+S_{02}),\] say, where \[S_{01}:=\sum_{-\ell^{2}/N<q<0}\frac{|q|^{1+\varepsilon}}{\left(\frac{qN+\ell^{2}}{\ell^{2}}+1\right)^{\kappa}}(1+\frac{qN}{\ell^{2}})^{4+\varepsilon}\] and \[S_{02}:=\sum_{q\geq 1}\frac{|q|^{1+\varepsilon}}{\left(\frac{qN+\ell^{2}}{\ell^{2}}\right)^{\kappa}}(1+\frac{qN}{\ell^{2}})^{4+\varepsilon}.\] Note that \[(A+1)^{-\kappa}=\exp\left(-\kappa\log\left(1+A\right)\right)\leq\exp\left(-\frac{\kappa}{A^{-1}+1}\right); \tag{10.17}\] this implies that for all \(q>-\ell^{2}/N\) one has (since \(qN+\ell^{2}\geq 1\)) \[\exp\left(-\frac{\kappa}{\left(\frac{qN+\ell^{2}}{\ell^{2}}\right)^{-1}+1}\right)\leq e^{-\frac{\kappa}{\ell^{2}+1}}.\] This implies that \[S_{01}\ll\sum_{-\ell^{2}N^{-1}<q<0}|q|^{1+\varepsilon}e^{-\frac{\kappa}{\ell^{2}+1}}\ll(1+\ell^{2}/N)^{2+\varepsilon}e^{-\frac{\kappa}{\ell^{2}+1}}.\] To estimate \(S_{02}\) we break it into two further pieces: \[S_{02}=\sum_{1\leq q\leq\ell^{2}/N}\cdots+\sum_{q>\ell^{2}/N}\cdots.\] The first piece is bounded by \[\sum_{1\leq q\leq\ell^{2}/N}\cdots\ll\sum_{1\leq q\leq\ell^{2}/N}\frac{|q|^{1+\varepsilon}}{(\frac{qN}{\ell^{2}}+1)^{\kappa}}\\ \ll(\ell N)^{\varepsilon}\frac{\ell^{4}}{N^{2}}e^{-\frac{\kappa}{\ell^{2}/N+1}}\ll(\ell N)^{\varepsilon}(1+\frac{\ell^{2}}{N})^{2}e^{-\frac{\kappa}{\ell^{2}+1}}.\] The second piece is bounded by 
\[\sum_{q>\ell^{2}/N}\cdots\ll\frac{\ell^{2}}{N}\sum_{q>\ell^{2}/N}\frac{(qN\ell^{-2})^{5+\varepsilon}}{\left(\frac{qN}{\ell^{2}}+1\right)^{\kappa}}\\ \ll(1+\ell^{2}/N)^{2}\int_{1}^{\infty}\frac{t^{5+\varepsilon}}{(t+1)^{\kappa}}dt\ll\frac{(1+\ell^{2}/N)^{2}}{2^{\kappa}}.\] Consequently \[S_{02}\ll(\ell N)^{\varepsilon}(1+\ell^{2}/N)^{2}(e^{-\frac{\kappa}{\ell^{2}+1}}+2^{-\kappa}).\] Let us now assume that (10.14) holds; then (10.16) implies that \(q\geq 1\), so in the discussion above the term \(S_{01}\) and the first piece of \(S_{02}\) are empty and we have \[S_{0}(N,N^{\prime},\ell)=\sum_{q\geq 1}\cdots\ll\frac{\ell^{2}}{N}\sum_{q\geq 1}\frac{(qN\ell^{-2})^{5+\varepsilon}}{\left(\frac{qN}{\ell^{2}}+1\right)^{\kappa}}\ll(\ell N)^{\varepsilon}(\frac{\ell^{2}}{N})^{\kappa-4}.\] This concludes the proof of Lemma 10.2.

_Remark 10.2_.: In the above, the series are absolutely convergent since \(\kappa-5>1\). This is indeed our treatment of \(S_{0}(N,N^{\prime},\ell)\) which is responsible for the constraint \(k\geq 32\). The following sums will be absolutely convergent for smaller values of \(k\).

### Bounding \(S_{2}(N,N^{\prime},\ell)\)

The worst case scenario is achieved when \(x\in\mathfrak{X}(N,N^{\prime},\ell)_{2}\) (see (10.12)). In this section we bound \(S_{2}(N,N^{\prime},\ell)\). The approach is similar to that in §10.2, with a mild modification.

**Lemma 10.3**.: _Let notation be as before. We have for any \(\varepsilon>0\)_ \[S_{2}(N,N^{\prime},\ell)\ll(\ell N^{\prime}N)^{\varepsilon}\ell^{9}{N^{\prime}}^{3}N^{2}(1+\ell^{2}{N^{\prime}}^{2}/N)^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}\right). \tag{10.18}\] _If we assume in addition that_ \[\ell^{2}{N^{\prime}}^{2}<N \tag{10.19}\] _we have_ \[S_{2}(N,N^{\prime},\ell)\ll(\ell NN^{\prime})^{\varepsilon}\frac{\ell^{7}N^{3}}{N^{\prime 3}}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.
\tag{10.20}\] Proof.: For \(x\in\mathfrak{X}(N,N^{\prime},\ell)_{2}\), we may write \[x=z/(\ell N^{\prime}),\ z\in\mathcal{O}_{E}-\{0\},\ \mathrm{Nr}(x)\neq 1,\ z \overline{z}\equiv(\ell N^{\prime})^{2}\,(\mathrm{mod}\,N)\] and we can write \[0<z\overline{z}=qN+(\ell N^{\prime})^{2},\ q\in\mathbb{Z}-\{0\}. \tag{10.21}\] We have \[\prod_{p|\ell}\mathrm{Nr}(1-x)_{p}\leq\left|(1-z/\ell)N^{\prime}\right|^{2}\ll N ^{\prime 2}(1+\frac{\mathrm{Nr}(z)}{\ell^{2}}).\] We have \[S_{2}(N,N^{\prime},\ell)\ll S_{2}^{\heartsuit}(N,N^{\prime},\ell),\] where \(S_{2}^{\heartsuit}(N,N^{\prime},\ell)\) is defined by \[\sum_{q>-(\ell N^{\prime})^{2}N^{-1}}\frac{r(q)(\ell N^{\prime})^{2}N(|q|N)^{3 +\varepsilon}\ell^{7}}{\langle x\rangle^{\kappa}q^{2}N^{2}}\left(N^{\prime 2}(1+ \frac{\mathrm{Nr}(z)}{\ell^{2}})+\frac{qN}{\ell^{2}}\right)^{1+\varepsilon}N^{ \prime-3}.\] We break the sum into two pieces as above: \[S_{2}^{\heartsuit}(N,N^{\prime},\ell)=\sum_{-\frac{(\ell N^{\prime})^{2}}{N}<q< 0}\cdots\ +\ \sum_{q\geq 1}\cdots\] For the first sum we use the trivial bound \[\left(N^{\prime 2}(1+\frac{\mathrm{Nr}(z)}{\ell^{2}})+\frac{qN}{\ell^{2}} \right)^{1+\varepsilon}\ll N^{\prime 4+\varepsilon},\] and use (10.17) to bound the remaining terms as in the treatment of \(S_{01}\) in the proof of Lemma 10.2. In the range \(q\geq 1\) the sum decays exponentially as in the treatment of \(S_{02}\) in the proof of Lemma 10.2. 
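For the reader's convenience, we record the elementary step behind (10.17), which is used here and in the proof of Lemma 10.2: for \(A>0\) one has \[\log(1+A)=\int_{0}^{A}\frac{dt}{1+t}\geq\frac{A}{1+A}=\frac{1}{A^{-1}+1},\] so that \[(A+1)^{-\kappa}=\exp\left(-\kappa\log(1+A)\right)\leq\exp\left(-\frac{\kappa}{A^{-1}+1}\right).\]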
Here we provide an explicit calculation (the value of \(\varepsilon\) may change from line to line) \[\sum_{q\geq 1}\cdots\ll(\ell NN^{\prime})^{\varepsilon}\ell^{9}N^{\prime}N^{2}\sum_{q\geq 1}\frac{|q|^{1+\varepsilon}}{\left(\frac{qN}{(\ell N^{\prime})^{2}}+1\right)^{\kappa}}\left(N^{\prime 2}+\frac{qN}{\ell^{2}}\right)\] \[\ll(\ell NN^{\prime})^{\varepsilon}\ell^{9}{N^{\prime}}^{3}N^{2}\sum_{1\leq q\leq\frac{(\ell N^{\prime})^{2}}{N}}q^{1+\varepsilon}\exp\left(-\frac{\kappa}{\frac{(\ell N^{\prime})^{2}}{qN}+1}\right)\] \[\qquad+(\ell NN^{\prime})^{\varepsilon}\ell^{9}N^{\prime}N^{2}\sum_{q>\frac{(\ell N^{\prime})^{2}}{N}}\frac{q^{1+\varepsilon}}{\left(\frac{qN}{(\ell N^{\prime})^{2}}+1\right)^{\kappa}}\left(\frac{qN}{\ell^{2}}\right)\] \[\ll(\ell NN^{\prime})^{\varepsilon}\ell^{9}N^{\prime 3}N^{2}(1+\frac{\ell^{2}N^{\prime 2}}{N})^{2}e^{-\frac{\kappa}{(\ell N^{\prime})^{2}/N+1}}\] \[\qquad+(\ell NN^{\prime})^{\varepsilon}\ell^{9}N^{\prime}N^{2}\sum_{q>\frac{\ell^{2}N^{\prime 2}}{N}}\frac{|q|^{2+\varepsilon}}{\left(\frac{qN}{(\ell N^{\prime})^{2}}+1\right)^{\kappa}}\] \[\ll(\ell NN^{\prime})^{\varepsilon}\ell^{9}N^{\prime 3}N^{2}(1+\frac{(\ell N^{\prime})^{2}}{N})^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime})^{2}/N+1}}+\int_{1}^{\infty}\frac{t^{2+\varepsilon}}{(t+1)^{\kappa}}dt\right)\] \[\ll(\ell NN^{\prime})^{\varepsilon}\ell^{9}N^{\prime 3}N^{2}(1+\frac{\ell^{2}N^{\prime 2}}{N})^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}\right).\] Then (10.18) follows. If we moreover assume (10.19) then (10.21) implies that \(q\geq 1\) and we have \[S_{2}(N,N^{\prime},\ell)\ll(\ell NN^{\prime})^{\varepsilon}\frac{\ell^{7}N^{3}}{N^{\prime 3}}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.\]

### Bounding \(S_{i}(N,N^{\prime},\ell):i=1,3,4\)

**Lemma 10.4**.: _Let notations be as before.
Then for \(i=1,3,4,\) we have for any \(\varepsilon>0\)_ \[S_{i}(N,N^{\prime},\ell)\ll(\ell NN^{\prime})^{\varepsilon}\ell^{17}N^{\prime 4}N^{2}(1+\ell^{2}/N)^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}\right). \tag{10.22}\] _If we assume in addition that_ \[\ell^{2}{N^{\prime}}^{2}<N \tag{10.23}\] _we have_ \[S_{i}(N,N^{\prime},\ell)\ll(\ell NN^{\prime})^{\varepsilon}\ell^{7}N^{3}{N^{\prime}}^{2}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}. \tag{10.24}\] Proof.: Investigating the situations in Proposition 9.9 we see that \(\mathcal{I}_{N^{\prime}}(x)\) is majorized by \(N^{\prime-2}\) or \({N^{\prime}}^{2\nu(1-x\overline{x})-1}\) or \(\nu(1-x){N^{\prime}}^{2\nu(1-\overline{x})-5}\), depending on \(x\in\mathfrak{X}(N,N^{\prime},\ell)_{i}\), \(1\leq i\leq 4.\) In these cases we may still write \[x=z/(\ell N^{\prime}),\ z\in\mathcal{O}_{E},\ \mathrm{Nr}(x)\neq 1,\ z\overline{z}\equiv(\ell N^{\prime})^{2}\left(\mathrm{mod}\,N\right)\] and we can write \[0<z\overline{z}=qN+(\ell N^{\prime})^{2},\ q\in\mathbb{Z}-\{0\}. \tag{10.25}\] We say that \(x\) is _good_ if \[\mathcal{I}_{N^{\prime}}\ll{N^{\prime}}^{-2},\] i.e., \(x\in\mathfrak{X}(N,N^{\prime},\ell)_{1}\) and some subsets of \(\mathfrak{X}(N,N^{\prime},\ell)_{3}\) and \(\mathfrak{X}(N,N^{\prime},\ell)_{4}.\) We don't need to cover \(\mathfrak{X}(N,N^{\prime},\ell)_{2}\) here since it has been handled in Lemma 10.3 already.
The same argument as in the proof of Lemma 10.3 yields the following upper bound for \(x\) good: \[S_{i}(N,N^{\prime},\ell)\ll\sum_{q>-(\ell N^{\prime})^{2}/N}\frac{r(q)(\ell N^{\prime})^{2}N(|q|N)^{3+\varepsilon}\ell^{7}}{\langle x\rangle^{\kappa}q^{2}N^{2}}\left({N^{\prime}}^{2}(1+\frac{\mathrm{Nr}(z)}{\ell^{2}})+\frac{qN}{\ell^{2}}\right)^{1+\varepsilon}\frac{1}{{N^{\prime}}^{2}}.\] Note that the above bound is obtained by replacing \(\mathcal{I}_{N^{\prime}}(x)\ll{N^{\prime}}^{-3}\) for \(x\in\mathfrak{X}(N,N^{\prime},\ell)_{2}\) with \(\mathcal{I}_{N^{\prime}}(x)\ll{N^{\prime}}^{-2}\) when \(x\) is good. By Lemma 10.3 the corresponding contribution from good \(x\) is \[\ll(\ell NN^{\prime})^{\varepsilon}\ell^{9}{N^{\prime}}^{4}N^{2}(1+\ell^{2}{N^{\prime}}^{2}/N)^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}\right).\] Now we consider the remaining cases where one has only \[\mathcal{I}_{N^{\prime}}\ll{N^{\prime}}^{2\nu(1-x\overline{x})-1}\ \mathrm{or}\ \mathcal{I}_{N^{\prime}}\ll\nu(1-x){N^{\prime}}^{2\nu(1-\overline{x})-5}.\] Write the prime decomposition of \(N^{\prime}\mathcal{O}_{E}\) \[\mathfrak{p}\overline{\mathfrak{p}}=N^{\prime}\mathcal{O}_{E}.\]

#### 10.4.1. First case

If \[\mathcal{I}_{N^{\prime}}\ll{N^{\prime}}^{2\nu(1-x\overline{x})-1},\] by Proposition 9.9 we see that \(z\in\mathfrak{p}^{2}\mathcal{O}_{E}\) or \(z\in\overline{\mathfrak{p}}^{2}\mathcal{O}_{E}\).
So \[\mathcal{I}_{N^{\prime}}(x)\ll{N^{\prime}}^{2\nu(1-x\overline{x})-1}\ll(|x|^{ 2}-1)^{2}\ell^{4}/N^{\prime}.\] We can write \[0<x\overline{x}=qN/\ell^{2}+1,\ q\in\mathbb{Z}-\{0\}.\] Therefore, the contribution from these \(x\)'s is bounded by \[\ll\ell^{7}\sum_{x}\frac{N{\rm Nr}(x\overline{x}-1)^{3+\varepsilon}_ {n-sp}{\rm Nr}(x\overline{x}-1)^{3+\varepsilon}_{sp}{\mathcal{I}}_{N^{\prime}} (x)}{\langle x\rangle^{\kappa}(|x|^{2}-1)^{2}}\\ \times\prod_{p|\ell}({\rm Nr}(1-x)_{p}+{\rm Nr}(1-x\overline{x} )_{p})^{1+\varepsilon}\\ \ll(\ell NN^{\prime})^{\varepsilon}\ell^{7}\sum_{q>-\ell^{2}/N} \frac{r(q)(|q|N)^{3+\varepsilon}\ell^{7}}{\langle x\rangle^{\kappa}q^{2}N^{2}} \left(N^{\prime 2}(1+\frac{{\rm Nr}(z)}{\ell^{2}})+\frac{qN}{\ell^{2}}\right) \frac{|u|^{2}}{{N^{\prime}}^{4}}\\ \ll(\ell NN^{\prime})^{\varepsilon}(S_{1}+S_{2}),\] where \[S_{1}=\sum_{\begin{subarray}{c}q>-(\ell N^{\prime})^{2}N^{-1}\\ |u|\leq 2\ell N^{\prime-2}\end{subarray}}\cdots,\ S_{2}=\sum_{ \begin{subarray}{c}q>-(\ell N^{\prime})^{2}N^{-1}\\ |u|>2\ell N^{\prime-2}\end{subarray}}\cdots.\] By definition, we have \[S_{1}\ll(\ell NN^{\prime})^{\varepsilon}\frac{1}{N^{\prime}}(\frac{\ell}{N^{ \prime 2}})^{2}S_{2}^{\heartsuit}(N,N^{\prime},\ell)=(\ell NN^{\prime})^{ \varepsilon}\frac{\ell^{2}}{N^{\prime 5}}S_{2}^{\heartsuit}(N,N^{\prime},\ell),\] where \(S_{2}^{\heartsuit}(N,N^{\prime},\ell)\) was defined in the proof of Lemma 10.3. 
To handle \(S_{2}\), we observe that (10.26) in the range \(|u|>2\ell N^{\prime-2}\) implies that \[1+\frac{qN}{(\ell N^{\prime})^{2}}=\left|1+\frac{\overline{\varpi}^{3}u}{\ell N ^{\prime}}\right|^{2}\gg\frac{N^{\prime 4}|u|^{2}}{\ell^{2}},\] i.e., \[|u|^{2}\ll\frac{\ell^{2}}{N^{\prime 4}}\left(1+\frac{qN}{(\ell N^{\prime})^{2} }\right).\] Therefore, \(S_{2}\) is bounded by \[\ll(\ell NN^{\prime})^{\varepsilon}\sum_{q>-(\ell N^{\prime})^{2}N^{-1}}\frac {(\ell N^{\prime})^{2}N(|q|N)^{3+\varepsilon}\ell^{7}}{\langle x\rangle^{ \kappa}q^{2}N^{2}}\left(N^{\prime 2}(1+\frac{\mathrm{Nr}(z)}{\ell^{2}})+\frac{qN}{ \ell^{2}}\right)^{1}\frac{\ell^{2}}{N^{\prime 8}}.\] and we obtain \[S_{2}\ll(\ell NN^{\prime})^{\varepsilon}\frac{\ell^{2}}{N^{\prime 5}}S_{2}^{ \heartsuit}(N,N^{\prime},\ell).\] As a consequence, by Lemma 10.3, the contribution from \(x\)'s in the second case is bounded by \[\ll(\ell NN^{\prime})^{\varepsilon}\ell^{11}\frac{N^{2}}{N^{\prime 2}}(1+ \frac{\ell^{2}{N^{\prime}}^{2}}{N})^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime })^{2}+1}}+2^{-\kappa}\right).\] This proves the first part of Lemma 10.4. Suppose now that in addition (10.23) holds. 
For the good \(x\)'s we have \(q\geq 1\) in (10.25) and that contribution is bounded by \[\ll(\ell NN^{\prime})^{\varepsilon}\sum_{q\geq 1}\frac{(\ell N^{\prime})^{2}N(|q|N)^{3+\varepsilon}\ell^{7}}{(\frac{qN}{(\ell N^{\prime})^{2}})^{\kappa}q^{2}N^{2}}\left(N^{\prime 2}\frac{qN}{\ell^{2}}\right)\frac{1}{{N^{\prime}}^{2}}\\ \ll(\ell NN^{\prime})^{\varepsilon}\ell^{7}N^{3}{N^{\prime}}^{2}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.\] Next the contribution of the non-good \(x\)'s in the first case is bounded by \[\ll(\ell NN^{\prime})^{\varepsilon}\ell^{17}\frac{N}{N^{\prime}}\sum_{q\geq 1}\frac{|q|^{1+\varepsilon}}{(qN/\ell^{2})^{\kappa}}(\frac{qN}{\ell^{2}})^{3}\ll(\ell NN^{\prime})^{\varepsilon}\frac{\ell^{11}N^{4}}{N^{\prime}}(\frac{\ell^{2}}{N})^{\kappa},\] and in the second case, their contribution is bounded by \[\ll(\ell NN^{\prime})^{\varepsilon}\sum_{q\geq 1}\frac{(\ell N^{\prime})^{2}N(|q|N)^{3+\varepsilon}\ell^{7}}{(qN)^{\kappa}q^{2}N^{2}}\left(N^{\prime 2}\frac{qN}{\ell^{2}}\right)\frac{qN\ell^{2}/N^{\prime}}{{N^{\prime}}^{4}}\ll(\ell NN^{\prime})^{\varepsilon}\frac{\ell^{9}N}{N^{\prime}}N^{-\kappa}.\]

### Proof of Theorem 10.1

Combining Lemmas 10.2, 10.3 and 10.4 we obtain \[\sum_{i=0}^{4}S_{i}(N,N^{\prime},\ell)\ll(\ell NN^{\prime})^{\varepsilon}\ell^{17}N^{2}N^{\prime 4}(1+\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}\right)\] and if in addition we have \[\ell^{2}N^{\prime 2}<N\] we have \[\sum_{i=0}^{4}S_{i}(N,N^{\prime},\ell)\ll(\ell NN^{\prime})^{\varepsilon}\ell^{7}N^{4}{N^{\prime}}^{2}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.\] Substituting the above estimates into (10.13) yields \[\sum_{x\in E^{\times}-E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})\ll(k\ell NN^{\prime})^{o(1)}\frac{\ell^{17}N^{\prime\frac{14}{3}}N^{2}}{k^{1/2}}(1+\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{2}\left(e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}\right)\] and, assuming that
\(\ell^{2}N^{\prime 2}<N\), \[\sum_{x\in E^{\times}-E^{1}}\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})\ll(k\ell NN^{\prime})^{o(1)}\frac{\ell^{7}N^{4}N^{\prime 8/3}}{k^{1/2}}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.\] So Theorem 10.1 follows.

## 11. **Twisted moments of Bessel periods**

In this section, we establish an asymptotic formula for the average of the Bessel periods \(|\mathcal{P}(\varphi,\varphi^{\prime})|^{2}\) twisted by eigenvalues of Hecke operators supported at inert primes. Theorem 1.2 will follow as a consequence.

**Theorem 11.1**.: _Let notations be as in Theorem 1.2; in particular we recall that_ \[k>32,\ \kappa=\frac{k}{4}-2>6,\] \[d_{\Lambda}=\frac{(2k-2)(k+2)(k-6)}{3},\ d_{k}=k-1\] _and_ \[\Psi(N)=\prod_{p|N}\left(1-\frac{1}{p}+\frac{1}{p^{2}}\right),\ \mathfrak{S}(N^{\prime})=\prod_{p|N^{\prime}}(1-\frac{1}{p^{2}})^{-1}.\] _For an integer \(\ell\geq 1\) coprime to \(N\) and divisible only by primes which are inert in \(E\), let \(\lambda_{\pi}(\ell)\) be the eigenvalue of the corresponding Hecke operator at \(\pi\) (see (11.6) below).
We have_ \[\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(N)}\lambda_{\pi}(\ell)\sum_{\varphi\in\mathcal{B}_{k}^{\mathfrak{n}}(\pi)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}=\frac{w_{E}}{d_{k}}(\frac{N}{N^{\prime}})^{2}\Psi(N)\mathfrak{S}(N^{\prime})\frac{\lambda_{\pi^{\prime}}(\ell)}{\ell}\\ +O((\ell NN^{\prime})^{o(1)}\frac{1}{2^{4k}k^{2}}\frac{N}{N^{\prime 3}}\frac{1}{\ell}+(k\ell NN^{\prime})^{o(1)}\frac{\ell^{15}{N^{\prime}}^{\frac{14}{3}}N^{2}}{k^{1/2}}(1+\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{2}\mathcal{E}) \tag{11.1}\] _where_ \[\mathcal{E}=e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}.\] _Moreover, if we assume that_ \[\ell^{2}{N^{\prime}}^{2}<N,\] _then the third term on the right-hand side of (11.1) can be replaced by_ \[(k\ell NN^{\prime})^{o(1)}\frac{\ell^{5}N^{4}{N^{\prime}}^{8/3}}{k^{1/2}}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.\]

### The Hecke algebra at inert primes

We refer to [20, §2.1] for proofs of the well known facts listed below. Given \(p\) a prime inert in \(E\) and coprime with \(N\) and \(r\geq 0\) an integer, the (normalized) \(p^{r}\)-th Hecke operator is the convolution operator by the function \[T(p^{r})=\frac{1}{p^{2r}}1_{G(\mathbb{Z}_{p})A_{r}G(\mathbb{Z}_{p})}. \tag{11.2}\] These satisfy the recurrence relation \[T(p^{r})T(p)=T(p^{r+1})+\frac{1}{p}T(p^{r})+T(p^{r-1}),\ r\geq 1. \tag{11.3}\] Given \(\pi\in\mathcal{A}_{k}(N)\), the \(G(\mathbb{Z}_{p})\)-invariant vectors of \(\pi\) are eigenvectors of the \(T(p^{r})\) and share the same eigenvalue, which we denote by \(\lambda_{\pi}(p^{r})\).
From (11.3) we have therefore \[\lambda_{\pi}(p^{r})\lambda_{\pi}(p)=\lambda_{\pi}(p^{r+1})+\frac{1}{p}\lambda_{\pi}(p^{r})+\lambda_{\pi}(p^{r-1}),\ r\geq 1 \tag{11.4}\] or in other terms \[\sum_{r\geq 0}\frac{\lambda_{\pi}(p^{r})}{p^{rs}}=(1+\frac{1}{p^{1+s}})(1-\frac{\alpha_{\pi}(p)}{p^{s}})^{-1}(1-\frac{\alpha_{\pi}^{-1}(p)}{p^{s}})^{-1} \tag{11.5}\] where \[\lambda_{\pi}(p)=\alpha_{\pi}(p)+1/p+\alpha_{\pi}^{-1}(p)\] for some \(\alpha_{\pi}(p)\in\mathbb{C}^{\times}\). For any integer \(\ell=\prod_{p}p^{r_{p}}\) coprime to \(N\) and divisible only by primes inert in \(E\) we set \[\lambda_{\pi}(\ell):=\prod_{p}\lambda_{\pi}(p^{r_{p}}). \tag{11.6}\] Finally, since the representation \(\pi\) is cohomological, it satisfies the Ramanujan-Petersson conjectures [10] and one has \[|\alpha_{\pi}(p)|=1;\] therefore for \(r\geq 1\), one has \[|\lambda_{\pi}(p^{r})|\leq r+2 \tag{11.7}\] and for \(\ell\) as above one has \[\lambda_{\pi}(\ell)\ll\ell^{o(1)}. \tag{11.8}\]

_Remark 11.1_.: The Satake parameters at the prime \(p\) of the base change \(\pi_{E}\) of \(\pi\) are given by \(\{\alpha_{\pi}(p),1,\alpha_{\pi}^{-1}(p)\}\).
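As a consistency check (a direct computation recorded here for convenience), taking \(r=1\) in (11.4) and using \(\lambda_{\pi}(p^{0})=1\) gives \[\lambda_{\pi}(p^{2})=\lambda_{\pi}(p)^{2}-\frac{\lambda_{\pi}(p)}{p}-1=\alpha_{\pi}(p)^{2}+1+\alpha_{\pi}^{-2}(p)+\frac{\alpha_{\pi}(p)+\alpha_{\pi}^{-1}(p)}{p},\] which agrees with the coefficient of \(p^{-2s}\) in the expansion of (11.5); moreover, since \(|\alpha_{\pi}(p)|=1\), it yields \(|\lambda_{\pi}(p^{2})|\leq 3+2/p\leq 4\), in accordance with (11.7).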
### Proof of Theorem 11.1

Let \(\ell=\prod_{p}p^{r_{p}}\) be as above and let \[f^{\mathfrak{n}}=f^{\mathfrak{n}}_{\infty}\prod_{p}f^{\mathfrak{n}}_{p}\] be the smooth function (which depends on \(\ell\)) that was constructed in §4.4; in particular for \(p|\ell\), one has \[f^{\mathfrak{n}}_{p}=1_{G(\mathbb{Z}_{p})A_{r_{p}}G(\mathbb{Z}_{p})}.\] By Lemma 5.2 and our normalization for the Hecke operators (11.2), we have \[\frac{1}{d_{\Lambda}}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{k}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\lambda_{\pi}(\ell)\ell^{2}=w_{E}\frac{\mathcal{O}_{\gamma_{1}}(f^{\mathfrak{n}},\varphi^{\prime})}{\langle\varphi^{\prime},\varphi^{\prime}\rangle}\\ +\sum_{x\in E^{1}}\frac{\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})}{\langle\varphi^{\prime},\varphi^{\prime}\rangle}+\sum_{x\in E^{\times}-E^{1}}\frac{\mathcal{O}_{\gamma(x)}(f^{\mathfrak{n}},\varphi^{\prime})}{\langle\varphi^{\prime},\varphi^{\prime}\rangle}. \tag{11.9}\] The proof then follows immediately from Proposition 7.1, Corollary 8.14 and Theorem 10.1 after dividing by \(\ell^{2}\).

### Proof of Theorem 1.2

This is a direct consequence of Theorem 11.1 for \(\ell=1\) after observing that there exists a suitable absolute constant \(C\geq 32\), such that given any \(\delta>0\), if either of the two following conditions is satisfied \[{N^{\prime}}^{2}\leq N^{1-\delta},\ N>16,\ k\geq C(1+1/\delta)\] or \[{N^{\prime}}^{2}\leq k^{1-\delta},\ N\leq 2^{4k},\] then the second and third terms on the right-hand side of (11.1) are negligible compared to the first term.

## 12.
**Weighted Vertical Sato-Tate Distribution**

In this section, we interpret Theorem 11.1 as a "vertical" Sato-Tate type joint equidistribution result for products of Hecke eigenvalues \(\lambda_{\pi}(p_{i})\) at a finite set of inert primes \(p_{i}\), for \(\pi\) varying over \(\mathcal{A}_{k}(N)\) and with the Hecke eigenvalues weighted by the Bessel periods \(\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}\). For \(\operatorname{GL}(2)\) a result of that kind goes back to Royer [20].

### The Measure

The Sato-Tate measure is the measure on \([-2,2]\) with density \[d\mu_{\operatorname{ST}}(x):=\begin{cases}\frac{1}{\pi}\sqrt{1-\frac{x^{2}}{4}}dx,&\text{ if }-2\leq x\leq 2,\\ 0,&\text{ otherwise.}\end{cases}\] We recall that an orthonormal basis for \(L^{2}(\mu_{\operatorname{ST}})\) is provided by the Chebyshev polynomials \(C_{r}(X),\ r\geq 0\), where \(C_{r}(X)\) (of degree \(r\)) is defined by \[C_{r}(2\cos\theta)=\frac{\sin(r+1)\theta}{\sin\theta}.\] Let \(x\in[-2,2]\) and \(p\) be a prime inert in \(E\) and such that \(p\nmid NN^{\prime}\). Let \(\sigma_{p,x}\) be the unramified unitary representation of \(G^{\prime}(\mathbb{Q}_{p})\) with Satake parameters \((\alpha_{x}(p),\alpha_{x}(p)^{-1})\) satisfying \[\alpha_{x}(p)+\alpha_{x}(p)^{-1}=x\] and \(\sigma_{E_{p},x}\) its base change. Let \(L(1/2,\sigma_{E_{p},x}\times\pi^{\prime}_{E_{p}})\) be the local Rankin-Selberg \(L\)-factor of the base change representations. We define the measure on \(\mathbb{R}\) supported on \([-2,2]\) \[d\mu_{p}(x):=L(1/2,\sigma_{E_{p},x}\times\pi^{\prime}_{E_{p}})d\mu_{\operatorname{ST}}(x).\] Given \(\mathbf{p}=(p_{1},\cdots,p_{m})\) a tuple of inert primes coprime with \(NN^{\prime}\), we define a measure \(\mu_{\mathbf{p}}\) on \(\mathbb{R}^{m}\) by \[d\mu_{\mathbf{p}}(x_{1},\cdots,x_{m}):=d\mu_{p_{1}}(x_{1})\otimes\cdots\otimes d\mu_{p_{m}}(x_{m}).
\tag{12.1}\] _Remark 12.1_.: The measure \(\mu_{\mathbf{p}}\) is a positive measure since, by temperedness, the local factors satisfy \[(1-1/p)^{6}\leq L(1/2,\sigma_{E_{p_{i}},x}\times\pi^{\prime}_{E_{p_{i}}})\leq(1+1/p)^{6}.\]

### Weighted Equidistribution of Joint Hecke Eigenvalues

**Theorem 12.1**.: _Let notation be as in Theorem 11.1. Let \(\mathbf{p}=(p_{1},\cdots,p_{m})\) be a tuple of inert primes coprime with \(NN^{\prime}\) and for any \(\pi\in\mathcal{A}_{k}(N)\) set_ \[\tilde{\lambda}_{\pi}(\mathbf{p}):=(\lambda_{\pi}(p_{1})-p_{1}^{-1},\cdots,\lambda_{\pi}(p_{m})-p_{m}^{-1})\in\mathbb{R}^{m}\] _where \(\lambda_{\pi}(p)\) denotes the \(p\)-th Hecke eigenvalue. For any continuous function \(\phi\) on \(\mathbb{R}^{m},\) we have, as \(k+N\to\infty\)_ \[\frac{N^{\prime 2}}{w_{E}\mathfrak{S}(N^{\prime})}\frac{d_{k}}{d_{\Lambda}}\frac{1}{N^{2}\Psi(N)}\sum_{\begin{subarray}{c}\pi\in\mathcal{A}_{k}(N)\\ \varphi\in\mathcal{B}_{k}^{\mathfrak{n}}(\pi)\end{subarray}}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\left\langle\varphi,\varphi\right\rangle\left\langle\varphi^{\prime},\varphi^{\prime}\right\rangle}\phi(\tilde{\lambda}_{\pi}(\mathbf{p}))=\mu_{\mathbf{p}}(\phi)+o(1),\] _where \(\mu_{\mathbf{p}}\) is defined by (12.1) and the error term depends on \(E,N^{\prime},\mathbf{p}\) and \(\phi\)._

_Remark 12.2_.: Regarding uniformity, it will be clear from the proof that this asymptotic formula is valid as long as \(N^{\prime}\prod_{i=1}^{m}p_{i}\) is bounded by some absolute positive power of \(kN\) but we will ignore this aspect here.

Proof.: As a reminder (cf. §11.1), \(\lambda_{\pi}(p)\) can be expressed as \[\lambda_{\pi}(p)=\alpha_{\pi}(p)+1/p+\alpha_{\pi}^{-1}(p),\] where \(\alpha_{\pi}(p)\in\mathbb{C}^{\times}\). Moreover, \(|\alpha_{\pi}(p)|=1\) since \(\pi_{p}\) is tempered.
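The orthonormality of the Chebyshev polynomials with respect to \(\mu_{\operatorname{ST}}\), which will be used below, can be checked directly (we record the computation for convenience): under the substitution \(x=2\cos\theta\) one has \(d\mu_{\operatorname{ST}}(x)=\frac{2}{\pi}\sin^{2}\theta\,d\theta\) on \([0,\pi]\), whence \[\int_{-2}^{2}C_{r}(x)C_{r^{\prime}}(x)\,d\mu_{\operatorname{ST}}(x)=\frac{2}{\pi}\int_{0}^{\pi}\sin((r+1)\theta)\sin((r^{\prime}+1)\theta)\,d\theta=\delta_{rr^{\prime}}.\]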
In particular, \(\lambda_{\pi}(p)-p^{-1}\in[-2,2].\) For \(r\geq 0\) define \(\tilde{\lambda}_{\pi}(p^{r})\) via the formula \[\sum_{r\geq 0}\frac{\tilde{\lambda}_{\pi}(p^{r})}{p^{rs}}=(1-\frac{\alpha_{\pi}(p)}{p^{s}})^{-1}(1-\frac{\alpha_{\pi}^{-1}(p)}{p^{s}})^{-1}. \tag{12.2}\] In particular we have \[\tilde{\lambda}_{\pi}(p)=\alpha_{\pi}(p)+\alpha_{\pi}^{-1}(p)=\lambda_{\pi}(p)-1/p\] and more generally \[\tilde{\lambda}_{\pi}(p^{r})=C_{r}(\tilde{\lambda}_{\pi}(p)). \tag{12.3}\] In view of (11.5) we also have \[\tilde{\lambda}_{\pi}(p^{r})=\frac{1}{p^{r}}\sum_{l=0}^{r}(-1)^{r-l}p^{l}\lambda_{\pi}(p^{l}).\] The identity above can be rewritten \[\tilde{\lambda}_{\pi}(p^{r})=\big{(}\frac{(-1)^{\Omega(\bullet)}}{\mathrm{Id}}\star\lambda_{\pi}\big{)}(p^{r})\] where \(\star\) is the Dirichlet convolution and \(\frac{(-1)^{\Omega(\bullet)}}{\mathrm{Id}}\) is the multiplicative function \[n\mapsto\frac{(-1)^{\Omega(n)}}{n}.\] In particular, for \(\mathbf{p}=(p_{1},\cdots,p_{m})\) a tuple of inert primes coprime with \(NN^{\prime}\) we have \[\tilde{\lambda}_{\pi}(\mathbf{p})=(\tilde{\lambda}_{\pi}(p_{1}),\cdots,\tilde{\lambda}_{\pi}(p_{m})).\] Moreover if, for a tuple of integers \((r_{1},\cdots,r_{m})\in\mathbb{N}^{m}\) and \(\ell=p_{1}^{r_{1}}\cdots p_{m}^{r_{m}}\), we define \[\tilde{\lambda}_{\pi}(\ell):=\prod_{i=1}^{m}\tilde{\lambda}_{\pi}(p_{i}^{r_{i}}),\ \tilde{\lambda}_{\pi}(1)=1,\] we obtain a multiplicative function which can be expressed as a Dirichlet convolution: \[\tilde{\lambda}_{\pi}(\ell)=\sum_{\ell_{1}\ell_{2}=\ell}\frac{(-1)^{\Omega(\ell_{1})}}{\ell_{1}}\lambda_{\pi}(\ell_{2}).
\tag{12.4}\] We now turn to the combinatorics of the Hecke eigenvalues \(\lambda_{\pi^{\prime}}(p^{r}),\ r\geq 0\): the fact that the product of Cartan cells \[K_{p}^{\prime}\begin{pmatrix}p&\\ &p^{-1}\end{pmatrix}K_{p}^{\prime}\cdot K_{p}^{\prime}\begin{pmatrix}p^{r}&\\ &p^{-r}\end{pmatrix}K_{p}^{\prime}\] decomposes as the disjoint union \[K_{p}^{\prime}\begin{pmatrix}p^{r+1}&\\ &p^{-r-1}\end{pmatrix}K_{p}^{\prime}\ \sqcup\ K_{p}^{\prime}\begin{pmatrix}p^{r}&\\ &p^{-r}\end{pmatrix}K_{p}^{\prime}\ \sqcup\ K_{p}^{\prime}\begin{pmatrix}p^{r-1}&\\ &p^{-r+1}\end{pmatrix}K_{p}^{\prime}\] implies the Hecke relation \[\lambda_{\pi^{\prime}}(p)\lambda_{\pi^{\prime}}(p^{r})=\lambda_{\pi^{\prime}}(p^{r+1})+\lambda_{\pi^{\prime}}(p^{r})+\lambda_{\pi^{\prime}}(p^{r-1}).\] So if we set \[\tilde{\lambda}_{\pi^{\prime}}(p^{r}):=\frac{1}{p^{r}}\sum_{l=0}^{r}(-1)^{r-l}\lambda_{\pi^{\prime}}(p^{l}),\] we see, by substituting this definition into the above relation, that \[p\tilde{\lambda}_{\pi^{\prime}}(p)\cdot p^{r}\tilde{\lambda}_{\pi^{\prime}}(p^{r})=p^{r+1}\tilde{\lambda}_{\pi^{\prime}}(p^{r+1})+p^{r-1}\tilde{\lambda}_{\pi^{\prime}}(p^{r-1}).\] This in turn implies that \[p^{r}\tilde{\lambda}_{\pi^{\prime}}(p^{r})=C_{r}(\lambda_{\pi^{\prime}}(p)). \tag{12.5}\]

_Remark 12.3_.: Unlike the case of \(\tilde{\lambda}_{\pi}(p^{r})\), there is no factor \(p^{l}\) included in the definition of \(\tilde{\lambda}_{\pi^{\prime}}(p^{r})\).

If we set for \(\ell=p_{1}^{r_{1}}\cdots p_{m}^{r_{m}}\) \[\tilde{\lambda}_{\pi^{\prime}}(\ell):=\prod_{i=1}^{m}\tilde{\lambda}_{\pi^{\prime}}(p_{i}^{r_{i}}),\ \tilde{\lambda}_{\pi^{\prime}}(1)=1\] we obtain a multiplicative function which is the Dirichlet convolution \[\tilde{\lambda}_{\pi^{\prime}}(\ell)=\frac{1}{\ell}\big{(}(-1)^{\Omega(\bullet)}\star\lambda_{\pi^{\prime}}\big{)}(\ell)=\frac{1}{\ell}\sum_{\ell_{1}\ell_{2}=\ell}(-1)^{\Omega(\ell_{1})}\lambda_{\pi^{\prime}}(\ell_{2}). \tag{12.6}\] Suppose \(N>\ell^{2}{N^{\prime}}^{2}\).
By Theorem 11.1, and using (12.4) and (12.6), we have \[\frac{{N^{\prime}}^{2}}{w_{E}\mathfrak{S}(N^{\prime})}\frac{d_{k}}{d_{\Lambda}}\frac{1}{N^{2}\Psi(N)}\sum_{\begin{subarray}{c}\pi\in\mathcal{A}_{k}(N)\\ \varphi\in\mathcal{B}_{k}^{\mathfrak{n}}(\pi)\end{subarray}}\tilde{\lambda}_{\pi}(\ell)\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}=\\ \tilde{\lambda}_{\pi^{\prime}}(\ell)+\mathcal{R}(k,\ell,N,N^{\prime}), \tag{12.7}\] where \[\mathcal{R}(k,\ell,N,N^{\prime})=\frac{(\ell NN^{\prime})^{o(1)}}{2^{4k}kNN^{\prime}\ell}+(k\ell NN^{\prime})^{o(1)}\ell^{15}{N^{\prime}}^{\frac{20}{3}}k^{1/2}(1+\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{2}\mathcal{E}\] with \[\mathcal{E}=e^{-\frac{\kappa}{(\ell N^{\prime})^{2}+1}}+2^{-\kappa}\] and if we assume that \[\ell^{2}{N^{\prime}}^{2}<N, \tag{12.8}\] the second term can be replaced by \[(k\ell NN^{\prime})^{o(1)}\ell^{5}{N^{2}}{N^{\prime}}^{14/3}k^{1/2}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.\] We now interpret this formula in terms of the measure discussed above.
For \(x=(x_{1},\cdots,x_{m})\in[-2,2]^{m}\) let \[\phi(x)=\prod_{i=1}^{m}C_{r_{i}}(x_{i}).\] From (12.3) we have \[\tilde{\lambda}_{\pi}(\ell):=\prod_{i=1}^{m}\tilde{\lambda}_{\pi}(p_{i}^{r_{i}})=\prod_{i=1}^{m}C_{r_{i}}(\lambda_{\pi}(p_{i})-p_{i}^{-1})=\phi(\tilde{\lambda}_{\pi}(\mathbf{p})).\] Also for \(i=1,\cdots,m\) we have \[L(1/2,\sigma_{E_{p_{i}},x_{i}}\times\pi_{E_{p_{i}}}^{\prime})=\sum_{r=0}^{\infty}C_{r}(x_{i})\tilde{\lambda}_{\pi^{\prime}}(p_{i}^{r})\] and by (12.5) this is equal to \[\sum_{r=0}^{\infty}\frac{C_{r}(x_{i})C_{r}(\lambda_{\pi^{\prime}}(p_{i}))}{p_{i}^{r}}.\] Since Chebyshev polynomials are orthonormal relative to \(d\mu_{\mathrm{ST}}\), we have \[\tilde{\lambda}_{\pi^{\prime}}(p_{i}^{r_{i}})=\int_{\mathbb{R}}\sum_{r=0}^{\infty}\tilde{\lambda}_{\pi^{\prime}}(p_{i}^{r})C_{r}(x_{i})C_{r_{i}}(x_{i})d\mu_{\mathrm{ST}}(x_{i})=\mu_{p_{i}}(C_{r_{i}}).\] We have therefore \[\tilde{\lambda}_{\pi^{\prime}}(\ell)=\prod_{i=1}^{m}\tilde{\lambda}_{\pi^{\prime}}(p_{i}^{r_{i}})=\mu_{\mathbf{p}}(\phi).\] So (12.7) becomes \[\frac{N^{\prime 2}}{w_{E}\mathfrak{S}(N^{\prime})}\frac{d_{k}}{d_{\Lambda}}\frac{1}{N^{2}\Psi(N)}\sum_{\begin{subarray}{c}\pi\in\mathcal{A}_{k}(N)\\ \varphi\in\mathcal{B}_{k}^{\mathfrak{n}}(\pi)\end{subarray}}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\left\langle\varphi,\varphi\right\rangle\!\left\langle\varphi^{\prime},\varphi^{\prime}\right\rangle}\phi(\tilde{\lambda}_{\pi}(\mathbf{p}))=\mu_{\mathbf{p}}(\phi)+\mathcal{R}(k,\ell,N,N^{\prime}). \tag{12.9}\] Suppose that \(k+N\to\infty\). If \(k\geq N\) we see that \[\mathcal{R}(k,\ell,N,N^{\prime})=o_{\phi,N^{\prime}}(1)\] since \(\mathcal{E}\) converges exponentially fast to \(0\) while the dependency in \(N\) is at most polynomial.
If \(N\geq k\) then for \(N\) large enough (12.8) is satisfied and \[\mathcal{R}(k,\ell,N,N^{\prime})=\frac{(\ell NN^{\prime})^{o(1)}}{2^{4k}kNN^{\prime}\ell}+(k\ell NN^{\prime})^{o(1)}\ell^{5}N^{2}k^{1/2}{N^{\prime}}^{14/3}(\frac{\ell^{2}{N^{\prime}}^{2}}{N})^{\kappa}.\] The first term in the expression above is always \(o_{\phi,N^{\prime}}(1)\), and so is the second because \(\kappa>6\). Theorem 12.1 for general \(\phi\) follows from the Stone-Weierstrass theorem.

## 13. Averaging over forms of exact level \(N\)

Suppose that \(N>1\) is an inert prime. With the choice of the test function made in §4.4 the spectral side of the relative trace formula picks up both newforms and oldforms of level \(N\). In this section, we show that when \(N\) is large enough the contribution from the oldforms is smaller than that from the newforms; from this, we will eventually deduce (1.6). We use the notations of §5.1. The set \(\mathcal{A}_{k}(N)\) is the disjoint union of the two subsets \(\mathcal{A}_{k}^{\mathrm{n}}(N)\) and \(\mathcal{A}_{k}(1)\) where \[\mathcal{A}_{k}(1)=\{\pi=\pi_{\infty}\otimes\pi_{f}\in\mathcal{A}(G),\omega_{\pi}=\mathbf{1},\ \pi_{\infty}\simeq D^{\Lambda},\ \pi_{f}^{K_{f}(1)}\neq\{0\}\}\] is the set of automorphic representations "of level 1" and \(\mathcal{A}_{k}^{\mathrm{n}}(N)\) is the set of automorphic representations which are "new" at \(N\). Consequently the space of automorphic forms \(\mathcal{V}_{k}(N)\) admits an orthogonal decomposition \[\mathcal{V}_{k}(N)=\mathcal{V}_{k}^{new}(N)\oplus\mathcal{V}_{k}^{old}(N)\] (here \(\mathcal{V}_{k}^{old}(N)\) is the subspace generated by the forms that belong to the elements of \(\mathcal{A}_{k}(1)\)). We choose a corresponding orthogonal basis \[\mathcal{B}_{k}(N)=\mathcal{B}_{k}^{new}(N)\sqcup\mathcal{B}_{k}^{old}(N)\] whose elements belong to the \(\pi\) contained in either \(\mathcal{A}_{k}^{\mathrm{n}}(N)\) or \(\mathcal{A}_{k}(1)\) and are factorable vectors.
Accordingly we have a corresponding decomposition \[\mathcal{B}_{k}^{\mathfrak{n}}(N)=\mathcal{B}_{k}^{\mathfrak{n},new}(N)\sqcup\mathcal{B}_{k}^{\mathfrak{n},old}(N)\] and the spectral side of the relative trace formula decomposes as \[J(f^{\mathfrak{n}})=J^{new}(f^{\mathfrak{n}})+J^{old}(f^{\mathfrak{n}}),\] where \[J^{new}(f^{\mathfrak{n}})=\frac{1}{d_{\Lambda}}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n},new}_{k}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle},\] \[J^{old}(f^{\mathfrak{n}})=\frac{1}{d_{\Lambda}}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n},old}_{k}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}.\] We show that the contribution from oldforms is negligible. The main result of this section is the following.

**Proposition 13.1**.: _With notations and assumptions as in Theorem 1.1, we have_ \[J^{old}(f^{\mathfrak{n}})\ll_{N^{\prime}}\frac{1}{k}. \tag{13.1}\]

### Proof of Theorem 1.1

Assuming this proposition, let us prove (1.6).
We have by Theorem 11.1 \[\begin{split}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{k}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}&=\sum_{\varphi\in\mathcal{B}^{\mathfrak{n},new}_{k}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}+O_{N^{\prime}}(\frac{d_{\Lambda}}{d_{k}})\\ &=w_{E}\frac{d_{\Lambda}}{d_{k}}(\frac{N}{N^{\prime}})^{2}\Psi(N)\mathfrak{S}(N^{\prime})\\ &\quad+(kN)^{o(1)}\frac{Nk}{2^{4k}}+(kN)^{o(1)}k^{5/2}N^{2}(e^{-\frac{\kappa}{{N^{\prime}}^{2}+1}}+2^{-\kappa})\end{split} \tag{13.2}\] For \(N\) sufficiently large (depending on \(N^{\prime}\)) the main term \[w_{E}\frac{d_{\Lambda}}{d_{k}}(\frac{N}{N^{\prime}})^{2}\Psi(N)\mathfrak{S}(N^{\prime})\asymp\frac{d_{\Lambda}}{d_{k}}(\frac{N}{N^{\prime}})^{2} \tag{13.3}\] will be at least twice as big as the term \(O_{N^{\prime}}(d_{\Lambda}/d_{k})\) above; moreover as \(k+N\to\infty\) the second and third terms on the right-hand side of (13.2) are negligible compared to (13.3). Therefore under the assumptions of Theorem 1.1 we have \[\sum_{\varphi\in\mathcal{B}^{\mathfrak{n},new}_{k}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\asymp_{N^{\prime}}\frac{d_{\Lambda}}{d_{k}}N^{2}.
\tag{13.4}\] By Proposition 5.4 we have, for any \(\varphi\in\mathcal{B}^{\mathfrak{n}}_{\pi}(N),\ \pi\in\mathcal{A}^{\mathfrak{n}}_{k}(N)\), \[\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\asymp\frac{L(1/2,\pi_{E}\times\pi_{E}^{\prime})}{L(1,\pi,\mathrm{Ad})L(1,\pi^{\prime},\mathrm{Ad})}\cdot\frac{1}{d_{k}}\frac{1}{N{N^{\prime}}^{2}}, \tag{13.5}\] which implies that \[\sum_{\pi\in\mathcal{A}^{\mathfrak{n}}_{k}(N)}\frac{L(1/2,\pi_{E}\times\pi_{E}^{\prime})}{L(1,\pi,\mathrm{Ad})L(1,\pi^{\prime},\mathrm{Ad})}\asymp_{N^{\prime}}d_{\Lambda}N^{3}\asymp_{N^{\prime}}|\mathcal{A}^{\mathfrak{n}}_{k}(N)|\] by Weyl's law (1.5). ### Local Analysis: an elucidation of oldforms Let \(p\) be a prime inert in \(E\). Given \(\pi\in\mathcal{A}_{k}(1)\), in this section we shall describe explicitly the space \(\pi_{p}^{I_{p}}\) of Iwahori-fixed vectors at \(p\). This is certainly well known, but we could not find a reference for it. As this subsection is purely local (at the place \(p\)), we will often omit the index \(p\) to simplify notations: we will write \(E\) for the local field \(E_{p}\), \(\nu\) for its valuation \(\nu_{p}\), \(G\) for \(G(\mathbb{Q}_{p})\), \(\pi\) for the local component \(\pi_{p}\), \(I\) for \(I_{p}\), \(\pi^{I}\) for \(\pi_{p}^{I_{p}}\), etc. Let \(\phi^{\circ}\in\pi\) be the spherical vector normalized such that \(\phi^{\circ}(e)=1\), where \(e\) is the identity matrix in \(G\). It is well known that the subspace \(\pi^{I}\) has dimension two: \(\pi\) is induced from an unramified character, hence trivial on \(G(\mathbb{Z}_{p})\), and \[B(\mathbb{Z}_{p})\backslash G(\mathbb{Z}_{p})/I\simeq B(\mathbb{F}_{p})\backslash G(\mathbb{F}_{p})/B(\mathbb{F}_{p})\] has two elements. Obviously \(\pi^{I}\) contains \(\phi^{\circ}\). Our first goal is to construct an explicit vector \(\phi^{*}\in\pi^{I}\) which is not a multiple of \(\phi^{\circ}\). 
By the Gram-Schmidt process, we will then obtain an orthonormal basis of \(\pi^{I}\). ### Construction of \(\phi^{*}\) Let \[t=A_{1}^{-1}=\operatorname{diag}(p^{-1},1,p);\] we set \[\phi^{*}(g):=\frac{1}{\operatorname{vol}(I)}\int_{I}\phi^{\circ}(gkt)dk.\] **Lemma 13.2**.: _We have_ \[\phi^{*}=p^{-1}\pi(t)\phi^{\circ}+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\pi\left(\begin{pmatrix}1&&i\alpha p^{-1}\\ &1&\\ &&1\end{pmatrix}\right)\phi^{\circ}\in\pi^{I}-\{0\}. \tag{13.6}\] Proof.: Given \[k=\begin{pmatrix}k_{11}&k_{12}&k_{13}\\ pk_{21}&k_{22}&k_{23}\\ pk_{31}&pk_{32}&k_{33}\end{pmatrix}\in I\] we have \(\nu(k_{11})=\nu(k_{22})=\nu(k_{33})=0\). Let \(z\in i\mathbb{Z}_{p}\) be such that \[zk_{11}\equiv-k_{31}\,(\operatorname{mod}p);\] in particular \(k_{33}+pzk_{13}\in\mathcal{O}_{E}^{\times}\). Let \[\begin{pmatrix}1&&\\ &1&\\ pz&&1\end{pmatrix}\in I^{\prime}:=I\cap K^{\prime}\] and \[k^{*}:=\begin{pmatrix}1&&\\ &1&\\ pz&&1\end{pmatrix}k=\begin{pmatrix}k_{11}&k_{12}&k_{13}\\ pk_{21}&k_{22}&k_{23}\\ pk_{31}+pzk_{11}&pk_{32}+pzk_{12}&k_{33}+pzk_{13}\end{pmatrix}\in I.\] Since \(\begin{pmatrix}1&&\\ &1&\\ -pz&&1\end{pmatrix}\in I\) we have, by a change of variable, \[\phi^{*}(g):=\frac{1}{\operatorname{vol}(I)}\int_{I}\phi^{\circ}(gkt)dk=\frac{1}{\operatorname{vol}(I)}\int_{i\mathbb{Z}_{p}}\int_{I}\phi^{\circ}\left(g\begin{pmatrix}1&&\\ &1&\\ -pz&&1\end{pmatrix}kt\right)dkdz.\] Since \(t^{-1}k^{*}t\in K\) and \(\phi^{\circ}\) is spherical we see that \[\phi^{*}(g)= \int_{i\mathbb{Z}_{p}}\phi^{\circ}\left(g\begin{pmatrix}1&&\\ &1&\\ -pz&&1\end{pmatrix}t\right)dz\] \[= \int_{\mathbb{Z}_{p}-p\mathbb{Z}_{p}}\phi^{\circ}\left(g\begin{pmatrix}1&&\\ &1&\\ ipz&&1\end{pmatrix}t\right)dz+\int_{p\mathbb{Z}_{p}}\phi^{\circ}\left(g\begin{pmatrix}1&&\\ &1&\\ ipz&&1\end{pmatrix}t\right)dz.\] Note that for \(z\in p\mathbb{Z}_{p}\), \(\begin{pmatrix}1&&\\ &1&\\ ipz&&1\end{pmatrix}t\in tK.\) Hence, \[\phi^{*}(g)=\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\phi^{\circ}\left(g\begin{pmatrix}p^{-1}&&\\ &1&\\ 
i\alpha&&p\end{pmatrix}\right)+p^{-1}\phi^{\circ}(gt).\] Taking advantage of the identity \[\begin{pmatrix}1&&i\alpha^{-1}p^{-1}\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}p^{-1}&&\\ &1&\\ i\alpha&&p\end{pmatrix}=\begin{pmatrix}&&i\alpha^{-1}\\ &1&\\ i\alpha&&p\end{pmatrix}\in K \tag{13.7}\] we obtain the equality (13.6) by the change of variable \(\alpha\mapsto\alpha^{-1}\). Since \(\pi\) is unramified and given our choice for \(\phi^{\circ}\), we have \[\phi^{\circ}(t)=\delta(t)^{\frac{1}{2}}\overline{\chi}^{2}(p)=p^{2}\overline{\chi}^{2}(p)\] where \(\delta\) is the modulus character and \(\chi\) is a unitary unramified character; it follows that, for \(J=\begin{pmatrix}&&1\\ &1&\\ 1&&\end{pmatrix}\), \[\phi^{*}(J)=p^{-1}\phi^{\circ}(t^{-1})+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\phi^{\circ}\left(\begin{pmatrix}1&&\\ &1&\\ i\alpha p^{-1}&&1\end{pmatrix}\right). \tag{13.8}\] By (13.7) we have \[\phi^{\circ}\left(\begin{pmatrix}1&&\\ &1&\\ i\alpha p^{-1}&&1\end{pmatrix}\right)=\phi^{\circ}\left(t^{-1}\begin{pmatrix}1&&i\alpha^{-1}p^{-1}\\ &1&\\ &&1\end{pmatrix}\right)=\phi^{\circ}(t^{-1})=p^{-2}\chi^{2}(p).\] Substituting this into (13.8) we then obtain \[\phi^{*}(J)=(p^{-1}+p-1)p^{-2}\chi^{2}(p)\neq 0. \tag{13.9}\] Hence \(\phi^{*}\not\equiv 0\). **Lemma 13.3**.: _The vector \(\phi^{*}\) is not a scalar multiple of \(\phi^{\circ}\)._ Proof.: Since \(\phi^{\circ}\) is spherical, we have \[\phi^{\circ}(e)=\phi^{\circ}(J).\] On the other hand, by Lemma 13.2 we have \[\phi^{*}(e) =p^{-1}\phi^{\circ}(t)+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\phi^{\circ}\left(\begin{pmatrix}1&&i\alpha p^{-1}\\ &1&\\ &&1\end{pmatrix}\right)\] \[=p^{-1}\phi^{\circ}(t)+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\phi^{\circ}(e)\] \[=p(\overline{\chi}^{2}(p)+1)-1.\] Since \(|\chi(p)|=1\), by the triangle inequality, we have \(|\phi^{*}(e)|\geq 1\). On the other hand, (13.9) yields \[|\phi^{*}(J)|=p^{-1}-p^{-2}+p^{-3}<1.\] Hence \(\phi^{*}(e)\neq\phi^{*}(J)\). 
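Since several of the manipulations above hinge on the matrix identity (13.7), a quick numerical sanity check may be worth recording. This is only a sketch: \(p\) and \(\alpha\) are specialized to the hypothetical dyadic values \(4\) and \(2\) (so that floating-point arithmetic is exact), and \(i\) is realized as the complex unit.

```python
# Numerical check of the matrix identity (13.7):
#   [[1,0,i/(a*p)], [0,1,0], [0,0,1]] . [[1/p,0,0], [0,1,0], [i*a,0,p]]
#     = [[0,0,i/a],  [0,1,0], [i*a,0,p]].
# p, a are hypothetical test values (powers of 2, so floats are exact);
# the identity holds for any nonzero p, a.

def matmul(A, B):
    """Product of two 3x3 matrices with complex entries."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

p, a = 4, 2
i = 1j
left = matmul(
    [[1, 0, i / (a * p)], [0, 1, 0], [0, 0, 1]],
    [[1 / p, 0, 0], [0, 1, 0], [i * a, 0, p]],
)
right = [[0, 0, i / a], [0, 1, 0], [i * a, 0, p]]
assert left == right
print("identity (13.7) verified")
```

The cancellation in the \((1,1)\) entry, \(p^{-1}+(i\alpha^{-1}p^{-1})(i\alpha)=p^{-1}(1+i^{2})=0\), is exactly what the check exercises.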
However, as \(\phi^{\circ}\) is \(K\)-invariant and \(J\in K\), we have \(\phi^{\circ}(J)=\phi^{\circ}(e)\); hence \(\phi^{*}\) cannot be a scalar multiple of \(\phi^{\circ}\). #### 13.3.1. The Gram-Schmidt Process Let \[\phi^{\dagger}=\frac{\phi^{*}-\frac{\langle\phi^{*},\phi^{\circ}\rangle}{\langle\phi^{\circ},\phi^{\circ}\rangle}\phi^{\circ}}{\sqrt{\langle\phi^{*},\phi^{*}\rangle-\frac{\langle\phi^{*},\phi^{\circ}\rangle^{2}}{\langle\phi^{\circ},\phi^{\circ}\rangle}}}.\] Then by construction \[\{\frac{\phi^{\circ}}{\sqrt{\langle\phi^{\circ},\phi^{\circ}\rangle}},\ \phi^{\dagger}\}\] is an orthonormal basis of \(\pi^{I}\). _Norm computations._ Given \(\alpha\in\mathbb{Z}_{p}^{\times}\) we set \[n_{\alpha}=\begin{pmatrix}1&&i\alpha p^{-1}\\ &1&\\ &&1\end{pmatrix}. \tag{13.10}\] We have \[\langle\pi(n_{\alpha})\phi^{\circ},\phi^{\circ}\rangle=\langle\pi(t^{-1})\phi^{\circ},\phi^{\circ}\rangle \tag{13.11}\] as (13.7) implies that \(n_{\alpha}\in Kt^{-1}K.\) Thus by (13.6) we have \[\langle\phi^{*},\phi^{\circ}\rangle= \langle p^{-1}\pi(t)\phi^{\circ}+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\pi(n_{\alpha})\phi^{\circ},\phi^{\circ}\rangle\] \[= p^{-1}\langle\pi(t)\phi^{\circ},\phi^{\circ}\rangle+(p-1)\langle\pi(t^{-1})\phi^{\circ},\phi^{\circ}\rangle,\] and \[\langle\phi^{*},\phi^{*}\rangle= \langle p^{-1}\pi(t)\phi^{\circ}+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\pi(n_{\alpha})\phi^{\circ},p^{-1}\pi(t)\phi^{\circ}+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\pi(n_{\alpha})\phi^{\circ}\rangle\] \[= p^{-2}\langle\phi^{\circ},\phi^{\circ}\rangle+p^{-1}\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\langle\pi(n_{\alpha})\phi^{\circ},\pi(t)\phi^{\circ}\rangle\] \[+p^{-1}\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\langle\pi(t)\phi^{\circ},\pi(n_{\alpha})\phi^{\circ}\rangle+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\sum_{\beta\in\mathbb{F}_{p}^{\times}}\langle\pi(n_{\beta}^{-1}n_{\alpha})\phi^{\circ},\phi^{\circ}\rangle.\] Note that 
\[t^{-1}n_{\alpha}=\begin{pmatrix}p&&i\alpha\\ &1&\\ &&p^{-1}\end{pmatrix}=\begin{pmatrix}1&&i\alpha p\\ &1&\\ &&1\end{pmatrix}\begin{pmatrix}p&&\\ &1&\\ &&p^{-1}\end{pmatrix}.\] Therefore, we have \[\langle\pi(n_{\alpha})\phi^{\circ},\pi(t)\phi^{\circ}\rangle=\langle\pi(t^{-1}n_{\alpha})\phi^{\circ},\phi^{\circ}\rangle=\langle\pi(t^{-1})\phi^{\circ},\phi^{\circ}\rangle=\langle\pi(t)\phi^{\circ},\phi^{\circ}\rangle,\] where we use the fact that \(t^{-1}=JtJ\) and \(\phi^{\circ}\) is right \(J\)-invariant. By (13.7), we have \[\langle\pi(n_{\beta}^{-1}n_{\alpha})\phi^{\circ},\phi^{\circ}\rangle=\begin{cases}\langle\phi^{\circ},\phi^{\circ}\rangle,&\text{if $\alpha=\beta$ in $\mathbb{F}_{p}^{\times}$},\\ \langle\pi(t)\phi^{\circ},\phi^{\circ}\rangle,&\text{otherwise}.\end{cases}\] By Macdonald's formula for spherical vectors and the temperedness of \(\pi\) we have \[\frac{\langle\pi(t)\phi^{\circ},\phi^{\circ}\rangle}{\langle\phi^{\circ},\phi^{\circ}\rangle}=O(\frac{1}{p^{2}})\] and \[\frac{\langle\phi^{*},\phi^{*}\rangle}{\langle\phi^{\circ},\phi^{\circ}\rangle}= (p+p^{-2}-1)+(p^{2}-3p+4-2p^{-1})\frac{\langle\pi(t)\phi^{\circ},\phi^{\circ}\rangle}{\langle\phi^{\circ},\phi^{\circ}\rangle}=p+O(1),\] \[\frac{\langle\phi^{*},\phi^{\circ}\rangle}{\langle\phi^{\circ},\phi^{\circ}\rangle}= (p+p^{-1}-1)\frac{\langle\pi(t)\phi^{\circ},\phi^{\circ}\rangle}{\langle\phi^{\circ},\phi^{\circ}\rangle}=O(\frac{1}{p}), \tag{13.12}\] where the implied constants are absolute. Consequently we have \[\frac{1}{\langle\phi^{\circ},\phi^{\circ}\rangle}\big{(}\langle\phi^{*},\phi^{*}\rangle-\frac{\langle\phi^{*},\phi^{\circ}\rangle^{2}}{\langle\phi^{\circ},\phi^{\circ}\rangle}\big{)}=p+O(1). \tag{13.13}\] ### Global Analysis: Proof of Proposition 13.1 In this section we are back to the global setting and return to the notation in force at the beginning of Section 13. 
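The coefficient bookkeeping behind (13.12) — the cross terms contributing \(2p^{-1}(p-1)\), the \(p-1\) diagonal terms \(\alpha=\beta\), and the \((p-1)(p-2)\) off-diagonal ones — can be sanity-checked with exact rational arithmetic. The sketch below assumes real inner products and abbreviates \(X=\langle\phi^{\circ},\phi^{\circ}\rangle\), \(Y=\langle\pi(t)\phi^{\circ},\phi^{\circ}\rangle\); it compares the raw expansion with the closed forms in (13.12) at a few numeric values of \(p\).

```python
from fractions import Fraction as F

def expansion_coefficients(p):
    """Collect the expansion in terms of X = <phi°,phi°>, Y = <pi(t)phi°,phi°>:
      <phi*,phi*> = p^{-2} X + 2 p^{-1}(p-1) Y    (cross terms)
                    + (p-1) X                     (diagonal alpha = beta)
                    + (p-1)(p-2) Y                (off-diagonal alpha != beta)
      <phi*,phi°> = p^{-1} Y + (p-1) Y.
    Returns the (X, Y)-coefficients of <phi*,phi*> and the Y-coefficient
    of <phi*,phi°> as exact rationals."""
    p = F(p)
    star_star_X = p**-2 + (p - 1)
    star_star_Y = 2 * (p - 1) / p + (p - 1) * (p - 2)
    star_circ_Y = p**-1 + (p - 1)
    return star_star_X, star_star_Y, star_circ_Y

for q in (2, 3, 5, 7, 11, 101):
    sX, sY, cY = expansion_coefficients(q)
    q = F(q)
    assert sX == q + q**-2 - 1           # coefficient of X in (13.12)
    assert sY == q**2 - 3*q + 4 - 2/q    # coefficient of Y in (13.12)
    assert cY == q + q**-1 - 1           # coefficient of Y in (13.12)
print("coefficients in (13.12) confirmed")
```

Since two rational functions of \(p\) agreeing at six values of degree at most four must agree identically, this check is in fact a proof of the coefficient identities.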
Given \(\pi\simeq\otimes_{p\leq\infty}\pi_{p}\in\mathcal{A}_{k}(1)\), by the previous section we may assume that \[\mathcal{B}_{\pi,k}(N)=\pi\cap\mathcal{B}_{k}(N)=\{\varphi_{\pi}^{\circ},\varphi_{\pi}^{\dagger}\}\] is made of two factorable vectors such that \[\varphi_{\pi}^{\circ}\simeq\otimes_{v}\phi_{v}^{\circ},\ \varphi_{\pi}^{\dagger}\simeq\phi_{N}^{\dagger}\otimes(\otimes_{v\neq N}\phi_{v}^{\circ})\] where * \(\phi_{v}^{\circ}\in\pi_{v}\) is either spherical for \(v<\infty\) or a highest weight vector of the minimal \(K\)-type of \(D^{\mathbb{A}}\) for \(v=\infty\) and * \(\phi_{N}^{\dagger}\in\pi_{N}^{I_{N}}\) is the vector denoted \(\phi^{\dagger}\) in §13.3.1. We have \[J^{old}(f^{\mathfrak{n}})=\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(1)}\frac{\left|\mathcal{P}(\varphi_{\pi}^{\circ},\varphi^{\prime})\right|^{2}}{\langle\varphi_{\pi}^{\circ},\varphi_{\pi}^{\circ}\rangle}+\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(1)}\frac{\left|\mathcal{P}(\varphi_{\pi}^{\dagger},\varphi^{\prime})\right|^{2}}{\langle\varphi_{\pi}^{\dagger},\varphi_{\pi}^{\dagger}\rangle}. \tag{13.14}\] We first handle the second sum on the RHS of (13.14). Set \[p=N,\ t=\operatorname{diag}(p^{-1},1,p)\in G(\mathbb{Q}_{p})\hookrightarrow G(\mathbb{A}).\] We have for \(\pi\in\mathcal{A}_{k}(1)\) (in the sequel we drop the index \(\pi\) to ease notations) \[\frac{|\mathcal{P}(\varphi_{\pi}^{\dagger},\varphi^{\prime})|^{2}}{\langle\varphi_{\pi}^{\dagger},\varphi_{\pi}^{\dagger}\rangle}\leq 2\frac{|\mathcal{P}(\phi^{*},\varphi^{\prime})|^{2}+|\frac{\langle\phi^{*},\phi^{\circ}\rangle}{\langle\phi^{\circ},\phi^{\circ}\rangle}|^{2}|\mathcal{P}(\phi^{\circ},\varphi^{\prime})|^{2}}{\langle\phi^{*},\phi^{*}\rangle-\frac{\langle\phi^{*},\phi^{\circ}\rangle^{2}}{\langle\phi^{\circ},\phi^{\circ}\rangle}}. \tag{13.15}\] We recall that \[\phi^{*}=p^{-1}\pi_{p}(t)\phi^{\circ}+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\pi_{p}(n_{\alpha})\phi^{\circ}\in\pi^{I},\] with \(n_{\alpha}\) as in (13.10). 
By the change of variables \[x\mapsto xt^{-1},\ xn_{\alpha}\mapsto xn_{\alpha}^{-1},\ \alpha\mapsto-\alpha,\] we derive that \[\mathcal{P}(\phi^{*},\varphi^{\prime})= p^{-1}\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(xt^{-1})dx+\sum_{\alpha\in\mathbb{F}_{p}^{\times}}\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(xn_{\alpha})dx.\] Similarly to (13.11), or making use of (13.7), we have for any \(\alpha\in\mathbb{F}_{p}^{\times}\) \[\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(xn_{\alpha})dx=\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(xt^{-1})dx,\] so that \[\mathcal{P}(\phi^{*},\varphi^{\prime})=(p-1+\frac{1}{p})\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(xt^{-1})dx. \tag{13.16}\] Let \(K^{\prime}\) be the maximal compact subgroup of \(G^{\prime}(\mathbb{A}).\) Since \(\phi^{\circ}\) is \(K^{\prime}\)-invariant, we have, by a change of variable, \[\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(xt^{-1})dx=\frac{1}{\operatorname{vol}(K^{\prime})}\int_{[G^{\prime}]}\phi^{\circ}(x)\int_{K^{\prime}}\varphi^{\prime}(xk^{\prime}t^{-1})dk^{\prime}dx,\] where the inner integral defines a spherical function. By multiplicity one and Macdonald's formula, we have \[\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(xt^{-1})dx=c_{\pi^{\prime}_{N}}(t^{-1})\int_{[G^{\prime}]}\phi^{\circ}(x)\varphi^{\prime}(x)dx=c_{\pi^{\prime}_{N}}(t^{-1})\mathcal{P}(\phi^{\circ},\varphi^{\prime})\] for some scalar \[c_{\pi^{\prime}_{N}}(t^{-1})\ll\delta^{\prime}(t^{-1})\ll p^{-1}.\] Here the implied constant is absolute since \(\pi^{\prime}_{p}\) is tempered. 
Therefore, by (13.16) we have \[\mathcal{P}(\phi^{*},\varphi^{\prime})\ll\mathcal{P}(\phi^{\circ},\varphi^{\prime}).\] By (13.15), (13.12) and (13.13), we obtain \[\frac{\left|\mathcal{P}(\varphi^{\dagger},\varphi^{\prime})\right|^{2}}{\langle\varphi^{\dagger},\varphi^{\dagger}\rangle}\ll\frac{1}{p}\frac{|\mathcal{P}(\phi^{\circ},\varphi^{\prime})|^{2}}{\langle\phi^{\circ},\phi^{\circ}\rangle},\] so that \[\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(1)}\frac{\left|\mathcal{P}(\varphi^{\dagger}_{\pi},\varphi^{\prime})\right|^{2}}{\langle\varphi^{\dagger}_{\pi},\varphi^{\dagger}_{\pi}\rangle}\ll\frac{1}{N}\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(1)}\frac{\left|\mathcal{P}(\varphi^{\circ}_{\pi},\varphi^{\prime})\right|^{2}}{\langle\varphi^{\circ}_{\pi},\varphi^{\circ}_{\pi}\rangle}; \tag{13.17}\] consequently we have \[J^{old}(f^{\mathfrak{n}})\ll\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(1)}\frac{\left|\mathcal{P}(\varphi^{\circ}_{\pi},\varphi^{\prime})\right|^{2}}{\langle\varphi^{\circ}_{\pi},\varphi^{\circ}_{\pi}\rangle},\] and the argument of §11.2 for \(N=\ell=1\) yields \[J^{old}(f^{\mathfrak{n}})\ll_{N^{\prime}}\frac{1}{k}+\frac{k^{o(1)}}{2^{4k}k^{2}}+k^{1/2+o(1)}(e^{-\frac{\kappa}{N^{\prime 2}+1}}+2^{-\kappa})\ll_{N^{\prime}}\frac{1}{k}. \tag{13.18}\] Then (13.1) follows from substituting (13.18) and (13.17) into (13.14). ## 14. **Amplification and Non-vanishing** In this section, we prove Theorem 1.4 and Theorem 1.3. 
We assume that \[k,N\geq C(E,N^{\prime})\] for a suitable constant depending on \(E\) and \(N^{\prime}.\) ### The Amplifier Let \(\sigma\in\mathcal{A}_{k}(N).\) Let \(L>1.\) Denote by \[\mathcal{L}:=\{L/2<\ell<L:\ \ell\text{ is an inert prime in }E,\text{ and }(\ell,NN^{\prime})=1\}.\] By the prime number theorem in arithmetic progressions, one has \(|\mathcal{L}|\asymp_{E}L/\log L,\) where the implied constant depends on \(E.\) Recall that, for \(r\geq 1,\) one has \(\lambda_{\sigma}(\ell^{r})\in\mathbb{R},\) and that by (11.4), one has \[\lambda_{\sigma}(\ell)^{2}=\lambda_{\sigma}(\ell^{2})+\ell^{-1}\lambda_{\sigma}(\ell)+1.\] Suppose \(|\lambda_{\sigma}(\ell)|<1/2\) and \(|\lambda_{\sigma}(\ell^{2})|<1/2\). By the triangle inequality we obtain \[1\leq\lambda_{\sigma}(\ell)^{2}+|\lambda_{\sigma}(\ell^{2})|+\ell^{-1}|\lambda_{\sigma}(\ell)|<\frac{1}{4}+\frac{1}{2}+\ell^{-1}\cdot\frac{1}{2}\leq 1,\] a contradiction. Hence, there exists \(r_{\ell}\in\{1,2\}\) such that \(|\lambda_{\sigma}(\ell^{r_{\ell}})|\geq 1/2.\) Let \[J_{\text{Spec}}(\sigma,L):=\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(N)}\Big{|}\sum_{\ell\in\mathcal{L}}\lambda_{\sigma}(\ell^{r_{\ell}})\lambda_{\pi}(\ell^{r_{\ell}})\Big{|}^{2}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\pi}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}.\] ### Spectral Side: a lower bound By dropping all \(\pi\)'s that are not equal to \(\sigma\) we have \[J_{\mathrm{Spec}}(\sigma,L) \geq\frac{1}{d_{\Lambda}}\Big{|}\sum_{\ell\in\mathcal{L}}\lambda_{\sigma}(\ell^{r_{\ell}})^{2}\Big{|}^{2}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\sigma,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\] \[\gg\frac{L^{2}}{d_{\Lambda}\log^{2}L}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\sigma,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2
}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}. \tag{14.1}\] ### Spectral Side: decomposition Squaring out the sum over the primes \(\ell\in\mathcal{L}\), we obtain \[J_{\mathrm{Spec}}(\sigma,L)=J^{=}_{\mathrm{Spec}}(\sigma,L)+J^{\neq}_{\mathrm{Spec}}(\sigma,L)\] where we have set \[J^{=}_{\mathrm{Spec}}(\sigma,L)=\sum_{\ell\in\mathcal{L}}\lambda_{\sigma}(\ell^{r_{\ell}})^{2}J_{\mathrm{Spec}}(\ell),\] \[J^{\neq}_{\mathrm{Spec}}(\sigma,L)=\sum_{\begin{subarray}{c}\ell_{1},\ell_{2}\in\mathcal{L}\\ \ell_{1}\neq\ell_{2}\end{subarray}}\lambda_{\sigma}(\ell_{1}^{r_{\ell_{1}}})\lambda_{\sigma}(\ell_{2}^{r_{\ell_{2}}})J_{\mathrm{Spec}}(\ell_{1},\ell_{2})\] with \[J_{\mathrm{Spec}}(\ell)=\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(N)}\lambda_{\pi}(\ell^{r_{\ell}})^{2}\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\pi,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle},\] \[J_{\mathrm{Spec}}(\ell_{1},\ell_{2})=\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(N)}\lambda_{\pi}(\ell_{1}^{r_{\ell_{1}}}\ell_{2}^{r_{\ell_{2}}})\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\pi,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}.\] We will now bound \(J_{\mathrm{Spec}}(\ell)\) and \(J_{\mathrm{Spec}}(\ell_{1},\ell_{2})\) using Theorem 11.1. 
We notice first that by the Hecke relations (11.4) we have \[\begin{cases}\lambda_{\pi}(\ell)^{2}=\lambda_{\pi}(\ell^{2})+\ell^{-1}\lambda_{\pi}(\ell)+1,\\ \lambda_{\pi}(\ell^{2})^{2}=\lambda_{\pi}(\ell^{4})+\ell^{-1}\lambda_{\pi}(\ell^{3})+\lambda_{\pi}(\ell^{2})+\ell^{-1}\lambda_{\pi}(\ell)+1.\end{cases} \tag{14.2}\] It follows that \[|J_{\mathrm{Spec}}(\ell)|\leq 4\max_{0\leq\alpha\leq 4}\big{|}\frac{1}{d_{\Lambda}}\sum_{\pi\in\mathcal{A}_{k}(N)}\lambda_{\pi}(\ell^{\alpha})\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\pi,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\big{|}.\] Since \(\ell^{\alpha}\in[1,L^{4}]\), by Theorem 11.1 and Deligne's bound, \(|\lambda_{\pi^{\prime}}(\ell^{\alpha})|\leq\alpha+1\), we obtain \[J_{\mathrm{Spec}}(\ell)\ll_{N^{\prime}}(kLN)^{o(1)}\Big{(}\frac{N^{2}}{k}+\frac{L^{60}N^{2}}{k^{1/2}}(1+\frac{L^{8}}{N})(e^{-\kappa/{N^{\prime}}^{2}L^{8}}+2^{-\kappa})\Big{)} \tag{14.3}\] and if \[{N^{\prime}}^{2}L^{8}<N \tag{14.4}\] the second term on the right-hand side above can be replaced by \[\frac{L^{40}N^{4}}{k^{1/2}}(\frac{L^{8}{N^{\prime}}^{2}}{N})^{\kappa}. \tag{14.5}\] Averaging over \(\ell\in\mathcal{L}\) and using the bound \(\lambda_{\sigma}(\ell^{r_{\ell}})^{2}\ll 1\) we obtain that \[J^{=}_{\mathrm{Spec}}(\sigma,L)\ll_{N^{\prime}}(kLN)^{o(1)}\Big{(}\frac{LN^{2}}{k}+\frac{L^{61}N^{2}}{k^{1/2}}(1+\frac{L^{8}}{N})(e^{-\kappa/{N^{\prime}}^{2}L^{8}}+2^{-\kappa})\Big{)} \tag{14.6}\] and if (14.4) holds, the second term on the right-hand side of the above bound can be replaced by \[\frac{L^{41}N^{4}}{k^{1/2}}(\frac{L^{8}{N^{\prime}}^{2}}{N})^{\kappa}.\] We treat \(J_{\mathrm{Spec}}^{\neq}(\sigma,L)\) in the same way. 
Since \(\ell_{1}^{r_{\ell_{1}}}\ell_{2}^{r_{\ell_{2}}}\in[L^{2}/4,L^{4}]\), using again Theorem 11.1, we obtain the bound \[J_{\mathrm{Spec}}(\ell_{1},\ell_{2})\ll_{N^{\prime}}(kLN)^{o(1)}\Big{(}\frac{N^{2}}{kL^{2}}+\frac{L^{60}N^{2}}{k^{1/2}}(1+\frac{L^{8}}{N})(e^{-\kappa/{N^{\prime}}^{2}L^{8}}+2^{-\kappa})\Big{)}\] and if (14.4) holds, the second term on the right-hand side of this bound can be replaced by (14.5). Averaging over \(\ell_{1}\neq\ell_{2}\in\mathcal{L}\) we obtain \[J_{\mathrm{Spec}}^{\neq}(\sigma,L)\ll_{N^{\prime}}(kLN)^{o(1)}\Big{(}\frac{N^{2}}{k}+\frac{L^{62}N^{2}}{k^{1/2}}(1+\frac{L^{8}}{N})(e^{-\kappa/{N^{\prime}}^{2}L^{8}}+2^{-\kappa})\Big{)} \tag{14.7}\] and if (14.4) holds, the second term on the right-hand side of (14.7) can be replaced by \[\frac{L^{42}N^{4}}{k^{1/2}}(\frac{L^{8}{N^{\prime}}^{2}}{N})^{\kappa}.\] In conclusion we obtain that \[J_{\mathrm{Spec}}(\sigma,L)\ll_{N^{\prime}}(kLN)^{o(1)}\Big{(}\frac{LN^{2}}{k}+\frac{L^{62}N^{2}}{k^{1/2}}(1+\frac{L^{8}}{N})(e^{-\kappa/{N^{\prime}}^{2}L^{8}}+2^{-\kappa})\Big{)}\] and if in addition (14.4) holds we have \[J_{\mathrm{Spec}}(\sigma,L)\ll_{N^{\prime}}(kLN)^{o(1)}\Big{(}\frac{LN^{2}}{k}+\frac{L^{42}N^{4}}{k^{1/2}}(\frac{L^{8}{N^{\prime}}^{2}}{N})^{\kappa}\Big{)}.\] Combining this with (14.1) we obtain that \[\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\sigma,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\\ \ll_{N^{\prime}}(kNL)^{o(1)}(\frac{k^{2}N^{2}}{L}+L^{60}k^{3/2}N^{2}(1+\frac{L^{8}}{N})(e^{-\kappa/{N^{\prime}}^{2}L^{8}}+2^{-\kappa})) \tag{14.8}\] and if (14.4) holds, we have \[\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\sigma,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\ll_{N^{\prime}}(kLN)^{o(1)}\Big{(}\frac{k^{2}N^{2}}{L}+L^{40}k^{3/2}N^{4}(\frac{L^{8}{N^{\prime}}^{2}}{N})^{\kappa}\Big{)}. 
\tag{14.9}\] #### 14.3.1. The case \(k\geq N\) We choose \(L\gg_{E}1\) such that \(\mathcal{L}\) is not empty and \[L^{8}=\frac{(kN)^{1/2}}{{N^{\prime}}^{2}\log^{2}(kN)}\] (which requires that \(N^{\prime}\ll_{E}\frac{(kN)^{1/4}}{\log(kN)}\)). Since \(k\geq(kN)^{1/2}\) we conclude that the second term on the right-hand side of (14.8) is negligible and that \[\sum_{\varphi\in\mathcal{B}^{\mathfrak{n}}_{\sigma,k}(N)}\frac{\big{|}\mathcal{P}(\varphi,\varphi^{\prime})\big{|}^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\ll_{N^{\prime}}(kN)^{o(1)}(kN)^{2-1/16}.\] #### 14.3.2. The case \(k\leq N\) We choose \(L\gg_{E}1\) such that \(\mathcal{L}\) is not empty and \[L^{8}\leq\frac{1}{2}\frac{N^{1/2}}{{N^{\prime}}^{2}}; \tag{14.10}\] this implies that (14.4) is satisfied and requires \(N^{\prime}\ll_{E}N^{1/4}\). The second term on the right-hand side of (14.9) is bounded by \[(kN)^{o(1)}(kN)^{2}L^{40}k^{-1/2}N^{2-\kappa/2}\leq(kN)^{o(1)}(kN)^{3/2}L^{40}\] since \(\kappa/2-2\geq 1/2\). Choosing \[L=(kN)^{1/82}\] (this is compatible with (14.10)) so that \((kN)^{2}/L=(kN)^{3/2}L^{40}\), we obtain for \(\sigma\in\mathcal{A}_{k}(N)\) the bound \[\sum_{\varphi\in\mathcal{B}_{\sigma,k}^{\mathfrak{n}}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\ll_{N^{\prime}}(kN)^{o(1)}(kN)^{2-1/82}. 
\tag{14.11}\] ### Proof of Theorem 1.3 If \(N>1\), by (13.4) we have \[\sum_{\pi\in\mathcal{A}_{k}^{\mathfrak{n}}(N)}\sum_{\varphi\in\mathcal{B}_{\pi,k}^{\mathfrak{n}}(N)}\frac{\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}}{\langle\varphi,\varphi\rangle\langle\varphi^{\prime},\varphi^{\prime}\rangle}\asymp_{N^{\prime}}(kN)^{2}\] (note that since \(\pi\in\mathcal{A}_{k}^{\mathfrak{n}}(N)\), \(\mathcal{B}_{\pi,k}^{\mathfrak{n}}(N)\) is a singleton) and from (14.11) (for \(k>32\) and \(N^{\prime}\ll_{E}(Nk)^{1/8}\)) we have \[\sum_{\pi\in\mathcal{A}_{k}^{\mathfrak{n}}(N)}\sum_{\varphi\in\mathcal{B}_{\pi,k}^{\mathfrak{n}}(N)}\delta_{\mathcal{P}(\varphi,\varphi^{\prime})\neq 0}\gg_{N^{\prime}}(kN)^{-1/82+o(1)}.\] Since \(|\mathcal{B}_{\pi,k}^{\mathfrak{n}}(N)|\leq 1\) and \(L(1/2,\pi_{E}\times\pi_{E}^{\prime})\) is proportional to \(\left|\mathcal{P}(\varphi,\varphi^{\prime})\right|^{2}\), we obtain Theorem 1.3 for \(N>1\). The case \(N=1\) follows by the same principle, using (1.13) for \(N=1\). ## Appendix A Explicit double coset decompositions In this appendix we record several consequences of the Bruhat-Iwahori-Cartan decompositions for the open compact groups \(G(\mathbb{Z}_{p})\) and \(G^{\prime}(\mathbb{Z}_{p})\) which are used in the evaluation of the local period integrals in §5. ### Decompositions for \(G^{\prime}(\mathbb{Z}_{p})\) In this section we discuss the case of \(G^{\prime}(\mathbb{Z}_{p})\). For this it will be useful to represent the elements of \(G^{\prime}\) by their \(2\times 2\) matrices in the basis \(\{e_{-1},e_{1}\}\); moreover, if \(p\) is split we will identify \(G^{\prime}(\mathbb{Q}_{p})\) with \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\). We denote by \[I_{p}^{\prime}\subset G^{\prime}(\mathbb{Z}_{p})\] the Iwahori subgroup corresponding to matrices which are upper-triangular modulo \(p\). The following lemma is a consequence of the Bruhat decomposition for \(G^{\prime}(\mathbb{F}_{p})\). 
**Lemma A.1**.: _We have the disjoint union decompositions_ (A.1) \[G^{\prime}(\mathbb{Z}_{p})=I_{p}^{\prime}\sqcup\bigsqcup_{\delta\in\mathbb{Z}_{p}/p\mathbb{Z}_{p}}\begin{pmatrix}\delta&1\\ 1&\end{pmatrix}I_{p}^{\prime}\ \ \text{if $p$ is split;}\] (A.2) \[G^{\prime}(\mathbb{Z}_{p})=I_{p}^{\prime}\sqcup\bigsqcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{E_{p}}/p\mathcal{O}_{E_{p}}\\ \delta+\overline{\delta}=0\end{subarray}}\begin{pmatrix}\delta&1\\ 1&\end{pmatrix}I_{p}^{\prime}\ \ \text{if $p$ is inert.}\] _In particular_ \[|G^{\prime}(\mathbb{Z}_{p})/I^{\prime}_{p}|=p+1=\mu(I^{\prime}_{p})^{-1}.\] #### A.1.1. Bruhat-Iwahori-Cartan decomposition for \(G^{\prime}\) We set \[J^{\prime}=\begin{pmatrix}&1\\ 1&\end{pmatrix},\ A_{n}=\begin{pmatrix}p^{n}&\\ &p^{-n}\end{pmatrix},\ n\geq 1.\] We have the following double coset decompositions: **Lemma A.2**.: _For \(p\) inert in \(E\), we have the disjoint unions_ \[I^{\prime}_{p}A_{n}I^{\prime}_{p}=\bigsqcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p^{2n}\mathcal{O}_{p}\\ \tau+\overline{\tau}=0\end{subarray}}\begin{pmatrix}1&\tau\\ &1\end{pmatrix}A_{n}I^{\prime}_{p},\] \[I^{\prime}_{p}J^{\prime}A_{n}I^{\prime}_{p}=\bigsqcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p^{2n}\mathcal{O}_{p}\\ \tau+\overline{\tau}=0\end{subarray}}\begin{pmatrix}1&\tau\\ &1\end{pmatrix}J^{\prime}A_{n}I^{\prime}_{p},\] \[I^{\prime}_{p}A_{n}J^{\prime}I^{\prime}_{p}=\bigsqcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p^{2n+1}\mathcal{O}_{p}\\ \tau+\overline{\tau}=0\end{subarray}}\begin{pmatrix}1&\tau\\ &1\end{pmatrix}A_{n}J^{\prime}I^{\prime}_{p},\] \[I^{\prime}_{p}J^{\prime}A_{n}J^{\prime}I^{\prime}_{p}=\bigsqcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p^{2n+1}\mathcal{O}_{p}\\ \tau+\overline{\tau}=0\end{subarray}}\begin{pmatrix}1&\tau\\ &1\end{pmatrix}J^{\prime}A_{n}J^{\prime}I^{\prime}_{p}.\] _For \(p\) split in \(E\), these decompositions hold upon replacing \(\mathcal{O}_{p}\) by \(\mathbb{Z}_{p}\) and removing the condition 
\(\tau+\overline{\tau}=0\)._ For the proof we refer to §A.2, where we discuss the more complicated case of \(G(\mathbb{Z}_{p})\). **Lemma A.3**.: _Notations as in the previous lemma, we have_ (A.3) \[G^{\prime}(\mathbb{Z}_{p})A_{n}G^{\prime}(\mathbb{Z}_{p})=I^{\prime}_{p}A_{n}I^{\prime}_{p}\sqcup I^{\prime}_{p}A_{n}J^{\prime}I^{\prime}_{p}\sqcup I^{\prime}_{p}J^{\prime}A_{n}I^{\prime}_{p}\sqcup I^{\prime}_{p}J^{\prime}A_{n}J^{\prime}I^{\prime}_{p}.\] Proof.: We discuss again only the case \(p\) inert. Taking inverses in the identity (A.2) we have (A.4) \[G^{\prime}(\mathbb{Z}_{p})=I^{\prime}_{p}\sqcup\bigsqcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta+\overline{\delta}=0\end{subarray}}I^{\prime}_{p}\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}.\] We thus have, by (A.2) and (A.4), that \(G^{\prime}(\mathbb{Z}_{p})A_{n}G^{\prime}(\mathbb{Z}_{p})=U^{\prime}_{1}\bigcup U^{\prime}_{2}\), where \[U^{\prime}_{1}:= \bigcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta+\overline{\delta}=0\end{subarray}}I^{\prime}_{p}A_{n}\begin{pmatrix}\delta&1\\ 1&\end{pmatrix}I^{\prime}_{p}\cup\bigcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta+\overline{\delta}=0\end{subarray}}I^{\prime}_{p}\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}A_{n}I^{\prime}_{p},\] \[U^{\prime}_{2}:= I^{\prime}_{p}A_{n}I^{\prime}_{p}\cup\bigcup_{\begin{subarray}{c}\delta_{1},\delta_{2}\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta_{1}+\overline{\delta}_{1}=0\\ \delta_{2}+\overline{\delta}_{2}=0\end{subarray}}I^{\prime}_{p}\begin{pmatrix}&1\\ 1&\overline{\delta}_{1}\end{pmatrix}A_{n}\begin{pmatrix}\delta_{2}&1\\ 1&\end{pmatrix}I^{\prime}_{p}.\] Suppose \(n\geq 1\). 
Then \(G^{\prime}(\mathbb{Z}_{p})A_{n}G^{\prime}(\mathbb{Z}_{p})=I^{\prime}_{p}A_{n}I^{\prime}_{p}\bigcup I^{\prime}_{p}A_{n}J^{\prime}I^{\prime}_{p}\bigcup U^{\prime}_{3}\), where \[U^{\prime}_{3}:= \bigcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta+\overline{\delta}=0\end{subarray}}I^{\prime}_{p}\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}A_{n}I^{\prime}_{p}\cup\bigcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta+\overline{\delta}=0\end{subarray}}I^{\prime}_{p}\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}A_{n}J^{\prime}I^{\prime}_{p}.\] Note that under the assumption \(\delta\in\mathcal{O}_{p}^{\times}\), we have \[\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}A_{n}=\begin{pmatrix}&p^{-n}\\ p^{n}&\overline{\delta}p^{-n}\end{pmatrix}=\begin{pmatrix}1&\overline{\delta}^{-1}\\ &1\end{pmatrix}A_{n}\begin{pmatrix}\delta^{-1}&\\ p^{2n}&\overline{\delta}\end{pmatrix}\in I_{p}^{\prime}A_{n}I_{p}^{\prime};\] and \(J^{\prime}A_{n}=\begin{pmatrix}&1\\ 1&\end{pmatrix}A_{n}\in I_{p}^{\prime}J^{\prime}A_{n}I_{p}^{\prime}.\) Hence, we obtain (A.5) \[\bigcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta+\overline{\delta}=0\end{subarray}}I_{p}^{\prime}\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}A_{n}I_{p}^{\prime}\subseteq I_{p}^{\prime}A_{n}I_{p}^{\prime}\bigcup I_{p}^{\prime}J^{\prime}A_{n}I_{p}^{\prime}.\] Note that, for \(\delta\in\mathcal{O}_{p}^{\times}\), a straightforward computation shows \[\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}A_{n}J^{\prime}=\begin{pmatrix}p^{-n}&\\ \overline{\delta}p^{-n}&p^{n}\end{pmatrix}=\begin{pmatrix}\delta^{-1}&1\\ &\overline{\delta}\end{pmatrix}A_{n}J^{\prime}\begin{pmatrix}1&p^{2n}\overline{\delta}^{-1}\\ &1\end{pmatrix}\in I_{p}^{\prime}A_{n}J^{\prime}I_{p}^{\prime}.\] Also, \(J^{\prime}A_{n}J^{\prime}=\begin{pmatrix}&1\\ 1&\end{pmatrix}A_{n}J^{\prime}\in I_{p}^{\prime}J^{\prime}A_{n}J^{\prime}I_{p}^
{\prime}.\) Hence, similarly to (A.5), we have (A.6) \[\bigcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta+\overline{\delta}=0\end{subarray}}I_{p}^{\prime}\begin{pmatrix}&1\\ 1&\overline{\delta}\end{pmatrix}A_{n}J^{\prime}I_{p}^{\prime}\subseteq I_{p}^{\prime}A_{n}J^{\prime}I_{p}^{\prime}\bigcup I_{p}^{\prime}J^{\prime}A_{n}J^{\prime}I_{p}^{\prime}.\] Substituting the relations (A.5) and (A.6) into the definition of \(U_{3}^{\prime}\) and the decomposition \(G^{\prime}(\mathbb{Z}_{p})A_{n}G^{\prime}(\mathbb{Z}_{p})=I_{p}^{\prime}A_{n}I_{p}^{\prime}\bigcup I_{p}^{\prime}A_{n}J^{\prime}I_{p}^{\prime}\bigcup U_{3}^{\prime}\) we then conclude (A.7) \[G^{\prime}(\mathbb{Z}_{p})A_{n}G^{\prime}(\mathbb{Z}_{p})=I_{p}^{\prime}A_{n}I_{p}^{\prime}\bigcup I_{p}^{\prime}A_{n}J^{\prime}I_{p}^{\prime}\bigcup I_{p}^{\prime}J^{\prime}A_{n}I_{p}^{\prime}\bigcup I_{p}^{\prime}J^{\prime}A_{n}J^{\prime}I_{p}^{\prime}.\] Then (A.3) follows from the fact that the union in (A.7) is actually disjoint. ### Decompositions for \(G(\mathbb{Z}_{p})\) Let \(p\) be a prime which is inert in \(E\) (for instance \(p=N\)); let \(E_{p}=E\otimes_{\mathbb{Q}}\mathbb{Q}_{p}\) be the corresponding local field, \(\overline{\bullet}:z\mapsto\overline{z}\) the complex conjugation on \(E_{p}\), \(\mathcal{O}_{p}\) its ring of integers, and \(p\) a uniformizer. We denote by \(\nu:E_{p}\mapsto\mathbb{Z}\) the normalized valuation. 
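The index computations \(|G^{\prime}(\mathbb{Z}_{p})/I^{\prime}_{p}|=p+1\) and \(|G(\mathbb{Z}_{p})/I_{p}|=p^{3}+1\) of Lemmas A.1 and A.4 amount to counting solutions of \(\delta+\overline{\delta}=0\), respectively \(\tau+\overline{\tau}+\delta\overline{\delta}=0\), in the residue field \(\mathbb{F}_{p^{2}}\). A brute-force check in the small case \(p=3\) (a sketch: \(\mathbb{F}_{9}\) is modeled as \(\mathbb{F}_{3}[i]\) with \(i^{2}=-1\), so that the conjugation \(x\mapsto x^{3}\) acts by \(a+bi\mapsto a-bi\)) reproduces the expected counts:

```python
# Brute-force check of the coset counts in Lemmas A.1 and A.4 for p = 3.
# F_9 is modeled as F_3[i] with i^2 = -1 (x^2 + 1 is irreducible mod 3);
# an element a + b*i is stored as the pair (a, b), and the conjugation
# x -> x^3 (Frobenius) acts by (a, b) -> (a, -b), so that
#   trace(a + bi) = 2a   and   norm(a + bi) = a^2 + b^2   (mod 3).
p = 3
Fp2 = [(a, b) for a in range(p) for b in range(p)]

def trace(x):
    return (2 * x[0]) % p

def norm(x):
    return (x[0] ** 2 + x[1] ** 2) % p

# Lemma A.1: the nontrivial cosets of (A.2) are indexed by the delta with
# delta + conj(delta) = 0, i.e. trace(delta) = 0.
n1 = sum(1 for d in Fp2 if trace(d) == 0)
assert 1 + n1 == p + 1                       # |G'(Z_p)/I'_p| = p + 1

# Lemma A.4: the nontrivial cosets of (A.11) are indexed by (tau, delta)
# with tau + conj(tau) + delta*conj(delta) = 0.
n2 = sum(1 for t in Fp2 for d in Fp2 if (trace(t) + norm(d)) % p == 0)
assert 1 + n2 == p**3 + 1                    # |G(Z_p)/I_p| = p^3 + 1
assert p**3 + 1 == (p + 1) * (p**2 - p + 1)
print("coset counts verified for p =", p)
```

The counts reflect the general argument: the trace is a surjection \(\mathbb{F}_{p^{2}}\to\mathbb{F}_{p}\) with fibers of size \(p\), so the first equation has \(p\) solutions and the second has \(p^{2}\cdot p=p^{3}\).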
We recall that, by definition of the unitary group \(G(\mathbb{Q}_{p})\), we have for \(g=\begin{pmatrix}g_{11}&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{pmatrix}\in G(\mathbb{Q}_{p})\), the relations (A.8) \[\begin{pmatrix}g_{11}&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{pmatrix}\begin{pmatrix}\overline{g}_{33}&\overline{g }_{23}&\overline{g}_{13}\\ \overline{g}_{32}&\overline{g}_{22}&\overline{g}_{12}\\ \overline{g}_{31}&\overline{g}_{21}&\overline{g}_{11}\end{pmatrix}=I_{3}\] and (A.9) \[\begin{pmatrix}\overline{g}_{33}&\overline{g}_{23}&\overline{g}_{13}\\ \overline{g}_{32}&\overline{g}_{22}&\overline{g}_{12}\\ \overline{g}_{31}&\overline{g}_{21}&\overline{g}_{11}\end{pmatrix}\begin{pmatrix} g_{11}&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{pmatrix}=I_{3}.\] We denote by \(I_{p}\) the Iwahori subgroup (A.10) \[I_{p}:=G(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathcal{O}_{p}&\mathcal{O}_{p}& \mathcal{O}_{p}\\ p\mathcal{O}_{p}&\mathcal{O}_{p}&\mathcal{O}_{p}\\ p\mathcal{O}_{p}&p\mathcal{O}_{p}&\mathcal{O}_{p}\end{pmatrix}.\] In particular if \(p=N\), \(K_{p}(N)=I_{p}\). Like in Lemma A.1 the following is a consequence of the Bruhat decomposition for \(G(\mathbb{F}_{p})\): \[G(\mathbb{F}_{p})=P(\mathbb{F}_{p})\sqcup P(\mathbb{F}_{p})JN(\mathbb{F}_{p})\] where \(P\subset G\) is the Borel subgroup with unipotent radical \(N\), so \[N(\mathbb{F}_{p})=\{\begin{pmatrix}1&\delta&\tau\\ &1&-\overline{\delta}\\ &&1\end{pmatrix},\ \delta,\tau\in\mathbb{F}_{p^{2}}\}.\] **Lemma A.4**.: _Let \(p\) be a prime inert in \(E\). 
We have a disjoint coset decomposition,_ (A.11) \[G(\mathbb{Z}_{p})=I_{p}\bigsqcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}\tau&\delta&1\\ -\overline{\delta}&1&\\ 1&&\end{pmatrix}I_{p}.\] _In particular_ \[|G(\mathbb{Z}_{p})/I_{p}|=p^{3}+1=(p+1)(p^{2}-p+1)=\mu(I_{p})^{-1}.\] For \(n\in\mathbb{Z}\) we set (A.12) \[A_{n}=\begin{pmatrix}p^{n}&&\\ &1&\\ &&p^{-n}\end{pmatrix}.\] **Lemma A.5**.: _Assume that \(p\) is inert in \(E\). For \(n\geq 1\), we have the disjoint decompositions_ \[I_{p}A_{n}I_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p^{n}\mathcal{O}_{p}\\ \tau\in\mathcal{O}_{p}/p^{2n}\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&\delta&\tau\\ &1&-\overline{\delta}\\ &&1\end{pmatrix}A_{n}I_{p},\] \[I_{p}JA_{n}I_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in p\mathcal{O}_{p}/p^{n}\mathcal{O}_{p}\\ \tau\in p\mathcal{O}_{p}/p^{2n}\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&&\\ -\overline{\delta}&1&\\ \tau&\delta&1\end{pmatrix}JA_{n}I_{p},\] \[I_{p}A_{n}JI_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p^{n+1}\mathcal{O}_{p}\\ \tau\in\mathcal{O}_{p}/p^{2n+1}\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&\delta&\tau\\ &1&-\overline{\delta}\\ &&1\end{pmatrix}A_{n}JI_{p},\] \[I_{p}JA_{n}JI_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in p\mathcal{O}_{p}/p^{n+1}\mathcal{O}_{p}\\ \tau\in p\mathcal{O}_{p}/p^{2n+1}\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&&\\ -\overline{\delta}&1&\\ \tau&\delta&1\end{pmatrix}JA_{n}JI_{p}.\] Proof.: Let \[X=\begin{pmatrix}g_{11}&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{pmatrix}\in I_{p}\] and let \[Y=\begin{pmatrix}1&&\\ p^{n}\overline{g}_{32}&1&\\
p^{2n}\overline{g}_{33}&-p^{n}g_{32}&1\end{pmatrix}\in I_{p}.\] A priori we have \(g_{33}\in\mathcal{O}_{p}^{\times}\) but we may assume as well that \(g_{33}=1\). Using (A.8) one has \[g_{12}-g_{13}g_{32}=-\overline{g}_{23}(g_{22}-g_{23}g_{32})\text{ and }(g_{22}-g_{23}g_{32})(\overline{g}_{22}-\overline{g}_{23}\overline{g}_{32})=1.\] We have \[XA_{n}Y=\begin{pmatrix}1&g_{12}-g_{13}g_{32}&g_{13}\\ &g_{22}-g_{23}g_{32}&g_{23}\\ &&1\end{pmatrix}A_{n}\\ =\begin{pmatrix}1&-\overline{g}_{23}&g_{13}\\ &1&g_{23}\\ &&1\end{pmatrix}A_{n}\begin{pmatrix}1&&\\ &g_{22}-g_{23}g_{32}&\\ &&1\end{pmatrix}.\] Since \(\operatorname{diag}(1,g_{22}-g_{23}g_{32},1)\in I_{p}\), we then obtain \[I_{p}A_{n}I_{p}=N(\mathbb{Z}_{p})A_{n}I_{p}=\bigcup_{\begin{subarray}{c}\delta,\tau\in\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}n(\delta,\tau)A_{n}I_{p},\quad n(\delta,\tau)=\begin{pmatrix}1&\delta&\tau\\ &1&-\overline{\delta}\\ &&1\end{pmatrix}.\] Since \(A_{n}^{-1}n(\delta,\tau)A_{n}=n(p^{-n}\delta,p^{-2n}\tau)\), we then have (A.13) \[I_{p}A_{n}I_{p}=\bigcup_{\begin{subarray}{c}\delta,\tau\in\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}n(\delta,\tau)A_{n}I_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p^{n}\mathcal{O}_{p}\\ \tau\in\mathcal{O}_{p}/p^{2n}\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&\delta&\tau\\ &1&-\overline{\delta}\\ &&1\end{pmatrix}A_{n}I_{p}.\] Similarly, there are some \(\delta_{1},\tau_{1}\in p\mathcal{O}_{p}\) such that \[\begin{pmatrix}1&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{pmatrix}JA_{n}J\begin{pmatrix}1&-p^{n}g_{12}&p^{2n}\overline{g}_{13}\\ &1&p^{n}\overline{g}_{12}\\ &&1\end{pmatrix}=Jn(\delta_{1},\tau_{1})JA_{n}^{-1}.\] Note \(A_{n}^{-1}=JA_{n}J.\) Similar to (A.13), one has \[I_{p}JA_{n}JI_{p}=\bigcup_{\begin{subarray}{c}\delta,\tau\in p\mathcal{O}_{p}\\
\tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}Jn(\delta,\tau)JA_{n}^{-1}I_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in p\mathcal{O}_{p}/p^{n+1}\mathcal{O}_{p}\\ \tau\in p\mathcal{O}_{p}/p^{2n+1}\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&&\\ -\overline{\delta}&1&\\ \tau&\delta&1\end{pmatrix}JA_{n}JI_{p}.\] By a straightforward computation there are some \(\delta_{2},\tau_{2}\in\mathcal{O}_{p}\) such that \[\begin{pmatrix}g_{11}&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&1\end{pmatrix}A_{n}J\begin{pmatrix}1&-p^{n}g_{32}&p^{2n}\overline{g}_{31}\\ &1&p^{n}\overline{g}_{32}\\ &&1\end{pmatrix}=n(\delta_{2},\tau_{2})A_{n}J.\] Note \(A_{n}^{-1}n(\delta_{2},\tau_{2})A_{n}=n(p^{-n}\delta_{2},p^{-2n}\tau_{2}).\) Therefore, we have \[I_{p}A_{n}JI_{p}=\bigcup_{\begin{subarray}{c}\delta,\tau\in\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}n(\delta,\tau)A_{n}JI_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in\mathcal{O}_{p}/p^{n+1}\mathcal{O}_{p}\\ \tau\in\mathcal{O}_{p}/p^{2n+1}\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&\delta&\tau\\ &1&-\overline{\delta}\\ &&1\end{pmatrix}A_{n}JI_{p}.\] Likewise, there are some \(\delta_{3},\tau_{3}\in p\mathcal{O}_{p}\) such that \[\begin{pmatrix}1&g_{12}&g_{13}\\ g_{21}&g_{22}&g_{23}\\ g_{31}&g_{32}&g_{33}\end{pmatrix}JA_{n}\begin{pmatrix}1&&\\ p^{n}\overline{g}_{12}&1&\\ p^{2n}\overline{g}_{13}&-p^{n}g_{12}&1\end{pmatrix}=Jn(\delta_{3},\tau_{3})A_{n}.\] Again, by \(A_{n}^{-1}n(\delta_{3},\tau_{3})A_{n}=n(p^{-n}\delta_{3},p^{-2n}\tau_{3}),\) one has \[I_{p}JA_{n}I_{p}=\bigcup_{\begin{subarray}{c}\delta,\tau\in p\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}Jn(\delta,\tau)A_{n}I_{p}=\bigsqcup_{\begin{subarray}{c}\delta\in p\mathcal{O}_{p}/p^{n}\mathcal{O}_{p}\\ \tau\in p\mathcal{O}_{p}/p^{2n}\mathcal{O}_{p}\\
\tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}1&&\\ -\overline{\delta}&1&\\ \tau&\delta&1\end{pmatrix}JA_{n}I_{p}.\] Lemma A.5 follows. **Lemma A.6**.: _Let \(p\) be inert in \(E\). We have_ (A.14) \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=I_{p}A_{n}I_{p}\bigsqcup I_{p}A_{n}JI_{p}\bigsqcup I_{p}JA_{n}I_{p}\bigsqcup I_{p}JA_{n}JI_{p}.\] _Moreover,_ \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=G(\mathbb{Z}_{p})A_{-n}G(\mathbb{Z}_{p}).\] Proof.: Appealing to Lemma A.4 one has the decomposition (A.15) \[G(\mathbb{Z}_{p})=I_{p}\bigsqcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}\begin{pmatrix}\tau&\delta&1\\ -\overline{\delta}&1&\\ 1&&\end{pmatrix}I_{p}.\] Taking the inverse of the above identity we then obtain (A.16) \[G(\mathbb{Z}_{p})=I_{p}\bigsqcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}I_{p}\begin{pmatrix}&&1\\ &1&\overline{\delta}\\ 1&-\delta&\overline{\tau}\end{pmatrix}.\] Since \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=G(\mathbb{Z}_{p})JA_{n}JG(\mathbb{Z}_{p})=G(\mathbb{Z}_{p})A_{-n}G(\mathbb{Z}_{p}),\] we may suppose \(n\geq 1\) without loss of generality.
Therefore, with a straightforward computation we have \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=I_{p}A_{n}I_{p}\bigcup I_{p}A_{n}JI_{p}\bigcup U_{3},\] where \[U_{3}:=\bigcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}I_{p}\begin{pmatrix}&&1\\ &1&\overline{\delta}\\ 1&-\delta&\tau\end{pmatrix}A_{n}I_{p}\bigcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \delta\in\mathcal{O}_{p}/p\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}I_{p}\begin{pmatrix}&&1\\ &1&\overline{\delta}\\ 1&-\delta&\tau\end{pmatrix}A_{n}JI_{p}.\] Let \(\delta\in\mathcal{O}_{p}^{\times}.\) Then for \(\tau\in\mathcal{O}_{p}\) such that \(\tau+\overline{\tau}+\delta\overline{\delta}=0,\) one has \(\tau\in\mathcal{O}_{p}^{\times}.\) Then \[\begin{pmatrix}&&1\\ &1&\overline{\delta}\\ 1&-\delta&\tau\end{pmatrix}A_{n}=\begin{pmatrix}-1&-\delta\overline{\tau}^{-1}&-\tau^{-1}\\ &1&-\overline{\delta}\tau^{-1}\\ &&-1\end{pmatrix}A_{n}\begin{pmatrix}-\overline{\tau}^{-1}&&\\ -p^{n}\overline{\delta}\tau^{-1}&-\overline{\tau}\tau^{-1}&\\ -p^{2n}&p^{n}\delta&-\tau\end{pmatrix}.\] Denote by \(\operatorname{LHS}^{(1)}_{\delta,\tau}\) the left hand side of the above identity.
Note that \[\left(\begin{matrix}-1&-\delta\overline{\tau}^{-1}&-\tau^{-1}\\ &1&-\overline{\delta}\tau^{-1}\\ &&&-1\end{matrix}\right)\in I_{p},\quad\left(\begin{matrix}-\overline{\tau}^{-1} \\ -p^{n}\overline{\delta}\tau^{-1}&-\overline{\tau}\tau^{-1}&\\ -p^{2n}&p^{n}\delta&-\tau\end{matrix}\right)\in I_{p}.\] Then we have \(\operatorname{LHS}^{(1)}_{\delta,\tau}\in I_{p}A_{n}I_{p}.\) Suppose, on the other hand, \(\delta=0.\) Then \(\tau+\overline{\tau}=0.\) When \(\tau\in\mathcal{O}_{p}^{\times}\), we then have \[\begin{pmatrix}&1\\ &1\\ 1&\tau\end{pmatrix}A_{n}=\begin{pmatrix}1&\tau^{-1}\\ &1&\\ &&1\end{pmatrix}A_{n}\begin{pmatrix}\overline{\tau}^{-1}&\\ &1\\ p^{2n}&\tau\end{pmatrix}\in I_{p}A_{n}I_{p};\] when \(\tau=0,\) we have \(JA_{n}\in I_{p}JA_{n}I_{p}.\) Combining these discussions, we obtain (A.17) \[\bigcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/N\mathcal{O}_{p}\\ \delta\in\mathcal{O}_{p}/N\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}I_{p}\begin{pmatrix} &1\\ &1&\overline{\delta}\\ 1&-\delta&\tau\end{pmatrix}A_{n}I_{p}\subseteq I_{p}A_{n}I_{p}\bigcup I_{p}JA _{n}I_{p}.\] Let \(\delta\in\mathcal{O}_{p}^{\times}.\) Then for \(\tau\in\mathcal{O}_{p}\) such that \(\tau+\overline{\tau}+\delta\overline{\delta}=0,\) one has \(\tau\in\mathcal{O}_{p}^{\times}.\) Then \[\begin{pmatrix}&1&\overline{\delta}\\ &1&-\delta&\tau\end{pmatrix}A_{n}J=\begin{pmatrix}\overline{\tau}^{-1}&- \delta\overline{\tau}^{-1}&1\\ &-1&-\overline{\delta}\\ &&&\tau\end{pmatrix}A_{n}\begin{pmatrix}&1\\ &-\overline{\tau}\tau^{-1}&-p^{n}\overline{\delta}\tau^{-1}\\ 1&-p^{n}\delta\tau^{-1}&p^{2n}\tau^{-1}\end{pmatrix}.\] Denote by \(\operatorname{LHS}^{(2)}_{\delta,\tau}\) the left hand side of the above identity. 
Note that \[\begin{pmatrix}\overline{\tau}^{-1}&-\delta\overline{\tau}^{-1}&1\\ &1&\overline{\delta}\\ &&\tau\end{pmatrix}\in I_{p},\quad\begin{pmatrix}&&1\\ &-\overline{\tau}\tau^{-1}&-p^{n}\overline{\delta}\tau^{-1}\\ 1&-p^{n}\delta\tau^{-1}&p^{2n}\tau^{-1}\end{pmatrix}\in JI_{p}.\] Then we have \(\operatorname{LHS}^{(2)}_{\delta,\tau}\in I_{p}A_{n}JI_{p}.\) Suppose, on the other hand, \(\delta=0.\) Then \(\tau+\overline{\tau}=0.\) When \(\tau\in\mathcal{O}_{p}^{\times}\), we then have \[\begin{pmatrix}&&1\\ &1&\\ 1&&\tau\end{pmatrix}A_{n}J=\begin{pmatrix}\overline{\tau}^{-1}&&1\\ &1&\\ &&\tau\end{pmatrix}A_{n}J\begin{pmatrix}1&&p^{2n}\tau^{-1}\\ &1&\\ &&1\end{pmatrix}\in I_{p}A_{n}JI_{p};\] when \(\tau=0\), we have \(JA_{n}J\in I_{p}JA_{n}JI_{p}.\) Combining these discussions, we obtain (A.18) \[\bigcup_{\begin{subarray}{c}\tau\in\mathcal{O}_{p}/N\mathcal{O}_{p}\\ \delta\in\mathcal{O}_{p}/N\mathcal{O}_{p}\\ \tau+\overline{\tau}+\delta\overline{\delta}=0\end{subarray}}I_{p}\begin{pmatrix}&&1\\ &1&\overline{\delta}\\ 1&-\delta&\tau\end{pmatrix}A_{n}JI_{p}\subseteq I_{p}A_{n}JI_{p}\bigcup I_{p}JA_{n}JI_{p}.\] It then follows from (A.17), (A.18) and the definition of \(U_{3}\) that \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})\subseteq I_{p}A_{n}I_{p}\bigcup I_{p}A_{n}JI_{p}\bigcup I_{p}JA_{n}I_{p}\bigcup I_{p}JA_{n}JI_{p}\subseteq G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p}),\] namely, \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=I_{p}A_{n}I_{p}\bigcup I_{p}A_{n}JI_{p}\bigcup I_{p}JA_{n}I_{p}\bigcup I_{p}JA_{n}JI_{p}.\] Moreover, by Lemma A.5, the union is in fact disjoint. As a consequence, we obtain (A.14). For some inert primes not dividing \(N\), we will also need another closely related double coset decomposition. **Lemma A.7**.: _Let \(p\) be an inert prime.
We have, for \(n\geq 1\),_ \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=A_{n}G(\mathbb{Z}_{p})\sqcup\bigcup_{\begin{subarray}{c}\delta\ (\operatorname{mod}p^{n})\\ \tau\ (\operatorname{mod}p^{2n}),\ \tau+\overline{\tau}=0\end{subarray}}\gamma(\delta,\tau)A_{n}G(\mathbb{Z}_{p})\] _with_ \[\gamma(\delta,\tau)=\begin{pmatrix}\tau&\delta&1\\ -\overline{\delta}&1&\\ 1&&\end{pmatrix}.\] Proof.: We have \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=G(\mathbb{Z}_{p})A_{-n}G(\mathbb{Z}_{p})=A_{-n}.A_{n}G(\mathbb{Z}_{p})A_{-n}.G(\mathbb{Z}_{p}).\] Let \(K_{2,1}(p^{n})\) be the intersection \[A_{n}.G(\mathbb{Z}_{p}).A_{-n}\cap G(\mathbb{Z}_{p}).\] We have \[K_{2,1}(p^{n})=G(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathcal{O}_{p}&\mathcal{O}_{p}&\mathcal{O}_{p}\\ p^{n}\mathcal{O}_{p}&\mathcal{O}_{p}&\mathcal{O}_{p}\\ p^{2n}\mathcal{O}_{p}^{0}&p^{n}\mathcal{O}_{p}&\mathcal{O}_{p}\end{pmatrix}\] where \[\mathcal{O}_{p}^{0}=\{z\in\mathcal{O}_{p},\ \mathrm{tr}(z)=0\}.\] We have the following decomposition (A.19) \[G(\mathbb{Z}_{p})=K_{2,1}(p^{n})\sqcup\bigsqcup_{\begin{subarray}{c}\delta\ (\operatorname{mod}p^{n})\\ \tau\ (\operatorname{mod}p^{2n}),\ \tau+\overline{\tau}=0\end{subarray}}\gamma(\delta,\tau)K_{2,1}(p^{n}).\] Let \(N,\overline{N},A\subset G(\mathbb{Q}_{p})\) be respectively the upper triangular nilpotent subgroup, the lower triangular nilpotent subgroup and the diagonal torus.
From the Iwahori decomposition we have \[K_{2,1}(p^{n}) =(K_{2,1}(p^{n})\cap\overline{N}).(K_{2,1}(p^{n})\cap A).(K_{2,1}(p^{n})\cap N)\] \[=(K_{2,1}(p^{n})\cap\overline{N}).(G(\mathbb{Z}_{p})\cap A).(G(\mathbb{Z}_{p})\cap N).\] From (A.19) we get \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=K_{2,1}(p^{n})A_{n}G(\mathbb{Z}_{p})\cup\bigcup_{\begin{subarray}{c}\delta\ (\operatorname{mod}p^{n})\\ \tau\ (\operatorname{mod}p^{2n}),\ \tau+\overline{\tau}=0\end{subarray}}\gamma(\delta,\tau)K_{2,1}(p^{n})A_{n}G(\mathbb{Z}_{p}).\] Since \[K_{2,1}(p^{n})A_{n}=A_{n}.A_{-n}K_{2,1}(p^{n})A_{n}\subset A_{n}G(\mathbb{Z}_{p}),\] we have \[G(\mathbb{Z}_{p})A_{n}G(\mathbb{Z}_{p})=A_{n}G(\mathbb{Z}_{p})\cup\bigcup_{\begin{subarray}{c}\delta\ (\operatorname{mod}p^{n})\\ \tau\ (\operatorname{mod}p^{2n}),\ \tau+\overline{\tau}=0\end{subarray}}\gamma(\delta,\tau)A_{n}G(\mathbb{Z}_{p}),\] and the disjointness in (A.19) implies the disjointness of this union. ## Acknowledgements Significant parts of this work were carried out while Ph.M. was visiting the Department of Mathematics at Caltech, in particular while on sabbatical in the spring semester of 2022. Ph.M. gratefully acknowledges its hospitality and the wonderful working conditions provided in the newly renovated Linde Hall. We are very grateful to Paul Nelson for encouragement and helpful comments and especially for detecting a serious error that we had copied from [26]. We would also like to thank Laurent Clozel, Dipendra Prasad, Yiannis Sakellaridis and Xinwen Zhu for their interest and comments. Ph. M. was partially supported by the SNF grant 200021_197045. D. R. was supported by a grant from the Simons Foundation (award Number: 523557).
2309.16876
Evaluating the Efficiency of Software-only Techniques to Detect SEU and SET in Microprocessors
This paper presents a detailed evaluation of the efficiency of software-only techniques to mitigate SEU and SET in microprocessors. A set of well-known rules is presented and implemented automatically to transform an unprotected program into a hardened one. SEU and SET are injected in all sensitive areas of a MIPS-based microprocessor architecture. The efficiency of each rule and a combination of them are tested. Experimental results show the inefficiency of the control-flow techniques in detecting the majority of SEU and SET faults. Three effects of the non-detected faults are explained. The conclusions can lead designers in developing more efficient techniques to detect these types of faults.
Jose Rodrigo Azambuja, Fernando Sousa, Lucas Rosa, Fernanda Lima Kastensmidt
2023-09-28T22:20:23Z
http://arxiv.org/abs/2309.16876v1
# Evaluating the Efficiency of Software-only Techniques to Detect SEU and SET in Microprocessors ###### Abstract This paper presents a detailed evaluation of the efficiency of software-only techniques to mitigate SEU and SET in microprocessors. A set of well-known rules is presented and implemented automatically to transform an unprotected program into a hardened one. SEU and SET are injected in all sensitive areas of a MIPS-based microprocessor architecture. The efficiency of each rule and a combination of them are tested. Experimental results show the inefficiency of the control-flow techniques in detecting the majority of SEU and SET faults. Three effects of the non-detected faults are explained. The conclusions can lead designers in developing more efficient techniques to detect these types of faults. Control flow signatures, Fault tolerance, SEU, SET, Soft errors, Software techniques. ## I Introduction The last-decade advances in the semiconductor industry have increased microprocessor performance exponentially. Most of this performance gain is due to smaller dimensions and low-voltage transistors. However, the same technology that made all this progress possible also lowered transistor reliability by reducing the threshold voltage and tightening the noise margins [1, 2], thus making transistors more susceptible to faults caused by energized particles [3]. As a consequence, highly reliable applications demand fault-tolerant techniques capable of recovering the system from a fault with minimum implementation and performance overhead. One of the major concerns is known as _soft error_, which is defined as a transient-effect fault provoked by the interaction of energized particles with the PN junction in the silicon. This upset temporarily charges or discharges nodes of the circuit, generating transient voltage pulses that can be interpreted as internal signals, thus provoking an erroneous result [4].
The most typical errors concerning soft errors are _single event upsets_ (SEU), which are bit-flips in the sequential logic, and _single event transients_ (SET), which are transient voltage pulses in the combinatorial logic that can be registered by the sequential logic. In areas where computer-based dependable systems are being introduced, the cost and development time are often major concerns. In such areas, highly efficient systems called systems-on-chip (SoC) are being used. SoCs are often designed using intellectual property (IP) cores and commercial off-the-shelf (COTS) microprocessors, which are only guaranteed to function correctly under normal environmental conditions, while their behavior in the presence of soft errors is not guaranteed. Therefore, it is up to designers to harden their systems against soft errors. Fault tolerance by means of software techniques has been receiving a lot of attention in those systems, because such techniques do not need any customization of the hardware. Software-implemented hardware fault tolerance (SIHFT) techniques exploit information, instruction and time redundancy to detect and even correct errors during the program flow. All these techniques use additional instructions in the code area to either recompute instructions or store and check suitable information in hardware structures. In past years, tools have been implemented to automatically inject such instructions into C or assembly code, significantly reducing the costs. Nevertheless, the drawbacks of software-only techniques are the impossibility of achieving complete fault coverage [5], the usually high memory overhead and the degradation in performance. Memory usage increases due to the additional instructions and often to memory duplication, while the performance degradation comes from the execution of redundant instructions [6, 7, 8].
In this paper, the authors implemented a set of software-only techniques to harden a matrix multiplication algorithm in order to point out the main vulnerable areas that are not mitigated by these techniques, more specifically the ones affecting the control flow. Results can guide designers to improve the efficiency and detection rates of software-based soft-error mitigation techniques. The paper is organized as follows: Section 2 presents related works in the area of software-only techniques. Section 3 presents the implemented software rules and a tool that automates their injection. Section 4 presents the fault injection campaign and results. Section 5 concludes the paper and presents future work. ## II The Proposed Case-Study: Hardened Program Methodology A set of transformation rules has been proposed in the literature. In [9], eight rules are proposed, divided into two groups: (1) aiming at data-flow errors, such as data instruction replication [9, 10], and (2) aiming at control-flow errors, such as Structural Integrity Checking [11], Control-Flow Checking by Software Signatures (CFCSS) [12], Control Flow Checking using Assertions (CCA) [13] and Enhanced Control Flow Checking using Assertions (ECCA) [14]. The proposed techniques can achieve full data-flow fault tolerance concerning SEUs, being able to detect every fault affecting the data memory that would lead the system to a wrong result. On the other hand, the control-flow techniques have not yet achieved full fault tolerance. Most control-flow techniques divide the program into basic blocks by starting them at jump destination addresses and at memory positions following branch instructions. A basic block ends at every jump instruction address and at the last instruction of the code. ECCA extends CCA and is capable of detecting all the inter-BB control-flow errors, but is able to detect neither intra-BB errors nor faults that cause an incorrect decision on a conditional branch.
CFCSS is not able to detect errors if multiple BBs share the same BB destination address. In [15], several code transformation rules are presented, from variable and operation duplication to consistency checks. Transformation rules have been proposed in the literature aiming to detect both data and control-flow errors. In [9], eight rules are proposed, while [16] used thirteen rules to harden a program. In this paper, we address six rules, divided into those targeting faults affecting the datapath and those targeting faults affecting the controlpath. ### _Errors in the Datapath_ This group of rules aims at detecting faults affecting the data along the whole path between memory elements, for example, the path between a variable stored in the memory, through the ALU, to the register bank. Every fault affecting these paths, such as faults affecting the register bank or the memory, should be detected by the following rules: * Rule #1: every variable used in the program must be duplicated; * Rule #2: every write operation performed on a variable must be performed on its replica; * Rule #3: before each read on a variable, its value and its replica's value must be checked for consistency. Figure 1 illustrates the application of these rules to a program with 3 instructions. Instructions 1, 3, 7 and 8 are inserted due to rule #3, while instructions 3, 6 and 10 are inserted due to rules #1 and #2. Combined, these techniques duplicate the used data, doubling the number of registers and memory addresses needed; therefore, the microprocessor must have spare registers and the memory must have spare positions. This issue can also be solved by setting the compiler options to restrict the program to a given number of registers and to restrict the data section. ### _Errors in the Controlpath_ This second group of rules aims at protecting the program's flow.
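The data-flow rules #1-#3 above can be illustrated with a minimal simulation. The sketch below is written in Python for readability rather than in the binary-level form in which the transformation is actually applied, and all names are illustrative: a hardened variable keeps a replica (rule #1), mirrors every write (rule #2) and checks consistency before every read (rule #3).

```python
class DupVar:
    """A variable hardened according to rules #1-#3."""

    def __init__(self, value):
        self.value = value        # original copy
        self.replica = value      # rule #1: duplicated copy

    def write(self, value):
        self.value = value        # rule #2: every write is performed on
        self.replica = value      # the variable and on its replica

    def read(self):
        # rule #3: consistency check before each read
        if self.value != self.replica:
            raise RuntimeError("upset detected: variable and replica differ")
        return self.value

a, b = DupVar(6), DupVar(7)
c = DupVar(a.read() * b.read())   # checked reads feed a duplicated write
print(c.read())                   # -> 42

a.value ^= 1                      # simulate an SEU flipping one copy
try:
    a.read()
except RuntimeError as err:
    print(err)                    # the next read detects the mismatch
```

Note that a flip hitting both copies identically, or occurring between the check and the actual use of the value, escapes this scheme; this is one reason why software-only techniques cannot reach complete fault coverage, as noted in Section I.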
Faults affecting the controlpath usually cause erroneous jumps, such as an incorrect jump address or a bit-flip in a non-jump instruction's opcode that turns it into a jump instruction. To detect these errors, three rules are used in this paper: * Rule #4: every branch instruction is replicated on both destination addresses; * Rule #5: a unique identifier is associated to each basic block in the code; * Rule #6: at the beginning of each basic block, a global variable is assigned with its unique identifier, and at the end of the basic block the unique identifier is checked against the global variable. Branch instructions are more difficult to duplicate than non-branch instructions, since they have two possible paths, depending on whether the branch condition is true or false. When the condition is false, the branch can be simply replicated and added right after the original branch, because the injected instruction will be executed after the branch. When the branch is taken, the duplicated branch instruction must be inverted and inserted at the branch-taken address. Figure 2 illustrates rule #4 applied to a simple program. For the branch if equal (BEQ) instruction, instructions 2, 4 and 5 must be inserted to replicate it, where instruction 5 is the inverted branch (branch if not equal). Instruction 4 is necessary to avoid false error alerts. The role of rules #5 and #6 is to detect every erroneous jump in the code. They achieve this by inserting a unique identifier at the beginning of each basic block and checking its value at its end. Figure 3 illustrates a program divided into two basic blocks (instructions 2-4 and 5-8).

Figure 1: datapath rules

Figure 2: rule #4

Figure 3: rules #5 and #6

Instructions 2 and 5 are inserted to set the signature, while instructions 4 and 8 are
## III Fault Injection Experimental Results The chosen case-study microprocessor is a five-stage pipeline microprocessor based on the MIPS architecture, but with a reduced instruction set. The miniMIPS microprocessor is described in [17]. In order to evaluate both the effectiveness and the feasibility of the presented approaches, an application based on 6x6 matrix multiplication algorithm is used. A tool called PosCompiler was developed to automate the software transformation. The tool receives as input the program's binary code and therefore is compiler and language independent and is capable of implementing the presented rules, divided in 3 groups. The first group, called variables, implements rules #1, #2 and #3; group 2, called inverted branches, implements rule #4 and, finally, group 3, also known as signatures, implements rules #5 and #6. The user is allowed to combine the techniques in a graphical interface. We generated through PosCompiler four hardened programs, implementing: (1) signatures, (2) variables, (3) inverted branches and (4) signatures, variables and inverted branches combined. Table 1 shows the original and modified program's execution time, cod size and data size. First, thousands of faults were injected in the non-protected microprocessor, one by program execution. At the end of each execution, the results stored in memory were compared with the expected correct values. If the results matched, the fault was discarded. The amount of faults masked by the program is application related and it should not interfere with the analysis. When 100% signal coverage was achieved and at least 3 faults per signal were detected we normalized the faults, varying from 3 to 5 faults per signal, and those faults build the test case list. In order to achieve a detailed fault analysis, we sorted the faults by their source and effect on the system. 
We defined four groups of fault sources to inject SEU and SET types of faults: datapath, controlpath, register bank and ALU. We assumed the program and data memories are protected by Error Detection and Correction (EDAC) and therefore faults in the memories were not injected. The fault effects were classified into 2 different groups: program data and program flow. To sort the faults among these groups, we continuously compared the Program Counter (PC) of a golden microprocessor with the PC of the faulty microprocessor. In case of a mismatch, the injected fault was classified as a flow effect. If the PC matched the golden one's, the fault was classified as a data effect. When transforming the program, new instructions were added and therefore the time at which the faults were injected changed. Since the injection time is not proportional to the total execution time, we mapped each fault by locating the instruction where the fault was injected (via its new PC) and the pipeline stage where the fault manifested itself. Around 1% of the total number of faults could not be mapped and were replaced by new faults. Results show that the technique called variables (2) presented the highest detection rate among the three. It was capable of detecting all the faults injected in the register bank and the faults that caused errors in the data flow. The ALU was not completely protected because it also has control-flow signals, and some of these signals affected the program's flow. With 110% code size overhead, this technique could detect 77% overall. Technique (3) was able to complement technique (1) by detecting faults on branch instructions, mainly in the ALU, where most of the branch errors were found. By using techniques (2) and (3), with 135% code size overhead, these techniques combined could detect 79% overall.
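For reference, the signature mechanism of rules #5 and #6 (PosCompiler's group 3) can be modeled in a few lines of Python; block identifiers and the simulated erroneous jump below are illustrative, and the real instrumentation is of course injected into the MIPS binary:

```python
GLOBAL_SIG = None   # global variable holding the current block's identifier
ERRORS = []         # control-flow errors flagged by the signature checks

def enter_block(block_id):
    """Rule #6, block entry: assign the block's unique id (rule #5)."""
    global GLOBAL_SIG
    GLOBAL_SIG = block_id

def exit_block(block_id):
    """Rule #6, block exit: the global variable must still hold this id."""
    if GLOBAL_SIG != block_id:
        ERRORS.append(block_id)

# Normal flow through two basic blocks: no error is flagged.
enter_block(0xA1); exit_block(0xA1)
enter_block(0xB2); exit_block(0xB2)
print(ERRORS)                     # -> []

# Erroneous inter-block jump: execution enters block 0xB2 and is then
# diverted into the middle of block 0xA1, whose exit check fires.
enter_block(0xB2)
exit_block(0xA1)
print(ERRORS)                     # -> [161] (0xA1: stale signature caught)
```

An erroneous jump that stays inside one basic block, or that lands exactly on a block's entry point (where the signature is immediately overwritten), never trips these checks; these are the blind spots analyzed in Section IV.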
The signatures technique (1), on the other hand, was responsible for detecting the faults affecting the program's flow, but it could not reach a high detection rate. When combining all of them, the fault detection coverage reaches 80% with a code size increase of 192% and an execution time increase of 118%. However, 20% of the faults remain undetected. \begin{table} \begin{tabular}{l|c|c|c|c|c} Source & Original & (1) & (2) & (3) & (4) \\ \hline Exec. Time (ms) & 1.24 & 1.40 & 2.55 & 1.30 & 2.71 \\ \hline Code Size (byte) & 2060 & 3500 & 4340 & 2580 & 6012 \\ \hline Data Size (byte) & 524 & 532 & 1048 & 524 & 1056 \\ \hline \end{tabular} \end{table} Table 1: Original and hardened programs’ characteristics \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} & Source & \# of & Data & \multicolumn{4}{c|}{Hardened program versions (\%)} & Flow & \multicolumn{4}{c}{Hardened program versions (\%)} \\ \cline{3-13} & & Signals & & (1) & (2) & (3) & (4) & & (1) & (2) & (3) & (4) \\ \hline \multirow{5}{*}{SET} & Reg. Bank & 2 & 9 & - & 100 & - & 100 & 1 & - & 100 & - & 100 \\ \cline{2-13} & ALU & 10 & 22 & - & 100 & - & 100 & 21 & - & 52.3 & 28.5 & 80.8 \\ \cline{2-13} & Controlpath & 29 & 90 & - & 100 & - & 100 & 46 & 2.17 & 23.9 & 2.17 & 23.9 \\ \cline{2-13} & Datapath & 8 & 37 & - & 100 & - & 100 & 3 & - & 100 & - & 100 \\ \cline{2-13} & Total & 49 & 158 & - & 100 & - & 100 & 71 & 1.4 & 36.7 & 9.8 & 45.1 \\ \hline \multirow{5}{*}{SEU} & Reg. Bank & 36 & 18 & - & 100 & - & 100 & 18 & - & 100 & 5.5 & 100 \\ \cline{2-13} & ALU & 2 & 2 & - & 100 & - & 100 & 0 & - & - & - & - \\ \cline{2-13} & Controlpath & 126 & 63 & - & 100 & - & 100 & 56 & 1.8 & 17.8 & 1.7 & 19.6 \\ \cline{2-13} & Datapath & 20 & 19 & - & 100 & - & 100 & 1 & - & - & 100 & 100 \\ \cline{2-13} & Total & 184 & 102 & - & 100 & - & 100 & 75 & 1.3 & 37.3 & 4 & 40 \\ \hline \end{tabular} \end{table} Table 3: Detection rates per fault source for (1) signatures, (2) variables, (3) inverted branches and (4) signatures, variables and inverted branches combined. ## IV Analyzing the undetected faults Technique (2) presented a high detection rate for data effects and for faults injected in the datapath and register bank, while (3) protected the branch instructions and complemented (1). The combined techniques (4) also presented an interesting result, namely that these techniques can be combined in order to achieve a higher detection rate. On the other hand, technique (1) presented a detection rate below expectations, and this result is analyzed in this section. The signatures have three major drawbacks, which caused the low detection rate. The first one is incorrect jumps to the same basic block (intra-block jumps), which cannot be detected since the unique identifier is an invariant and therefore does not depend on the instructions. The application used in this paper spends 83% of its time in the same basic block, which occupies 20% of the program data, and therefore increases the occurrence of this drawback, illustrated in Fig. 4 (1). The second drawback is incorrect jumps to the beginning of a basic block, which also cannot be detected, because the global variable containing the unique identifier is updated at the beginning of each basic block. The occurrence of such errors is proportional to the number of basic blocks per instruction, which is higher in control-flow applications. 
Also, some microprocessors, such as the one used in this paper, have a fault detection mechanism that resets them in some cases, for example when an inexistent instruction is fetched. This drawback is illustrated in Fig. 4 (2). Finally, the last drawback is incorrect jumps to unused memory positions, which are filled with NOP instructions. To correct this drawback, a watchdog with a timer could be used. This drawback is illustrated in Fig. 4 (3). Table 3 shows the distribution of undetected faults among the three drawbacks. ## V Conclusions In this paper we presented a set of rules based on software-only techniques to detect soft errors in microprocessors. A set of faults was built and a fault injection campaign was carried out on the implemented techniques. Results showed that the variables and inverted branches techniques presented a high detection rate, up to 77%, while the signatures showed results below expectations. The signatures were then analyzed and three drawbacks were found that explain the undetected faults. We are currently working on improving the detection rates and decreasing the impact of the drawbacks on the signatures technique.
2309.09956
Small k-pairable states
A $k$-pairable $n$-qubit state is a resource state that allows Local Operations and Classical Communication (LOCC) protocols to generate EPR-pairs among any $k$-disjoint pairs of the $n$ qubits. Bravyi et al. introduced a family of $k$-pairable $n$-qubit states, where $n$ grows exponentially with $k$. Our primary contribution is to establish the existence of 'small' pairable quantum states. Specifically, we present a family of $k$-pairable $n$-qubit graph states, where $n$ is polynomial in $k$, namely $n=O(k^3\ln^3k)$. Our construction relies on probabilistic methods. Furthermore, we provide an upper bound on the pairability of any arbitrary quantum state based on the support of any local unitary transformation that has the shared state as a fixed point. This bound implies that the pairability of a graph state is at most half of the minimum degree up to local complementation of the underlying graph, i.e., $k(|G \rangle)\le \lceil \delta_{loc}(G)/2\rceil$. We also investigate the related combinatorial problem of $k$-vertex-minor-universality: a graph $G$ is $k$-vertex-minor-universal if any graph on any $k$ of its vertices is a vertex-minor of $G$. When a graph is $2k$-vertex-minor-universal, the corresponding graph state is $k$-pairable. More precisely, one can create not only EPR-pairs but also any stabilizer state on any $2k$ qubits through local operations and classical communication. We establish the existence of $k$-vertex-minor-universal graphs of order $O(k^4 \ln k)$. Finally, we explore a natural extension of pairability in the presence of errors or malicious parties and show that vertex-minor-universality ensures a robust form of pairability.
Nathan Claudet, Mehdi Mhalla, Simon Perdrix
2023-09-18T17:26:27Z
http://arxiv.org/abs/2309.09956v2
# Small \(k\)-pairable states ###### Abstract A \(k\)-pairable \(n\)-qubit state is a resource state that allows Local Operations and Classical Communication (LOCC) protocols to generate EPR-pairs among any \(k\)-disjoint pairs of the \(n\) qubits. Bravyi et al. introduced a family of \(k\)-pairable \(n\)-qubit states, where \(n\) grows exponentially with \(k\). Our primary contribution is to establish the existence of 'small' pairable quantum states. Specifically, we present a family of \(k\)-pairable \(n\)-qubit graph states, where \(n\) is polynomial in \(k\), namely \(n=O(k^{3}\ln^{3}k)\). Our construction relies on probabilistic methods. Furthermore, we provide an upper bound on the pairability of any arbitrary quantum state based on the support of any local unitary transformation that has the shared state as a fixed point. This bound implies that the pairability of a graph state is at most half of the minimum degree up to local complementation of the underlying graph, i.e., \(k(|G\rangle)\leqslant\lceil\delta_{loc}(G)/2\rceil\). We also investigate the related combinatorial problem of \(k\)-vertex-minor universality: a graph \(G\) is \(k\)-vertex-minor universal if any graph on any \(k\) of its vertices is a vertex-minor of \(G\). When a graph is \(2k\)-vertex-minor universal, the corresponding graph state is \(k\)-pairable. More precisely, one can create not only EPR-pairs but also any stabilizer state on any \(2k\) qubits through local operations and classical communication. We establish the existence of \(k\)-vertex-minor universal graphs of order \(O(k^{4}\ln k)\). Finally, we explore a natural extension of pairability in the presence of errors or malicious parties and show that vertex-minor universality ensures a robust form of pairability. ## 1 Introduction In the realm of quantum communication networks, we often rely on classical communication along with shared entanglement. 
Since classical communication cannot create entanglement, we must rely on pre-existing entangled states to perform non-trivial operations. For example, an EPR-pair \(\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\) shared between two parties allows the quantum teleportation of a qubit [3]. In this context, a highly pertinent problem is to explore which resource states enable a group of \(n\) parties, equipped with the capability of employing Local Operations and Classical Communication (LOCC), to create entangled EPR pairs among any \(k\) pairs of qubits. It is only recently that Bravyi et al. addressed this fundamental inquiry and provided both upper and lower bounds for what they call the \(k\)-pairability of quantum states, in terms of the number of parties and the number of qubits per party needed for a quantum state to be \(k\)-pairable [5]. However, before their work, numerous variations of this problem had surfaced in the literature, some in the context of entanglement routing [24, 15, 23], and some about problems that can be described as variants of \(k\)-pairability, for example to prepare resource states by clustering and merging [22]; starting from a particular network [13, 7]; creating one EPR pair hidden from other parties [19]; studying the complexity and providing algorithms for generating EPR pairs within a predetermined set [10, 9]; or taking into account the cost of distributing a graph state in terms of EPR pairs [21, 14] (see [5] for a more detailed review). Formally, an \(n\)-party state \(|\psi\rangle\) is said to be \(k\)_-pairable_ if, for every \(k\) disjoint pairs of parties \(\{a_{1},b_{1}\},\ldots,\{a_{k},b_{k}\}\), there exists a LOCC protocol that starts with \(|\psi\rangle\) and ends up with a state where each of those \(k\) pairs of parties shares an EPR-pair. Bravyi et al. studied \(n\)-party states in the case where each party holds \(m\) qubits, with \(m\) ranging from \(1\) to \(\ln(n)\). 
In the case where each party holds at least \(10\) qubits, they showed the existence of \(k\)-pairable states where \(k\) is of the order of \(n/\mathrm{polylog}(n)\), which is nearly optimal when \(m\) is constant. They also showed that if one allows a logarithmic number of qubits per party, then there exist \(k\)-pairable states with \(k=n/2\). In the present paper, we focus on the scenario that is both the most natural and challenging: when each party possesses precisely one qubit, i.e. \(m=1\). Bravyi et al. proved some results in this particular case. First, using Reed-Muller codes, they were able to construct, for any \(k\), a \(k\)-pairable state of size exponential in \(k\), namely \(n=2^{3k}\), leaving the existence of a \(k\)-pairable state of size \(n=poly(k)\) as an open problem. They found a \(2\)-pairable graph state of size \(10\) and proved that there exists no stabilizer state on less than \(10\) qubits that is \(2\)-pairable using LOCC protocols based on Pauli measurements. Our contributions rely on the graph state formalism and the ability to characterize properties of quantum states using tools from graph theory. In particular, the pairability of a graph state is related to the standard notion of vertex-minors. A graph \(H\) is a vertex-minor of \(G\) if one can transform \(G\) into \(H\) by means of local complementations1 and vertex deletions. If \(H\) is a vertex-minor of a graph \(G\) then the graph state \(|H\rangle\) can be obtained from \(|G\rangle\) using only single-qubit Clifford operations, single-qubit Pauli measurements and classical communications (we call these protocols CLOCC2 for Clifford LOCC). Dahlberg, Helsen, and Wehner proved that the converse is also true when \(H\) has no isolated vertices [11]. In [10], they proved that it is NP-complete to decide whether a graph state can be transformed into a set of EPR-pairs on specific qubits using CLOCC protocols. 
In [9] they showed that it is also NP-complete to decide whether a graph state can be transformed into another one using CLOCC protocols. Footnote 1: Local complementation according to a vertex \(u\) consists in toggling the edges in the neighbourhood of \(u\). Footnote 2: In [10] this fragment of operations is called LC + LPM + CC, and in [5] this corresponds to “LOCC protocols based on Pauli measurements”. We prove here the existence of an infinite family of \(k\)-pairable \(n\)-qubit graph states, where the number of qubits \(n\) is polynomial in \(k\) (specifically \(n=O(k^{3}\ln^{3}k)\)), while the construction from Bravyi et al. results in \(k\)-pairable states with an exponential number of qubits. For this purpose, we first point out that a graph state \(|G\rangle\) is \(k\)-pairable if any matching of size \(k\) (graph consisting of a set of \(k\) disjoint edges) is a vertex-minor of \(G\). We then use probabilistic methods to prove the existence of such \(k\)-pairable graph states with a number of qubits polynomial in \(k\). We also provide an upper bound on the \(k\)-pairability of a graph state \(|G\rangle\), namely \(k\) is at most half of the local minimum degree \(\delta_{\mathrm{loc}}(G)\). The local minimum degree [6] refers to the minimum degree of a vertex of \(G\) subject to any sequence of local complementation. The local minimum degree is related to the size of the smallest local set in a graph [17], which we put to use here. Note, however, that the decision problem associated with the computation of the local minimum degree of a graph has been proven to be NP-complete and hard to approximate [20]. This bound is not directly comparable to the bound proposed by Bravyi et al. [5], which, roughly speaking, implies that \(k=O(n\frac{\ln\ln n}{\ln n})\). 
The new bound is significantly better in certain cases; for instance, it directly implies that graph states whose underlying graph has a vertex of constant degree have constant pairability (as opposed to almost linear). However, it is worth noting that there are graphs with a local minimum degree linear in their number of vertices [20], although no explicit construction for such graphs is known to our knowledge. In such cases, the bound provided by [5] can be better than the one based on the local minimum degree by a logarithmic factor. In the process of proving this bound on the pairability of a graph state, we prove a bound on the pairability of arbitrary quantum states, which depends on the support of local unitaries that the state is a fixed point of. From a combinatorial perspective, it is natural to consider graphs that contain any graph of a given order as a vertex-minor, rather than solely focusing on matchings. This leads us to introduce the notion of vertex-minor universality. We say that a graph \(G\) is _\(k\)-vertex-minor universal_ if any graph on any \(k\) of its vertices is a vertex-minor of \(G\). If a graph \(G\) is \(k\)-vertex-minor universal then one can create any stabilizer state on any \(k\) qubits of the graph state \(|G\rangle\) by CLOCC protocols. As a consequence, if \(G\) is \(2k\)-vertex-minor universal then \(|G\rangle\) is \(k\)-pairable. We prove the existence of an infinite family of \(k\)-vertex-minor universal graphs, where the number of vertices \(n\) is polynomial in \(k\), namely \(n=O(k^{4}\ln k)\), using probabilistic methods. Moreover, a counting argument implies that \(n\) is at least quadratic in \(k\), and we show that \(k\) is upper-bounded by the local minimum degree. Furthermore, we present minimal examples of graphs that are 2-, 3-, and 4-vertex-minor universal. In the context of quantum networks, it is important to study robustness to errors or malicious parties. 
For this purpose, we introduce a robust version of pairability. We say that a \(k\)-pairable state \(|\psi\rangle\) on a set of qubits is \(m\)-robust if for any set of size at most \(m\) of malicious partners, the trusted partners can create \(k\) EPR pairs among any \(2k\) of them with an LOCC protocol. We prove that vertex-minor universality ensures robust pairability. In Section 2, we define the pairability of a quantum state through the graph state formalism. In Section 3, we prove upper bounds on the pairability of a graph state, using in particular the local minimum degree. In Section 4, we prove the existence of polynomial-size \(k\)-pairable graph states. In Section 5, we introduce the notion of \(k\)-vertex-minor universality and prove the existence of polynomial-size \(k\)-vertex-minor universal graphs. We also provide a table with a few examples of \(k\)-pairable states and \(k\)-vertex-minor universal graphs for small values of \(k\). Finally, in Section 6, we define robust pairability and show how vertex-minor universality implies robust pairability. ## 2 Quantum state pairability We review in this section basic definitions of \(k\)-pairability and exemplify this concept with existing and new illustrating instances. We extensively use the graph state formalism [16], which is a standard family of quantum states that can be represented using simple undirected graphs. Given a graph \(G=(V,E)\), the corresponding graph state \(|G\rangle\) is the \(|V|\)-qubit state: \[|G\rangle=\frac{1}{\sqrt{2}^{|V|}}\sum_{x\in 2^{V}}(-1)^{|G[x]|}|x\rangle\] where \(|G[x]|\) is the size (number of edges) of the subgraph induced by \(x\)3. Footnote 3: With a slight abuse of notation we identify a subset (say \(x=\{u_{2},u_{4}\}\)) of the set of qubits \(V=\{u_{1},\ldots u_{5}\}\) with its characteristic binary word (\(x=01010\)). 
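As a sanity check of the definitions above, one can build \(|G\rangle\) numerically and verify the stabilizer property \(X_{u}Z_{N(u)}|G\rangle=|G\rangle\). The sketch below is illustrative only (it assumes NumPy and uses the 5-cycle \(C_{5}\); it is not part of the paper):

```python
import numpy as np

def graph_state(n, edges):
    """|G> = 2^(-n/2) * sum_x (-1)^{|G[x]|} |x>: CZ on every edge of |+>^n."""
    psi = np.full(2 ** n, 2 ** (-n / 2))
    for a, b in edges:
        for x in range(2 ** n):
            if (x >> a) & 1 and (x >> b) & 1:
                psi[x] = -psi[x]
    return psi

def apply_XZ(psi, n, u, neigh):
    """Apply the Pauli operator X_u Z_{N(u)} to a state vector."""
    nmask = sum(1 << v for v in neigh)
    out = np.empty_like(psi)
    for y in range(2 ** n):
        out[y] = (-1) ** bin(y & nmask).count("1") * psi[y ^ (1 << u)]
    return out

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle C5
n = 5
psi = graph_state(n, edges)
neigh = {u: [v for a, b in edges for v in (a, b)
             if u in (a, b) and v != u] for u in range(n)}
for u in range(n):            # |G> is fixed by every X_u Z_{N(u)}
    assert np.allclose(apply_XZ(psi, n, u, neigh[u]), psi)
```

The dense-vector construction is exponential in \(n\) and only meant for very small examples.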
A graph state \(|G\rangle\) can be prepared as follows: initialize every qubit in \(|+\rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}}\) then apply for each edge of the graph a \(CZ:|ab\rangle\mapsto(-1)^{ab}|ab\rangle\) gate on the corresponding pair of qubits. Notice that the graph state \(|G\rangle\) is the unique quantum state (up to a global phase) which is, for every vertex \(u\in V\), a fixed point of the Pauli operator \(X_{u}Z_{N(u)}\)4. Footnote 4: It consists in applying \(X:|a\rangle\mapsto|1-a\rangle\) on \(u\) and \(Z:|a\rangle\mapsto(-1)^{a}|a\rangle\) on each of its neighbours in \(G\). A matching \(\pi\) of size \(k\) and order \(n\) is a graph on \(n\) vertices made of \(k\leqslant\frac{n}{2}\) pairwise non-adjacent edges. The associated graph state \(|\pi\rangle\) is nothing but \(k\) pairs of maximally entangled qubits together with \(n-2k\) fully separable qubits. Notice that there are \(\frac{n!2^{-k}}{k!(n-2k)!}\) matchings of size \(k\) and order \(n\). A protocol that transforms an \(n\)-party state \(|\psi\rangle\) into a state \(|\varphi\rangle\) is LOCC (resp. CLOCC) if it uses only local operations (resp. local Clifford unitaries and Pauli measurements) and classical communications between the parties. In this paper we consider only protocols where each party is made of a single qubit. **Definition 1**.: _An \(n\)-qubit quantum state \(|\psi\rangle\) is \(k\)-pairable if for every matching \(\pi\) of size \(k\) and order \(n\) there is an LOCC protocol transforming \(|\psi\rangle\) into \(|\pi\rangle\)._ **Remark 1**.: _Here we only consider the case where each party is made of a single qubit, please refer to [5] for a more general definition. 
Notice that in [5], \(k\)-pairability is defined as the ability to produce any \(k\) EPR pairs, which is equivalent to Definition 1 since given a matching \(\pi\) of size \(k\), \(|\pi\rangle\) is, up to local Clifford unitaries, nothing but \(k\) EPR pairs together with \(n-2k\) qubits in the \(|0\rangle\)-state._ For instance the GHZ state \(\frac{|0^{n}\rangle+|1^{n}\rangle}{\sqrt{2}}\) is \(1\)-pairable. More generally, a graph state \(|G\rangle\) is \(1\)-pairable if and only if \(G\) is connected. Bravyi et al. showed that the graph state - they call the "wheel" - on \(10\) qubits is \(2\)-pairable (Fig. 1.Left). This example is somehow _minimal_ in the sense that there is no graph state on at most \(9\) qubits that is \(2\)-pairable using a CLOCC protocol. We introduce a new example of a \(2\)-pairable state on 10 qubits, namely the graph state associated with the Petersen graph (Fig. 1.Right). Figure 1: (Left) “Wheel” graph. (Right) Petersen graph. We also provide an example of a \(3\)-pairable graph state associated with the 29-Paley graph5 (on 29 vertices), which improves on the \(3\)-pairable state on 32 qubits introduced in [5] (see Table 1). Footnote 5: Given a prime \(q=1\) mod 4, the \(q\)-Paley graph is a graph whose vertices are elements of the finite field \(GF(q)\), such that two vertices share an edge if their difference is a square in \(GF(q)\). Since pairability is defined by means of LOCC protocols, two quantum states that are equal up to local unitaries (LU-equivalent for short) have the same pairability. A useful graph transformation to describe equivalent graph states by local Clifford unitaries is local complementation. Given a graph \(G\), a local complementation according to a given vertex \(u\) consists in complementing the subgraph induced by the neighbourhood of \(u\), leading to the graph \(G\star u=G\Delta K_{N(u)}\) where \(\Delta\) is the symmetric difference and \(K_{A}\) is the complete graph on the vertices of \(A\). 
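Local complementation itself is easy to implement on edge sets. A minimal illustrative sketch (not part of the paper), which also checks that \(G\star u\star u=G\):

```python
from itertools import combinations

def local_complement(edges, u):
    """G * u: toggle every edge between two neighbours of u (G Δ K_{N(u)})."""
    E = {frozenset(e) for e in edges}
    N = {v for e in E if u in e for v in e if v != u}
    return E ^ {frozenset(p) for p in combinations(sorted(N), 2)}

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
g = local_complement(c5, 1)        # N(1) = {0, 2}: the edge (0,2) appears
assert frozenset((0, 2)) in g
# local complementation is an involution: (G * u) * u = G
assert local_complement(g, 1) == {frozenset(e) for e in c5}
```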
It has been shown in [12] that if two graphs are equivalent up to local complementation then the corresponding graph states are LU-equivalent and hence have the same pairability. Notice that the graph states associated with the "wheel" and the Petersen graph are not LU-equivalent. Notice also that the Petersen graph has already been pointed out as the smallest example for various properties related to local complementations [4, 16]. ## 3 Upper bounds It is known that the pairability of quantum states is sublinear in their number of qubits, roughly speaking \(k=O(n\frac{\ln\ln n}{\ln n})\)[5]. In this section, we provide an alternative upper bound on the pairability of arbitrary quantum states, depending on the support of local unitaries that the state is a fixed point of. When applied to graph states, this upper bound can be expressed in terms of graph parameters: local minimum degree (Corollary 1) and vertex cover number (Corollary 2). Recall that the support of a local unitary \(U=\bigotimes_{v\in V}U_{v}\) is the set of qubits on which \(U\) acts non-trivially: \(\mathit{supp}(U)=\{v\in V\ |\ U_{v}\not\propto I\}\)6. Footnote 6: We note \(U\propto V\) when \(U=e^{i\phi}V\) for some angle \(\phi\), and \(U\not\propto V\) otherwise. **Proposition 1**.: _If \(|\psi\rangle\) is \(k\)-pairable and \(U\) is a local unitary such that \(U\neq I\) and \(U|\psi\rangle=|\psi\rangle\), then \(2k\leqslant|\mathit{supp}(U)|\)._ Proof.: By contradiction, assume there exists a \(k\)-pairable state \(|\psi\rangle\) of a register \(V\) of qubits and a local unitary \(U\) s.t. \(U|\psi\rangle=|\psi\rangle\) and \(2k>|\mathit{supp}(U)|>0\). We consider a matching \(\pi\) such that every vertex in the support of \(U\) is covered by an edge and at least one edge has one vertex inside and one vertex outside \(\mathit{supp}(U)\), i.e. 
\(\pi=(V,E)\) with \(E=\{(u_{i},v_{i})\}_{i=1\ldots k}\), \(u_{1}\in\mathit{supp}(U)\) and \(v_{1}\notin\mathit{supp}(U)\), and \(\mathit{supp}(U)\subseteq C\) where \(C:=\{u_{i}\}_{i=1\ldots k}\cup\{v_{i}\}_{i=1\ldots k}\) is the set of vertices covered by the matching. By hypothesis, the state \(|\pi\rangle\) can be obtained from the state \(|\psi\rangle\) by an LOCC protocol (with non-zero probability). Any LOCC protocol can be described by a completely positive trace-preserving map, with separable Kraus operators. So there exists a product Kraus operator \(M=M_{1}\otimes M_{2}\otimes\cdots\otimes M_{n}\) such that \(M|\psi\rangle=c_{\pi}|\pi\rangle\) for some \(c_{\pi}\neq 0\). Notice that \(M_{u}\) is invertible when \(u\in C\). Indeed, let \(|\varphi_{0}\rangle\) and \(|\varphi_{1}\rangle\) be two independent eigenvectors of \(M_{u}\). We have \(|\psi\rangle=|\varphi_{0}\rangle_{u}\otimes|\psi_{0}\rangle_{V\setminus u}+|\varphi_{1}\rangle_{u}\otimes|\psi_{1}\rangle_{V\setminus u}\). So \(M|\psi\rangle=M_{u}|\varphi_{0}\rangle_{u}\otimes M_{V\setminus u}|\psi_{0}\rangle_{V\setminus u}+M_{u}|\varphi_{1}\rangle_{u}\otimes M_{V\setminus u}|\psi_{1}\rangle_{V\setminus u}=c_{\pi}|\pi\rangle\). Since \(u\) is entangled in \(|\pi\rangle\) and \(c_{\pi}\neq 0\), we have \(M_{u}|\varphi_{0}\rangle_{u}\neq 0\) and \(M_{u}|\varphi_{1}\rangle_{u}\neq 0\), so \(M_{u}\) is invertible. As a consequence, \(M_{C}:=(\bigotimes_{u\in C}M_{u})\otimes I_{V\setminus C}\) is invertible. We also consider \(M_{\overline{C}}:=M_{C}^{-1}M\), which commutes with \(U\) by construction. We show that \(|\pi\rangle\) is a fixed point of \(M_{C}UM_{C}^{-1}\). Indeed, \(c_{\pi}|\pi\rangle=MU|\psi\rangle=M_{C}M_{\overline{C}}U|\psi\rangle=M_{C}UM_{\overline{C}}|\psi\rangle=M_{C}UM_{C}^{-1}M_{C}M_{\overline{C}}|\psi\rangle=M_{C}UM_{C}^{-1}M|\psi\rangle=c_{\pi}M_{C}UM_{C}^{-1}|\pi\rangle\). 
On the pair of qubits \(u_{1}\) \((\in supp(U))\) and \(v_{1}\) \((\notin supp(U))\), this implies that \(|\Phi\rangle_{u_{1},v_{1}}\) is an eigenstate of \(M_{u_{1}}U_{u_{1}}M_{u_{1}}^{-1}\otimes I_{v_{1}}\), so there exists \(\lambda\in\mathbb{C}\) s.t. \(M_{u_{1}}U_{u_{1}}M_{u_{1}}^{-1}|0\rangle=\lambda|0\rangle\) and \(M_{u_{1}}U_{u_{1}}M_{u_{1}}^{-1}|1\rangle=\lambda|1\rangle\), as a consequence \(M_{u_{1}}U_{u_{1}}M_{u_{1}}^{-1}=\lambda I\), so \(U_{u_{1}}=\lambda I\). Since \(U_{u_{1}}\) is unitary, we have \(U_{u_{1}}\propto I\), which contradicts the hypothesis \(u_{1}\in supp(U)\). In the particular case of graph states, we can express this bound in terms of an already-studied graph parameter on the corresponding graph, namely the local minimum degree, which is the minimum degree up to local complementation: \(\delta_{loc}(G):=\min_{G\equiv_{LC}H}\delta(H)\) where \(G\equiv_{LC}H\) means that \(G\) can be transformed into \(H\) using a series of local complementations. Indeed, the minimum support of a Pauli operator stabilizing a graph state \(|G\rangle\) is equal to \(\delta_{loc}(G)+1\)[17]. It leads to the following upper bound: **Corollary 1**.: _A graph state \(|G\rangle\) is not \((\lceil\delta_{loc}(G)/2\rceil+1)\)-pairable._ Proof.: Since for any \(u\in V\), \(|G\rangle\) is a fixed point of \(X_{u}Z_{N(u)}\), \(|G\rangle\) is also a fixed point of \(\prod_{u\in D}X_{u}Z_{N(u)}=\pm X_{D}Z_{\mathrm{Odd}(D)}\) for any \(D\subseteq V\), where \(\mathrm{Odd}(D):=\{v\in V\ |\ |N(v)\cap D|=1\bmod 2\}\). The support \(D\cup\mathrm{Odd}(D)\) of \(X_{D}Z_{\mathrm{Odd}(D)}\) is called a local set. The minimum size of a non-empty local set is known to be equal to \(\delta_{loc}(G)+1\)[17]. Thus, according to Proposition 1, \(|G\rangle\) is not \((\lceil\delta_{\mathrm{loc}}(G)/2\rceil+1)\)-pairable. This bound is tight: for instance, the two graphs of Fig. 1 have local minimum degree 3 and the corresponding graph states are both 2-pairable. 
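On small graphs, the local sets \(D\cup\mathrm{Odd}(D)\) from the proof above can be enumerated by brute force to recover \(\delta_{loc}\). An illustrative sketch (not from the paper) for the Petersen graph, using a standard labelling with outer cycle 0-4 and inner pentagram 5-9:

```python
from itertools import combinations

petersen = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer cycle
            (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),   # inner pentagram
            (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]   # spokes
n = 10
N = {v: set() for v in range(n)}
for a, b in petersen:
    N[a].add(b); N[b].add(a)

def odd(D):
    """Odd(D): vertices with an odd number of neighbours in D."""
    return {v for v in range(n) if len(N[v] & D) % 2 == 1}

# minimum size of a non-empty local set D u Odd(D), over all 2^10 - 1 sets
min_local_set = min(len(set(D) | odd(set(D)))
                    for r in range(1, n + 1)
                    for D in combinations(range(n), r))
assert min_local_set == 4    # = delta_loc + 1, so delta_loc(Petersen) = 3
```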
We believe that Corollary 1 generally provides a useful upper bound on the pairability of a graph state, as it is challenging to find a constructive family of graphs with large local minimum degree. The family of hypercubes, for example, has a logarithmic local minimum degree [17]. Paley graphs provide, to our knowledge, the best7 known constructive family, with graphs of order quadratic in their local minimum degree (see Table 1 for small Paley graphs). Footnote 7: ‘Best’ here means with the largest ratio of local minimum degree to order of the graph. However, it has been proved, using non-constructive probabilistic methods, that for any (large enough) \(n\) there exists a graph of order \(n\) and local minimum degree greater than \(0.189n\)[20], hence the upper bound provided by Corollary 1 is not tight for such graphs, as it is known that the pairability of a quantum state is sublinear in its number of qubits (\(k=O(n\frac{\ln\ln n}{\ln n})\)) [5]. Finally, the local minimum degree is related to the vertex cover number8\(\tau(G)\). Namely, the pairability of a graph state is at most a quarter of its vertex cover number up to logarithmic terms: Footnote 8: I.e. the size of the smallest set \(S\) such that if \(u\) and \(v\) share an edge, then \(u\in S\) or \(v\in S\). **Corollary 2**.: _A graph state \(|G\rangle\) is not \(\left(\left\lceil\frac{\tau(G)+\log_{2}(\tau(G))}{4}\right\rceil+1\right)\)-pairable._ Proof.: It is known that \(2\delta_{\mathrm{loc}}(G)\leqslant\tau(G)+\log_{2}(\tau(G))+1\)[6], and that we can remove the constant term when \(\tau(G)\neq 1\). When \(\tau(G)=1\), we have \(\left\lceil\frac{\tau(G)+\log_{2}(\tau(G))}{4}\right\rceil=\left\lceil\frac{1+0}{4}\right\rceil=1=\left\lceil\frac{1+0+1}{4}\right\rceil=\left\lceil\frac{\tau(G)+\log_{2}(\tau(G))+1}{4}\right\rceil\). 
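Corollary 2 can be checked numerically on the Petersen graph: an illustrative brute-force computation (not from the paper) of its vertex cover number \(\tau=6\), together with the inequality from [6] and the resulting bound:

```python
from itertools import combinations
from math import ceil, log2

petersen = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
            (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),
            (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]

def vertex_cover_number(n, edges):
    """Size of a smallest set of vertices meeting every edge (brute force)."""
    for r in range(n + 1):
        for S in map(set, combinations(range(n), r)):
            if all(a in S or b in S for a, b in edges):
                return r

tau = vertex_cover_number(10, petersen)       # independence number is 4
assert tau == 6
delta_loc = 3                                 # local min degree (Section 3)
assert 2 * delta_loc <= tau + log2(tau) + 1   # inequality from [6]
assert ceil((tau + log2(tau)) / 4) + 1 == 4   # Corollary 2: not 4-pairable
```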
## 4 Existence of a polynomial-size \(k\)-pairable graph state In this section, we prove the existence of an infinite family of \(k\)-pairable graph states with a number of qubits that is polynomial in \(k\). For this purpose we describe a sufficient vertex-minor-based condition on a graph \(G\) for the corresponding graph state \(|G\rangle\) to be \(k\)-pairable. _Vertex-minor_ is a standard notion in graph theory [18, 8]: Given two graphs \(H=(V_{H},E_{H})\) and \(G=(V_{G},E_{G})\) such that \(V_{H}\subseteq V_{G}\), \(H\) is a vertex-minor of \(G\) when \(H\) can be obtained from \(G\) by means of local complementations and vertex deletions. It is well known that one can actually transform \(G\) into \(H\) by applying first the local complementations and then the vertex deletions. In other words, \(H\) is the subgraph induced by \(V_{H}\) in the graph \(G\star u_{1}\star u_{2}\ldots\star u_{m}\) for some sequence of vertices \(u_{1},\ldots,u_{m}\). If a graph \(H\) is a vertex-minor of a graph \(G\) then the graph state \(|H\rangle\) can be obtained from \(|G\rangle\) by CLOCC protocols; the converse is also true when \(H\) has no isolated vertices [11]. Notice that a perfect matching9 has by definition no isolated vertices. Then, Footnote 9: A perfect matching is a graph where each vertex appears in exactly one edge. **Proposition 2**.: _A graph state \(|G\rangle\) is \(k\)-pairable by CLOCC protocols if and only if any perfect matching on any \(2k\) vertices is a vertex-minor of \(G\)._ To prove the existence of 'small' graphs which admit any perfect matching on any \(2k\) vertices as vertex-minors, we use probabilistic methods. We consider a random Erdős–Rényi graph \(G(n,p)\) of order \(n\), such that each edge is included in the graph with probability \(p\), independently of every other edge. 
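On very small graphs, the vertex-minor condition above can be tested exhaustively by enumerating the orbit of \(G\) under local complementation (local complementations first, then vertex deletions). The sketch below is illustrative only and does not scale; it checks that a single EPR edge on the non-adjacent pair \(\{0,2\}\) is a vertex-minor of \(C_{5}\), as guaranteed by connectedness:

```python
from itertools import combinations

def local_complement(E, u):
    N = {v for e in E if u in e for v in e if v != u}
    return E ^ {frozenset(p) for p in combinations(sorted(N), 2)}

def lc_orbit(E, n):
    """All graphs reachable from E by local complementations (BFS)."""
    start = frozenset(E)
    seen, todo = {start}, [start]
    while todo:
        g = todo.pop()
        for u in range(n):
            h = frozenset(local_complement(set(g), u))
            if h not in seen:
                seen.add(h)
                todo.append(h)
    return seen

def is_vertex_minor(E, n, W, H):
    """H (a graph on the vertices W) is a vertex-minor of G iff some
    graph in the LC-orbit of G induces H on W."""
    W, Hset = set(W), {frozenset(e) for e in H}
    return any({e for e in g if e <= W} == Hset for g in lc_orbit(E, n))

c5 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
# C5 is connected, so |C5> is 1-pairable: an EPR edge on the
# non-adjacent pair {0, 2} is a vertex-minor of C5
assert is_vertex_minor(c5, 5, {0, 2}, [(0, 2)])
```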
The objective is to choose appropriate values of \(p\) and \(n\) to guarantee that the random graph admits, with a non-zero probability, every perfect matching of size \(k\) as a vertex-minor. To do so, we are going to use the following lemma, which derives directly from the union bound: **Lemma 1**.: _Let \(\mathcal{A}=\{A_{1},\ldots,A_{d}\}\) be a set of bad events in an arbitrary probability space. If for all \(A_{i}\), \(Pr(A_{i})\leq p\) and \(dp<1\), then with a non-zero probability, none of the bad events occur: \(Pr(\overline{A_{1}},\ldots,\overline{A_{d}})>0\)._ Proof.: \(Pr(\overline{A_{1}},\ldots,\overline{A_{d}})=1-Pr(A_{1}\cup\cdots\cup A_{d}) \geq 1-\sum_{i\in\llbracket 1,d\rrbracket}Pr(A_{i})\geq 1-dp>0\). In our context, a bad event is when the strategy described below fails to induce a given perfect matching on a given set of vertices. Intuitively, in order to induce a given perfect matching on a fixed set \(K\) of vertices, one has to toggle some edges, say \(r\) edges. Such an edge \((a,b)\) can be toggled by means of a local complementation on a vertex \(u_{a,b}\notin K\) if \(u_{a,b}\) is connected to both \(a\) and \(b\) but none of the other vertices of \(K\). To guarantee that each of the \(r\) edges can be toggled independently, it is also desirable that the corresponding \(r\) vertices \(u_{a,b}\) form an independent set (see Fig. 2). We first prove a technical lemma to upper bound the probability of the bad event that such a configuration does not exist in a random graph: **Lemma 2**.: _Given a random graph \(G(n,p)\), let \(K\) (of size \(k\)) and \(M\) (of size \(m\)) be two non-intersecting subsets of vertices, and \(R\subseteq\binom{K}{2}\) be a set of \(r\) pairs of vertices in \(K\). The probability that for any independent set \(S\subseteq M\), \(\exists\{a,b\}\in R\), s.t. 
\(\forall u\in S\), \(N(u)\cap K\neq\{a,b\}\), is upper bounded by \(re^{-(m-r)p^{2}(1-p)^{k+r-2}}\)._ Proof.: Let \(R=\{\{a_{1},b_{1}\},\ldots\{a_{r},b_{r}\}\}\). We consider the complementary event \(\overline{B}\): \(\exists S\subseteq M\text{ s.t. }\forall i\in\llbracket 1,r\rrbracket,\exists u_{i} \in S\text{ s.t. }N_{K\cup S}(u_{i})=\{a_{i},b_{i}\}\), where \(N_{A}(u):=N(u)\cap A\). We describe in the following a greedy algorithm to find such a set \(S\), and then upper bound \(Pr(B)\) by the probability that the algorithm fails. First initialize \(S\) as the empty set, and consider an arbitrary ordering on the vertices of \(M\). We will consider vertices \(u\) of \(M\) one after the other. At each step, if \(\exists i\in\llbracket 1,r\rrbracket\), \(N_{K\cup S}(u)=\{a_{i},b_{i}\}\), then we add \(u\) to \(S\), and we remove \(\{a_{i},b_{i}\}\) from \(R\). When \(R\) is empty, we are done. We note \(p(m,r,s)\) the probability that the algorithm fails if we start with a set \(S\) of size \(s\) (then the probability that the algorithm fails in general is \(p(m,r,0)\)). We have \(p(m,r,s)=1\) when \(r>m\) and \(p(m,0,s)=0\). More generally, we show that: \[p(m,r,s)=rp^{2}(1-p)^{k+s-2}p(m-1,r-1,s+1)+\left(1-rp^{2}(1-p)^{k+s-2}\right)p (m-1,r,s)\] Indeed, at a given step of the algorithm, say that we consider the vertex \(u\in M\). 
\[p(m,r,s)\] \[=Pr(\text{the algorithm fails given m,r,s})\] \[=\sum_{i\in\llbracket 1,r\rrbracket}Pr(\text{the algorithm fails given m,r,s}|N_{K\cup S}(u)=\{a_{i},b_{i}\})Pr(N_{K\cup S}(u)=\{a_{i},b_{i}\})\] \[+Pr(\text{the algorithm fails given m,r,s}|\forall i\in \llbracket 1,r\rrbracket,N_{K\cup S}(u)\neq\{a_{i},b_{i}\})\] \[\times Pr(\forall i\in\llbracket 1,r\rrbracket,N_{K\cup S}(u) \neq\{a_{i},b_{i}\})\] \[=p(m-1,r-1,s+1)rp^{2}(1-p)^{k+s-2}+p(m-1,r,s)\left(1-rp^{2}(1-p)^ {k+s-2}\right)\] \[\text{using }Pr(\forall i\in\llbracket 1,r\rrbracket,N_{K\cup S}(u) \neq\{a_{i},b_{i}\})=1-\sum_{i\in\llbracket 1,r\rrbracket}Pr(N_{K\cup S}(u)=\{a_{i},b_{i }\})\] We will now prove by induction that for all \(m,r,s\in\mathbb{N}\text{ such that }m\geqslant r\), \(p(m,r,s)\leqslant re^{-(m-r)p^{2}(1-p)^{k+r+s-2}}\). For the initialization, we need to prove that this is true for \(p(r,r,s)\), for any \(r\) and \(s\), as well as for \(p(m,0,s)\), for any \(m\) and \(s\). We have \(p(m,0,s)=0\), so we just need \(p(r,r,s)\leqslant r\) for any \(r\geqslant 1\), which is trivially true. Then, consider some \(s\in\mathbb{N},m,r\in\mathbb{N}^{*}\text{ such that }m\geqslant r\), and suppose that for \(p(m-1,r-1,s+1)\) and \(p(m-1,r,s)\), the property is true. 
Then,
\[p(m,r,s)=rp^{2}(1-p)^{k+s-2}p(m-1,r-1,s+1)+\left(1-rp^{2}(1-p)^{k+s-2}\right)p(m-1,r,s)\]
\[\leqslant rp^{2}(1-p)^{k+s-2}(r-1)e^{-(m-r)p^{2}(1-p)^{k+s+r-2}}+\left(1-rp^{2}(1-p)^{k+s-2}\right)re^{-(m-r-1)p^{2}(1-p)^{k+s+r-2}}\]
\[=re^{-(m-r)p^{2}(1-p)^{k+s+r-2}}\left(p^{2}(1-p)^{k+s-2}(r-1)+\left(1-rp^{2}(1-p)^{k+s-2}\right)e^{p^{2}(1-p)^{k+s+r-2}}\right)\]
\[\leqslant re^{-(m-r)p^{2}(1-p)^{k+s+r-2}}e^{p^{2}(1-p)^{k+s+r-2}}\left(p^{2}(1-p)^{k+s-2}(r-1)+\left(1-rp^{2}(1-p)^{k+s-2}\right)\right)\]
\[=re^{-(m-r)p^{2}(1-p)^{k+s+r-2}}e^{p^{2}(1-p)^{k+s+r-2}}\left(1-p^{2}(1-p)^{k+s-2}\right)\]
\[\leqslant re^{-(m-r)p^{2}(1-p)^{k+s+r-2}}e^{p^{2}(1-p)^{k+s+r-2}}e^{-p^{2}(1-p)^{k+s-2}}\]
\[\leqslant re^{-(m-r)p^{2}(1-p)^{k+s+r-2}}\quad\text{as }r\geqslant 0\]
At the end of the day, the probability that the algorithm fails is \(p(m,r,0)\). So \(1-Pr(\exists S=\{u_{1},\ldots,u_{r}\}\subseteq M\text{ s.t. }\forall i\in\llbracket 1,r\rrbracket,N_{K\cup S}(u_{i})=\{a_{i},b_{i}\})\leqslant p(m,r,0)\leqslant re^{-(m-r)p^{2}(1-p)^{k+r-2}}\). We are now ready to prove the existence of \(k\)-pairable graph states on a number of qubits cubic in \(k\) (up to a logarithmic factor):

**Proposition 3**.: _For any constant \(c>\frac{125e^{2}}{4}\approx 231\), there exists \(k_{0}\) s.t. for any \(k>k_{0}\), there exists a \(k\)-pairable graph state on \(\lfloor ck^{3}\ln(k)^{3}\rfloor\) qubits._

Proof.: For this proof we will generate a random graph, and we will apply Lemma 1 to show that with non-zero probability, such a graph satisfies the property of Proposition 2, i.e. for any \(k\) disjoint pairs \(\{a_{1},b_{1}\},\ldots,\{a_{k},b_{k}\}\) of vertices, there exists a sequence of local complementations that maps \(G\) to a graph \(G^{\prime}\) such that \(\pi\) is an induced subgraph of \(G^{\prime}\), \(\pi\) being the perfect matching with vertices \(V_{\pi}=\bigcup_{i\in[k]}\{a_{i},b_{i}\}\) and edges \(E_{\pi}=\bigcup_{i\in[k]}\{(a_{i},b_{i})\}\).
That will allow us to claim that the corresponding graph state is \(k\)-pairable, thus proving the proposition. Let \(m\in\mathbb{N}\) and \(p\in[0,1]\) be parameters to be fixed later on. The graph \(G=(V,E)\), of order \(n=m+2k\), is generated by the Erdős–Rényi model: two vertices in \(G\) are connected with probability \(p\). For some perfect matching \(\pi\) represented by \(\{\{a_{1},b_{1}\},\ldots,\{a_{k},b_{k}\}\}\), the corresponding bad event \(A_{\pi}\) is: "We cannot induce the perfect matching \(\pi\) by means of local complementations on vertices from \(V\setminus V_{\pi}\)". The first step of this proof is to bound \(Pr(A_{\pi})\). In order to be able to induce any subgraph on \(V_{\pi}\), it is sufficient to find an independent set \(S\subseteq V\setminus V_{\pi}\) of size \(\binom{2k}{2}\) such that for any \(\{a,b\}\in\binom{V_{\pi}}{2}\), \(\exists u_{a,b}\in S\text{ s.t. }N_{V_{\pi}}(u_{a,b})=\{a,b\}\). Indeed, one can then induce any subgraph \(H\) on \(V_{\pi}\) by toggling each edge \((a,b)\) of \(G[V_{\pi}]\Delta H\) by means of a local complementation on \(u_{a,b}\). In our case, we do not always need to be able to flip all edges of \(V_{\pi}\), because we only want to induce one particular subgraph (the perfect matching \(\pi\)). So the independent set that we look for might be of size less than \(\binom{2k}{2}\). Let's bound the probability of failing to construct such an independent set \(S\) (which will give a direct bound on \(Pr(A_{\pi})\)). We will use the fact that, as \(p\) will be chosen to be small (not \(1/2\)), the number of edges to toggle is small in the vast majority of cases. We will call \(r\) the number of edges to toggle in \(V_{\pi}\). Let \(t\in\mathbb{N}\) be a threshold to be fixed later on. We will handle separately the cases where \(r\leqslant t\) and \(r\geqslant t+1\).
\[Pr(A_{\pi})=Pr(r\leqslant t)\cdot Pr(A_{\pi}|r\leqslant t)+Pr(r\geqslant t+1)\cdot Pr(A_{\pi}|r\geqslant t+1)\]
Let's bound \(Pr(r\leqslant t)\) and \(Pr(A_{\pi}|r\geqslant t+1)\) by \(1\), as they are expected to be very close to \(1\).
\[Pr(A_{\pi})\leqslant Pr(A_{\pi}|r\leqslant t)+Pr(r\geqslant t+1)\]
Using Lemma 2, \(Pr(A_{\pi}|r\leqslant t)\leqslant te^{-(m-t)p^{2}(1-p)^{2k+t-2}}\). To bound \(Pr(r\geqslant t+1)\), we will use a property of the binomial distribution. To simplify, we suppose that we always have to toggle the \(k\) edges corresponding to the pairs of \(\pi\). Let's introduce a random variable \(X\) that follows the distribution \(B(\binom{2k}{2}-k,p)\). What we just said amounts to writing: \(Pr(r\geqslant t+1)\leqslant Pr(X\geqslant t+1-k)\). Indeed, the edges that are not pairs of \(\pi\) have to be removed, and each such edge is present with probability \(p\). We will use the Chernoff bound: with \(\mu=\mathbb{E}[X]=p\left(\binom{2k}{2}-k\right)=2pk(k-1)\), for any \(\delta>0\), \(Pr(X\geqslant(1+\delta)\mu)\leqslant e^{-\frac{\delta^{2}}{2+\delta}\mu}\). As we need \((1+\delta)\mu=t+1-k\), we take \(\delta=\frac{t+1-k-\mu}{\mu}\). The Chernoff bound then gives
\[Pr(X\geqslant t+1-k)\leqslant e^{-\frac{\left(\frac{t+1-k-\mu}{\mu}\right)^{2}}{\frac{t+1-k+\mu}{\mu}}\mu}=e^{-\frac{(t+1-k-\mu)^{2}}{t+1-k+\mu}}\]
At the end of the day, \(Pr(A_{\pi})\leqslant te^{-(m-t)p^{2}(1-p)^{2k+t-2}}+e^{-\frac{(t+1-k-2pk(k-1))^{2}}{t+1-k+2pk(k-1)}}\). The number of bad events is \(d=\frac{n!}{k!(n-2k)!2^{k}}\). Using \(k!\geqslant\frac{k^{k}}{e^{k-1}}\), it is upper bounded by \(\frac{n^{2k}e^{k-1}}{(2k)^{k}}=\frac{1}{e}\left(\frac{n^{2}e}{2k}\right)^{k}\). To apply Lemma 1, we need \(dp_{0}<1\), where \(p_{0}=te^{-(m-t)p^{2}(1-p)^{2k+t-2}}+e^{-\frac{(t+1-k-2pk(k-1))^{2}}{t+1-k+2pk(k-1)}}\) is the upper bound on the probability of the bad events.
It's sufficient to have: **(1)** \(e^{-\frac{(t+1-k-2pk(k-1))^{2}}{t+1-k+2pk(k-1)}}\left(\frac{n^{2}e}{2k}\right)^{k}<\frac{e}{2}\) and **(2)** \(\left(te^{-(m-t)p^{2}(1-p)^{2k+t-2}}\right)\left(\frac{n^{2}e}{2k}\right)^{k}<\frac{e}{2}\). Let's show that these equations are satisfied for any large enough \(k\) by choosing \(n=\lfloor c_{2}k^{3}\ln(k)^{3}\rfloor\), \(t=\lfloor c_{1}k\ln(k)\rfloor\) and \(p=\frac{2}{2k+t}\), with \(c_{1}>5\) and \(c_{2}>\frac{5e^{2}c_{1}^{2}}{4}>\frac{125e^{2}}{4}\approx 231\). **(1)**: The equation translates to \(k(2\ln(n)+1-\ln(2k))<\frac{(t+1-k-2pk(k-1))^{2}}{t+1-k+2pk(k-1)}+\ln(e/2)\). We have \(k(2\ln(n)+1-\ln(2k))=k(2\ln(\lfloor c_{2}k^{3}\ln(k)^{3}\rfloor)+1-\ln(2k))\leqslant k(2\ln(c_{2})+6\ln(k)+6\ln(\ln(k))+1-\ln(k)-\ln(2))\sim_{k\to\infty}5k\ln(k)\). And \(\frac{(t+1-k-2pk(k-1))^{2}}{t+1-k+2pk(k-1)}+\ln(e/2)\sim_{k\to\infty}t=\lfloor c_{1}k\ln(k)\rfloor\sim_{k\to\infty}c_{1}k\ln(k)\). The choice of \(c_{1}\) guarantees that for any large enough \(k\), **(1)** is satisfied. **(2)**: Using Lemma 3 (in the appendix), we get \(e^{-(m-t)p^{2}(1-p)^{2k+t-2}}\leqslant e^{-(m-t)\frac{4e^{-2}}{(2k+t)^{2}}}\), so it's sufficient to prove \(k(2\ln(n)+1-\ln(2k))<(m-t)\frac{4e^{-2}}{(2k+t)^{2}}-\ln(t)+\ln(e/2)\). We just saw that \(k(2\ln(n)+1-\ln(2k))\sim_{k\to\infty}5k\ln(k)\). Similarly, \(m=n-2k\), so \((m-t)\frac{4e^{-2}}{(2k+t)^{2}}-\ln(t)+\ln(e/2)\sim_{k\to\infty}c_{2}k^{3}\ln(k)^{3}\frac{4e^{-2}}{(c_{1}k\ln(k))^{2}}=\frac{4e^{-2}c_{2}}{c_{1}^{2}}k\ln(k)\). The choice of \(c_{2}\) guarantees that for any large enough \(k\), **(2)** is satisfied. Then, according to Lemma 1, there exists a \(k\)-pairable graph state on \(n\) qubits.

## 5 Vertex-minor universality

In the previous section, we considered graphs on which we can induce any perfect matching on any \(2k\) vertices by means of local complementations.
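The greedy procedure at the heart of the proof of Lemma 2 can be sketched in a few lines of Python (an illustrative sketch; the adjacency representation and vertex names are assumptions, not taken from the paper):

```python
# Greedy selection of toggling vertices, as in the proof of Lemma 2:
# scan the vertices of M in a fixed order; a vertex u is added to S when its
# neighbourhood inside K ∪ S is exactly one of the remaining pairs {a_i, b_i},
# and that pair is then removed from the list.

def greedy_toggling_set(adj, K, M, pairs):
    """adj: dict vertex -> set of neighbours; K, M: disjoint vertex lists;
    pairs: iterable of 2-element sets {a, b} with a, b in K.
    Returns a dict pair -> chosen vertex of M, or None if the greedy scan fails."""
    remaining = [frozenset(p) for p in pairs]
    S, chosen = [], {}
    for u in M:
        if not remaining:
            break
        nbhd = frozenset(adj[u] & (set(K) | set(S)))  # N_{K ∪ S}(u)
        if nbhd in remaining:
            remaining.remove(nbhd)
            chosen[nbhd] = u
            S.append(u)
    return chosen if not remaining else None
```

Requiring \(N_{K\cup S}(u)\) to match a remaining pair exactly is what makes the chosen vertices automatically non-adjacent to each other, mirroring the independence condition \(N_{K\cup S}(u_{i})=\{a_{i},b_{i}\}\) in the proof.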
In this section we introduce the natural combinatorial problem of being able to induce any graph on any set of \(k\) vertices:

**Definition 2**.: _A graph \(G\) is \(k\)-vertex-minor universal if any graph on any \(k\) vertices is a vertex-minor of \(G\)._

The associated property on quantum states is the ability to induce not only EPR pairs, but any graph state on a given number of qubits by means of LOCC protocols. As any stabilizer state is equivalent to a graph state under local Clifford operations [12], it leads to the following generalization of \(k\)-pairability:

**Proposition 4**.: _If \(G\) is a \(k\)-vertex-minor universal graph, then one can induce any stabilizer state on any set of \(k\) qubits in the corresponding graph state \(|G\rangle\), by CLOCC protocols._

Contrary to the pairability case (Proposition 2), the existence of CLOCC protocols to induce any stabilizer state on \(k\) qubits from a given graph state \(|G\rangle\) does not imply in general that \(G\) is \(k\)-vertex-minor universal. For instance, \(K_{2}\) (the graph with two vertices and one edge) is not \(2\)-vertex-minor universal since no local complementation can turn it into an empty graph. However, using a CLOCC protocol (e.g. an X-measurement on each qubit), one can map the corresponding graph state (a maximally entangled pair of qubits) to the graph state composed of a tensor product of two single-qubit states. Obviously, \(2k\)-vertex-minor universality implies \(k\)-pairability:

**Corollary 3**.: _If \(G\) is a \(2k\)-vertex-minor universal graph, then \(|G\rangle\) is a \(k\)-pairable graph state._

As pointed out by Sergey Bravyi (personal communication), a counting argument leads to an upper bound on the vertex-minor universality:
**Proposition 5**.: _If a graph \(G\) of order \(n\) is \(k\)-vertex-minor universal then \(k<\sqrt{2n\log_{2}(3)}+2\)._ Proof.: If \(H\) of order \(k\) is a vertex-minor of \(G\) of order \(n\), then \(|H\rangle\) can be obtained from \(|G\rangle\) by means of local Pauli measurements on \(n-k\) qubits and local Clifford unitaries on \(k\) qubits. There are \(3\) possible Pauli measurements per qubit, so \(3^{n-k}\) in total. Notice that for a fixed choice of Pauli measurements, different local Clifford transformations on the remaining \(k\) qubits can only generate graph states which correspond to graphs that are equivalent up to local complementation. Moreover, it is known that there are at least \(2^{\frac{k^{2}-5k}{2}-1}\) different graphs on \(k\) vertices up to local complementation [2]. As a consequence, if \(G\) is \(k\)-vertex-minor universal, we must have \(3^{n-k}\geqslant 2^{\frac{k^{2}-5k}{2}-1}\). Using numerical analysis, this implies \(k<\sqrt{2n\log_{2}(3)}+2\). Another upper bound, based on the local minimum degree, can be obtained: **Proposition 6**.: _If a graph \(G\) is \(k\)-vertex-minor universal then \(k<\delta_{\text{loc}}(G)+2\)._ Proof.: By contradiction, assume there exists a graph \(G\) that is \((\delta_{\mathrm{loc}}(G)+2)\)-vertex-minor universal. \(G\) contains a local set \(L=D\cup\mathrm{Odd}(D)\) of size \(\delta_{\mathrm{loc}}(G)+1\). By hypothesis, there exists a sequence of local complementations that maps \(G\) to a graph \(G^{\prime}\) such that \(G^{\prime}[V(H)]=H\), where \(H\) is the graph defined on \(L\cup v\) (\(v\) being an arbitrary vertex of \(V\setminus L\)), such that its only edge is between \(v\) and an arbitrary vertex \(u\in L\). Note that a local set is invariant by local complementation [17]. This means \(L\) is still a local set in \(G^{\prime}\), i.e. there exists \(D^{\prime}\) s.t. \(L=D^{\prime}\cup\mathrm{Odd}(D^{\prime})\). If \(u\in D^{\prime}\) then \(v\in Odd(D^{\prime})\). 
If \(u\notin D^{\prime}\) then \(u\notin\mathrm{Odd}(D^{\prime})\), because it is not connected to any other vertex of \(L\). In both cases, this contradicts \(L=D^{\prime}\cup\mathrm{Odd}(D^{\prime})\). To illustrate the concept of vertex-minor universality, we provide minimal examples of \(k\)-vertex-minor universal graphs for \(k\) up to \(4\). The vertex-minor universality (and pairability) of the graphs below and in Table 1 was established by numerical analysis. The code consists of an exploration of the orbit of a given graph under local complementation (either by using essentially a breadth-first search, or by applying local complementations on random vertices, which yields results faster), to find the \(\binom{n}{k}2^{\binom{k}{2}}\) possible induced subgraphs of order \(k\).

**2-vertex-minor universality.** As we saw above, \(K_{2}\) is \(1\)-pairable but not \(2\)-vertex-minor universal: \(K_{3}\) is actually the smallest \(2\)-vertex-minor universal graph, because every edge can be toggled by a local complementation on the opposite vertex.

**3-vertex-minor universality.** We observed that \(C_{6}\) (the cycle of order \(6\)) is a \(3\)-vertex-minor universal graph, and there exist no \(3\)-vertex-minor universal graphs of order smaller than or equal to \(5\). Indeed, using the database from [1] along with Proposition 6, it appears that the only graph of order smaller than or equal to \(5\) having a local minimum degree larger than or equal to \(2\) (thus being a candidate for \(3\)-vertex-minor universality) is \(C_{5}\). We have observed, however, that \(C_{5}\) is not \(3\)-vertex-minor universal by exploring its orbit under local complementation, composed of \(132\) graphs, and checking that none of them contains an independent set of size \(3\).

**4-vertex-minor universality.** We observed that the \(10\)-vertex "wheel" graph from [5] (see Fig. 1), whose corresponding graph state has been proven to be \(2\)-pairable, is also \(4\)-vertex-minor universal. Bravyi et al.
showed that no graph state with \(9\) or fewer qubits is \(2\)-pairable using CLOCC protocols. Besides, any graph state corresponding to a \(4\)-vertex-minor universal graph is \(2\)-pairable using CLOCC protocols. Thus, there is no \(4\)-vertex-minor universal graph of order smaller than \(10\). We also observed that the Petersen graph (of order \(10\)) is \(4\)-vertex-minor universal (and thus \(2\)-pairable). These practical results, along with results for small Paley graphs, are summarized in Table 1. We prove the existence of an infinite family of \(k\)-vertex-minor universal graphs whose order is polynomial in \(k\):

**Proposition 7**.: _For any constant \(c>\frac{3e^{2}}{4}\approx 5.54\), there exists \(k_{0}\) s.t. for any \(k>k_{0}\), there exists a \(k\)-vertex-minor universal graph of order at most \(ck^{4}\ln(k)\)._

Proof.: For this proof we will generate a random \((k+1)\)-partite graph, and we will apply Lemma 1 to show that with non-zero probability, such a graph is \(k\)-vertex-minor universal. Let \(m\in\mathbb{N}\) be fixed later on. The graph \(G=(V,E)\), of order \(n=m(k+1)\), will be generated as follows. Consider \(k+1\) sets of \(m\) vertices \(V_{0}\), \(V_{1}\), \(\ldots\), \(V_{k}\) that form a partition of \(V\). Two vertices \(u\in V_{i}\) and \(v\in V_{j}\) in \(G\) are not connected if \(i=j\) (so any \(V_{i}\) is an independent set), and are connected with probability \(p\in[0,1]\) if \(i\neq j\). For any \(K\in\binom{V}{k}\), let \(V^{(K)}\in\{V_{0},\ldots,V_{k}\}\) be such that \(K\cap V^{(K)}=\emptyset\). Notice that if for any pair \(\{a,b\}\in\binom{K}{2}\) there exists a vertex \(u_{a,b}\in V^{(K)}\) s.t. \(N(u_{a,b})\cap K=\{a,b\}\), then one can induce any subgraph \(H\) on \(K\) by toggling each edge \((a,b)\) of \(G[K]\Delta H\) by means of a local complementation on \(u_{a,b}\). Given \(K\in\binom{V}{k}\) and \(\{a,b\}\in\binom{K}{2}\), let \(A_{K,a,b}\) be the bad event: "\(\forall u\in V^{(K)}\), \(N_{K}(u)\neq\{a,b\}\)".
Since the probability for a given \(u\in V^{(K)}\) to satisfy \(N_{K}(u)=\{a,b\}\) is \(p^{2}(1-p)^{k-2}\), we have:
\[Pr(A_{K,a,b})=\left(1-p^{2}(1-p)^{k-2}\right)^{m}\leqslant e^{-mp^{2}(1-p)^{k-2}}\]
We fix \(p=\frac{2}{k}\), as it minimizes \(e^{-mp^{2}(1-p)^{k-2}}\) (see Lemma 3 in the appendix), and get
\[Pr(A_{K,a,b})\leqslant e^{-4e^{-2}mk^{-2}}\]
The number of bad events is \(d=\binom{m(k+1)}{k}\binom{k}{2}\), which is upper bounded by \(\binom{m(k+1)}{k+1}\) when \(m>\binom{k}{2}\) (see Lemma 4 in the appendix). To apply Lemma 1, we need \(dp_{0}<1\), where \(p_{0}=e^{-4e^{-2}mk^{-2}}\) is the upper bound on the probability of the bad events.
\[dp_{0}\leqslant\binom{m(k+1)}{k+1}e^{-4e^{-2}mk^{-2}}\leqslant 2^{m(k+1)H(\frac{1}{m})}e^{-4e^{-2}mk^{-2}}=e^{m(k+1)H(\frac{1}{m})\ln(2)-4e^{-2}mk^{-2}}\]

| graph \(G\) | order | \(\delta_{\mathrm{loc}}(G)\) | 1-p? | 2-vmu? | 3-vmu? | 2-p? | 4-vmu? | 5-vmu? | 3-p? |
|---|---|---|---|---|---|---|---|---|---|
| \(K_{2}\) | 2 | 1 | yes | no | | | | | |
| \(K_{3}\) | 3 | 2 | yes | yes | no | | | | |
| \(C_{6}\) | 6 | 2 | yes | yes | yes | no [cor 1] | no [prop 6] | no [prop 6] | no [cor 1] |
| "Wheel" graph | 10 | 3 | yes | yes | yes | yes | yes | no [prop 6] | no [cor 1] |
| Petersen graph | 10 | 3 | yes | yes | yes | yes | yes | no [prop 6] | no [cor 1] |
| 13-Paley graph | 13 | 4 | yes | yes | yes | yes | yes | no | no [cor 1] |
| 17-Paley graph | 17 | 4 | yes | yes | yes | yes | yes | yes | no [cor 1] |
| 29-Paley graph | 29 | 10 | yes | yes | yes | yes | yes | yes | yes |

Table 1: A table summarizing the pairability of some graph states, and the vertex-minor universality of some graphs. “\(k\)-p?” is to be understood as “is the graph state \(|G\rangle\) \(k\)-pairable?” and “\(k\)-vmu?” is to be understood as “is the graph \(G\) \(k\)-vertex-minor universal?”.
“yes” results were obtained by numerical analysis, by exploring the orbit of the graphs by local complementation, so we were able to check that each graph has the required vertex-minors. “no [.]” results are direct applications of Corollary 1 and Proposition 6 (we had to compute the local minimum degree of each graph: this was done by brute force, by computing every local set). \(K_{2}\) not being 2-vertex-minor universal and \(K_{3}\) not being 3-vertex-minor universal can be verified by checking the (very small) orbit by local complementation of these two graphs. The orbit by local complementation of the 13-Paley graph is, however, too big to be computed. To prove that this graph is not \(5\)-vertex-minor universal, we showed, using a slightly modified version of the program used in [5], that no fully separable quantum state can be induced on the first 5 qubits by means of CLOCC protocols (which implies that the independent set on the first 5 vertices is not a vertex-minor of the graph). where \(H(x)=-x\log_{2}(x)-(1-x)\log_{2}(1-x)\) is the binary entropy. By choosing \(m=\left\lfloor c\frac{k^{4}}{k+1}\ln(k)\right\rfloor\) with \(c>\frac{3e^{2}}{4}\approx 5.54\), we have \(m(k+1)H(\frac{1}{m})\ln(2)\sim_{k\to\infty}3k\ln(k)\) and \(4e^{-2}mk^{-2}\sim_{k\to\infty}4e^{-2}ck\ln(k)\). The choice of \(c\) guarantees that for any large enough \(k\),
\[e^{m(k+1)H(\frac{1}{m})\ln(2)-4e^{-2}mk^{-2}}<1,\]
and thus, according to Lemma 1, there exists a graph of order \(m(k+1)\) which is \(k\)-vertex-minor universal.

## 6 Robust pairability

In this section, we explore a natural extension of pairability in the presence of errors or malicious parties.
We say that a \(k\)-pairable state is \(m\)-robust if one can create any \(k\) pairs of maximally entangled qubits, independently of the actions of any set of \(m\) identified malicious parties: **Definition 3**.: _A \(k\)-pairable state \(\left|\psi\right\rangle\) on a set \(V\) of qubits is \(m\)-robust if for any set \(M\subseteq V\) of size at most \(m\) and any matching \(\pi\) of size \(k\) on the vertices \(V\setminus M\), there is an LOCC protocol that transforms \(\rho=Tr_{M}(\left|\psi\right\rangle\left\langle\psi\right|)\) into \(\left|\pi\right\rangle\)._ Vertex-minor universal graphs provide robust pairable quantum states: **Proposition 8**.: _If \(G\) is a \(k\)-vertex-minor universal graph then for any \(k^{\prime}\leqslant\frac{k}{2}\), \(\left|G\right\rangle\) is a \((k-2k^{\prime})\)-robust \(k^{\prime}\)-pairable state._ Proof.: Let \(G\) be a \(k\)-vertex-minor universal graph, \(M\) a set of at most \(k-2k^{\prime}\) vertices and \(K\) a (disjoint) set of \(2k^{\prime}\) vertices. We consider any matching \(\tau\) of size \(k^{\prime}\) on the vertices \(K\cup M\), such that each vertex of \(K\) is of degree \(1\), and those of \(M\) are of degree \(0\). It is enough to show that there is an LOCC protocol transforming \(Tr_{M}(\left|G\right\rangle\left\langle G\right|)\) into \(\left|\tau[K]\right\rangle\). As \(\tau\) is of order \(\left|K\right|+\left|M\right|\leqslant k\), \(\tau\) is a vertex-minor of \(G\), hence there exists a sequence of local complementations transforming \(G\) into \(G^{\prime}\) such that \(G^{\prime}[K\cup M]=\tau\). In terms of graph state, it means that there are local (Clifford) unitaries \(U_{M}\) (acting on the qubits of \(M\)) and \(U_{\overline{M}}\) (acting on the other qubits) s.t. \(U_{M}U_{\overline{M}}|G\rangle=\left|G^{\prime}\right\rangle\). 
Notice that applying \(U_{\overline{M}}\) on \(\rho=Tr_{M}(\left|G\right\rangle\left\langle G\right|)\) leads to the state \(\rho^{\prime}=U_{\overline{M}}Tr_{M}(\left|G\right\rangle\left\langle G\right|)U_{\overline{M}}^{\dagger}=Tr_{M}(U_{\overline{M}}U_{M}|G\rangle\left\langle G\right|U_{\overline{M}}^{\dagger}U_{M}^{\dagger})=Tr_{M}(\left|G^{\prime}\right\rangle\left\langle G^{\prime}\right|)\). Thus, from now on, we assume that the overall state is \(\left|G^{\prime}\right\rangle\), and we show that \(\left|G^{\prime}\right\rangle\) can be turned into \(\left|\tau[K]\right\rangle\) without the help of the parties in \(M\). Vertex deletions can be implemented by means of standard basis measurements and local corrections. More precisely, to transform \(\left|G^{\prime}\right\rangle\) into \(\left|G^{\prime}\setminus u\right\rangle\), one can measure \(u\), leading to the state \(Z_{N(u)}^{s_{u}}|G^{\prime}\setminus u\rangle\), where \(s_{u}\in\{0,1\}\) is the classical outcome of the measurement. As a consequence, the measurements of the qubits \(V\setminus(K\cup M)\) of \(\left|G^{\prime}\right\rangle\) lead to the state \(\left|G^{\prime}[K\cup M]\right\rangle=\left|\tau\right\rangle\) up to some \(Z\) corrections which depend on the classical outcomes of the measurements; thus, before any classical communication and corrections, the state is of the form \(Z_{A}Z_{B}|\tau\rangle\) for some subsets \(A\subseteq K\), \(B\subseteq M\). The parties of \(V\setminus(K\cup M)\) send their classical outcomes to the parties of \(K\) so that the correction \(Z_{A}\) can be applied, leading to the state \(Z_{B}|\tau\rangle\). As the qubits of \(M\) are separable from those of \(K\) in \(\left|\tau\right\rangle\) (the only edges of \(\tau\) are between vertices of \(K\)), tracing out the qubits of \(M\) in the state \(Z_{B}|\tau\rangle\) leads to \(\left|\tau[K]\right\rangle\).
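The orbit explorations behind the numerical results of Table 1 can be reproduced in a few lines. The following is an illustrative Python sketch (not the authors' code) that brute-forces \(k\)-vertex-minor universality of small graphs; it relies on the fact that vertex deletions can always be postponed to the end of a sequence of operations, so every vertex-minor is an induced subgraph of some graph in the orbit under local complementation:

```python
# Brute-force check of k-vertex-minor universality on small graphs.
# A graph is a frozenset of frozenset edges over a list of vertices.
from itertools import combinations

def local_complement(edges, v, vertices):
    """Toggle every edge between two neighbours of v."""
    nbrs = [u for u in vertices if frozenset((u, v)) in edges]
    toggled = {frozenset(p) for p in combinations(nbrs, 2)}
    return frozenset(edges ^ toggled)

def orbit(edges, vertices):
    """All graphs reachable from `edges` by local complementations (BFS)."""
    seen, todo = {frozenset(edges)}, [frozenset(edges)]
    while todo:
        g = todo.pop()
        for v in vertices:
            h = local_complement(g, v, vertices)
            if h not in seen:
                seen.add(h)
                todo.append(h)
    return seen

def is_vmu(edges, vertices, k):
    """Is the graph k-vertex-minor universal? Every graph on every k-subset
    must appear as an induced subgraph of some graph in the orbit."""
    orb = orbit(edges, vertices)
    for K in combinations(vertices, k):
        pairs = list(combinations(K, 2))
        for mask in range(2 ** len(pairs)):
            target = {frozenset(pairs[i]) for i in range(len(pairs)) if mask >> i & 1}
            if not any({e for e in g if e <= set(K)} == target for g in orb):
                return False
    return True
```

On small instances this matches the observations above: \(K_{3}\) passes for \(k=2\), while \(K_{2}\) fails for \(k=2\) and \(C_{5}\) fails for \(k=3\).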
## 7 Conclusion

We showed here that there exist polynomial-size \(k\)-pairable quantum states and provided new upper bounds for \(k\)-pairability. We also introduced a new related combinatorial notion called vertex-minor universality, for which we gave similar properties, showing the existence of polynomial-size graphs that are \(k\)-vertex-minor universal and providing upper bounds on \(k\)-vertex-minor universality based on the local minimum degree. We also provided minimal examples of \(k\)-vertex-minor universal graphs for small values of \(k\). Finally, we initiated the study of a robust version of \(k\)-pairability, in the presence of errors or malicious parties. This leaves open some questions for future work.

* Our proof for the existence of polynomial-size \(k\)-pairable quantum states is non-constructive, and their size might be far from optimal. Explicit constructions of polynomial-size \(k\)-pairable quantum states, in the same manner as the Reed-Muller CSS states from Bravyi et al., would be the logical next step. Paley graphs, thanks to their 'large' local minimum degrees, are candidates to provide good \(k\)-pairable states, using potentially even fewer qubits than those exhibited by non-constructive probabilistic methods in this paper. Similar questions also naturally apply to the explicit constructions of \(k\)-vertex-minor universal graphs.
* Pairability of graph states, when restricted to CLOCC protocols, is fully characterized by the combinatorial properties of the associated graph (see Proposition 2). Does it extend to pairability with arbitrary LOCC protocols? Does there exist a \(k\)-pairable graph state which is not \(k\)-pairable by means of CLOCC protocols?
* Even though \(2k\)-vertex-minor universality is a stronger requirement than \(k\)-pairability, it is not clear whether there exist \(k\)-pairable graph states on more than 2 vertices whose underlying graphs are not \(2k\)-vertex-minor universal.
* The \(k\)-pairable states constructed in the proof of Proposition 3 satisfy the following property: in order to create \(k\) particular EPR-pairs, one can compute the local operations to be applied on the graph state (or, equivalently, the local complementations to be applied on the underlying graph: there are \(O(k^{2})\) of them) in polynomial time. This is a consequence of the fact that the independent set in the proof is found using a greedy algorithm. This raises the question of which \(k\)-pairable states possess this property. Furthermore, we could study how to generate graphs where the greedy algorithm works for every EPR-pair with high probability (whereas in this work we only show the existence of one such graph). A similar discussion is pertinent in the case of \(k\)-vertex-minor universal graphs.

## Acknowledgements

We thank Sergey Bravyi and Ronald de Wolf for fruitful comments on an early version of this paper. This work is supported by the PEPR integrated project EPiQ ANR-22-PETQ-0007 part of Plan France 2030, by the STIC-AmSud project Qapla' 21-STIC-10, by the QuantERA grant EQUIP ANR-22-QUA2-0005-01, and by the European projects NEASQC and HPCQS.
2310.05969
Automated Chest X-Ray Report Generator Using Multi-Model Deep Learning Approach
Arief Purnama Muharram, Hollyana Puteri Haryono, Abassi Haji Juma, Ira Puspasari, Nugraha Priya Utama
2023-09-28T07:57:03Z
http://arxiv.org/abs/2310.05969v3
# Automated Chest X-Ray Report Generator Using Multi-Model Deep Learning Approach

###### Abstract

Reading and interpreting chest X-ray images is one of the most common routines for radiologists. However, it can still be challenging, even for the most experienced ones. Therefore, we proposed a multi-model deep learning-based automated chest X-ray report generator system designed to assist radiologists in their work. The basic idea of the proposed system is to utilize multiple binary-classification models for detecting multiple abnormalities, with each model responsible for detecting one abnormality, in a single image. In this study, we limited the radiology abnormalities detection to only cardiomegaly, lung effusion, and consolidation. The system generates a radiology report by performing the following three steps: image pre-processing, utilizing deep learning models to detect abnormalities, and producing a report. The aim of the image pre-processing step is to standardize the input by scaling it to 128x128 pixels and slicing it into three segments, which cover the upper, lower, and middle parts of the lung. After pre-processing, each corresponding model classifies the image, resulting in a 0 (zero) for no abnormality detected and a 1 (one) for the presence of an abnormality. The prediction outputs of each model are then concatenated to form a 'result code'. The 'result code' is used to construct a report by selecting the appropriate pre-determined sentence for each detected abnormality in the report generation step. The proposed system is expected to reduce the workload of radiologists and increase the accuracy of chest X-ray diagnosis.

_Keywords:_ chest X-ray, radiology, medical report, multi-model, deep learning

## I Introduction

Deep learning for image processing has rapidly advanced in recent years [1], demonstrating promising results across a variety of domains, including radiology [2]. These kinds of advancements have enabled improvements in radiological diagnosis and patient care [2].
Therefore, there is a demand for automated systems that can accurately and quickly process the growing volume of medical images [2]. Chest X-ray (CXR) imaging, in particular, is one of the most often utilized radiologic diagnostic tools, although its interpretation can be difficult even for experienced radiologists [3]. Radiologists can benefit from automated CXR report generation since it allows them to focus on more difficult cases while reducing the possibility of errors. In this study, we proposed a multi-model deep learning-based automated CXR report generator system to help radiologists work better and faster. The proposed system is designed to detect abnormalities in a CXR image and then produce the corresponding report. Our approach was motivated by previous research that successfully applied deep learning algorithms to the processing of chest radiographs [4, 5, 6, 7, 8, 9, 10]. However, most of the studies mentioned have primarily focused on detecting or classifying a single abnormality in the image. Therefore, in this study, we aimed to detect multiple abnormalities in radiology images. We hypothesized that using multiple models, with each model responsible for one abnormal finding, can detect multiple abnormalities present within a single image. This paper will be structured as follows: Firstly, we will discuss related work on deep learning in radiology and mention several previous research studies in the field. Next, we will present the methodology and technical aspects of our suggested system. In the Results section, we will describe and discuss our experimental findings. Finally, we will conclude our study.

## II Related work

In recent years, medical image analysis has emerged as a major use of deep learning, with the potential to aid radiologists in diagnosing diseases. Deep learning algorithms, in particular, have shown promise in the identification and classification of abnormalities on chest radiographs.
One of the popular and widely-used deep learning algorithms in the field is the Convolutional Neural Network (CNN). LeCun et al. [1] present an in-depth overview of deep learning, including CNNs, which have been demonstrated to be successful for image classification tasks. A CNN is a form of artificial neural network (ANN) that is extensively employed in image recognition and processing applications. It is based on the structure and function of the visual cortex in animals, which is responsible for visual information processing [1].

In a CNN, input data is fed into a sequence of convolutional layers, which extract features from the input image by applying a set of filters. To inject non-linearity into the model, the output of each convolutional layer is then processed through a non-linear activation function such as ReLU (Rectified Linear Unit). As a result, the input data is represented by a set of high-level features that become increasingly complex and abstract. The output of the convolutional layers is often sent into one or more fully connected layers, which conduct classification or regression based on the retrieved features. The network's ultimate output is a probability distribution over the various classes or values of the output variable. CNNs have proven to be extremely effective in image classification, object detection, and segmentation tasks, and have been utilized in a variety of applications including self-driving cars, medical diagnostics, and facial recognition.

CNNs were shown by Rajkomar et al. [4] to be capable of classifying view orientations of chest radiographs with excellent accuracy. Lakhani et al. [6] used chest radiographs to construct a CNN-based model for the automated classification of pulmonary tuberculosis, obtaining high performance and indicating the promise of deep learning in disease detection. Cicero et al. [5] used a CNN to detect and classify abnormalities on chest radiographs, with good sensitivity and specificity.
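The convolution-and-ReLU building blocks described above can be illustrated with a small pure-Python sketch; the 4x4 image and 2x2 edge-detecting kernel below are toy values chosen for this example, not taken from the paper (real systems use a deep learning library for this):

```python
# Minimal illustration of CNN building blocks: one convolution filter
# slides over the image, then ReLU injects non-linearity.

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a 2D list by a square kernel."""
    H, W, k = len(image), len(image[0]), len(kernel)
    out = []
    for i in range(H - k + 1):
        row = []
        for j in range(W - k + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(k) for b in range(k)))
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise max(0, x), the non-linear activation."""
    return [[max(0, v) for v in row] for row in feature_map]

image = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
edge_kernel = [[1, -1], [1, -1]]      # responds to vertical edges
features = relu(conv2d(image, edge_kernel))
# features == [[0, 0, 1], [0, 0, 2], [0, 0, 1]]
```

A full CNN stacks many such filters and layers, then feeds the resulting feature maps into fully connected layers for classification.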
Tang et al. [9] compared several existing CNN architectures, such as AlexNet, VGGNet, ResNet, GoogLeNet, and DenseNet, on the NIH CXR dataset, using the ROC score for evaluation. They found that DenseNet, GoogLeNet, and ResNet achieved ROC scores of more than 98%. Bhatt et al. [10] used modified VGG-19 and EfficientNet-B3 architectures and applied a transfer learning technique to identify pneumonia and COVID-19 in CXR images. Table I describes in detail previous research on deep learning in radiology.

Several papers have used multi-model ensembling approaches for the detection of COVID-19 from CXR images. Saha et al. [7] use ensembling with both feature concatenation and decision fusion techniques. Deb et al. [8] develop a multi-model ensemble based on a DCNN architecture combining VGGNet, GoogLeNet, DenseNet, and NASNet.

Report generation for CXR using a deep learning approach has been done before by Amjoud et al. [11] and Ghadekar et al. [12]. An encoder-decoder approach was the focus of Ghadekar et al. [12], giving a result of 96% accuracy. However, they do not mention the encoder-decoder architecture used, nor do they explain the characteristics of the generated report. Amjoud et al. [11] focused on combining a CNN for CXR image classification with transformers for report generation.

Compared to the mentioned studies, our proposed method utilizes a multi-model approach to detect multiple abnormalities present in the image. The results from each model are then used to generate the report. We argue that using multiple models, each dedicated to a specific abnormality, leads to more 'responsible' results.

## III Methodology

### _System Design_

Based on our literature review, we are motivated to create a deep-learning model for quantifying and localizing abnormalities on chest radiographs and generating corresponding X-ray reports, to support radiologists in diagnosing a disease.
Our proposed methodology consists of three fundamental steps: image pre-processing, CXR deep learning models, and report production (Figure 1).

Fig. 1: Three fundamental steps in our proposed methodology

Image pre-processing is the first step of the entire process, as illustrated in Figure 2. In this step, each image is resized to 128x128 and square-cropped to ensure consistent dimensionality. Then, the image is divided into three segments (Segments I, II, and III) to reduce the amount of "information" passed to the deep learning model in the subsequent step. Segment I covers the upper part of the lung, segment II covers the lower part of the lung, and segment III covers the middle part of the lung.

Fig. 2: Image pre-processing step

The second step involves training deep learning models to detect each abnormality. Three models were trained: one for detecting cardiomegaly, another for detecting lung effusion, and the third for detecting consolidation. However, instead of using the whole image as input for each model, we used the corresponding sliced image based on the specific abnormality of interest (Figure 3). For example, we used segment II as the input for the cardiomegaly and lung effusion models because these abnormalities are more likely to be found in the lower part of the lungs, while segment III was used as input for the consolidation model because this abnormality is more likely to be found in the middle part of the lungs. The NIH CXR dataset was used to train the models [15].

Fig. 3: Multi-models breakdown and result code

For the experiments, we used the "one-factor-at-a-time" strategy, using the learning rate, optimizer, and pre-trained models as the observable factors. In addition, we compared the ResNet18 [13], ResNet50 [13], and GoogLeNet [14] pre-trained models, which are popular CNN architectures. We trained our models on a high-performance computing server with an Intel Xeon Silver 4208 CPU 3.2 GHz, 256 GB of RAM, and an NVIDIA Quadro RTX 5000 GPU with 16 GB of RAM. For our deep learning library, we used Torch version 1.13.1 and Torchvision version 0.14.1 in a Python 3.9.6 environment, and for deployment, we used Gradio version 3.28.3.

Finally, the last step is report generation. The purpose of this step is to map the result code, which is an aggregation of each model's prediction output, to the corresponding sentence in the master text (Figure 4). For example, if a cardiomegaly abnormality is detected (coded as "1"), the sentence "Terdapat kardiomegali, CTR \(<\) 50%" will be selected from the master text. Otherwise, the sentence "Bentuk jantung baik, tidak ditemukan kardiomegali" will be used.

### _Evaluation Strategy_

We used accuracy as our evaluation metric, both for individual trained models and for overall system performance. Accuracy is determined by calculating the total number of correct predictions divided by the total number of predictions. However, when assessing the system's performance, a true prediction is only achieved when all constituent models provide accurate predictions. For instance, if the prediction labels for the cardiomegaly, lung effusion, and consolidation models are [0, 0, 1], respectively, and the ground truth labels are [0, 0, 0], then the prediction result is considered incorrect.

### _Problem Limitations_

There are several limitations in our study that should be acknowledged, including:

* Radiology image limitation: The suggested method was trained using adult frontal CXRs and is only intended to be used for adult patients' frontal CXR images. Therefore, the system may not be suitable for interpreting X-rays of pediatric patients.
* Abnormalities detection limitation: The method is designed to identify specific abnormalities, namely cardiomegaly, lung effusion, and consolidation. As a result, any additional abnormalities present in the image that fall outside the scope of this study will not be detected by the system.
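The report-generation mapping described above can be sketched as a simple lookup from the result code into the master text. Only the two cardiomegaly sentences are quoted from the paper; the sentences for effusion and consolidation, and the dictionary layout itself, are hypothetical placeholders:

```python
# Sketch of the report-generation step: map a 3-bit "result code"
# (cardiomegaly, effusion, consolidation) to pre-determined sentences.
MASTER_TEXT = {
    "cardiomegaly": {
        1: "Terdapat kardiomegali, CTR < 50%",
        0: "Bentuk jantung baik, tidak ditemukan kardiomegali",
    },
    "effusion": {
        1: "Terdapat efusi pleura",        # hypothetical wording
        0: "Tidak terdapat efusi pleura",  # hypothetical wording
    },
    "consolidation": {
        1: "Tampak konsolidasi",           # hypothetical wording
        0: "Tidak tampak konsolidasi",     # hypothetical wording
    },
}

def generate_report(result_code):
    """result_code: sequence of 0/1 predictions, one per abnormality."""
    findings = ["cardiomegaly", "effusion", "consolidation"]
    sentences = [MASTER_TEXT[f][bit] for f, bit in zip(findings, result_code)]
    return ". ".join(sentences) + "."

report = generate_report([1, 0, 0])   # cardiomegaly detected, others absent
```

Because every result code indexes into fixed sentences, the generated report is fully determined by the three model outputs.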
## IV Result

### _Dataset Pre-processing_

We used the NIH CXR Dataset [15] to train our models, which comprises more than 100,000 CXR images with 14 different labels for abnormality findings. However, we only used 3 of the 14 abnormality labels, namely cardiomegaly, lung effusion, and consolidation. To ensure data quality, we employed domain experts to conduct reannotation. For each abnormality, we selected 2,000 images at random (1,000 images with the presence of the abnormality and 1,000 without). Any images that did not meet our criteria, such as low image quality, incorrect orientation, or misdiagnosis, were excluded. Following annotation, the dataset was randomly split into training (70%) and testing (30%) datasets. Table II presents the results of our annotated dataset.

To evaluate the system's performance, we created a separate dataset, named the "system evaluation dataset," by randomly selecting 200 images from the NIH CXR dataset. In contrast to the model training dataset, this dataset comprises three labels: cardiomegaly, lung effusion, and consolidation, each with a binary value of 0 for the absence or 1 for the presence of the abnormality. This dataset is designed to assess the system's ability to accurately predict all three labels simultaneously. Additionally, domain experts reannotated this dataset to ensure its quality, just as we did with the model training dataset.

### _Modeling_

There were several parameters that we considered for hyperparameter tuning, such as the learning rate, optimizer, and pre-trained model. Our hyperparameter tuning strategy was based on the "one-factor-at-a-time" approach, where we varied one parameter at a time while keeping the others at a default value. For each parameter value, we compared the resulting training and testing accuracy. Hyperparameter tuning was performed for each model, and the results are presented in Tables III, IV, and V.
After completing hyperparameter tuning, we found that the optimal values were different for each abnormality. The optimal learning rates were 1e-3 for effusion and cardiomegaly, and 1e-4 for consolidation. Adam was the best optimizer, since we found that SGD did not produce the best accuracy. The optimal pre-trained model also differed for each abnormality: GoogLeNet for cardiomegaly, ResNet50 for effusion, and ResNet18 for consolidation. Using these optimal parameters for each abnormality, we trained the final models and recorded the training and testing accuracy as shown in Table VI. These final models are the models used in the system.

### _System Evaluation_

We successfully created and deployed the system in the form of a web-based application (Figure 5). The system's performance was evaluated using the system evaluation dataset. For a correct prediction, the system needed to accurately predict all three abnormalities; any incorrect prediction would render the overall result incorrect.

Fig. 4: Report generation is based on the result code and master text

Despite our hypothesis that the multi-model approach would yield promising results, the system's accuracy was only 20% (Table VII). Our proposed system exhibits significantly lower performance in comparison to the studies conducted by Amjoud et al. [11] and Ghadekar et al. [12] on the generation of radiology reports. To investigate the poor performance of the system, we conducted an error analysis that included a model breakdown analysis. We discovered that the accuracies of the cardiomegaly, lung effusion, and consolidation models on the system evaluation dataset were only 52%, 80%, and 40%, respectively (Table VII). Of the three models, the worst performance was obtained by the cardiomegaly and consolidation models. The poor results obtained by these two models indicate that they failed to generalize well.
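The strict, all-or-nothing system accuracy used in this evaluation can be sketched as an exact-match comparison of label vectors. The prediction/ground-truth pairs below are invented examples, except the [0, 0, 1] vs [0, 0, 0] case taken from the Evaluation Strategy section:

```python
# System-level accuracy: a prediction counts as correct only if ALL
# three labels (cardiomegaly, effusion, consolidation) match.

def system_accuracy(predictions, ground_truth):
    exact_matches = sum(1 for p, g in zip(predictions, ground_truth) if p == g)
    return exact_matches / len(ground_truth)

preds = [[0, 0, 1], [1, 0, 0], [0, 0, 0], [1, 1, 0]]
truth = [[0, 0, 0], [1, 0, 0], [0, 0, 0], [1, 1, 1]]
acc = system_accuracy(preds, truth)   # 2 exact matches out of 4 -> 0.5
```

This metric is deliberately harsher than per-model accuracy, which is why the system score can be far below the individual model scores.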
Although the system evaluation dataset and the model training dataset were sourced from the same dataset (the NIH CXR Dataset [15]), this does not guarantee that the data are the "same". Medical images are highly heterogeneous, which can increase the risk of overfitting and lead to a loss of generalizability for the models [2]. Furthermore, the errors of each model contribute to the poor performance of the system. Based on the accuracies, it is evident that the error rates of the cardiomegaly, lung effusion, and consolidation models were 48%, 20%, and 60%, respectively (Eqs. 1-3). The combination of these errors could make it difficult for the system to make accurate predictions, which could explain the poor performance results.

Fig. 5: System example

Below is a probabilistic calculation that explains the poor performance of our system. Let \(A\), \(B\), and \(C\) denote the cardiomegaly, lung effusion, and consolidation models, respectively. \(P_{correct}(A)\), \(P_{correct}(B)\), and \(P_{correct}(C)\) represent the probabilities of an accurate prediction by the cardiomegaly, lung effusion, and consolidation models, respectively. In this case, each model's probability of an accurate prediction is the model's accuracy itself. Since the models function independently, we can assume that the events are also independent. We can calculate the probabilities of correct and erroneous predictions using the following formulas.
\[\begin{split} P_{error}(A)&=1-P_{correct}(A)\\ &=1-0.52\\ &=0.48\\ \end{split} \tag{1}\]

\[\begin{split} P_{error}(B)&=1-P_{correct}(B)\\ &=1-0.8\\ &=0.2\\ \end{split} \tag{2}\]

\[\begin{split} P_{error}(C)&=1-P_{correct}(C)\\ &=1-0.4\\ &=0.6\\ \end{split} \tag{3}\]

\[\begin{split} P_{correct}(A\cap B\cap C)&=P_{correct}(A)\times P_{correct}(B)\times P_{correct}(C)\\ &=0.52\times 0.8\times 0.4\\ &=0.1664\\ \end{split} \tag{4}\]

\[\begin{split} P_{error}(A\cup B\cup C)&=P_{error}(A)+P_{error}(B)+P_{error}(C)\\ &\quad-P_{error}(A\cap B)-P_{error}(A\cap C)-P_{error}(B\cap C)\\ &\quad+P_{error}(A\cap B\cap C)\\ &=0.48+0.2+0.6-(0.48\times 0.2)-(0.48\times 0.6)-(0.2\times 0.6)\\ &\quad+(0.48\times 0.2\times 0.6)\\ &=0.8336\\ \end{split} \tag{5}\]

The calculation shows that the probability of an accurate prediction can be as low as \(\approx 16\%\), while the probability of an erroneous prediction can be as high as \(\approx 83\%\), which mathematically explains the system's poor performance. Additionally, this highlights that the performance of the overall system is determined by the individual models' performance, which could explain the system's inferior performance when compared to the Amjoud et al. [11] and Ghadekar et al. [12] studies.

## V Conclusion

We developed an automated CXR report generator based on multi-model deep learning, capable of detecting and classifying abnormalities in CXR images. Unfortunately, our system's overall accuracy was only 20%, indicating that our multi-model technique of using separate models for each abnormality does not, by itself, significantly improve the accuracy and efficiency of radiological report generation. Nonetheless, our method has a number of limitations that require further investigation in future research. One of these limitations is the dataset used in this study, which is unable to capture the heterogeneity of medical images, thereby limiting the models' ability to generalize.
Consequently, the model's underperformance resulted in poor system performance. Future studies should aim to increase the amount of validated radiology data or develop alternative methods or algorithms to detect multiple abnormalities in a single radiology image. ## Acknowledgment This work was the IF5200 Applied Research Project course project of the first, second, and third authors, under the supervision of the fourth and fifth authors. We would like to express our gratitude to the Informatics Master Study Program, School of Electrical Engineering and Informatics, Institut Teknologi Bandung, for providing us access to their high-performance computing server. Our research project would not have been possible without this vital resource. Furthermore, we also extend our appreciation to our colleague, Komang Shary Karismaputri, M.D., a radiology resident from Cipto Mangunkusumo Hospital at the Faculty of Medicine, Universitas Indonesia. Her insightful contributions to this research, particularly in providing valuable insights into the interpretation of radiology images, have greatly enriched the quality and depth of our study.
2306.00107
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao Ma, Xingran Chen, Hanzhi Yin, Chenghao Xiao, Chenghua Lin, Anton Ragni, Emmanouil Benetos, Norbert Gyenge, Roger Dannenberg, Ruibo Liu, Wenhu Chen, Gus Xia, Yemin Shi, Wenhao Huang, Zili Wang, Yike Guo, Jie Fu
2023-05-31T18:27:43Z
http://arxiv.org/abs/2306.00107v4
# MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training

###### Abstract

Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech and audio, its application to music audio has yet to be thoroughly explored. This is primarily due to the distinctive challenges associated with modelling musical knowledge, particularly tonal and pitched characteristics of music. To address this research gap, we propose an acoustic **M**usic und**ER**standing model with large-scale self-supervised **T**raining (**MERT**), which incorporates teacher models to provide pseudo labels in the masked language modelling (MLM) style acoustic pre-training. In our exploration, we identified a superior combination of teacher models, which outperforms conventional speech and audio approaches in terms of performance. This combination includes an acoustic teacher based on Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical teacher based on the Constant-Q Transform (CQT). These teachers effectively guide our student model, a BERT-style transformer encoder, to better model music audio. In addition, we introduce an in-batch noise mixture augmentation to enhance the representation robustness. Furthermore, we explore a wide range of settings to overcome the instability in acoustic language model pre-training, which allows our designed paradigm to scale from 95M to 330M parameters. Experimental results indicate that our model can generalise and perform well on 14 music understanding tasks and attain state-of-the-art (SOTA) overall scores. The code and models are online: [https://github.com/yizhill/MERT](https://github.com/yizhill/MERT).

## 1 Introduction

Pre-trained language models (PLMs) have learned generalisable representations of data without human annotated labels in a self-supervised learning (SSL) style, leading to astonishing performance improvement in natural language processing and related fields (Brown et al., 2020; Fang et al., 2022; Chen et al., 2021). Music is widely recognised as a special language that can be used to communicate across different cultures (Mehr et al., 2019). The internal similarity between music and language as a communication interface lays a promising foundation for adapting PLM-based methods to model music sequences. We argue that the benefit is twofold. First, PLMs can potentially pave the way to _unify_ the modelling of a wide range of music understanding, or, so-called Music Information Retrieval (MIR) tasks including but not limited to music tagging, beat tracking, music transcription, source separation, etc., so that different tasks no longer need detailed models or features. Second, releasing a PLM for acoustic music understanding could re-distribute the musical knowledge rather than the data itself, which saves the high costs of manual annotation and at the same time is not restricted by copyright laws.

Unfortunately, we have yet to see a general-purpose and cost-effective open-source PLM on _acoustic_ music understanding. Most existing studies are designed to solely address music tagging problems (Pons and Serra, 2019; Spijkervet and Burgoyne, 2021; McCallum et al., 2022; Huang et al., 2022; Zhu et al., 2021; Zhao and Guo, 2021). Also, many of these works do not provide open-source codebases or checkpoints for further evaluation. A promising model is JukeMIR (Castellon et al., 2021) built upon the pre-trained model Jukebox (Dhariwal et al., 2020), whose coverage extends beyond music tagging tasks to key detection and emotion regression.
## 1 Introduction Pre-trained language models (PLMs) have learned generalisable representations of data without human annotated labels in a self-supervised learning (SSL) style, leading to astonishing performance improvement in natural language processing and related fields (Brown et al., 2020; Fang et al., 2022; Chen et al., 2021). Music is widely recognised as a special language that can be used to communicate across different cultures (Mehr et al., 2019). The internal similarity between music and language as a communication interface lays a promising foundation for adapting PLM-based methods to model music sequences. We argue that the benefit is twofold. First, PLMs can potentially pave the way to _unify_ the modelling of a wide range of music understanding, or, so-called Music Information Retrieval (MIR) tasks including but not limited to music tagging, beat tracking, music transcription, source separation, etc., so that different tasks no longer need detailed models or features. Second, releasing a PLM for acoustic music understanding could re-distribute the musical knowledge rather than the data itself, which saves the high costs of manual annotation and at the same time is not restricted by copyright laws. Unfortunately, we have yet to see a general-purpose and cost-effective open-source PLM on _acoustic_ music understanding. Most existing studies are designed to solely address music tagging problems (Pons and Serra, 2019; Spijkervet and Burgoyne, 2021; McCallum et al., 2022; Huang et al., 2022; Zhu et al., 2021; Zhao and Guo, 2021). Also, many of these works do not provide open-source codebases or checkpoints for further evaluation. A promising model is JukeMIR (Castellon et al., 2021) built upon the pre-trained model Jukebox (Dhariwal et al., 2020), whose coverage extends beyond music tagging tasks to key detection and emotion regression. 
However, this approach used cumbersome auto-regressive transformer decoders containing billions of parameters to hierarchically model music audio. This resulted in inefficiency for music understanding tasks, as it took weeks to extract features from datasets like MTG (Bogdanov et al., 2019) with a consumer-grade GPU. The aforementioned research gap has urged us to design and open-source a _generalisable_ and _affordable_ pre-trained acoustic music model. In this paper, we propose an acoustic **M**usic und**ER**standing model with large-scale self-supervised **T**raining (**MERT**). MERT inherits a speech SSL paradigm, employing teacher models to generate pseudo targets for sequential audio clips. Specifically, to capture the distinctive pitched and tonal characteristics in music, MERT incorporates a multi-task paradigm to balance the _acoustic_ and _musical_ representation learning as demonstrated in Fig. 1. In the proposed design, a Residual Vector Quantization - Variational Autoencoder (RVQ-VAE) (Defossez et al., 2022) is used as the _acoustic teacher_ to provide discretised acoustic-level summarisation of the music signal. The Constant-Q Transformation (CQT) (Brown, 1991) model is further introduced and regarded as the _music teacher_ for pitch and harmonic inductive bias. Regarding the context dependencies and music hierarchies, as indicated in (Borsos et al., 2022), we leave the task of modelling such high-level and abstract patterns to the stacked layers of self-attentions in the Transformer. We argue that a robust acoustic music understanding model should produce meaningful representations when music is mixed with irrelevant audio as in real-world scenarios. As a result, we introduce an in-batch noise mixup data augmentation using random clips that are efficiently sampled on-the-fly to corrupt the audio recordings, which pushes the model to learn the same semantics in the obscured context. 
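A minimal sketch of what such an in-batch noise mixup might look like is given below. This is an assumption-laden illustration rather than the authors' implementation: the gain factor, the fixed-offset pairing of clips within the batch, and the pure-Python waveform representation are all invented for the example.

```python
import random

def inbatch_mixup(batch, noise_gain=0.3, seed=0):
    """Corrupt each clip in a batch by adding another clip from the
    same batch at reduced gain; training targets stay those of the
    original (uncorrupted) clip. batch: list of equal-length waveforms."""
    rng = random.Random(seed)
    B = len(batch)
    offset = rng.randrange(1, B)  # guarantees a different clip as "noise"
    return [
        [s + noise_gain * n for s, n in zip(clip, batch[(i + offset) % B])]
        for i, clip in enumerate(batch)
    ]

batch = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
mixed = inbatch_mixup(batch, noise_gain=0.5)
```

Sampling the "noise" from the batch itself keeps the augmentation cheap: no extra audio needs to be loaded, and the mixing can run on-the-fly during training.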
Furthermore, we explore a wide range of settings for the transformer and 1D convolution encoder to overcome the instability in acoustic model pre-training, and hence scale up MERT from 95M to 330M model size when blending acoustic and musical knowledge. By scaling up to 330M size, MERT achieves overall state-of-the-art (SOTA) results on various MIR tasks, which demonstrates a strong generalisation ability on various music understanding applications. Last but not least, we analyse multiple pre-training settings considering the teacher and augmentation choices, and share our decision routes in the ablation studies, § 5.2 and § 5.3, which may potentially guide future acoustic music understanding pre-training research.

Figure 1: Illustration of the MERT Pre-training Framework.

To summarise, our contributions are:

* proposing a multi-task style predictive acoustic self-supervised learning paradigm, which achieves SOTA performance on various MIR tasks, including significant yet unexplored tasks for pre-training such as pitch detection, beat tracking and source separation applications;
* exhibiting a broad analysis of the ablation study of the proposed MERT pre-training paradigm;
* exploring robust and stable tricks for acoustic music models to overcome training instability and frequent crashes when scaling up the pre-training on parameter size and training data;
* providing an open-source, generalisable and affordable acoustic music pre-trained model, which addresses the needs of both industry and research communities.

## 2 Related Work

**PLMs for Acoustic Music** The field of music information retrieval (MIR) has faced limitations in data access due to the costs associated with annotating music audio and country-specific copyright laws (Chen et al., 2019; Castellon et al., 2021).
To address this challenge, pre-trained language models (PLMs) for acoustic music have been proposed to provide reusable learned representations, enabling transfer learning for various downstream MIR tasks without the need for extensive data labelling (Castellon et al., 2021). However, current acoustic music pre-trained models still have room for improvement in terms of providing open-source, generalisable, and lightweight learned representations suitable for both industrial and research applications (McCallum et al., 2022).

Existing acoustic music pre-trained models primarily focus on tagging tasks and rely on supervised tagging labels for pre-training (Pons and Serra, 2019; Spijkervet and Burgoyne, 2021; McCallum et al., 2022; Huang et al., 2022). While some studies have explored contrastive learning for acoustic music pre-training, they face limitations in training data availability and model size, hampering their performance improvements (Choi et al., 2017; Li et al., 2022). Additionally, several models trained on inaccessible datasets or without publicly available codes and model weights make it difficult to reproduce or extend their approaches (McCallum et al., 2022; Castellon et al., 2021; Li et al., 2022; Zhu et al., 2021; Zhao and Guo, 2021). Although some general-purpose audio representation models show potential for music audio representation learning, their performance is mostly evaluated on limited MIR downstream tasks (Saeed et al., 2021; Borsos et al., 2022; Wang et al., 2023). This lack of comprehensive evaluation hampers further studies and inhibits a thorough understanding of their capabilities and limitations.

**Self-Supervised Speech Processing** Music and speech processing are closely related (Jasmin et al., 2020) since they are usually processed with the same audio data formats.
Additionally, both acoustic music and speech processing models need to deal with the cocktail party problem (Brown and Bidelman, 2022; Petermann et al., 2022), since good source separation capabilities help both with separating noises and background sounds from speech and with processing polyphonic music audio. These common grounds between music and speech processing inspire us to adopt SOTA speech pre-trained models tailored specifically to music audio processing tasks. Existing research work targeting general-purpose audio representations (Saeed et al., 2021; Borsos et al., 2022; Wang et al., 2023) has verified that self-supervised speech processing models can be extended beyond speech by adapting them to downstream entry-level music tasks, including generating mono piano music and music reconstruction.

**Audio Representation with Language Modelling** Mask strategy-based large-scale language models have been applied to various other applications (Lample and Charton, 2019; Chen et al., 2021, 2021, 2022), while remaining under-explored in acoustic music understanding. For audio, Dhariwal et al. (2020) investigate generating hierarchical tokens which can be further employed to reconstruct music, inspiring subsequent research to understand and generate acoustic music based on discrete tokens extracted from continuous features. Baevski et al. (2019) introduce a pre-trained VQ-VAE (Baevski et al., 2019) to provide prediction targets for conducting speech representation learning with MLM. While introducing k-means to provide discrete token codebooks and pre-training the model to detect sound units, Hsu et al. (2021) claim that a better teacher model in SSL could lead to better downstream task performance. Additionally, recent speech processing pre-trained models (Borsos et al., 2022; Wang et al., 2023) propose to train or adopt separately trained codecs (Zeghidour et al., 2021; Defossez et al., 2022) for discrete token extraction.
Based on the conclusions of previous studies, the recently released RVQ-VAEs (Zeghidour et al., 2021; Defossez et al., 2022), which achieve good results in music reconstruction, could be adopted as teacher models for music understanding pre-training, providing acoustic information guidance. Yet some of the unique aspects of music processing, such as timbre and harmony, remain unexplored. We thus propose to incorporate a corresponding musical teacher model in MERT to fill this gap.

## 3 Methodology

This section introduces the pre-training paradigm and architecture of our models. It includes prediction of targets from acoustic teachers, such as k-means clusters or deep music features, and reconstruction of targets from music teachers, such as the CQT spectrum, both based on the well-established masked language model (MLM) paradigm.

### Pre-training with MLM

Supervised learning requires a labelled dataset \(\mathcal{D}_{t}=\{x_{i}^{(t)},y_{i}^{(t)}\}_{i=1}^{N}\). Here, \(N\) is the number of data samples, \(x_{i}^{(t)}\) is the \(i^{th}\) data sample in the dataset, and \(y_{i}^{(t)}\) is the corresponding label. From \(\mathcal{D}_{t}\), we can train a machine learning algorithm \(f_{\theta}\left(\cdot\right)\) parameterised with \(\theta\) that makes label predictions on each data sample. Unsupervised learning, in contrast, learns an algorithm based on an unlabelled dataset \(\mathcal{D}=\{x_{i}\}_{i=1}^{M}\), with SSL being a specific type of this class. For each data sample \(x_{i}\), SSL derives a new data sample \(x_{i}^{\prime}\) with a pseudo label \(y_{i}^{\prime}\). The training process is to minimise the loss between each pseudo label \(y_{i}^{\prime}\) and the prediction based on the new data \(\hat{y}_{i}=f_{\theta}(x_{i}^{\prime})\), as denoted in Eq. 1.

\[\theta^{*}=arg\,min_{\theta}\sum_{x_{i}^{(t)}\in D}\mathcal{L}\left(f_{\theta}(x_{i}^{\prime(t)}),y_{i}^{\prime(t)}\right). \tag{1}\]

MLM is a famous example of pseudo-label generation.
Let \(x_{i}=\left[x_{i}^{(1)},x_{i}^{(2)},\cdots,x_{i}^{(L)}\right]\) be the \(i^{th}\) data sample in a speech or language dataset with length \(L\), and \(M\subset[L]\) is a subset of indices randomly chosen from \(1\) to \(L\). Then, the new data is defined by the following equation \[x_{i}^{\prime}=\left[\mathbf{1}_{[L]\backslash M}(1)\cdot x_{i}^{(1)},\mathbf{ 1}_{[L]\backslash M}(2)\cdot x_{i}^{(2)},\cdots,\mathbf{1}_{[L]\backslash M} (L)\cdot x_{i}^{(L)}\right] \tag{2}\] where \(\mathbf{1}_{[L]\backslash M}(x)\) denotes the indicator function, that is, \(\mathbf{1}_{[L]\backslash M}(x)=1\) if and only if \(x\) is outside the masked indices set \(M\). The pseudo-label that needs to be learned is typically \(y_{i}^{\prime}=x_{i}-x_{i}^{\prime}\), i.e., the masked data. However, reconstructing masked data \(y^{\prime}\) for raw audio tasks as pseudo-label is hard to train. HuBERT (Vaswani et al., 2017; Hsu et al., 2021) uses a dimension-reduced feature \(z^{\prime}\) derived from \(y^{\prime}\) with phonetic acoustic information, which forms the design basis of our pre-training strategy. As a speech SSL system, HuBERT utilises offline clustering to acquire pseudo labels for a BERT-like prediction loss. Specifically, it uses Mel-frequency cepstral coefficients (MFCCs), a widely-used traditional feature in speech-related tasks, as acoustic features for clustering. The obtained results are then utilised as pseudo labels in the first iteration of pre-training. It then uses the learned representation for clustering to get a pseudo label for the second iteration pre-training. Such a pseudo label includes acoustic information in human speech and can be aligned to phonemes. 
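The masking of Eq. (2), together with the pseudo-label \(y^{\prime}=x-x^{\prime}\), can be sketched in a few lines (0-indexed here, whereas the equation indexes positions from 1; the toy sequence is invented for illustration):

```python
# Zero out the entries whose indices fall in the masked set M; the
# pseudo-label is the difference y' = x - x', i.e. the masked content.

def mask_sequence(x, masked_indices):
    M = set(masked_indices)
    x_prime = [0 if t in M else v for t, v in enumerate(x)]  # Eq. (2)
    y_prime = [v - m for v, m in zip(x, x_prime)]            # y' = x - x'
    return x_prime, y_prime

x = [5, 7, 2, 9, 4]
x_prime, y_prime = mask_sequence(x, {1, 3})
# x_prime == [5, 0, 2, 0, 4]; y_prime == [0, 7, 0, 9, 0]
```

In practice the model is not asked to reconstruct the raw masked audio; as the text notes, HuBERT-style training predicts dimension-reduced cluster targets at the masked positions instead.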
The loss function of HuBERT is formulated as follows: \[\mathcal{L}_{H}(f;x,M,Z)=\sum_{t\in M}\log p_{f}(z_{t}\mid x^{\prime},t) \tag{3}\] where \(\log p_{f}(\cdot\mid x^{\prime},t)\) is the log-likelihood on the clustering results, given the masked input \(x^{\prime}\) and position \(t\), derived from \(f\); the likelihood \(p_{f}\) follows the Noise Contrastive Estimation (NCE) formulation, defined as \[p_{f}(c\mid x^{\prime},t)=\frac{\exp(\text{sim}(T(o_{t}),e_{c})/\tau)}{\sum_{c ^{\prime}=1}^{C}\exp(\text{sim}(T(o_{t}),e_{c^{\prime}})/\tau)}, \tag{4}\] Here, \(c\in[C]\) is a codeword of the clustering results and \(e_{c}\) is its embedding; sim is the cosine similarity; \(o_{t}\) is the output of the model at timestep \(t\); and \(T(o_{t})\) is a linear transformation of \(o_{t}\) that matches the dimension of \(e_{c}\). The temperature \(\tau\), set to \(0.1\) in HuBERT, scales the logits. The linear transformation \(T\), the model producing the outputs, and the embeddings of all clustering results are learnable. Overall, we use the same model as HuBERT but introduce several notable variations tailored to music. Specifically, we design better hidden units \(z\) as pseudo labels for pre-training with multiple music acoustic features. In addition, we add a reconstruction loss on musical features and employ additional music augmentation tricks.

### Modelling Acoustic Information

The MFCC features are only good at modelling acoustic and timbre information for single-pitch signals; therefore, the clustering results do not provide much timbre information for music recordings. We propose two candidate acoustic teachers: one based on traditional features and the other based on deep learning. The first uses k-means on the log-Mel spectrum and on Chroma features, for timbre and harmonic acoustic information, respectively.
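The likelihood in Eq. (4) is a temperature-scaled softmax over cosine similarities. A minimal sketch with toy vectors (\(\tau=0.1\) as in HuBERT; all array values and names are illustrative, not the paper's code):

```python
import math

# Sketch of Eq. (4): softmax over cosine similarities between the projected
# model output T(o_t) and the codeword embeddings e_c, scaled by tau.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def p_f(t_o, codebook, tau=0.1):
    logits = [cosine(t_o, e_c) / tau for e_c in codebook]
    m = max(logits)                          # stabilise the softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

codebook = [[1.0, 0.0], [0.0, 1.0]]          # toy codeword embeddings e_c
probs = p_f([0.9, 0.1], codebook)
assert abs(sum(probs) - 1.0) < 1e-9
assert probs[0] > probs[1]                   # nearer codeword gets more mass
```

The small temperature sharpens the distribution, so most probability mass lands on the closest codeword.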
In the case of music, each frame contains more information than in speech, necessitating a larger number of classes for k-means clustering. Since the complexity of k-means is linear in the number of centroids (clustering centres), a single large clustering would be time-consuming for music features. To tackle this problem, we employ 300-means for the log-Mel spectrum with dimension 229 and 200-means for Chroma features with dimension 264, resulting in a total of 60,000 joint classes (200 Chroma centroids multiplied by 300 log-Mel centroids). Despite the increased number of classes, the computational complexity remains comparable to that of HuBERT. The disadvantage of k-means is that it is difficult to scale up to larger numbers of classes and larger datasets, and its results are sensitive to initialisation. The second choice for our acoustic teacher is EnCodec (Defossez et al., 2022), a recent learnable feature from an 8-layer residual Vector Quantized-Variational AutoEncoder (RVQ-VAE). Each acoustic feature, denoted \(z_{enc}\in[C]^{L\times 8}\), is a 2-dimensional auditory code matrix, where \(L\) is the length of the recording. The row vector \(z_{enc}[t,:]\) contains the results of the \(8\) different quantisers for frame \(t\), and the column vector \(z_{enc}[:,j]\) contains the results from the \(j^{th}\) codebook over the audio sequence, where \(j\in\{1,\dots,8\}\). EnCodec converts 24 kHz input waveforms into 8 discrete code streams at 75 Hz, a 320-fold temporal reduction, and each codebook contains \(1024\) entries. In this setting, for each 5-second waveform, the discrete acoustic feature is a matrix with \(375\times 8\) entries, representing 375 frames (75 Hz \(\times\) 5 s) and 8 deep acoustic codes. From these codes, the EnCodec decoder can reconstruct the waveform at 24 kHz with faithful timbre information.
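The joint label space induced by the two k-means teachers, and the EnCodec frame arithmetic quoted above, can be checked in a few lines (`joint_label` and the constant names are our own illustrative choices):

```python
# Two independent k-means teachers induce a joint label space:
# 300 log-Mel centroids x 200 Chroma centroids = 60,000 classes, while
# distance computations stay at only 300 + 200 per frame.

N_MEL, N_CHROMA = 300, 200

def joint_label(mel_cluster, chroma_cluster):
    assert 0 <= mel_cluster < N_MEL and 0 <= chroma_cluster < N_CHROMA
    return mel_cluster * N_CHROMA + chroma_cluster

print(N_MEL * N_CHROMA)       # 60000 joint classes
print(joint_label(299, 199))  # 59999, the largest joint label

# EnCodec frame arithmetic: 24 kHz input, 320-fold reduction -> 75 Hz codes.
SR, HOP, SECONDS, CODEBOOKS = 24_000, 320, 5, 8
frames = SR // HOP * SECONDS
print(frames, CODEBOOKS)      # 375 frames x 8 codebooks per 5-second clip
```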
### Modelling Musical Information

Apart from acoustic information, we add a reconstruction loss on the Constant-Q transform (CQT) spectrogram to emphasise pitch-level information. The CQT is a frequency transform widely used in MIR tasks such as pitch detection, chord recognition, and music transcription. It is similar to the Fourier transform, but its bin widths are proportional to frequency rather than equal, giving each octave the same number of bins and thus a better time-frequency trade-off for music audio, where multiple pitches occur across multiple octaves. We utilise a mean squared error (MSE) loss to reconstruct the CQT spectrum \(z_{cqt}\) from the masked input audio \(x^{\prime}\). That is, \[\mathcal{L}_{CQT}(f_{cqt};x,M,\mathbf{z}_{cqt})=\sum_{t\in[L]}\left\|z_{cqt, t}-f_{cqt}(x^{\prime})_{t}\right\|_{2} \tag{5}\] The final loss function \(\mathcal{L}\) is a linear combination of the acoustic loss \(\mathcal{L}_{H}\) and the musical-pitch loss \(\mathcal{L}_{CQT}\): \[\mathcal{L}=\alpha\cdot\mathcal{L}_{H}+\mathcal{L}_{CQT} \tag{6}\]

### Robust Representation Learning

We introduce "in-batch noise mixup" for music SSL. Instead of using the original audio, mixup augmentation adds shorter audio excerpts, scaled by a certain ratio, to an audio clip to form an augmented sample during pre-training. We randomly sample audio segments from the same batch and add them to the audio at random positions with some probability. Theoretically, sampling from the whole training dataset would provide more randomness and thus benefit representation robustness, but we narrow the sampling pool to the same batch given our limited computational resources. The mixup enables the learning of more robust musical representations, forcing the model to focus on the useful musical source and to ignore the noise.
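A hedged sketch of the in-batch mixup just described; the paper's exact recipe is in its Appendix A, so the mixing ratio, excerpt length, and names here are illustrative assumptions:

```python
import random

# With probability p, add a scaled excerpt from another clip in the same
# batch at a random offset. ratio/excerpt_len are our own toy choices.

def in_batch_mixup(batch, p=0.5, ratio=0.3, rng=random.Random(0)):
    mixed = []
    for i, clip in enumerate(batch):
        clip = list(clip)
        if len(batch) > 1 and rng.random() < p:
            j = rng.choice([k for k in range(len(batch)) if k != i])
            excerpt_len = max(1, len(batch[j]) // 2)    # a shorter excerpt
            start_src = rng.randrange(len(batch[j]) - excerpt_len + 1)
            excerpt = batch[j][start_src:start_src + excerpt_len]
            start_dst = rng.randrange(len(clip) - excerpt_len + 1)
            for k, v in enumerate(excerpt):
                clip[start_dst + k] += ratio * v        # additive "noise"
        mixed.append(clip)
    return mixed

batch = [[1.0] * 8, [2.0] * 8, [3.0] * 8]   # toy 8-sample waveforms
out = in_batch_mixup(batch)
assert len(out) == len(batch) and all(len(c) == 8 for c in out)
```

Restricting the donor pool to the current batch, as the paper does, avoids any extra data loading at augmentation time.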
A pseudocode implementation can be found in Appendix A.

## 4 Experiments

### Evaluation Protocol

**Downstream Tasks** We evaluate our method and compare it with baseline models on 14 downstream tasks, including frame-level classification or regression tasks such as music tagging, key detection, genre classification, emotion score regression, instrument classification, pitch classification, vocal technique detection, and singer identification, and sequential tasks such as beat tracking and source separation. For music tagging, we use the MagnaTagATune (MTT) (Law et al., 2009) and MTG-Jamendo (Bogdanov et al., 2019) datasets, averaging multiple embeddings for long audio recordings. Key detection uses the Giantsteps and Giantsteps-MTG-keys datasets (Knees et al., 2015; Korzeniowski and Widmer, 2017), with a refined accuracy metric. Genre classification uses the GTZAN (Tzanetakis and Cook, 2002) and MTG-Genre datasets, with ROC and average precision (AP) metrics. Emotion score regression is conducted on the Emomusic dataset (Soleymani et al., 2013), with the \(r^{2}\) of arousal and valence as evaluation metrics. For instrument classification, we use the NSynth (Engel et al., 2017) and MTG-Instrument datasets, with ROC and AP metrics. The NSynth dataset is also used for pitch classification, with accuracy as the evaluation metric. Vocal technique detection and singer identification use the VocalSet dataset (Wilkins et al., 2018), with accuracy as the evaluation metric. Beat tracking is conducted on the GTZAN Rhythm dataset (Marchand and Peeters, 2015), with the F-measure as the evaluation metric. Finally, source separation uses the MUSDB18 dataset (Rafii et al., 2017), with the Source-to-Distortion Ratio (SDR) as the evaluation metric. Full descriptions of the datasets and tasks can be found in Appendix B.1.

**Probing Protocol** Following Castellon et al. (2021); Yang et al.
(2021), we restrict the testing protocol to probing rather than fine-tuning, i.e., freezing the backbone pre-trained models as deep feature extractors and training only a simple downstream structure, typically a multilayer perceptron (MLP) for frame-level tasks. For a fair comparison, we also limit the hyper-parameter search space. For more details, please refer to Appendix B.2.

### Baseline Methods

We select models pre-trained with various paradigms from both the music and speech domains as baselines, to provide a comprehensive evaluation of the generalisation ability of the designs. MusiCNN (Pons and Serra, 2019) is selected as a representative supervised method, pre-trained with supervision from the Million Song Dataset tags (Bertin-Mahieux et al., 2011). CLMR (Spijkervet and Burgoyne, 2021) and MULE (McCallum et al., 2022) are selected as representatives of SOTA music representations trained with contrastive learning. Jukebox (Dhariwal et al., 2020) and the corresponding transfer learning method, JukeMIR (Castellon et al., 2021), are selected as representatives of transfer learning from a large-scale generative pre-trained musical representation. We also select the recently proposed strong speech SSL models HuBERT (Hsu et al., 2021) and data2vec (Baevski et al., 2022) as baselines, since they share the same MLM pre-training paradigm with MERT. While HuBERT reconstructs the masked discrete tokens provided by the k-means teacher, data2vec uses a student model updated with an exponential moving average to produce continuous representations for MLM prediction. To reveal the effectiveness of the pre-training paradigm itself rather than the training data distribution, we re-train the speech models and denote them HuBERT\({}^{\text{music}}\) and data2vec\({}^{\text{music}}\). Additionally, we present the advanced SOTA for each task, including results from both supervised and self-supervised methods.
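The probing protocol above (a frozen backbone used only as a feature extractor, with a small trainable head on top) can be sketched as follows; `backbone`, the linear probe, and the data are toy stand-ins, not the paper's models or tasks:

```python
# Minimal sketch of probing: the backbone is frozen and only the head's
# parameters w are updated, here by plain SGD on a squared-error loss.

def backbone(x):                      # frozen: its behaviour never changes
    return [sum(x) / len(x), max(x)]

w = [0.0, 0.0]                        # the only trainable parameters

def probe(feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

data = [([0.0, 0.2], 0.0), ([0.8, 1.0], 1.0)]   # toy (input, label) pairs
lr = 0.1
for _ in range(200):
    for x, y in data:
        feats = backbone(x)
        err = probe(feats) - y
        for i in range(len(w)):
            w[i] -= lr * err * feats[i]

assert probe(backbone([0.8, 1.0])) > probe(backbone([0.0, 0.2]))
```

Because the backbone never receives gradients, probing scores measure the quality of the pre-trained representation itself rather than the capacity of the downstream head.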
### Implementation Details

**Training Settings** We deploy the proposed SSL architecture to train models of various sizes with matched scales of data. We mined 160K hours of music recordings from the Internet to build a large-scale music dataset. The base (95M) models are trained on a 1K-hour subset, whereas the whole dataset is used for the large (330M) model. We additionally provide a special edition of the base model, MERT-95M-public, trained on an entirely publicly available music dataset, Music4All (Santana et al., 2020), with a data size of 910 hours. In self-attention, the computational complexity scales quadratically with the sequence length; under limited computational resources, there is therefore a trade-off between batch size and sequence length. In our preliminary experiments, we observed that increasing the batch size provides greater performance improvements than extending the context length. To keep pre-training computation manageable, we randomly truncate audio clips into 5-second segments, a duration roughly corresponding to a 2-bar context in music. It is worth noting that our model uses a convolutional relative positional embedding, similar to the approach introduced by Baevski et al. (2020) in wav2vec 2.0, enabling it to operate effectively in longer contexts if required. The effective batch sizes for the base and large models correspond to \(1.5\) and \(5.5\) hours of audio, and their learning rates are set to \(5\mathrm{e}{-4}\) and \(1.5\mathrm{e}{-3}\), respectively. Pre-training of our models was carried out with the fairseq3 framework. The base and large models are trained on 64 A100-40GB GPUs with half-precision settings.
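The effective batch sizes are quoted in hours of audio; with 5-second training segments this implies the following clip counts per batch (our own arithmetic, not a number stated in the paper):

```python
# Convert an "hours of audio per batch" figure into a count of 5-second
# training clips.

CLIP_SECONDS = 5

def clips_per_batch(hours):
    return int(hours * 3600 // CLIP_SECONDS)

print(clips_per_batch(1.5))   # 1080 clips for the base model
print(clips_per_batch(5.5))   # 3960 clips for the large model
```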
Footnote 3: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq)

**Training Stability** Empirically, we observed that when scaling up acoustic encoder-only models, they exhibit a higher susceptibility to training instability than models of similar size in natural language processing and computer vision. Such instability can decrease performance or, in extreme cases, crash model training. When scaling up to the MERT-330M model, we encountered notable instability manifested by constant gradient clipping and sporadic spikes in the losses. This instability degraded the accuracy of the masked language modelling (MLM) predictions and decreased performance on downstream tasks. Attempts to resume training from previously saved checkpoints and data batches did not mitigate the instability. Furthermore, reducing the learning rate in this context not only failed to address the issue but also degraded performance and hindered convergence. We also explored DeepNorm (Wang et al., 2022), a seemingly powerful method for stabilising acoustic language model pre-training, but found it ineffective in this particular scenario. Incorporating attention relaxation techniques (Chen et al., 2021) proved beneficial in addressing the instability challenges we encountered. In addition, we found that transitioning from post-layer normalisation (Post-LN) to pre-layer normalisation (Pre-LN) alleviated the instability and allowed training to continue. More information can be found in Appendix B.3.
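The Post-LN vs. Pre-LN distinction above amounts to where the layer normalisation sits relative to the residual connection. A minimal structural sketch (a toy `layer_norm` and `sublayer`, not the actual transformer code):

```python
# Post-LN normalises *after* the residual sum; Pre-LN normalises the input
# to the sublayer and leaves the residual path unnormalised, which
# empirically eases optimisation at scale.

def layer_norm(x):
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return [(v - mu) / (var + 1e-5) ** 0.5 for v in x]

def post_ln_block(x, sublayer):
    return layer_norm([a + b for a, b in zip(x, sublayer(x))])

def pre_ln_block(x, sublayer):
    return [a + b for a, b in zip(x, sublayer(layer_norm(x)))]

x = [1.0, 2.0, 3.0, 4.0]
double = lambda v: [2 * w for w in v]       # stand-in for attention/FFN
assert len(post_ln_block(x, double)) == len(pre_ln_block(x, double)) == 4
```

In Pre-LN, the identity path from input to output is never rescaled, which is the property usually credited with its more stable gradients.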
\begin{table}[The tabular content of Table 1 was garbled during text extraction. It reports the performance of MERT models and baselines on MTT tagging (ROC/AP), GS key detection, GTZAN genre classification and rhythm (beat tracking), EMO emotion regression, NSynth instrument and pitch classification, and VocalSet vocal technique and singer identification.]\end{table} Table 1:
Experimental Performances of MERT and Baselines on Downstream Tasks (1/2). The baselines are grouped by supervised and unsupervised pre-training paradigms. The superscripts denote the category of the acoustic teacher used by MERT models. "public" refers to the MERT model trained with only the open-source dataset. Results marked with a star* are taken from the references.

## 5 Results Analysis

### Performance & Efficiency of MERT Models

The results on all downstream tasks are provided in Tab. 1 and Tab. 2. As suggested by the average scores in Tab. 2, MERT-330M\({}^{\text{RVQ-VAE}}\) matches the combined score of the previous SOTAs and becomes the new SOTA on 4 metrics, even when supervised methods are included in the comparison. It is also noteworthy that the smaller MERT-95M models achieve close performance while using fewer parameters. Generally, MERT models perform well on tasks focusing on local-level musical information, such as beat, pitch, and local timbre (e.g. singer information), and remain competitive on tasks such as music tagging, key detection, and genre classification, which require more global-level information. This indicates that blending acoustic and musical teachers provides comprehensive guidance for understanding music recordings, despite pre-training with only a 5-second context length. Nevertheless, the performance of our models on tasks requiring more global musical information is close to the state of the art, suggesting that MERT models recognise global patterns well, thanks to the relative positional embeddings and the contextualisation of the transformer network. Further work could focus on modelling longer contexts. In addition, our models demonstrate good results with limited data, even though public data may lack diversity. For one thing, MERT-95M-public and MERT-95M are both trained on a \(\sim\)1k-hour dataset.
Both have results comparable to the SOTA and to MERT-330M, showing that MERT can converge effectively and learn generalisable music representations with limited training data. For another, MERT-95M-public is trained on Music4All (Santana et al., 2020), a 910-hour public music dataset consisting mainly of pop music and lacking diversity in musical style. The experimental results show performance comparable to the other settings; in particular, it does not differ significantly from MERT-95M except for genre classification on GTZAN. This suggests our model can acquire a powerful representation even from a dataset that is not fully representative. Moreover, we evaluated the MERT\({}^{\text{RVQ-VAE}}\) model at parameter sizes of 95M and 330M, since using the EnCodec feature enables us to scale up the model compared to the k-means feature. The results show that increasing the model size to 330M yields improved performance, or a very small difference (less than 0.1%), on most tasks besides beat tracking. More importantly, the lightweight sizes of the MERT models open up new possibilities for transferring one general understanding model to large-scale classification or sequence-labelling MIR tasks. MERT series models achieve better or comparable performance with only \(1.9\%\) (95M) and \(6.6\%\) (330M) of the parameters of the self-supervised baseline Jukebox-5B (Dhariwal et al., 2020). Even though our evaluation uses the probing setting, most models could not be trained on sequence-labelling tasks such as beat tracking or source separation at affordable computational cost, except for MERT and baseline models with similar architectures (Hsu et al., 2021; Baevski et al., 2022).

### The Effectiveness of Acoustic & Musical Teacher

As demonstrated in Tab.
3, we explore optimal combinations and selections of the teacher models in the MERT paradigm on a subset of downstream tasks, including auto-tagging (MTT), key detection (GS), genre classification (GTZAN), and emotion recognition (EMO). \begin{table}[The tabular content of Table 2 was garbled during text extraction. It reports ROC and AP on the MTG-Instrument, MTG-MoodTheme, MTG-Genre, and MTG-Top50 subsets and SDR on MUSDB18 source separation for the baselines and MERT variants, together with a previous-SOTA row and an average score per model.]\end{table} Table 2: Experimental Performances of MERT and Baselines on Downstream Tasks (2/2). Average scores across _tasks_ are calculated on the SOTA results and on models applicable to all the tasks. We reproduce the original HuBERT (Hsu et al., 2021) setting on music datasets with only the acoustic teacher K-means\({}^{\text{MFCC}}_{1}\), as well as a second-stage teacher K-means\({}^{\text{MFCC}}_{2}\) trained on features produced by the first-stage HuBERT model, similar to DeepCluster (Caron et al., 2018). We observe that such models perform poorly on the key detection and emotion recognition tasks, even when we increase the number of K-means target classes from \(100\) to \(2000\). As the re-clustering K-means does not bring significant improvement in the second-stage pre-training, we stick to ordinary one-stage pre-training to study the influence of the teachers at lower computational cost. Given that key information is highly related to the pitch classes of the audio, we then introduce such an inductive bias by providing the K-means acoustic teacher with both log-Mel and Chroma features, denoted K-means\({}^{\text{Logmel+Chroma}}\). The additional pitch information brought indirectly by the Chroma feature immediately endows the model with a certain level of key detection ability, raising the accuracy from \(15.6\) to \(55.1\) while keeping or increasing performance on the other tasks. This confirms that the potential of transformer models can be better exploited by introducing extra pseudo prediction targets in the MLM scheme. Following this intuition, it can further be assumed that designing a proper multi-task pre-training paradigm can guide the model to produce more general representations for various music understanding tasks.
We thus propose leveraging the CQT musical teacher to introduce a harmonic inductive bias during pre-training. Compared to models trained with only the acoustic teacher K-means\({}^{\text{MFCC}}\) or K-means\({}^{\text{Logmel+Chroma}}\), MERT models trained with the newly proposed CQT musical teacher, which is naturally more aligned to music audio, achieve significant performance gains not only on the key detection task but also on tasks requiring high-level information, such as genre classification and emotion recognition. However, given that K-means models are difficult to scale up on large-scale datasets due to memory and computational requirements, we use the RVQ-VAE model EnCodec (Defossez et al., 2022) as the final version of our acoustic teacher, avoiding the search for a hard-to-determine hyper-parameter \(K\) for music audio. EnCodec can intuitively provide more comprehensive acoustic information, since the audio can be largely recovered from the acoustic prediction targets, i.e. the intermediate discrete codes produced by the EnCodec encoder, by the neural decoder of the RVQ-VAE. We observe that leveraging only the top or only the bottom layer of the residual codebooks (1024 codewords each) already provides abundant information in pre-training, while utilising all layers of the codebooks allows the student models to learn more complete acoustic patterns. While the strategy of randomly accessing one of the codebooks per batch alleviates GPU memory use and leads to similar performance compared to using all of them at once, the setting of predicting all 8 codebooks together is adopted in the final version of MERT and further used in the 330M scale-up pre-training due to faster convergence.
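The two codebook-target strategies just compared can be sketched over the \(375\times 8\) code matrix from Section 3 (function names are our own; the real implementation operates on batched tensors):

```python
import random

# "All codebooks": predict every RVQ codebook at each step.
# "Random codebook": pick one codebook column per batch to save memory.

FRAMES, CODEBOOKS = 375, 8

def targets_all(z):                  # z: FRAMES x CODEBOOKS code matrix
    return z

def targets_random(z, rng=random.Random(0)):
    j = rng.randrange(CODEBOOKS)     # one codebook column for this batch
    return [row[j] for row in z]

z = [[(t + j) % 1024 for j in range(CODEBOOKS)] for t in range(FRAMES)]
assert len(targets_all(z)) == FRAMES and len(targets_all(z)[0]) == CODEBOOKS
assert len(targets_random(z)) == FRAMES
```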
By replacing the acoustic teacher with the RVQ-VAE, MERT achieves an average score of \(66.9\), similar to the \(67.3\) of the K-means\({}^{\text{Logmel+Chroma}}\) version, while leaving open the possibility of scaling up with more training data. \begin{table}[The tabular content of Table 3 was garbled and truncated during text extraction. It compares acoustic teachers (K-means on MFCC with 100, 500, and 2000 target classes, K-means on Logmel+Chroma, and the RVQ-VAE) with and without the CQT musical teacher on the MTT, GS, GTZAN, and EMO tasks.]\end{table}

### Ablation Study on Loss Weight & Mixup Probability

We conducted a hyperparameter search to determine the optimal weight for the musical loss applied to masked audio in the k-means setting. Additionally, we investigated the impact of in-batch noise mixup augmentation on each training sample. We applied the same weight and mixup probability to both the EnCodec setting and the large model setting. In Table 4, we present the results of our pre-training setting ablation study, which uses the same evaluation setting as § 5.2. The table includes various parameters and evaluation metrics for different acoustic teacher models and target classes. We further explored the influence of different musical loss weights for the 95M K-means model with log-Mel and Chroma features. Adjusting the musical loss weight away from \(1\) decreased performance on all four tasks, so a weight of \(1\) yielded the best performance for the base model.
Additionally, we varied the in-batch mixup probability to evaluate its effect on model performance. We found that mixup gives worse results for \(\mathtt{MERT^{K\text{-}means}}\) but better performance for \(\mathtt{MERT^{RVQ\text{-}VAE}}\); based on the average performance score, we adopted a probability of \(0.5\). This phenomenon deserves further attention. Overall, our ablation study provides valuable insights into the impact of different settings and parameters on model performance. These findings can inform the development of more effective and efficient models in the domain of acoustic language processing.

## 6 Conclusion

In conclusion, our work underscores the potential of SSL for modelling raw music audio and the efficacy of our approach, MERT, in pre-training sizeable models. We present a novel paradigm that integrates RVQ-VAE and CQT teacher models, providing a blend of the acoustic and musical information needed for MLM-based pre-training for music understanding. This integration, bolstered by in-batch noise mixup data augmentation and Pre-LN, enables the learning of robust music representations with improved training stability. The MERT models surpass previous SSL baselines, achieving SOTA or comparable results across a wide range of MIR tasks while using significantly fewer parameters. We anticipate that our method and the forthcoming public release of our code and models will catalyse further research into the application of SSL to music audio, thereby broadening the scope and depth of human understanding of music.

## Limitation and Future Work

Our models are trained using only 5-second audio signals, due to constraints in computational resources and the extended length of audio signals.
Despite these models being capable of handling longer sequences thanks to relative positional embedding, this approach could limit their performance on tasks requiring a comprehensive understanding of extended musical contexts. We plan to continue training our models on longer contexts once we gain access to more computing resources. Moreover, although we propose several techniques to improve the training stability of acoustic pre-training, we still suffer from gradient explosion issues in half-precision training for settings with larger batch sizes and model sizes. In addition, we observe an inverse-scaling effect on specific tasks when scaling up to MERT-330M, which indicates that our design could be further improved by stabilising the training. \begin{table}[The tabular content of Table 4 was garbled during text extraction. It reports MTT, GS, GTZAN, and EMO scores for musical loss weights of 1, 2, and 5 and in-batch mixup probabilities of 0.25 and 0.5, under the 95M K-means and RVQ-VAE acoustic teachers.]
\end{table} Table 4: Evaluation Results for Pre-training Setting Ablation Study.

## Acknowledgement

This work was supported by the National Key Research and Development Program of China under Grant 2021ZD0111100. Yizhi Li is fully funded by an industrial PhD studentship (Grant Number: 171362) from the University of Sheffield, UK. Yinghao Ma is a research student at the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, supported by UK Research and Innovation (Grant Number: EP/S022694/1). We acknowledge IT Services at The University of Sheffield for the provision of services for High Performance Computing. We would also like to express great appreciation for the suggestions from Dr. Chris Donahue.
2309.10591
PENELLOPE V. The magnetospheric structure and the accretion variability of the classical T Tauri star HM Lup
HM Lup is a young M-type star that accretes material from a circumstellar disk through a magnetosphere. Our aim is to study the inner disk structure of HM Lup and to characterize its variability. We used spectroscopic data from HST/STIS, X-Shooter, and ESPRESSO taken in the framework of the ULLYSES and PENELLOPE programs, together with photometric data from TESS and AAVSO. The 2021 TESS light curve shows variability typical for young stellar objects of the "accretion burster" type. The spectra cover the temporal evolution of the main burst in the 2021 TESS light curve. We compared the strength and morphology of emission lines from different species and ionization stages. We determined the mass accretion rate from selected emission lines and from the UV continuum excess emission at different epochs, and we examined its relation to the photometric light curves. The emission lines in the optical spectrum of HM Lup delineate a temperature stratification along the accretion flow. While the wings of the H I and He I lines originate near the star, the lines of species such as Na I, Mg I, Ca I, Ca II, Fe I, and Fe II are formed in an outer and colder region. The shape and periodicity of the 2019 and 2021 TESS light curves, when qualitatively compared to predictions from magnetohydrodynamic models, suggest that HM Lup was in a regime of unstable ordered accretion during the 2021 TESS observation due to an increase in the accretion rate. Although HM Lup is not an extreme accretor, it shows enhanced emission in the metallic species during this high accretion state that is produced by a density enhancement in the outer part of the accretion flow.
A. Armeni, B. Stelzer, R. A. B. Claes, C. F. Manara, A. Frasca, J. M. Alcalá, F. M. Walter, Á. Kóspál, J. Campbell-White, M. Gangi, K. Mauco, L. Tychoniec
2023-09-19T13:05:17Z
http://arxiv.org/abs/2309.10591v1
PENELLOPE V. The magnetospheric structure and the accretion variability of the classical T Tauri star HM Lup+ ###### Abstract HM Lup is a young M-type star that accretes material from a circumstellar disk through a magnetosphere. Our aim is to study the inner disk structure of HM Lup and to characterize its variability. We used spectroscopic data from HST/STIS, X-Shooter, and ESPRESSO taken in the framework of the ULLYSES and PENELLOPE programs, together with photometric data from TESS and AAVSO. The 2021 TESS light curve shows variability typical for young stellar objects of the "accretion burster" type. The spectra cover the temporal evolution of the main burst in the 2021 TESS light curve. We compared the strength and morphology of emission lines from different species and ionization stages. We determined the mass accretion rate from selected emission lines and from the UV continuum excess emission at different epochs, and we examined its relation to the photometric light curves. The emission lines in the optical spectrum of HM Lup delineate a temperature stratification along the accretion flow. While the wings of the H i and He i lines originate near the star, the lines of species such as Na i, Mg i, Ca i, Ca ii, Fe i, and Fe ii are formed in an outer and colder region. The shape and periodicity of the 2019 and 2021 TESS light curves, when qualitatively compared to predictions from magnetohydrodynamic models, suggest that HM Lup was in a regime of unstable ordered accretion during the 2021 TESS observation due to an increase in the accretion rate. Although HM Lup is not an extreme accretor, it shows enhanced emission in the metallic species during this high accretion state that is produced by a density enhancement in the outer part of the accretion flow. ## 1 Introduction Classical T Tauri stars (CTTSs) are young (\(\sim 1-10\) Myr), low-mass (\(<2\,M_{\odot}\)) objects surrounded by a circumstellar disk (Hartmann et al. 2016). 
Their strong magnetic fields truncate the disk at a few stellar radii (typically 5 \(R_{\star}\), Hartmann et al. 1998). The current paradigm for the interaction between the disk and the star is the magnetospheric accretion (Bouvier et al. 2007), in which the material free-falls onto the star following the magnetic field lines. The rich emission line spectrum typical of CTTSs (Joy 1945; Herbig 1962) can be explained in the framework of this model (Hartmann et al. 1994; Muzerolle et al. 1998), as well as the continuum excess flux, which results from the accretion shock at the stellar surface (Calvet & Gullbring 1998). Young stellar objects (YSOs) are known to be variable, both photometrically and spectroscopically (Joy 1945; Herbst et al. 1994; Hartmann et al. 2016; Fischer et al. 2022). Many different processes can contribute to this variability, such as variable accretion rate, rotational modulation due to stellar spots, circumstellar extinction, and flares (Cody et al. 2014, 2022). The accretion process is often accompanied by outflows (Hartmann et al. 2016; Bally 2016), either in the form of disk winds or jets (e.g., Romanova et al. 2009; Ferreira 2013, and references therein). Since the spectro-photometric variability of YSOs shows up in a broad range of wavelengths, from the X-rays to the infrared (Appenzeller & Mundt, 1989), it is essential to study these objects using a multiwavelength approach, by means of simultaneous observations in different spectral regions. Many works have shown the capabilities of simultaneous spectro-photometry in determining stellar and accretion parameters, unveiling the inner disk structure of CTTSs and studying accretion variability on different timescales (e.g., Bouvier et al., 2007; Alencar et al., 2018; Zsidi et al., 2022a, b; Fiorellino et al., 2022). This requires coordinated monitoring campaigns of a range of instruments, which are notoriously difficult to achieve. 
The _Hubble UV Legacy Library of Young Stars as Essential Standards_ (ULLYSES, Roman-Duval et al., 2020; Espaillat et al., 2022) now offers such a possibility. The aim of this program is to obtain low and medium resolution spectra of YSOs covering the wavelength range from the far-UV (\(\sim 150\) nm) to the infrared (\(\sim 1000\) nm) with the Hubble Space Telescope (HST). Together with the accompanying optical program PENELLOPE (Manara et al., 2021) at the ESO Very Large Telescope (VLT), it provides an unprecedented spectroscopic dataset to study the accretion variability of YSOs. PENELLOPE typically provides three high resolution spectra either with the _Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations_ (ESPRESSO, Pepe et al., 2021) or the _Ultraviolet and Visual Echelle Spectrograph_ (UVES, Dekker et al., 2000) and one medium resolution X-Shooter (Vernet et al., 2011) spectrum per target, taken close in time to the ULLYSES observations. The high resolution spectra are needed to study variability in the emission line profiles, while the X-Shooter spectrum is used to constrain the stellar and accretion parameters. As much as possible, the ULLYSES and PENELLOPE observations are scheduled during times when the targets are observed with the Transiting Exoplanet Survey Satellite (TESS, Ricker et al., 2014). TESS produces short-cadence, about one-month-long light curves for many YSOs with a spectral response that covers the red/infrared wavelength range (\(\sim 0.6-1.1\,\mu\)m). Since most of the excess flux due to accretion is emitted in the near-UV to optical spectral region, at wavelengths shorter than the TESS filter band (Calvet & Gullbring, 1998), it is also important to obtain simultaneous multiband photometry (Robinson et al., 2022). In this work we take advantage of the wealth of data provided by the aforementioned programs to study in detail a single target, chosen because of its simultaneous coverage by all the relevant programs.
The target of this study is HM Lup (Sz 72), a CTTS with spectral type M2 and a mass of \(0.3M_{\odot}\)(Alcala et al., 2014; Manara et al., 2022) located in the Lupus cloud at a distance of 156 pc (Gaia Collaboration et al., 2021). It hosts a disk with a dust mass of \(4.09~{}M_{\oplus}\) and a radius of \(10.97\) au (Manara et al., 2022), inclined by \(53\pm 19\) deg relative to the line of sight (Ansdell et al., 2016). The goal of this paper is twofold. First, we aim to study the temperature stratification of the magnetosphere of HM Lup using a range of emission lines. Secondly, we aim to investigate the spectrophotometric variability of the system. The paper is structured as follows. In Sect. 2 we describe the observations. We report the basic properties of the star in Sect. 3. We present the results based on the optical spectrum of the system in Sect. 4 and the results on the spectrophotometric variability in Sect. 5, while we discuss them in Sect. 6. ## 2 Observations For our study, we selected HM Lup as one of the PENELLOPE targets having spectra taken simultaneously with a TESS observation, in this case in sector 38 (2021). HM Lup was also observed with TESS in Sector 12 (2019). ### Simultaneous data We downloaded the TESS light curves of sectors 12 and 38 from the MAST archive1. The full frame images (FFI) were reduced by the TESS Science Processing Operations Center (SPOC), producing a 10 minute cadence light curve. We define the beginning of TESS Sector 38 observations, \(\rm MJD_{0}\equiv 59333.363\), as the reference time for all the simultaneous spectroscopic and photometric data. Figure 1 shows the TESS light curve in which the Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) flux (\(F\)) was converted into TESS magnitudes (\(T\)) using the relation \(T=-2.5\cdot\log_{10}F+ZP\) where \(ZP=20.44\) is the TESS Zero Point magnitude (Fausnaugh et al., 2021; Vanderspek et al., 2018). 
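The PDCSAP flux-to-magnitude conversion quoted above can be sketched as follows (a minimal numpy example; the flux values are illustrative, not actual HM Lup data):

```python
import numpy as np

# TESS flux-to-magnitude relation: T = -2.5 * log10(F) + ZP, with the
# zero point ZP = 20.44 quoted in the text (Fausnaugh et al. 2021).
ZP = 20.44

def tess_mag(pdcsap_flux):
    """Convert PDCSAP flux (e-/s) to TESS magnitudes."""
    return -2.5 * np.log10(pdcsap_flux) + ZP

flux = np.array([1000.0, 2000.0])  # hypothetical PDCSAP fluxes
mag = tess_mag(flux)               # doubling the flux brightens T by ~0.75 mag
```

As expected for a logarithmic scale, a factor-of-two flux change maps to a fixed magnitude offset of \(2.5\log_{10}2\approx 0.75\) mag regardless of the zero point.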
Footnote 1: [https://archive.stsci.edu/](https://archive.stsci.edu/) We downloaded \(BVR_{\rm c}I_{\rm c}\) photometry available for HM Lup from the American Association of Variable Star Observers (AAVSO) International Database2. Most of the data were secured from the beginning of the TESS observations and span \(\sim 140\) days. Data from different observers were sometimes obtained during the same day at small temporal separation (\(\Delta t\sim 15\) min). To obtain a better view of the variability, we binned the photometry to \(0.15\) d. Footnote 2: [https://www.aavso.org/aavso-international-database-aid](https://www.aavso.org/aavso-international-database-aid) HM Lup was selected as an ULLYSES target and observed on 4 May 2021 with the Space Telescope Imaging Spectrograph (STIS) camera between 1650 and 10200 Å with a resolving power of \(R\sim 1500\). Contemporaneously with the HST data, medium-high resolution optical spectroscopy was obtained in the framework of PENELLOPE (Manara et al., 2021). The journal of the spectroscopic observations is reported in Table 1, while Fig. 1 shows the temporal position of the spectra relative to the 2021 photometric data. High resolution spectra were obtained with ESPRESSO in Pr. Id. 106.20Z8.003 (PI Manara). Since the conditions of the first observation did not fulfill the requirements, namely the seeing was \(>1.55\arcsec\), the observation was repeated the day after. However, with its signal-to-noise ratio (S/N) of 12, the first spectrum can still be used. The ESPRESSO spectra cover a wavelength range between 3800 and 7880 Å with a resolution of 140000. These spectra were flux-calibrated as explained in Appendix A. The X-Shooter spectrum taken from Pr. Id. 106.20Z8.004 (PI Manara) is divided into three arms, UVB (\(3000-5600\) Å, \(R\sim 5400\)), VIS (\(5600-10200\) Å, \(R\sim 18400\)), and NIR (\(10200-24800\) Å, \(R\sim 11600\)).
In order to achieve this spectral resolution, the observation was carried out with slit widths of 1\(\arcsec\), 0.4\(\arcsec\), 0.4\(\arcsec\) in the UVB, VIS and NIR arms, respectively. The absolute flux calibration correcting for slit losses was achieved using a short exposure taken with a 5\(\arcsec\) slit, as explained by Manara et al. (2021). Telluric correction was performed on the ESPRESSO and X-Shooter spectra using the molecfit tool (Smette et al. 2015). All the optical spectra were reduced by the PENELLOPE team and made publicly available on Zenodo3. Footnote 3: [https://zenodo.org/communities/odysseus/](https://zenodo.org/communities/odysseus/) ### Nonsimultaneous data In addition to the simultaneous spectro-photometric data, we have also included observations of HM Lup taken at other epochs. In particular, an X-Shooter spectrum obtained on 18 April 2012, which was already presented by Alcala et al. (2014, 2017) and Frasca et al. (2017) taken from Pr. Id. 089.C-0143(A) (PI Alcala), and the TESS light curve from sector 12, obtained on 25 May 2019, UT 01:11:43 (\(\rm{MJD}_{1}=58628.050\)) and shown in Fig. 2. ## 3 Stellar parameters The properties of the system were determined by Alcala et al. (2017) and Frasca et al. (2017) by fitting the 2012 X-Shooter spectrum. While the first paper focused on the spectral type (SpT), the stellar luminosity \(L_{\star}\) and the accretion properties, the second work used the ROTFIT routine (Frasca et al. 2015) to derive the atmospheric parameters of the accreting star, namely the effective temperature \(T_{\rm eff}\), the gravity \(\log g\), the projected rotational velocity \(v\sin i\), and the systemic radial velocity RV. Manara et al. (2022) updated the values from Alcala et al. (2017) using the _Gaia_ DR3 distance and assuming the Herczeg & Hillenbrand (2014) relation between SpT and \(T_{\rm eff}\) and non-magnetic evolutionary tracks by Feiden (2016). 
We applied the same procedure to the 2021 X-Shooter and ESPRESSO spectra. Except for the RV, the parameters are in agreement with the previous result. The best fit RV value for the 2012 spectrum was \(6.9\pm 2.4\) km s\({}^{-1}\) while we obtained RV = \(-2.7\pm 1.9\) km s\({}^{-1}\) from the 2021 X-Shooter spectrum, suggesting a possible companion.
\begin{table} \begin{tabular}{c c c} \hline Instrument & \(\rm{MJD}-\rm{MJD}_{0}\) [d] & \(t_{\rm exp}\) [s] \\ \hline X-Shooter & \(-3298.17\) & \(300-250-100\) \\ ESPRESSO & 2.89 & 1650 \\ X-Shooter & 3.80 & \(470-380-100\) \\ ESPRESSO & 3.83 & 1650 \\ HST/STIS & 5.14 & 1230 \\ ESPRESSO & 5.75 & 1650 \\ ESPRESSO & 12.86 & 1650 \\ \hline \end{tabular} \end{table} Table 1: Journal of the spectroscopic observations. The exposure time values for the X-Shooter spectra are for the UVB, VIS and NIR arms respectively. \(\rm{MJD}_{0}=59333.363\).
Figure 1: Light curves and timing of simultaneous spectroscopy for HM Lup. The topmost panel shows the 2021 10 minute cadence TESS light curve (Sector 38). The inset displays the Lomb-Scargle periodogram for the TESS light curve. Overlaid in purple is a boxcar smoothed version of the light curve produced by phase-folding the data with the period of the highest peak in the Lomb-Scargle periodogram, 4.79 d. The phase \(\phi\) was computed with this period and \(\phi=0\) at the maximum of the TESS light curve, MJD 59336.97. The vertical dashed lines mark the epochs of the simultaneous spectroscopic observations. Here ES = ESPRESSO and XS = X-Shooter. The other panels show the part of the AAVSO \(BVR_{C}I_{c}\) photometry that is simultaneous with TESS, with a linear interpolation as a guideline. The large open squares mark the synthetic photometry obtained from the X-Shooter spectrum for the four filters. We defined \(\rm{MJD}_{0}=59333.363\) as the beginning of TESS Sector 38 observation.
We measured the RVs in the ESPRESSO spectra, obtaining \(-3.1\pm 0.7\) km s\({}^{-1}\), \(-3.6\pm 0.7\) km s\({}^{-1}\), \(-2.5\pm 0.3\) km s\({}^{-1}\), and \(-2.7\pm 0.5\) km s\({}^{-1}\) for the epochs 1, 2, 3, and 4 respectively. Therefore, no RV variation (within 1\(\sigma\)) emerges from the spectra during the 2021 PENELLOPE campaign. We did not find other signs of binarity in the spectra, such as asymmetric absorption lines. In summary from the RV measurements, we cannot exclude that HM Lup is actually a long-period (\(P\gtrsim 10\) d) single-lined spectroscopic binary. Table 2 reports the stellar parameters of the system. Regarding \(v\sin i\) and RV, we adopted the values obtained from an average of the best fit values for the ESPRESSO spectra, given their higher resolution. We computed the stellar radius \(R_{\star}\) inverting the relation \(L_{\star}=4\pi R_{\star}^{2}\sigma T_{\rm eff}^{4}\). In Fig. 3 we show HM Lup in the \(\dot{M}_{acc}\)-\(M_{\star}\) diagram together with the whole X-Shooter Lupus sample. Both the 2012 measurement of the accretion rate and our anticipated new values (from Sect. 5.2) are included, showing that HM Lup is a strongly accreting CTTS. ## 4 The optical spectrum of the system The optical spectrum of HM Lup is rich in emission lines, the strongest being the Balmer series and the Ca ii H & K lines. In addition, permitted emission lines from many other species can be identified, such as He i and singly and doubly ionized metallic elements, for example, Na i, Ca i, Ti i and Ti ii, Fe i, and Fe ii. We detected more than 150 emission lines from the iron peak elements. This feature is reminiscent of the outburst spectra of EXors (Sicilia-Aguilar et al. 2012, 2017) or the spectra of other strong accretors, such as DR Tau (Beristain et al. 1998). We found outflow signatures in the spectra, such as the [O i] 6300 line in emission and blueshifted absorption in the He i 10830 line. 
However, these features will not be analyzed in this work. A selection of the observed permitted spectral lines is shown in Fig. 4 for the four ESPRESSO spectra.
\begin{table} \begin{tabular}{c c c} \hline Parameter & Value & Reference \\ \hline \(T_{\rm eff}\)\({}^{\dagger}\) & \(3550\pm 70\) K & Frasca et al. (2017) \\ \(\log g\)\({}^{\dagger}\) & \(4.18\pm 0.28\) dex & Frasca et al. (2017) \\ \(L_{\star}^{\ddagger}\) & \(0.27\pm 0.13\)\(L_{\odot}\) & Manara et al. (2022) \\ \(M_{\star}^{\ddagger}\) & \(0.37\pm 0.12\)\(M_{\odot}\) & Manara et al. (2022) \\ \(R_{\star}^{\ddagger}\) & \(1.39\pm 0.34\)\(R_{\odot}\) & this work \\ \(v\sin i\)\({}^{\ddagger}\) & \(5.8\pm 0.4\) km s\({}^{-1}\) & this work \\ RV\({}^{\ddagger}\) & \(-2.4\pm 0.6\) km s\({}^{-1}\) & this work \\ \(i_{\rm d}\) & \(53\pm 19\) deg & Ansdell et al. (2016) \\ \hline \end{tabular} \end{table} Table 2: Stellar parameters of HM Lup obtained by fitting the spectra as described in Sect. 3.
Figure 3: \(\dot{M}_{acc}\)-\(M_{\star}\) diagram for X-Shooter targets in Lupus, using the values from Manara et al. (2022). The black triangles indicate upper limits on the measured accretion rate. The blue, green and red diamonds mark the accretion rate of HM Lup in 2012 and 2021 (Sect. 5.2). The dashed gray line is a linear best fit to the data.
Figure 2: TESS 2019 two-minute cadence light curve (Sector 12) of HM Lup. The inset shows the Lomb-Scargle periodogram. Overlaid in purple is a boxcar smoothed version of the light curve produced by phase-folding the data with the period of the highest peak in the Lomb-Scargle periodogram, 5.41 d. We defined MJD\({}_{1}\equiv 58628.050\) as the beginning of the TESS Sector 12 observation.
The different excitation potentials of the observed transitions and the differences in the emission line profiles indicate the presence of a thermally stratified environment, with multiple regions that contribute to the observed spectrum. In this section we focus on a single spectrum, the epoch 2 of ESPRESSO (cyan in Fig. 4), to illustrate the stratification of the accretion flow. We chose this spectrum because of its high S/N and the strength of the emission lines from low excitation transitions of neutral and singly ionized species. We discuss the line variability in Sect. 5.3. The atomic parameters for the analyzed emission lines were taken from the NIST Atomic Spectra Database4. Footnote 4: [https://physics.nist.gov/PhysRefData/ASD/lines_form.html](https://physics.nist.gov/PhysRefData/ASD/lines_form.html) ### Balmer series, He i, Ca ii K According to the morphological line profile classification by Reipurth et al. (1996), the Balmer series, except for H\(\alpha\), and the Ca ii K line have a Type IIB profile, that is, a double-peaked emission profile in which the secondary peak exceeds half the strength of the primary peak. The H\(\alpha\) line has instead a flat-topped emission profile. Balmer and Ca ii K emission lines have symmetrical wings up to \(\pm 400\) km s\({}^{-1}\). The He i lines have less broad wings, up to \(\sim\pm 200\) km s\({}^{-1}\), and consist of a narrow component (NC) and a broad component (BC). In the rest of the paper, we focus our analysis on the BC emission. In the magnetospheric accretion scenario, the BC is expected to originate in the infalling material (Beristain et al. 1998, 2001; Hartmann et al. 2016). The free fall velocity onto a star of mass \(M_{\star}\) starting from rest at the disk truncation radius \(R_{\rm T}\) is \[v_{\rm ff}(r)=(2GM_{\star})^{1/2}\left(\frac{1}{r}-\frac{1}{R_{\rm T}}\right)^ {1/2}. \tag{1}\] Assuming \(R_{\rm T}=5~{}R_{\star}\) (Gullbring et al. 
1998) and the stellar parameters of HM Lup from Table 2, we obtain a free fall velocity of \(\sim 175\) km s\({}^{-1}\) at \(r=2~{}R_{\star}\), consistent with the observed line wings of the Ca ii K and He i 5876 lines. The wings of the lower Balmer lines, especially H\(\alpha\) and H\(\beta\), exceed these values, possibly due to Stark broadening (Muzerolle et al. 2001; Wilson et al. 2022). Conversely, the higher lines of the series are less affected by Stark broadening and their wings agree well with the wings of He i 5876, as shown in the left panel of Fig. 5 for the H9 line. This suggests a common origin for the BC of the H i and the He i lines. Given the high excitation potentials of the He i lines, which have upper states with energies \(E_{j}\gtrsim 20\) eV, a source of ionizing radiation is required (Beristain et al. 2001). In the magnetospheric accretion flow, these conditions are met in the pre-shock region, that is heated by soft X-rays emitted by the accretion shock (Hartmann et al. 2016). Singly ionized calcium, with its ionization potential of 11.87 eV, is almost completely ionized in such conditions. Azevedo et al. (2006) studied the formation of the Ca ii infrared triplet lines and showed that the gas departs from LTE conditions. Close to the star, because of the higher dilution factor for the accretion shock radiation, calcium is mostly in the doubly ionized stage (Ca iii). However, we observe strong Ca ii emission at all ESPRESSO epochs (Fig. 4). This suggests that the Ca ii lines are emitted from an outer part of the magnetospheric flow, where calcium is predominantly in the singly ionized stage. The right panel of Fig. 5 shows how the wings of the Ca ii K line are less pronounced than the wings of, for example, H\(\delta\), in agreement with this hypothesis. ### Fe i fluorescence The third plot in the second row of Fig. 4 compares the Fe i 4064, 4132, and 4144 lines. The first two are a doublet, having a common upper level. 
Although these three lines are from the same multiplet, the integrated flux in the Fe i 4132 line, the weaker line of the doublet, is more than twice the integrated flux of the Fe i 4144 line, as shown in Table 3. This behavior was first recognized by Herbig (1945), who proposed that the unusual strength of this doublet is due to a fluorescence mechanism. The Fe i line at 3969.26 Å, which shares the upper level with the doublet, is nearly coincident in wavelength with Ca ii H (3968.47 Å) and H\(\epsilon\) (3970.08 Å). Absorption in this line can produce an overpopulation of the y\({}^{3}\)F\({}^{0}\) (J = 3) level, from which the fluorescent emission lines originate. The top panel of Fig. 6 shows that the Ca ii H + H\(\epsilon\) blend is strongly absorbed in the region of Fe i 3969. We reconstructed the Fe i 3969 absorption profile by creating a model for the unabsorbed emission in the Ca ii H + H\(\epsilon\) blend. To this end, we used the Ca ii K and H\(\delta\) lines as templates for Ca ii H and H\(\epsilon\), respectively. We subtracted the continuum in these lines and shifted them in radial velocity to the position of Ca ii H and H\(\epsilon\) relative to the Fe i 3969 rest wavelength. The unabsorbed model profile was computed as a linear combination, \[F_{\rm model}=0.8F_{\rm CaIIK}+1.3F_{\rm H\delta}. \tag{2}\] The coefficients were determined by matching the wings of the blended profile (\(|v_{\rm rad}|\gtrsim 150\) km s\({}^{-1}\)), where absorption from Fe i is not expected, to the wings of the two template lines. The absorption profile was then obtained by dividing the observed Ca ii H + H\(\epsilon\) blend by its model. The result is shown in the middle panel of Fig. 6. We fit this profile with a Gaussian, \[F_{v}=1+C~{}\exp{\left[-\frac{(v-v_{0})^{2}}{2\sigma^{2}}\right]}. \tag{3}\] The best fit parameters are \(C=-0.777\pm 0.002\), \(v_{0}=41.4\pm 0.2\) km s\({}^{-1}\), \(\sigma=64.8\pm 0.3\) km s\({}^{-1}\).
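The best fit profile of Eq. (3) and the line-center optical depth it implies for a purely extinguishing medium, \(\tau_{0}=-\ln(1+C)\), can be checked numerically with a minimal sketch (pure numpy; the velocity grid is illustrative):

```python
import numpy as np

# Eq. (3): continuum-normalized Gaussian absorption profile (C < 0),
# using the best fit parameters quoted in the text.
def profile(v, C=-0.777, v0=41.4, sigma=64.8):
    return 1.0 + C * np.exp(-(v - v0)**2 / (2.0 * sigma**2))

v = np.linspace(-300.0, 300.0, 601)  # illustrative velocity grid [km/s]
F = profile(v)

# For a purely extinguishing medium F(v) = exp(-tau_v), so the minimum
# line-center optical depth is tau_0 = -ln(1 + C) ~ 1.5 (Sect. 4.2).
tau0 = -np.log(1.0 + (F.min() - 1.0))
v0_rec = v[np.argmin(F)]             # recovered centroid, ~ +41 km/s
```

The recovered redshifted centroid and \(\tau_{0}\approx 1.5\) reproduce the numbers used in the text to argue that the absorbing region is optically thick in Fe i and infalling.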
The line width is in agreement with those observed in the fluorescent lines (bottom panel of Fig. 6).
Figure 4: Selection of continuum subtracted emission lines in the ESPRESSO spectra of HM Lup. The colors for the ESPRESSO spectra are the same as in Fig. 1. The lines from the 2012 X-Shooter spectrum are shown in green for comparison. The vertical dashed lines mark the radial velocity of the system, while the horizontal dashed lines highlight the zero-flux level for each set of lines. The Fe i and Ca i lines were smoothed with a 7-point boxcar filter.
Figure 5: Continuum subtracted profiles of selected magnetospheric lines in the epoch 2 ESPRESSO spectrum. In the left panel, H9 (3835 Å) and He i 5876. In the right panel, H\(\delta\) and Ca ii K. The vertical dashed line marks the radial velocity of the system. The H9 line was smoothed with a boxcar filter and multiplied by 0.5 to match the He i 5876 wings. The Ca ii K line was rescaled to H\(\delta\) in a similar way.
Figure 6: Reconstruction of the Fe i absorption. All profiles are from the epoch 2 ESPRESSO spectrum. All lines are plotted relative to the rest wavelength of Fe i 3969. The top panel shows the continuum subtracted profile of the Ca ii H + H\(\epsilon\) blend (black) compared to the Ca ii K (green) and H8 (blue) lines. The red line is the model for the unabsorbed blend (Eq. 2). The vertical dashed lines indicate the velocity displacement of Ca ii H, Fe i 3969 and H\(\epsilon\) relative to the rest wavelength of Fe i 3969. The middle panel shows the Fe i 3969 absorption profile, reconstructed as explained in Sect. 4.2, and its best fit. The bottom panel shows the Fe i 4064 and 4132 emission lines.
The Fe i 3969 profile provides constraints on the structure of the circumstellar envelope (Willson 1975). Since the Ca ii and H i lines are formed in the pre-shock region (Sect. 4.1), a foreground structure that is opaque in Fe i must be present to produce absorption. From the Gaussian best fit of the profile and assuming a purely extinguishing medium for which \(F(v)=\exp(-\tau_{v})\), we obtain a minimum optical depth at line center \(\tau_{0}\sim 1.5\). Therefore, this region is optically thick in Fe i. This is confirmed by the integrated flux ratio of the fluorescent lines, \(r_{\rm FeI}\), which is equal to 2.6 at epoch 2, much lower than the ratio of the \(A_{\rm ji}\) values, 5.6. The line center is redshifted by \(\sim 40\) km s\({}^{-1}\), indicating that the absorption is produced in infalling material. Overall, the reconstructed Fe i 3969 absorption profile is similar to the core of the Balmer lines (e.g., H\(\beta\) in Fig. 4) in the epoch 1 and 2 ESPRESSO spectra. To compare the two features quantitatively, we fit the H\(\beta\) line in the epoch 2 ESPRESSO spectrum with a triple Gaussian model. We forced one Gaussian to fit the broad emission and the other two to reproduce the asymmetric depression. Figure 7 displays the results of the fit. Compared to Fe i 3969, the absorption in H\(\beta\) has a blueshifted centroid and extends to lower positive velocities. The difference probably stems from the fact that H\(\beta\) is self-absorbed, while Fe i 3969 absorbs against Ca ii H and H\(\epsilon\). This means that, in the case of the Balmer lines, absorption can take place only if emission at a given radial velocity is intercepted by hydrogen gas moving at the same \(v_{\rm rad}\)5 along the line of sight. On the other hand, the fluorescence phenomenon couples regions having radial velocity differences equal to the shift between the rest wavelengths of Ca ii H and H\(\epsilon\) relative to Fe i 3969, that is, \(\sim-59\) km s\({}^{-1}\) and \(\sim+62\) km s\({}^{-1}\) respectively. This indicates the presence of velocity gradients in the accretion flow, in agreement with the magnetospheric accretion scenario. An outer zone of the magnetosphere sees Ca ii H emission that is redshifted, leading to the tuned absorption in Fe i 3969 (Gahm 2001).
For the same reason, absorption in Fe i can take place due to H i emission that is seen as blueshifted by Fe i. This result also demonstrates that the Ca ii emission at moderate blueshifted velocities (\(-100\) km s\({}^{-1}\lesssim v_{\rm rad}\lesssim 0\) km s\({}^{-1}\)) is produced in the infalling gas. Footnote 5: The difference can be on the order of the local thermal velocity. ### Other metallic lines Among the numerous metallic lines observed in the spectrum of HM Lup, we selected a set of emission lines that belong to different species and have different ionization potentials and excitation energies, so that they potentially probe different conditions in the accretion structure. Table 3 reports the atomic parameters of the selected transitions. The lines are shown in the second row of Fig. 4. All the lines display a NC+BC structure. The BC has a full width at half maximum (FWHM) of \(\sim 115\) km s\({}^{-1}\) and an emission profile that is skewed to the red. The last two columns show the integrated flux for each line in the epoch 2 and 3 ESPRESSO spectra, chosen as representative of the line behavior during and just after the accretion burst. Before integrating, we dereddened the spectra using \(A_{\rm V}=1\) mag (Sect. 5.2) and the Cardelli et al. (1989) extinction law with \(R_{\rm V}=3.1\). At epoch 2, the emission is strongest in Fe ii, followed by the Na i D lines. Fe i lines are weaker than Fe ii lines, typically by a factor of \(\sim 5-7\), except for the Fe i 4064 and 4132 lines (Sect. 4.2). This is different than EX Lup, for which Sicilia-Aguilar et al. (2012) found a ratio of two for the Fe ii vs. Fe i during the outburst. Figure 8 shows the Ca ii infrared triplet (IRT) lines from the 2021 X-Shooter spectrum, that have profiles similar to those of the metallic lines. 
The components of the Ca ii IRT have relative intensities that differ from the 1:9:5 relation that is expected from the ratios of their gf-values6, indicating that these lines are formed in an optically thick environment. This sets a lower limit to the electron density of \(10^{11}\) cm\({}^{-3}\) (Hamann & Persson 1992). Footnote 6: The gf-values are proportional to \(g_{\rm j}\cdot A_{\rm ji}\). Additional information on the optical depth of the medium can be obtained from the flux ratio of lines that share the upper level. We chose the Fe ii 4549 and 4352 lines (\(E_{\rm j}=5.55\) eV) for this exercise, since they are strong, not blended and near in wavelength. The integrated flux ratio for the epoch 2 ESPRESSO spectrum is \(r_{\rm FeII}\approx 1.16\). This value is lower than the ratio of their \(A_{\rm ji}\) values, 2.04, confirming that the region where the metallic lines are formed is optically thick. This complicates the derivation of the properties of the medium in terms of line ratios, since the line emission depends on the escape probability (Sobolev 1960; Kogure & Leung 2007). The comparison between the line ratio and the ratio of the \(A_{\rm ji}\) values for the selected Fe i and Fe ii doublets indicate that this region is more optically thick in Fe i than in Fe ii. Moreover, the observation of the Fe i fluorescence (Sect. 4.2) suggests that non-LTE (NLTE) effects can be important in the region where the metallic lines are formed. Therefore, the Fe ii to Fe i ratio cannot be taken as an unambiguous diagnostic of the actual conditions of this region. Some constraints on \(n_{\rm e}\) and \(T\) can still be placed from line ratios under the assumption of LTE conditions (Appendix B). In the conditions for the coexistence of Mg i, Fe i and Fe ii, neutral sodium and calcium are almost completely ionized. In such an environment, the Na i D and Ca i 4227 lines are therefore less optically thick than the Fe i lines. 
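The statement that sodium and calcium are almost completely ionized in the conditions where Mg i, Fe i, and Fe ii coexist can be illustrated with the Saha equation under the LTE assumption of Appendix B. The sketch below sets the partition-function ratios to order unity and adopts representative conditions (\(T=5000\) K, \(n_{\rm e}=10^{12}\) cm\({}^{-3}\)), so it is indicative only, not the paper's actual calculation:

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant [eV/K]

def saha_ratio(T, n_e, chi_ion, u_ratio=1.0):
    """LTE Saha ratio n_II/n_I for ionization potential chi_ion [eV].

    T in K, n_e in cm^-3. u_ratio = U_II/U_I is set to ~1 here for
    illustration; realistic partition functions change the result by
    factors of a few, not by orders of magnitude.
    """
    # Electron phase-space factor (2 pi m_e k T / h^2)^(3/2),
    # numerically 2.415e15 * T^1.5 in cm^-3.
    phase = 2.415e15 * T**1.5
    return 2.0 * u_ratio * phase * math.exp(-chi_ion / (K_B_EV * T)) / n_e

# Ionization potentials from Table 3; conditions compatible with Sect. 4.3.
na = saha_ratio(5000.0, 1e12, 5.14)  # Na II / Na I
ca = saha_ratio(5000.0, 1e12, 6.11)  # Ca II / Ca I
fe = saha_ratio(5000.0, 1e12, 7.90)  # Fe II / Fe I
```

Because of the lower ionization potentials, the Na and Ca ratios come out orders of magnitude above the Fe ratio, so Na i and Ca i are minority species exactly where Fe i and Fe ii can coexist.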
Hence, the Ca ii to Ca i ratio is a better temperature indicator than the Fe ii to Fe i ratio. The Ca i 4227 line can be compared to one of the IRT lines, for instance the Ca ii 8542 line. Another temperature diagnostic for the emitting region is the Mg ii to Mg i ratio. The non-detection of Mg ii 4481, which has an upper state with \(E_{\rm j}=11.63\) eV, provides an upper limit to the temperature. We computed this upper limit by estimating the noise level in the continuum subtracted spectrum at the nominal position of the Mg ii line. The atomic parameters of the Mg ii 4481 line and the Ca ii 8542 line are reported in the last two rows of Table 3. Figure 9 shows the result of the LTE analysis described in Appendix B for Ca and Mg. For \(11\lesssim\log n_{\rm e}\lesssim 14\), the Ca ii to Ca i ratio is reproduced with temperatures between \(\sim 4000\) and \(7000\) K, in agreement with the density-dependent upper limit on \(T\) placed by the nondetection of Mg ii 4481 (Fig. 9).
Figure 8: Continuum subtracted profiles of the Ca ii IRT lines in the X-Shooter spectrum at MJD - MJD\({}_{0}=3.80\) d, almost simultaneous with the ESPRESSO epoch 2 spectrum.
Figure 7: Comparison between the H\(\beta\) and Fe i 3969 absorption components. Top panel: triple Gaussian fit of the continuum normalized H\(\beta\) line in the epoch 2 ESPRESSO spectrum. The dashed lines show the three Gaussian components. Bottom panel: comparison between the continuum normalized absorption profiles in H\(\beta\) and Fe i 3969. The red line is the Gaussian best fit of the Fe i 3969 from Fig. 6. The blue line is the sum of the two Gaussian absorption components in H\(\beta\), rescaled relative to the continuum to allow the comparison with the Fe i 3969 absorption profile.
## 5 Spectrophotometric variability ### Photometry Figure 1 summarizes all available photometric data in 2021, while Fig. 2 shows the TESS Sector 12 light curve from May 2019.
We lack AAVSO data in the highest flux peak of the 2021 TESS light curve, that is, in the range \(3\ {\rm d}\la\ {\rm MJD}-{\rm MJD}_{0}\la\ 5\ {\rm d}\), which is coincident with the X-Shooter spectrum. To complement the multiband light curve at that epoch, we computed synthetic \(BVR_{\rm c}I_{\rm c}\) photometry from the X-Shooter spectrum using the filter transmission curves downloaded from the SVO Filter Profile Service7 (Rodrigo et al., 2012; Rodrigo & Solano, 2020).

Footnote 7: [http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)

There are differences in the light curve morphology between the two TESS epochs. The 2019 TESS data show no clear periodicity pattern and a tendency to show dips, while quasi-periodic spikes are observed in the 2021 TESS photometry. To quantify the different morphology, we computed the variability metrics introduced by Cody et al. (2014) for both light curves. The parameter \(Q\) describes the degree of periodicity and the parameter \(M\) the (a)symmetry around the median value of a given light curve. \(Q\) ranges between 0 (= highly periodic) and 1 (= aperiodic), while strongly negative (positive) values of \(M\) indicate a tendency to show bursts (dips) in the light curve. We obtained \(Q=0.39\) and \(M=-0.66\) for the 2021 TESS light curve and \(Q=0.57\) and \(M=-0.18\) for the 2019 TESS light curve, suggesting a higher degree of periodicity and a more pronounced bursting behavior in 2021.

We performed a Lomb-Scargle (Lomb, 1976; Scargle, 1982) analysis on the TESS light curves. The Lomb-Scargle Periodograms (LSPs) for the TESS 2021 and 2019 light curves are shown in the insets of Figs. 1 and 2, respectively. We detected four peaks in both LSPs, all below a False Alarm Probability (FAP) of \(10^{-5}\).
We chose the periods associated with the highest peak of each periodogram, 4.79 d and 5.41 d for the light curves from 2021 and 2019, respectively, as representative of the periodicity of the light curves. This is shown in Figs. 1 and 2 by visually comparing the light curves to an average profile produced by phase folding the data with those periods and smoothing the phase folded light curve with a boxcar filter with a width of 0.25 in phase, as described by Cody et al. (2014). We estimated the uncertainties on the periods as the standard deviation of a Gaussian function fitted to the peaks in the LSPs. The results are \(P=4.79\pm 0.28\) d for the year 2021 and \(P=5.41\pm 0.35\) d for 2019. Although the central values are different, the two periods agree within the uncertainties.

The LSPs of both TESS light curves present additional, weaker peaks with the same structure. For the 2021 light curve they are at 3.76, 6.20, and 9.58 d; the 2019 light curve shows peaks at similar values. The 9.58 d peak is the first harmonic of the 4.79 d period. To understand whether the other two periods are significant, we produced an average profile analogous to the one shown in purple in Fig. 1. The comparison to the observed light curve showed a poor match, suggesting that these peaks are aliases.

We studied the periodicity of the AAVSO light curves by means of continuous wavelet analysis, which is a useful tool to determine the frequency content of a signal as a function of time. We chose as wavelet template the Morse wavelet (Lilly & Olhede, 2012) with a symmetry parameter \(\gamma=3\) and a time-bandwidth product \(\mathcal{P}=90\), which produces a time resolution of \(\sim 30\) d. Since the wavelet analysis works only on evenly spaced data, we linearly interpolated the AAVSO data to 0.25 d centers to remove the gaps in the light curves. Higher cadences did not change the results of the analysis.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline Ion & \(\chi_{\rm I}\) [eV] & \(\lambda\) [Å] & \(A_{\rm ji}\) [s\({}^{-1}\)] & \(E_{\rm i}\) [eV] & \(E_{\rm j}\) [eV] & \(g_{\rm j}\) & \multicolumn{2}{c}{\(F_{\rm int}\) [\(10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\)]} \\ & & & & & & & Epoch 2 & Epoch 3 \\ \hline Na i & 5.14 & 5889.95 & \(6.16\cdot 10^{7}\) & 0.00 & 2.10 & 4 & \(5.8\pm 0.5\) & \(2.6\pm 0.6\) \\ Ca i & 6.11 & 4226.73 & \(2.18\cdot 10^{8}\) & 0.00 & 2.93 & 3 & \(3.1\pm 0.9\) & \(1.3\pm 1.2\) \\ Mg i & 7.65 & 5183.60 & \(5.61\cdot 10^{7}\) & 2.89 & 5.11 & 3 & \(4.5\pm 0.7\) & \(1.6\pm 0.8\) \\ Fe i & 7.90 & 4143.87 & \(1.33\cdot 10^{7}\) & 1.56 & 4.55 & 9 & \(1.0\pm 0.7\) & \(0.5\pm 1.0\) \\ Fe i & 7.90 & 4063.59 & \(6.65\cdot 10^{7}\) & 1.56 & 4.61 & 7 & \(7.2\pm 1.6\) & \(2.4\pm 2.2\) \\ Fe i & 7.90 & 4132.06 & \(1.18\cdot 10^{7}\) & 1.61 & 4.61 & 7 & \(2.8\pm 1.6\) & \(0.3\pm 0.6\) \\ Fe i & 7.90 & 4404.75 & \(2.75\cdot 10^{7}\) & 1.56 & 4.37 & 9 & \(1.8\pm 1.0\) & \(0.3\pm 1.3\) \\ Fe ii & 16.20 & 4923.93 & \(4.30\cdot 10^{6}\) & 2.89 & 5.41 & 4 & \(12.5\pm 1.0\) & \(5.3\pm 1.2\) \\ Fe ii & 16.20 & 5316.61 & \(3.90\cdot 10^{5}\) & 3.15 & 5.48 & 10 & \(9.6\pm 0.7\) & \(3.7\pm 0.9\) \\ \hline Ca ii & 11.87 & 8542.09 & \(9.90\cdot 10^{6}\) & 1.70 & 3.15 & 4 & \(7.67\pm 1.1\) & \(-\) \\ Mg ii & 15.04 & 4481.13 & \(2.33\cdot 10^{8}\) & 8.86 & 11.63 & 8 & \(<0.1\) & \(-\) \\ \hline \end{tabular} \end{table} Table 3: Atomic parameters and integrated fluxes of the selected set of metallic lines. The spectra were dereddened with \(A_{\rm V}=1\) mag (Sect. 5.2) for the flux integration.

Figure 9: Result of the Saha-Boltzmann analysis for Ca and Mg. Observed line ratios were computed from the epoch 2 ESPRESSO spectrum. Left panel: likelihood for the observed Ca ii 8542 vs. Ca i 4227 ratio as a function of \(\log n_{\rm e}\) and \(T\). Right panel: theoretical Mg ii 4481 vs. Mg i 5184 ratios as a function of \(T\) for \(\log n_{\rm e}\) [cm\({}^{-3}\)] \(=11,12,13,14\). The red area marks the upper limit on the observed ratio.
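The phase-folded average profile used to assess the periods (a boxcar of width 0.25 in phase, following Cody et al. 2014) can be sketched as follows. This is a minimal reimplementation for illustration, applied to a synthetic light curve rather than the TESS data.

```python
import numpy as np

# Phase fold a light curve at a trial period and smooth the folded curve
# with a boxcar of width 0.25 in phase (as in Cody et al. 2014).
def average_profile(t, flux, period, width=0.25, n_bins=100):
    phase = (t / period) % 1.0
    centers = np.linspace(0.0, 1.0, n_bins, endpoint=False)
    profile = np.empty(n_bins)
    for i, c in enumerate(centers):
        # circular distance in phase, so the boxcar wraps around 0/1
        d = np.abs(phase - c)
        d = np.minimum(d, 1.0 - d)
        profile[i] = flux[d <= width / 2].mean()
    return centers, profile

# Illustrative example with a synthetic 4.79 d signal plus noise
rng = np.random.default_rng(1)
t = np.arange(0, 50, 0.02)
flux = np.sin(2 * np.pi * t / 4.79) + rng.normal(0, 0.5, t.size)
ph, prof = average_profile(t, flux, 4.79)
```

If the trial period is wrong (an alias), the folded profile washes out and matches the light curve poorly — which is the test applied in the text to reject the 3.76 d and 6.20 d peaks.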
The continuous wavelet transform (CWT) of the \(B\) band photometry is shown in Fig. 10. The light curve and its CWT can be divided into three time segments with different power spectra, as shown in the lower panel of Fig. 10. In the first segment (B1), which is simultaneous to the 2021 TESS light curve, the detected period \(P_{\rm B1}\) is compatible with the period of the TESS light curve, 4.79 d. In the other two segments the main periods detected are \(P_{\rm B2}=6.24\) d and \(P_{\rm B3}=8.84\) d. In the LSP of the whole light curve, in which the time dependence of the power spectrum is lost, these multiple contributions are averaged out. We considered only \(P_{\rm B3}\) as statistically relevant, since it has FAP = 2.5% in the LSP.

The part of the AAVSO data that is simultaneous with the TESS data follows the shape of the TESS light curve, as shown in Fig. 1. The relative magnitude variations vary across the bands, being 0.3 mag in \(R_{\rm c}\) and \(I_{\rm c}\), 0.5 mag in \(V\), and 0.7 mag in \(B\). The enhanced variability amplitude in the blue part of the spectrum is compatible with changes in the accretion rate, since the excess flux is expected to peak at shorter wavelengths (Calvet and Gullbring, 1998).

### Accretion rate variability

A comparison between the two X-Shooter spectra and the STIS spectrum in the \(3200-10000\) Å range is shown in Fig. 11. Marked differences in the Balmer continuum and the Balmer jump are evident between the three spectra. In the 2021 X-Shooter spectrum the Balmer continuum was \(\sim 3\) times higher than in 2012. The ratio is reduced to a factor of \(\sim 1.5\) in the Paschen continuum. The flux in the STIS spectrum lies approximately between these two. We derived the accretion parameters by fitting each spectrum with a combination of a photospheric (nonaccreting) template and a model for continuum emission from a slab of hydrogen representing the accretion shock.
This method simultaneously yields the UV accretion luminosity (\(L_{\rm acc,UV}\)), the spectral type, and the extinction (\(A_{\rm V}\)) of the star. For more details on the procedure we refer to Manara et al. (2013). For the 2012 spectrum we used published values obtained with the same method. That spectrum was first analyzed by Alcala et al. (2014), but in this work we use the updated values from Manara et al. (2022). The resulting values of \(A_{\rm V}\), \(L_{\rm acc,UV}\), and \(\dot{M}_{\rm acc}\) are listed in Table 4. Typical uncertainties on these parameters are 0.1 mag, 0.25 dex, and 0.45 dex, respectively (Manara et al., 2013; Alcala et al., 2014; Manara et al., 2021).

The best fit of the 2021 X-Shooter spectrum returned a SpT M2, in agreement with the fit obtained by Alcala et al. (2014) for the 2012 spectrum, and \(A_{\rm V}=1\) mag. The STIS spectrum was fitted in the X-Shooter wavelength range. Its low resolution (\(R\sim 1500\)) prevents constraining the SpT. Therefore, we fixed it to M2 in the fitting routine and obtained \(A_{\rm V}=1.5\) mag and \(\dot{M}_{\rm acc}=1.53\cdot 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\). Since this spectrum is separated by only \(\sim 1.3\) d in time from the second X-Shooter observation, variations in \(A_{\rm V}\) on the order of \(\Delta A_{\rm V}=0.5\) mag seem unlikely, given that variations in the TESS bandpass are only about 0.2 mag, as shown in Fig. 1. Hence, we fixed \(A_{\rm V}=1\) mag in the STIS best fit, obtaining the accretion rate reported in Table 4.

We obtained an independent measure of the accretion rate from the line luminosities, \(L_{\rm line}\), using their empirical relation with \(L_{\rm acc,UV}\), calibrated by Alcala et al. (2017). For the X-Shooter spectra, we integrated the available Balmer and Paschen lines, Br\(\gamma\), Ca ii K and infrared triplet (IRT), and He i 5876 and 6678 lines.
Since the ESPRESSO spectra cover a narrower wavelength range, we integrated only the Balmer, He i, and Ca ii K lines. For the STIS spectrum, we excluded the He i lines because of their low S/N. Before integrating, we dereddened the spectra using the \(A_{\rm V}\) value obtained from the slab model fit of the X-Shooter spectrum. Fig. 12 shows the comparison between the accretion luminosity derived from the lines, \(L_{\rm acc,lines}\), and \(L_{\rm acc,UV}\) for the 2021 X-Shooter spectrum. Most of the lines are in agreement with the accretion luminosity obtained from the slab model, but the Ca ii lines overpredict the accretion luminosity. For this reason, we excluded these lines from the calculation of \(L_{\rm acc,lines}\). We further discuss this issue in Sect. 6.1. A global value for \(L_{\rm acc,lines}\) was then derived from a weighted average of the values obtained for the individual lines. This mean value was used to compute the accretion rate from the formula \(\dot{M}_{\rm acc}=1.25L_{\rm acc}R_{*}/(GM_{*})\) (Hartmann et al., 1998). The results are reported in Table 4.

Additional information on the variations in the accretion rate of CTTSs can be derived by measuring the veiling, that is, the excess emission due to the accretion process that makes photospheric absorption lines appear less deep (Hartigan et al., 1989). This parameter is defined as the ratio of the excess flux to the photospheric flux, \(r_{\lambda}=F_{\rm acc}(\lambda)/F_{\rm phot}(\lambda)\). We computed \(r_{\lambda}\) at 6000 Å for the available spectra using the ROTFIT routine, as described by Frasca et al. (2015) and Manara et al. (2021). The result is shown in the last column of Table 4.

The comparison between \(L_{\rm acc}\) in 2012 and 2021 indicates that HM Lup was observed in a more active state in 2021, with an accretion rate \(\sim 3-4\) times higher.
The computed values of \(\dot{M}_{\rm acc}\) and \(r_{6000}\) support the hypothesis that the photometric behavior of the system in the main burst of the 2021 TESS light curve was caused by an increase in the accretion rate by a factor of \(\sim 1.5\).

### Line variability

The variability of the H\(\alpha\), H\(\beta\), Ca ii K, and He i 5876 and 6678 lines between the four ESPRESSO spectra is shown in the first row of Fig. 4. The shape of the He i lines is invariant, but the line flux changes. It is highest in the second epoch and lowest in the third and fourth, in agreement with the photometric behavior during the accretion burst. The H\(\beta\) and Ca ii K lines evolve from a double peaked structure in the first two epochs to a profile with a single red-shifted peak in the third and fourth. H\(\alpha\) always remains flat-topped, suggesting that the double peaks are caused by an optical depth effect. If the temperature and density of the magnetosphere increase, as is likely to happen during the accretion burst, H\(\beta\) and the higher lines of the series can develop a self-absorption component (Muzerolle et al., 2001). The same happens for Ca ii K which, being a resonance line, is more prone to absorption. On the other hand, the H\(\alpha\) line, given its higher opacity, can be strongly optically thick and thermalized in these physical conditions, resulting in the disappearance of the self-absorption component (Hartmann et al., 2016). The flux emitted in the metallic lines follows the photometric behavior of the system.
It is strongest in the first two epochs, while it returns to its quiescent level after the accretion burst, as suggested by the comparison of line fluxes between the third ESPRESSO epoch and 2012 X-Shooter spectra in Fig. 4. In the third and fourth epochs, the BC is absent in Fe i and reduced in flux by a factor of \(\sim 2-3\) in Fe ii, Mg i, and Na i. The fact that the emission is stronger in both the Fe i and Fe ii lines during the high accretion state suggests that the outburst spectrum is mainly the result of a density enhancement, that is, an overall increase in the number of emitters, rather than a temperature increase. Table 3 shows how the Fe ii to Fe i ratio at epoch 2 is lower than at epoch 3. This is compatible with an increase in the electron density during the outburst that favors the LTE population of Fe i relative to Fe ii (Appendix B).

To quantify the line variability, we fit the Fe ii 4924 line in each ESPRESSO epoch with a triple Gaussian model (Appendix C), as shown in Fig. 13. We chose this transition as the template for the line variability because it has the highest S/N among the metallic lines. The best fit parameters are reported in Table 5. The line variations can be explained in terms of a broad emission component plus a variable redshifted absorption that reproduces the observed red skewness. The line is symmetric at epoch 1. Then, a redshifted absorption at \(\sim+40\) km s\({}^{-1}\) appears at epoch 2; it becomes more pronounced, broader, and shifts to higher positive velocities at epoch 3, and eventually disappears at epoch 4. The evolution of the redshifted absorption is indicative of optical depth changes in the infalling material, which is likely associated with the rotation of a non-axisymmetric accretion structure, similar to what is observed in EX Lup (Sicilia-Aguilar et al. 2012) or CVSO109 (Campbell-White et al. 2021).

\begin{table} \begin{tabular}{l c c c c c c c} \hline Instrument & MJD \(-\) MJD\({}_{0}\) [d] & \(A_{\rm V}\) [mag] & \multicolumn{2}{c}{\(\log{(L_{\rm acc}/L_{\odot})}\)} & \multicolumn{2}{c}{\(\dot{M}_{\rm acc}\) [\(10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\)]} & \(r_{6000}\) \\ & & & Slab Model & Lines & Slab Model & Lines & \\ \hline X-Shooter & \(-3298.17\) & \(0.75\) & \(-1.77\pm 0.25\) & \(-1.50\pm 0.07\) & \(2.5\) & \(4.8\) & \(\leq 0.2\ddagger\) \\ ESPRESSO & \(2.89\) & \(1\dagger\) & \(-\) & \(-1.16\pm 0.08\) & \(-\) & \(10.5\) & \(2.24\pm 0.76\) \\ X-Shooter & \(3.80\) & \(1\) & \(-1.20\pm 0.25\) & \(-1.08\pm 0.07\) & \(9.5\) & \(12.7\) & \(1.60\pm 0.2\ddagger\) \\ ESPRESSO & \(3.83\) & \(1\dagger\) & \(-\) & \(-1.09\pm 0.07\) & \(-\) & \(12.4\) & \(1.41\pm 0.74\) \\ HST/STIS & \(5.14\) & \(1\dagger\) & \(-1.37\pm 0.25\) & \(-1.23\pm 0.08\) & \(6.4\) & \(9.0\) & \(-\) \\ ESPRESSO & \(5.75\) & \(1\dagger\) & \(-\) & \(-1.27\pm 0.08\) & \(-\) & \(8.2\) & \(0.92\pm 0.44\) \\ ESPRESSO & \(12.86\) & \(1\dagger\) & \(-\) & \(-1.21\pm 0.08\) & \(-\) & \(9.4\) & \(0.6\pm 0.3\) \\ \hline \end{tabular} \end{table} Table 4: Main accretion parameters derived from the spectra.

Figure 11: VLT/X-Shooter and HST/STIS spectra of HM Lup in the STIS wavelength range. The spectra were smoothed to the HST resolution with a Gaussian filter for clarity.

Figure 10: Continuous wavelet analysis of the AAVSO \(B\) band photometry. The top panel shows the CWT. The inset on the left displays the LSP of the whole dataset, together with a temporal average of the CWT. The orange horizontal dot-dashed lines mark the periods obtained from the LSP analysis, i.e., \(P_{\rm B2}=6.24\) d and \(P_{\rm B3}=8.84\) d, while the red one indicates the 2021 TESS period, \(P_{\rm TESS}=4.79\) d. The bottom panel displays the AAVSO \(B\) band light curve and its linear interpolation. The insets show the LSP for three different time segments. We excluded the shaded areas from the calculation of the LSPs to highlight the different timescales detected.
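A triple Gaussian decomposition of the kind used for the Fe ii 4924 line (NC, BC, and a possibly negative redshifted component) can be sketched with `scipy`. The profile below is synthetic, with component parameters loosely inspired by the epoch 3 values in Table 5; the paper's actual fit function (Appendix C) is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, c, v0, sigma):
    return c * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def triple_gauss(v, c1, v1, s1, c2, v2, s2, c3, v3, s3):
    # NC + BC + (possibly negative) redshifted component
    return gauss(v, c1, v1, s1) + gauss(v, c2, v2, s2) + gauss(v, c3, v3, s3)

# Synthetic epoch-3-like profile: narrow + broad emission plus a
# redshifted absorption (negative amplitude), with added noise.
v = np.linspace(-300, 300, 601)
truth = (1.0, 0.0, 10.0, 2.5, 11.0, 76.0, -1.3, 77.0, 49.0)
rng = np.random.default_rng(2)
prof = triple_gauss(v, *truth) + rng.normal(0, 0.02, v.size)

p0 = (1.0, 0.0, 15.0, 2.0, 0.0, 60.0, -1.0, 60.0, 40.0)
popt, _ = curve_fit(triple_gauss, v, prof, p0=p0)
```

As in any multi-component fit, reasonable initial guesses (`p0`) matter: the broad emission and the absorption overlap in velocity, so a poor starting point can let the components trade amplitude against each other.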
### Epoch 4 ESPRESSO spectrum

Although the veiling is constant within the uncertainties between the epoch 3 and 4 ESPRESSO spectra, the accretion rate predicted from the lines is higher in epoch 4 than in epoch 3. This happens because the redshifted emission peak increases relative to the continuum in the higher lines of the Balmer series, as shown in Fig. 4. On the other hand, the He i BC does not change between the two epochs. This further supports the hypothesis that the lines are formed in an environment that is stratified both in temperature and in density. The Balmer emission peaks are not linked to the region that produces the veiling, namely, the continuum emission, unlike the wings of the Balmer and He i lines.

Although we lack photometric information for the ESPRESSO epoch 4 spectrum, the phase-folded TESS light curve suggests that during this epoch the system is roughly in the same configuration as in the first two epochs. The phases are indeed \(\phi_{4}=1.97\), \(\phi_{1}=-0.11\), and \(\phi_{2}=0.08\). However, the line profiles are remarkably different. The metallic lines are weaker and the double peaked profile is absent in the Balmer lines (Fig. 4). According to the metallic lines and the veiling, HM Lup has returned to a lower state of accretion in the epoch 4 ESPRESSO spectrum. The behavior of the metallic lines between the epoch 3 and 4 spectra (Fig. 13 and Sect. 5.3) suggests that the enhanced redshifted emission in the Balmer series at epoch 4 is caused by an increase in the emissivity in the outer region of the accretion flow, where these lines are produced. An alternative explanation could be a decrease in the opacity of this region, which caused the redshifted absorption at epoch 3.

## 6 Discussion

### Temperature stratification of the accretion flow

The many emission lines in the optical spectrum of HM Lup allowed us to probe the temperature stratification of the accretion flow.
Figure 14 shows a schematic representation of the structure that emerges from the line analysis. The He i and H i emission wings are produced in the pre-shock region, which we call the inner magnetosphere in Fig. 14, in high temperature conditions (\(T\approx 10000\) K). The metallic lines originate in a more external region (near the disk) in which there is coexistence of Na, Mg, Ca, Ca\({}^{+}\), Fe, and Fe\({}^{+}\). This region, which we call the outer magnetosphere, has \(T\approx 6000\) K. The strength of the Ca ii lines relative to the H i lines, as well as the observation of the Fe i absorption against Ca ii, suggests that the Ca ii emission extends further inward, likely between the outer and inner magnetosphere. According to shock models, the pre-shock region is responsible for the Balmer continuum emission (Calvet & Gullbring 1998; Gullbring et al. 2000). Therefore, the He i lines and the H i line wings are related to the UV excess. Since the Ca ii lines are not produced in the pre-shock region, the fact that they overpredict the accretion luminosity might indicate a stronger emissivity from the outer part of the accretion flow in this system, relative to the stars in the Alcala et al. (2017) sample.

Figure 12: Comparison between the accretion luminosity derived from emission lines and that derived from the UV excess in the 2021 X-Shooter spectrum. \(L_{\rm acc,UV}\) and its uncertainty are displayed with the red dashed line and the red shaded area. The blue dashed line and the blue shaded area are the value of \(L_{\rm acc,lines}\) and its uncertainty, obtained from a weighted average of the line values. The lines marked in black were excluded from the calculation of the average.

Figure 13: Triple Gaussian fit of the Fe ii 4924 line profiles in the four ESPRESSO spectra. See Appendix C for the fit function.
### Origin of the spectrophotometric variability The analysis of the photometric data indicates that the magnetospheric interaction between the disk and the star is unsteady and highly dynamic in HM Lup. Three-dimensional magnetohydrodynamic (MHD) simulations showed that CTTSs may accrete in either a stable or an unstable regime (Romanova et al., 2003, 2004; Kulkarni and Romanova, 2008; Pantolmos et al., 2020). In the stable regime, accretion proceeds in two funnel streams and forms two polar hot spots on the stellar surface. In the unstable regime, the matter accretes in equatorial tongues and forms multiple hot spots on the surface of the star. The light curves are expected to be periodic with the stellar rotation period (\(P_{\star}\)) in the stable regime and stochastic in the unstable regime. The transition between these two regimes depends on the ratio between the magnetospheric truncation radius \(R_{\rm T}\) and the corotation radius \(R_{\rm co}\)(Blinova et al., 2016). Accretion is unstable if \(R_{\rm T}/R_{\rm co}\lesssim 0.71\) and stable otherwise. Blinova et al. (2016) showed that when \(R_{\rm T}/R_{\rm co}\) decreases below \(\sim 0.59\), unstable accretion becomes ordered. In the ordered regime, the matter accretes in one or two ordered tongues that rotate with the inner disk period. The comparison between the morphology of the two TESS light curves suggests that the accretion changed from a chaotic regime in 2019 to a more organized regime in 2021, which could be either the stable or the unstable ordered regime. The 2021 TESS light curve was in a quasi-periodic bursting state with \(P=4.79\pm 0.28\) d. In the stable accretion scenario, this period would correspond to \(P_{\star}\). 
The period detected in the third segment of the long term \(B\) band AAVSO light curve, \(P_{\rm B3}=8.84\) d, is roughly twice the 2021 TESS period and can be interpreted as its first harmonic, which is easier to detect in the AAVSO light curve given the lower (\(\sim 1\) d) cadence. However, using the 4.79 d period as \(P_{\star}\) and the \(v\sin i\) and \(R_{\star}\) values of Table 2, we obtain an inclination \(i_{\star}=23\pm 6\) deg for the stellar rotation axis, that is, a star that is almost face-on. This is not in agreement with the observed quasi-periodic behavior of the light curve, which is likely the result of rotational modulation and therefore suggests a higher value of \(i_{\star}\), as shown for instance by the 1D hydrodynamic simulations of Robinson et al. (2021).

We propose that the quasi-periodic bursting behavior in the 2021 TESS light curve was the result of an increase in the accretion rate that compressed the magnetosphere and drove the system into a regime of unstable ordered accretion. In this scenario, the 4.79 d period detected in the LSP is not the stellar rotation period but approximately corresponds to the timescale at which matter rotates at the truncation radius. At the boundary between the chaotic and ordered regime, the ratio of the stellar rotation period to the Keplerian rotation period at \(R_{\rm T}\) is equal to \((R_{\rm T}/R_{\rm co})^{-3/2}=(0.45)^{-1}=2.22\) (\(\omega_{\star}^{-1}\) in Blinova et al., 2016). Therefore, \(P_{\star}\) should be in the \(8-10\) d range, compatible with \(P_{\rm B3}\). Assuming this value as \(P_{\star}\), we obtain \(i_{\star}=47\pm 15\) deg, compatible with the observed quasi-periodic modulation. This value is also in agreement with the inclination of the outer disk axis (Table 2) measured by Ansdell et al. (2016).

The cycle-to-cycle variability of the bursts in the 2021 TESS light curve further supports this interpretation. According to the simulations by Blinova et al. (2016), the two tongues may carry different amounts of matter and one of them may sometimes disappear. The simultaneous spectroscopic data made it possible to characterize the photometric variability of the system during the main burst in the 2021 TESS observation in terms of variations in the accretion rate. However, part of the line variability may be due to the rotational modulation of the accretion flow, as suggested by our analysis of the Fe ii 4924 line variability and shown in simulations (e.g., Kurosawa et al. 2008; Kurosawa & Romanova 2013).

Figure 14: Temperature structure of the accretion flow that emerges from the line analysis.

\begin{table} \begin{tabular}{c c c c c c c} \hline MJD \(-\) MJD\({}_{0}\) [d] & \(C_{2}\) & \(v_{2}\) [km s\({}^{-1}\)] & \(\sigma_{2}\) [km s\({}^{-1}\)] & \(C_{3}\) & \(v_{3}\) [km s\({}^{-1}\)] & \(\sigma_{3}\) [km s\({}^{-1}\)] \\ \hline 2.89 & \(4.12\pm 0.05\) & \(-2.6\pm 0.7\) & \(54.5\pm 0.4\) & \(0.36\pm 0.07\) & \(45\pm 3\) & \(20\pm 4\) \\ 3.83 & \(4.28\pm 0.04\) & \(-0.2\pm 0.5\) & \(57.9\pm 0.2\) & \(-0.65\pm 0.05\) & \(38\pm 1\) & \(28\pm 2\) \\ 5.75 & \(2.5\pm 0.2\) & \(11\pm 5\) & \(76\pm 2\) & \(-1.3\pm 0.2\) & \(77\pm 2\) & \(49\pm 3\) \\ 12.86 & \(2.19\pm 0.02\) & \(9\pm 1\) & \(82\pm 1\) & \(-0.46\pm 0.04\) & \(126\pm 2\) & \(35\pm 2\) \\ \hline \end{tabular} \end{table} Table 5: Best fit parameters for the Fe ii 4924 line profiles in the four ESPRESSO spectra. Since we are interested in fitting the BC only, we do not report the parameters for the NC, i.e., \(C_{1}\), \(v_{1}\), \(\sigma_{1}\).
Although the quasi-periodic behavior of the 2021 TESS light curve supports this scenario, our limited spectroscopic coverage does not allow us to test the variability of the emission lines as a function of the rotational phase.

We found that during the high accretion state the Balmer lines, except for H\(\alpha\), develop an absorption component and the emission of low-ionization species increases. This is similar to what has been observed in VW Cha by Zsidi et al. (2022b). The dominant effect during the outburst is an increase in the density of the accretion flow. This produces a spectroscopic signature of the colder region near the disk, that is, the enhanced emission in the metallic species.

## 7 Conclusions

We presented a comprehensive spectrophotometric study of the CTTS HM Lup. We examined the 2021 TESS light curve and the simultaneous spectroscopy obtained in the framework of the ULLYSES and PENELLOPE programs. The photometric data from 2021 were also compared with the 2019 TESS light curve and the long-term AAVSO monitoring. The analysis shows that HM Lup is a star in which the accretion process is unsteady and rapidly variable.

Using different emission lines, we reconstructed the temperature structure of the magnetosphere. The He i lines and the wings of the Balmer lines are formed in the pre-shock region and are related to the UV excess continuum. The emission lines from metallic species (Na i, Ca i, Ca ii, Mg i, Fe i, and Fe ii) are instead formed in lower temperature conditions, as already shown by Muzerolle et al. (2001) for the Na i D lines. It must be stressed that the line forming region is more complex than our schematic picture. Density and temperature vary continuously along the accretion flow, and NLTE conditions are expected for the gas (Azevedo et al. 2006). A detailed model should take into account the radiative transfer of light through a structure in which the local velocity and optical depth are functions of the distance from the star.
The rich emission line spectrum of HM Lup makes this system ideal to test the temperature gradient in non-isothermal magnetospheric models.

The photometric behavior of HM Lup can be explained as the result of variations in the accretion rate in a system in which the magnetospheric radius is smaller than the corotation radius. At the epoch sampled by the 2019 TESS light curve, HM Lup was in an unstable and chaotic regime of accretion. During the TESS 2021 observation, the system entered a regime of ordered accretion due to an increase in the accretion rate. The detected 4.79 d period can be mistaken for the stellar period, but it is likely associated with the Keplerian rotation of the matter at the truncation radius. We observed period variations in the long term AAVSO monitoring, in agreement with the simulations by Blinova et al. (2016). In particular, in the third segment of the \(B\) band observations, the system has a periodicity of 8.84 d, which could be the actual rotation period of the star.

The spectrum of the system during the burst shows the following three main characteristics: (1) emission in the He i lines increases without changes in the shape of the profile, (2) the Balmer lines develop an absorption component, and (3) emission in the metallic species increases. HM Lup offers the possibility to monitor a system with an outburst spectrum that is similar to, but less extreme than, that of EXors. Additional observational material, such as an observing campaign capable of covering a timescale of \(\sim 10\) days with high resolution spectra at nightly cadence and simultaneous photometry, is needed to better understand the nature of the complex variability of this Classical T Tauri Star.

###### Acknowledgements.

This work has been supported by Deutsche Forschungsgemeinschaft (DFG) in the framework of the YTHIACA Project (469334657) under the project codes STE 1068/9-1 and MA 8447/1-1.
AFR and JMA acknowledge financial support from the project PRIN-INAF 2019 "Spectroscopically Tracing the Disk Dispersal Evolution" (STRADE) and the Large Grant INAF 2022 "YSOs Outflows, Disks and Accretion: towards a global framework for the evolution of planet forming systems" (YODA). CFM, JCW, and KM are funded by the European Union (ERC, WANDA, 101039452). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. This work benefited from discussions with the ODYSSEUS team8 (HST AR-16129). The authors acknowledge Eleonora Fiorellino, Suzan Edwards, and Rene Oudmaijer for suggestions on this work. AA acknowledges Steve Shore for valuable discussions on this system. Funding for the TESS mission is provided by NASA's Science Mission directorate. The authors acknowledge with thanks the variable star observations from the _AAVSO International Database_ contributed by observers worldwide and used in this research, and Elizabeth Waagen for coordinating the AAVSO Alerts. The authors acknowledge the use of the electronic bibliography maintained by the NASA/ADS9 system.

Footnote 8: [https://sites.bu.edu/odysseus/](https://sites.bu.edu/odysseus/)

Footnote 9: [https://ui.adsabs.harvard.edu](https://ui.adsabs.harvard.edu)
2302.14485
Memory-aided Contrastive Consensus Learning for Co-salient Object Detection
Co-Salient Object Detection (CoSOD) aims at detecting common salient objects within a group of relevant source images. Most of the latest works employ the attention mechanism for finding common objects. To achieve accurate CoSOD results with high-quality maps and high efficiency, we propose a novel Memory-aided Contrastive Consensus Learning (MCCL) framework, which is capable of effectively detecting co-salient objects in real time (~150 fps). To learn better group consensus, we propose the Group Consensus Aggregation Module (GCAM) to abstract the common features of each image group; meanwhile, to make the consensus representation more discriminative, we introduce the Memory-based Contrastive Module (MCM), which saves and updates the consensus of images from different groups in a queue of memories. Finally, to improve the quality and integrity of the predicted maps, we develop an Adversarial Integrity Learning (AIL) strategy to make the segmented regions more likely composed of complete objects with less surrounding noise. Extensive experiments on all the latest CoSOD benchmarks demonstrate that our lite MCCL outperforms 13 cutting-edge models, achieving the new state of the art (~5.9% and ~6.2% improvement in S-measure on CoSOD3k and CoSal2015, respectively). Our source codes, saliency maps, and online demos are publicly available at https://github.com/ZhengPeng7/MCCL.
Peng Zheng, Jie Qin, Shuo Wang, Tian-Zhu Xiang, Huan Xiong
2023-02-28T10:58:01Z
http://arxiv.org/abs/2302.14485v2
# Memory-aided Contrastive Consensus Learning for Co-salient Object Detection ###### Abstract Co-Salient Object Detection (CoSOD) aims at detecting common salient objects within a group of relevant source images. Most of the latest works employ the attention mechanism for finding common objects. To achieve accurate CoSOD results with high-quality maps and high efficiency, we propose a novel Memory-aided Contrastive Consensus Learning (MCCL) framework, which is capable of effectively detecting co-salient objects in real time (\(\sim\)150 fps). To learn better group consensus, we propose the Group Consensus Aggregation Module (GCAM) to abstract the common features of each image group; meanwhile, to make the consensus representation more discriminative, we introduce the Memory-based Contrastive Module (MCM), which saves and updates the consensus of images from different groups in a queue of memories. Finally, to improve the quality and integrity of the predicted maps, we develop an Adversarial Integrity Learning (AIL) strategy to make the segmented regions more likely composed of complete objects with less surrounding noise. Extensive experiments on all the latest CoSOD benchmarks demonstrate that our lite MCCL outperforms 13 cutting-edge models, achieving the new state of the art (\(\sim\)5.9% and \(\sim\)6.2% improvement in S-measure on CoSOD3k and CoSal2015, respectively). Our source codes, saliency maps, and online demos are publicly available at [https://github.com/ZhengPeng7/MCCL](https://github.com/ZhengPeng7/MCCL). 
\({}^{1}\) Nanjing University of Aeronautics and Astronautics, Nanjing, China \({}^{2}\) ETH Zurich, Zurich, Switzerland \({}^{3}\) Inception Institute of Artificial Intelligence, Abu Dhabi, UAE \({}^{4}\) Harbin Institute of Technology, China \({}^{5}\) Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE {zhengpeng0108, qinjiebuaa, shawnwang.tech, tianzhu.xiang19, huan.xiong.math}@gmail.com ## Introduction Co-Salient object detection (CoSOD) aims at detecting the most common salient objects among a group of source images. Compared with the standard salient object detection (SOD) task, CoSOD is more challenging for distinguishing co-occurring objects across a group of images where salient objects of different classes and non-salient objects of the same class are both hard distractors. CoSOD methods also show their advantage of acting as a pre-processing step for other computer vision tasks, such as semantic segmentation [14], co-segmentation [15], and object tracking [20], _etc._. Previous works tend to exploit various types of consistency among the image group to solve the CoSOD task, including shared features [16] and common semantic information [13]. With the success of unified models in up-stream tasks [14, 15], the latest CoSOD models try to address salient object detection and common object detection in a unified framework [13, 12, 15]. Despite the promising performance achieved by these methods, most of them only focus on learning better consistent feature representations in an individual group [12, 13, 14, 15, 16], which may make them suffer from the following limitations. First, images from the Figure 1: **Accuracy (S-measure) and inference time (ms) of the up-to-date and representative deep-learning-based CoSOD methods on the CoSOD3k dataset [13]. We conduct the comparison on both accuracy (the vertical axis) and speed (the horizontal axis) among seven existing representative CoSOD models and our MCCL. 
Larger bubbles indicate a larger volume of model weights. Our MCCL achieves great performance (0.860 S-measure) in real time (7.6 ms) with a lightweight model (104.5 Mb weights and 5.93G FLOPs). All the methods are tested with batch size 2 on one A100-80G; an online benchmark for inference speed can be found at [https://github.com/ZhengPeng7/CoSOD_fps_collection](https://github.com/ZhengPeng7/CoSOD_fps_collection).** same group can only act as positive samples of each other. Consensus representations learned from all positive samples might be difficult to distinguish due to the lack of inter-group separability. Besides, the number of images in a single group is usually insufficient for models to learn robust and unique representations that are easily distinguishable from others. Due to the higher complexity of image context in real-world applications, the number of object classes will increase significantly, making the consensuses more likely to drift closer to one another and become harder to identify. In this situation, a module designed for bridging the cross-group connection and learning consensuses of distinction is in high demand. To achieve accurate and fast CoSOD, we propose Memory-aided Contrastive Consensus Learning (MCCL), which exploits common features within each group and identifies distinctions among different groups, guiding the model to produce co-saliency maps with high integrity. To fulfill the above goal, three key components are proposed in MCCL. **First**, we present the Group Consensus Aggregation Module (GCAM) for mining the common feature within the same group by the correlation principle. **Second**, we introduce the Memory-based Contrastive Module (MCM) to conduct robust contrastive learning with a long-term memory. More concretely, the consensus of each class is saved and updated with momentum in a memory queue to avoid the instability of online contrastive learning. 
**Third**, we employ the Adversarial Integrity Learning (AIL) to improve the integrity and quality of predicted maps in an adversarial fashion, where a discriminator identifies whether the masked regions are obtained from predicted or ground-truth maps. Analogous to generative adversarial networks [1], our model tries to fool the discriminator and produce high-quality and high-integrity maps that can mask complete and intact objects. Our main contributions can be summarized as follows: * We establish a fast yet strong CoSOD baseline with the Transformer, which outperforms most existing methods that are sophisticatedly equipped with many components. * We introduce the Group Consensus Aggregation Module (GCAM) to generate the consensus of each group in an effective way. To make the consensus more discriminative to each other, we propose the Memory-based Contrastive Module (MCM) in a metric learning way. * Furthermore, the Adversarial Integrity Learning (AIL) is proposed to improve the quality and integrity of predicted co-saliency maps in an adversarial learning manner. * We conduct extensive experiments to validate the superiority of our MCCL. Extensive quantitative and qualitative results show that our MCCL can outperform existing CoSOD models by a large margin. ## Related Work ### Salient Object Detection Before the deep learning era, handcrafted features played the most critical role in detection [12, 13, 14] among the traditional SOD methods. When it comes to the early years of deep learning, features are usually extracted from image patches, which will then be used to generate object proposals [22, 23, 24], or super-pixels [14, 15] as processing units. As stated in [14], the network architectures of existing SOD methods can be divided into five categories, _i.e._, U-shape, side-fusion, multi-branch, single-stream, and multi-stream. 
So far, U-shape has been the most widely used architecture [11], especially when the fusion between low-level and high-level features is needed. Supervision on the multi-stage output is also employed at the early stages by aggregating features from different levels of networks in the U-shape architecture to make the output features more robust and stable [15, 16, 17]. [15, 16] employed the attention mechanism in their models for further improvement. Besides, some external information is also introduced as extra guidance for training, such as boundary [18], edge [15], and depth [15]. ### Co-Salient Object Detection CoSOD emphasizes detecting salient objects across groups of images rather than in a single image. Traditional CoSOD methods utilize handcrafted cues (_e.g._, superpixels [1]) to explore the correspondence of images. In contrast, the deep learning-based methods learn the consensus feature representation of common objects in an end-to-end manner [26, 16]. Various model architectures are applied to improve the CoSOD performance, including CNN-based [16, 15] and Transformer-based models [16]. Though some of the existing methods investigate both intra-group and inter-group cues [16], there is still much room for improvement in the fully coordinated and simultaneous use of intra-group and inter-group information. ### Integrity Learning for Saliency Maps The quality of saliency maps has attracted much attention in recent years to make existing saliency-related tasks closer to real-world applications. [14] tries to guide its models to learn integrity via the collaboration between global context and local objects. TSPOANet [14] adopts a capsule network to model the part-object relationship to achieve better wholeness and uniformity of segmented salient objects. In [18], a hybrid loss is applied for more focus on improving the boundary of predicted maps. 
Furthermore, [15] investigates more into the integrity issue in SOD and tries solving this problem with their carefully designed components. In [15], a confidence enhancement module is proposed to make the predicted maps more binarized. ## Methodology In this section, we first introduce the overall architecture of our MCCL for the CoSOD task. Then, we sequentially introduce the proposed three key components: Group Consensus Aggregation Module (GCAM), Memory-based Contrastive Module (MCM), and Adversarial Integrity Learning (AIL). First, GCAM is used to exploit the common features of images in the same group. Second, MCM is applied to make the learned consensus of different groups more robust and discriminative to each other. Finally, we employ AIL to improve the integrity and quality of predicted maps in an adversarial way. Note that MCM and AIL are only used during training and thus can be entirely discarded during inference for a more lightweight model. ### Overview Fig. 2 illustrates the basic framework of the proposed MCCL including the learning pipeline. Different from existing CoSOD models that take images from a single group [22, 20, 23, 24] as input, our model receives images from multiple groups as input, bringing the potential to bridge the intersection between different groups. First, we take images of \(N\) (default as 2) groups as the input \(\{G_{1}\), \(G_{2}\),..., \(G_{N}\}\). We concatenate all the images as a whole batch \(G\), which is then fed to the encoder. With the backbone network (default as the Transformer network PVTv2 [23]) as our encoder, embedded features are extracted as \(\mathcal{F}\), which is then split by their group categories as \(\{\mathcal{F}_{1},\mathcal{F}_{2},...,\mathcal{F}_{N}\}\), where \(\mathcal{F}_{N}=\{F_{N,s}\}_{s=1}^{S}\in\mathbb{R}^{S\times C\times H\times W}\), \(C\) denotes the channel number, \(H\times W\) means the spatial size, and \(N\) is the group size. 
Meanwhile, the intermediate features \(\{\mathcal{F}_{1}^{lat},\mathcal{F}_{2}^{lat},\mathcal{F}_{3}^{lat}\}\) at different stages of our encoder are saved and fed to their corresponding stages of the decoder by a 1x1 convolutional layer. Then, \(\{\mathcal{F}_{1},\mathcal{F}_{2},...,\mathcal{F}_{N}\}\) are sequentially fed to GCAM to obtain the consensus of each group. With the consensus of groups \(\{\mathcal{F}_{1}^{out},\mathcal{F}_{2}^{out},...,\mathcal{F}_{N}^{out}\}\), the memory of the corresponding classes is updated in the queue with momentum, supervised by a metric learning loss used in [22]. Furthermore, the consensus of all groups is concatenated as \(\mathcal{F}^{out}\) and fed to the decoder, which consists of four stacked standard residual blocks and combines the early features from lateral connections. Co-saliency maps \(\mathcal{M}\) are generated at the end of the decoder. The prediction of the decoder \(\mathcal{M}\) is supervised by the Binary Cross Entropy (BCE) loss and the Intersection over Union (IoU) loss, which provide pixel-level and region-level supervision, respectively. Finally, the predicted co-saliency maps \(\mathcal{M}\) are processed together with the source images \(G\) and the ground-truth maps \(GT\). The pixel-wise multiplication between \(G\) and \(\mathcal{M}\) leads to \(G^{\mathcal{M}}\), and we obtain \(G^{GT}\) in a similar way. \(G^{\mathcal{M}}\) and \(G^{GT}\) are then fed to an independent discriminator, identifying whether the masked images are generated by the ground-truth maps \(G^{GT}\) or the source images \(G\), which include intact and complete objects. Accordingly, the adversarial loss from the discriminator is applied to the whole generator, and the BCE loss is given to the discriminator. 
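The batching-and-splitting flow described above can be traced with placeholder arrays. This is a minimal shape-bookkeeping sketch only: the `encode` and `group_consensus` stand-ins below are illustrative assumptions (an identity map and a plain group average), not the actual PVTv2 backbone or the GCAM module.

```python
import numpy as np

# Illustrative sizes only: N groups, S images per group, C channels, HxW spatial.
N, S, C, H, W = 2, 4, 64, 8, 8

def encode(x):
    # Stand-in for the PVTv2 backbone (assumption: identity, no downsampling).
    return x

def group_consensus(feats):
    # Stand-in for GCAM (assumption): collapse the group into one consensus
    # feature by averaging; the real module uses a non-local block + correlation.
    return feats.mean(axis=0, keepdims=True)          # (1, C, H, W)

# 1) concatenate the images of all groups into a single batch G
groups = [np.random.rand(S, C, H, W) for _ in range(N)]
G = np.concatenate(groups, axis=0)                    # (N*S, C, H, W)

# 2) encode, then split the embedded features back by group
F = encode(G)
F_split = np.split(F, N, axis=0)                      # N arrays of (S, C, H, W)

# 3) per-group consensus, concatenated and handed to the decoder
F_out = np.concatenate([group_consensus(f) for f in F_split], axis=0)
print(F_out.shape)                                    # (2, 64, 8, 8)
```

The real pipeline additionally carries the lateral features \(\mathcal{F}^{lat}\) into the decoder stages; the sketch only traces the main consensus path.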
### Group Consensus Aggregation Module In the wild, objects of the same category tend to share similar appearance, which has been exploited in many related tasks, such as video tracking [23] and semantic segmentation [23], where the correspondence between common objects is used as Figure 2: **Overall framework of the proposed Memory-aided Contrastive Consensus Learning (MCCL). Input images are obtained from multiple groups and fed to an encoder. First, we employ the Group Consensus Aggregation Module (GCAM), where the intra-group features of each group can be learned separately. With the consensus learned from each single group, the consensus features are updated in the memory of each class in the queue of Memory-based Contrastive Module (MCM). Then, contrastive learning is conducted to make the consensus more discriminative to each other. Each stage of our encoder and decoder is connected with only a 1x1 convolution layer for feature adding with the least computation. Our decoder is composed of four DecBlk, which is the vanilla residual block. We design our model as simple as possible to make our study more open and solid. Finally, saliency maps of all groups are predicted based on the supervision from the BCE and IoU losses.** prior information. Here we also apply this mechanism to CoSOD. Similar to Fan et al. (2021), we employ the non-local block Wang et al. (2018) to extract the affinity feature. As shown in Fig. 3, we first split the output feature of the encoder \(\mathcal{F}_{n}\) into \(\{\mathcal{F}_{n}^{A},\mathcal{F}_{n}^{B}\}\), which are then shuffled and fed into the non-local block. Subsequently, in the non-local block, we compute the affinity map of the feature and conduct matrix multiplication between the affinity map and the value feature (_i.e._, 'V' in the non-local block) to obtain the consensus feature \(\{\mathcal{F}_{n}^{A^{\prime}},\mathcal{F}_{n}^{B^{\prime}}\}\). 
Finally, we perform depth-wise correlation to fuse the original feature with the consensus feature, and concatenate them to form the final consensus representation \(\mathcal{F}_{out}\). ### Memory-based Contrastive Module Metric learning is a widely-used technique that contributes to distinguishing features of different clusters and works in many tasks, including CoSOD Han et al. (2018); Zhang et al. (2017); Zheng et al. (2022). However, CoSOD datasets only contain a limited number of images (tens of images) of limited groups (less than 300 groups). In such cases, naive metric learning cannot work very well due to the small number of samples insufficient for distance measurement. To overcome this issue, some contrastive learning methods introduce the memory queue to achieve more robust contrastive learning with a long-term memory, such as MoCo He et al. (2020), OIM Xiao et al. (2017), _etc._. Inspired by these works, we propose the MCM, which saves the consensus features of each class into memory blocks and updates the corresponding blocks with momentum in every batch. To be more specific, as shown in Fig. 2, the consensus of all groups \(\{\mathcal{F}_{1}^{out},\mathcal{F}_{2}^{out},...,\mathcal{F}_{N}^{out}\}\) are saved or updated in their own memory blocks as \(\{\mathcal{C}_{1},\mathcal{C}_{2}...,\mathcal{C}_{N}\}\). The memory update is as follows: \[\mathcal{C}_{1}^{t}=\beta*\mathcal{C}_{1}^{t-1}+(1-\beta)*\mathcal{F}_{1}^{out}, \tag{1}\] where \(\beta\) denotes the momentum factor and is set to 0.1 by default. When \(\beta\) is set to 0, the MCM belongs to fully online metric learning. As demonstrated in the MCM, each memory block splits itself into two parts, \(\mathcal{C}_{1}^{A}\) and \(\mathcal{C}_{1}^{B}\). In this case, \(\mathcal{C}_{1}^{B}\) is viewed as the positive samples of \(\mathcal{C}_{1}^{A}\), and the whole \(\mathcal{C}_{2}\) is considered as the negative samples of \(\mathcal{C}_{1}^{A}\)Zheng et al. (2022). 
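Before the loss is introduced, the memory mechanics of Eq. (1) can be sketched in a few lines. This is a toy sketch under assumptions: the string class key, the feature size, and the first-occurrence handling are illustrative, not the paper's exact implementation.

```python
import numpy as np

beta = 0.1   # momentum factor, the paper's default
memory = {}  # one memory block per class: class id -> consensus block C_k

def update_memory(cls_id, consensus):
    """Momentum update of Eq. (1): C^t = beta * C^{t-1} + (1 - beta) * F_out."""
    if cls_id in memory:
        memory[cls_id] = beta * memory[cls_id] + (1 - beta) * consensus
    else:
        memory[cls_id] = consensus.copy()  # first occurrence: just save it

update_memory("dog", np.ones(8))    # first batch containing the 'dog' group
update_memory("dog", np.zeros(8))   # later batch: blended with the old memory
print(memory["dog"][0])             # 0.1 * 1 + 0.9 * 0 = 0.1
```

Setting \(\beta=0\) makes the stored block always equal the latest consensus, i.e., the fully online metric learning mentioned above.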
Then, the loss of MCM can be computed by the GST loss Zheng et al. (2022) as follows: \[L_{\text{Triplet}}(C_{1},C_{2})=||\mathcal{C}_{1}^{A}-\mathcal{C}_{1}^{B}||_{2}-||\mathcal{C}_{1}^{A}-\mathcal{C}_{2}^{B}||_{2}+\alpha \tag{2}\] \[L_{\text{MCM}}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}L_{\text{Triplet}}(C_{i},C_{j}), \tag{3}\] where \(\alpha\) denotes the margin used in the triplet loss Schroff et al. (2015), which is set to 0.1. \(||\cdot||_{2}\) means the \(l_{2}\) norm of the input. ### Adversarial Integrity Learning Although some recent works have investigated the integrity of SOD Zhuge et al. (2022), they try to solve this problem by designing sophisticated model architectures and critical components to make predicted maps of higher integrity. These attempts may lead to maps with better quality, but their designs do not address the integrity problem directly. To explicitly solve this problem, we propose the Adversarial Integrity Learning (AIL) in our framework. Three data sources exist in AIL, _i.e._, the source images, the ground-truth maps, and the predicted maps in the current batch. During training, we perform pixel-wise multiplications on two pairs, _i.e._, (source images, ground-truth maps) and (source images, predicted maps), as shown in Fig. 4, to obtain \(G^{GT}\) and \(G^{\mathcal{M}}\), respectively. Then, we employ a discriminator to identify whether the regions of source images masked by these two maps are real or fake, as shown in Fig. 2. Obviously, the regions masked by the ground-truth maps are complete and intact objects with 100% integrity. During training, the loss from the discriminator guides the generator to produce maps that can localize objects with higher accuracy and integrity. The ablation results are shown in Fig. 7. ### Objective Function As shown in Fig. 
2, the objective function \(L_{sal}\) of the main network (generator) is a weighted combination of the low-level losses (_i.e._, BCE and IoU losses) and the high-level losses (_i.e._, metric and adversarial losses), while the discriminator involves the BCE loss. The details of \(L_{MCM}\) can be found in the 'Methodology' section above. The BCE, IoU, and adversarial losses are as follows: \[L_{\rm BCE}=-\sum{[Y\log(\hat{Y})+(1-Y)\log(1-\hat{Y})]}, \tag{4}\] \[L_{\rm IoU}=1-\frac{1}{N}\sum\frac{Y\cap\hat{Y}}{Y\cup\hat{Y}}, \tag{5}\] where \(Y\) is the ground-truth map and \(\hat{Y}\) is the predicted map. \[\hat{T}=\mathrm{discriminator}(\hat{Y}\cdot G), \tag{6}\] \[L_{\rm adv}=-\sum{[T\log(\hat{T})+(1-T)\log(1-\hat{T})]}, \tag{7}\] where \(\hat{Y}\) is the predicted map, \(G\) denotes the source images, \(\cdot\) denotes the pixel-wise multiplication, and \(\hat{T}\) and \(T\) denote the predictions of the discriminator on whether \(\hat{Y}\) and \(Y\) are the ground-truth map, respectively. Figure 3: **Group Consensus Aggregation Module.** The feature of the encoder is fed to the GCAM and handled group by group. The original features of one group are evenly split and shuffled before being fed to the non-local block. The depth-wise correlation bridges the semantic interaction between the consensus and original features. Figure 4: **Discriminator used in our AIL.** The discriminator has four discrimination blocks (DiscBlk) stacked sequentially, with 16, 32, 64, and 128 as the number of their output channels, respectively. Note that there is no batch normalization layer in the first DiscBlk in our implementation. 
Therefore, our final objective function is: \[L_{\rm sal}=\lambda_{1}L_{\rm BCE}+\lambda_{2}L_{\rm IoU}+\lambda_{3}L_{\rm MCM }+\lambda_{4}L_{\rm adv}, \tag{8}\] \[L_{\rm disc}=\lambda_{5}L_{\rm BCE}, \tag{9}\] where \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\), and \(\lambda_{5}\) are respectively set to 30, 0.5, 3, 10, and 3 to keep all the losses at a reasonable scale at the beginning of training to benefit the optimization. ## Experiments ### Datasets **Training Sets.** We follow [22] to use DUTS_class [22] and COCO-SEG [23] as our training sets. The whole DUTS_class is divided into 291 groups, which contain 8,250 images in total. COCO-SEG contains 200k images of 78 groups and corresponding binary maps. **Test Sets.** For a comprehensive evaluation of our MCCL, we test it on three widely used CoSOD datasets, _i.e._, CoCA [22], CoSOD3k [17], and CoSal2015 [22]. Among the three datasets, CoCA is the most challenging one. It is of much higher diversity and complexity in terms of background, occlusion, illumination, surrounding objects, _etc._. Following the latest benchmark [17], we do not evaluate on iCoseg [16] and MSRC [20], since only one salient object is given in most images there. It is more convincing to evaluate CoSOD methods on images with more salient objects, which is closer to real-life applications. ### Evaluation Protocol Following GCoNet [17], we employ the S-measure [17], maximum F-measure [17], maximum E-measure [17], and mean absolute error (MAE) to evaluate the performance in our experiments. ### Implementation Details We select samples from DUTS_class [22] and COCO-SEG [23] alternatively, and set the batch size as follows: \[batchsize=min(\#group1,...,\#groupN,48), \tag{10}\] where \(\#\) means the image number in the corresponding group. The images are resized to 256x256 for training and inference. The output maps are resized to the original size for evaluation. 
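As a sanity check of the loss definitions, Eqs. (2), (4), (5), (8), and (10) can be evaluated on toy data. The arrays, group sizes, and the soft intersection/union used for the IoU term are illustrative assumptions; the MCM and adversarial terms are set to zero here rather than computed.

```python
import numpy as np

def bce_loss(Y, Y_hat, eps=1e-7):
    """Pixel-level supervision of Eq. (4)."""
    Y_hat = np.clip(Y_hat, eps, 1 - eps)
    return -np.sum(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat))

def iou_loss(Y, Y_hat):
    """Region-level supervision of Eq. (5), with soft intersection/union."""
    inter = np.sum(Y * Y_hat)
    union = np.sum(Y + Y_hat - Y * Y_hat)
    return 1.0 - inter / union

def triplet_loss(c1a, c1b, c2b, alpha=0.1):
    """Eq. (2): pull C1^A toward C1^B, push it away from C2^B."""
    return np.linalg.norm(c1a - c1b) - np.linalg.norm(c1a - c2b) + alpha

# Toy maps; the lambda weights are the paper's settings for Eq. (8).
l1, l2, l3, l4 = 30, 0.5, 3, 10
Y     = np.array([[1.0, 0.0], [1.0, 0.0]])
Y_hat = np.array([[0.9, 0.1], [0.8, 0.2]])
L_mcm, L_adv = 0.0, 0.0   # placeholders: not computed in this toy example
L_sal = l1 * bce_loss(Y, Y_hat) + l2 * iou_loss(Y, Y_hat) + l3 * L_mcm + l4 * L_adv

# Batch-size rule of Eq. (10): capped by the smallest group or 48.
batchsize = min(60, 52, 48)   # made-up group sizes 60 and 52
print(round(L_sal, 3), batchsize)  # prints: 19.841 48
```

With the paper's \(\lambda\) weights, the BCE term dominates on this toy input, consistent with the stated intent of keeping all losses at a reasonable scale at the beginning of training.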
We apply three data augmentation strategies, _i.e._, horizontal flip, color enhancement, and rotation. Our MCCL is trained for 250 epochs with the AdamW optimizer [18]. The initial learning rate is 1e-4 and is divided by 10 for the last 20 epochs. The whole training process takes \(\sim\)3.5 hours and consumes only \(\sim\)7.5GB GPU memory. All the experiments are implemented based on the PyTorch library [24] with a single NVIDIA RTX3090 GPU. ### Ablation Study We conduct the ablation study to validate the effectiveness of each component (_i.e._, GCAM, MCM, and AIL) employed in our MCCL. The qualitative results regarding each module and the combination are shown in Fig. 5. **Baseline.** We establish a solid CoSOD network as the baseline. To keep pace with the latest Transformer network, we also build our model with both Transformer and convolutional neural networks as the backbone. Following GCoNet [17], we feed images of multiple classes and their ground-truth as the input to train our MCCL. Compared with previous CoSOD models, our baseline network achieves promising performance with a simpler architecture and much higher speed. To be consistent with the widely used Transformer network [19, 23, 24], in contrast with the previous CoSOD models [22, 17, 23, 24], we make our model shallower and build it with only four stages in its encoder and decoder. To achieve a pure and fast baseline network, we first substitute all the complex blocks in each lateral connection used in [22, 17, 23, 24, 25] with a single Figure 5: **Qualitative ablation studies of different modules and their combinations in our MCCL. 
(a) Source image; (b) Ground truth; (c) w/ GCAM; (d) w/ GCAM+MCM; (e) w/ GCAM+AIL; (f) w/ GCAM+MCM+AIL, _i.e._, the full version of our model.** \begin{table} \begin{tabular}{r||c c c c c|c c c c|c c c c c} \hline \hline \multicolumn{2}{c||}{\multirow{2}{*}{Method}} & \multirow{2}{*}{Pub.} & \multicolumn{3}{c|}{CoCA (Zhang et al., 2020c)} & \multicolumn{3}{c|}{CoSOD3k (Fan et al., 2022)} & \multicolumn{3}{c}{CoSal2015 (Zhang et al., 2016a)} \\ & & \(E_{e}^{\max}\uparrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\max}\uparrow\) & \(\epsilon\downarrow\) & \(E_{e}^{\max}\uparrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\max}\uparrow\) & \(\epsilon\downarrow\) & \(E_{e}^{\max}\uparrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\max}\uparrow\) & \(\epsilon\downarrow\) \\ \hline CBCS (2013) & TIP & 0.641 & 0.523 & 0.313 & 0.180 & 0.637 & 0.528 & 0.466 & 0.228 & 0.656 & 0.544 & 0.532 & 0.233 \\ GWD (2017) & IJCAI & 0.701 & 0.602 & 0.408 & 0.166 & 0.777 & 0.716 & 0.649 & 0.147 & 0.802 & 0.744 & 0.706 & 0.148 \\ RCAN (2019) & IJCAI & 0.702 & 0.616 & 0.422 & 0.160 & 0.808 & 0.744 & 0.688 & 0.130 & 0.842 & 0.779 & 0.764 & 0.126 \\ GCAGC (2020a) & CVPR & 0.754 & 0.669 & 0.523 & 0.111 & 0.816 & 0.785 & 0.740 & 0.100 & 0.866 & 0.817 & 0.813 & 0.085 \\ GICD (2020c) & ECCV & 0.715 & 0.658 & 0.513 & 0.126 & 0.848 & 0.797 & 0.770 & 0.079 & 0.887 & 0.844 & 0.844 & 0.071 \\ ICNet (2020) & NeurIPS & 0.698 & 0.651 & 0.506 & 0.148 & 0.832 & 0.780 & 0.743 & 0.097 & 0.900 & 0.856 & 0.855 & 0.058 \\ CoADNet (2020b) & NeurIPS & - & - & - & 0.878 & 0.824 & 0.791 & 0.076 & 0.914 & 0.861 & 0.858 & 0.064 \\ DeepACG (2021) & CVPR & 0.771 & 0.688 & 0.552 & 0.102 & 0.838 & 0.792 & 0.756 & 0.089 & 0.892 & 0.854 & 0.842 & 0.064 \\ GCoNet (2021) & CVPR & 0.760 & 0.673 & 0.544 & 0.105 & 0.860 & 0.802 & 0.777 & 0.071 & 0.887 & 0.845 & 0.847 & 0.068 \\ CoEGNet (2022) & TPAMI & 0.717 & 0.612 & 0.493 & 0.106 & 0.837 & 0.778 & 0.758 & 0.084 & 0.884 & 0.838 & 0.836 & 0.078 \\ CADC (2019) & ICCV & 0.744 & 
0.681 & 0.548 & 0.132 & 0.840 & 0.801 & 0.859 & 0.096 & 0.906 & 0.866 & 0.862 & 0.064 \\ DCFM\({}^{*}\) (2022) & CVPR & 0.783 & 0.710 & **0.598** & **0.085** & 0.874 & 0.810 & 0.805 & 0.067 & 0.892 & 0.838 & 0.856 & 0.067 \\ UFO (2022) & arXiv & 0.782 & 0.697 & 0.571 & 0.095 & 0.874 & 0.819 & 0.797 & 0.073 & 0.906 & 0.860 & 0.865 & 0.064 \\ \hline **Ours** & Sub. & **0.796** & **0.714** & 0.590 & 0.103 & **0.903** & **0.858** & **0.837** & **0.061** & **0.927** & **0.890** & **0.891** & **0.051** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative comparisons between our MCCL and other methods. “\(\uparrow\)” (“\(\downarrow\)”) means that the higher (lower) is better. \({}^{*}\) denotes the state-of-the-art method. UFO (Su et al., 2022) is still an arXiv paper and does not show much improvement compared with DCFM (Yu et al., 2022), so we set DCFM as the previous SoTA.** \begin{table} \begin{tabular}{r c c||c c c|c c c c|c c c c} \hline \hline \multicolumn{2}{c||}{\multirow{2}{*}{Modules}} & \multirow{2}{*}{COCA (Zhang et al., 2020c)} & \multicolumn{3}{c|}{CoSOD3k (Fan et al., 2022)} & \multicolumn{3}{c}{CoSal2015 (Zhang et al., 2016a)} \\ & & & \(E_{e}^{\max}\uparrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\max}\uparrow\) & \(\epsilon\downarrow\) & \(E_{e}^{\max}\uparrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\max}\uparrow\) & \(\epsilon\downarrow\) & \(E_{e}^{\max}\uparrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\max}\uparrow\) & \(\epsilon\downarrow\) \\ \hline & & & 0.756 & 0.683 & 0.553 & 0.118 & 0.880 & 0.828 & 0.798 & 0.075 & 0.905 & 0.866 & 0.861 & 0.062 \\ ✓ & & & 0.779 & 0.709 & 0.577 & 0.103 & 0.894 & 0.851 & 0.824 & 0.061 & 0.921 & 0.884 & 0.882 & 0.053 \\ ✓ & ✓ & & 0.788 & 0.711 & 0.585 & 0.100 & 0.898 & 0.853 & 0.828 & 0.060 & 0.925 & 0.889 & 0.886 & **0.050** \\ ✓ & & ✓ & 0.789 & 0.714 & 0.585 & **0.097** & 0.900 & 0.855 & 0.831 & 0.060 & 0.924 & 0.888 & 0.887 & 0.053 \\ \hline ✓ & ✓ & ✓ & **0.796** & **0.714** & **0.590** & 
0.103 & **0.903** & **0.858** & **0.837** & **0.061** & **0.927** & **0.890** & **0.891** & **0.051** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative ablation studies of the proposed components in our MCCL. The components include the GCAM, MCM, AIL, and their combinations.** Figure 6: **Qualitative comparisons of our MCCL and other methods. ‘GT’ denotes the ground truth. We select the results of hard cases due to various reasons. The ‘Extremely Difficult Cases’ means the chopstick group on the test set of CoCA (Zhang et al., 2020c), as chopsticks are thin, tall, and hard to detect. This could be the most difficult case among all groups on the existing test sets.** 1x1 convolution layer [16] as the vanilla Feature Pyramid Network (FPN) [16] does. Secondly, we set only a single residual block as the decoding block, where the output is added with the features from the lateral connection. Finally, instead of the multi-stage supervision on all stages of the decoder [23, 16, 15], we set the pixel-level supervision on only the final output with a weighted sum of BCE and IoU losses to guide the model locally and globally, respectively. Our baseline can beat most of the existing CoSOD methods, and thus can be referred to as a strong baseline for others in the future. **GCAM.** As the performance evaluated in Tab. 2, our GCAM brings much improvement not only on CoCA and CoSOD3k that focus more on complex context with multiple objects, but also on CoSal2015 that is a relatively simple dataset but needs more attention on the precise SOD in a simple environment. **MCM.** In Tab. 2, MCM shows its consistent improvement in all metrics on all datasets. As shown in Fig. 5, MCM helps our model make more accurate predictions than those models without it. **AIL.** AIL guides our model to learn the integrity of predicted regions, and the produced co-saliency maps tend to be more robust and contain one or more complete and intact objects. As shown in Fig. 
7, the improvement brought by AIL can be seen from three perspectives. 1) On the object level, AIL increases the completeness of predicted maps of slender objects, which are usually hard to fully detect. 2) Inside the object, AIL helps fill the unconfident regions which break the structural integrity of the detected objects. 3) Outside the object, AIL suppresses the distractors that would otherwise cause the predicted regions to include more than a complete object. In summary, 1) GCAM splits features into two parts, accelerating the affinity generation. 2) MCM is initially motivated by OIM [21]. Differently, MCM saves features by classes instead of identities; OIM uses the final normalized features to update the queue, while we use the consensus generated by GCAM, which aligns well with CoSOD. 3) The adversarial learning strategy in AIL can also be found in domain adaptation [14], but we are the first to accommodate it to SOD and to use the segmented regions for discrimination. We also apply AIL to other CoSOD models to validate its high generalizability, as shown in Tab. 3. ### Comparison to State-of-the-Art Methods To make a comprehensive comparison, we compare our MCCL with one classical algorithm, CBCS [13], and 12 up-to-date deep-learning-based CoSOD models (see Tab. 1 for all methods used for comparison). Since CoSOD methods have gained much improvement in the past few years and obtained much better performance compared with single-SOD methods, we do not list the single-SOD ones, following previous works [16, 23, 24]. The detailed leaderboard of previous methods can be found in [16]. **Quantitative Results.** Tab. 1 shows the quantitative results of our MCCL and previous competitive methods. Given the above results, we can see that our MCCL outperforms all the existing methods, especially on the CoSOD3k [16] and CoSal2015 [23] datasets, where the ability to detect salient objects takes higher priority than finding objects of the common class. 
**Qualitative Results.** Fig. 6 shows the co-saliency maps predicted by different methods for a clear qualitative comparison, where we provide four different types of complex samples from CoCA [23] and CoSOD3k [16]. Compared with existing models, our MCCL shows a stronger ability to eliminate distractors, detect tiny targets, and handle the objects that blend into complex scenes. The extremely difficult cases in which other up-to-date methods fail most of the time further demonstrate the more robust performance of our MCCL. ## Conclusion In this paper, we investigate a novel memory-aided contrastive consensus learning framework (_i.e._, MCCL) for CoSOD. As experiments show, the memory-based contrastive learning with group consensus works effectively to enhance the representation capability of the obtained group features. Besides, the adversarial integrity learning strategy does benefit the saliency model, with the potential to improve the integrity and quality of saliency maps for a variety of SOD and CoSOD models in a general way. \begin{table} \begin{tabular}{l c|c c c c} \hline CoSOD Models & AIL & \multicolumn{3}{c}{CoCA [23]c} \\ \hline GCoNet [23] & & 0.760 & 0.673 & 0.544 & 0.105 \\ GCoNet [23] & ✓ & 0.777 & 0.681 & 0.549 & 0.106 \\ GICD [23] & & 0.715 & 0.658 & 0.513 & 0.126 \\ GICD [23] & ✓ & 0.718 & 0.675 & 0.524 & 0.127 \\ \hline \end{tabular} \end{table} Table 3: **Quantitative ablation studies of the proposed AIL on different models.** We apply the proposed AIL to other CoSOD models [16, 23]c and conduct the evaluation on CoCA [23]c. Figure 7: **Qualitative ablation studies of our AIL.** We conduct the qualitative comparison between the baseline model with (w/) and without (w/o) the proposed AIL.
2310.20531
Force Free Magnetic Field and Modifications in Kerr Black Hole
A recent study presented a solution for a Schwarzschild black hole with a force-free magnetic field, showing how the canonical form of the background metric changes, Found. Phys. 52 (2022) 4, 93. It is therefore natural for this research to seek modified solutions for the Kerr black hole as well. The goal of this paper is, thereby, to pinpoint an exact solution for Kerr black holes perturbed by force-free magnetic fields. To do this we use the well-known tetrad formalism and obtain an explicit expression for the electromagnetic strength tensor in the background metric, that is Kerr, based on tangent-space calculations. Analyzing the stress-energy tensor reveals the perturbed factors in the metric, which allow us to better understand the physics of the disks around massive and supermassive black holes. These results show that it is not enough to depend solely on perturbation theory set against the background metric, without considering the backreaction of the force-free magnetofluid on it. This research indicates the necessity of studying the impact of force-free magnetic fields on the structure and evolution of these mysterious cosmic objects.
Haidar Sheikhahmadi
2023-10-31T15:14:53Z
http://arxiv.org/abs/2310.20531v1
# Force Free Magnetic Field and Modifications in Kerr Black Hole ###### Abstract A recent study presented a solution for a Schwarzschild black hole with a force-free magnetic field, showing how the canonical form of the background metric changes (Sheikhahmadi, 2021). It is therefore natural for this research to seek modified solutions for the Kerr black hole as well. The goal of this paper is, thereby, to pinpoint an exact solution for Kerr black holes perturbed by force-free magnetic fields. To do this we use the well-known tetrad formalism and obtain an explicit expression for the electromagnetic strength tensor in the background metric, that is Kerr, based on tangent-space calculations. Analyzing the stress-energy tensor reveals the perturbed factors in the metric, which allow us to better understand the physics of the disks around massive and supermassive black holes. These results show that it is not enough to depend solely on perturbation theory set against the background metric, without considering the backreaction of the force-free magnetofluid on it. This research indicates the necessity of studying the impact of force-free magnetic fields on the structure and evolution of these mysterious cosmic objects. Kerr Black Hole, Force Free Magnetic Field, Accretion Disc, Blandford-Znajek Mechanism, Tetrad Formalism + Footnote †: journal: ApJ Haidar Sheikhahmadi (ORCID: 0000-0002-1881-7088) ## 1 Introduction Scientists and astrophysicists are still fascinated by the influence of Kerr black holes--remarkable cosmic singularities endowed with rotation--on spacetime (Kerr, 1963); see also (Newman & Janis, 1965), (Boyer & Lindquist, 1967). With the growing comprehension of these puzzling objects, numerous intriguing aspects have been observed in their vicinity (Akiyama et al., 2019). 
Among these phenomena, the appearance of force-free magnetic fields (FFMFs) surrounding Kerr black holes may be one of the most interesting, illustrating how gravity, electromagnetism, and magnetohydrodynamics work together in severe astrophysical environments (Chandrasekhar & Elbert, 1953), (Teukolsky & Press, 1974), (Chandrasekhar, 1961), (Semenov et al., 2004), (Thorne & MacDonald, 1982), (Brennan et al., 2013). Blandford and Znajek developed a mathematical model which assumes that rotating black holes interact with a magnetized plasma and extract rotational energy in the process. It is a working scenario that explains why there are strong jets within active galactic nuclei (AGN) and other astrophysical systems. In this process, the rotational energy of the black hole accelerates a collimated outflow, or jet, by means of the Blandford-Znajek mechanism (Blandford & Znajek, 1977), through the interaction between the black hole and the large-scale magnetic fields of the plasma. The FFMF regime arises when the energy associated with a plasma's inertial mass is much smaller than its electromagnetic energy (Priest, 2014), (Marsh, 1996). In this regime the charged particles are closely tied to the magnetic field lines and move along them at speeds approaching the speed of light (lust & Schlutter, 1954), (Teukolsky & Press, 1974). In other words, near rotating black holes the rotational energy of the black hole supports strong, organized magnetic fields, which deeply affect the local plasma behavior (Aschwanden, 2004). Investigating the FFMF surrounding a Kerr black hole offers insight into the complex geometry associated with high rotation rates (Uzdensky, 2005), (Contopoulos et al., 2013), (Wand & Ritz, 2014), (Lupsasca & Rodriguez, 2015). 
Understanding the relationship between the gravitational field of a black hole, its rotation, and the behavior of the magnetized plasma in its surroundings helps scientists understand how astrophysical jets are generated, why some accretion disks remain stable while others collapse, and, most importantly, how energy can be extracted from, or deposited into, the Kerr black hole (Wilson & Ritz, 2015), (Pan & Yu, 2015). Advances in numerical simulation and theoretical modeling over the past few years have made it possible to study the FFMF of Kerr black holes in more detail (Grignani et al., 2018), (Zajacek et al., 2018), (Misner et al., 1998). Such studies illuminate the basic physics of magnetized plasma in the presence of strong gravity and black hole rotation. Here we refer the reader to a significant list of literature on FFMF in Kerr black holes (Yuan et al., 2019), (Grignani et al, 2020), (Camiloni et al, 2020) and many other outstanding references therein. This study explores the FFMF surrounding Kerr black holes and the associated physics of rotating cosmological singularities. We present the modifications arising due to the presence of the FFMF near Kerr black holes. The study of such details of the FFMF around Kerr black holes will help us better understand how the phenomena of gravity, electromagnetism and plasma interact in the most extreme places of the Universe. Through these modifications, one may seek to aid the larger scientific quest of unravelling the intricacies of Kerr black holes, and to gain deeper insight into the underlying physical principles behind such sophisticated phenomena. Inspired by a recent paper on the FFMF in Schwarzschild black holes (Sheikhahmadi, 2021), one may wonder whether a similar study can be undertaken for Kerr black holes as a much more physical object. This paper is organised as follows: In Sec. 
2 the necessary mathematical tools, including the flat and curved electromagnetic strength tensors, the background metric, the tangent space, and their relationships, will be presented. In Sec. 3, the energy momentum tensor in the Kerr background will be obtained; then, considering it as a source for the perturbation, in Sec. 4 the different components of the perturbed metric will be obtained in detail. Sec. 5 is devoted to some discussion and concluding remarks. ## 2 Mathematics of Force-Free Magnetic Fields Here, we shall consider the consequences of the presence of force-free electrodynamics around the Kerr black hole. In our analysis, we compute the various elements of the FFMF in a spherical coordinate system. Then, by employing the tetrad formalism (Cartan, 1922), (Newman & Penrose, 1962), we transform the electromagnetic field tensor to the Kerr black hole background. By taking advantage of force-free electrodynamics, we can study any relevant scenario, including the Blandford-Znajek mechanism. The significance of this mechanism is considerable for several astrophysical effects, including the existence of outer disks around massive black holes in galaxies, as occurs in the centre of the M87 galaxy. As a result, it sheds some light on how jets and gamma-ray bursts (GRBs) form. The essential construction for such phenomena is the same, yet our attitude towards their study may differ somewhat. In particular, we want to explore the effect of the force-free magnetofluid energy-momentum tensor, acting as a source, on the Kerr black hole's metric. Examining how the FFMF deforms the geometry associated with the Kerr black hole can provide useful information about the resulting changes in the geometry. 
### Force-Free Field in Spherical Coordinate Here we obtain the different components of the force-free electromagnetic field in spherical coordinates, with line element \[dS^{2}=-dt^{2}+dr^{2}+r^{2}d\theta^{2}+r^{2}{\rm sin}^{2}\theta d\varphi^{2}\,. \tag{1}\] The master equation that governs the force-free magnetic field reads \[\nabla^{2}B+\alpha^{2}B=0\,, \tag{2}\] where \(B\) denotes the magnetic field and \(\alpha\) stands for a scalar function; one notices that by taking \(\alpha\) constant the problem reduces to an exactly solvable one. There are several techniques, both analytical and numerical, to solve Eq.(2). Here we follow the approach of (Chandrasekhar & Kendall, 1957),(Chandrasekhar, 1956),(Chandrasekhar & Xanthopoulos, 1979). By introducing a scalar function \(\Psi\), Eq.(2) can be rewritten as follows \[\nabla^{2}\Psi+\alpha^{2}\Psi=0\,. \tag{3}\] This equation readily admits three independent solutions, \[L_{S}=\nabla\Psi\,,\ T=\nabla\times\left(\hat{e}\Psi\right),\ S=\frac{1}{\alpha}\nabla\times T\,, \tag{4}\] well known as the solenoidal, toroidal and poloidal solutions respectively. Here \(\hat{e}\) is a unit vector along an arbitrary but fixed direction. Combining the two above equations one obtains \[B=\frac{1}{\alpha}\nabla\times\nabla\times\left(\hat{e}\Psi\right)+\nabla\times\left(\hat{e}\Psi\right) \tag{5}\] By introducing \(\Psi(r,\theta)=R(r)\Theta(\theta)\), aiming at separating \(\Psi\) in Eq.(3), and by virtue of Eq.(1), one has \[\frac{1}{r^{2}}\frac{d}{dr}\big{(}r^{2}\frac{dR(r)}{dr}\big{)}+\alpha^{2}R(r)-\frac{n(n+1)}{r^{2}}R(r) =0\,,\] \[\frac{1}{\sin\theta}\frac{d}{d\theta}\big{(}\sin\theta\frac{d\Theta(\theta)}{d\theta}\big{)}+n(n+1)\Theta(\theta) =0\,. 
\tag{6}\] Then, the function \(\Psi(r,\theta)\) reads \[\Psi(r,\theta)=\sum_{n=1}(kr)j_{n}(kr)(1-\cos\theta)P_{n}^{(1,-1)}(\cos\theta)\,, \tag{7}\] here \(j_{n}\) stands for the spherical Bessel functions, \(P_{n}^{(1,-1)}\) denotes the well-known Jacobi polynomials, and \(k=\alpha\left(K^{2}+\Psi^{2}\right)^{1/2}/\Psi\), in which if \(K\) is a constant, e.g., \(K=0\), then \(\alpha\) reduces to \(k\) (Marsh, 1996; Sheikhahmadi, 2021). As a first candidate, in Eq.(7) one can consider \(n=1\); since \((1-\cos\theta)P_{1}^{(1,-1)}(\cos\theta)=(1-\cos\theta)(1+\cos\theta)=\sin^{2}\theta\), this immediately reduces to the following expression \[\Psi(r,\theta)=\Big{(}\frac{\sin(kr)}{kr}-\cos(kr)\Big{)}\sin^{2}\theta\,. \tag{8}\] Utilizing all the above results, the components of the axially symmetric \(B(r,\theta)\) are obtained as follows \[B_{r}=\frac{1}{r^{2}\sin\theta}\partial_{\theta}\Psi\,,\ B_{\theta}=-\frac{1}{r\sin\theta}\partial_{r}\Psi\,,\ B_{\phi}=\frac{k\Psi}{r\sin\theta}\,. \tag{9}\] Having the components of \(B\), and considering metric (1), both covariant and contravariant forms of the electromagnetic strength tensor can be obtained (Sheikhahmadi, 2021) \[\mathfrak{F}_{ab}=\left(\begin{array}{cccc}0&E_{r}&rE_{\theta}&r\sin\theta E_{\phi}\\ -E_{r}&0&-rB_{\phi}&r\sin\theta B_{\theta}\\ -rE_{\theta}&rB_{\phi}&0&-r^{2}\sin\theta B_{r}\\ -r\sin\theta E_{\phi}&-r\sin\theta B_{\theta}&r^{2}\sin\theta B_{r}&0\end{array}\right)\,, \tag{10}\] \[\mathfrak{F}^{ab}=\left(\begin{array}{cccc}0&-E_{r}&-\frac{E_{\theta}}{r}&-\frac{E_{\phi}}{r\sin\theta}\\ E_{r}&0&-\frac{B_{\phi}}{r}&\frac{B_{\theta}}{r\sin\theta}\\ \frac{E_{\theta}}{r}&\frac{B_{\phi}}{r}&0&-\frac{B_{r}}{r^{2}\sin\theta}\\ \frac{E_{\phi}}{r\sin\theta}&-\frac{B_{\theta}}{r\sin\theta}&\frac{B_{r}}{r^{2}\sin\theta}&0\end{array}\right)\,, \tag{11}\] where the lower-case Latin indices \(a\) and \(b\) refer to the spherical coordinates. Obviously, for the FFMF problem the electric components are equal to zero, i.e., \(E_{r}=E_{\theta}=E_{\phi}=0\), see (Brennan et al., 2013),(Blinder, 2004). 
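As a quick consistency check, the \(n=1\) field of Eq.(9) can be verified symbolically: it is divergence-free and satisfies the Beltrami (force-free) condition \(\nabla\times B=kB\), which is equivalent to Eq.(2) with constant \(\alpha=k\). A minimal sketch using SymPy, taking the \(n=1\) angular factor \((1-\cos\theta)P_{1}^{(1,-1)}(\cos\theta)=\sin^{2}\theta\) from Eq.(7):

```python
import sympy as sp

r, th, k = sp.symbols('r theta k', positive=True)

# n = 1 radial part of Eq.(7): (kr) j_1(kr) = sin(kr)/(kr) - cos(kr).
# With P_1^(1,-1)(x) = 1 + x, the n = 1 angular factor of Eq.(7) is
# (1 - cos th)(1 + cos th) = sin^2 th.
f = sp.sin(k*r)/(k*r) - sp.cos(k*r)
Psi = f * sp.sin(th)**2

# Field components per Eq.(9)
B_r = sp.diff(Psi, th) / (r**2 * sp.sin(th))
B_th = -sp.diff(Psi, r) / (r * sp.sin(th))
B_ph = k * Psi / (r * sp.sin(th))

# Divergence in spherical coordinates (axisymmetric, so no phi derivative)
div_B = sp.simplify(sp.diff(r**2 * B_r, r) / r**2
                    + sp.diff(sp.sin(th) * B_th, th) / (r * sp.sin(th)))

# Curl in spherical coordinates; the force-free condition is curl B = k B
curl_r = sp.diff(sp.sin(th) * B_ph, th) / (r * sp.sin(th))
curl_th = -sp.diff(r * B_ph, r) / r
curl_ph = (sp.diff(r * B_th, r) - sp.diff(B_r, th)) / r

beltrami = [sp.simplify(c - k * b)
            for c, b in ((curl_r, B_r), (curl_th, B_th), (curl_ph, B_ph))]
print(div_B, beltrami)
```

All four expressions reduce to zero exactly, so the current is everywhere parallel to the field and the Lorentz force \(J\times B\) vanishes, as the force-free regime requires.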
### Force-Free Field in Kerr Metric In this subsection, we extend the calculations to a Kerr background, in terms of the Boyer and Lindquist coordinates (Boyer and Lindquist, 1967), that is \[g_{\mu\nu}=\left(\begin{array}{cccc}\frac{r^{2}+a^{2}\cos^{2}\left(\theta\right)}{a^{2}-2mr+r^{2}}&0&0&0\\ 0&r^{2}+a^{2}\cos^{2}\left(\theta\right)&0&0\\ 0&0&\frac{\sin\left(\theta\right)^{2}\left(\left(a^{4}+a^{2}r^{2}\right)\cos^{2}\left(\theta\right)+2mra^{2}\sin^{2}\left(\theta\right)+a^{2}r^{2}+r^{4}\right)}{r^{2}+a^{2}\cos^{2}\left(\theta\right)}&-\frac{2mra\sin^{2}\left(\theta\right)}{r^{2}+a^{2}\cos^{2}\left(\theta\right)}\\ 0&0&-\frac{2mra\sin^{2}\left(\theta\right)}{r^{2}+a^{2}\cos^{2}\left(\theta\right)}&\frac{2mr-r^{2}-a^{2}\cos^{2}\left(\theta\right)}{r^{2}+a^{2}\cos^{2}\left(\theta\right)}\end{array}\right)\,, \tag{12}\] where \(G=1\) and \(a:=J/m\) is the Kerr parameter. We will need the contravariant form of the metric (12) in the following calculations, therefore we present it here as well: \[g^{\mu\nu}=\left(\begin{array}{cccc}\frac{a^{2}-2mr+r^{2}}{r^{2}+a^{2}\cos^{2}\left(\theta\right)}&0&0&0\\ 0&\frac{1}{r^{2}+a^{2}\cos^{2}\left(\theta\right)}&0&0\\ 0&0&\frac{\left(a^{2}\cos^{2}\left(\theta\right)-2mr+r^{2}\right)\csc^{2}\left(\theta\right)}{\left(r^{2}+a^{2}\cos^{2}\left(\theta\right)\right)\left(a^{2}-2mr+r^{2}\right)}&-\frac{2amr}{\left(r^{2}+a^{2}\cos^{2}\left(\theta\right)\right)\left(a^{2}-2mr+r^{2}\right)}\\ 0&0&-\frac{2amr}{\left(r^{2}+a^{2}\cos^{2}\left(\theta\right)\right)\left(a^{2}-2mr+r^{2}\right)}&\frac{\left(-a^{4}-a^{2}r^{2}\right)\cos^{2}\left(\theta\right)-2ma^{2}r\sin^{2}\left(\theta\right)-a^{2}r^{2}-r^{4}}{\left(r^{2}+a^{2}\cos^{2}\left(\theta\right)\right)\left(a^{2}-2mr+r^{2}\right)}\end{array}\right)\,. \tag{13}\] Note that \(\mu\) and \(\nu\) run from \(1\ldots 4\); to clarify the ordering, the last diagonal component in Eq.(12) is the time-time component, that is \(g_{44}\). 
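Since the results below depend on the exact form of Eqs.(12)-(13), the metric algebra is worth verifying symbolically. A small SymPy sketch, writing both \(t\)-\(\varphi\) entries with the common sign required by the symmetry of the metric and \(g_{44}=-(1-2mr/\rho^{2})\), where \(\rho^{2}=r^{2}+a^{2}\cos^{2}\theta\) and \(\Delta=r^{2}-2mr+a^{2}\):

```python
import sympy as sp

r, th, a, m = sp.symbols('r theta a m', positive=True)
rho2 = r**2 + a**2 * sp.cos(th)**2        # rho^2 = r^2 + a^2 cos^2(theta)
Delta = r**2 - 2*m*r + a**2

# Boyer-Lindquist metric in the ordering (r, theta, phi, t) of Eq.(12);
# both t-phi entries carry the same sign, as required by symmetry.
g = sp.Matrix([
    [rho2/Delta, 0, 0, 0],
    [0, rho2, 0, 0],
    [0, 0,
     sp.sin(th)**2 * ((r**2 + a**2)*rho2 + 2*m*r*a**2*sp.sin(th)**2) / rho2,
     -2*m*r*a*sp.sin(th)**2 / rho2],
    [0, 0, -2*m*r*a*sp.sin(th)**2 / rho2, -(1 - 2*m*r/rho2)],
])

ginv = g.inv()

# Compare with Eq.(13): g^{rr} = Delta/rho^2 and g^{phi t} = -2amr/(rho^2 Delta)
chk_rr = sp.simplify(ginv[0, 0] - Delta/rho2)
chk_pt = sp.simplify(ginv[2, 3] + 2*a*m*r/(rho2*Delta))
# det g = -rho^4 sin^2(theta), so sqrt(-g) = rho^2 sin(theta)
chk_det = sp.simplify(g.det() + rho2**2 * sp.sin(th)**2)
print(chk_rr, chk_pt, chk_det)
```

The three checks reduce to zero, confirming \(g^{rr}=\Delta/\rho^{2}\) and \(g^{\varphi t}=-2amr/(\rho^{2}\Delta)\) as in Eq.(13), together with \(\det g=-\rho^{4}\sin^{2}\theta\).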
To map between the Kerr background and the spherical coordinates, one needs the concept of tetrads \(e^{a}_{\mu}\), see (Chandrasekhar, 1998), (Newman & Janis, 1965), (Newman & Penrose, 1962), through the following equation \[F_{\mu\nu}=e^{a}_{\mu}e^{b}_{\nu}\mathfrak{F}_{ab}\,. \tag{14}\] Here we list some of the tetrads for the Kerr metric: \[e^{a=1}_{\mu=1}=\frac{\sqrt{a^{2}\cos^{2}\left(\theta\right)+r^{2}}}{\sqrt{a^{2}-2mr+r^{2}}};\,e^{a=2}_{\mu=2}=\sqrt{r^{2}+a^{2}\!\cos^{2}\left(\theta\right)};e^{a=3}_{\mu=3}=\frac{\sin\left(\theta\right)\sqrt{r^{2}+a^{2}\!\cos^{2}\left(\theta\right)}\sqrt{a^{2}\!-2mr+r^{2}}}{\sqrt{a^{2}\!\cos^{2}\left(\theta\right)-2mr+r^{2}}};e^{a=4}_{\mu=4}=\frac{\sqrt{a^{2}\!\cos^{2}\left(\theta\right)-2mr+r^{2}}}{\sqrt{a^{2}\cos\left(\theta\right)^{2}+r^{2}}}\,. \tag{15}\] Now, using Eqs.(15), (14) and (10), the non-zero covariant components of the electromagnetic strength tensor are obtained as \[F_{12}=\frac{\left(r^{2}+a^{2}\!\cos^{2}\left(\theta\right)\right)\sin\left(\theta\right)\left(\cos\left(kr\right)kr-\sin\left(kr\right)\right)}{\sqrt{a^{2}-2mr+r^{2}}\,r}\,, \tag{16}\] \[F_{13}=-\frac{\left(r^{2}+a^{2}\!\cos^{2}\left(\theta\right)\right)\sin\left(\theta\right)^{2}\left(k^{2}\sin\left(kr\right)r^{2}+\cos\left(kr\right)kr-\sin\left(kr\right)\right)}{\sqrt{a^{2}\!\cos^{2}\left(\theta\right)-2mr+r^{2}}kr^{2}}\,, \tag{17}\] \[F_{23}=\frac{2\left(r^{2}+a^{2}\!\cos^{2}\left(\theta\right)\right)\sin\left(\theta\right)\sqrt{a^{2}-2mr+r^{2}}\left(\cos\left(kr\right)kr-\sin\left(kr\right)\right)\cos\left(\theta\right)}{\sqrt{a^{2}\!\cos^{2}\left(\theta\right)-2mr+r^{2}}kr}\,. 
\tag{18}\] To obtain the contravariant form of these expressions one uses the rule \[F^{\alpha\beta}=g^{\alpha\mu}g^{\beta\nu}F_{\mu\nu}\,, \tag{19}\] and consequently one has \[F^{12}=-\frac{\sqrt{a^{2}-2mr+r^{2}}\sin\left(\theta\right)\left(\cos\left(kr\right)kr-\sin\left(kr\right)\right)}{\left(r^{2}+a^{2}\!\cos^{2}\left(\theta\right)\right)r}\,, \tag{20}\] \[F^{13}=-\frac{\sqrt{a^{2}\!\cos^{2}\left(\theta\right)-2mr+r^{2}}\left(k^{2}\sin\left(kr\right)r^{2}+\cos\left(kr\right)kr-\sin\left(kr\right)\right)}{\left(r^{2}+a^{2}\!\cos^{2}\left(\theta\right)\right)kr^{2}}\,, \tag{21}\] \[F^{23}=-\frac{2\sqrt{a^{2}\!\cos^{2}\left(\theta\right)-2mr+r^{2}}\left(-\cos\left(kr\right)kr+\sin\left(kr\right)\right)\cot\left(\theta\right)}{\left(r^{2}+a^{2}\!\cos^{2}\left(\theta\right)\right)\sqrt{a^{2}-2mr+r^{2}}kr}\,. \tag{22}\] We are now in a position to obtain the different components of the energy-momentum tensor. We will use these results as a source of perturbations of the Kerr background metric. ## 3 FFMF Energy-Momentum Tensor as a Source of Perturbations First we obtain the components of the energy-momentum tensor; then we solve the perturbed Einstein field equations to obtain the modifications of the Kerr metric induced by the FFMF source. ### Energy momentum tensor The definition of the energy momentum tensor in a general curved background reads (Jackson, 1999) \[T_{\mu\nu}=F_{\mu\gamma}F_{\nu}^{\gamma}-\frac{1}{4}g_{\mu\nu}^{(0)}(F_{\alpha\beta}F^{\alpha\beta})\,, \tag{23}\] where \(g_{\mu\nu}^{(0)}\) is the background Kerr metric, Eq.(12). 
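One general property of Eq.(23) worth keeping in mind is that the electromagnetic stress-energy tensor is traceless in four dimensions, \(g^{\mu\nu}T_{\mu\nu}=0\); this is what later forces the trace of \(\bar{h}_{\mu\nu}\) to vanish in Sec. 4. A minimal numerical sketch (the metric and field values here are arbitrary stand-ins, not the Kerr expressions of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Any symmetric invertible metric and any antisymmetric F will do for
# this identity; the numbers are arbitrary stand-ins.
g = np.diag([1.3, 0.7, 2.1, -0.9])        # toy background metric g^{(0)}_{mu nu}
ginv = np.linalg.inv(g)
A = rng.normal(size=(4, 4))
F_dn = A - A.T                             # F_{mu nu}, antisymmetric
F_up = ginv @ F_dn @ ginv.T                # F^{mu nu}

# Eq.(23): T_{mu nu} = F_{mu gamma} F_nu^{gamma}
#                      - (1/4) g^{(0)}_{mu nu} F_{ab} F^{ab}
FF = np.sum(F_dn * F_up)                   # invariant F_{ab} F^{ab}
T = F_dn @ ginv @ F_dn.T - 0.25 * g * FF

# In four dimensions the trace g^{mu nu} T_{mu nu} vanishes identically:
# the first term contracts to FF and the second to -(1/4)*4*FF.
trace = np.einsum('mn,mn->', ginv, T)
print(trace)
```

The trace vanishes to machine precision for any choice of metric and antisymmetric field strength, which is why a purely electromagnetic source can only generate a trace-free metric perturbation at linear order.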
The diagonal components, for our problem, read \[\begin{array}{ll}T_{44}=\frac{-g_{44}^{(0)}}{2}(F_{12}F^{12}+F_{13}F^{13}+F_ {23}F^{23});&T_{11}=\frac{g_{11}^{(0)}}{2}(F_{12}F^{12}+F_{13}F^{13}-F_{23}F^{2 3})\,,\\ \\ T_{22}=\frac{g_{22}^{(0)}}{2}(F_{12}F^{12}-F_{13}F^{13}+F_{23}F^{23});&T_{33}= \frac{g_{33}^{(0)}}{2}(-F_{12}F^{12}+F_{13}F^{13}+F_{23}F^{23})\,.\end{array} \tag{24}\] For the off-diagonal components we obtain \[\begin{array}{ll}T_{43}=T_{34}=-\frac{g_{22}^{(0)}}{2}(F_{12}F^{12}+F_{13}F^{ 13}+F_{23}F^{23})\\ T_{12}=g_{22}^{(0)}F_{13}F^{23};&T_{21}=g_{11}^{(0)}F_{23}F^{13}\,,\\ T_{13}=g_{33}^{(0)}F_{12}F^{32};&T_{31}=g_{11}^{(0)}F_{32}F^{12}\,,\\ T_{23}=g_{33}^{(0)}F_{21}F^{31};&T_{32}=g_{22}^{(0)}F_{31}F^{21}\,.\end{array} \tag{25}\] By using the results of Eqs.(16) to (22) one gets \[\begin{array}{ll}T_{44}=\frac{\left(2mr-r^{2}-a^{2}\cos^{2}\theta\right)\sin ^{2}\theta}{2(r^{2}+a^{2}\cos^{2}\theta)}\bigg{\{}\frac{\sin^{2}\theta(\cos(kr) kr-\sin(kr))^{2}}{r^{2}}\\ \\ +\frac{\sin^{2}\theta(k^{2}\sin(kr)r^{2}+\cos(kr)kr-\sin(kr))^{2}}{k^{2}r^{4}} +\frac{4\cos^{2}\theta(\cos(kr)kr-\sin(kr))^{2}}{k^{2}r^{2}}\bigg{\}}\,,\\ \\ T_{11}=\frac{\left(2mr-r^{2}-a^{2}\cos^{2}\theta\right)\sin^{2}\theta}{\left( r^{2}+a^{2}\cos^{2}\theta\right)}\Bigg{\{}\frac{\left(\cos(kr)kr-\sin(kr) \right)^{2}}{r^{2}}+\frac{\left(k^{2}\sin(kr)r^{2}+\cos(kr)kr-\sin(kr)\right) ^{2}}{k^{2}r^{4}}+\frac{4\cos^{2}\theta(\cos(kr)kr-\sin(kr))^{2}}{k^{2}r^{2}} \bigg{\}} \tag{27}\] \[\begin{array}{ll}T_{22}=\left(2mr-r^{2}-a^{2}\cos^{2}\theta\right)\sin^{2} \theta\Bigg{\{}\frac{\left(\cos(kr)kr-\sin(kr)\right)^{2}}{r^{2}}+\frac{4( \cos(kr)kr-\sin(kr))^{2}\cot^{2}\theta}{k^{2}r^{2}}\\ \\ -\frac{1}{2}\left(\frac{\left(\cos(kr)kr-\sin(kr)\right)^{2}}{r^{2}}+\frac{ \left(k^{2}\sin(kr)r^{2}+\cos(kr)kr-\sin(kr)\right)^{2}}{k^{2}r^{4}}+\frac{4 \cot^{2}\theta(\cos(kr)kr-\sin(kr))^{2}}{k^{2}r^{2}}\right)\Bigg{\}}\,,\\ \\ T_{33}=\frac{\sin^{4}\theta\left(\frac{2mr\sin^{2}\theta}{\left(r^{2}+a^{2} 
\cos^{2}\theta\right)}+r^{2}+a^{2}\right)}{\left(\frac{\left(\cos(kr)kr-\sin(kr)\right)^{2}}{r^{2}}+\frac{\left(k^{2}\sin(kr)r^{2}+\cos(kr)kr-\sin(kr)\right)^{2}}{k^{2}r^{4}}+\frac{4\cot^{2}\theta(\cos(kr)kr-\sin(kr))^{2}}{k^{2}r^{2}}\right)\,.\end{array} \tag{29}\] To avoid overly long expressions, we present only some of the off-diagonal components, for instance \[\begin{array}{ll}T_{43}=T_{34}=\frac{mr\sin^{4}\theta}{2(r^{2}+a^{2}\cos^{2}\theta)}\\ \\ \left(\frac{\left(\cos(kr)kr-\sin(kr)\right)^{2}}{r^{2}}+\frac{\left(k^{2}\sin(kr)r^{2}+\cos(kr)kr-\sin(kr)\right)^{2}}{k^{2}r^{4}}+\frac{4\cot^{2}\theta(\cos(kr)kr-\sin(kr))^{2}}{k^{2}r^{2}}\right)\,,\\ \\ T_{13}=T_{31}=\sin^{2}\!\theta\cos\theta\Big{\{}\frac{\cos^{2}\left(kr\right)kr}{2}-\frac{\cos\left(kr\right)\sin\left(kr\right)}{2}\,+\left(\frac{m\cos\left(kr\right)}{2}-\frac{\sin\left(kr\right)}{2k}\right)k\cos\left(kr\right)\Big{\}}\,,\\ \\ T_{12}=T_{21}=-\frac{\left(r^{2}+a^{2}\!\cos^{2}\!\theta\right)\sin^{2}\theta}{2k^{2}r^{3}\sqrt{a^{2}-2mr+r^{2}}}\left(k^{2}\sin\left(kr\right)r^{2}+\cos\left(kr\right)kr-\sin\left(kr\right)\right)\left(\cos\left(kr\right)kr-\sin\left(kr\right)\right)\cot\theta\,.\end{array} \tag{30}\] ### FFMF as a source of perturbations on Kerr background Here, to study the effects of the induced FFMF on the Kerr black hole, we employ the well-established linearized theory of gravity (Weinberg, 1972),(Misner et al., 1998),(Padmanabhan, 2010),(Ryder, 2009). In this approach one can perturbatively write down the general form of the metric as \[g_{\mu\nu}=g^{(0)}_{\mu\nu}+h_{\mu\nu}, \tag{33}\] where the source of \(h_{\mu\nu}\) arises from force-free magnetohydrodynamics and \(g^{(0)}_{\mu\nu}\) denotes the vacuum Kerr solution. We emphasise that \(g^{(0)}_{\mu\nu}\) is used to raise and lower indices in the linearized-gravity approach. Accordingly, the Einstein field equations can be expressed as follows \[G^{(0)}_{\mu\nu}+\delta G_{\mu\nu}=0+T_{\mu\nu}\,.
\tag{34}\] where \[\delta G_{\mu\nu}=\delta R_{\mu\nu}-\frac{1}{2}g^{(0)}_{\mu\nu}\delta R\,. \tag{35}\] We work to first order in perturbation theory; to do so, we need to calculate the first-order Christoffel symbols, Riemann and Ricci tensors, and consequently the Ricci scalar. For the metric (33), the Christoffel symbols read \[\Gamma^{(1)\eta}_{\mu\nu}=\frac{1}{2}\left(\nabla_{\nu}h^{\eta}_{\mu}+\nabla_{\mu}h^{\eta}_{\nu}-\nabla^{\eta}h_{\nu\mu}\right)\,. \tag{36}\] By utilizing Eq.(36), one easily gets the Riemann tensor as follows \[R^{(1)\eta}_{\mu\nu\sigma}=\tfrac{1}{2}\left(\nabla_{\nu}\nabla_{\sigma}h^{\eta}_{\mu}+\nabla_{\nu}\nabla_{\mu}h^{\eta}_{\sigma}-\nabla_{\nu}\nabla^{\eta}h_{\mu\sigma}-\nabla_{\sigma}\nabla_{\nu}h^{\eta}_{\mu}-\nabla_{\sigma}\nabla_{\mu}h^{\eta}_{\nu}+\nabla_{\sigma}\nabla^{\eta}h_{\mu\nu}\right), \tag{37}\] then by using \(g^{(0)}_{\mu\nu}\) to contract Eq.(37), the Ricci tensor is obtained as \[R^{(1)}_{\mu\nu}=R^{(1)\eta}_{\mu\eta\nu}=\frac{1}{2}\left(\nabla_{\eta}\nabla_{\nu}h^{\eta}_{\mu}+\nabla_{\eta}\nabla_{\mu}h^{\eta}_{\nu}-\nabla_{\eta}\nabla^{\eta}h_{\mu\nu}-\nabla_{\nu}\nabla_{\mu}h\right)\,, \tag{38}\] where \(h\) is the trace of \(h_{\mu\nu}\). To obtain the Ricci scalar the following relation is useful \[R^{(1)\mu}_{\nu}=g^{\mu\eta(0)}R^{(1)}_{\nu\eta}-h^{\mu\eta}R^{(0)}_{\nu\eta}\,. \tag{39}\] To simplify our analysis, we find it convenient to use the trace-reversed perturbation, denoted \(\bar{h}_{\mu\nu}\). It is related to the perturbation \(h_{\mu\nu}\) as follows \[\bar{h}_{\mu\nu}\equiv h_{\mu\nu}-\frac{1}{2}g^{(0)}_{\mu\nu}h\,. \tag{40}\] Additionally, we introduce a small coordinate transformation represented by \(x^{\mu}\longrightarrow x^{\mu}+\xi^{\mu}\) (Padmanabhan, 2010). 
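Two algebraic properties of the trace reversal in Eq.(40) are used later: in four dimensions \(\bar{h}=-h\), so applying the operation twice returns \(h_{\mu\nu}\), and a traceless perturbation coincides with its own trace reverse (this is what gives \(h_{\mu\nu}=\bar{h}_{\mu\nu}\) in Sec. 4). A small numerical sketch (the background and perturbation values are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_reverse(h, g):
    """Eq.(40): hbar_{mu nu} = h_{mu nu} - (1/2) g^{(0)}_{mu nu} h."""
    ginv = np.linalg.inv(g)
    trace = np.einsum('mn,mn->', ginv, h)      # h = g^{mu nu} h_{mu nu}
    return h - 0.5 * g * trace

# Toy symmetric background and perturbation (arbitrary stand-ins,
# not the Kerr expressions of the paper)
g = np.diag([1.0, 1.0, 1.0, -1.0])
ginv = np.linalg.inv(g)
S = rng.normal(size=(4, 4))
h = S + S.T

hbar = trace_reverse(h, g)
tr_h = np.einsum('mn,mn->', ginv, h)
tr_hbar = np.einsum('mn,mn->', ginv, hbar)

# In four dimensions: tr(hbar) = -tr(h), and trace-reversing twice returns h
twice_ok = np.allclose(trace_reverse(hbar, g), h)
print(np.isclose(tr_hbar, -tr_h), twice_ok)
```

Both properties hold for any symmetric background, since the trace of \(g^{(0)}_{\mu\nu}\) with its own inverse is the spacetime dimension, here 4.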
To impose the Lorentz gauge condition, the gauge vector \(\xi^{\mu}\) must satisfy \[\square\xi^{\mu}=-\nabla_{\nu}\bar{h}^{\mu\nu}\,. \tag{41}\] This equation describes the relationship between the transformation \(\xi^{\mu}\) and the trace-reversed metric components. In this gauge, the linearized field equations take the form of a wave equation on the curved background, \[\square\bar{h}_{\mu\nu}+2R^{(0)}_{\eta\mu\epsilon\nu}\bar{h}^{\eta\epsilon}=4T_{\mu\nu}\,, \tag{42}\] where \(R^{(0)}_{\eta\mu\epsilon\nu}\) represents the Riemann tensor of the background geometry, and \(T_{\mu\nu}\) corresponds to the energy momentum tensor. This equation is central to understanding how the force-free magnetofluid source affects the Kerr background (Padmanabhan, 2010). To conduct this study, we consider physical conditions that allow us to keep terms up to order \(\mathcal{O}(\lambda^{2}/L^{2})\). Let us clarify this in more detail. Our objective is to study the equations governing the evolution of the system in a specific configuration: the disturbances that arise when force-free electrodynamics is present in a Kerr background. To streamline the calculations, we introduce two length scales, one for the perturbations (\(\lambda\)) and one for the variations of the background metric (\(L\)). Comparing the relative magnitudes of \(\lambda\) and \(L\) lets us discern the dominant effects and the key features of the system's evolution. 
As a consequence, we can simplify equation (42) by neglecting the second term, resulting in the more manageable form \[\Box\bar{h}_{\mu\nu}=4T_{\mu\nu}\,, \tag{43}\] where we suppose \(G=1\). This equation describes the propagation of gravitational perturbations in the presence of matter. To solve it, we can utilize the retarded (radiative) form and establish the following relationship \[\bar{h}_{\mu\nu}=16\pi\int\frac{d^{3}R}{4\pi|r-\mathbf{R}|}T_{\mu\nu}=16\pi\int d^{3}R\left(\sum_{l,m}T_{\mu\nu}\frac{1}{2l+1}\frac{R^{l}}{r^{l+1}}Y_{l}^{m}(\theta,\varphi)Y_{l}^{*m}(\theta,\phi)\right)\,, \tag{44}\] to calculate the different components of \(\bar{h}_{\mu\nu}\) in Eq.(44), the following relations for spherical harmonics are useful \[Y_{0}^{0}=\frac{1}{\sqrt{4\pi}},\quad Y_{2}^{0}=\sqrt{\frac{5}{4\pi}}\left(\frac{3}{2}\cos^{2}\theta-\frac{1}{2}\right)\,,\] \[\cos^{2}\theta=\frac{2}{3}\left(\sqrt{\frac{4\pi}{5}}Y_{2}^{0}+\frac{1}{2}\right),\quad\sin^{2}\theta=\frac{2}{3}\left(1-\sqrt{\frac{4\pi}{5}}Y_{2}^{0}\right)\,. \tag{45}\] It is also useful to note that in the magnetostatic limit, we can approximate the distance between two points, and its inverse, as (Weinberg, 1972),(Ryder, 2009) \[|\mathbf{r}-\mathbf{R}|=\left(r^{2}-2\mathbf{r}\cdot\mathbf{R}+R^{2}\right)^{1/2}\approx r\left(1-\frac{\mathbf{r}\cdot\mathbf{R}}{r^{2}}\right)+\cdots\,, \tag{46}\] \[|\mathbf{r}-\mathbf{R}|^{-1}=\left(r^{2}-2\mathbf{r}\cdot\mathbf{R}+R^{2}\right)^{-1/2}\approx\frac{1}{r}\left(1+\frac{\mathbf{r}\cdot\mathbf{R}}{r^{2}}\right)+\cdots\,. \tag{47}\] These approximations enable us to simplify the mathematical expressions and obtain the modifications to the background metric satisfactorily. ## 4 FFMF Induced Perturbations With the help of the extracted energy-momentum tensor, we are now in a position to obtain both the diagonal and off-diagonal components of \(\bar{h}_{\mu\nu}\), as appearing in Eq.(44). 
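The magnetostatic expansion in Eqs.(46)-(47) keeps only the first-order term of \(\left(r^{2}-2\mathbf{r}\cdot\mathbf{R}+R^{2}\right)^{-1/2}\), which for the inverse distance carries a plus sign, \(+\,\mathbf{r}\cdot\mathbf{R}/r^{2}\); the neglected terms are of order \((R/r)^{2}\). A quick numerical sketch with arbitrary sample points:

```python
import numpy as np

# Field point r and source point R with R << r (arbitrary sample values)
r_vec = np.array([10.0, 4.0, 3.0])
R_vec = np.array([0.3, -0.2, 0.1])
r = np.linalg.norm(r_vec)

exact = 1.0 / np.linalg.norm(r_vec - R_vec)
# First-order expansion of (r^2 - 2 r.R + R^2)^(-1/2); note the plus sign
approx = (1.0 + r_vec @ R_vec / r**2) / r

rel_err = abs(exact - approx) / exact
print(rel_err)  # of order (R/r)^2, i.e. well below one percent here
```

The relative error is set by the quadrupole-order term of the multipole expansion, which is exactly what the \(l\le 2\) harmonics of Eq.(45) capture when evaluating the integral in Eq.(44).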
Therefore first by utilizing the Eqs.(26) to (29) we will obtain the diagonal components of \(\bar{h}_{\mu\nu}\). \[\begin{split}\bar{h}_{44}&\simeq\frac{32\pi m}{15k ^{r}r}\Big{(}\left(9a^{2}k^{2}-5k^{2}\right)\ln\left(kr\right)-\left(a^{2}k^{4 }+3a^{2}k^{2}\right)\ln\left(kr\right)-\left(5+\left(-9a^{2}+5\right)k^{2} \right)\ln\left(2\right)\\ &\quad-\left(-10+3\left(4-3\gamma\right)a^{2}+5\gamma\right)k^{2} -5\gamma+10\Big{)}-\frac{32}{15}\pi k^{2}m\left(a^{2}-\frac{5}{6}\right)r- \frac{16}{75}rk^{4}m\left(a^{2}-5\right)r^{3}+C_{44}\,,\end{split} \tag{48}\] \[\begin{split}\bar{h}_{11}&\simeq\frac{224\pi m}{15k ^{r}r}\Bigg{[}-\frac{10}{7}+\left(\left(a^{2}-\frac{20m^{2}}{7}+\frac{5}{7} \right)k^{2}\right)\ln\left(kr\right)+\left(\frac{\left(-9a^{2}+20m^{2}\right) k^{4}}{7}+\left(a^{2}-\frac{20m^{2}}{7}\right)k^{2}\right)\ln\left(kr\right)\\ &\quad\quad+\left(\frac{5}{7}+\left(a^{2}-\frac{20m^{2}}{7}+\frac {5}{7}\right)k^{2}\right)\ln\left(2\right)+\gamma\left(a^{2}-\frac{20m^{2}}{7} +\frac{5}{7}\right)k^{2}+\frac{5\gamma}{7}\Bigg{]}\\ &\quad\quad-\frac{16}{15}\pi k^{2}m\left(14a^{2}-40m^{2}+5\right)r +\frac{64}{135}\left(a^{2}-10m^{2}+\frac{5}{4}\right)\pi k^{2}r^{2}+C_{11}\,, \\ \bar{h}_{22}&\simeq\frac{4\pi\sin(2kr)\left(\pi a^{2} k^{4}-7a^{2}k^{4}-15\,a^{2}k^{2}+50k^{2}+35\right)}{15k^{r}r}+\frac{8\pi\sin(2kr) \left(a^{2}k^{4}+3a^{2}k^{2}-40k^{2}-35\right)r}{15k^{3}}\\ &\quad\quad\frac{16\pi\left(3a^{2}k^{2}+60k^{2}\cos^{2}(kR)-25k^{ 2}+60\cos^{2}(kR)-25\right)}{45k^{2}}r^{2}+C_{22}\,,\end{split} \tag{50}\] \[\begin{split}\bar{h}_{33}\simeq&-\frac{32\pi\sin(2kr) \left(-\frac{7}{4}-\frac{a^{2}k^{4}}{2}+\left(\frac{a^{2}+4}{4}\right)k^{2} \right)}{15k^{5}r}-\frac{32\pi\sin(2kr)\left(a^{2}k^{6}+\left(-\frac{a^{2}-8}{ 2}\right)k^{4}+\frac{7k^{2}}{2}\right)}{15k^{5}}r\\ &+\frac{32\pi\left(-40\left(k^{4}-\frac{1}{2}k^{2}\right)k\cos \left(kR\right)^{2}+\left(\left(\frac{a^{2}}{3}+\frac{60}{3}\right)k^{4}-\frac{ 
25k^{2}}{3}\right)k\right)}{75k^{5}}r^{2}+C_{33}\,.\end{split} \tag{51}\] Some interesting limiting cases can be checked. For instance, if one considers \(a=0\), the corrections reduce to those of the Schwarzschild metric in the presence of the FFMF, in good agreement with previous results (Sheikhahmadi, 2021). If one switches off the FFMF by putting \(k=0\), the problem immediately reduces to the Kerr solution. Now we can calculate the off-diagonal components. \[\begin{split}\bar{h}_{34}=\bar{h}_{43}\simeq&\frac{32\pi ma\left(\left(-1+\frac{9}{7}a^{2}k^{2}-2k^{2}\right)(\gamma+\ln(2)+\ln(kr))+2-\frac{4k^{2}\left(3a^{2}-7\right)}{7}+\left(-\frac{9}{2}a^{2}k^{4}-\frac{3}{7}a^{2}k^{2}+1\right)\ln(kr)\right)}{15k^{4}r}\\ &-\frac{32}{105}a\left(a^{2}-\frac{7}{3}\right)\pi k^{2}mr-\frac{16}{525}\pi mak^{4}\left(a^{2}-14\right)r^{3}+C_{43}\,,\end{split} \tag{52}\] \[\begin{split}\bar{h}_{13}=\bar{h}_{31}\simeq&\frac{4\pi\left(15k-30\cos\left(kr\right)^{2}k-15\sin\left(2kr\right)mk^{2}\right)}{75k^{4}r}\\ &+\frac{4\pi\left(-10k^{3}+20k^{3}\cos\left(kr\right)^{2}+10\sin\left(2kr\right)mk^{4}\right)}{75k^{4}}r+\frac{4\pi\left(\frac{5k^{5}m}{4}+10k^{4}\sin\left(2kr\right)\right)}{75k^{4}}r^{2}+C_{13}\,,\end{split} \tag{53}\] \[\begin{split}\bar{h}_{12}=\bar{h}_{21}\simeq&\frac{\pi^{2}\left(\left(\left(-a^{2}m+\frac{5}{2}m^{3}\right)k^{2}-\frac{11m}{2}\right)k^{2}\cos^{2}\left(kr\right)-2\left(-\frac{9}{4}+\frac{9k^{2}m^{2}}{8}\right)\sin\left(2kr\right)k-\frac{\left(\left(-a^{2}m+\frac{5}{2}m^{3}\right)k^{2}-\frac{11m}{2}\right)k^{2}}{2}\right)}{4k^{4}r}\\ &+\frac{\left(mk\cos^{2}\left(kr\right)-2\sin\left(2kr\right)-\frac{mk}{2}\right)\pi^{2}}{4k}r+\frac{\left(\cos^{2}\left(kr\right)-\frac{1}{2}\right)\pi^{2}}{4}r^{2}+C_{12}\,,\end{split} \tag{54}\] \[\begin{split}\bar{h}_{23}=\bar{h}_{32}\simeq&\frac{\pi^{2}\left(\left(\left(-a^{2}m+\frac{5}{2}m^{3}\right)k^{2}-\frac{11m}{2}\right)k^{2}\cos^{2}\left(kr\right)-2\sin\left(2kr\right)\left(-\frac{9}{4}+
\frac{9k^{2}m^{2}}{8}\right)k-\frac{\pi\left(\left(-a^{2}m+\frac{5}{2}m^{3}\right)k^{2}-\frac{11m}{2}\right)k^{2}}{2}\right)}{4k^{4}r}\\ &+\frac{\pi^{2}\left(mk\cos^{2}\left(kr\right)-2\sin\left(2kr\right)-\frac{mk}{2}\right)}{4k}r+\frac{\pi^{2}\left(\cos^{2}\left(kr\right)-\frac{1}{2}\right)}{4}r^{2}+C_{23}\,.\end{split} \tag{55}\] In the above equations the \(C_{\mu\nu}\) are integration constants introduced to control the behaviour of the \(h_{\mu\nu}\) coefficients in the \(k\to 0\) limit. When considering the background Kerr metric, denoted as \(g^{(0)}_{\mu\nu}\), and examining the diagonal expressions for \(\bar{h}_{\mu\nu}\) (given by Eqs. (48) to (51)), an interesting result emerges: the trace of \(\bar{h}_{\mu\nu}\) is found to be zero, i.e., \(\bar{h}=0\). This implies that for the perturbed metric, we have \(h_{\mu\nu}=\bar{h}_{\mu\nu}\). Upon closer analysis of Eqs. (52) to (55), we observe that in regions far from the black hole, and at the linearized order of the theory, the Kerr parameter does not appear explicitly. As the strength of the force-free magnetofluid diminishes, the associated corrections become negligible, approaching the constant \(C_{\mu\nu}\) coefficients. Finally, in the limit \(k\to 0\), the solution turns into the pure Kerr solution. ## 5 Conclusion As mentioned already, the Blandford-Znajek mechanism is a theoretical model based on General Relativity and magnetohydrodynamics. Observational data provide evidence for strong jets in many astrophysical systems, but the details of the dynamics governing jet formation and of the energetic processes underlying the jets are still areas of ongoing scientific study. To gain deeper insights into the underlying physics of this phenomenon, a closer investigation of the problem was undertaken. The primary objective was to address the following question: How does the presence of a force-free magnetofluid source impact the geometric properties of the background black hole? 
Essentially, the aim was to determine an appropriate metric that can effectively describe a magnetized accretion disk surrounding a black hole. In delving into this problem, the main effort was dedicated to exploring the interplay between the FFMF source and the underlying geometry of the black hole. The focus was on discerning the modifications and distortions induced by the presence of the magnetized accretion disk, with the ultimate goal of obtaining a comprehensive understanding of the metric that accurately represents the combined black-hole and magnetized-plasma system. By addressing these key aspects, a refined metric was obtained, which provides a more complete description of the magnetized accretion disk's influence on the black hole's geometric structure. Such a result contributes to advancing our understanding of the intricate physics involved in this complex astrophysical scenario. This study has shown that it is not enough to rely solely on perturbation theory around the background metric: perturbing the relevant Einstein equations revealed that further corrections to the metric had to be introduced. These corrections comprise both the diagonal and the off-diagonal components, thus giving a wider coverage of the system. To accomplish this, the tetrad formalism provided the necessary link between the flat and curved geometries and allowed the components of the electromagnetic field strength tensor to be expressed consistently. Such an approach enabled a better understanding of the interaction between the warped spacetime and the surrounding electromagnetic field. The subsequent calculations employed perturbation theory up to a first-order approximation of the metric; this approximation made it possible to determine the corrections that increase the accuracy of the description of the system examined. The study is meant to give a clearer picture of how perturbations affect the geometry around the black hole. 
However, the analysis of the computed corrections shows that the results depend on the distance from the black hole. Two distinct regions can be identified: the vicinity of the black hole and the regions far from it. Close to the black hole, at relatively small distances, the correction terms depend on the distance as \(1/r\). Importantly, if both the Kerr and the electromagnetic parameters are set to zero, the problem returns to the well-known Schwarzschild case, characterized by the gravitational field of a non-rotating black hole. In contrast, for regions situated far from the black hole, the corrections exhibit a characteristic power law in \(r\) up to second order, except for \(h_{44}\) and \(h_{34}\), which contain \(r\) and \(r^{3}\) terms. This behaviour closely resembles that of the rotating Kerr-de Sitter solution, which involves a cosmological term. The analysis highlights some of the changes incurred when a black hole is accompanied by a force-free magnetofluid: the behaviour near the black hole fits the Schwarzschild solution, while the behaviour farther away reveals how Kerr-de Sitter-like solutions interact with the rotational, cosmological, and geometrical parameters in the respective regions. With regard to future projects, there are two immediate ways forward. It would be interesting to study charged black holes surrounded by force-free fields, since this would give an opportunity to understand how the different fields belonging to a charged black hole and an accretion disk interact with each other. Such an investigation might shed light on new physical manifestations and enrich our understanding of these intricate systems. Also of interest in this regard is the exploration of shadows and other astrophysical properties of black holes in the presence of force-free fields. The study of shadows, which arise from the gravitational bending of light around black holes, has sparked a lot of interest. 
Studying these shadows, as well as other astrophysical features, in an environment of force-free fields can contribute substantially to understanding the nature and attributes of black holes.

## Acknowledgments

H.S. expresses sincere gratitude to Yousef Sobouti for introducing this problem and engaging in numerous constructive discussions during the early stages of this work. Additionally, H.S. extends heartfelt appreciation to Hassan Firouzjahi for insightful discussions on the physics of black holes and for valuable suggestions that improved the quality of the presentation of this work.
2310.20257
Diophantine conditions in the law of the iterated logarithm for lacunary systems
It is a classical observation that lacunary function systems exhibit many properties which are typical for systems of independent random variables. However, it had already been observed by Erd\H{o}s and Fortet in the 1950s that probability theory's limit theorems may fail for lacunary sums $\sum f(n_k x)$ if the sequence $(n_k)_{k \geq 1}$ has a strong arithmetic ''structure''. The presence of such structure can be assessed in terms of the number of solutions $k,\ell$ of two-term linear Diophantine equations $a n_k - b n_\ell = c$. As the first author proved with Berkes in 2010, saving an (arbitrarily small) unbounded factor for the number of solutions of such equations compared to the trivial upper bound, rules out pathological situations as in the Erd\H{o}s--Fortet example, and guarantees that $\sum f(n_k x)$ satisfies the central limit theorem (CLT) in a form which is in accordance with true independence. In contrast, as shown by the first author, for the law of the iterated logarithm (LIL) the Diophantine condition which suffices to ensure ''truly independent'' behavior requires saving this factor of logarithmic order. In the present paper we show that, rather surprisingly, saving such a logarithmic factor is actually the optimal condition in the LIL case. This result reveals the remarkable fact that the arithmetic condition required of $(n_k)_{k \geq 1}$ to ensure that $\sum f(n_k x)$ shows ''truly random'' behavior is a different one at the level of the CLT than it is at the level of the LIL: the LIL requires a stronger arithmetic condition than the CLT does.
Christoph Aistleitner, Lorenz Frühwirth, Joscha Prochno
2023-10-31T08:23:57Z
http://arxiv.org/abs/2310.20257v1
# Diophantine conditions in the law of the iterated logarithm for lacunary systems ###### Abstract. It is a classical observation that lacunary function systems exhibit many properties which are typical for systems of independent random variables. However, it had already been observed by Erdos and Fortet in the 1950s that probability theory's limit theorems may fail for lacunary sums \(\sum f(n_{k}x)\) if the sequence \((n_{k})_{k\geq 1}\) has a strong arithmetic "structure". The presence of such structure can be assessed in terms of the number of solutions \(k,\ell\) of two-term linear Diophantine equations \(an_{k}-bn_{\ell}=c\). As the first author proved with Berkes in 2010, saving an (arbitrarily small) unbounded factor for the number of solutions of such equations compared to the trivial upper bound, rules out pathological situations as in the Erdos-Fortet example, and guarantees that \(\sum f(n_{k}x)\) satisfies the central limit theorem (CLT) in a form which is in accordance with true independence. In contrast, as shown by the first author, for the law of the iterated logarithm (LIL) the Diophantine condition which suffices to ensure "truly independent" behavior requires saving this factor of logarithmic order. In the present paper we show that, rather surprisingly, saving such a logarithmic factor is actually the optimal condition in the LIL case. This result reveals the remarkable fact that the arithmetic condition required of \((n_{k})_{k\geq 1}\) to ensure that \(\sum f(n_{k}x)\) shows "truly random" behavior is a different one at the level of the CLT than it is at the level of the LIL: the LIL requires a stronger arithmetic condition than the CLT does. Key words and phrases:Lacunary trigonometric sums, law of the iterated logarithm, Diophantine equations 2020 Mathematics Subject Classification: Primary 42A55, 60F15; Secondary 11D04, 11D45 ## 1. 
Introduction and main result

The classical Hartman-Wintner law of the iterated logarithm (LIL) was proved by Philip Hartman and Aurel Wintner in 1941 [17] and quantifies the typical fluctuations of sums of independent and identically distributed (i.i.d.) random variables on the scale between the central limit theorem (CLT) and the law of large numbers (LLN). More precisely, the LIL states that for a sequence \(X_{1},X_{2},\dots\) of i.i.d. random variables of zero mean and finite variance \(\sigma^{2}\in(0,\infty)\), \[\limsup_{N\to\infty}\frac{\left|\sum_{k=1}^{N}X_{k}\right|}{\sqrt{2N\log\log N}}=\sigma\quad\text{almost everywhere (a.e.)}. \tag{1}\] Today it is a well-known fact in analysis and probabilistic number theory that the asymptotic behavior of sums of i.i.d. random variables is echoed in many ways by lacunary trigonometric sums \(\sum_{k=1}^{N}\cos(2\pi n_{k}x)\) under the so-called Hadamard gap condition \[\frac{n_{k+1}}{n_{k}}\geq q>1,\qquad k\in\mathbb{N}, \tag{2}\] for a sequence \((n_{k})_{k\geq 1}\) of natural numbers; this must be seen in light of the fact that the random variables \(X_{k}(x):=\cos(2\pi n_{k}x)\) on the probability space \([0,1]\) with Borel \(\sigma\)-algebra and Lebesgue measure are identically distributed, but not stochastically independent. Under the gap condition (2), lacunary trigonometric sums nevertheless mimic truly independent behavior: Salem and Zygmund proved that they satisfy the CLT, and Erdős and Gál established the corresponding LIL \[\limsup_{N\to\infty}\frac{\left|\sum_{k=1}^{N}\cos(2\pi n_{k}x)\right|}{\sqrt{2N\log\log N}}=\frac{1}{\sqrt{2}}\quad\text{a.e.}, \tag{3}\] which is in accordance with (1), since the variance of \(\cos(2\pi n_{k}x)\) equals \(1/2\). The situation becomes much more delicate when the cosine is replaced by a general periodic function \(f\). Throughout this paper we consider functions \(f\) satisfying \[f(x+1)=f(x),\qquad\int_{0}^{1}f(x)\,dx=0,\qquad\operatorname{Var}_{[0,1]}f<\infty, \tag{4}\] where \(\operatorname{Var}_{[0,1]}f\) denotes the total variation of \(f\) on the unit interval. For sums \(\sum f(n_{k}x)\) with \(f\) as in (4), probability theory's limit theorems may fail if the sequence \((n_{k})_{k\geq 1}\) has a strong arithmetic "structure". The classical example in this direction is due to Erdős and Fortet, who considered the function \(f(x)=\cos(2\pi x)+\cos(4\pi x)\) and the sequence \(n_{k}=2^{k}-1,\ k\geq 1\), for which the CLT fails (the normalized sums still converge in distribution, but the limit is not the Gaussian distribution), and for which a "non-standard" LIL holds in the form \[\limsup_{N\to\infty}\frac{\left|\sum_{k=1}^{N}f(n_{k}x)\right|}{\sqrt{2N\log\log N}}=\left|2\cos(\pi x)\right|\quad\text{a.e.} \tag{5}\] with a (non-constant) function on the right-hand side. Thus, for \(\sum f(n_{k}x)\) the LIL can fail to hold in its truly independent form, and instead as a general result we only have an upper-bound LIL \[\limsup_{N\to\infty}\frac{\left|\sum_{k=1}^{N}f(n_{k}x)\right|}{\sqrt{2N\log\log N}}\leq c_{f,q}\quad\text{a.e.},\] with a constant \(c_{f,q}\in(0,\infty)\) depending on \(f\) and the growth factor \(q\) of \((n_{k})_{k\geq 1}\); see Takahashi [27] for this result, and Philipp [20] for a generalization to the so-called Chung-Smirnov type LIL. The interaction of analytic, arithmetic and probabilistic effects that underpins this theory has led to a wealth of research, leading from famous classical papers such as those of Kac [18] and Gaposhkin [14] to recent work such as that of Berkes, Philipp and Tichy [5], Bobkov and Götze [6], Conze and Le Borgne [7], and in particular Fukuyama [10, 11, 13, 12]. An interesting observation is that in the general framework, the fine probabilistic behavior of lacunary sums \(\sum f(n_{k}x)\) is intimately related to the number of solutions of certain linear Diophantine equations, such as the two-variable equation \[an_{k}-bn_{\ell}=c. \tag{6}\] Here \(a,b\in\mathbb{N}\) and \(c\in\mathbb{Z}_{\geq 0}\) are fixed, and one has to consider the number of solutions \((k,\ell)\) of the equation with the size of the indices \(k,\ell\) being bounded above by some threshold value. If \(N\in\mathbb{N}\), we shall write \[L(N,a,b,c):=\#\left\{1\leq k,\ell\leq N:\ an_{k}-bn_{\ell}=c\right\}. 
\tag{7}\] We restrict ourselves to non-negative integers \(c\), since we can always switch to this case by exchanging the roles of the parameters \(k\) and \(\ell\) and that of \(a\) and \(b\), respectively. Note that trivially \(L(N,a,b,c)\leq N\) for any \(a,b,c\) with \((a,b,c)\neq(0,0,0)\) and any \(N\in\mathbb{N}\), as long as \((n_{k})_{k\geq 1}\) is a sequence of distinct integers. In [2] it was proved that \(\sum f(n_{k}x)\) satisfies the CLT under the assumption that the number of solutions to Diophantine equations of the form (6) is asymptotically less than the trivial estimate. More precisely, it was shown in [2, Theorem 1.1] that for any function as in (4) and any lacunary sequence \((n_{k})_{k\geq 1}\), and any \(t\in\mathbb{R}\), \[\lambda\left(\left\{x\in[0,1]\,:\,\sigma_{N}^{-1}\sum_{k=1}^{N}f(n_{k}x)\leq t \right\}\right)\to\Phi(t)\qquad\text{as $N\to\infty$},\] with \[\sigma_{N}^{2}:=\int_{0}^{1}\left(\sum_{k=1}^{N}f(n_{k}x)\right)^{2}\,dx,\] provided that the following two conditions are satisfied: 1. The limiting variance is not degenerate, i.e., \(\sigma_{N}^{2}\geq CN\) for some suitable constant \(C>0\). 2. For all positive integers \(a,b\) with \(a\neq b\), the number of solutions to the Diophantine equation satisfies1 Footnote 1: The case \(a=b\) is not relevant for our paper, since when \(a=b\) and \((n_{k})_{k\geq 1}\) is lacunary then by Lemma 3 below we always have \(L(N,a,a,c)=\mathcal{O}(1)\) uniformly in \(c\) for \(c\neq 0\), and consequently the number of solutions is small enough to be negligible. For \(a=b\) and \(c=0\) we trivially always have \(L(N,a,a,0)=N\) (these solutions are the “diagonal terms” and cannot be avoided). 
\[L(N,a,b,c)=o(N)\qquad\text{uniformly in }c\in\mathbb{Z}\backslash\{0\}.\] Let us remark that the condition (i) on the non-degeneracy of the variance already appeared in the work of Gaposhkin [14] and is indeed necessary, as shown by examples leading to telescoping sums such as \(f(x)=\cos(2\pi x)-\cos(4\pi x)\) and \(n_{k}=2^{k},\ k\geq 1\). As proved in [2], the Diophantine condition \(L(N,a,b,c)=o(N)\) is optimal and cannot be replaced by \(L(N,a,b,c)\leq\varepsilon N\) for a fixed \(\varepsilon>0\). If, in addition to (ii), the sequence \((n_{k})_{k\geq 1}\) satisfies \(L(N,a,b,0)=o(N)\) for all \(a\neq b\), that is, if the number of solutions of (6) with \(c=0\) on the right-hand side is also small, then (i) is not necessary since in that case one has \(\sigma_{N}\sim\|f\|_{2}\sqrt{N}\) as \(N\to\infty\), and the CLT holds with exactly the same normalizing factor as in the truly independent case (see [2, Theorem 1.2]). Note that this discussion also explains why the CLT fails to hold for the Erdős-Fortet example: for the sequence \(n_{k}=2^{k}-1,\ k\geq 1\), there are too many solutions to the equation \[n_{k}-2n_{\ell}=1,\] namely all \(N-1\) pairs \((k,\ell)\) of the form \(k=\ell+1\) (cf. Equation (34) below, which explains how the function \(2\cos(\pi x)\) on the right-hand side of (5) arises). 
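The solution counts just described are easy to verify numerically. The following sketch is our own illustration (not part of the paper; the helper name `count_solutions` is ours): it computes the quantity \(L(N,a,b,c)\) from (7) by brute force and confirms that, for the Erdős-Fortet sequence \(n_{k}=2^{k}-1\), the equation \(n_{k}-2n_{\ell}=1\) has the maximal possible order of \(N-1\) solutions.

```python
def count_solutions(n, N, a, b, c):
    """L(N, a, b, c) from (7): number of pairs (k, l), 1 <= k, l <= N,
    with a*n(k) - b*n(l) = c (1-based indices)."""
    return sum(1 for k in range(1, N + 1) for l in range(1, N + 1)
               if a * n(k) - b * n(l) == c)

# Erdos-Fortet sequence n_k = 2^k - 1.
n = lambda k: 2 ** k - 1
N = 60

# n_k - 2 n_l = 1 holds for every pair (k, l) = (l + 1, l), since
# 2^(l+1) - 1 - 2*(2^l - 1) = 1; these are the only solutions, so
# L(N, 1, 2, 1) = N - 1, of maximal order N.
print(count_solutions(n, N, 1, 2, 1))   # -> 59

# The "diagonal terms" mentioned in Footnote 1: L(N, a, a, 0) = N.
print(count_solutions(n, N, 1, 1, 0))   # -> 60
```

The quadratic brute-force loop is of course only suitable for small \(N\); it serves purely to make the counting quantity (7) concrete.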
It was proved in [1, Theorem 1.3] that for \(f\) as in (4) and \((n_{k})_{k\geq 1}\) as in (2), \[\limsup_{N\to\infty}\frac{\left|\sum_{k=1}^{N}f(n_{k}x)\right|}{\sqrt{2N\log\log N}}=\|f\|_{2}\qquad\text{a.e.}, \tag{8}\] provided that for all fixed positive integers \(a,b\) with \(a\neq b\), \[L(N,a,b,c)=\mathcal{O}\left(\frac{N}{(\log N)^{1+\varepsilon}}\right),\qquad\text{uniformly in }c\in\mathbb{Z}_{\geq 0}, \tag{9}\] for some constant \(\varepsilon>0\). Equation (8) is in perfect accordance with truly independent behavior. However, unlike in the CLT case, it was unclear whether the Diophantine condition (9) for the LIL case was optimal. There were good reasons to believe that the factor \((\log N)^{1+\varepsilon}\) in the stronger Diophantine condition (9) is an artifact coming from the particular proof strategy in [1], which as a key ingredient invokes a classical almost sure invariance principle (ASIP) of Strassen [25]. Roughly speaking, Strassen's ASIP for martingale differences requires the almost sure convergence of conditional second moments, which can essentially be established from (9) using Chebyshev's inequality. Such an argument seems rather wasteful, and hence some effort was put into trying to relax the Diophantine condition for the LIL down to the one which is known to be sufficient in the CLT case. However, in the present paper we prove the rather surprising result that the Diophantine condition (9) is actually optimal (up to lower-order terms) to ensure the LIL for \(\sum f(n_{k}x)\), even when \(f\) is restricted to be a trigonometric polynomial. Our main result is the following. **Theorem 1**.: _Let \(\varepsilon\in(0,1)\). 
Then, for every constant \(K\in(0,\infty)\), there exist a trigonometric polynomial \(f\) with mean zero and a lacunary sequence \((n_{k})_{k\geq 1}\) such that for all \(a,b\in\mathbb{N}\) with \(a\neq b\), we have_ \[L(N,a,b,c)=\mathcal{O}\left(\frac{N}{(\log N)^{1-\varepsilon}}\right),\qquad \text{uniformly in }c\in\mathbb{Z}_{\geq 0}, \tag{10}\] _and such that_ \[\limsup_{N\to\infty}\frac{\left|\sum_{k=1}^{N}f(n_{k}x)\right|}{\sqrt{2N\log \log N}}\geq K\|f\|_{2}\qquad\text{a.e.}\] Theorem 1 is remarkable as it shows that to guarantee the validity of a probabilistic limit theorem for lacunary sums of dilated functions on the LIL scale, one needs stronger arithmetic assumptions than one does on the CLT scale. We consider this to be a very interesting phenomenon. The necessary savings factor of order roughly \(\log N\) in the Diophantine condition for the LIL seems to arise essentially as \(e^{(\sqrt{2\log\log N})^{2}/2}\) from the order of the tail of the normal distribution, and thus be directly connected with the fact that the LIL is concerned with deviations exceeding the CLT normalization \(\sqrt{N}\) by an additional factor \(\sqrt{2\log\log N}\). One cannot help but wonder if a similar direct connection between the necessary arithmetic (Diophantine) condition and the size of the deviation that one is interested in persists throughout other scales; note that such a direct link would have to become meaningless at least for additional factors of order exceeding \(\sqrt{2\log N}\), which would correspond to the requirement of saving a factor of more than \(e^{(\sqrt{2\log N})^{2}/2}=N\) in the Diophantine condition, thus asking for less than one solution in (7), which is absurd.2 If so, then the arithmetic theory underpinning the behavior of lacunary sums at small-scale deviations near \(\sqrt{N}\) would be substantially different from the corresponding theory for deviations at large scales beyond \(\sqrt{2N\log N}\). 
One possible explanation for such a dichotomy could be that two-term Diophantine equations can only control the distribution of normalized lacunary sums at small deviation scales, and that at larger scales a different effect sets in which is only expressible in terms of Diophantine equations in more than 2 variables. We leave these questions for future research. Footnote 2: Possibly it is no coincidence that martingale methods based on conditional second moments also seem to reach a critical point at deviations of order \(\sqrt{2\log N}\), see for example [16]. ## 2. Construction of the sequence & Solutions to Diophantine equations We now present a completely explicit construction of a Hadamard gap sequence \((n_{k})_{k\geq 1}\) satisfying the claim of Theorem 1, and we show that it indeed satisfies the bound (10) for the number of solutions of the two-variable linear Diophantine equations in (6). Step 1. Let \(\varepsilon\in(0,1)\) and \(K\in(0,\infty)\) be given. Choose \(d\in\mathbb{N}\) such that \[\frac{d\sqrt{\varepsilon}}{4}-2>\frac{K\sqrt{d}}{\sqrt{2}}; \tag{11}\] note that this condition can be satisfied by choosing \(d\) sufficiently large (with the necessary size of \(d\) depending on the parameters \(\varepsilon\) and \(K\)). The number \(d\) will later be the degree of the trigonometric polynomial \(f\), which we construct in order to prove Theorem 1. Further, let \(R:=R(\varepsilon)\in\mathbb{N}\) such that \[R>\frac{8}{\varepsilon}. \tag{12}\] The parameter \(R\) will serve as decomposition parameter; we split the set of positive integers \(\mathbb{N}\) into consecutive blocks \(\Delta_{1},\Delta_{2},\dots\) such that \(\#\Delta_{i}=R^{i}\), i.e., the sizes of the block are rapidly increasing. More precisely, we define \[\Delta_{i}:=\left\{\frac{R^{i}-R}{R-1}+1,\dots,\frac{R^{i+1}-R}{R-1}\right\}, \qquad i\in\mathbb{N}. 
\tag{13}\] From this construction it follows that for each \(i\in\mathbb{N}\), \[\sum_{h=1}^{i-1}\#\Delta_{h}\leq\frac{1}{R-1}\#\Delta_{i},\] i.e., the block \(\Delta_{i}\) is even much larger than the collection of all previous \(i-1\) blocks taken together. To put it more illustratively, the partial sum \(\sum_{k\in\Delta_{1}\cup\dots\cup\Delta_{i}}f(n_{k}x)\) will be dominated by the terms with \(k\in\Delta_{i}\), while the terms with \(k\in\Delta_{1}\cup\dots\cup\Delta_{i-1}\) will be essentially negligible, so that the lower bound in Theorem 1 only has to be established for sums \(\sum_{k\in\Delta_{i}}f(n_{k}x)\) as \(i\to\infty\). Step 2. We shall now split up each block \(\Delta_{i}\), \(i\in\mathbb{N}\), into disjoint subsets, where the number of subsets depends on the parameter \(i\). More precisely, we decompose \(\Delta_{i}\) into3 Footnote 3: Throughout the paper \(\lceil x\rceil\) denotes the smallest integer which is at least as large as \(x\). \[\Delta_{i}^{(m)},\qquad 1\leq m\leq M(i):=\lceil i^{1-\varepsilon}\rceil,\] such that \(\Delta_{i}^{(1)}<\Delta_{i}^{(2)}<\dots<\Delta_{i}^{(M(i))}\) holds element-wise with \(\Delta_{i}=\cup_{m=1}^{M(i)}\Delta_{i}^{(m)}\), and such that all sets \(\Delta_{i}^{(m)},\ 1\leq m\leq M(i)\), have essentially the same cardinality, i.e., \[\left|\#\Delta_{i}^{(m)}-\frac{R^{i}}{\lceil i^{1-\varepsilon}\rceil}\right| \leq 1,\qquad 1\leq m\leq M(i). \tag{14}\] Heuristically, this construction is made in such a way that if we write \(\Delta_{1}\cup\dots\cup\Delta_{i}=\{1,\dots,N\}\) for some suitable \(N\in\mathbb{N}\), then \[\#\Delta_{i}^{(m)}=\mathcal{O}\left(\frac{N}{(\log N)^{1-\varepsilon}}\right) \qquad\text{for all }m\in\{1,\dots,M(i)\}, \tag{15}\] which reflects our bound (10) on the number of solutions of Diophantine equations; note that the implied constant in the \(\mathcal{O}\)-term depends on the parameter \(R\). 
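The bookkeeping of Steps 1 and 2 can be checked directly. The sketch below is our own illustration (not part of the paper; the helper names are ours): it builds the blocks \(\Delta_{i}\) from (13), splits each into \(M(i)=\lceil i^{1-\varepsilon}\rceil\) consecutive sub-blocks of near-equal size, and verifies the cardinality bound (14) as well as the domination of \(\Delta_{i}\) over all previous blocks.

```python
import math

def block(i, R):
    """Delta_i = {(R^i - R)/(R-1) + 1, ..., (R^{i+1} - R)/(R-1)}, cf. (13)."""
    lo = (R ** i - R) // (R - 1) + 1
    hi = (R ** (i + 1) - R) // (R - 1)
    return list(range(lo, hi + 1))

def sub_blocks(i, R, eps):
    """Split Delta_i into M(i) = ceil(i^(1-eps)) consecutive pieces,
    with cardinalities differing from R^i / M(i) by at most 1, cf. (14)."""
    delta, M = block(i, R), math.ceil(i ** (1 - eps))
    q, r = divmod(len(delta), M)
    pieces, pos = [], 0
    for m in range(M):
        size = q + (1 if m < r else 0)
        pieces.append(delta[pos:pos + size])
        pos += size
    return pieces

R, eps = 9, 0.5
for i in range(1, 6):
    delta = block(i, R)
    assert len(delta) == R ** i                       # |Delta_i| = R^i
    # all previous blocks together are at most |Delta_i| / (R - 1):
    assert sum(R ** h for h in range(1, i)) <= len(delta) / (R - 1)
    for piece in sub_blocks(i, R, eps):
        # near-equal cardinalities as in (14):
        assert abs(len(piece) - R ** i / math.ceil(i ** (1 - eps))) <= 1
```

The parameters \(R=9\) and \(\varepsilon=0.5\) are chosen only to keep the example small; they deliberately do not satisfy the requirement \(R>8/\varepsilon\) from (12), which is irrelevant for this purely combinatorial check.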
Figure: Illustration of the construction described above (the sub-blocks \(\Delta_{i}^{(1)},\Delta_{i}^{(2)},\dots,\Delta_{i}^{(M(i))}\) partitioning \(\Delta_{i}\), followed by the next block \(\Delta_{i+1}\)).

Step 3. We shall now construct \((n_{k})_{k\geq 1}\). For given \(i\in\mathbb{N}\), \(m\in\{1,\ldots,M(i)\}\) and for \(k\in\Delta_{i}^{(m)}\), we define \[n_{k}:=2^{2^{i^{4}}}\left(2^{k}+m\right), \tag{16}\] where the first factor is to be understood as \(2^{(2^{(i^{4})})}\). The heuristic behind this construction is the following. First, the elements of our sequence with indices in different blocks \(\Delta_{i_{1}}\) and \(\Delta_{i_{2}}\) are of very different size (because of the dominating prefactor), so that \(f(n_{k}x)\) and \(f(n_{\ell}x)\) for \(k\in\Delta_{i_{1}},\ \ell\in\Delta_{i_{2}},i_{1}\neq i_{2}\), are "essentially independent". Such pairs \(k\) and \(\ell\) also do not play a relevant role for counting the number of solutions of the Diophantine equations, see Lemma 2 \((ii)\) below. Similarly, pairs \(k\) and \(\ell\) which are contained in the same block \(\Delta_{i}\), but in different sub-blocks \(\Delta_{i}^{(m_{1})}\) resp. \(\Delta_{i}^{(m_{2})}\), will not play a significant role for counting the number of solutions of the Diophantine equations either, see Lemma 2 \((iii)\). What will contribute significantly are only pairs \(k,\ell\) from the same sub-block \(\Delta_{i}^{(m)}\), where there will be many solutions of equations such as \[2n_{k}-n_{\ell}=2^{2^{i^{4}}}m,\] namely whenever \(\ell=k+1\) (note that the number \(m\) on the right-hand side of the Diophantine equation above is the same as in the superscript of the sub-block \(\Delta_{i}^{(m)}\); different sub-blocks correspond to different Diophantine equations that have "many" solutions). 
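The mechanism producing many solutions can be confirmed for small parameters; the following is our own sketch (not part of the paper; the helper names are ours). Python's arbitrary-precision integers handle the huge prefactor \(2^{2^{i^{4}}}\) for \(i\leq 2\). For indices \(k\) and \(k+1\) inside the same sub-block \(\Delta_{i}^{(m)}\), definition (16) gives \(2n_{k}-n_{k+1}=2^{2^{i^{4}}}(2^{k+1}+2m)-2^{2^{i^{4}}}(2^{k+1}+m)=2^{2^{i^{4}}}m\), exactly the displayed equation above.

```python
def prefactor(i):
    """The dominating factor 2^(2^(i^4)) from definition (16)."""
    return 2 ** (2 ** (i ** 4))

def n_of(k, i, m):
    """n_k from (16), for an index k lying in the sub-block Delta_i^(m)."""
    return prefactor(i) * (2 ** k + m)

# Inside a fixed sub-block Delta_i^(m), every pair of consecutive indices
# solves the Diophantine equation 2 n_k - n_{k+1} = 2^(2^(i^4)) * m:
i, m = 2, 3
for k in range(10, 30):
    assert 2 * n_of(k, i, m) - n_of(k + 1, i, m) == prefactor(i) * m
```

Note that for \(i=2\) the prefactor is the 65537-bit number \(2^{65536}\), which exact integer arithmetic handles without difficulty; for \(i\geq 3\) it is far too large to write down.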
In the Erdos-Fortet example there are no different blocks whatsoever, so that there are many solutions of the particular equation \(2n_{k}-n_{k+1}=1\), which leads to \(L(N,2,1,1)\) being as large as \(\approx N\). We need \(L(N,a,b,c)\) to be smaller in order to satisfy (10), and by grouping \(k,\ell\) into approximately \((\log N)^{1-\varepsilon}\) many different blocks, instead of one equation with \(\approx N\) solutions, we obtain \((\log N)^{1-\varepsilon}\) different equations with approximately \(\frac{N}{(\log N)^{1-\varepsilon}}\) many solutions each, which is in accordance with (15). This explains why the sequence constructed in such a way will satisfy the Diophantine condition (10). It is a different story (and will be shown in Section 4) that the sequence constructed in this very particular way indeed leads to a large value on the right-hand side of the LIL, as claimed by Theorem 1 (and the presence of the factor \(2^{2^{i^{4}}}\) in the definition of \(n_{k}\) will also only become clear later). We will explain the heuristics behind this part of Theorem 1 later on, after defining the trigonometric polynomial \(f\). Note: Throughout the rest of this section, implied constants are allowed to depend on \(\varepsilon\) and \(K\) (and consequently also on \(d\) and \(R\)), as well as on \(a\) and \(b\), but not allowed to depend on anything else. In particular, all implied constants are independent of \(c\), \(i\), \(m\) and \(N\). The next lemma provides estimates on the number of solutions of Diophantine equations arising in our setup. As we shall see shortly, from this we can deduce that the sequence \((n_{k})_{k\geq 1}\) constructed above indeed satisfies the Diophantine condition (10) of Theorem 1. **Lemma 2**.: _For the sequence \((n_{k})_{k\geq 1}\) constructed in the paragraph above, we have the following estimates._ 1. 
_For all_ \(a,b\in\mathbb{N}\) _with_ \(a\neq b\)_, and with_ \(\frac{a}{b}\neq 2^{r}\) _for all_ \(r\in\mathbb{Z}\)_, we have_ \[\#\{k,\ell\geq 1:\ an_{k}-bn_{\ell}=c\}=\mathcal{O}(1),\] _uniformly in_ \(c\in\mathbb{Z}_{\geq 0}\)_._ 2. _For all_ \(a,b\in\mathbb{N}\) _such that_ \(\frac{a}{b}=2^{r}\) _for some_ \(r\in\mathbb{Z}\backslash\{0\}\)_, we have_ \[\sum_{\begin{subarray}{c}i_{1},i_{2}\geq 1\\ i_{1}\neq i_{2}\end{subarray}}\#\left\{k\in\Delta_{i_{1}},\ \ell\in\Delta_{i_{2}}:\ an_{k}-bn_{\ell}=c\right\}=\mathcal{O}(1),\] _uniformly in_ \(c\in\mathbb{Z}_{\geq 0}\)_._ 3. _For all_ \(a,b\in\mathbb{N}\) _such that_ \(\frac{a}{b}=2^{r}\) _for some_ \(r\in\mathbb{Z}\backslash\{0\}\)_, and for all_ \(i\in\mathbb{N}\)_:_ 1. _If_ \(c=2^{2^{i^{4}}}bm(2^{r}-1)\) _for some_ \(m\in\{1,\ldots,M(i)\}\)_, then_ \[\#\{k,\ell\in\Delta_{i}:\ an_{k}-bn_{\ell}=c\}=\frac{R^{i}}{\left\lceil i^{1-\varepsilon}\right\rceil}-r+\mathcal{O}(i^{2}).\] 2. _If_ \(c\geq 0\) _is not of the form_ \(2^{2^{i^{4}}}bm(2^{r}-1)\) _for some_ \(m\in\{1,\ldots,M(i)\}\)_, then_ \[\#\{k,\ell\in\Delta_{i}:\ an_{k}-bn_{\ell}=c\}=\mathcal{O}(i^{2}).\] Before proving Lemma 2, we recall a general result about differences of elements of lacunary sequences. **Lemma 3** ([28, p.203]).: _Let \((m_{k})_{k\geq 1}\) be a positive sequence of integers satisfying (2) for some \(q>1\). Then for all \(c\in\mathbb{Z}_{\geq 0}\),_ \[\#\left\{1\leq k,\ell\leq N,\ k\neq\ell:\ m_{k}-m_{\ell}=c\right\}=\mathcal{O}(1),\] _where the implied constant depends only on the growth factor \(q\)._ Proof of Lemma 2.: Recalling the definition of \(L(N,a,b,c)\) in Equation (7), we assume that \(a,b\in\mathbb{N}\) with \(a\neq b\), and that \(c\in\mathbb{Z}_{\geq 0}\). \((i)\) Assume \(\frac{a}{b}\) is not of the form \(2^{r}\) for any \(r\in\mathbb{Z}\). Note that the number \(m\) in the definition in line (16) is much smaller than \(2^{k}\) (for large \(k\)). 
Thus, we have (under slight abuse of the limit notation) that, as \(k\to\infty\), \[\frac{n_{k+1}}{n_{k}}\to 2\qquad\text{on those $k$ and $k+1$ belonging to the same block $\Delta_{i}$}, \tag{17}\] and \[\frac{n_{k+1}}{n_{k}}\to\infty\qquad\text{on those $k$ and $k+1$ belonging to different blocks $\Delta_{i}$ and $\Delta_{i+1}$}. \tag{18}\] Since \(\frac{a}{b}\) is not a power of \(2\) by assumption, this shows that the set \[\#\left\{k,\ell\geq 1:\ an_{k}-bn_{\ell}=0\right\} \tag{19}\] is finite. To see this, we assume there are infinitely many pairs \(k,\ell\in\mathbb{N}\) such that \(an_{k}-bn_{\ell}=0\), i.e., \(\frac{n_{k}}{n_{\ell}}=\frac{a}{b}\). Recall that \(a\) and \(b\) are assumed to be fixed. Thus, by (18) there can only be finitely many solutions of this equation for which \(k\) and \(\ell\) belong to different blocks. Assume that \(\frac{a}{b}>1\), which means that \(\frac{n_{k}}{n_{\ell}}=\frac{a}{b}\) is only possible if \(k>\ell\) (the case \(\frac{a}{b}<1\) can be treated similarly; the case \(\frac{a}{b}=1\) is impossible since \(\frac{a}{b}\) is not an integer power of \(2\) by assumption). Since \(\frac{a}{b}\) is not an integer power of \(2\) by assumption, there exists a \(\delta>0\) such that \(\frac{a}{b}\not\in\bigcup_{j\geq 1}\left[(2-\delta)^{j},(2+\delta)^{j}\right]\). However, on the other hand, by (17) for all sufficiently large \(k\) and \(\ell\) with \(k>\ell\) which belong to the same block, we have \[(2-\delta)^{k-\ell}\leq\frac{n_{k}}{n_{\ell}}\leq(2+\delta)^{k-\ell}.\] Thus, \(\frac{n_{k}}{n_{\ell}}=\frac{a}{b}\) is possible for only finitely many pairs \((k,\ell)\) which belong to the same block. As noted above, there are also only finitely many solutions \((k,\ell)\) which belong to different blocks. Overall, the cardinality of the set in (19) is \(\mathcal{O}(1)\). 
We now form the set-theoretic union \[A=\bigcup_{k\geq 1}\{an_{k},bn_{k}\},\] and write the elements of \(A\) as a sequence \((m_{k})_{k\geq 1}\) (sorted in increasing order). Because of (17) and (18), we have \(\liminf_{k\to\infty}\frac{m_{k+1}}{m_{k}}>1\) so that the sequence \((m_{k})_{k\geq 1}\) is a lacunary sequence with some suitable growth factor (which depends on \(a\) and \(b\), but these are assumed to be fixed). Thus, by Lemma 3, we have \[\#\left\{k,\ell\geq 1:\ an_{k}-bn_{\ell}=c\right\}\leq\#\left\{k,\ell\geq 1,\ k\neq \ell:\ m_{k}-m_{\ell}=c\right\}=\mathcal{O}(1),\] where the implied constant in the \(\mathcal{O}\)-term is independent of \(c\); note that the first estimate is trivial since on the right-hand side we have twice as many equations (and thus potential solutions). Summarizing our results, in case (i) of the Lemma we have \[\#\left\{k,\ell\geq 1:\ an_{k}-bn_{\ell}=c\right\}=\mathcal{O}(1), \tag{20}\] uniformly in \(c\geq 0\). \((ii)\) Now assume \(\frac{a}{b}=2^{r}\) for some \(r\in\mathbb{Z}\backslash\{0\}\). We first show that there are not many solutions where \(k\) and \(\ell\) come from different blocks. Assume that \(k\in\Delta_{i_{1}}\) and \(\ell\in\Delta_{i_{2}}\) such that \(i_{1}<i_{2}\). Then, whenever \(i_{2}\) is so large that \(2^{r+1}\leq 2^{2^{i_{2}}}\) (which excludes only finitely many values of \(i_{1}\) and \(i_{2}\)), using the trivial estimate \(i_{1}\leq k\leq 2^{k}\) together with \(2^{i_{2}}+2^{(i_{2}-1)^{4}}<2^{i_{2}^{4}}\), we have \[an_{k} \leq b2^{r}2^{2^{i_{1}^{4}}}\left(2^{k}+i_{1}\right)\] \[\leq b2^{r+1}2^{2^{i_{1}^{4}}}2^{k}\] \[< b2^{2^{i_{2}}}2^{2^{(i_{2}-1)^{4}}}2^{\ell}\] \[\leq b2^{2^{i_{2}^{4}}}2^{\ell}\] \[\leq bn_{\ell}.\] Consequently, \[an_{k}-bn_{\ell}=c\] is not possible for any non-negative \(c\) when \(i_{2}\) is sufficiently large. Assume again that \(k\in\Delta_{i_{1}}\) and \(\ell\in\Delta_{i_{2}}\), but now such that \(i_{1}>i_{2}\). 
Then similar to the previous calculation, assuming that \(i_{1}\) is sufficiently large so that \(2^{-r+1}\leq 2^{2^{i_{1}}}\), and now using that \(2^{i_{1}^{4}}-2^{i_{1}}-2^{(i_{1}-1)^{4}}\geq 2^{i_{1}^{3}}\) holds for all \(i_{1}\in\mathbb{N}\), we have (recall that \(\frac{a}{b}=2^{r}\) and trivially \(i_{2}\leq\ell\leq 2^{\ell}\)) \[\frac{an_{k}}{bn_{\ell}}\geq\frac{2^{2^{i_{1}^{4}}}2^{k}}{2^{-r+1}2^{2^{i_{2}^{4}}}2^{\ell}}\geq\frac{2^{2^{i_{1}^{4}}}}{2^{2^{i_{1}}}2^{2^{(i_{1}-1)^{4}}}}\geq 2^{i_{1}^{3}}.\] Thus, whenever \(i_{1}\) is sufficiently large, then \[an_{k}\left(1-2^{-i_{1}^{3}}\right)=an_{k}-\frac{an_{k}}{2^{i_{1}^{3}}}\leq an_{k}-bn_{\ell}\leq an_{k}.\] Consequently, for sufficiently large \(i_{1}\) (assuming that "sufficiently large" includes the fact that \(i_{1}\geq 10\)), the equality \(an_{k}-bn_{\ell}=c\) requires that \[n_{k}\in\left[\frac{c}{a},\frac{c}{a}\left(1-2^{-1000}\right)^{-1}\right]. \tag{21}\] We claim that, uniformly in \(c\in\mathbb{Z}_{\geq 0}\), there are only finitely many \(k\in\mathbb{N}\) such that (21) holds; more precisely, if \(k\) is sufficiently large, then (21) uniquely determines \(k\). Indeed, by (17) and (18) we have \(\frac{n_{k}}{n_{k-1}}\geq\frac{3}{2}\) for all sufficiently large \(k\). Thus whenever we have (21), then \[n_{k-1}\leq\frac{2}{3}\frac{c}{a}\left(1-2^{-1000}\right)^{-1}\not\in\left[\frac{c}{a},\frac{c}{a}\left(1-2^{-1000}\right)^{-1}\right]\] as well as \[n_{k+1}\geq\frac{3}{2}\frac{c}{a}\not\in\left[\frac{c}{a},\frac{c}{a}\left(1-2^{-1000}\right)^{-1}\right],\] with the possible exception of finitely many indices. 
Thus, overall we have shown that in the case \(\frac{a}{b}=2^{r}\) for some \(r\in\mathbb{Z}\backslash\{0\}\), we have \[\sum_{\begin{subarray}{c}i_{1},i_{2}\geq 1\\ i_{1}\neq i_{2}\end{subarray}}\#\left\{k\in\Delta_{i_{1}},\ \ell\in\Delta_{i_{2}}:\ an_{k}-bn_{\ell}=c\right\}=\mathcal{O}(1), \tag{22}\] with an implied constant that is independent of \(c\geq 0\). This settles case (ii) of the lemma. \((iii)\) Now we are in the situation where \(k,\ell\) are contained in the same block \(\Delta_{i}\) for some \(i\in\mathbb{N}\). We establish a few general estimates which we shall use in the proof of both \((iii)\)\(a)\) and \((iii)\)\(b)\). Recall that we are in the case where there exists an \(r\in\mathbb{Z}\backslash\{0\}\) with \(\frac{a}{b}=2^{r}\) and let \(k,\ell\in\Delta_{i}\) with \(k+r>\ell\). Then, for suitable \(m_{1},m_{2}\in\{1,\ldots,M(i)\}\), we have \[an_{k}-bn_{\ell} = 2^{r}bn_{k}-bn_{\ell}\] \[= 2^{2^{i^{4}}}b(2^{r}(2^{k}+m_{1})-2^{\ell}-m_{2})\] \[= 2^{2^{i^{4}}}b(2^{k+r}+2^{r}m_{1}-2^{\ell}-m_{2}).\] Thus, \(an_{k}-bn_{\ell}=c\) can only hold when \[2^{k+r}-2^{\ell}=\frac{2^{2^{i^{4}}}b(m_{2}-2^{r}m_{1})+c}{2^{2^{i^{4}}}b},\] and for given \(b,c,r,m_{1},m_{2}\) (whence the right-hand side of this equation is fixed) there is at most one pair \((k,\ell)\) with \(k+r>\ell\) for which this can hold (this is essentially the uniqueness of the dyadic representation of integers). There are \(\mathcal{O}(M(i)^{2})=\mathcal{O}(i^{2})\) many possible values for \(m_{1},m_{2}\), so the total number of solutions \((k,\ell)\) of \(an_{k}-bn_{\ell}=c\) with \(k+r>\ell\) and \(k,\ell\in\Delta_{i}\) is at most \(\mathcal{O}(i^{2})\), uniformly in \(c\geq 0\). The same argument can be applied to the case \(k+r<\ell\). Thus, we have established \[\sup_{c\in\mathbb{Z}_{\geq 0}}\#\left\{k,\ell\in\Delta_{i}:k+r\neq\ell,an_{k}-bn_{\ell}=c\right\}=\mathcal{O}(i^{2}). 
\tag{23}\] The number of pairs of indices \(k\) and \(\ell\) with \(k+r=\ell\), which are contained in different sub-blocks \(k\in\Delta_{i}^{(m_{1})}\) and \(\ell\in\Delta_{i}^{(m_{2})}\) for \(m_{1}\neq m_{2}\), is \(\mathcal{O}(M(i))=\mathcal{O}(i)\). Thus, by (23) we have \[\#\left\{k,\ell\in\Delta_{i}:an_{k}-bn_{\ell}=c\right\} =\sum_{m^{\prime}=1}^{M(i)}\#\{k,\ell\in\Delta_{i}^{(m^{\prime})}: k+r=\ell,an_{k}-bn_{\ell}=c\}+\mathcal{O}(i)+\mathcal{O}(i^{2}) \tag{24}\] \[=\sum_{m^{\prime}=1}^{M(i)}\#\{k,\ell\in\Delta_{i}^{(m^{\prime})}: k+r=\ell,an_{k}-bn_{\ell}=c\}+\mathcal{O}(i^{2}).\] In the following, we only consider the case where \(r>0\) (the case \(r<0\) can be treated in an analogous way). We need to count the number of \(k\) and \(\ell\) such that \(k+r=\ell\) and \(k,\ell\in\Delta_{i}^{(m^{\prime})}\) for some \(m^{\prime}\in\{1,\ldots,M(i)\}\), where we have \[\begin{split} an_{k}-bn_{\ell}&=2^{r}bn_{k}-bn_{\ell}\\ &=b\left(2^{r}n_{k}-n_{\ell}\right)\\ &=2^{2^{i^{4}}}b(2^{r}(2^{k}+m^{\prime})-(2^{\ell}+m^{\prime}))\\ &=2^{2^{i^{4}}}b(\underbrace{2^{r+k}-2^{\ell}}_{=0}+2^{r}m^{\prime}-m^{\prime})\\ &=2^{2^{i^{4}}}bm^{\prime}(2^{r}-1).\end{split} \tag{25}\] Now we prove \((iii)\)\(a)\), where we assumed that \(c\) is of the special form \(c=2^{2^{i^{4}}}bm(2^{r}-1)\) for some \(m\in\{1,\ldots,M(i)\}\). By (25), the equation \(an_{k}-bn_{\ell}=c\) is satisfied if and only if \(m=m^{\prime}\) and thus \[\sum_{m^{\prime}=1}^{M(i)}\#\left\{k,\ell\in\Delta_{i}^{(m^{\prime})}:k+r=\ell,an_{k}-bn_{\ell}=c\right\} =\#\{k,\ell\in\Delta_{i}^{(m)}:\ k+r=\ell\}. \tag{26}\] Combining (14), (24) and (26), we get in case \((iii)\)\(a)\) \[\#\left\{k,\ell\in\Delta_{i}:an_{k}-bn_{\ell}=c\right\}=\frac{R^{i}}{\lceil i^{1-\varepsilon}\rceil}-r+\mathcal{O}(i^{2}),\] as desired. Now assume that we are in case \((iii)\)\(b)\), i.e., \(c\) is not of the form \(2^{2^{i^{4}}}bm(2^{r}-1)\) for any \(m\in\{1,\ldots,M(i)\}\). 
Then (25) yields \[\sum_{m^{\prime}=1}^{M(i)}\#\left\{k,\ell\in\Delta_{i}^{(m^{\prime})}:k+r=\ell,an_{k}-bn_{\ell}=c\right\}=0\] and from (24), we obtain \[\#\left\{k,\ell\in\Delta_{i}:an_{k}-bn_{\ell}=c\right\}=\mathcal{O}(i^{2}),\] as claimed. We can now prove that our gap sequence \((n_{k})_{k\geq 1}\) satisfies the desired Diophantine condition. **Corollary 4**.: _The sequence \((n_{k})_{k\geq 1}\) constructed in this section satisfies the Diophantine condition (10) of Theorem 1._ Proof.: Let \(a,b\in\mathbb{N}\) be fixed. Let \(N\in\mathbb{N}\) be given, and let \(I\in\mathbb{N}\) be such that \(N\in\Delta_{I}\). We observe the following (note that \(\left(\frac{i+1}{i}\right)^{1-\varepsilon}\leq 2\) for all \(i\in\mathbb{N}\)) \[\sum_{i=1}^{I}\frac{R^{i}}{i^{1-\varepsilon}} =\frac{R^{I}}{I^{1-\varepsilon}}+\sum_{i=1}^{I-1}\frac{R^{i}}{i^ {1-\varepsilon}}\] \[\leq\frac{R^{I}}{I^{1-\varepsilon}}+\frac{2}{R}\sum_{i=1}^{I} \frac{R^{i}}{i^{1-\varepsilon}}.\] Rearranging the terms and using the fact that there exists a constant \(c>0\) (depending on \(R\)) such that \(c\log N\leq I\) and \(R^{I}\leq RN\) yields \[\sum_{i=1}^{I}\frac{R^{i}}{i^{1-\varepsilon}}\leq\left(1-\frac{2}{R}\right)^{ -1}\frac{R^{I}}{I^{1-\varepsilon}}=\mathcal{O}\left(\frac{N}{(\log N)^{1- \varepsilon}}\right).\] Combining all the worst-case estimates from Lemma 2, the previous calculation shows that \[\#\{k,\ell\leq N:\ an_{k}-bn_{\ell}=c\}=\mathcal{O}\left(\sum_{i=1}^{I}\frac{ R^{i}}{i^{1-\varepsilon}}\right)=\mathcal{O}\left(\frac{N}{(\log N)^{1- \varepsilon}}\right),\] uniformly in \(c\in\mathbb{Z}_{\geq 0}\), as claimed. ## 3. Further ingredients - Gaposhkin's Berry-Esseen result We will need a Berry-Esseen type quantitative central limit theorem for the particular lacunary trigonometric sum \(\sum_{k=1}^{N}\cos(2\pi 2^{k}x)\). As in the introduction, \(\lambda\) denotes Lebesgue measure and \(\Phi\) denotes the standard normal distribution function. 
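Parenthetically, the geometric-sum estimate from the proof of Corollary 4 — that the partial sums \(\sum_{i\leq I}R^{i}/i^{1-\varepsilon}\) are dominated by a constant multiple of their last term — is easy to check numerically. The snippet below is only an illustration with arbitrary sample values of \(R\), \(\varepsilon\) and \(I\), not part of the proof; it requires \(R>2\), as does the bound itself.

```python
def partial_sum(R, eps, I):
    """Partial sum of R**i / i**(1 - eps) for i = 1, ..., I."""
    return sum(R ** i / i ** (1 - eps) for i in range(1, I + 1))

def last_term_bound(R, eps, I):
    """The bound (1 - 2/R)**(-1) * R**I / I**(1 - eps) from the proof
    of Corollary 4 (valid for R > 2)."""
    return (1 - 2 / R) ** (-1) * R ** I / I ** (1 - eps)
```

Since \(R^{I}/I^{1-\varepsilon}=\mathcal{O}(N/(\log N)^{1-\varepsilon})\) for \(N\in\Delta_{I}\), this is exactly the saving needed for the Diophantine condition (10).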
**Lemma 5** (Gaposhkin [15]).: _Let \(\lambda_{1},\ldots,\lambda_{N}\) be non-negative real numbers such that_ \[\sum_{k=1}^{N}\lambda_{k}^{2}=1.\] _Set \(\Lambda_{N}=\max_{1\leq k\leq N}\lambda_{k}\). Then_ \[\sup_{t\in\mathbb{R}}\left|\lambda\left(\left\{x\in(0,1):\ \sqrt{2}\sum_{k=1}^{N} \lambda_{k}\cos(2\pi 2^{k}x)<t\right\}\right)-\Phi(t)\right|=\mathcal{O}\left( \Lambda_{N}^{1/4}\right),\] _where the implied constant is absolute._ ## 4. The law of the iterated logarithm Let \(\varepsilon\in(0,1)\) and \(K\in(0,\infty)\). We now define our trigonometric polynomial \[f(x)=\sum_{j=0}^{d-1}\cos(2\pi 2^{j}x),\] where \(d\) satisfies (11), i.e., \(\frac{d\sqrt{\varepsilon}}{4}-2\geq\frac{K\sqrt{d}}{\sqrt{2}}\). We will prove that for this trigonometric polynomial \(f\), and for the gap sequence \((n_{k})_{k\geq 1}\) we constructed in Section 2, the conclusion of Theorem 1 is indeed satisfied. Clearly, since our cosine functions are uncorrelated, \[\|f\|_{2}=\frac{\sqrt{d}}{\sqrt{2}}. \tag{27}\] Applying the Erdos-Gal law of the iterated logarithm (see Equation (3)) together with the triangle inequality gives \[\limsup_{N\to\infty}\frac{\left|\sum_{k=1}^{N}f(n_{k}x)\right|}{\sqrt{2N\log \log N}}\leq\frac{d}{\sqrt{2}}\qquad\text{a.e.} \tag{28}\] Recall that \(\#\Delta_{i}=R^{i}\). 
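The variance computation (27) rests on the orthogonality of the cosines \(\cos(2\pi 2^{j}x)\), which gives \(\|f\|_{2}^{2}=d/2\). A quick numerical integration confirms this (a toy check only; with 4096 midpoint samples, which exceeds every frequency occurring in \(f^{2}\), the quadrature is exact up to rounding):

```python
import math

def f(x, d):
    """The trigonometric polynomial f(x) = sum_{j=0}^{d-1} cos(2*pi*2**j*x)."""
    return sum(math.cos(2 * math.pi * (2 ** j) * x) for j in range(d))

def l2_norm_sq(d, samples=4096):
    """Midpoint-rule approximation of ||f||_2^2 = int_0^1 f(x)^2 dx."""
    return sum(f((t + 0.5) / samples, d) ** 2 for t in range(samples)) / samples
```

For each \(d\), the result agrees with \(d/2\) to machine precision, since every nonconstant frequency in \(f^{2}\) integrates to zero.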
We will show that \[\limsup_{i\to\infty}\frac{\left|\sum_{k\in\Delta_{i}}f(n_{k}x)\right|}{\sqrt{2R^{i}\log\log R^{i}}}\geq\frac{d\sqrt{\varepsilon}}{2}-2\qquad\text{a.e.} \tag{29}\] Assuming this to be true, then with the notation \(N(i):=\frac{R^{i+1}-R}{R-1}\) and upon noting that \(\{1,\ldots,N(i)\}=\Delta_{1}\cup\cdots\cup\Delta_{i}=\{1,\ldots,N(i-1)\}\cup\Delta_{i}\), for almost all \(x\in[0,1]\), we will obtain \[\limsup_{i\to\infty}\frac{\left|\sum_{k=1}^{N(i)}f(n_{k}x)\right|}{\sqrt{2N(i)\log\log N(i)}}\] \[\geq \limsup_{i\to\infty}\frac{\left|\sum_{k\in\Delta_{i}}f(n_{k}x)\right|}{\sqrt{2N(i)\log\log N(i)}}-\limsup_{i\to\infty}\frac{\left|\sum_{k=1}^{N(i-1)}f(n_{k}x)\right|}{\sqrt{2N(i)\log\log N(i)}}\] \[\geq \underbrace{\limsup_{i\to\infty}\frac{\left|\sum_{k\in\Delta_{i}}f(n_{k}x)\right|}{\sqrt{2R^{i}\log\log R^{i}}}}_{\geq\frac{d\sqrt{\varepsilon}}{2}-2\text{ by (29)}}\underbrace{\frac{\sqrt{2R^{i}\log\log R^{i}}}{\sqrt{2N(i)\log\log N(i)}}}_{\longrightarrow 1\text{ as }i\to\infty}\] \[-\underbrace{\limsup_{i\to\infty}\frac{\left|\sum_{k=1}^{N(i-1)}f(n_{k}x)\right|}{\sqrt{2N(i-1)\log\log N(i-1)}}}_{\leq\frac{d}{\sqrt{2}}\text{ by (28)}}\underbrace{\lim_{i\to\infty}\frac{\sqrt{2N(i-1)\log\log N(i-1)}}{\sqrt{2N(i)\log\log N(i)}}}_{=\frac{1}{\sqrt{R}}\leq\frac{\sqrt{\varepsilon}}{\sqrt{8}}\text{ by the choice of }R}\] \[\geq \frac{d\sqrt{\varepsilon}}{4}-2.\] By (11) and (27), we have \(\frac{d\sqrt{\varepsilon}}{4}-2>K\|f\|_{2}\). Thus, it remains to establish (29). We note in passing that this chain of calculations actually gives a quantitative form of Theorem 1, where the dependence between the factor \(K\), the size of \(\varepsilon\) from the savings in the Diophantine condition, and the degree \(d\) of the trigonometric polynomial are brought into relation. The conclusion of Theorem 1 is non-trivial when the right-hand side exceeds \(\|f\|_{2}\). 
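The quantitative interplay between \(K\), \(\varepsilon\) and \(d\) in (11) is easy to probe numerically. The snippet below checks, for \(K=1\) and sample values of \(\varepsilon\), that the choice \(d=\lceil 21/\varepsilon\rceil\) satisfies \(\frac{d\sqrt{\varepsilon}}{4}-2\geq\frac{K\sqrt{d}}{\sqrt{2}}\) (an illustration only, mirroring the remark made in the text):

```python
import math

def satisfies_eq11(eps, K=1.0):
    """Check inequality (11), d*sqrt(eps)/4 - 2 >= K*sqrt(d/2),
    for the choice d = ceil(21/eps)."""
    d = math.ceil(21 / eps)
    return d * math.sqrt(eps) / 4 - 2 >= K * math.sqrt(d / 2)
```

Note that as \(\varepsilon\to 0\) the required degree \(d\) grows like \(\varepsilon^{-1}\), matching the heuristic that smaller Diophantine savings demand higher-degree polynomials.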
One can check that for \(K=1\), the inequality (11) holds for \(d=21\varepsilon^{-1}\), so that for given \(\varepsilon\) from the Diophantine condition, we can construct a counterexample to the LIL in its "truly independent" form by considering a trigonometric polynomial of degree \(\lceil 21\varepsilon^{-1}\rceil\). As \(\varepsilon\) in the Diophantine condition approaches \(0\), we need to increase the degree of the trigonometric polynomial in order to obtain a deviation from the "truly independent" form of the LIL. The main result of this paper is that the Diophantine condition (9) is essentially optimal when trigonometric polynomials of arbitrary degree are considered; however, we emphatically do _not_ claim that this condition is also optimal in the case of trigonometric polynomials whose degree is bounded. For degree 1 (the Erdos-Gal case), no Diophantine condition at all is necessary. For trigonometric polynomials of degree 2, we believe that a condition of order roughly \(L(N,a,b,c)=\mathcal{O}\left(\frac{N}{(\log N)^{1/2}}\right)\) should be optimal, and for degree at most \(d\), a condition of the form \(L(N,a,b,c)=\mathcal{O}\left(\frac{N}{(\log N)^{c(d)}}\right)\) should be optimal, with some suitable constants \(c(d)\) for which \(c(d)\nearrow 1\) as \(d\to\infty\). The relation between \(d\) and \(\varepsilon\) was not our main focus when writing this paper, but this seems to be an interesting topic for further research. It remains to prove (29). 
For \(i\in\mathbb{N}\), let \(\mathcal{F}_{i}\) be the sigma-field generated by the collection of intervals \[\left[\frac{a-1}{2^{2^{(i+1)^{4}}}},\frac{a}{2^{2^{(i+1)^{4}}}}\right),\qquad a =1,2,\ldots,2^{2^{(i+1)^{4}}}.\] We set \[Y_{i}(x):=\sum_{k\in\Delta_{i}}f(n_{k}x),\quad x\in[0,1],\] as well as \[Z_{i}:=\mathbb{E}\left(Y_{i}\Big{|}\mathcal{F}_{i}\right),\qquad i\geq 1,\] which, as we shall see in a moment, is a good approximation of the random variable \(Y_{i}\); in probabilistic parlance, the system \((\mathcal{F}_{i})_{i\geq 1}\) forms a filtration of the unit interval. Using that \[\|f^{\prime}\|_{\infty}\leq\sum_{j=0}^{d-1}2\pi 2^{j}\leq 2\pi 2^{d},\] and setting \(I_{a}^{i}:=\left[\frac{a-1}{2^{2^{(i+1)^{4}}}},\frac{a}{2^{2^{(i+1)^{4}}}}\right)\) for \(a\in\left\{1,\ldots,2^{2^{(i+1)^{4}}}\right\}=:S_{i}\), where \(i\in\mathbb{N}\), we see that by using the standard estimate \(|f(x)-f(y)|\leq\|f^{\prime}\|_{\infty}|x-y|\), \[\|Z_{i}-Y_{i}\|_{\infty} =\sup_{x\in[0,1]}|Z_{i}(x)-Y_{i}(x)|\] \[=\sup_{x\in[0,1]}\left|\sum_{a\in S_{i}}\sum_{k\in\Delta_{i}} \left(2^{2^{(i+1)^{4}}}\int_{I_{a}^{i}}f(n_{k}t)dt-f(n_{k}x)\right)\mathds{1 }_{I_{a}^{i}}(x)\right|\] \[\leq\max_{a\in S_{i}}\sup_{x\in I_{a}^{i}}\sum_{k\in\Delta_{i}} \left|2^{2^{(i+1)^{4}}}\int_{I_{a}^{i}}\left(f(n_{k}t)-f(n_{k}x)\right)dt\right|\] \[\leq\max_{a\in S_{i}}\sup_{x\in I_{a}^{i}}\sum_{k\in\Delta_{i}} \frac{||f^{\prime}||_{\infty}n_{k}}{2^{2^{(i+1)^{4}}}}\] \[=\sum_{k\in\Delta_{i}}\frac{2\pi 2^{d}n_{k}}{2^{2^{(i+1)^{4}}}}\] \[\leq\frac{R^{i}2\pi 2^{d}2^{2^{i^{4}}}\left(2^{R^{i+1}}+i\right)}{2 ^{2^{(i+1)^{4}}}},\] which goes rapidly to zero as \(i\to\infty\) (recall that \(R\) and \(d\) are fixed). 
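The passage from \(Y_{i}\) to \(Z_{i}\) is simply averaging over the cells of a fine dyadic partition, and the error bound above is a Lipschitz estimate. A toy version (with small made-up frequencies in place of the huge \(n_{k}\), and a numerical interval average standing in for the exact conditional expectation) shows the sup-distance behaving like \(\|Y^{\prime}\|_{\infty}\cdot 2^{-h}\):

```python
import math

def Y(x, freqs):
    """Toy stand-in for Y_i: a sum of cosines with the given integer frequencies."""
    return sum(math.cos(2 * math.pi * n * x) for n in freqs)

def Z(x, freqs, h, samples=200):
    """Conditional expectation of Y given the dyadic partition of mesh 2**-h:
    the (numerically approximated) average of Y over the dyadic cell containing x."""
    width = 2.0 ** (-h)
    lo = math.floor(x / width) * width
    return sum(Y(lo + width * (t + 0.5) / samples, freqs)
               for t in range(samples)) / samples
```

In the proof the mesh \(2^{-2^{(i+1)^{4}}}\) shrinks far faster than \(\|Y_{i}^{\prime}\|_{\infty}\) grows, which is why \(\|Z_{i}-Y_{i}\|_{\infty}\to 0\).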
Thus, we have \[\limsup_{i\to\infty}\frac{|Y_{i}|}{\sqrt{2R^{i}\log\log R^{i}}}\geq\frac{d\sqrt{ \varepsilon}}{2}-2\qquad\text{a.e.},\] (which is just another way of writing (29)) if and only if \[\limsup_{i\to\infty}\frac{|Z_{i}|}{\sqrt{2R^{i}\log\log R^{i}}}\geq\frac{d \sqrt{\varepsilon}}{2}-2\qquad\text{a.e.}, \tag{30}\] and our aim thus becomes to establish (30). For this purpose, we define the sets \[A_{i}:=\left\{x\in[0,1]:\ |Z_{i}|\geq\left(\frac{d\sqrt{\varepsilon}}{2}-2 \right)\sqrt{2R^{i}\log\log R^{i}}-2d^{2}i-3\right\},\qquad i\in\mathbb{N},\] where the terms which are subtracted on the right-hand side will allow us to incorporate errors which will appear later on in the computations. By construction, \(A_{i}\) is \(\mathcal{F}_{i}\)-measurable for all \(i\geq 1\). We claim that \(A_{i}\) is independent of \(\mathcal{F}_{i-1}\) (and hence also independent of \(\mathcal{F}_{i-2},\mathcal{F}_{i-3},\dots\)). This follows from the fact that all numbers \(n_{k}\) with \(k\in\Delta_{i}\) are integer multiples of \(2^{2^{i^{4}}}\) and hence the functions \(Y_{i}\) and \(Z_{i}\) are periodic with period-length \(\frac{1}{2^{2^{i^{4}}}}\). From that we infer, again writing \(I_{a}^{i-1}=\left[\frac{a-1}{2^{2^{i^{4}}}},\frac{a}{2^{2^{i^{4}}}}\right)\) for \(a\in S_{i-1}=\{1,\dots,2^{2^{i^{4}}}\}\), that \(\lambda(A_{i}\cap I_{a}^{i-1})\) has the same value for all \(a\in S_{i-1}\). Thus, for all \(x\in[0,1]\), we obtain \[\mathbb{E}\left[\mathbb{1}_{A_{i}}|\mathcal{F}_{i-1}\right](x) =\sum_{a\in S_{i-1}}2^{2^{i^{4}}}\lambda(A_{i}\cap I_{a}^{i-1}) \mathbb{1}_{I_{a}^{i-1}}(x)\] \[=2^{2^{i^{4}}}\lambda(A_{i}\cap I_{1}^{i-1})\] \[=\sum_{a\in S_{i-1}}\lambda(A_{i}\cap I_{a}^{i-1})\] \[=\lambda(A_{i}),\] which proves our claim. 
Hence, the sets \(A_{1},A_{2},A_{3},\dots\) are stochastically independent - this was the purpose of the factor \(2^{2^{i^{4}}}\) in the definition of \((n_{k})_{k\geq 1}\) in (16) for all \(k\) from the same block \(\Delta_{i}\), and for switching from \(Y_{i}\) to the discretized approximations \(Z_{i}\).4 Footnote 4: Our aim is to prove that for all \(i\) the lacunary sum \(\sum_{k\in\Delta_{i}}f(n_{k}x)\) is large for a sufficiently large set of values of \(x\), which yields the desired LIL by an application of the divergence Borel–Cantelli lemma. Without the factor \(2^{2^{i^{4}}}\) in the definition of \(n_{k}\) for all \(k\) from the same block \(\Delta_{i}\) we would lack the necessary stochastic independence of these sets, which is a necessary prerequisite for an application of the divergence Borel–Cantelli lemma. More specifically, in what follows we will decompose \(\sum_{k\in\Delta_{i}}f(n_{k}x)\) into the product of a “local variance function”, multiplied with a pure trigonometric sum (whose distribution is close to Gaussian), and we have to make sure that the local variance function is not large at the same locations of \(x\), for different values of \(i\). It remains to establish that \[\sum_{i=1}^{\infty}\lambda(A_{i})=+\infty. \tag{31}\] Then, by an application of the second Borel-Cantelli lemma (using the independence of the sets \(A_{1},A_{2},\dots\)), we can conclude that almost all \(x\in[0,1]\) are contained in infinitely many sets \(A_{i}\), which implies (30) and (29). As observed before, we have \(\|Y_{i}-Z_{i}\|_{\infty}\leq 1\) for all sufficiently large \(i\in\mathbb{N}\). Thus, by the triangle inequality \[A_{i}\supseteq\left\{x\in[0,1]:\ |Y_{i}|\geq\left(\frac{d\sqrt{\varepsilon}}{2}-2 \right)\sqrt{2R^{i}\log\log R^{i}}-2d^{2}i-2\right\} \tag{32}\] for all sufficiently large \(i\in\mathbb{N}\). As noted, \(Y_{i}\) is periodic with period \(\frac{1}{2^{2^{i^{4}}}}\). 
We now define a new sequence \((\nu_{k})_{k\geq 1}\) via \[\nu_{k}:=\frac{n_{k}}{2^{2^{i^{4}}}},\qquad k\in\Delta_{i},\quad i\in\mathbb{ N},\] or, equivalently, via \[\nu_{k}=2^{k}+m,\qquad k\in\Delta_{i}^{(m)},\quad m\in\{1,\ldots,M(i)\},\ i\in \mathbb{N}.\] Then \(Y_{i}\) has the same distribution as \(\sum_{k\in\Delta_{i}}f(\nu_{k}x)\) so that \[\lambda\left(\left\{x\in[0,1]:\ |Y_{i}|\geq\left(\frac{d\sqrt{ \varepsilon}}{2}-2\right)\sqrt{2R^{i}\log\log R^{i}}-2d^{2}i-2\right\}\right)\] \[= \lambda\left(\left\{x\in[0,1]:\ \left|\sum_{k\in\Delta_{i}}f(\nu_{k}x) \right|\geq\left(\frac{d\sqrt{\varepsilon}}{2}-2\right)\sqrt{2R^{i}\log\log R^ {i}}-2d^{2}i-2\right\}\right). \tag{33}\] Now we will relate the particular choice of our function \(f\) with our particular construction of the sequence \((n_{k})_{k\geq 1}\), somewhat in the spirit of the Erdos-Fortet example. The key point of the Erdos-Fortet example is that in the sum \(\sum\left(\cos(2\pi(2^{k}-1)x)+\cos(4\pi(2^{k}-1)x)\right)\), the term \(\cos(4\pi(2^{k}-1)x)=\cos(2\pi(2^{k+1}-2)x)\) (from frequency \(4\pi\) and index \(k\)) and the term \(\cos(2\pi(2^{k+1}-1)x)\) (from frequency \(2\pi\) and index \(k+1\)) can be combined, so that by the standard trigonometric identity \(\cos(\alpha)+\cos(\beta)=2\cos\left(\frac{\alpha+\beta}{2}\right)\cos\left( \frac{\alpha-\beta}{2}\right)\), we have \[\sum_{k=1}^{N}\left(\cos(2\pi(2^{k}-1)x)+\cos(4\pi(2^{k}-1)x)\right) = \sum_{k=1}^{N}\left(\cos(2\pi(2^{k}-1)x)+\cos(2\pi(2^{k}-2)x)\right)\] \[\qquad+\cos(2\pi(2^{N+1}-2)x)-1\] \[\approx \sum_{k=1}^{N}\left(\cos(2\pi(2^{k}-1)x)+\cos(2\pi(2^{k}-2)x)\right)\] \[= 2\cos(\pi x)\sum_{k=1}^{N}\cos(2\pi(2^{k}-3/2)x), \tag{34}\] i.e., the generalized lacunary sum essentially decomposes into the product of the fixed function \(2\cos(\pi x)\) and a purely trigonometric lacunary sum (this explains why the factor \(2\cos(\pi x)\) appears on the right-hand side of (5)). 
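The re-indexing and product formula (34) can be verified numerically for small \(N\); the check below includes the two boundary terms produced by the shift of the summation index (a direct illustration of the identity, nothing more):

```python
import math

def lhs(x, N):
    """Left-hand side of (34): sum over k of
    cos(2*pi*(2**k - 1)*x) + cos(4*pi*(2**k - 1)*x)."""
    return sum(math.cos(2 * math.pi * (2 ** k - 1) * x)
               + math.cos(4 * math.pi * (2 ** k - 1) * x)
               for k in range(1, N + 1))

def rhs(x, N):
    """Product form 2*cos(pi*x) * sum cos(2*pi*(2**k - 3/2)*x), plus the
    boundary terms cos(2*pi*(2**(N+1) - 2)*x) - 1 from the re-indexing."""
    prod = 2 * math.cos(math.pi * x) * sum(math.cos(2 * math.pi * (2 ** k - 1.5) * x)
                                           for k in range(1, N + 1))
    return prod + math.cos(2 * math.pi * (2 ** (N + 1) - 2) * x) - 1
```

The two sides agree to floating-point precision, exhibiting the factorization into the slowly varying \(2\cos(\pi x)\) and a pure lacunary cosine sum.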
Our sum \(\sum_{k\in\Delta_{i}}f(\nu_{k}x)\) will split in a somewhat similar way into a (slowly fluctuating) function \(g_{i}(x)\), multiplied with a purely trigonometric lacunary sum (to which we can apply Gaposhkin's quantitative CLT of Lemma 5). However, while in Erdos-Fortet's construction the contribution of only two subsequent summation indices can be combined (leading to a factor \(2\cos(\pi x)\) which is bounded by 2), in our construction the contribution of \(d\) subsequent summation indices can be combined, leading to a function \(g_{i}(x)\) which becomes as large as \(d\) for some values of \(x\). We will continue to comment on the heuristics after some further calculations. For \(i\geq 1\) and \(m\in\{1,\ldots,M(i)\}\), we have \[\sum_{k\in\Delta_{i}^{(m)}}f(\nu_{k}x) = \sum_{k\in\Delta_{i}^{(m)}}\sum_{j=0}^{d-1}\cos(2\pi 2^{j}(2^{k}+m)x)\] \[= \sum_{k\in\Delta_{i}^{(m)}}\sum_{j=0}^{d-1}\cos(2\pi(2^{k+j}+2^{j}m)x). \tag{35}\] The sequence \((n_{k})_{k\geq 1}\) (and so \((\nu_{k})_{k\geq 1}\)) and the trigonometric polynomial \(f\) were constructed in such a way that in the representation (35) there are many terms that can be combined. More precisely, when understanding \(k+j=\ell\) as a new summation index in (35), then there are \(d\) many different pairs \((k,j)\) for which \(k+j\) adds up to \(\ell\). 
Thus, the sum in (35) can be rewritten as \[\sum_{k\in\Delta_{i}^{(m)}}\sum_{j=0}^{d-1}\cos(2\pi(2^{k+j}+2^{j}m)x)=\sum_{\ell\in\Delta_{i}^{(m)}}\sum_{j=0}^{d-1}\cos(2\pi(2^{\ell}+2^{j}m)x)+E_{i,m}(x), \tag{36}\] where \(E_{i,m}(x)\) is an error term consisting of sums of cosines which comes from a) the \(d-1\) smallest indices \(\ell\in\Delta_{i}^{(m)}\), for which there do not exist \(d\) many pairs \((k,j)\) with \(k\in\Delta_{i}^{(m)}\) and \(j\in\{0,\ldots,d-1\}\) such that \(k+j=\ell\), and b) from the \(d-1\) largest indices \(k\) in \(\Delta_{i}^{(m)}\) for which \(k+j\) exceeds all elements \(\ell\in\Delta_{i}^{(m)}\) for some \(j\in\{0,\ldots,d-1\}\).5 Hence, \(E_{i,m}\) is a sum of at most \(d^{2}\) many cosine functions, and thus Footnote 5: Let \(\left\{k_{1},\ldots,k_{\#\Delta_{i}^{(m)}}\right\}\) denote the elements of \(\Delta_{i}^{(m)}\) in increasing order. Then \(k_{1}\) has exactly one representation of the form \(k+j\) with \(k\in\Delta_{i}^{(m)}\) and \(j\in\{0,\ldots,d-1\}\), namely \(k_{1}=k_{1}+0\). \(k_{2}\) has two such representations, namely \(k_{2}=k_{2}+0\) and \(k_{2}=k_{1}+1\). So, the first \(d-1\) elements of \(\Delta_{i}^{(m)}\) have less than \(d\) possible representations of the form \(k+j\). This gives us precisely \(\frac{d(d-1)}{2}\) many cosine functions which form one part of \(E_{i,m}\). On the other hand, if \(k\) is one of the \(d-1\) largest elements in \(\Delta_{i}^{(m)}\), then we have strictly less than \(d\) choices for \(j\in\{0,\ldots,d-1\}\) such that \(k+j\in\Delta_{i}^{(m)}\). This leads to another \(\frac{d(d-1)}{2}\) summands in \(E_{i,m}\). In total \(E_{i,m}\) consists of \(d(d-1)\leq d^{2}\) many cosine functions. \[\left\|E_{i,m}\right\|_{\infty}\leq d^{2}\] as well as (recall \(M(i)=\left\lceil i^{1-\varepsilon}\right\rceil\leq i\)) \[\left\|\sum_{m=1}^{M(i)}E_{i,m}\right\|_{\infty}\leq d^{2}i. \tag{37}\] Let \(w\in\mathbb{N}\). 
Then, applying the trigonometric identity \(\cos(x+y)=\cos x\cos y-\sin x\sin y\), we have \[\sum_{j=0}^{d-1}\cos(2\pi(w+2^{j}m)x) = \cos(2\pi wx)\sum_{j=0}^{d-1}\cos(2\pi 2^{j}mx)-\sin(2\pi wx)\sum_{j=0}^{d-1}\sin(2\pi 2^{j}mx).\] Applying this to the sum in (36), together with (35) and the trigonometric identity \(\cos(2\pi y)=1-2(\sin(\pi y))^{2},\ y\in\mathbb{R}\), after summing over \(m=1,\ldots,M(i)\), we arrive at \[\sum_{k\in\Delta_{i}}f(\nu_{k}x) = \sum_{m=1}^{M(i)}\left(\sum_{j=0}^{d-1}\cos(2\pi 2^{j}mx)\sum_{k\in\Delta_{i}^{(m)}}\cos(2\pi 2^{k}x)-\sum_{j=0}^{d-1}\sin(2\pi 2^{j}mx)\sum_{k\in\Delta_{i}^{(m)}}\sin(2\pi 2^{k}x)+E_{i,m}(x)\right)\] \[= d\sum_{k\in\Delta_{i}}\cos(2\pi 2^{k}x)\] \[\quad-2\sum_{m=1}^{M(i)}\sum_{j=0}^{d-1}(\sin(\pi 2^{j}mx))^{2}\sum_{k\in\Delta_{i}^{(m)}}\cos(2\pi 2^{k}x) \tag{38}\] \[\quad-\sum_{m=1}^{M(i)}\sum_{j=0}^{d-1}\sin(2\pi 2^{j}mx)\sum_{k\in\Delta_{i}^{(m)}}\sin(2\pi 2^{k}x) \tag{39}\] \[\quad+\sum_{m=1}^{M(i)}E_{i,m}(x). \tag{40}\] We comment one last time on the heuristics behind our construction of the function \(f\) and the sequence \((n_{k})_{k\geq 1}\). They were carefully adapted to each other so that (after removing the extra periodicity by changing from \(n_{k}\) to \(\nu_{k}\)), we can essentially write \[\sum_{k\in\Delta_{i}}f(\nu_{k}x)=g_{i}(x)\sum_{k\in\Delta_{i}}\cos(2\pi 2^{k}x) +(\mbox{errors}),\] where \(g_{i}(x)\) is a function which becomes as large as \(d\) when \(x\) is small (we ignore here the fact that in the equations above, one lacunary sum is a sine sum, not a cosine sum). Heuristically (ignoring that \(g_{i}(x)\) also depends on \(m\)), we essentially have \(g_{i}(x)=d-g_{i}^{(1)}(x)-g_{i}^{(2)}(x)\), where \(g_{i}^{(1)}\) and \(g_{i}^{(2)}\) are the slowly fluctuating functions in the double sums in lines (38) and (39), respectively. We can ensure that \(g_{i}^{(1)}\) and \(g_{i}^{(2)}\) are small when \(x\) is smaller than \(1/M(i)\approx i^{-1+\varepsilon}\) by a suitable constant factor. 
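The cancellation that produces the leading term \(d\sum_{k}\cos(2\pi 2^{k}x)\) rests on the termwise identity \(\cos(2\pi y)=1-2(\sin(\pi y))^{2}\), i.e., \(\sum_{j=0}^{d-1}\cos(2\pi 2^{j}mx)=d-2\sum_{j=0}^{d-1}(\sin(\pi 2^{j}mx))^{2}\). This summed form is easy to confirm numerically (a standalone toy check):

```python
import math

def cos_sum(x, m, d):
    """sum_{j=0}^{d-1} cos(2*pi*2**j*m*x)."""
    return sum(math.cos(2 * math.pi * (2 ** j) * m * x) for j in range(d))

def via_sin_sq(x, m, d):
    """The same sum rewritten via cos(2*pi*y) = 1 - 2*sin(pi*y)**2."""
    return d - 2 * sum(math.sin(math.pi * (2 ** j) * m * x) ** 2 for j in range(d))
```

Near \(x=0\) the sine squares are small, so the sum is close to its maximal value \(d\); this is the mechanism behind the local variance gain.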
The sum \(R^{-i/2}\sum_{k\in\Delta_{i}}\cos(2\pi 2^{k}x)\) is a classical normalized lacunary sum and behaves like a Gaussian \(\mathcal{N}(0,1/2)\) random variable (see, e.g., [18, Theorem 1]). Thus, for \(x\) near \(0\), the sum \(R^{-i/2}\sum_{k\in\Delta_{i}}f(\nu_{k}x)\) essentially behaves like \(R^{-i/2}d\sum_{k\in\Delta_{i}}\cos(2\pi 2^{k}x)\) and thus like a \(\mathcal{N}(0,d^{2}/2)\) random variable (locally for \(x\) near \(0\) we have gained a factor \(d\) for the variance in comparison with \(\|f\|_{2}^{2}\), this is the key point!), and we can factorize \[\mbox{Prob}\left(x\in[0,1]\,:\,\left|\sum_{k\in\Delta_{i}}f(\nu_{ k}x)\right|\ \mbox{is ``large''}\right)\] \[\geq \mbox{Prob}\big{(}x\ \mbox{is ``close enough'' to $0$ so that $g_{i}(x)\approx d$}\big{)}\times\mbox{Prob}\big{(}\mbox{a $\mathcal{N}(0,d^{2}/2)$ r.v. is ``large''}\big{)}.\] The size of the set of values of \(x\) which are close enough to \(0\) is around \(i^{-1+\varepsilon}\), see above, while the probability of a \(\mathcal{N}(0,d^{2}/2)\) r.v. exceeding something around \(\frac{d\sqrt{\varepsilon}}{2}\sqrt{2\log\log R^{i}}\) is roughly \(e^{-\frac{\varepsilon}{2}\log\log R^{i}}\approx i^{-\varepsilon/2}\). Overall this gives a probability of \(i^{-1+\varepsilon/2}\) that \(\left|\sum_{k\in\Delta_{i}}f(\nu_{k}x)\right|\) is "large", which allows an application of the divergence Borel-Cantelli lemma. Now we make this heuristic precise. Let \(i\in\mathbb{N}\) and \(h_{i}\in\mathbb{N}\) such that \[\frac{1}{20d2^{d}M(i)}\leq 2^{-h_{i}}\leq\frac{1}{10d2^{d}M(i)},\] which implies that \[2^{-h_{i}}\geq\frac{1}{20d2^{d}\lceil i^{1-\varepsilon}\rceil}\geq i^{-1+5 \varepsilon/6}\] for sufficiently large \(i\). 
If \(i\) is sufficiently large, then for all \(k\in\Delta_{i}\) we have \(k\geq R^{i-1}\geq i\geq h_{i}\), so that by periodicity, for all \(t>0\), we have (recall that \(\#\Delta_{i}=R^{i}\)) \[\lambda\left(\left\{x\in[0,2^{-h_{i}}]\,:\,\sum_{k\in\Delta_{i}} \cos(2\pi 2^{k}x)\geq t\right\}\right)\] \[=2^{-h_{i}}\sum_{a=0}^{2^{h_{i}}-1}\lambda\left(\left\{x\in \left[\frac{a}{2^{h_{i}}},\frac{a+1}{2^{h_{i}}}\right]\,:\,\sum_{k=1}^{R^{i}} \cos(2\pi 2^{k}x)\geq t\right\}\right)\] \[=2^{-h_{i}}\lambda\left(\left\{x\in[0,1]\,:\,\sum_{k=1}^{R^{i}} \cos(2\pi 2^{k}x)\geq t\right\}\right).\] Applying Gaposhkin's Berry-Esseen type estimate (see Lemma 5) with all weights being equal, we obtain using \(1-\Phi(y)\geq 1/(4ye^{y^{2}/2})\) for \(y\geq 4\) (see, e.g., [26, Proposition 3]), that \[2^{-h_{i}}\lambda\left(\left\{x\in[0,1]\,:\,\sum_{k=1}^{R^{i}} \cos(2\pi 2^{k}x)\geq\frac{\sqrt{\varepsilon}}{2}\sqrt{2R^{i}\log\log R^{i}} \right\}\right)\] \[\geq 2^{-h_{i}}\left(1-\Phi\left(\frac{\sqrt{\varepsilon}}{2}\sqrt{ \log\log R^{i}}\right)-\mathcal{O}(R^{-i/4})\right)\] \[\geq \underbrace{2^{-h_{i}}}_{\geq i^{-1+5\varepsilon/6}\text{ for sufficiently large }i}\underbrace{\left(\frac{e^{-\frac{\varepsilon}{4}\log\log R^{i}}}{4\frac{\sqrt{ \varepsilon}}{2}\sqrt{\log\log R^{i}}}-\mathcal{O}(R^{-i/4})\right)}_{\geq i^{- \varepsilon/3}\text{ for sufficiently large }i} \tag{41}\] \[\geq \frac{1}{i^{1-\varepsilon/2}}\] for sufficiently large \(i\in\mathbb{N}\). We need to show that the terms in (38), (39) and (40) do not make a relevant contribution when \(x\in[0,2^{-h_{i}}]\). Recall that by construction the smallest index \(k\) in \(\Delta_{i}\) is of size at least \(R^{i-1}\), and that for sufficiently large \(i\in\mathbb{N}\), we have \(R^{i-1}\geq h_{i}\). We will work on short intervals of the form \(\left[\frac{a}{2^{R^{i-1}}},\frac{a+1}{2^{R^{i-1}}}\right]\subset[0,2^{-h_{i}}]\) for some small integer \(a\). 
Within such an interval, the function \(\sum_{j=0}^{d-1}(\sin(\pi 2^{j}mx))^{2}\) is essentially constant. More precisely, writing \[s_{a,m,i}:=\sum_{j=0}^{d-1}\left(\sin\left(\pi 2^{j}m\frac{a}{2^{R^{i-1}}} \right)\right)^{2},\] by considering derivatives, we obtain the Lipschitz estimate \[\left|\sum_{j=0}^{d-1}(\sin(\pi 2^{j}mx))^{2}-s_{a,m,i}\right|\leq\frac{2^{d+1} \pi m}{2^{R^{i-1}}}\quad\text{for all }x\in\bigg{[}\frac{a}{2^{R^{i-1}}},\frac{a+1}{2^{R^{i-1}}} \bigg{]}. \tag{42}\] Furthermore, since we assumed that \(\frac{a}{2^{R^{i-1}}}\in[0,2^{-h_{i}}]\), we have \[s_{a,m,i} \leq \sum_{j=0}^{d-1}\left(\sin\left(\pi 2^{j}m2^{-h_{i}}\right) \right)^{2} \tag{43}\] \[\leq \sum_{j=0}^{d-1}\left(\pi 2^{j}m2^{-h_{i}}\right)^{2}\] \[\leq \sum_{j=0}^{d-1}\left(\frac{\pi 2^{j}m}{10d2^{d}M(i)}\right)^{2}\] \[\leq \frac{\pi^{2}}{100d}\] \[\leq \frac{1}{10d},\] uniformly in \(a\) and \(m\). For \(i\in\mathbb{N}\), we set \[S_{a,i}:=\left(\sum_{m=1}^{M(i)}\sum_{k\in\Delta_{i}^{(m)}}s_{a,m,i}^{2} \right)^{1/2},\] and define \[\lambda_{k}:=\frac{s_{a,m,i}}{S_{a,i}},\qquad\text{for }k\in\Delta_{i}^{(m)}.\] Then, we clearly have \[\sum_{k\in\Delta_{i}}\lambda_{k}^{2}=1,\] and by (43) it holds that \[S_{a,i}\leq\frac{R^{i/2}}{10d}.\] Note that, using the estimates \(S_{a,i}\geq\sqrt{\#\Delta_{i}^{(m)}s_{a,m,i}^{2}}\) and \(\#\Delta_{i}^{(m)}\geq\frac{R^{i}}{\lceil i^{1-\varepsilon}\rceil}-1\) for all \(m\in\{1,\ldots,M(i)\}\) by (14), we have \[\max_{k\in\Delta_{i}}\lambda_{k}=\max_{1\leq m\leq M(i)}\frac{s_{a,m,i}}{S_{a,i}}\leq\max_{1\leq m\leq M(i)}\frac{s_{a,m,i}}{\sqrt{\#\Delta_{i}^{(m)}s_{a, m,i}^{2}}}\leq\sqrt{\frac{\lceil i^{1-\varepsilon}\rceil}{R^{i}-\lceil i^{1- \varepsilon}\rceil}}\leq R^{-i/3} \tag{44}\] for sufficiently large \(i\in\mathbb{N}\). 
Thus, by periodicity, using Lemma 5 with the weights \(\lambda_{k}\) as specified above, and using (44), we have \[2^{R^{i-1}}\lambda\left(\left\{x\in\left[\frac{a}{2^{R^{i-1}}}, \frac{a+1}{2^{R^{i-1}}}\right]\,:\,\left|\sum_{m=1}^{M(i)}\sum_{k\in\Delta_{i}^{ (m)}}s_{a,m,i}\cos(2\pi 2^{k}x)\right|>\sqrt{2R^{i}\log\log R^{i}}\right\}\right)\] \[= 2^{R^{i-1}}\lambda\left(\left\{x\in\left[\frac{a}{2^{R^{i-1}}}, \frac{a+1}{2^{R^{i-1}}}\right]\,:\,\left|\sum_{k\in\Delta_{i}}\lambda_{k}\cos( 2\pi 2^{k}x)\right|>S_{a,i}^{-1}\sqrt{2R^{i}\log\log R^{i}}\right\}\right)\] \[\leq 2^{R^{i-1}}\lambda\left(\left\{x\in\left[\frac{a}{2^{R^{i-1}}}, \frac{a+1}{2^{R^{i-1}}}\right]\,:\,\left|\sqrt{2}\sum_{k\in\Delta_{i}}\lambda_ {k}\cos(2\pi 2^{k}x)\right|>\sqrt{400d^{2}\log\log R^{i}}\right\}\right)\] \[\leq 1-\Phi(\sqrt{400d^{2}\log\log R^{i}})+cR^{-i/12}\] \[\leq i^{-2}\] uniformly in \(a\) for sufficiently large \(i\in\mathbb{N}\) (the last estimate is very coarse, but the point is that our estimate leads to a convergent series), where \(c>0\) is an absolute constant. Note that (42) implies \[\left|\sum_{m=1}^{M(i)}\sum_{j=0}^{d-1}(\sin(\pi 2^{j}mx))^{2}\sum_{k\in\Delta _{i}^{(m)}}\cos(2\pi 2^{k}x)-\sum_{m=1}^{M(i)}s_{a,m,i}\sum_{k\in\Delta_{i}^{(m)}} \cos(2\pi 2^{k}x)\right|\leq R^{i}\frac{2^{d+1}\pi m}{2^{R^{i-1}}}\leq 1\] for sufficiently large \(i\in\mathbb{N}\). After summing over all \(a\) such that \(\left[\frac{a}{2^{R^{i-1}}},\frac{a+1}{2^{R^{i-1}}}\right]\subset[0,2^{-h_{i}}]\), using the previous estimate and the triangle inequality, we finally arrive at \[\lambda\left(\left\{x\in\left[0,2^{-h_{i}}\right]\,:\,\left|\sum _{m=1}^{M(i)}\sum_{j=0}^{d-1}(\sin(\pi 2^{j}mx))^{2}\sum_{k\in\Delta_{i}^{(m)}} \cos(2\pi 2^{k}x)\right|>\sqrt{2R^{i}\log\log R^{i}}+1\right\}\right)\] \[\leq 2^{-h_{i}}i^{-2}\leq i^{-2}\] for sufficiently large \(i\in\mathbb{N}\), as a bound for the contribution of the term in line (38). 
An analogous argument for the contribution of the term in line (39) yields \[\lambda\left(\left\{x\in\left[0,2^{-h_{i}}\right]\,:\,\left|\sum_{m=1}^{M(i)} \sum_{j=0}^{d-1}\sin(2\pi 2^{j}mx)\sum_{k\in\Delta_{i}^{(m)}}\sin(2\pi 2^{k}x) \right|>\sqrt{2R^{i}\log\log R^{i}}+1\right\}\right)\leq\frac{1}{i^{2}},\] where the relevant point for the argument is that the function \(\sum_{j=0}^{d-1}\sin(2\pi 2^{j}mx)\) is also very small in the interval \([0,2^{-h_{i}}]\) (and where we use a variant of Gaposhkin's Lemma 5 for sine instead of cosine). Combining these two estimates with (36), (37) and (41), we obtain \[\lambda\left(\left\{x\in[0,1]\,:\,\left|\sum_{k\in\Delta_{i}}f(\nu_{k}x)\right|> \left(\frac{d\sqrt{\varepsilon}}{2}-2\right)\sqrt{2R^{i}\log\log R^{i}}-2d^{2}i -2\right\}\right) \geq \frac{1}{i^{1-\varepsilon/2}}-\frac{2}{i^{2}}\] for sufficiently large \(i\in\mathbb{N}\). By (32) and (33) this implies \[\lambda(A_{i})\geq i^{-1+\varepsilon/2}-2i^{-2}\] for all sufficiently large \(i\in\mathbb{N}\). Thus, we have established (31), which completes the proof of Theorem 1. ## Acknowledgments CA was supported by the Austrian Science Fund (FWF), projects I-4945, I-5554, P-34763, and P-35322. LF was supported by the Austrian Science Fund (FWF), projects P-32405 and P-35322. JP was supported by the German Research Foundation (DFG) under project 516672205 and by the Austrian Science Fund (FWF) under project P-32405.
2309.07947
TiBGL: Template-induced Brain Graph Learning for Functional Neuroimaging Analysis
In recent years, functional magnetic resonance imaging has emerged as a powerful tool for investigating the human brain's functional connectivity networks. Related studies demonstrate that functional connectivity networks in the human brain can help to improve the efficiency of diagnosing neurological disorders. However, there still exist two challenges that limit the progress of functional neuroimaging. Firstly, there exists an abundance of noise and redundant information in functional connectivity data, resulting in poor performance. Secondly, existing brain network models have tended to prioritize either classification performance or the interpretation of neuroscience findings behind the learned models. To deal with these challenges, this paper proposes a novel brain graph learning framework called Template-induced Brain Graph Learning (TiBGL), which has both discriminative and interpretable abilities. Motivated by the related medical findings on functional connectivities, TiBGL proposes template-induced brain graph learning to extract template brain graphs for all groups. The template graph can be regarded as an augmentation process on brain networks that removes noise information and highlights important connectivity patterns. To simultaneously support the tasks of discrimination and interpretation, TiBGL further develops template-induced convolutional neural network and template-induced brain interpretation analysis. Especially, the former fuses rich information from brain graphs and template brain graphs for brain disorder tasks, and the latter can provide insightful connectivity patterns related to brain disorders based on template brain graphs. Experimental results on three real-world datasets show that the proposed TiBGL can achieve superior performance compared with nine state-of-the-art methods and remains coherent with neuroscience findings in the recent literature.
Xiangzhu Meng, Wei Wei, Qiang Liu, Shu Wu, Liang Wang
2023-09-14T15:17:42Z
http://arxiv.org/abs/2309.07947v1
# TiBGL: Template-induced Brain Graph Learning for Functional Neuroimaging Analysis ###### Abstract In recent years, functional magnetic resonance imaging has emerged as a powerful tool for investigating the human brain's functional connectivity networks. Related studies demonstrate that functional connectivity networks in the human brain can help to improve the efficiency of diagnosing neurological disorders. However, there still exist two challenges that limit the progress of functional neuroimaging. Firstly, there exists an abundance of noise and redundant information in functional connectivity data, resulting in poor performance. Secondly, existing brain network models have tended to prioritize either classification performance or the interpretation of neuroscience findings behind the learned models. To deal with these challenges, this paper proposes a novel brain graph learning framework called Template-induced Brain Graph Learning (TiBGL), which has both discriminative and interpretable abilities. Motivated by the related medical findings on functional connectivities, TiBGL proposes template-induced brain graph learning to extract template brain graphs for all groups. The template graph can be regarded as an augmentation process on brain networks that removes noise information and highlights important connectivity patterns. To simultaneously support the tasks of discrimination and interpretation, TiBGL further develops template-induced convolutional neural network and template-induced brain interpretation analysis. Especially, the former fuses rich information from brain graphs and template brain graphs for brain disorder tasks, and the latter can provide insightful connectivity patterns related to brain disorders based on template brain graphs. 
Experimental results on three real-world datasets show that the proposed TiBGL can achieve superior performance compared with nine state-of-the-art methods and remains coherent with neuroscience findings in the recent literature. _Index Terms_--Functional MRI, Functional Connectivity, Template Learning, Contrast Subgraph, Brain Disease Diagnosis, Explanation Analysis. ## I Introduction With the widespread application of modern medical imaging technologies, magnetic resonance imaging (MRI) [1] technologies have proven to be a valuable tool for investigating neuroscience-related issues, particularly in the diagnosis of neurological disorders. In particular, functional magnetic resonance imaging (fMRI) [2, 3] is one of the non-invasive techniques to observe the temporal dynamics of the blood oxygen level dependency (BOLD) response. In recent years, fMRI data has been widely utilized to gain a better understanding of the functional activities and organization of the human brain. For instance, functional organization can usually be characterized by the synchronization of fMRI time series among brain regions [4]. Recent research [5, 6] suggests that functional connectivities among regions of interest (ROIs) within the brain play a crucial role in influencing behavior, cognition, and brain dysfunction. To analyze those meaningful connectivity patterns, machine learning-based works [7] have been widely utilized in functional neuroimaging scenarios, such as disease diagnosis [8, 9], individual demographic information (i.e., intelligence quotient and gender) [10, 11], cognitive ability [12, 13], etc. As the most representative machine learning technology, deep learning [14, 15, 16] has been widely investigated to learn spatial, temporal, and connective patterns of fMRI time series for diagnosing brain disorders.
For example, BrainNetCNN [17] leverages the topological locality of brain networks to predict cognitive and motor developmental outcome scores for infants born preterm; BrainGNN designs novel ROI-aware graph convolutional layers that leverage the topological and functional information of fMRI. With wide applications of transformers [18, 19], there also exist several transformer-based works for the human brain, such as BRAINNETTF [20], which leverages the unique properties of brain network data to maximize the power of transformer-based models for brain network analysis. Notably, how to interpret the neuroscience findings behind learnt models is as important as classification performance for functional neuroimaging analysis. Due to the black box issue of deep learning methods, these works usually cannot provide explicit interpretation to understand why a certain prediction is made. To address this issue, these works [21, 22, 23, 24] propose neurological biomarkers to understand the pathological mechanism of brain disorders. For example, the work [24] proposes a globally shared explanation generator to highlight disorder-specific biomarkers including salient ROIs and important connections. Such biomarkers heavily depend on the trained deep models, which limits their robustness. Moreover, when facing limited brain network data with noise and redundant information, how to directly train a stable deep model remains uncertain and challenging. Different from deep learning methods, traditional machine learning methods are more suitable for the case of limited medical data, so brain network models based on those technologies have been widely investigated for brain diagnosis in recent years.
Specifically, to diagnose brain disease well, the neuroimaging data of the human brain can first be preprocessed to generate digitized representations, such as connectivity networks among brain regions of interest (ROIs), and then traditional machine learning [7] methods can be used to build the corresponding diagnosis model. Among these works, shallow learning-based methods first learn the embedding of the human brain, and then use a classical classifier [25] to partition the learnt data into corresponding groups. Moreover, graph-based works [26, 27] attempt to model the brain as a graph, making it possible to tackle interesting neuroscience research questions by investigating the topological structure of brain networks. To provide interpretable diagnosis results, there exist interpretable works based on sparsity theory and subgraph methods. More specifically, sparsity-based works [28, 29, 30] have been widely used in the diagnosis of brain disease by selecting informative features to provide interpretability for disease diagnosis. Similar to sparsity-based theory, subgraph-based works [31, 32, 33] search for sets of brain ROI nodes to discover patterns in the corresponding connectomes and explain the pathology of brain disease. Most existing methods focus only on influential ROI nodes and might neglect important connectivity relationships in the whole brain network, which prevents them from providing more precise explainable results. ### _Motivations_ Even though the above existing methods have obtained promising performance for brain disorder diagnosis in certain situations, two important aspects have yet to be investigated comprehensively. 1. Due to the expensive cost and environmental diversity of data acquisition, limited brain data with accurate labels and clean features might lead to erratic results from deep learning methods.
Meanwhile, without guided knowledge of neurological disorders, brain graph models might be further disturbed by the noise and redundant information in original brain networks. 2. For the issue of brain disease diagnosis, learning interpretable models is as important as mere classification performance. However, most existing brain network models either focus on classification performance or on interpreting the neuroscience findings behind learnt models. Thus, how to provide a robust brain graph model with both discriminative and interpretable abilities is both necessary and meaningful. ### _Contributions_ This paper proposes a novel multi-stage functional neuroimaging analysis model with both discrimination and interpretation, called Template-induced Brain Graph Learning (TiBGL). To address the instability of deep models on limited brain networks, TiBGL proposes template-based brain graph learning to capture template graphs from two aspects, intra-group and inter-group, motivated by medical research findings [34, 35]. The learned template graphs not only highlight those important connectivity patterns in each group but also remove the noise and redundant information in brain networks. Then, a template-induced convolutional neural network is developed to fuse rich information from brain networks and learned template graphs for brain disorder diagnosis, which can generate better subject-level brain networks. Furthermore, TiBGL utilizes template graphs as augmented group-level brain networks to explore those meaningful connectivity patterns related to brain disorders, providing an insightful brain interpretation analysis. We evaluate the effectiveness of the proposed TiBGL on the diagnosis of two neurological disorders, Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD), and on one brain classification task, gender identification. The major contributions of this paper are summarized as follows: 1.
TiBGL proposes template brain graph learning based on recent neuroscience findings to extract meaningful template graphs that enhance the representative capabilities of group-level and subject-level brain networks. 2. TiBGL utilizes template brain graphs as an augmentation means for brain networks, and proposes a template-induced convolutional network and brain interpretation analysis for achieving discriminative and explainable disease diagnosis tasks. 3. The experimental results on three tasks of brain network classification demonstrate that the proposed method outperforms its counterparts and provides meaningful insights for neurological disorders. ### _Organization_ The remainder of this paper is organized as follows: in Section II, we briefly review related works on brain disorders, such as diagnosis models and interpretation analysis; in Section III, we describe the details of the proposed TiBGL and its optimization; in Section IV, we conduct extensive experiments on three datasets to evaluate the effectiveness and robustness of our proposed TiBGL; in Section V, we conclude this paper. ## II Related Works Over the past decades, brain graph models based on functional neuroimaging analysis have been widely studied. Based on their application targets, we roughly divide the existing methods into two categories, including diagnosis models and interpretation analysis for brain disorders. ### _Diagnosis Models for Brain Disorders_ Traditional machine learning [7] methods have been widely used to build the corresponding diagnosis model. Among these works, shallow learning-based methods usually follow two stages: brain networks are first processed to generate embeddings of the human brain, and then a classical classifier is used to partition the learnt data into corresponding groups. Additionally, due to the high dimensionality of fMRI data, dimensionality reduction techniques should be used to reduce the dimensionality of brain features.
For instance, graph-based works [26, 27] attempt to model the brain as a graph, making it possible to tackle interesting neuroscience research questions by investigating the topological structure of brain networks. For these methods, if brain features from the first stage are not reliable, significant errors can be induced in the second stage. Apart from the above traditional works, deep learning methods for brain research are mainly based on Convolutional Neural Networks (CNN) [14, 36], Recurrent Neural Networks (RNN) [16] and Graph Neural Networks (GNN) [15], to learn spatial, temporal and connective patterns of fMRI time series for the diagnosis of brain disorders. For instance, BrainNetCNN [17] leverages the topological locality of brain networks to predict cognitive and motor developmental outcome scores for infants born preterm; BrainGNN [22] designs novel ROI-aware graph convolutional layers that leverage the topological and functional information of fMRI. Moreover, transformers [18, 19] have been studied over different types of data, and there also exist several transformer-based works for the human brain, such as BRAINNETTF [20], which leverages the unique properties of brain network data to maximize the power of transformer-based models for brain network analysis. Unfortunately, limited brain data might result in erratic results from the above diagnosis models due to the expensive cost of data acquisition as well as noise in the original brain networks. ### _Interpretation Analysis for Brain Disorders_ In most applications concerning brain disorders, understanding the general pattern or mechanism associated with a cognitive task or disease is essential. Group-level neural findings usually highlight consistent explanations across subjects [37, 38, 39], such as key ROIs and their connectivity. For example, class activation mapping has been used to identify salient brain regions [40], and to visualize effective features by gradient sensitivity [41].
Neurological biomarker-based works [21, 22, 23] can also be used to provide interpretability for group-level differences. Besides, personalized treatments [42, 43] for outcome prediction or disease sub-type detection require learning individual-level biomarkers to achieve the best predictive performance. However, the above biomarkers heavily depend on the performance of trained deep models. When facing limited brain network data with noise and redundant information, it is difficult to directly train a stable deep model with strong predictive ability. Different from biomarker-based works, sparsity-based works [28, 29, 30] have been widely used in the diagnosis of brain disease by selecting informative features to provide interpretability for disease diagnosis. Similar to sparsity-based theory, subgraph-based works [31, 32, 33] search for sets of brain ROI nodes to discover patterns in the corresponding connectomes and explain the pathology of brain disease. For example, the work [32] proposes a novel approach for classifying brain networks based on extracting contrast subgraphs, i.e., a set of vertices whose induced subgraphs are dense in one class of graphs and sparse in the other, and shows that the discovered patterns match background knowledge in the neuroscience literature. However, most existing methods focus only on influential ROI nodes and might neglect important connectivity relationships in the whole brain network, which prevents them from providing more precise explainable results. ## III The Proposed Method In this section, we first introduce the data processing of brain networks and important notations in this paper. Then, we propose the template brain graph learning to extract meaningful template graphs for brain classification and explanation analysis. Subsequently, we propose the template-induced convolutional neural network for classification tasks.
Finally, we propose template-induced brain interpretation analysis to identify connection patterns that might be associated with specific brain disorders. ### _Data Processing & Notations_ Consider one human brain neuroimaging dataset consisting of \(K\) resting-state fMRI scans from \(\mathcal{C}\) groups. For each resting-state fMRI scan (as shown in Fig. 1(a)), we first utilize the standard preprocessing procedure of the work [44] to preprocess the original data, including slice-timing correction, realignment, co-registration, normalization and smoothing. Then, we utilize an atlas with a template image to parcellate all resting-state fMRI scans and extract \(M\) mean time series of the BOLD signal, where \(M\) denotes the number of brain regions. As shown in Fig. 1(b), we visualize each brain ROI with one specific color. Finally, we construct the functional connectivity (FC) matrix \(\mathbf{W}_{k}\) as the brain graph for the \(k\)th subject by calculating the Pearson coefficients between ROIs, as shown in Fig. 1(c).
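As a concrete illustration, the FC-matrix construction described above can be sketched in a few lines of NumPy; the function name and the random stand-in time series are ours (not the paper's code), and zeroing the diagonal is a common convention we assume here:

```python
import numpy as np

def functional_connectivity(ts: np.ndarray) -> np.ndarray:
    """Build the FC matrix W_k from ROI mean time series.

    ts: array of shape (M, T) -- M ROI time series of length T.
    Returns an (M, M) matrix of Pearson coefficients between ROIs,
    with self-connections zeroed out (an assumption, not from the paper).
    """
    w = np.corrcoef(ts)        # pairwise Pearson correlation between rows
    np.fill_diagonal(w, 0.0)   # discard trivial self-connections
    return w

# Toy usage: random data stands in for preprocessed BOLD signals.
rng = np.random.default_rng(0)
ts = rng.standard_normal((90, 200))   # e.g. M = 90 ROIs, T = 200 volumes
W = functional_connectivity(ts)
```

The resulting `W` is symmetric with entries in \([-1, 1]\), matching the usual properties of a Pearson-based brain graph.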
The third part is on template-induced brain interpretation analysis for disorder diagnosis, which employs a contrastive strategy to search for the subgraph that reflects the functional difference between groups. ### _Template Brain Graph Learning_ In this section, we attempt to extract the meaningful template graph \(\{\mathbf{G}_{c},1\leq c\leq\mathcal{C}\}\) for all groups to augment instance-level and group-level brain networks for diagnosing and explaining neurological disorders. Inspired by two research findings [34, 35], we decide to exploit template graphs from two aspects, intra-group and inter-group. One finding [35] is that there are consistent patterns in functional connectivity among individuals from the same group. The other finding [34] is that there are edge-level differences in functional brain connectivity matrices among different groups. Therefore, we propose intra-group and inter-group loss functions to extract the template graph \(\mathbf{G}_{c}\), as shown in Fig. 2(a). **Intra-group Template Brain Graph Learning.** Inspired by the first finding, we can regard the template graph \(\mathbf{G}_{c}\) as the latent consistent pattern in the \(c\)th group. For this reason, we need to minimize the difference between the template graph and the brain graph matrices within the same group. At the same time, we consider that subjects might play roles of different importance in learning the template graph \(\mathbf{G}_{c}\). Inspired by the adaptive weighting strategy [45, 46, 47], we can adaptively allocate a suitable weight for each subject in learning template graphs. To this end, the above considerations can be formulated as \[\mathbf{L}_{intra}=\sum_{c=1}^{\mathcal{C}}\sum_{k\in\mathcal{I}_{c}}\mathbf{\alpha} (c,k)\left\|\mathbf{G}_{c}-\mathbf{W}_{k}\right\|_{F}^{2}, \tag{1}\] where \(\mathcal{I}_{c}\) denotes the set of indexes of samples in the \(c\)th group.
\(\mathbf{\alpha}(c,k)=1/\|\mathbf{G}_{c}-\mathbf{W}_{k}\|_{F}^{\frac{1}{2}}\) is the adaptively allocated weight for the \(k\)th subject in learning the template \(\mathbf{G}_{c}\), following the definition in the work [45]. Differing from considering equal importance for all subjects, we can mine more refined information. **Inter-group Template Brain Graph Learning.** To increase the discriminative ability of template graphs, we attempt to maintain a large (finite) distance between elements in different template graphs. Inspired by the second finding, we attempt to maintain a margin of safety around the edge-level boundaries. That is to say, we require any two template graphs \(\mathbf{G}_{c_{1}}\) and \(\mathbf{G}_{c_{2}}\) to satisfy the following inequality condition \[|\mathbf{G}_{c_{1}}(i,j)-\mathbf{G}_{c_{2}}(i,j)|\geq\gamma, \tag{2}\] where \(\gamma\geq 0\) is a user-defined parameter. In other words, each pair of template graphs is required to keep an edge-wise margin of at least \(\gamma\). Combining the above equation with the margin strategy [48], we propose the inter-group loss function that enlarges the discrepancy between different templates, which can be formulated as \[\mathbf{L}_{inter}=\sum_{c_{1}\neq c_{2}}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[\left|\mathbf{G}_{c_{1}}(i,j)-\mathbf{G}_{c_{2}}(i,j)\right|-\gamma\right]_{+}, \tag{3}\] where the term \(\left[z\right]_{+}=\max(z,0)\) denotes the standard hinge function. Fig. 1: Data processing for fMRI image.
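The two loss terms above can be sketched numerically as follows; this is an illustrative NumPy transcription of Eq. (1) (with the adaptive weights \(\alpha(c,k)\)) and of the hinge in Eq. (3), not the authors' implementation, and the function names are ours:

```python
import numpy as np

def intra_group_loss(G, Ws):
    """Eq. (1): adaptively weighted Frobenius distance between a group
    template G and each subject brain graph W_k in that group,
    with alpha(c, k) = 1 / ||G - W_k||_F^(1/2)."""
    loss = 0.0
    for W in Ws:
        d = np.linalg.norm(G - W, "fro")
        alpha = 1.0 / np.sqrt(d) if d > 0 else 0.0  # adaptive weight
        loss += alpha * d**2
    return loss

def inter_group_loss(G1, G2, gamma):
    """Eq. (3) for one template pair: element-wise hinge
    [ |G1(i,j) - G2(i,j)| - gamma ]_+ summed over all edges."""
    return float(np.maximum(np.abs(G1 - G2) - gamma, 0.0).sum())
```

For instance, two \(2\times 2\) templates of all ones and all zeros with \(\gamma=0.5\) give an inter-group value of \(4\times 0.5=2\).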
**Overall framework.** We combine the aforementioned two components as well as a sparsity normalization term into one framework, and the final objective function can be formed as follows: \[\begin{split}\mathbf{L}_{Group}=&\underbrace{\sum_{c=1}^{\mathcal{C}}\sum_{k\in\mathcal{I}_{c}}\mathbf{\alpha}(c,k)\left\|\mathbf{G}_{c}-\mathbf{W}_{k}\right\|_{F}^{2}}_{Intra-group\ Loss}+&\lambda_{1}\underbrace{\sum_{c=1}^{\mathcal{C}}\left|\mathbf{G}_{c}\right|_{1}}_{Sparsity\ Term}\\ +&\lambda_{2}\underbrace{\sum_{c_{1}\neq c_{2}}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[\left|\mathbf{G}_{c_{1}}(i,j)-\mathbf{G}_{c_{2}}(i,j)\right|-\gamma\right]_{+}}_{Inter-group\ Loss},\end{split} \tag{4}\] where \(\lambda_{1}\) and \(\lambda_{2}\) are two hyper-parameters that balance the above loss terms. Differing from those globally shared mask-based methods [24, 49, 50], the learned template graphs \(\mathbf{G}_{c}\) can promote consistent information within intra-group networks and edge-level differences between inter-group brain networks. In this way, we can utilize these template graphs as the basic component to help the following tasks of brain disorder diagnosis as well as its interpretation analysis. **Optimization.** With the alternating optimization strategy, we can solve the variables in Eq. (4) in turn. To be specific, with all template graphs but \(\mathbf{G}_{c}\) fixed, we obtain the following optimization problem for the template graph \(\mathbf{G}_{c}\): \[\begin{split}\mathbf{L}_{Group}=&\sum_{k\in\mathcal{I}_{c}}\mathbf{\alpha}(c,k)\left\|\mathbf{G}_{c}-\mathbf{W}_{k}\right\|_{F}^{2}+\lambda_{1}\left|\mathbf{G}_{c}\right|_{1}\\ +&\lambda_{2}\sum_{c_{1}\neq c}\sum_{i=1}^{M}\sum_{j=1}^{M}\left[\left|\mathbf{G}_{c}(i,j)-\mathbf{G}_{c_{1}}(i,j)\right|-\gamma\right]_{+}.\end{split} \tag{5}\] It is easy to see that Eq. (5) can be viewed as one typical form of the least absolute shrinkage and selection operator (LASSO) [51] model.
Referring to the works related to LASSO, the LARS [52] algorithm can be employed to solve the template graph \(\mathbf{G}_{c}\) in Eq. (5). In this way, we can get the whole optimization process for Eq. (4). First, we use the group-wise mean values to initialize all template graphs. Then, we iteratively solve Eq. (5) for each \(\mathbf{G}_{c}\) until all variables converge. The whole procedure for solving Eq. (4) is summarized in **Algorithm 1**. ### _Template-induced Convolutional Neural Network_ Motivated by the observation that certain types of inductive biases on spatial relations can be beneficial to classification tasks, we propose a novel template-induced convolutional neural network. As shown in Fig. 2(b), it mainly adopts the three following blocks to implement the diagnosis of brain disorders. **Input Block.** Given a brain graph \(\mathbf{W}_{k}\), we construct multiple source representations for the brain graph \(\mathbf{W}_{k}\) using the learnt template graphs. To be specific, we sum all group-level template graphs into the global template graph \(\mathbf{G}\), and then fuse the global template graph \(\mathbf{G}\) and the brain network \(\mathbf{W}_{k}\) in an element-wise manner, i.e., \(\mathbf{W}_{k}\odot\mathbf{G}\). The above operator can be seen as data augmentation based on group-level template graphs for brain networks. Compared to original brain networks, augmented brain graphs pay more attention to important connectivities by removing irrelevant and noisy information, and thus have more discriminative ability. Fig. 2: Flowchart of the TiBGL framework, consisting of the three following parts. The first part is on template brain graph learning, which extracts the meaningful template graph from all groups based on intra-group and inter-group loss functions.
The second part is the template-induced convolutional neural network, which classifies brain networks augmented with template graphs into the correct group and consists of an input block, an encoder block, and an output block. The third part is template-induced brain interpretation analysis for disorder diagnosis.

**Encoder Block.** We design the encoder block to extract features of brain graphs after data augmentation. To be specific, we design a convolutional neural network that mainly contains three layer types: an edge-to-edge filter layer, an edge-to-node pooling (E2N Pool) layer, and a node-to-graph pooling (N2G Pool) layer. The edge-to-edge filter layer consists of one or more simple convolutional filters of a particular shape and performs its specific operation on the brain graph. The E2N and N2G Pool layers are adopted to downsample the data from edges and nodes to obtain the graph-level representation, following the work [17]. The encoder takes feature maps from the input block as input and then outputs a distinct feature map \(\mathbf{f}_{\mathbf{H}}(\mathbf{W}_{k}\odot\mathbf{G})\) for the output block.

**Output Block & Loss Function.** After the processing of the encoder block, we obtain its output as the final hidden feature. Finally, we use a one-layer linear network \(\mathbf{f}_{O}\) as the output layer. According to the above considerations, the output of the whole network is: \[\hat{\mathbf{y}}_{k}=\mathbf{f}_{O}(\mathbf{f}_{\mathbf{H}}(\mathbf{W}_{k}\odot\mathbf{G})), \tag{6}\] where \(\hat{\mathbf{y}}_{k}\) denotes the predicted group of the \(k\)th subject. For the classification tasks, we employ cross entropy as the loss function to train the whole neural network, and we utilize the stochastic gradient descent (SGD) method to update all weight parameters. In this way, we combine template graphs with a convolutional neural network into an end-to-end framework for brain disorder diagnosis.
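The data flow through the three blocks can be sketched as below. The encoder is simplified to a mean-based E2N-style pooling (a stand-in for the learned convolutional filters, not the paper's exact layers), while the input block (\(\mathbf{W}_{k}\odot\mathbf{G}\)) and the one-layer output network of Eq. (6) follow the text; all function names are ours.

```python
import numpy as np

def tibgl_forward(W_k, templates, w_out, b_out):
    """Illustrative forward pass of the template-induced network.

    Input block : fuse W_k with the global template G = sum_c G_c  (W_k ⊙ G).
    Encoder     : mean-based E2N-style pooling as a stand-in for the
                  learned edge-to-edge / pooling layers (a simplification).
    Output block: one-layer linear network f_O, as in Eq. (6).

    W_k       : (M, M) individual brain graph
    templates : list of (M, M) group-level template graphs G_c
    w_out     : (C, M) output-layer weights, b_out : (C,) bias
    """
    G = np.sum(templates, axis=0)        # global template graph
    A = W_k * G                          # augmented graph W_k ⊙ G
    node_feats = A.mean(axis=1)          # E2N-style pooling: edges -> nodes
    logits = w_out @ node_feats + b_out  # predicted group scores
    return logits
```

In the actual model the hidden representation is produced by trained convolutional layers, but the block structure and the augmentation step are as shown.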
Compared to other deep learning works, the main advantage is that the prior information in the template graphs is beneficial for end-to-end training with limited subjects, leading to potential performance improvements.

### _Template-induced Brain Interpretation Analysis_

The mere predictive power of the proposed TiBGL is of limited interest to neuroscientists, for whom there already exist plenty of tools for the diagnosis of specific mental disorders. What matters is the interpretation of TiBGL, as it can provide novel insights and hypotheses. However, most existing biomarker-based brain network works simply record the top \(K\) connectivities to identify connectivity patterns that highlight important ROIs and connections. Obviously, there exists a gap between such an artificial construction and the related finding that individuals in disease groups often exhibit weakened functional connectivity in neural networks. To overcome this limitation, we employ the learned template graphs as group-level brain networks and combine them with this finding to identify connectivity patterns that might be associated with specific brain disorders. To be specific, motivated by frequent subgraph mining tasks [53, 54], we explore a subgraph \(\mathbf{E}\) as the connectivity pattern for the explanation analysis of brain disorders. We use an \(M\)-dimensional vector \(\mathbf{e}\) to denote the node set \(\mathbf{V}\) of subgraph \(\mathbf{E}\): if the \(i\)th ROI node is in the node set \(\mathbf{V}\), \(\mathbf{e}_{i}=1\); otherwise, \(\mathbf{e}_{i}=0\). Let \(\mathbf{E}^{\prime}=\mathbf{e}\mathbf{e}^{T}\), which can be seen as a fully connected graph over the node set \(\mathbf{V}\). To significantly distinguish the difference between two groups of brain graphs, we expect the graph \(\mathbf{E}^{\prime}\) to be close to one template graph \(\mathbf{G}_{c_{1}}\) but far away from the other template graph \(\mathbf{G}_{c_{2}}\).
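A toy numerical sketch of this node search follows; the objective encodes "close to \(\mathbf{G}_{c_1}\), far from \(\mathbf{G}_{c_2}\)" (it is the form obtained by expanding the squared-norm criterion formalized below), and the greedy bit-flip solver is our illustrative stand-in for the SDP-plus-local-search procedure the paper actually uses.

```python
import numpy as np

def subgraph_objective(e, D, eta):
    """Objective for a binary node indicator e with E' = e e^T:
    -tr(D E') + eta * ||E'||_1, where D = G_c1 - G_c2.
    Minimizing it drives E' towards G_c1 and away from G_c2;
    for binary e, tr(D e e^T) = e^T D e and ||e e^T||_1 = (sum e)^2."""
    return -e @ D @ e + eta * e.sum() ** 2

def local_search(D, eta, n_sweeps=50, seed=0):
    """Toy bit-flip local search over binary node indicators; the paper
    instead relaxes the problem to an SDP and refines with local search."""
    rng = np.random.default_rng(seed)
    e = rng.integers(0, 2, size=D.shape[0]).astype(float)
    best = subgraph_objective(e, D, eta)
    for _ in range(n_sweeps):
        improved = False
        for i in range(len(e)):
            e[i] = 1.0 - e[i]                  # try flipping node i
            val = subgraph_objective(e, D, eta)
            if val < best:
                best, improved = val, True
            else:
                e[i] = 1.0 - e[i]              # revert the flip
        if not improved:
            break
    return e, best
```

Greedy flipping can stall in local minima, which is precisely why a convex SDP relaxation is preferred for the real problem.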
Based on the observation of weakened connectivity in functional domains of patients, we can formulate the following problem to search for the node set of subgraph \(\mathbf{E}\): \[\begin{split}\min&\ \frac{1}{2}\|\mathbf{G}_{c_{1}}-\mathbf{E}^{\prime}\|_{F}^{2}-\frac{1}{2}\|\mathbf{G}_{c_{2}}-\mathbf{E}^{\prime}\|_{F}^{2}+\eta\|\mathbf{E}^{\prime}\|_{1}\\ s.t.&\ \mathbf{e}_{i}\in\{0,1\},\ 1\leq i\leq M,\end{split} \tag{7}\] where the third term is a regularization term penalizing dense solutions, and \(\eta>0\) is a hyper-parameter. For convenience, Eq. (7) can be rewritten as the following problem by expanding the squared norms and neglecting the constant terms: \[\begin{split}\min&\ -tr\big((\mathbf{G}_{c_{1}}-\mathbf{G}_{c_{2}})\mathbf{E}^{\prime}\big)+\eta\|\mathbf{E}^{\prime}\|_{1}\\ s.t.&\ \mathbf{e}_{i}\in\{0,1\},\ 1\leq i\leq M.\end{split} \tag{8}\] Notably, minimizing the first term is equivalent to maximizing the similarity between the fully connected graph \(\mathbf{E}^{\prime}\) and the difference \(\mathbf{G}_{c_{1}}-\mathbf{G}_{c_{2}}\) between the two template graphs. Once we obtain the node set \(\mathbf{e}\) by solving the above problem, we need to determine whether relations exist between the selected ROIs. We select the edges between nodes based on the subtracted template graph, i.e., \(|\mathbf{G}_{c_{1}}-\mathbf{G}_{c_{2}}|\). Mathematically, if the \(i\)th node and \(j\)th node in \(\mathbf{V}\) are connected with each other in the subtracted template graph \(|\mathbf{G}_{c_{1}}-\mathbf{G}_{c_{2}}|\), then \(\mathbf{E}(i,j)=1\); otherwise, \(\mathbf{E}(i,j)=0\). Accordingly, we can generate a meaningful subgraph that explicitly explains the dissimilarity between different groups from the aspect of functional connectivity among brain ROIs, as shown in Fig. 2(c). Notably, Eq. (8) is a non-deterministic polynomial-time hard (NP-hard) problem. To effectively solve for the node set \(\mathbf{e}\), we can transform Eq.
(8) into a semidefinite programming (SDP) problem and then refine the solution with a local-search procedure, referring to the works [32, 55]. Compared with other related subgraph works [56, 57], which are NP-hard and hard to approximate, this approximation has been proven to be much better. More importantly, the subgraph \(\mathbf{E}\), guided by prior knowledge, may mine more reasonable and refined neuroscience findings from clinical human brain data. To this end, the proposed TiBGL can not only provide a robust brain network identification model but also explore new neuroscience findings in clinical human brain data.

## IV Experiments and Analysis

In this section, we conduct extensive experiments to comprehensively validate the effectiveness and interpretability of the proposed TiBGL. Our goal is to answer the following questions: * RQ1. Can TiBGL perform better than other counterparts in brain classification? * RQ2. Can TiBGL provide meaningfully interpretable findings for neurological disorders? * RQ3. Why does TiBGL work in the diagnosis of neurological disorders? * RQ4. What are the advantages of the proposed TiBGL?

### _Datasets and Preprocessing_

**Datasets.** To fully validate the identification performance of our proposed framework, extensive experiments are performed on three brain graph classification tasks, including two types of neurological disorder diagnosis and gender classification. More specifically, we collect brain graph data from two publicly available datasets and one self-organized dataset, introduced as follows: * **Autism Brain Imaging Data Exchange (ABIDE)**1 provides previously collected rs-fMRI ASD and matched-control data for the purpose of data sharing in the scientific community. ABIDE contains 1112 subjects, comprising structural and resting-state fMRI data along with the corresponding phenotypic information.
In particular, we focus on the portion of the dataset containing adolescents, i.e., individuals whose ages are between 15 and 20 years. Thus, 116 subjects are assigned to the conditional group, labeled as ASD, and 121 subjects are assigned to the control group, labeled as TD. Footnote 1: [http://preprocessed-connectomes-project.org/abide/download.html](http://preprocessed-connectomes-project.org/abide/download.html) * **Attention Deficit Hyperactivity Disorder (ADHD)** is taken from the USC Multimodal Connectivity Database (USCD)2, which collects rs-fMRI ADHD and matched-control data from multiple sites. It is an unrestricted public release of resting-state fMRI and anatomical datasets. In particular, we select 520 subjects, which are usually used to evaluate brain graph models. Here, we select 190 subjects in the condition group, labeled as ADHD, and 330 subjects in the control group, labeled as TD. Footnote 2: [http://umcd.humanconnectemproject.org](http://umcd.humanconnectemproject.org) * **China-Japan Friendship Hospital (CJFH)** collects rs-fMRI data to explore the relationship between brain functional connectivity and the Big Five personality traits; it is a self-organized human brain dataset. CJFH contains the rs-fMRI data of 346 individuals. Here, we focus on the task of gender classification. To be specific, there are 171 male individuals, labeled as Male, and 175 female individuals, labeled as Female. **Preprocessing.** For the ABIDE dataset, we downloaded the preprocessed rs-fMRI series data from the preprocessed ABIDE dataset with the Configurable Pipeline for the Analysis of Connectomes (CPAC), band-pass filtering (0.01 - 0.1 Hz), no global signal regression, parcellating each brain into 116 ROIs with the Automated Anatomical Labeling (AAL) atlas. For the ADHD and CJFH datasets, we utilize the standard preprocessing procedure to process the original fMRI data, following the work [44].
Then, Craddock 200 (CC200) is used to extract the brain ROIs for each subject in the ADHD dataset, parcellating each brain into 200 ROIs. The AAL atlas is employed to extract the time series of ROIs for each subject in the CJFH dataset, and we empirically neglect the brain ROIs in the cerebellum for this dataset, parcellating each brain into 90 ROIs. For the above datasets, we use the mean time series of the ROIs to compute the correlation matrix as the functional connectivity, i.e., the brain graph, which provides an indicator of co-activation levels among brain regions.

### _Compared Methods and Experimental Settings_

**Compared Methods.** We evaluate the effectiveness of our framework in classifying brain graphs by comparing it with related methods. Three types of methods are chosen to compare their performance with our proposed framework. The first type are traditional machine learning methods, which first apply one of three feature transforms, namely raw features (Raw), Principal Component Analysis (PCA) [58], and Lasso [59], and then use a Support Vector Machine (SVM) as the classifier. The second type are methods based on graph structure information, including Graph2Vec [60] and Sub2Vec [61]. These works aim to transform the brain graph into a low-dimensional embedding based on the graph structure information. The third type are deep learning methods, including Graph Convolutional Networks (GCN) [62], BrainNetCNN [17], DIFFPOOL [63], and BrainGNN [22]. **Experimental Settings.** For the first type of methods, we keep the upper triangle of the matrices, flatten the triangle values into vectors, and use them as the input features. For the second type of methods, we additionally construct a binary graph from each brain graph by threshold filtering and then use it as the input.
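The two preprocessing steps just described — forming the brain graph from ROI mean time series, and flattening its upper triangle into the feature vector used by the traditional baselines — can be sketched as follows (function names are ours):

```python
import numpy as np

def functional_connectivity(ts):
    """Pearson correlation matrix of ROI mean time series.

    ts : (M, T) array of M ROI time series over T time points.
    Returns the (M, M) functional connectivity matrix, i.e. the brain graph.
    """
    return np.corrcoef(ts)

def upper_triangle_features(W):
    """Flatten the strict upper triangle of a brain graph into the
    feature vector fed to the traditional machine-learning baselines."""
    i, j = np.triu_indices(W.shape[0], k=1)
    return W[i, j]
```

For \(M\) ROIs this yields \(M(M-1)/2\) features per subject, which explains why dimensionality-reduction baselines such as PCA and Lasso are natural comparisons.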
For the third type of methods, we select the original brain graph as input for BrainNetCNN and adjust suitable node features and graph structures for the other methods to achieve their best performance. For all datasets, we randomly select 70% of the samples as training samples, 10% of the samples as validation samples, and the remaining 20% of the samples as testing samples at each iteration. We repeat the above validation process ten times for all methods and use the classification accuracy as the evaluation index. Finally, we summarize all experimental results to comprehensively evaluate all methods.

### _Comparison Results and Analysis (Q1)_

To validate the superior performance of the proposed TiBGL, we have conducted extensive experiments on the ABIDE, ADHD200, and CJFH datasets. We run all methods on the above three datasets in the same environment and then summarize all experimental validation results. To be specific, we calculate the classification accuracy (ACC) and the area under the curve (AUC) as the final evaluation indexes. As shown in Table I, we summarize the mean evaluation indexes as well as their standard deviations. For the ABIDE dataset, the proposed TiBGL achieves the best performance, with an ACC of 71.55% and an AUC of 74.96%. Among the methods evaluated on the ABIDE dataset, only TiBGL exceeds 70% on both ACC and AUC. Besides, BrainNetCNN also obtains promising performance compared to other methods, as it can discover important connectivity patterns for ASD. However, GCN and DIFFPOOL have relatively poor performance; the main reason is that graph neural networks should be further adapted for brain networks, as in BrainGNN. For the ADHD dataset, the proposed TiBGL also achieves the best performance, with an AUC of more than 80%. Compared to traditional machine learning methods, deep learning-based methods show superior performance in most situations.
For example, GCN, BrainNetCNN, and BrainGNN all exceed 70% AUC. Besides, Sub2Vec also obtains comparable performance, with an ACC of 65.51% and an AUC of 70.78% on the ADHD dataset. The CJFH task focuses on gender classification from brain networks, which is commonly used to validate the performance of brain graph models. Even though the proposed TiBGL obtains the best performance, deep learning-based methods perform worse than traditional machine learning methods on this dataset. For example, apart from TiBGL, Lasso achieves the best performance, with an ACC of 76.25% and an AUC of 80.68%. However, DIFFPOOL obtains only an ACC of 57.31% and an AUC of 63.56%, far below Lasso. According to the results in Table I and the above discussion, we can readily find that the proposed TiBGL outperforms the other methods in most situations. The main reason is that the template brain graphs provide guided knowledge that induces the classification model to pay more attention to the important brain ROIs as well as their connectivity relationships. Especially for the diagnosis of neurological disorders, there exists much noise and redundant information in the original brain graphs, which is a key factor limiting the performance of diagnosis models. Taking ASD and ADHD as examples, it is easy to observe that both traditional and deep learning methods, such as BrainGNN and GCN, cannot obtain satisfactory results on the ABIDE and ADHD200 datasets. Comparing the results on the neurological disorder tasks with the gender classification task, we also find that the performance of these methods on the CJFH dataset is superior to that on the ABIDE and ADHD200 datasets, which further illustrates the challenge of diagnosing neurological disorders. Therefore, it is necessary to first extract prior knowledge for the subsequent diagnosis of neurological disorders.
### _Interpretation Analysis for Neurological Disorders (Q2)_

The accuracy of brain graph models is often a predominant goal over their interpretability, which is instead a key requirement in neuroscience. To examine whether TiBGL can utilize template brain graphs to discover meaningfully interpretable findings for neurological disorders, we conduct the related validation on the ABIDE dataset. First, we extract the template brain graphs \(\mathbf{G}_{ASD}\) and \(\mathbf{G}_{TD}\) for the ASD and TD groups. In a later section, we additionally visualize their heatmaps in Fig. 6. Then, we obtain the critical brain ROIs related to ASD by solving Eq. (8). To be specific, we summarize the brain ROIs as a list [ _Rolandic_Oper_L_, _Rolandic_Oper_R_, _Insula_L_, _Insula_R_, _Cingulum_Mid_L_, _Hippocampus_L_, _Postcentral_R_, _SupraMarginal_L_, _SupraMarginal_R_, _Putamen_L_, _Heschl_L_, _Heschl_R_, _Temporal_Sup_L_, _Temporal_Sup_R_, _Cerebelum_9_L_, _Vermis_3_ ], where each element is the abbreviation of a brain ROI label. We also visualize these brain ROIs in Fig. 3. Through Fig. 3, we can find that these brain ROIs are mainly concentrated in several regions. Furthermore, we combine the template graphs with the obtained ROI set in Section IV-D to generate the subgraph that contains the key ROIs as well as their connectivities. Then, we visualize the subgraph of the brain network in Fig. 4. Through Fig. 4, we can find that the key connections among brain ROIs of the cerebellum, prefrontal cortex, posterior parietal cortex, and middle temporal gyri are highly related to the ASD and TD classification. More importantly, the above finding on ASD is consistent with several previous studies [64, 65]. This implies that the findings produced by TiBGL are insightful and meaningful. To this end, the obtained subgraph of the brain network can be seen as the interpretation result for neurological disorders.
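The edge-selection rule described in Section III — keep an edge between two selected nodes when the subtracted template graph \(|\mathbf{G}_{c_1}-\mathbf{G}_{c_2}|\) indicates a connection — can be sketched as below; the explicit threshold parameter is an assumption of this sketch, since the paper only states connectivity in the subtracted template.

```python
import numpy as np

def build_subgraph(e, G1, G2, thresh=0.0):
    """Construct the interpretation subgraph E.

    Nodes come from the binary indicator e obtained by solving Eq. (8);
    an edge (i, j) is kept when both endpoints are selected and the
    subtracted template |G_c1 - G_c2| exceeds `thresh` (the threshold
    is an assumption made for this illustrative sketch).
    """
    D = np.abs(G1 - G2)
    E = ((D > thresh) & (np.outer(e, e) > 0)).astype(int)
    np.fill_diagonal(E, 0)  # no self-loops in the connectivity pattern
    return E
```

The resulting binary matrix directly gives the ROI pairs plotted in the subgraph visualization.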
### _Ablation Analysis (Q3)_

To explain why the proposed TiBGL works in the diagnosis of neurological disorders, an ablation analysis is conducted to evaluate the effects of the template brain graphs, the encoder blocks, and the hyper-parameters. Specifically, for each test, the corresponding component is removed while the others are retained.

#### IV-E1 Effect of Template Brain Graph

To demonstrate the effect of the proposed template brain graph, we conduct experiments with TiBGL and its variant on the ABIDE and CJFH datasets. As shown in Table II, the proposed TiBGL achieves the best performance on these two datasets. Compared to TiBGL without template brain graphs, the full TiBGL obtains an obvious improvement due to the knowledge discovered by the templates, further improving brain network identification. Besides, TiBGL extracts template graphs inspired by the two research findings illustrated in Section III-C. For this reason, we analyze whether the obtained template graphs are consistent with these two neural findings. First, we visualize the similarity scores \(\langle\mathbf{W}_{k},\mathbf{G}_{c}\rangle\) in Fig. 5. Through the data distributions in Fig. 5, we can find that the similarity between a given brain graph and the template brain graph of its own group is usually larger than that with the other template brain graphs. This implies that the template brain graph can be seen as a consistent pattern of functional connectivity among individuals in the same group, which is consistent with the first neural finding. Second, we summarize the heatmaps of the template brain graphs of the ABIDE and CJFH datasets in Fig. 6. Different from the dense brain graphs of individuals, the template brain graphs are sparser, containing only those edges that play important roles for their group. Sparse brain graphs can also be seen as an effective means to eliminate noise and redundant information in the original brain graphs.
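The similarity-score check just described can be sketched as below; we assume the Frobenius inner product for \(\langle\mathbf{W}_{k},\mathbf{G}_{c}\rangle\), and the function name is ours.

```python
import numpy as np

def similarity_scores(W, templates):
    """Similarity scores <W_k, G_c> between one brain graph and each
    group-level template graph (Frobenius inner product assumed).

    W         : (M, M) individual brain graph
    templates : list of (M, M) group-level template graphs
    Returns one score per group; argmax gives the most similar group.
    """
    return np.array([np.sum(W * G) for G in templates])
```

Plotting these per-group scores for all subjects reproduces the kind of distributions shown in Fig. 5: subjects should score highest against their own group's template.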
More importantly, there exists a structural difference between different groups, as shown in Fig. 6. This implies that template brain graphs can reveal the groups' edge-level differences in functional brain connectivity matrices, which is consistent with the neural finding [34].

#### IV-E2 Effect of Convolutional Encoder Block

To validate the effect of the convolutional blocks, we additionally propose two variants of TiBGL by substituting them with a multilayer perceptron (MLP) and a Transformer. For the MLP, we flatten the fused brain graphs into vectors to predict the brain diagnosis. For the Transformer, we use the rows of the fused brain graphs as tokens and finally pool all tokens into a global representation. We summarize the experimental results of classification accuracy on the ABIDE and CJFH datasets in detail in Table III. According to Table III, it is obvious that TiBGL with convolutional blocks achieves better performance. The main reason is that a CNN with fewer parameters is more suitable for situations with limited brain graphs. Therefore, we can employ such convolutional blocks in brain network modeling to advance the model's performance. Besides, this also implies that the learnt template brain graphs can be utilized in a plug-and-play manner in brain graph models.

#### IV-E3 Effect of Hyper-parameters

To validate the effect of the hyper-parameters \(\lambda_{1}\) and \(\lambda_{2}\) in Eq. (4), we conduct experiments with different settings on the ABIDE dataset. We summarize the experimental results of classification accuracy in Tables IV-V. The proposed TiBGL obtains stable results on the ABIDE dataset in most situations. According to the results in Tables IV-V, we can readily find that the proposed TiBGL obtains the best performance when \(\lambda_{1}=0.1\) and \(\lambda_{2}=0.005\). More importantly, it is obvious that there exists a wide range of hyper-parameters \(\lambda_{1}\) and \(\lambda_{2}\) within which relatively stable and good results can be readily obtained.

Fig. 3: Visualization of brain ROIs on the ABIDE dataset, showing those important ROIs related to ASD, such as ROIs of the cerebellum, prefrontal cortex, posterior parietal cortex, and middle temporal gyri, etc.

Fig. 4: Visualization of the subgraph on the ABIDE dataset, showing those important connectivities between brain ROIs related to ASD, such as Rolandic_Oper_L, Rolandic_Oper_R, Insula_L, Insula_R, Cingulum_Mid_L, Hippocampus_L, Postcentral_R, etc.

Fig. 5: Visualization of similarity scores on the ABIDE and CJFH datasets. The X-axis and Y-axis stand for the coefficients with the template graphs of the ASD and TD groups, respectively. The visualization results demonstrate that the similarity between a brain graph and the template graph of its own group is usually the largest.

Fig. 6: Template brain graphs on the ABIDE and CJFH datasets. The X-axis denotes the brain ROIs, and the colorbar reflects the different levels of functional connectivity between brain ROIs. Visualizations of the different template brain graphs of the ASD and TD groups imply that template brain graphs can reveal the groups' edge-level differences in functional brain connectivity matrices. Similarly, visualizations of the different template brain graphs of the Male and Female groups validate the same observation.

### _Discussions (Q4)_

Based on the above experimental results and analysis, we can find that TiBGL not only achieves better performance on brain classification tasks but also discovers insightful connectivity patterns related to brain disorders. For this reason, it is not difficult to observe that the proposed TiBGL has the following advantages in terms of robustness, discrimination, and interpretation of the brain network model.

* **Robustness.** TiBGL introduces template brain graphs to overcome the issues of noise and redundant information in original brain graphs, guiding the following tasks to highlight those important brain ROIs as well as their connectivity relationships.
TiBGL exploits the template graphs from both the intra-group and inter-group aspects, which can augment instance-level and group-level brain networks for diagnosing and explaining neurological disorders. Compared with works based on dense brain networks, TiBGL adopts a more effective manner to improve its robustness. * **Discrimination.** Compared with current brain network classification works, the redundant and noisy information in the instance-level brain networks is first removed to some extent through augmentation with the template brain graphs. More importantly, the augmented brain networks are relatively easy to divide into the correct groups. Then, a CNN model with few parameters is adopted to implement brain network classification for brain disorders, which is beneficial for end-to-end deep model training with limited subjects. * **Interpretation.** TiBGL combines the template brain graphs with related neuroscience findings to search for a meaningful subgraph in the brain network, highlighting important ROI nodes as connectivity patterns. The subgraph can explicitly provide the ROI sets and their connectivities for disorder analysis, which helps to better understand the neural mechanism of neurological disorders. The results on the ABIDE dataset show that the findings of TiBGL are coherent with recent neuroscience literature.

## V Conclusion

Mining human brain networks to discover patterns that can be used to discriminate between healthy individuals and patients affected by brain disorders is a fundamental task in neuroscience. To accomplish this target, this paper proposes a novel brain graph learning framework with both discriminative and interpretable abilities, called TiBGL. TiBGL first proposes template brain graph learning to extract a template graph for each group, which is then utilized to induce the subsequent tasks of discrimination and interpretation analysis.
Notably, the learned template graphs not only highlight the important connectivity patterns in each group but also remove the noise and redundant information in brain networks. Then, a template-induced convolutional neural network is designed to fuse rich information from the brain graphs and the learned template graphs. Moreover, TiBGL provides insightful brain interpretation analysis induced by the template graphs to explore meaningful connectivity patterns related to brain disorders. To this end, TiBGL provides a complete framework with powerful discrimination and interpretation via the learnt templates. The experimental results on brain graph datasets demonstrate that the proposed TiBGL can help to better diagnose and understand the neural mechanisms of neurological disorders.

## Acknowledgements

The authors would like to thank the anonymous reviewers for their insightful comments and suggestions to significantly improve the quality of this paper.
2309.06481
Towards Accurate Modeling of Line-Intensity Mapping One-Point Statistics: Including Extended Profiles
Line-intensity mapping (LIM) is quickly attracting attention as an alternative technique to probe large-scale structure and galaxy formation and evolution at high redshift. LIM one-point statistics are motivated because they provide access to the highly non-Gaussian information present in line-intensity maps and contribute to break degeneracies between cosmology and astrophysics. Now that promising surveys are underway, an accurate model for the LIM probability distribution function (PDF) is necessary to employ one-point statistics. We consider the impact of extended emission and limited experimental resolution in the LIM PDF for the first time. We find that these effects result in a lower and broader peak at low intensities and a lower tail towards high intensities. Focusing on the distribution of intensities in the observed map, we perform the first model validation of LIM one-point statistics with simulations and find good qualitative agreement. We also discuss the impact on the covariance, and demonstrate that if not accounted for, large biases in the astrophysical parameters can be expected in parameter inference. These effects are also relevant for any summary statistic estimated from the LIM PDF, and must be implemented to avoid biased results. The comparison with simulations shows, however, that there are still deviations, mostly related with the modeling of the clustering of emitters, which encourage further development of the modeling of LIM one-point statistics.
José Luis Bernal
2023-09-12T18:00:06Z
http://arxiv.org/abs/2309.06481v2
# Towards Accurate Modeling of Line-Intensity Mapping One-Point Statistics: Including Extended Profiles ###### Abstract Line-intensity mapping (LIM) is quickly attracting attention as an alternative technique to probe large-scale structure and galaxy formation and evolution at high redshift. LIM one-point statistics are motivated because they provide access to the highly non-Gaussian information present in line-intensity maps and contribute to break degeneracies between cosmology and astrophysics. Now that promising surveys are underway, an accurate model for the LIM probability distribution function (PDF) is necessary to employ one-point statistics. We consider the impact of extended emission and limited experimental resolution in the LIM PDF for the first time. We find that these effects result in a lower and broader peak at low intensities and a lower tail towards high intensities. Focusing on the distribution of intensities in the observed map, we perform the first model validation of LIM one-point statistics with simulations and find good qualitative agreement. We also discuss the impact on the covariance, and demonstrate that if not accounted for, large biases in the astrophysical parameters can be expected in parameter inference. These effects are also relevant for any summary statistic estimated from the LIM PDF, and must be implemented to avoid biased results. The comparison with simulations shows, however, that there are still deviations, mostly related to the modeling of the clustering of emitters, which encourage further development of the modeling of LIM one-point statistics. ## I Introduction Line-intensity mapping (LIM) is an emerging observational technique that aims to obtain three-dimensional maps of the Universe using the integrated flux of bright spectral lines over cosmological scales [1; 2; 3; 4]. Therefore, LIM probes the aggregate line emission by all sources at a given redshift.
This grants access to otherwise undetectable faint populations of emitters and makes LIM the optimal tracer of the large-scale structure at high redshift in the high-noise or high-confusion regimes [5; 6]. Furthermore, dropping the requirement of resolved detection of individual emitters enables the use of low-aperture telescopes and quick scans of large portions of the sky. Besides tracing the underlying matter density fluctuations, line-intensity fluctuations depend on the astrophysical phenomena that trigger the line emission. Hence, combining different spectral lines, LIM also probes galaxy formation and evolution across cosmic times, complementing luminosity functions from galaxy surveys. LIM is attracting increasing attention, with numerous experiments and surveys currently observing [7; 8; 9; 10; 11; 12; 13] and many others that will see first light in the forthcoming years [14; 15; 16; 17; 18; 19; 20]. Preliminary detections (see e.g., Refs. [21; 22; 23; 24; 25; 26; 27; 28; 29]) give rise to optimism regarding the prospects for this technique and its potential to bridge between low-redshift galaxy surveys and early-Universe probes like the cosmic microwave background (see e.g., Refs. [30; 31; 32; 33]). The summary statistics most commonly employed to analyze line-intensity maps are 2-point statistics like the correlation function or its Fourier counterpart, the power spectrum. They benefit from some robustness against smooth, uncorrelated observational contaminants and build upon the comprehensive formalism developed for galaxy surveys. However, line-intensity fluctuations, which trace the non-linear, non-Gaussian large-scale structure and are modulated by non-trivial astrophysical processes, are highly non-Gaussian. Therefore, a significant part of the information contained in line-intensity maps is not captured by power spectrum measurements.
Furthermore, LIM's intrinsic sensitivity to both cosmology and astrophysics may hinder the interpretation of the LIM power spectra. For instance, the mean intensity and the bias relating matter and source fluctuations present a perfect degeneracy in the power spectrum at large scales. Although the degeneracy can be partially broken using smaller scales in the mildly non-linear regime [34; 35], the power spectrum is only sensitive to the first two moments of the luminosity function, which results in degenerate astrophysical and cosmological parameters (see e.g., Refs. [36; 37] for a discussion on the degeneracies between astrophysics and cosmology for the power spectrum). Accessing non-Gaussian information and breaking degeneracies between cosmology and astrophysics motivate the development of alternative summary statistics. Non-Gaussian information can be obtained using higher-order statistics like the bispectrum and trispectrum. However, one-point statistics, which depend directly on the LIM probability distribution function (PDF), hence on the full distribution of (non-Gaussian) intensity fluctuations and the whole line-luminosity function, offer preferable properties. The simplest one-point statistic is the actual estimator of the PDF, known as the voxel intensity distribution (VID) in the context of LIM [38; 39]. The VID is very complementary to the power spectrum because while the latter depends on clustering and the first two moments of the line-luminosity function, the former depends on subsequent convolutions of the luminosity function and zero-lag moments of clustering. The potential of the combination of both summary statistics has been demonstrated in the literature [40; 41].
Furthermore, several studies have highlighted the sensitivity of the VID not only to astrophysical parameters but also to beyond-\(\Lambda\)CDM cosmologies and physics beyond the Standard Model, either directly or through reducing degeneracies when combined with the power spectrum [42; 43; 44; 45; 46]. First instances of the LIM PDF formalism relied on small modifications to the probability-of-deflection techniques (see e.g., Refs. [47; 48; 49; 50]): they considered the conditional probability of finding a given intensity in a voxel given the number of emitters contained in it [38; 39]. Ref. [41] explicitly modeled the dependence on the local emitter overdensity to derive the first analytic model of the VID supersample covariance and the VID-power spectrum covariance. Recently, Ref. [51] adapted the formalism presented in Refs. [52; 53] to compute the LIM PDF using the halo model, emphasizing the ability of the VID to break the degeneracy between the mean intensity and the emitter bias. Nonetheless, all previous work assumes that each emitter contributes only to the voxel in which it is contained. This is an unphysical approximation: whether it is due to extended sources, line broadening caused by the internal peculiar velocity of the gas, or the limited experimental resolution, the observed emission from each emitter is redistributed to a finite volume that extends beyond the voxel in which each emitter is located. In this work we build upon the halo-model based formalism from Ref. [51] and model this effect in the analytic derivation of the LIM PDF. We apply the ergodic hypothesis and obtain the PDF of the intensity in a voxel for a single emitter from the spatial profile of its observed intensity. From this starting point we can model the LIM PDF including the smearing of the observed signal due to extended sources, line broadening, and limited experimental resolution.
For the first time, we show a comparison between the theoretical analytic prediction of the LIM PDF and the results from simulations to validate our model. We find qualitatively good agreement, satisfactorily capturing the effects of the extended intensity profiles. Including extended observed intensity profiles significantly changes the VID prediction, and these profiles must be taken into account to ensure unbiased parameter inference. We also find, as expected, that accounting for the fact that an emitter contributes to more than one voxel increases the correlation between intensity bins. However, other parts of the modeling, especially those related to matter clustering, must be improved to match the results from simulations. The updated VID modeling is now included in the publicly available code lim.1 Footnote 1: [https://github.com/jl-bernal/lim](https://github.com/jl-bernal/lim) While in this work we focus on the VID, extended emission and limited experimental resolution must also be included in the derivation of any summary statistic that relies on the LIM PDF. Examples beyond the VID include 1-point cross correlations of different data sets, such as the conditional VID [54] or the deconvolved density estimator [55; 56]. We assume the standard \(\Lambda\)CDM cosmology, with best-fit parameter values from the full _Planck_ analysis without external observations [57]. In addition, we employ the following convention for the Fourier transforms.
For a \(d\)-dimensional Fourier-space variable \(\mathbf{u}\) being the conjugate of a configuration-space variable \(\mathbf{v}\), the direct and inverse Fourier transforms of a function \(f\) and its Fourier counterpart \(\tilde{f}\) are given by \[\begin{split}\tilde{f}(\mathbf{u})&=\int\mathrm{d}^{d} \mathbf{v}\ f(\mathbf{v})e^{-i\mathbf{v}\mathbf{u}}\,,\\ f(\mathbf{v})&=\int\frac{\mathrm{d}^{d}\mathbf{u}}{(2\pi) ^{d}}\ \tilde{f}(\mathbf{u})e^{i\mathbf{v}\mathbf{u}}\,,\end{split} \tag{1}\] where the tilde denotes Fourier-space functions. Given the large dynamical range of the variables considered for the Fourier transforms (spanning orders of magnitude, positive and negative values), we compute the Fourier transform using Non-Uniform Fast Fourier Transform routines as implemented in FINUFFT2[58; 59]. Footnote 2: [https://finufft.readthedocs.io/en/latest/](https://finufft.readthedocs.io/en/latest/) This article is structured as follows. First, we review the PDF modeling from Ref. [51] and introduce the effects of extended intensity profiles in Sec. II. We discuss limited experimental resolution and line broadening as causes of the extended profiles and their impact in the PDF prediction in Sec. III. We compare theoretical predictions with simulations in Sec. IV. We discuss the relevance of modeling the extended emission estimating the bias introduced in parameter inference when point sources are considered in Sec. V. We summarize and conclude in Sec. VI. ## II LIM 1-point PDF In this section we review the modeling to compute the line-intensity 1-point PDF and extend it to account for extended intensity profiles. We build upon the modeling for point sources presented in Ref. [51] using the halo model, which in turn adapted the formalism introduced in Refs. [52] and [53] for the thermal Sunyaev-Zel'dovich effect and weak lensing convergence, respectively.
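As a sanity check of the sign and normalization conventions in Eq. (1), the following minimal numpy sketch (not part of the original analysis; it replaces the FINUFFT machinery with a brute-force Riemann sum) transforms a 1D unit Gaussian, whose Fourier counterpart is known analytically:

```python
import numpy as np

# Direct numerical check of the convention f~(u) = Int dv f(v) exp(-i v u):
# for f(v) = exp(-v^2/2), the analytic transform is sqrt(2*pi)*exp(-u^2/2).
v = np.linspace(-20.0, 20.0, 40_001)
dv = v[1] - v[0]
f = np.exp(-v**2 / 2)

def fourier(u):
    # Riemann sum of the direct transform with the e^{-i v u} sign convention
    return np.sum(f * np.exp(-1j * v * u)) * dv

for u in (0.0, 0.7, 1.5):
    analytic = np.sqrt(2 * np.pi) * np.exp(-u**2 / 2)
    assert abs(fourier(u) - analytic) < 1e-6
```

This brute-force sum only illustrates the convention; for the large dynamical ranges involved in the actual calculation, the non-uniform FFT routines mentioned in the text are the appropriate tool.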
### LIM signal The specific line intensity per unit frequency emitted at a given position \(\mathbf{x}\) is given by \[I_{\nu}(\mathbf{x})=\frac{c}{4\pi\nu H(z)}\rho_{\mathrm{L}}(\mathbf{x})\,, \tag{2}\] where \(c\) is the speed of light, \(\nu\) is the rest-frame frequency of the spectral line of interest, \(H(z)\) is the Hubble parameter at redshift \(z\) corresponding to position \(\mathbf{x}\), and \(\rho_{\mathrm{L}}\) is the line luminosity density. Using the Rayleigh-Jeans relation, the brightness temperature can be defined from the specific intensity as \[T(\mathbf{x})=\frac{c^{3}(1+z)^{2}}{8\pi k_{\mathrm{B}}\nu^{3}H(z)}\rho_{\mathrm{L} }(\mathbf{x})=X_{\mathrm{LT}}\rho_{\mathrm{L}}(\mathbf{x})\,, \tag{3}\] where \(k_{\mathrm{B}}\) is the Boltzmann constant, and we have defined \(X_{\mathrm{LT}}\) as a redshift-dependent multiplicative factor to simplify the expressions. Throughout this work we will use the brightness temperature as the variable describing line-intensity maps, but our approach is equally applicable to specific intensities. We can model the spatial distribution of the line luminosity associated with a single source located at \(\mathbf{x}_{s}\), without loss of generality, as \[\frac{\mathrm{d}L_{s}}{\mathrm{d}\mathbf{x}}(\mathbf{x}|\mathbf{\theta}_{s})=L_{s,\mathrm{ tot}}(\mathbf{\theta}_{s})\varrho(\mathbf{x}-\mathbf{x}_{s}|\mathbf{\theta}_{s})\,, \tag{4}\] where \(L_{s,\mathrm{tot}}\) is the total line luminosity of the source (i.e., integrated over its line width and point spread function) and \(\varrho\) is a three-dimensional spatial emission profile with inverse-volume units that is normalized to unity: \(\int\mathrm{d}^{3}\mathbf{x}\varrho(\mathbf{x}|\mathbf{\theta}_{s})=1\). Any experimental or astrophysical effect that results in a spatial smoothing of the observed signal in the final map can be embedded in \(\varrho\), which is generally anisotropic.
Both \(L_{s,\mathrm{tot}}\) and \(\varrho\) depend on a set of astrophysical properties of the source included in the parameter vector \(\mathbf{\theta}_{s}\). Then, the temperature at a specific point can be expressed as \[T(\mathbf{x})=X_{\mathrm{LT}}\sum_{s}\frac{\mathrm{d}L_{s}}{\mathrm{d}\mathbf{x}}(\bm {x}|\mathbf{\theta}_{s})\,, \tag{5}\] where the sum is over all sources, indexed by \(s\). We build upon this expression to derive the PDF for extended profiles. ### The PDF for extended profiles Since brightness temperature is an additive quantity, the PDF of the aggregate emission is the convolution of the PDF of each emitter. This calculation is much more tractable in Fourier space applying the convolution theorem. Let us define \(\uptau\) as the Fourier conjugate of the brightness temperature. The Fourier transform of a given PDF \(\mathcal{P}\) of the brightness temperature, \[\tilde{\mathcal{P}}(\uptau)= \int\mathrm{d}T\mathcal{P}(T)e^{-iT\uptau}=\langle e^{-iT\uptau}\rangle= \tag{6}\] \[= \frac{1}{V_{\mathrm{obs}}}\int_{V_{\mathrm{obs}}}\mathrm{d}^{3} \mathbf{x}e^{-iT(\mathbf{x})\uptau}\,,\] is the characteristic function, and angle brackets denote the average over realizations. The characteristic function is dimensionless, and can also be obtained by invoking the ergodic hypothesis and taking the average over the observed volume \(V_{\mathrm{obs}}\), as done in the last equality above. Let us explicitly separate the halo mass \(M\) from \(\mathbf{\theta}\) to ease readability.
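The ergodic average in Eq. (6) can be illustrated with a toy numpy sketch. Purely for illustration (this distribution is an assumption, not part of the text), the voxel temperatures are drawn from a unit-mean exponential distribution, whose characteristic function with this sign convention is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "map": the ergodic average over many voxels replaces the ensemble average.
T = rng.exponential(scale=1.0, size=2_000_000)  # assumed Exp(1) temperatures

def char_fun(tau):
    # Eq. (6): empirical characteristic function <exp(-i T tau)>
    return np.mean(np.exp(-1j * T * tau))

def char_fun_analytic(tau):
    # exact characteristic function of an Exp(1) variable with this sign convention
    return 1.0 / (1.0 + 1j * tau)

for tau in (0.3, 1.0, 3.0):
    assert abs(char_fun(tau) - char_fun_analytic(tau)) < 5e-3
```

The Monte Carlo average converges to the analytic characteristic function at the expected \(\sim 1/\sqrt{N}\) rate.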
Consider first an infinitesimal mass bin centered at \(M\), for which the brightness temperature PDF is \[\mathcal{P}^{(M)}(T) =\int\mathrm{d}\mathbf{\theta}\mathcal{P}(\mathbf{\theta}|M)\times \tag{7}\] \[\times\left[\mathcal{P}^{(M,\mathbf{\theta})}_{N=0}\delta_{\mathrm{D} }(T)+\mathcal{P}^{(M,\mathbf{\theta})}_{N=1}\mathcal{P}^{(M,\mathbf{\theta})}_{1}(T) \right]\,,\] where we marginalize over the conditional multidimensional distribution \(\mathcal{P}(\mathbf{\theta}|M)\) of astrophysical properties given a halo mass (see e.g., Ref. [60]), \(\mathcal{P}^{(M,\mathbf{\theta})}_{N=x}\) is the PDF of having \(x\) emitters (halos) of mass \(M\) and set of properties \(\mathbf{\theta}\) contributing to a specific point, and \(\mathcal{P}^{(M,\mathbf{\theta})}_{x}(T)\) is the PDF of finding a temperature \(T\) at a point in space receiving contributions from \(x\) emitters with such properties. If there is no emitter (i.e., \(N=0\)), there is no signal and \(\mathcal{P}_{0}(T)=\delta_{\mathrm{D}}(T)\) is the Dirac delta. For an infinitesimal mass bin \(\mathcal{P}^{(M,\mathbf{\theta})}_{N>1}=0\), hence \(\mathcal{P}^{(M,\mathbf{\theta})}_{N=0}=1-\mathcal{P}^{(M,\mathbf{\theta})}_{N=1}\). The characteristic function can therefore be expressed as \[\tilde{\mathcal{P}}^{(M)}(\uptau)=1+\int\mathrm{d}\mathbf{\theta}\mathcal{P}(\mathbf{ \theta}|M)\mathcal{P}^{(M,\mathbf{\theta})}_{N=1}\left(\tilde{\mathcal{P}}^ {(M,\mathbf{\theta})}_{1}(\uptau)-1\right)\,. \tag{8}\] Although the profile \(\varrho\) of the extended emission extends arbitrarily in space, in practice we can truncate it at some distance large enough that there is no sizable signal loss.
Under this assumption, the signal profile covers a finite volume \(V_{\mathrm{prof}}\) which can depend on \(M\) and \(\mathbf{\vartheta}\), so that \[\tilde{\mathcal{P}}^{(M,\mathbf{\vartheta})}_{1}(\uptau)=\int\mathrm{d}^{3}\mathbf{x }\mathrm{d}L\mathcal{P}(\mathbf{x}|M,\mathbf{\vartheta})\mathcal{P}(L|M,\mathbf{\vartheta },\mathbf{x})e^{-iT\uptau}\,, \tag{9}\] where we explicitly marginalize over the position (for which \(\mathcal{P}(\mathbf{x}|M,\mathbf{\vartheta})\) is uniform over \(V_{\mathrm{prof}}\) and zero otherwise) and \(\mathcal{P}(L|M,\mathbf{\vartheta},\mathbf{x})\) accounts for any distribution of the line luminosity given the halo mass, the astrophysical properties, and the distance to the center of the profile.3 Equation (9) is equivalent to Eq. (6) for a single source. Note that the temperature factor in the exponential depends on all the quantities that are marginalized over. Finally, assuming Poisson statistics, and neglecting clustering for now, \[\mathcal{P}^{(M,\boldsymbol{\theta})}_{N=1}=\mathrm{d}M\frac{\mathrm{d}n}{ \mathrm{d}M}V_{\mathrm{prof}}(M,\boldsymbol{\theta})\,, \tag{10}\] where \(\mathrm{d}n/\mathrm{d}M\) is the halo mass function. For infinitesimal bins, \(\mathcal{P}^{(M,\boldsymbol{\theta})}_{N=1}\ll 1\), so that we can interpret Eq. (8) as the linear expansion of the exponential \[\begin{split}&\tilde{\mathcal{P}}^{(M)}(\uptau)=\\ &\quad=\exp\left\{\int\mathrm{d}\boldsymbol{\theta}\mathcal{P}( \boldsymbol{\theta}|M)\mathcal{P}^{(M,\boldsymbol{\theta})}_{N=1}\left( \tilde{\mathcal{P}}^{(M,\boldsymbol{\theta})}_{1}(\uptau)-1\right)\right\}\,. \end{split} \tag{11}\] Now we need to extend this to all halos. Using the convolution theorem, the characteristic function for the whole population is the product of the individual characteristic functions.
Thus, the characteristic function \(\tilde{\mathcal{P}}^{(u)}\) for the whole population without accounting for clustering is \[\begin{split}\tilde{\mathcal{P}}^{(u)}&(\uptau)= \prod\tilde{\mathcal{P}}^{(M)}(\uptau)=\\ &=\exp\left\{\int\mathrm{d}M\mathrm{d}\boldsymbol{\theta} \mathcal{P}(\boldsymbol{\theta}|M)\frac{\mathrm{d}n}{\mathrm{d}M}V_{\mathrm{ prof}}(M,\boldsymbol{\theta})\right.\times\\ &\qquad\qquad\times\left(\tilde{\mathcal{P}}^{(M,\boldsymbol{ \theta})}_{1}(\uptau)-1\right)\right\},\end{split} \tag{12}\] where, in the last equality, we have substituted the sum in the exponent by its integral limit. From the expression above, and assuming that the astrophysical properties are uncorrelated with clustering, it is trivial to include the effect of clustering. Clustering varies on scales much larger than the observed intensity profiles of the sources. Therefore, for a specific realization (or position), we can include the effects of clustering by modulating the halo mass function with the halo overdensity field \(\delta_{h}\equiv(n_{h}-\langle n_{h}\rangle)/\langle n_{h}\rangle\), where \(n_{h}\) is the local halo number density. Therefore, we promote \[\frac{\mathrm{d}n}{\mathrm{d}M}\rightarrow\frac{\mathrm{d}n}{\mathrm{d}M}(1+ \delta_{h}(\boldsymbol{x},M))\,. \tag{13}\] Then, the characteristic function \(\tilde{\mathcal{P}}^{(\delta)}\) accounting for clustering for a _single_ realization is \[\begin{split}\tilde{\mathcal{P}}^{(\delta)}(\uptau)=\exp& \left\{\int\mathrm{d}M\mathrm{d}\boldsymbol{\theta}\mathcal{P}( \boldsymbol{\theta}|M)\frac{\mathrm{d}n}{\mathrm{d}M}V_{\mathrm{prof}}(M, \boldsymbol{\theta})\right.\times\\ &\qquad\qquad\times\left(\tilde{\mathcal{P}}^{(M,\boldsymbol{ \theta})}_{1}(\uptau)-1\right)\delta_{h}\right\}\tilde{\mathcal{P}}^{(u)} \,.\end{split} \tag{14}\] The global characteristic function is then the average over the realizations of \(\tilde{\mathcal{P}}^{(\delta)}\).
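The exponential form of the unclustered characteristic function in Eqs. (11)-(12) is that of a compound Poisson process. A toy numpy sketch (purely illustrative assumptions: a single "mass bin" with mean emitter count `nbar` and unit-mean exponential single-emitter temperatures, both chosen arbitrarily) checks this analytic form against a direct Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
nbar = 3.0        # toy stand-in for (dn/dM) * V_prof integrated over the bin
nsamp = 200_000

# Monte Carlo: Poisson number of emitters per point, i.i.d. Exp(1) temperature each
N = rng.poisson(nbar, size=nsamp)
T = np.array([rng.exponential(size=n).sum() for n in N])

def char_tot(tau):
    # empirical characteristic function of the aggregate temperature
    return np.mean(np.exp(-1j * T * tau))

def char_tot_analytic(tau):
    # compound-Poisson form exp{ nbar * (phi_1(tau) - 1) }, cf. Eqs. (11)-(12)
    phi1 = 1.0 / (1.0 + 1j * tau)   # CF of a single Exp(1) emitter
    return np.exp(nbar * (phi1 - 1.0))

for tau in (0.5, 1.0, 2.0):
    assert abs(char_tot(tau) - char_tot_analytic(tau)) < 1.5e-2
```

The agreement shows why the linear expansion in Eq. (8) exponentiates when many independent infinitesimal bins are multiplied together.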
Since \(\tilde{\mathcal{P}}^{(u)}\) does not depend on the overdensities, we can take it out of the average, and we are left with the exponential of the term including \(\delta_{h}\). We invoke the cumulant expansion, which states that for a random variable \(X\), \[\langle e^{X}\rangle=\exp\left\{\sum_{p=1}^{\infty}\langle X^{p}\rangle_{c}/p! \right\}\,, \tag{15}\] where \(\langle X^{p}\rangle_{c}\) denotes the \(p\)-th cumulant (connected moment) of \(X\). By definition, \(\langle\delta_{h}\rangle=0\), and all terms with \(p>2\) vanish for a Gaussian distribution. Although gravitational collapse induces non-Gaussianities and higher-order terms should be included, we take a first approximation and truncate the cumulant expansion at \(p=2\). Furthermore, we relate \(\delta_{h}\) to the underlying matter density field \(\delta_{m}\) with a linear, mass-dependent halo bias \(b_{h}\) so that \(\delta_{h}^{(M)}=b_{h}(M)\delta_{m}\) and the second cumulant of the halo distribution is \[\langle\delta_{h}^{(M)}(\boldsymbol{x})\delta_{h}^{(M^{\prime})}(\boldsymbol{ x})\rangle\equiv b_{h}(M)b_{h}(M^{\prime})\sigma^{2}\,, \tag{16}\] where \(\sigma^{2}\) is the zero-lag variance of the matter distribution. Finally, the overall characteristic function is given by \[\begin{split}\tilde{\mathcal{P}}(\uptau)=\exp&\left\{ \left[\int\mathrm{d}M\mathrm{d}\boldsymbol{\theta}\mathcal{P}(\boldsymbol{ \theta}|M)\frac{\mathrm{d}n}{\mathrm{d}M}V_{\mathrm{prof}}(M,\boldsymbol{ \theta})\right.\times\\ &\qquad\qquad\times\left(\tilde{\mathcal{P}}^{(M,\boldsymbol{ \theta})}_{1}(\uptau)-1\right)b_{h}(M)\right]^{2}\frac{\sigma^{2}}{2} \right\}\tilde{\mathcal{P}}^{(u)}\,.\end{split} \tag{17}\] We can obtain the PDF of the brightness temperature at a point by computing the inverse Fourier transform of the characteristic function above. Sometimes it can be useful to consider the PDF of brightness temperature fluctuations \(\Delta T\equiv T-\bar{T}\) as a crude approximation for foreground subtraction, where \(\bar{T}\) is the mean brightness temperature.
This can be obtained by multiplying the characteristic function by \(e^{i\bar{T}\uptau}\). ### Voxelized volume and practical considerations The derivation in the subsection above returns the PDF of the brightness temperature in a specific point. In practice, however, we measure the brightness temperature from observations in a discretized map, a data cube in which each cell or voxel (three-dimensional pixel) corresponds to a comoving volume \(V_{\mathrm{vox}}\). Discretizing the map is relevant at three stages of the derivation above: the volume of the profiles, the characteristic function of a single specific emitter, and the cumulants of the matter overdensity field. After discretizing the observed volume, the measured temperature \(T_{i}\) in a voxel centered at \(\boldsymbol{x}_{i}\) is \[T_{i}=\frac{X_{\mathrm{LT}}}{V_{\mathrm{vox}}}\sum_{s}\int_{V_{\mathrm{vox},i} }\mathrm{d}^{3}\boldsymbol{x}\frac{\mathrm{d}L_{s}}{\mathrm{d}\boldsymbol{x} }(\boldsymbol{x}|\boldsymbol{\vartheta}_{s})\,, \tag{18}\] where the sum is over all sources indexed by \(s\). For isotropic intensity profiles and spherical voxels, the right-hand side of Eq. (9) can be easily evaluated inverting the relationship between temperature and position for a given emitter (see appendix B in Ref. [53]). However, this is not possible in a more general setup. In particular, the observed signal profiles in LIM surveys are anisotropic due to the different angular and spectral resolutions, line broadening affecting the profile only along the line of sight, etc. Moreover, although the voxelization can be arbitrary, it is preferable to match the experimental resolutions, which need not correspond to similar distances in the directions along and transverse to the line of sight. Instead, we take a different approach and explicitly compute the spatial integral in Eq. (9), adapted for a voxelized space.
First, we consider a three-dimensional space and locate the emitter of interest in its center. We grid the space with cells of the same comoving size as the voxels and compute the total luminosity in each voxel with the integral of Eq. (18). We truncate the emission profile at a minimum relative luminosity \(L_{\rm rel}^{\rm min}\) with respect to the voxel with maximum luminosity (e.g., keeping only the \(N_{\rm vox}^{\rm prof}\) voxels with \(L_{\rm vox}/{\rm max}(L_{\rm vox})\geq L_{\rm rel}^{\rm min}\)). We normalize the luminosity on each voxel so that their sum is still \(L_{s,\rm tot}\), and consider \(V_{\rm prof}=N_{\rm vox}^{\rm prof}V_{\rm vox}\). Then, we take advantage of the scaling property of the Fourier transform. Let us consider some scale temperature \(T_{0}\) so that any \(T=CT_{0}\). We can assume any PDF \(\mathcal{P}_{T_{0}}\) for \(T_{0}\) to account for any scatter in the conditional relations considered in the previous subsections. We assume a lognormal distribution \[\mathcal{P}_{T_{0}}(T;T_{0})=\frac{1}{\sqrt{2\pi}\,T\sigma_{\rm scat}\log(10) }\exp\left\{-\frac{\left[\log_{10}\left(\frac{T}{T_{0}}\right)+\frac{\sigma_{ \rm scat}^{2}\log(10)}{2}\right]^{2}}{2\sigma_{\rm scat}^{2}}\right\} \tag{19}\] with mean \(T_{0}\) and scatter \(\sigma_{\rm scat}\) in dex, so that for any \(T^{\prime}\equiv CT_{0}\) we have \(\mathcal{P}_{T^{\prime}}(T;CT_{0})=\mathcal{P}_{T_{0}}(T/C;T_{0})/C\), which fulfills \(\tilde{\mathcal{P}}_{T^{\prime}}(\uptau;CT_{0})=\tilde{\mathcal{P}}_{T_{0}}( C\uptau;T_{0})\).
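The gridding, truncation and renormalization procedure described above can be sketched as follows. The isotropic Gaussian profile and the values of `sigma`, `L_rel_min` and the grid size are arbitrary illustrative assumptions, not choices made in the text:

```python
import numpy as np
from math import erf, sqrt

# Toy emitter: isotropic Gaussian profile (sigma in voxel units), unit luminosity.
sigma = 0.8
L_tot = 1.0
L_rel_min = 1e-3      # truncation threshold relative to the brightest voxel
half = 6              # grid half-width in voxels, large enough to hold the profile

def gauss_cell(i):
    # exact integral of a normalized 1D Gaussian over the voxel [i-0.5, i+0.5]
    a, b = (i - 0.5) / (sqrt(2) * sigma), (i + 0.5) / (sqrt(2) * sigma)
    return 0.5 * (erf(b) - erf(a))

w = np.array([gauss_cell(i) for i in range(-half, half + 1)])
L_vox = L_tot * w[:, None, None] * w[None, :, None] * w[None, None, :]

# truncate faint voxels, then renormalize so the summed luminosity is still L_tot
keep = L_vox / L_vox.max() >= L_rel_min
L_kept = np.where(keep, L_vox, 0.0)
L_kept *= L_tot / L_kept.sum()

N_vox_prof = int(keep.sum())       # so that V_prof = N_vox_prof * V_vox
assert abs(L_kept.sum() - L_tot) < 1e-12
assert N_vox_prof < keep.size      # truncation removed the faintest voxels
```

The separable `erf` integrals make the per-voxel luminosities exact for this toy profile; the renormalization step guarantees that no total luminosity is lost to the truncation.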
Grouping all \(N_{\rm vox}^{(i)}\) voxels with the same temperature \(T_{i}\), the characteristic function for a halo with mass \(M\) and properties \(\mathbf{\vartheta}\) is \[\tilde{\mathcal{P}}_{1}^{(M,\mathbf{\vartheta})}(\uptau)=\sum_{i}\frac{N_{\rm vox }^{(i)}}{N_{\rm vox}^{\rm prof}}\tilde{\mathcal{P}}_{T_{0}}(T_{i}\uptau/T_{0} ;T_{0})\,, \tag{20}\] which corresponds to the weighted average of the characteristic function for each position in the spatial profile, i.e., a discretized version of Eq. (9). Finally, the discretization also affects the impact of clustering in the PDF. Voxels are the basic unit of information we have access to, hence halo overdensities are smoothed over scales of the size of the voxel. This can be modeled by convolving the overdensity field with the window function \(W_{\rm vox}\) of the voxel, usually a normalized top-hat function with the extension of the voxel, leaving \[\delta_{h}^{\rm v}(\mathbf{x})=\int\mathrm{d}^{3}\mathbf{x}^{\prime}W_{\rm vox}(\mathbf{x }-\mathbf{x}^{\prime})\delta_{h}(\mathbf{x}^{\prime}), \tag{21}\] where \(\int\mathrm{d}^{3}\mathbf{x}W_{\rm vox}=1\). We substitute \(\delta_{h}\) by \(\delta_{h}^{\rm v}\), and similarly for \(\delta_{m}\), in all their instances in the previous subsection. This change can be summarized with a slight modification in Eq. (17), where the cumulant of matter perturbations must be \[\sigma_{\rm vox}^{2}=\int\frac{\mathrm{d}^{3}\mathbf{k}}{(2\pi)^{3}}\tilde{W}_{\rm vox }^{2}(\mathbf{k})P_{m}^{(s)}(k)\,, \tag{22}\] where \(P_{m}^{(s)}\) is the nonlinear matter power spectrum in redshift space and \(\tilde{W}_{\rm vox}\) is the Fourier transform of the voxel window function. We estimate the nonlinear power spectrum as modeled by HMcode[61]. In this work, we model redshift-space distortions with the Kaiser factor and a phenomenological Lorentzian suppression to model the fingers of God (following e.g., Ref.
[36]) with characteristic scale \[\sigma_{\rm FoG}=\frac{4\pi}{3}\int\frac{\mathrm{d}k}{(2\pi)^{3}}P_{m}^{\rm lin }(k)\,, \tag{23}\] where \(P_{m}^{\rm lin}\) is the linear matter power spectrum in real space. Still, the PDF is a continuous quantity which cannot be directly measured from the observations. Instead, the VID \(\mathcal{B}\), a histogram of the measured brightness temperature in each voxel normalized by the total number of voxels, can be used as an estimator of the PDF. The relation between the VID and the PDF of the signal (i.e., in the absence of experimental thermal noise) is4 Footnote 4: Previous studies do not normalize the VID by the number of voxels observed, and therefore include a factor \(N_{\rm vox}\) multiplying the integral in Eq. (24). We prefer to normalize the VID to deal with an intensive quantity, the value of which does not depend on the size of the survey. This approach allows for a more intuitive understanding of the VID values, as well as easier comparisons between experiments. \[\mathcal{B}^{\rm(s)}(\Delta T_{i})=\int_{\Delta T_{i}}\mathrm{d}\Delta T \mathcal{P}(\Delta T)\,, \tag{24}\] where the integral is limited to the temperature interval centered on \(\Delta T_{i}\). Nonetheless, there is no perfect experiment without noise, and the total VID \(\mathcal{B}\) is connected to the total PDF \(\mathcal{P}_{\rm tot}(\Delta T)=(\mathcal{P}*\mathcal{P}_{\rm noise})(\Delta T)\) through Eq. (24), where \(\mathcal{P}_{\rm noise}\) is the instrumental noise PDF and '*' denotes the convolution operator. ## III Effects from extended emission The derivation above is general enough to account for any three-dimensional signal profile and any source that can be embedded in the halo model formalism. Consider a series of independent effects causing extended signal, each corresponding to a three-dimensional profile (or window function) \(W_{i}\) normalized to unity.
Then, the final extended observed profile is given by the convolution of all of them: \[\varrho(\mathbf{x})=\left(W_{1}*W_{2}*\cdots*W_{n}\right)(\mathbf{x})\,. \tag{25}\] As an example, we limit our study to emission coming from halos and to two of the most prevalent causes of extended profiles: experimental resolution and line broadening. The former relates to the fact that any experiment has limited resolution, which prevents access to small-scale information and smears the observed maps; the latter is a physical effect caused by the Doppler broadening of the emission line due to the peculiar velocities of the gas where the emission originates. ### Limited resolution and line broadening The small-scale information in line-intensity maps is limited by the angular and spectral resolutions of the experiment. The exact characterization of the beam profiles and the line-spread function depends on the specific experiment. Here, we will consider a Gaussian beam with full-width half maximum \(\theta_{\mathrm{FWHM}}\) and a Gaussian line-spread function with standard deviation given by the channel width \(\delta\nu\) of the experiment. The standard deviations of the Gaussian window functions for these resolutions correspond to comoving distance scales transverse to and along the line of sight given by \[\sigma_{\perp}=D_{\mathrm{M}}(z)\frac{\theta_{\mathrm{FWHM}}}{\sqrt{8\log 2}} \,,\qquad\sigma_{\parallel}=\frac{c\delta\nu(1+z)}{H(z)\nu_{\mathrm{obs}}}\,, \tag{26}\] respectively, where \(D_{\mathrm{M}}\) is the comoving angular diameter distance, and \(\nu_{\mathrm{obs}}\) is the observed frequency.
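For concreteness, the resolution scales of Eq. (26) can be evaluated for the CO(1-0) setup considered later in this work (\(\nu_{\rm obs}=28.8\) GHz at \(z=3\), a 3 arcmin beam, and 15 MHz channels). The sketch below assumes Planck-like flat \(\Lambda\)CDM parameters (an assumption for illustration) and computes the comoving distance by direct integration:

```python
import numpy as np

# Assumed flat LambdaCDM parameters (Planck-like, illustrative); c in km/s
c_kms, H0, Om = 299792.458, 67.7, 0.31

def H(z):
    return H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

def D_M(z, n=100_001):
    # comoving distance via trapezoid rule; flat universe, so D_M = D_C [Mpc]
    zs = np.linspace(0.0, z, n)
    dz = zs[1] - zs[0]
    integ = c_kms / H(zs)
    return dz * (integ.sum() - 0.5 * (integ[0] + integ[-1]))

# CO(1-0) setup used in the text: z = 3, 3 arcmin beam, 15 MHz channels
z, nu_obs, dnu = 3.0, 28.8e9, 15e6
theta_fwhm = 3.0 * np.pi / (180 * 60)                        # 3 arcmin in rad

sigma_perp = D_M(z) * theta_fwhm / np.sqrt(8 * np.log(2))    # Eq. (26), Mpc
sigma_par = c_kms * dnu * (1 + z) / (H(z) * nu_obs)          # Eq. (26), Mpc
```

With these assumed parameters, both scales come out comparable, around 2 to 2.5 comoving Mpc, so the voxelization discussed above can sensibly match both resolutions.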
The window function \(W_{\mathrm{res}}=W_{\mathrm{res}}^{\perp}W_{\mathrm{res}}^{\parallel}\) for the resolution is composed of \[\begin{split} W_{\mathrm{res}}^{\perp}(\mathbf{x}_{\perp})& =\frac{\exp\left\{-(x_{\perp,1}^{2}+x_{\perp,2}^{2})/2\sigma_{ \perp}^{2}\right\}}{2\pi\sigma_{\perp}^{2}}\,,\\ W_{\mathrm{res}}^{\parallel}(x_{\parallel})&=\frac{ \exp\left\{-x_{\parallel}^{2}/2\sigma_{\parallel}^{2}\right\}}{\sqrt{2\pi} \sigma_{\parallel}}\,,\end{split} \tag{27}\] where the subscripts '1' and '2' denote the two axes transverse to the line of sight. We assume that the bulk of the line broadening is due to the rotation of the gas in the halo, which in turn only depends on the halo mass. Let us consider that the broadening follows a Gaussian profile with full-width half maximum determined by the rotation velocity \(v(M)\) in units of physical velocity. The standard deviation of the Gaussian profile in comoving space is \[\sigma_{v}(M)=\frac{v(M)(1+z)}{2\sqrt{2\log(2)}H(z)}\,. \tag{28}\] Line broadening effectively reduces the map resolution along the line of sight. Therefore, the window function capturing its effect is the same \(W_{\mathrm{res}}^{\parallel}\) from Eq. (27), but using \(\sigma_{v}\) instead of \(\sigma_{\parallel}\). However, a strictly Gaussian profile would consider that all galaxies have the same angle of inclination with respect to the line of sight. A much more reasonable assumption involves randomly oriented emitters, as considered in Ref. [62]. The actual standard deviation for a galaxy with an inclination angle \(\varphi\) is \(2\sigma_{v}\sin\varphi/\sqrt{3}\), assuming that the previously calculated \(\sigma_{v}\) corresponds to galaxies with the median inclination over random orientations.
The average profile can be obtained marginalizing over the inclination of the galaxy, which follows a uniform distribution on \(\cos\varphi\equiv\mu_{\varphi}\): \[\begin{split} W_{\mathrm{broad}}(x_{\parallel},M)=\\ =&\int_{-1}^{1}\mathrm{d}\mu_{\varphi}\frac{\exp \left\{-3x_{\parallel}^{2}/8\sigma_{v}^{2}(M)(1-\mu_{\varphi}^{2})\right\}}{ 4\sigma_{v}(M)\sqrt{2\pi(1-\mu_{\varphi}^{2})/3}}=\\ =&\frac{\sqrt{3\pi/2}}{4\sigma_{v}}\mathrm{Erfc} \left(\sqrt{3x_{\parallel}^{2}/8\sigma_{v}^{2}}\right)\,,\end{split} \tag{29}\] where \(\mathrm{Erfc}\) is the complementary error function. If we assume that these are the only contributions to extended signal, applying Eq. (25) we find \[\begin{split}\varrho(\mathbf{x},M)&=W_{\mathrm{res}}( \mathbf{x})*W_{\mathrm{broad}}(x_{\parallel},M)=\\ &=W_{\mathrm{res}}^{\perp}(\mathbf{x}_{\perp})\left(W_{\mathrm{res}}^{ \parallel}(x_{\parallel})*W_{\mathrm{broad}}(x_{\parallel},M)\right)\,.\end{split} \tag{30}\] If we were to ignore the effect of random inclinations for the line broadening, \(\varrho^{\parallel}\) would be a Gaussian with standard deviation \(\sqrt{\sigma_{\parallel}^{2}+\sigma_{v}^{2}(M)}\). However, we have not found an analytic expression for the convolution of a Gaussian and Eq. (29). Numerically, it can be quickly computed using the convolution theorem. For reference, the Fourier transform of Eq. (29) is \[\tilde{W}_{\mathrm{broad}}(k_{\parallel})=\left(\sqrt{\frac{2}{3}}k_{ \parallel}\sigma_{v}\right)^{-1}F\left(\sqrt{\frac{2}{3}}k_{\parallel}\sigma_{ v}\right)\,, \tag{31}\] where \(F\) is the Dawson integral. ### Experimental case and astrophysical model By design, the VID depends not only on the intrinsic signal but also on the experimental setup of the LIM survey, e.g., through the noise, the resolution, the voxel volume, etc. For instance, if the spectral resolution is bad enough, the effect of line broadening will be negligible. 
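The closed form in Eq. (29) can be verified numerically. The sketch below performs the inclination marginalization after substituting \(\mu_{\varphi}=\cos\varphi\), which removes the integrable endpoint singularity, and compares with the complementary-error-function expression (\(\sigma_{v}=1\) is an arbitrary test value, not from the text):

```python
import numpy as np
from math import erfc, sqrt, pi

sigma_v = 1.0   # arbitrary test scale

def W_broad_closed(x):
    # closed form of Eq. (29)
    return sqrt(3 * pi / 2) / (4 * sigma_v) * erfc(sqrt(3 * x**2 / (8 * sigma_v**2)))

def W_broad_numeric(x, n=200_001):
    # marginalization over inclination: with mu = cos(phi) the 1/sqrt(1-mu^2)
    # factor cancels and the integrand exp{-3x^2/(8 sigma_v^2 sin^2 phi)} is smooth
    phi = np.linspace(1e-9, pi - 1e-9, n)
    dphi = phi[1] - phi[0]
    integrand = np.exp(-3 * x**2 / (8 * sigma_v**2 * np.sin(phi) ** 2))
    integral = dphi * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return integral / (4 * sigma_v * sqrt(2 * pi / 3))

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(W_broad_numeric(x) - W_broad_closed(x)) < 1e-6
```

The identity behind this check is \(\int_{0}^{\pi}e^{-a/\sin^{2}\varphi}\,\mathrm{d}\varphi=\pi\,\mathrm{Erfc}(\sqrt{a})\), which yields the prefactor \(\sqrt{3\pi/2}/4\sigma_{v}\) quoted in Eq. (29).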
We therefore focus on a specific experimental case, from which the qualitative effects of extended signal on the VID can be generalized. To enhance the effect of line broadening, we choose a setup with good experimental resolution. We consider the spectral line CO(1-0) observed at \(\nu_{\rm obs}=28.8\) GHz (\(z=3\)), with \(\theta_{\rm FWHM}=3\) arcmin and \(\delta\nu=15\) MHz, and a noise-per-voxel standard deviation of \(\sigma_{\rm N}=2.5\,\mu\rm K\). This ensures a non-negligible line-broadening effect and that the VID is dominated by noise only at small brightness temperatures. Since we are interested in the VID prediction, we assume a good understanding of the foregrounds and a well-calibrated sky subtraction. The optimal pixel size for a projected angular map, balancing the minimization of correlations between intensity bins against the number of pixels needed to reduce the statistical uncertainties, corresponds to \(\theta_{\rm FWHM}\)[63]. We find the same to be true for the direction along the line of sight. Therefore, we choose our voxel size in the directions transverse to and along the line of sight as those corresponding to \(\theta_{\rm FWHM}\) and \(\delta\nu^{\rm FWHM}\equiv\delta\nu\sqrt{8\log 2}\), respectively. We model the mean relationship between the total line luminosity \(L_{s,\rm tot}\) and the halo mass using the fiducial COMAP model [64], \[\frac{L_{\rm CO}}{L_{\odot}}(M)=4.9\times 10^{-5}\frac{C}{\left(M/M_{\star} \right)^{A}+\left(M/M_{\star}\right)^{B}}\,, \tag{32}\] with \(A=-2.85\), \(B=-0.42\), \(C=10^{10.63}\), and \(M_{\star}=10^{12.3}\,M_{\odot}\), obtained from a fit to results from Universe Machine [65], COLDz [66] and COPSS [22]. We apply a mean-preserving scatter \(\sigma_{\rm scat}=0.42\) dex in the expression above, introduced in practice in Eq. (19). We assume the halo mass function and mass-dependent linear halo bias from Ref. [67] and Ref. [68], respectively.
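A direct implementation of the fiducial \(L_{\rm CO}(M)\) relation of Eq. (32) with the quoted fit parameters (a sketch; units are \(L_{\odot}\) and \(M_{\odot}\) as in the text, and the masses used in the checks are arbitrary test values):

```python
# Fiducial COMAP double power law, Eq. (32), with the fit parameters from the text
A, B, C, M_star = -2.85, -0.42, 10**10.63, 10**12.3

def L_CO(M):
    # mean CO(1-0) luminosity [L_sun] for a halo of mass M [M_sun]
    x = M / M_star
    return 4.9e-5 * C / (x**A + x**B)

# with both exponents negative, the relation rises steeply below M_star
# (as ~M^2.85) and flattens to ~M^0.42 above it
assert L_CO(1e11) < L_CO(1e12) < L_CO(1e13)
assert 1e5 < L_CO(M_star) < 1e7   # at M_star the relation gives ~1e6 L_sun
```

Note that this is only the mean relation; the lognormal scatter of 0.42 dex enters separately through Eq. (19).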
Probably the best estimation of the gas dynamics in a halo comes from the maximum circular velocity calculated by a halo finder in a simulation. However, there is no established relationship between this velocity and the halo mass. We choose to use the virial velocity as a good approximation, given by \[v(M)=50\,{\rm km/s}\left(\frac{M}{10^{10}\,M_{\odot}}\right)^{1/3}\,, \tag{33}\] to compute the line width with Eq. (28). We refer the interested reader to Ref. [62] for a comprehensive discussion on line broadening, covering the estimation of \(v(M)\) from observations, and the modeling of the line broadening in simulations and in the prediction of the power spectrum multipoles. We show the resulting observed profiles in the top panel of Fig. 1. We include results for a point source, accounting only for the limited resolution of the experiment, and including line broadening with and without marginalizing over the inclination of the emitter. Limited resolution spreads the signal over three voxels,5 but line broadening results in an even more extended profile for the most massive halos. Marginalizing over random inclinations results in a slightly more peaked profile at the center, but similar tails. Footnote 5: Note that for the more standard voxel size corresponding to \(\delta\nu\) instead of \(\delta\nu^{\rm FWHM}\) the signal extends over 5-7 voxels. For our example, only halos with masses \(\gtrsim 4\times 10^{12}\,M_{\odot}\) have a line broadening larger than the line-spread function. However, as can be seen in the figure, accounting for the two effects together already results in wider profiles for lighter halos. This, of course, depends on the voxel size and the specific \(v(M)\) relation considered. We can quantify the relative contribution to the intensity map from the emitters that extend beyond the resolution limits with the luminosity-weighted distribution of \(\sigma_{v}\) (bottom panel of Fig. 1).
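Combining Eqs. (28) and (33) with the channel width of the example survey, one can check the quoted crossover mass of roughly \(4\times 10^{12}\,M_{\odot}\) above which line broadening exceeds the line-spread function. The sketch below assumes Planck-like flat \(\Lambda\)CDM parameters (an illustrative assumption, not the paper's pipeline):

```python
import numpy as np

# Assumed Planck-like flat LambdaCDM (illustrative); c in km/s
c_kms, H0, Om, z = 299792.458, 67.7, 0.31, 3.0
Hz = H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)   # H(z=3) in km/s/Mpc

def v_halo(M):
    # virial-velocity approximation, Eq. (33), in km/s (M in M_sun)
    return 50.0 * (M / 1e10) ** (1.0 / 3.0)

def sigma_v(M):
    # comoving line-broadening scale, Eq. (28), in Mpc
    return v_halo(M) * (1 + z) / (2 * np.sqrt(2 * np.log(2)) * Hz)

# line-spread-function scale for 15 MHz channels at nu_obs = 28.8 GHz, Eq. (26)
sigma_par = c_kms * (15e6 / 28.8e9) * (1 + z) / Hz

# broadening overtakes the line-spread function near ~4e12 M_sun, as quoted
assert sigma_v(3e12) < sigma_par < sigma_v(6e12)
```

Both scales evaluate to about 2 comoving Mpc near the crossover, consistent with the resolution scales of the example survey.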
The contribution of halos with emission lines broader than the line-spread function is small: although they are the brightest emitters, their abundance is very low. We therefore expect a small effect on the VID with respect to considering only the experimental resolution. Finally, the random inclination of the emitters broadens the distribution of \(\sigma_{v}\), but has a similar relative contribution to the line-intensity map (as seen with the cumulative distribution function). We show the effects of the extended profiles on the PDF in Fig. 2, considering the same cases as in Fig. 1. We show the VID of the signal alone and including the noise, using 90 linearly spaced bins of 0.5 \(\mu\)K width in the interval \(T\in[-5,40]\,\mu\)K, as well as the VID corresponding to noise only. The effects induce a very significant change in the VID, following the intuition introduced at the beginning of the section. The extended emission narrows and smooths the VID, resulting in a lower tail towards higher brightness temperatures and a broader, lower peak at low brightness temperatures. Adding fractional contributions of several emitters to voxels which would otherwise be empty results in fewer very faint voxels. At the same time, distributing the signal of very bright voxels over their adjacent voxels implies that they are no longer as bright. As one could expect from the results shown in Fig. 1, the impact of including the line broadening on top of the experimental resolution is small and limited to the height of the tail at high temperatures. Marginalizing over the galaxy inclination has a negligible effect for this example, but it may have some impact for a different setup. 
Note that, although the VID at high brightness temperatures is significantly smaller after considering extended emission, we can expect better sensitivity, since there is a larger deviation from the noise-only signal at low brightness temperatures, where the statistical uncertainties are smaller. Figure 1 is a clear example that extended profiles must be taken into account when computing the LIM PDF. The redistribution of signal to different voxels due to the extended profiles has another consequence for the interpretation of the VID. While for point sources the identification of bright voxels with more massive halos or brighter emitters is already non trivial due to the unknown number of emitters in each voxel, the redistribution of signal further smears this link. This results in a very broad distribution of halo masses without large differences between bright or faint voxels, as found in Ref. [69]. We show the dependence of the VID on clustering in Fig. 3. Higher matter clustering implies a higher clustering of galaxies and subsequently a larger variance in the brightness temperature fluctuations. Therefore, it increases the high-temperature tail of the VID, compensating at temperatures slightly above zero, as shown in the figure and in the inset, respectively. Figure 2: VID of the signal (top) and including the contribution from the instrumental noise (bottom) for our example. We show the VID for point sources in red, including the effects from limited resolution in blue, adding line broadening for edge-on galaxies in magenta, and marginalizing also over their inclination in orange. The inset in the bottom panel zooms in on the deviation over the noise-alone VID (black-dotted). Since this effect can be degenerate with the \(L(M)\) relation, analyses must marginalize over \(\sigma_{\rm vox}^{2}\) to avoid biased results. Perhaps surprisingly, line broadening has a smaller impact on the VID than on the power spectrum multipoles (see Ref. [62]). 
This is because line broadening is more severe for brighter, more massive, but much less abundant halos. While they dominate the contributions to the shot noise of the power spectrum (which depends on the second moment of the luminosity function) and to the linear power spectrum (since their fluctuations are more biased), they barely contribute to the overall luminosity (see Fig. 1). On top of that, the impact of the bias is smaller and more evenly distributed over all masses in the VID (see Eq. (17)). Furthermore, the quadrupole of the power spectrum traces the anisotropy of the clustering, hence it can be strongly affected by line broadening, while the VID is not sensitive to anisotropies. ## IV Validation with simulations In this section we compare our predictions with simulations to test the accuracy of the modeling presented in Sec. II. We assume the astrophysical model and experimental setup described in Sec. III.2. We employ the publicly available code Simple6[70], based on lognormal galaxy distribution simulations7[71], to generate the lognormal LIM simulations. We consider the luminosity-weighted average linear halo bias (\(b=4.07\)) corresponding to our model. We generate line-intensity maps for a volume of \(0.158\,(\mathrm{Gpc}/h)^{3}\) at a single redshift \(z=3\), which corresponds to \(150^{3}\) cuboid-like voxels, with dimensions defined by the full width at half maximum of the telescope beam and the channel width. We run the simulations with a mesh three times finer (i.e., \(450^{3}\) cells) in order to accurately capture the smoothing and the extended emission. 
We assign the intensity emitted by each galaxy to the mesh using the 'nearest grid point' routine, apply as many filters as needed depending on the case to smooth the simulated line-intensity map, and then downsample to the voxel-size resolution using the same routine.8 Footnote 6: [https://github.com/mlujnie/simple](https://github.com/mlujnie/simple) Footnote 7: [https://bitbucket.org/komatsu5147/lognormal_galaxies/src/master/](https://bitbucket.org/komatsu5147/lognormal_galaxies/src/master/) Footnote 8: While ‘nearest grid point’ introduces aliasing and ringing for power spectrum measurements (see Ref. [72] for a thorough study), we have tested that higher-order mass-assignment kernels such as ‘clouds in cell’ introduce spurious effects in the VID, even after compensating their effect on the map by deconvolving the kernel. The finer grid also prevents resolution-dependent artifacts related to the Fourier transform in the final map. For each realization, we consider four different cases: no smoothing applied (i.e., point sources); modeling the experimental resolution with an anisotropic Gaussian filter with standard deviations \(\sigma_{\parallel}=1.37\,\mathrm{Mpc}/h\) and \(\sigma_{\perp}=1.62\,\mathrm{Mpc}/h\) along and transverse to the line of sight, respectively; and including on top of it the line broadening with and without accounting for random inclinations. We follow Ref. [62] to implement line broadening in the simulations. We first compute the line width \(\sigma_{v}\) for each galaxy. To do so, we modify Simple to first assign a mass to each galaxy following the halo mass function. We then compute the line luminosity and line width associated with each mass using Eq. (32) with a mean-preserving logarithmic scatter \(\sigma_{\rm scat}\), and Eqs. (33) and (28), respectively. 
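The mesh pipeline described above can be sketched as follows. This toy version is our own (arbitrary grid sizes and filter widths, and block averaging in place of the nearest-grid-point downsampling used in the text); it illustrates depositing emitters on a fine mesh, applying an anisotropic Gaussian filter, and downsampling by a factor of three to voxel resolution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy fine mesh: 3x finer than a 15^3 voxel grid (the text uses 450^3 vs 150^3)
fine = np.zeros((45, 45, 45))

# Deposit a few mock galaxies with 'nearest grid point' assignment
for _ in range(200):
    i, j, k = rng.integers(0, 45, size=3)
    fine[i, j, k] += rng.lognormal()

# Anisotropic Gaussian smoothing; sigmas are in fine-cell units (illustrative values)
smoothed = gaussian_filter(fine, sigma=(2.0, 1.5, 1.5), mode="wrap")

# Downsample 3x3x3 fine cells per voxel by averaging (mean-preserving)
coarse = smoothed.reshape(15, 3, 15, 3, 15, 3).mean(axis=(1, 3, 5))
```

With periodic boundaries the normalized Gaussian kernel preserves the map mean while redistributing signal from bright cells into their neighbors, which is exactly the effect discussed for the VID.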
Random galaxy inclinations are accounted for by multiplying the edge-on \(\sigma_{v}\) by a factor \(2\sqrt{(1-\mu_{\varphi}^{2})/3}\), where \(\mu_{\varphi}\) is sampled from a uniform distribution within \([-1,1]\). We bin galaxies in the simulation by \(\sigma_{v}\) and compute an intensity-weighted average \(\bar{\sigma}_{v}^{i}\) for each bin. Afterwards, we add the intensity from the galaxies within each bin to empty meshes and apply a Gaussian filter with \(\sigma_{\perp}=0\) and \(\sigma_{\parallel}=\bar{\sigma}_{v}^{i}\). We add all meshes and apply the resolution filter described above. In all cases, we remove the mean from the maps, measure the VID for \(\Delta T\), and compute their average \[\bar{\mathcal{B}}_{a}=\frac{1}{N_{\rm r}}\sum_{i}^{N_{\rm r}}\mathcal{B}_{a}^{( i)} \tag{34}\] and covariance \[\mathcal{C}_{ab}=\frac{1}{N_{\mathrm{r}}}\sum_{i}^{N_{\mathrm{r}}}\left\{\left( \mathcal{B}_{a}^{(i)}-\bar{\mathcal{B}}_{a}\right)\left(\mathcal{B}_{b}^{(i)}- \bar{\mathcal{B}}_{b}\right)\right\} \tag{35}\] for the \(\Delta T_{a}\) and \(\Delta T_{b}\) bins from \(N_{\mathrm{r}}=1008\) independent realizations. Note that we have defined \(\mathcal{B}_{a}\equiv\mathcal{B}(\Delta T_{a})\). Figure 3: VID including the contribution from the instrumental noise for our example, accounting for experimental resolution and the line broadening for randomly oriented galaxies. We show the prediction for several values of \(\sigma_{\rm vox}^{2}\) with different colors, as indicated in the legend, and the noise-only VID with a black-dotted line. The inset zooms in on the deviation over the noise-alone VID. By definition, the galaxy bias in the lognormal simulations used in Simple is linear and the same for all galaxies. We therefore assume \(b(M)=4.07\) for the comparison between the theoretical prediction and the measured \(\bar{\mathcal{B}}\) from the lognormal simulations. 
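Equations (34) and (35) amount to a standard sample mean and covariance over realizations. A minimal sketch (our own helper, using the same \(1/N_{\rm r}\) normalization as Eq. (35) and a toy set of Gaussian "noise" realizations):

```python
import numpy as np

def vid_mean_cov(maps, bin_edges):
    """Average VID, Eq. (34), and covariance, Eq. (35), over N_r realizations.

    maps: array of shape (N_r, N_vox) of mean-subtracted temperatures
    bin_edges: brightness-temperature bin edges
    """
    # Per-realization VID, normalized to a probability per bin
    B = np.array([np.histogram(m, bins=bin_edges)[0] / m.size for m in maps])
    B_mean = B.mean(axis=0)            # Eq. (34)
    d = B - B_mean
    C = d.T @ d / len(maps)            # Eq. (35)
    return B_mean, C

rng = np.random.default_rng(1)
maps = rng.normal(size=(200, 4096))    # toy realizations; the text uses 1008 maps
B_mean, C = vid_mean_cov(maps, np.linspace(-4, 4, 17))
```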
Note that for this validation we use \(\sigma_{\mathrm{vox}}^{2}=0.24\) computed from Eq. (22) using also Eq. (23), instead of fitting it to the measurements (see Fig. 3 for the effect of \(\sigma_{\mathrm{vox}}^{2}\) on the VID). We compare theoretical predictions and measurements from the simulations in Fig. 4, including \(\sqrt{\mathcal{C}_{aa}}\) as a reference for the uncertainties. The model provides a good description of the mean measurements from the lognormal realizations, successfully capturing the effects of the smoothing. There is a small absolute deviation between the prediction and the simulations at intermediate brightness temperatures, present in all four cases. We show the relative difference between the theoretical prediction and the measurement from simulations in Fig. 5. Regarding the clustering, we consider the lognormal simulations introduced above, but also Poisson-sampled realizations without any clustering of emitters; regarding the intensity profiles, we consider point sources and extended profiles with effects from resolution and line broadening, marginalizing over the inclination. The mean and covariance for the Poisson realizations are obtained from 100 independent realizations. Here we can appreciate that, although our model qualitatively captures the effects of extended profiles, it does not provide an accurate prediction of the VID. The inaccuracy is most severe at intermediate brightness temperatures, where the relative deviation is maximum and the covariance is smaller. We attribute this deviation to two potential sources of error. First, we can see that even for point sources the clustering is not properly captured. This is because our modeling of the clustering is limited. 
Truncating the moment-generating function at second order removes any impact of non-Gaussianity of the halo distribution in the PDF.9 This could be captured including higher terms in the expansion of the cumulants of clustering in Eq. (15) (and subsequently in Eq. (17)). Note that neglecting nonlinear bias is most likely a bigger issue, but our lognormal simulations assume linear bias by definition. Second, there is a small deviation introduced by the extended profiles even in the absence of clustering. This may be due to limitations of our model or to potential numerical artifacts in the implementation of the line broadening in the simulated line-intensity maps. While the impact of artifacts related to resolution and filters on the power spectrum has been thoroughly investigated, there are significantly fewer studies of their effects on the histogram of the map. Figure 4: Mean measured VID from 1008 independent lognormal realizations including instrumental noise (solid step lines) and the prediction using the formalism described in this work (dotted lines). Shaded regions correspond to the square root of the diagonal of the covariance. We show results for point sources in red, including the effects from resolution in blue, adding line broadening in purple and marginalizing over inclinations in orange. Figure 5: Relative deviation between the theoretical prediction and the mean from Poisson realizations and lognormal simulations, considering point sources and extended profiles as indicated by the titles of the panels. For the cases with clustering (lognormal), we vary \(\sigma_{\mathrm{vox}}^{2}\), highlighting the fiducial value used throughout the rest of the study in magenta. Shaded gray areas show the relative uncertainties related to the diagonal of the covariance matrix. Note the change of scale in the \(y\)-axis between the two top and the two bottom panels. 
Furthermore, these deviations may not depend trivially on the experimental setup and on the ratio between \(\sigma_{\parallel}\) and \(\sigma_{v}\). We leave a dedicated, deeper study of these numerical artifacts for future work. Footnote 9: Lognormal simulations exhibit a higher degree of non-Gaussianity in the galaxy distribution than N-body simulations (see e.g., [73]). Therefore, deviations from the measurements in the lognormal realizations related to the truncation of the moment expansion are expected to be larger than in a more realistic case. Finally, we study the impact of the extended profiles on the covariance matrix of the VID. An analytic estimation of the covariance can be obtained from the 2-point PDF, following the derivation described in Ref. [53] and adapting it to LIM. However, the covariance of the VID is usually assumed to be that of a multinomial distribution, since the VID is a histogram. A multinomial distribution has the covariance \[\mathcal{C}^{\text{multinom.}}_{ab}=\begin{cases}\frac{\mathcal{B}_{a}}{N_{ \text{vox}}}(1-\mathcal{B}_{a})&\text{if }a=b,\\ -\frac{\mathcal{B}_{a}\mathcal{B}_{b}}{N_{\text{vox}}}&\text{if }a\neq b, \end{cases} \tag{36}\] so that its correlation matrix, defined in general as \(\mathcal{R}_{ab}\equiv\mathcal{C}_{ab}/\sqrt{\mathcal{C}_{aa}\mathcal{C}_{bb}}\), has off-diagonal terms given by \(\mathcal{R}^{\text{multinom.}}_{ab}=-\sqrt{\mathcal{B}_{a}\mathcal{B}_{b}/[(1- \mathcal{B}_{a})(1-\mathcal{B}_{b})]}\). The diagonal of the numerical covariance computed using Eq. (35) agrees very well with the multinomial variance for all cases, but the off-diagonal terms do not. We show the numerical correlation matrices computed from the lognormal realizations for the four cases in Fig. 6. We do not find the strong negative correlations expected for the multinomial distribution. 
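For reference, the multinomial covariance of Eq. (36) and its correlation matrix can be built directly from a binned VID; a minimal sketch (helper names are ours):

```python
import numpy as np

def multinomial_cov(B, n_vox):
    """Covariance of the VID under the multinomial assumption, Eq. (36)."""
    C = -np.outer(B, B) / n_vox
    np.fill_diagonal(C, B * (1.0 - B) / n_vox)
    return C

def correlation(C):
    """R_ab = C_ab / sqrt(C_aa * C_bb)."""
    s = np.sqrt(np.diag(C))
    return C / np.outer(s, s)

B = np.array([0.2, 0.3, 0.5])                 # toy 3-bin VID (probabilities)
R = correlation(multinomial_cov(B, 1000))
```

The off-diagonal entries of `R` reproduce the closed form \(-\sqrt{\mathcal{B}_{a}\mathcal{B}_{b}/[(1-\mathcal{B}_{a})(1-\mathcal{B}_{b})]}\) quoted above, and are always negative, unlike the measured correlations.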
On the contrary, we find negligible correlation for point sources except for \(\Delta T\lesssim 5\,\mu\text{K}\), which is dominated by the instrumental noise. Once emitters are allowed to contribute to more than one voxel, the off-diagonal correlation grows to \(\lesssim 0.1\); it is slightly higher once line broadening is considered. The deviation from the multinomial off-diagonal covariance is a physical effect: sources contributing to several voxels induce correlations beyond those of the multinomial distribution. Note that the correlation would be larger for smaller voxels, as demonstrated in Ref. [63]. The covariance measured from the simulations does not include the supersample contributions. These contributions, derived for a previous formalism of the VID in Ref. [41], have been found to be generally smaller than the statistical covariance from Poisson sampling of a histogram. Similarly, the off-diagonal physical covariance is also expected to be larger than the supersample covariance. ## V Parameter-inference bias We have shown that the VID accounting for extended emission is very different from the standard computation for point sources. Therefore, an analysis assuming point sources is prone to highly biased results. We consider an analysis of the VID with the same experimental setup and model as in previous sections. We focus on the \(L(M)\) relation and take \(C\), \(A\), \(B\) and \(M_{\star}\) as free parameters (see Eq. (32)). We also include \(\sigma^{2}_{\text{vox}}\) as a nuisance parameter. Assuming Gaussian errors, the logarithm of the likelihood for the VID at position \(\mathbf{p}\) in the parameter space is \[-2\log\mathcal{L}(\mathbf{p})=\sum_{ab}\Delta\mathcal{B}_{a}(\mathbf{p})\left( \mathcal{C}^{-1}\right)_{ab}\Delta\mathcal{B}_{b}(\mathbf{p})\,, \tag{37}\] Figure 6: Correlation matrices computed numerically from 1008 lognormal realizations. We consider four cases, as indicated by the titles at the top of each panel. 
where \(\Delta\mathcal{B}_{a}\) is the difference between the measured VID and the prediction. We use the covariance matrix numerically estimated in Sec. IV. However, the inverse of a covariance matrix estimated from a finite number of realizations is biased; we apply the correction factor described in Ref. [74] to the inverse of the covariance matrix. In our case, the correction amounts to a \(\sim 10\%\) factor. Let us assume, for this exercise, that the data is perfectly described by our modeling of the VID including the extended signal caused by experimental resolution and line broadening for random galaxy inclinations, for a set of _true_ parameter values \(\mathbf{p}_{\rm fid}\). An analysis with this model would then return best-fit parameter values \(\mathbf{p}_{\rm bf}^{\rm sm}=\mathbf{p}_{\rm fid}\) and \(\Delta\mathcal{B}_{a}(\mathbf{p}_{\rm fid})=0\). For an incorrect model (assuming point sources in our case), the inferred best fit \(\mathbf{p}_{\rm bf}^{\rm ps}\) will be biased. This bias can be estimated using a Fisher-matrix analysis, through a linear approximation of the likelihood. This is not a good approximation in our case, since we expect large deviations in \(\mathcal{B}\) for the incorrect model (see Fig. 2). However, this exercise provides a qualitative estimate of the impact on parameter inference. We denote the predictions for the two models with \(\mathcal{B}^{\rm sm}\) and \(\mathcal{B}^{\rm ps}\), respectively, standing for 'smoothed' and 'point sources'. 
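Putting Eq. (37) together with the finite-realization correction: the sketch below uses our own function names, and assumes the correction of Ref. [74] is the widely used Hartlap-type debiasing factor (the reference is not spelled out here, so this form is an assumption). For the paper's \(N_{\rm r}=1008\) realizations and 90 temperature bins it gives a correction of about 9%, consistent with the \(\sim 10\%\) quoted.

```python
import numpy as np

def minus_two_log_like(B_model, B_data, C, n_real):
    """-2 log L of Eq. (37) with a debiased inverse covariance.

    The inverse of a covariance estimated from n_real realizations is rescaled by
    alpha = (n_real - n_bins - 2) / (n_real - 1)  (assumed form of Ref. [74]).
    """
    n_bins = len(B_data)
    alpha = (n_real - n_bins - 2) / (n_real - 1)
    C_inv = alpha * np.linalg.inv(C)
    d = B_data - B_model
    return float(d @ C_inv @ d)

# Size of the correction for the numbers quoted in the text
alpha_vid = (1008 - 90 - 2) / (1008 - 1)
```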
The estimated systematic bias introduced by the incorrect modeling is [75] \[\Delta\mathbf{p}_{\rm syst}\equiv\mathbf{p}_{\rm bf}^{\rm ps}-\mathbf{p}_{\rm fid }=\] \[=F_{\rm ps}^{-1}\sum_{ab}\mathbf{\nabla}_{p}\mathcal{B}_{a}^{\rm ps} |_{\mathbf{p}_{\rm fid}}\left(\mathcal{C}_{\rm ps}^{-1}\right)_{ab}\left[ \mathcal{B}_{b}^{\rm sm}(\mathbf{p}_{\rm fid})-\mathcal{B}_{b}^{\rm ps}(\mathbf{p}_{ \rm fid})\right]\,, \tag{38}\] where \[F=\sum_{ab}\mathbf{\nabla}_{p}\mathcal{B}_{a}|_{\mathbf{p}_{\rm fid}}\left(\mathcal{C} ^{-1}\right)_{ab}(\mathbf{\nabla}_{p}\mathcal{B}_{b}|_{\mathbf{p}_{\rm fid}})^{\rm T} \tag{39}\] is the Fisher matrix computed for the case of interest, "T" stands for the transpose operator, and \(\mathbf{\nabla}_{p}\mathcal{B}_{a}^{\rm ps}|_{\mathbf{p}_{\rm fid}}\) is the gradient in parameter space of the VID evaluated at \(\mathbf{p}_{\rm fid}\). We show marginalized forecasted uncertainties for the two models, including the estimate of the systematic bias, in Fig. 7. All parameters present large systematic biases, except for \(\log_{10}M_{\star}\), for which the marginalized 1-dimensional bias is \(\sim 1.8\sigma\). In addition, note that the constraining power and the degeneracies obtained if point sources are assumed are also different. Incomplete modeling may affect the conclusions obtained from a Fisher forecast, which motivates the use of accurate models even for qualitative sensitivity forecasts [76]. ## VI Conclusions Line-intensity mapping proposes an alternative tracer for the large-scale structure of the Universe, providing unprecedented sensitivity to high redshifts and faint emitters. Contrary to other tracers, LIM is also sensitive to astrophysical processes driving galaxy formation and evolution. This additional dependence adds another layer of complexity to the line-intensity fluctuation maps, which become very non-Gaussian. Therefore, LIM power spectra can only capture a fraction of the information encoded in line-intensity maps. 
Furthermore, the power spectrum is affected by degeneracies between cosmology and astrophysics. These are the main motivations to explore one-point statistics, since they probe the whole distribution of intensity fluctuations and are more sensitive to features in the line-luminosity function. So far, the modeling of LIM one-point statistics has relied on an unphysical assumption: that each emitter contributes only to the voxel that contains it. This assumption implies that experiments have perfect resolution and that all emitters are point sources with negligible line width. In this work we have dropped this assumption and implemented, for the first time, the contribution of an emitter to several voxels in the theoretical prediction of the LIM PDF. We build upon previous work using the halo model [51, 52, 53], noting the additional complication that LIM implies with respect to projected angular fields. We circumvent this complication by invoking the ergodic hypothesis and integrating over the spatial distribution of the intensity profile when computing the intensity PDF of a single emitter. Figure 7: 68% and 95% confidence level forecasted marginalized constraints on the parameters controlling the \(L(M)\) relation for the experimental setup described in Secs. III.2 and IV. We compare the results for point sources (red) and modeling the limited resolution of the experiment and line broadening marginalizing over the emitter inclination (orange), assuming that the latter is the correct description of the data. _True_ parameter values are marked by a star, while the best fit assuming point sources is marked by a black dot. We compare four different cases: point sources, adding the effects of limited resolution, adding on top of it line broadening assuming that all galaxies are perfectly edge-on, and marginalizing over their inclination. Extended profiles significantly alter the LIM PDF prediction, with changes of more than one order of magnitude. 
The resulting PDF has a broader, lower peak at low intensities and a lower, shorter tail towards high intensities. This follows our intuition: if an emitter contributes to several voxels, there will be far fewer empty or very faint voxels, and also fewer very bright voxels. We find that, for the example considered, the dominant effect is related to the limited resolution, while including the effects of line broadening adds a small modification. Since line broadening only acts in the direction along the line of sight and is more relevant for massive, rare emitters, it has a smaller effect on the VID than the experimental resolution, which acts in all three directions and affects all emitters equally. The qualitative behaviour agrees with our expectations, and although the LIM PDF depends strongly on the specific astrophysical model and experimental setup, we expect these results to be generally applicable. We validate, for the first time, the theoretical prediction of the VID against simulations. We compare our implementation with lognormal simulations and find good qualitative agreement. Nonetheless, we still find deviations that are related to the clustering of emitters, as well as to potential numerical artifacts from the implementation of line broadening in the simulations or limitations in our model to include extended profiles in the theoretical prediction. We leave further development of the modeling of clustering for the LIM PDF, and a deeper study of the implementation of extended profiles in simulations and its impact on the measured VID, for future work. Alternative frameworks like the theory of large deviations, which is being developed for the PDF of cosmic magnification (see e.g. Ref. [77] and references therein), provide a better prediction for the matter clustering PDF. 
Therefore, a combination of both approaches, for instance combining the LIM PDF for unclustered sources with a PDF for the clustering of such sources, might lead to a more accurate prediction. We also study the effects of extended profiles on the VID covariance. First, we see that while the diagonal of the covariance closely follows the variance of a multinomial distribution, as expected for a histogram, the off-diagonal terms deviate significantly. We find negligible correlation for point sources, instead of the strong negative correlations expected for a multinomial distribution. Second, we find that extended profiles increase the correlation between intensity bins in the VID, reaching correlations of around 10%. Finally, we demonstrate that very large systematic biases in parameter inference can be expected if point-like intensity profiles are assumed. One-point statistics, especially through their complementarity with the power spectrum, hold promise to break parameter degeneracies and significantly increase the constraining power of LIM surveys. If observational contaminants preclude the use of the VID, there are alternatives, combining different LIM observations or combining line-intensity maps with galaxy catalogs, which are more robust to observational systematics. However, since these alternative summary statistics still depend on the intensity PDF, they must account for the profiles of the observed signal from each emitter. This work is but one of the first steps towards an accurate and efficient modeling of the LIM PDF. Further research is required to achieve a successful application to observations. Nonetheless, the promise of LIM one-point summary statistics motivates this effort, since only through their combination with LIM power spectra and other observations will LIM surveys unlock their full potential. ###### Acknowledgements. 
The author thanks Nickolas Kokron and Gabriela Sato-Polito for key discussions during the development of this work, and especially Patrick Breysse for important discussions and for sharing the implementation of the LIM PDF from his previous work. The author acknowledges funding from the Ramon y Cajal Grant RYC2021-033191-I, financed by MCIN/AEI/10.13039/501100011033 and by the European Union "NextGenerationEU"/PRTR, and the computer resources provided by the Spanish Supercomputing Network (RES) node at Universidad de Cantabria and the Institute of Physics of Cantabria (IFCA-CSIC).
2309.13932
Construction of type I-Log blowup for the Keller-Segel system in dimensions $3$ and $4$
We construct finite time blowup solutions to the parabolic-elliptic Keller-Segel system $$\partial_t u = \Delta u - \nabla \cdot (u \nabla \mathcal{K}_u), \quad -\Delta \mathcal{K}_u = u \quad \textup{in}\;\; \mathbb{R}^d,\; d = 3,4,$$ and derive the final blowup profile $$ u(r,T) \sim c_d \frac{|\log r|^\frac{d-2}{d}}{r^2} \quad \textup{as}\;\; r \to 0, \;\; c_d > 0.$$ To our knowledge this provides a new blowup solution for the Keller-Segel system, rigorously answering a question by Brenner, Constantin, Kadanoff, Schenkel, and Venkataramani (Nonlinearity, 1999).
V. T. Nguyen, N. Nouaili, H. Zaag
2023-09-25T08:02:57Z
http://arxiv.org/abs/2309.13932v1
# Construction of type I-log blowup for the Keller-Segel system in dimensions \(3\) and \(4\). ###### Abstract. We construct finite time blowup solutions to the parabolic-elliptic Keller-Segel system \[\partial_{t}u=\Delta u-\nabla\cdot(u\nabla\mathcal{K}_{u}),\quad-\Delta \mathcal{K}_{u}=u\quad\text{in}\ \,\,\mathbb{R}^{d},\,\,d=3,4,\] and derive the final blowup profile \[u(r,T)\sim c_{d}\frac{|\log r|^{\frac{d-2}{d}}}{r^{2}}\quad\text{as}\,\,\,r \to 0,\,\,\,c_{d}>0.\] To our knowledge this provides a new blowup solution for the Keller-Segel system, rigorously answering a question by Brenner _et al_ in [9]. Key words and phrases: Blowup solution, Blowup profile, Stability, Semilinear wave equation 2010 Mathematics Subject Classification: Primary: 35K50, 35B40; Secondary: 35K55, 35K57 November 7, 2021 ## 1. Introduction. We are interested in the existence of blowup solutions to the Keller-Segel system \[\left\{\begin{array}{rl}\partial_{t}u=&\Delta u-\nabla\mathcal{K}_{u}\cdot \nabla u+u^{2},\\ 0=&\Delta\mathcal{K}_{u}+u,\end{array}\right.\quad x\in\mathbb{R}^{d}, \tag{1.1}\] where \(u(t):x\in\mathbb{R}^{d}\to\mathbb{R}\), subject to initial data \(u(0)=u_{0}\). The system (1.1) appears in many biological and astrophysical contexts. Here, \(u(x,t)\) stands for the density of particles or cells and \(\mathcal{K}_{u}\) is a self-interaction potential. In the two-dimensional case, it is used to model the so-called _chemotaxis_ phenomenon in biology, first introduced by Patlak [48] and Keller-Segel in [41] (see also [42, 43] for the derivation of a general model). In higher dimensional cases, the system (1.1) appears as a simplified model for self-gravitating matter in stellar dynamics; see for example [59], [60], [54], [11] and [21]. We refer to [38], where the author gives a nice survey of the mathematical derivation of (1.1) and related models. 
It's worth mentioning that the system (1.1) is a special case belonging to a much wider class of nonlocal aggregation equations, including those with degenerate diffusion, which read as \[\partial_{t}u=\Delta A(u)+\nabla\cdot(B(u)\nabla\mathcal{K}\ast u),\quad(x,t) \in\mathbb{R}^{d}\times[0,\infty), \tag{1.2}\] where \(A(u)\) and \(B(u)\) can be nonlinear functions and \(\mathcal{K}\) is an arbitrary locally integrable function. In (1.1), we have \(A(u)=B(u)=u\) and \(\mathcal{K}\) is the Newtonian potential. Besides covering a wide range of applications, equation (1.2) possesses an interesting mathematical phenomenon: the competition between diffusion and nonlocal aggregation can drive solutions to blow up in finite time. A closely related local model, obtained from (1.1) by dropping the nonlocal drift term \(\nabla\mathcal{K}_{u}\cdot\nabla u\), is the semilinear heat equation \[\partial_{t}u=\Delta u+|u|^{p-1}u\quad\text{with}\;\;p=2, \tag{1.3}\] whose stable blowup dynamics are by now well understood: generic blowup solutions obey \[u\big{(}y\sqrt{T-t},t\big{)}\approx\frac{1}{T-t}\Big{(}1+\frac{|y|^{2}}{8|\log(T-t)|}\Big{)}^{-1}\quad\text{as}\;\;t\to T, \tag{1.4}\] so that the blowup profile is localized at the scale \(y\sim|\log(T-t)|^{\frac{1}{2}}\). 
We also have the existence of blowup solutions in higher dimensional cases \(d\geq 3\), established in [16], where the authors rigorously constructed so-called collapsing-ring blowup solutions in the spirit of blowup for the nonlinear Schrodinger equation [46] and found the blowup rate \[\|u(t)\|_{L^{\infty}(\mathbb{R}^{d})}\sim(T-t)^{-\frac{2d}{d-1}}. \tag{1.7}\] This result was formally derived in [35] for \(d=3\) and reestablished by Brenner _et al_ in [9] for \(d\geq 3\) (also with a formal analysis). The authors of [9] also predicted many other blowup patterns for (1.1) in the higher dimensional cases. There are also self-similar solutions having the blowup speed \(\|u(t)\|_{L^{\infty}}\sim(T-t)^{-1}\), described in [36], [52], [28] and [55]. In this paper we exhibit a new type of blowup solution to (1.1) that has not, to our knowledge, been observed in the literature (the possible occurrence of this solution was briefly mentioned in Section 4.3 of [9]). Consider the space dimension \[d=3,\ 4,\] and restrict to the case of radially symmetric solutions. Then, for any smooth radial function \(u\in L^{\infty}(\mathbb{R}^{d})\), the potential term is defined as \[\partial_{r}\mathcal{K}_{u}(r)=-\frac{1}{r^{d-1}}\int_{0}^{r}u(\zeta)\zeta^{d- 1}d\zeta,\quad r=|x|, \tag{1.8}\] and the system (1.1) is written as a nonlocal semilinear heat equation, \[\partial_{t}u=\Delta_{d}u+\Big{(}\frac{1}{r^{d-1}}\int_{0}^{r}u(\zeta)\zeta^{ d-1}d\zeta\Big{)}\partial_{r}u+u^{2}, \tag{1.9}\] where \(u(t):\mathbb{R}_{+}\to\mathbb{R}\) and \(\Delta_{d}\) is the Laplacian acting on radial functions in \(\mathbb{R}^{d}\), i.e. \[\Delta_{d}=\partial_{r}^{2}+\frac{d-1}{r}\partial_{r}.\] Our main result is the following. **Theorem 1.1** (Existence of finite-time blowup solutions to (1.1)).: _Consider \(d=3,4\) and let \(\ell=\frac{d}{d-2}\) and \(\alpha=\frac{d-2}{2d}\). 
There exists radially symmetric initial data \(u_{0}\in L^{\infty}(\mathbb{R}_{+})\) such that the corresponding solution to System (1.1) blows up in finite time \(T<\infty\) only at the origin and admits the following asymptotic dynamics._ 1. (Inner expansion) \[u\big{(}y\sqrt{T-t},t\big{)}=\frac{1}{T-t}\Big{[}1-\frac{1}{B_{\ell}}\frac{\phi_{2\ell}(y)}{|\log(T-t)|}+o\big{(}\frac{1}{|\log(T-t)|}\big{)}\Big{]}\quad\text{as}\quad t\to T,\] (1.10) _where the convergence holds on any compact set_ \(\{y\leq C\}\)_, the function_ \(\phi_{2\ell}(y)\) _is the polynomial of degree_ \(2\ell\) _satisfying_ \(\Delta\phi_{2\ell}-\frac{1}{2\ell}y\partial_{y}\phi_{2\ell}+\phi_{2\ell}=0\)_, and_ \[B_{3}=39360\ \ \text{for}\ \ (d,\ell)=(3,3)\quad\text{and}\quad B_{2}=576\ \ \text{for}\ \ (d,\ell)=(4,2).\] (1.11) 2. (Intermediate profile) _Let_ \(Q\) _be a positive function defined by_ \[\forall\xi\in\mathbb{R}_{+},\quad\frac{1-dQ}{Q^{\ell}}=c_{\ell}\xi^{2\ell}\quad\text{with}\quad c_{\ell}=\frac{d^{\ell+1}}{B_{\ell}(d+2\ell)\ell^{\ell}}>0.\] (1.12) _Let_ \(F(\xi)=dQ(\xi)+\xi Q^{\prime}(\xi)\)_; then we have_ \[\sup_{y\geq 0}\left|(T-t)u\big{(}y\sqrt{T-t},t\big{)}-F\Big{(}\frac{y}{|\log(T-t)|^{\frac{1}{2\ell}}}\Big{)}\right|\to 0\quad\text{as}\ \ t\to T.\] (1.13) 3. (Final profile) _There exists_ \(u^{*}\in\mathcal{C}(\mathbb{R}_{+}\setminus\{0\},\mathbb{R})\) _such that_ \(u(r,t)\to u^{*}(r)\) _as_ \(t\to T\) _uniformly on compact subsets of_ \(\mathbb{R}_{+}\setminus\{0\}\)_, where_ \[u^{*}(r)\sim(d-2)\left(\frac{2}{c_{\ell}}\right)^{\frac{1}{\ell}}\frac{|\log r|^{\frac{1}{\ell}}}{r^{2}}\quad\text{as}\;\;r\to 0.\] (1.14) **Remark 1.2** (New blowup profile).: One of the significant contributions of this work is the construction of a _new_ blowup profile (1.12) with a log correction to the blowup variable \[\xi=\frac{r}{\sqrt{T-t}\;|\log(T-t)|^{\frac{1}{2\ell}}}\quad\text{with}\quad\ell\geq 2.
\tag{1.15}\] Let us mention that Brenner _et al_[9] wondered whether blowup solutions exist with this particular scaling for any dimension \(d\geq 3\), not necessarily with the same power we get here for the log correction (see Section 4.3 in that paper). In particular, they did not provide the exact power of the log correction (i.e. \(\frac{1}{2\ell}\)), as we found here for \(d=3\) or \(d=4\). The appearance of the _new_ blowup scale given in (1.15) shows a strong influence of the drift term on the blowup dynamics of (1.1). Recall from (1.4) that the stable blowup scale for (1.3) is given with \(\ell=1\) and the intermediate profile is explicitly known. As a consequence, the existing analysis developed for (1.3) cannot be straightforwardly implemented for (1.1), although the general framework remains the same as far as the blowup construction is concerned. It's worth remarking that the final blowup profile derived in [55] for the class of radially symmetric _decreasing_ solutions satisfies \[c_{1}r^{-2}\leq u(r,T)\leq c_{2}r^{-2}\quad\text{and}\quad u(r,t)\leq C(T-t+r^{2})^{-1}.\] Recall that our constructed solution is radially symmetric, but not a decreasing function. Indeed, the inner expansion (1.10) involves a Hermite-type polynomial of degree \(2\ell\) which changes sign on the set \(y\in(0,y_{0})\) for \(y_{0}\gg 1\). Hence, our intermediate and final blowup profiles (1.13) and (1.14) are excluded from what is described in [55], and they are in agreement with the description of [28] asserting that all Type I blowup solutions are asymptotically backward self-similar. **Remark 1.3** (Co-dimensional stability).: The initial data we consider in the construction depends on \(\ell\) parameters \((d_{i})_{0\leq i\leq\ell-1}\) (see (3.46) for a proper definition) to control the growing eigenmodes of the linearized operator. One parameter can be eliminated by the time-translation invariance of the problem, so there remain \((\ell-1)\) eigenmodes to be handled.
Roughly speaking, our constructed solution is stable of co-dimension \((\ell-1)\) in the sense that if we fix those \((\ell-1)\) unstable directions and perturb only the remaining components of the solution (see Definition 3.1 for the solution decomposition and the bootstrap regime used to control them), the solution still admits the same behavior as described in Theorem 1.1. We also remark on the connection of our constructed solution to the backward self-similar solutions of (1.9), which are of the form \[u(r,t)=\frac{1}{T-t}\Phi(y),\quad y=\frac{r}{\sqrt{T-t}},\] where \(\Phi\) solves \[0=\Delta_{d}\Phi+\Big{(}\frac{1}{y^{d-1}}\int_{0}^{y}\Phi(\zeta)\zeta^{d-1}d\zeta\Big{)}\partial_{y}\Phi+\Phi^{2}-\frac{1}{2}y\partial_{y}\Phi-\Phi.\] There are four explicit solutions \[\Phi_{1}\equiv 0,\quad\Phi_{2}\equiv 1,\quad\Phi_{3}=\frac{2(d-2)}{y^{2}},\quad\Phi_{4}=\frac{1}{y^{d-1}}\partial_{y}\Big{[}\frac{4(d-2)(2d+y^{2})y^{d}}{(2(d-2)+y^{2})^{2}}\Big{]}.\] The blowup profile introduced in Theorem 1.1 is asymptotically like the constant solution \(\Phi_{2}\) on compact sets. In a recent work [29], Glogić and Schörkhuber showed the stability of the solution with the \(\Phi_{4}\) profile in three dimensions in \(H^{3}(\mathbb{R}^{3})\). **Remark 1.4** (Extensions and related problems).: From our analysis, we suspect that the blowup scale with a log correction (1.15) can only occur in dimensions \(d=3\) and \(d=4\). In fact, the appearance of such a log correction is, from our point of view, strongly related to the presence of a zero eigenvalue for the linearized operator defined below in (2.5), which occurs only for \(d=3,4\) (see Remark 2.1). Other blowup scales without a log correction are suspected to exist, similarly to the nonlinear heat equation (1.3), where a negative eigenmode of the linearized operator is assumed to be dominant in the inner expansion (1.10).
The suppression of blowup in (1.1) by modifying the nonlinearity has been an interesting direction recently, see for example [44], [3], [31], [39]. We remark that our construction actually works for the nonlinear perturbation problem where the nonlinearity \(u^{2}\) is perturbed by a term \(f(u)\), namely the problem \[\partial_{t}u=\Delta u-\nabla\mathcal{K}_{u}\cdot\nabla u+u^{2}+f(u),\] where \(f\) satisfies the growth condition \[|f(u)|\leq C(1+|u|^{q})\ \ \text{for}\ \ 0\leq q<2,\quad\text{or}\quad f(u)=\epsilon u^{2}\ \ \text{for}\ \ 1+\epsilon>0.\] Under the first assumption, the perturbation turns out to be exponentially small in the self-similar setting (2.1), and the contribution of this small nonlinear term is negligible. The latter assumption ensures that the associated ODE \(u^{\prime}=(1+\epsilon)u^{2}\) still blows up in finite time, hence the intermediate blowup profile (1.13) and the final profile (1.14) are modified with the factor \(\frac{1}{1+\epsilon}\). We suspect that the case \(\epsilon\leq-1\) would prevent blowup from happening, or, if blowup does occur, its dynamics would be completely different from what is established in this paper. The analysis presented in this work is expected to be applicable to the general equation (1.2) with \(A(u)=u\) and \(B(u)=u^{p-1}\) for \(p>1\), or \(A(u)=u^{m}\) for \(m>0\) and \(B(u)=u\), up to some technicalities. The latter case is an interesting model (_porous medium_ type equation) used to describe gravitational collapse phenomena (see [11]). **Strategy of the construction:** We briefly describe the idea of the proof of Theorem 1.1.
- _Change of variables_: We consider the change of variables \[u(r,t)=\frac{1}{T-t}w(y,s),\ \ y=\frac{r}{\sqrt{T-t}},\ \ s=-\log(T-t),\] and introduce the partial mass setting \[v(y,s)=\frac{1}{y^{d}}\int_{0}^{y}w(\zeta,s)\zeta^{d-1}d\zeta,\qquad w=\frac{1}{y^{d-1}}\partial_{y}(y^{d}v),\] where \(v\) solves the parabolic equation \[\partial_{s}v=\Delta_{d+2}v-\frac{1}{2}y\partial_{y}v-v+dv^{2}+yv\partial_{y}v. \tag{1.16}\] We note that such a transformation is just to simplify the analysis, and emphasize that the strategy and main idea remain the same if one works with the equation satisfied by \(w\). - _Linearization_: Through a formal computation of the blowup profile \(Q\) given in Section 2, we introduce the linearization \[v(y,s)=Q(\xi)+\varepsilon(y,s),\quad\xi=\frac{y}{s^{\frac{1}{2\ell}}},\] where \(\varepsilon\) solves the linearized problem \[\partial_{s}\varepsilon=\mathscr{H}\varepsilon+NL(\varepsilon)+E, \tag{1.17}\] where \(NL\) is a quadratic nonlinear term, \(E\) is a generated error and \(\mathscr{H}\) is the linearized operator \[\mathscr{H}=\Delta_{d+2}-\Big{(}\frac{1}{2}-Q(\xi)\Big{)}y\partial_{y}+\big{(}2dQ-1+\xi\partial_{\xi}Q(\xi)\big{)}.\] We observe that \(\mathscr{H}\) behaves differently depending on the behavior of the profile \(Q(\xi)\): - For \(y\gg s^{\frac{1}{2\ell}}(\xi\gg 1)\), thanks to the decay property \(|Q(\xi)|+|\xi\partial_{\xi}Q(\xi)|=\mathcal{O}(\xi^{-2})\), the linear operator \(\mathscr{H}\) behaves like \(\Delta_{d+2}-\frac{1}{2}y\partial_{y}-\mathrm{Id}\), which has a fully negative spectrum. - For \(y\ll s^{\frac{1}{2\ell}}(\xi\ll 1)\), thanks to the asymptotic behavior \(\big{|}Q(\xi)-\frac{1}{d}\big{|}+|\xi\partial_{\xi}Q(\xi)|=\mathcal{O}(\xi^{2})\), the linear operator \(\mathscr{H}\) behaves like \(\Delta_{d+2}-\frac{1}{2\ell}y\partial_{y}+\mathrm{Id}\), which has \(\ell\) positive eigenvalues, a zero eigenvalue and infinitely many negative ones.
- For \(y\sim s^{\frac{1}{2\ell}}(\xi\sim 1)\): this is the transition region (intermediate zone) in which we do not have any asymptotic simplification of \(\mathscr{H}\). This is one of the major difficulties of the paper. Indeed, it is one of the main differences with respect to the analysis for the nonlinear heat equation (1.3), and it leads to the different approach presented in this paper. - _Decomposition:_ Based on the behavior of the linearized operator \(\mathscr{H}\), we split the control of \(\varepsilon\) into three regions: for a fixed large constant \(K\gg 1\), - The outer region \(y\geq Ks^{\frac{1}{2\ell}}(\xi\geq K)\): Since \(\mathscr{H}\) behaves like an operator with fully negative spectrum, the estimate of \(\varepsilon\) in this region is straightforward by using the semigroup associated to the linear operator \(\Delta_{d+2}-\frac{1}{2}y\partial_{y}\) (see Section 4.6), \[j=0,1,\quad\|(y\partial_{y})^{j}\varepsilon(y,s)\mathbf{1}_{\{\xi\geq K\}}\|_{L^{\infty}}\lesssim\|(y\partial_{y})^{j}E(s)\|_{L^{\infty}}+\|(y\partial_{y})^{j}\varepsilon(y,s)\mathbf{1}_{\{\xi\sim K\}}\|_{L^{\infty}},\] where \(\|(y\partial_{y})^{j}E(s)\|_{L^{\infty}}\lesssim s^{-\frac{1}{\ell}}\) is the typical size of the generated error. We need the information from the intermediate region for the boundary term located at \(\xi\sim K\) to completely close the estimate in the outer region. - The intermediate region \(K\leq y\leq 2Ks^{\frac{1}{2\ell}}\): we control the solution in the weighted \(L^{2}\) norm \[\|\varepsilon(s)\|_{\flat}^{2}=\int_{K}^{\infty}\frac{|\varepsilon(y,s)|^{2}}{y^{4\ell+2}}\frac{dy}{y}.\] (We can replace the weight \(y^{4\ell+2}\) by \(y^{2k}\) for any \(k\geq 2\ell+1\), with an improved refinement of the generated error.)
Thanks to the monotone property of the profile \(Q\) and the dissipative structure of the parabolic equation, we are able to arrive at the monotonicity formula (see Lemma 4.6) \[j=0,1,2,\quad\frac{d}{ds}\|(y\partial_{y})^{j}\varepsilon\|_{\flat}^{2}\leq-\delta_{0}\|(y\partial_{y})^{j}\varepsilon\|_{\flat}^{2}+\|(y\partial_{y})^{j}E\|_{\flat}^{2}+\|(y\partial_{y})^{j}\varepsilon\mathbf{1}_{\{y\sim K\}}\|_{\flat}^{2}, \tag{1.18}\] where \(\|(y\partial_{y})^{j}E\|_{\flat}^{2}\lesssim s^{-2-\frac{3}{\ell}}\) is the size of the error term. By the Sobolev inequality, we obtain a pointwise estimate \(|\varepsilon(y,s)|\lesssim s^{-1-\frac{3}{2\ell}}(|y|^{2\ell+1}+1)\) that provides the necessary information on \(\varepsilon\) at the boundary \(y\sim s^{\frac{1}{2\ell}}\) to complete the estimate in the outer region. It's worth mentioning that in the case of the nonlinear heat equation (1.3), this kind of pointwise estimate can be directly achieved by using a semigroup approach. However, we are not able to follow that approach for the Keller-Segel equation (1.1) due to the lack of a semigroup theory associated to \(\mathscr{H}\). Here, we still need the information on \(\varepsilon\) at the boundary \(y\sim K\) to completely close the estimate of \(\|(y\partial_{y})^{j}\varepsilon\|_{\flat}\) after a forward integration in time. - The inner region \(y\leq 2K\): the linearized operator \(\mathscr{H}\) is regarded as a perturbation of \[\mathscr{H}_{\frac{1}{2\ell}}+\mathrm{Id}\quad\text{with}\quad\mathscr{H}_{\frac{1}{2\ell}}:=\Delta_{d+2}-\frac{1}{2\ell}y\partial_{y}.\] The operator \(\mathscr{H}_{\frac{1}{2\ell}}\) is self-adjoint in \(L^{2}_{\rho}\) with the exponential weight \(\rho=\exp(-\frac{y^{2}}{4\ell})y^{d+1}\).
Since \(\mathscr{H}_{\frac{1}{2\ell}}+\mathrm{Id}\) possesses \(\ell\) positive eigenmodes and a zero one, we further decompose \[\varepsilon(y,s)=\varepsilon_{\natural}(y,s)+\tilde{\varepsilon}(y,s),\quad\varepsilon_{\natural}(y,s)=\sum_{k=0}^{2\ell-1}\varepsilon_{k}(s)\varphi_{2k}(y),\quad\langle\tilde{\varepsilon},\varphi_{2k}\rangle_{L^{2}_{\rho}}=0\text{ for }k=0,\cdots,2\ell-1,\] where \(\varphi_{2k}\) is the eigenfunction of \(\mathscr{H}_{\frac{1}{2\ell}}\) corresponding to the eigenvalue \(-\frac{k}{\ell}\) and \(\tilde{\varepsilon}\) solves the equation \[\partial_{s}\tilde{\varepsilon}=\mathscr{H}_{\frac{1}{2\ell}}\tilde{\varepsilon}+\tilde{\varepsilon}+\sum_{k=0}^{2\ell-1}\big{[}-\varepsilon_{k}^{\prime}+\big{(}1-\frac{k}{\ell}\big{)}\varepsilon_{k}\big{]}\varphi_{2k}(y)+R+\tilde{\mathcal{V}}(\tilde{\varepsilon})+NL(\tilde{\varepsilon}),\] where \(\tilde{\mathcal{V}}\) and \(NL\) are small linear and nonlinear terms. Using the spectral gap \(\langle\mathscr{H}_{\frac{1}{2\ell}}\tilde{\varepsilon},\tilde{\varepsilon}\rangle_{L^{2}_{\rho}}\leq-2\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}\) and a standard energy estimate, we end up with (see Lemma 4.5) \[\frac{d}{ds}\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}\leq-\delta\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}+\|R\|_{L^{2}_{\rho}}^{2},\quad\text{with}\quad\|R\|_{L^{2}_{\rho}}\lesssim s^{-3}.\] As for the finite-dimensional part, we simply obtain, by a projection onto the eigenmode \(\varphi_{2k}\), \[\big{|}-\varepsilon_{k}^{\prime}+\big{(}1-\frac{k}{\ell}\big{)}\varepsilon_{k}\big{|}\lesssim\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}+s^{-2},\quad\big{|}\varepsilon_{\ell}^{\prime}+\frac{2}{s}\varepsilon_{\ell}\big{|}\lesssim\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}+s^{-3}.\] The equation of \(\varepsilon_{\ell}\) is delicate as it is related to the projection onto the null mode, where the contribution from the small potential term must be taken into account to produce the factor \(\frac{2}{s}\), as well as an algebraic
cancellation in the projection of the error term onto the null mode to reach \(\mathcal{O}(s^{-3})\). Those calculations make use of the precise value \(B_{\ell}\) given in (1.11) (see Lemma 4.4). A forward integration in time yields the estimates \[\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}\lesssim s^{-3},\quad|\varepsilon_{k}(s)|\lesssim s^{-2}\text{ for }\ell+1\leq k\leq 2\ell-1,\quad|\varepsilon_{\ell}(s)|\lesssim\frac{\log s}{s^{2}}.\] The remaining growing modes \(\big{(}\varepsilon_{k}(s)\big{)}_{0\leq k\leq\ell-1}\) are then controlled by a topological argument, where we need to construct initial data for which these components converge to zero as \(s\to\infty\). The \(L^{2}_{\rho}\) estimate provides information on compact sets of \(y\), from which we can get an estimate of \(\varepsilon(y,s)\) for \(y\sim K\) to close the estimate in the intermediate region. This decomposition is detailed in the definition of the bootstrap regime (Definition 3.1), in which we successfully construct solutions admitting the behavior described in Theorem 1.1. We organize the rest of the paper as follows: In Section 2 we perform a formal spectral analysis to obtain the blowup profile. We formulate the linearized problem in Section 3 and define a bootstrap regime to control the remainder. In Section 4 we control the remainder in the bootstrap regime and prove that the solution of the linearized problem is trapped in this regime for all time, from which we conclude the proof of the main theorem. **Acknowledgments:** V.T. Nguyen is supported by the National Science and Technology Council of Taiwan (ref. 111-2115-M-002-012 and ref. 112-2628-M-002-006). A part of this work was done when V.T. Nguyen visited the Universite Paris Dauphine and he wants to thank the institution for its hospitality and support during his visit. H. Zaag is supported by the ERC Advanced Grant LFAG/266 "Singularities for waves and fluids". ## 2.
Formal derivation of the blowup profile In this section we formally derive the blowup profile through a spectral approach that will be rigorously implemented in the next section. As we will see, the profile with a logarithmic correction to the blowup scale only appears in dimensions \(d=3\) and \(d=4\), for which the linearized operator possesses a zero eigenvalue. We work with the self-similar variables \[u(r,t)=\frac{1}{(T-t)}w(y,s),\quad\mathcal{K}_{u}(x,t)=\mathcal{K}_{w}(y,s),\quad y=\frac{r}{\sqrt{T-t}},\quad s=-\log(T-t), \tag{2.1}\] where \(w\) solves the equation for \((y,s)\in\mathbb{R}_{+}\times[-\log T,+\infty)\), \[\partial_{s}w=\Delta_{d}w-\left(\partial_{y}\mathcal{K}_{w}+\frac{1}{2}y\right)\partial_{y}w-w+w^{2}, \tag{2.2}\] with \(\Delta_{d}\) being the Laplacian in \(\mathbb{R}^{d}\) acting on radial functions, i.e. \[\Delta_{d}=\partial_{y}^{2}+\frac{d-1}{y}\partial_{y},\] and the potential term is defined by \[\partial_{y}\mathcal{K}_{w}(y,s)=-\frac{1}{y^{d-1}}\int_{0}^{y}w(\zeta,s)\zeta^{d-1}d\zeta. \tag{2.3}\] A linearization of \(w\) around the nonzero constant solution \[\bar{w}=1,\qquad-\partial_{y}\mathcal{K}_{\bar{w}}=\frac{y}{d},\] leads to the linearized equation for \(q=w-\bar{w},\quad\mathcal{K}_{q}=\mathcal{K}_{w}-\mathcal{K}_{\bar{w}}\): \[\partial_{s}q=\mathscr{L}_{\alpha}q-\partial_{y}\mathcal{K}_{q}\partial_{y}q+q^{2},\quad(y,s)\in\mathbb{R}_{+}\times[-\log T,+\infty), \tag{2.4}\] where \[\mathscr{L}_{\alpha}=\Delta_{d}-\alpha\;y\partial_{y}+1\quad\text{with}\quad\alpha=\frac{d-2}{2d}.
\tag{2.5}\] **Spectral properties of \(\mathscr{L}_{\alpha}\):** The linear operator \(\mathscr{L}_{\alpha}\) is formally a self-adjoint operator in \(L^{2}_{\omega}(\mathbb{R}_{+})\) with the weight function \[\omega(y)=c_{d}y^{d-1}e^{-\frac{\alpha|y|^{2}}{2}},\qquad\int_{0}^{\infty}\omega(y)dy=1, \tag{2.6}\] where \(c_{d}=2^{1-\frac{d}{2}}\alpha^{\frac{d}{2}}\Gamma\left(\frac{d}{2}\right)^{-1}\) normalizes \(\|\omega\|_{L^{1}}=1\), \(\Gamma\) being the Gamma function. The spectrum of \(\mathscr{L}_{\alpha}\) is discrete \[\text{spec}(\mathscr{L}_{\alpha})=\{\lambda_{2n}=1-2n\alpha,\;n\in\mathbb{N}\}, \tag{2.7}\] and the corresponding eigenfunction \(\phi_{2n}\) is a polynomial of degree \(2n\) that satisfies \[\Delta_{d}\phi_{2n}-\alpha y\partial_{y}\phi_{2n}=-2n\alpha\phi_{2n}. \tag{2.8}\] In particular, we have the closed form of \(\phi_{2n}\) given by \[\phi_{2n}(y)=H_{n}(z)=\sum_{k=0}^{n}A_{n,k}z^{k}\quad\text{with}\quad z=2\alpha y^{2}, \tag{2.9}\] where \(H_{n}\) is the regular solution to the Kummer-type ODE \[4zH_{n}^{\prime\prime}+\big{(}2d-z\big{)}H_{n}^{\prime}+nH_{n}=0, \tag{2.10}\] and the coefficients \(A_{n,k}\) satisfy the recurrence relation \[A_{n,n}=1,\quad A_{n,k-1}=-\frac{2k(2k+d-2)}{n-k+1}\;A_{n,k}\;\;\text{for}\;\;k=1,...,n. \tag{2.11}\] We note the orthogonality identity \[\int_{0}^{\infty}\phi_{2n}(y)\phi_{2m}(y)\omega(y)dy=(2\alpha)^{-\frac{d-2}{2}}\int_{0}^{\infty}H_{n}(z)H_{m}(z)z^{\frac{d-2}{2}}e^{-\frac{z}{4}}dz=a_{n}\delta_{n,m}, \tag{2.12}\] where \(\delta_{n,m}=0\) if \(n\neq m\) and \(\delta_{n,m}=1\) if \(n=m\). **Remark 2.1** (The zero mode).: Consider \(\ell\in\mathbb{N}\) such that \(\lambda_{2\ell}=1-2\alpha\ell=0\), which gives \(\ell=\frac{1}{2\alpha}=\frac{d}{d-2}\in(1,3]\).
We see that there are only two cases for which \(\ell\) is an integer: \[\ell=3\;\;\text{for}\;\;d=3\quad\text{and}\quad\ell=2\;\;\text{for}\;\;d=4.\] **Remark 2.2** (The first few eigenfunctions).: We list here the first few eigenfunctions (generated with Matlab's symbolic toolbox) that will be used in our computations later: - for \(d=3\), \[H_{0}(z) =1,\qquad\quad H_{1}(z)=z-6,\qquad\quad H_{2}(z)=z^{2}-20z+60,\] \[H_{3}(z) =z^{3}-42z^{2}+420z-840,\] \[H_{4}(z) =z^{4}-72z^{3}+1512z^{2}-10080z+15120,\] \[H_{5}(z) =z^{5}-110z^{4}+3960z^{3}-55440z^{2}+277200z-332640,\] \[H_{6}(z) =z^{6}-156z^{5}+8580z^{4}-205920z^{3}+2162160z^{2}-8648640z+8648640,\] - for \(d=4\), \[H_{0}(z) =1,\quad H_{1}(z)=z-8,\quad H_{2}(z)=z^{2}-24z+96,\] \[H_{3}(z) =z^{3}-48z^{2}+576z-1536,\] \[H_{4}(z) =z^{4}-80z^{3}+1920z^{2}-15360z+30720.\] We can expand an arbitrary polynomial \(P_{n}(z)\) in terms of \(\sum_{k=0}^{n}H_{k}(z)\) through the inverse \[\begin{pmatrix}1\\ z\\ z^{2}\\ \vdots\\ z^{n}\end{pmatrix}=\mathcal{D}_{n}^{-1}\begin{pmatrix}H_{0}\\ H_{1}\\ H_{2}\\ \vdots\\ H_{n}\end{pmatrix}\quad\text{with}\quad\mathcal{D}_{n}=\begin{pmatrix}1&&&&\\ A_{1,0}&1&&&\\ A_{2,0}&A_{2,1}&1&&\\ \vdots&\vdots&&\ddots&\\ A_{n,0}&A_{n,1}&A_{n,2}&\cdots&1\end{pmatrix}, \tag{2.13}\] where \(A_{i,j}\) is given by (2.11). A direct check yields \[\mathcal{D}_{n}^{-1}=\{|A_{i,j}|\}_{1\leq i,j\leq n},\quad z^{n}=\sum_{k=0}^{n}|A_{n,k}|H_{k}(z), \tag{2.14}\] which together with the orthogonality (2.12) implies \[\int_{0}^{\infty}y^{2n}\phi_{2m}(y)\omega(y)dy=0\quad\text{for}\;\;m\geq n+1.
\tag{2.15}\] **Appearance of Type I log-profile:** Consider \[(d,\ell)=(3,3)\quad\text{and}\quad(d,\ell)=(4,2).\] We decompose \(q(y,s)\) according to the eigenspaces of \(\mathscr{L}_{\alpha}\), namely \[q(y,s)=\sum_{k\in\mathbb{N}}a_{k}(s)\phi_{2k}(y)\equiv\sum_{k\in\mathbb{N}}a_{k}(s)H_{k}(z),\quad z=2\alpha y^{2}.\] Since \(\sum_{k\geq\ell+1}a_{k}(s)\phi_{2k}(y)\) is the projection of \(q(y,s)\) onto the negative modes of \(\mathscr{L}_{\alpha}\), we may ignore it in this formal derivation, meaning that we only consider the ansatz \[q(y,s)=\sum_{k\leq\ell}a_{k}(s)\phi_{2k}(y).\] By assuming the zero mode is dominant in the sense that \[|a_{k}(s)|\ll|a_{\ell}(s)|\;\;\text{for}\;\;k\leq\ell-1, \tag{2.16}\] we plug this ansatz into (2.4) and project it onto the eigenmode \(\phi_{2j}\) for \(j=0,...,\ell\): \[a^{\prime}_{j}=(1-2j\alpha)a_{j}+\|\phi_{2j}\|_{L^{2}_{\omega}}^{-2}\left\langle NL,\phi_{2j}\right\rangle_{L^{2}_{\omega}},\] where \[NL=-\Big{(}\sum_{k\leq\ell}a_{k}\partial_{y}\phi_{2k}\Big{)}\Big{(}\sum_{k\leq\ell}a_{k}\partial_{y}\mathcal{K}_{\phi_{2k}}\Big{)}+\Big{(}\sum_{k\leq\ell}a_{k}\phi_{2k}\Big{)}^{2}.\] From (2.16) and the fact that \(\int_{0}^{\infty}P_{n}(y)\omega(y)dy=\mathcal{O}(1)\), we see that \[\|\phi_{2j}\|_{L^{2}_{\omega}}^{-2}\left\langle NL,\phi_{2j}\right\rangle_{L^{2}_{\omega}}=\mathcal{O}\big{(}a_{\ell}^{2}\big{)}\quad\text{and}\quad\|\phi_{2\ell}\|_{L^{2}_{\omega}}^{-2}\left\langle NL,\phi_{2\ell}\right\rangle_{L^{2}_{\omega}}=a_{\ell}^{2}B_{\ell}+o\big{(}a_{\ell}^{2}\big{)},\] where \[B_{\ell}=\|\phi_{2\ell}\|_{L^{2}_{\omega}}^{-2}\left\langle-\partial_{y}\phi_{2\ell}\partial_{y}\mathcal{K}_{\phi_{2\ell}}+\phi_{2\ell}^{2},\phi_{2\ell}\right\rangle_{L^{2}_{\omega}}.\] To compute \(B_{\ell}\), we simply expand \[-\partial_{y}\phi_{2\ell}\partial_{y}\mathcal{K}_{\phi_{2\ell}}+\phi_{2\ell}^{2}=\sum_{k=0}^{2\ell}B_{k}\phi_{2k}(y)\equiv\sum_{k=0}^{2\ell}B_{k}H_{k}(z), \tag{2.17}\] from which we directly obtain the constant
\(B_{\ell}\) by the orthogonality (2.12). **Compute the constant \(B_{\ell}\) in (2.17):** From the definition \(\phi_{2\ell}(y)=H_{\ell}(z)\) with \(z=2\alpha y^{2}\), we have \[8\alpha z^{\frac{d}{2}}\partial_{z}\mathcal{K}_{H_{\ell}}=-\int_{0}^{z}H_{\ell}(\xi)\xi^{\frac{d-2}{2}}d\xi.\] We then write \[-\partial_{y}\phi_{2\ell}\partial_{y}\mathcal{K}_{\phi_{2\ell}}+\phi_{2\ell}^{2}=-\frac{1}{y^{d-1}}\partial_{y}\big{(}y^{d-1}\phi_{2\ell}\partial_{y}\mathcal{K}_{\phi_{2\ell}}\big{)}=\frac{1}{z^{\frac{d-2}{2}}}\partial_{z}\left(H_{\ell}\int_{0}^{z}H_{\ell}(\xi)\xi^{\frac{d-2}{2}}d\xi\right).\] For the case \((d=3,\ell=3)\), we have \[-\partial_{y}\phi_{2\ell}\partial_{y}\mathcal{K}_{\phi_{2\ell}}+\phi_{2\ell}^{2} =\frac{1}{\sqrt{z}}\partial_{z}\left(H_{3}\int_{0}^{z}H_{3}(\xi)\sqrt{\xi}d\xi\right)\] \[=\frac{5}{3}z^{6}-\frac{416}{3}z^{5}+\frac{12628}{3}z^{4}-57792z^{3}+364560z^{2}-940800z+705600\] \[=\sum_{k=0}^{6}B_{k}H_{k}(z), \tag{2.18}\] where the second line is computed with Matlab's symbolic toolbox, and we obtain from (2.14) that the value of \(B_{3}\) is given by \[B_{3}=\frac{5}{3}|A_{6,3}|-\frac{416}{3}|A_{5,3}|+\frac{12628}{3}|A_{4,3}|-57792=39360. \tag{2.19}\] Thus, the ODE satisfied by \(a_{3}\) is \[a_{3}^{\prime}\sim B_{3}a_{3}^{2},\qquad\text{hence},\ \ \ a_{3}(s)\sim-\frac{1}{B_{3}s}. \tag{2.20}\] For the case \((d=4,\ell=2)\), we similarly compute \[-\partial_{y}\phi_{2\ell}\partial_{y}\mathcal{K}_{\phi_{2\ell}}+\phi_{2\ell}^{2} =\frac{1}{z}\partial_{z}\left(H_{2}\int_{0}^{z}H_{2}(\xi)\xi d\xi\right)\] \[=\frac{3}{2}z^{4}-70z^{3}+1056z^{2}-5760z+9216\] \[=\frac{3}{2}H_{4}+B_{3}H_{3}+B_{2}H_{2}+\cdots, \tag{2.21}\] where we use (2.14) to compute the value of \(B_{2}\), which is given by \[B_{2}=\frac{3}{2}|A_{4,2}|-70|A_{3,2}|+1056=576. \tag{2.22}\] Hence, \[a_{2}^{\prime}\sim 576a_{2}^{2},\qquad\text{so that}\ \ \ a_{2}(s)\sim-\frac{1}{576s}.
\tag{2.23}\] In summary, we have found the asymptotic expansion in \(L_{\omega}^{2}\) as follows: \[w(y,s)=1-\frac{1}{B_{\ell}s}\phi_{2\ell}(y)+\cdots=1-\frac{1}{B_{\ell}}\frac{(2\alpha y^{2})^{\ell}}{s}+\cdots, \tag{2.24}\] where \(B_{\ell}\) is given in (2.19) and (2.22) for the cases \((d=3,\ell=3)\) and \((d=4,\ell=2)\) respectively. **Matching asymptotic expansions and the profile:** The inner expansion (2.24) suggests looking for a profile of the form \[w(y,s)\sim F(\xi)\quad\text{with}\quad\xi=\frac{y}{s^{\frac{1}{2\ell}}}=\frac{|x|}{\sqrt{T-t}|\log(T-t)|^{\frac{1}{2\ell}}}. \tag{2.25}\] Plugging this form into (2.2), we obtain at the leading order the nonlocal ODE satisfied by \(F\): \[\frac{1}{\xi^{d-1}}\partial_{\xi}\Big{(}F\int_{0}^{\xi}F(\zeta)\zeta^{d-1}d\zeta\Big{)}-\frac{1}{2}\xi F^{\prime}-F=0, \tag{2.26}\] subject to the initial condition \[F(0)=1.\] One looks for a solution of the form \[F(\xi)=\frac{1}{\xi^{d-1}}\partial_{\xi}(\xi^{d}Q),\] and converts (2.26) into the ODE \[dQ\big{(}Q-1/d\big{)}+\big{(}Q-1/2\big{)}\xi Q^{\prime}=0,\quad Q(0)=\frac{1}{d}. \tag{2.27}\] The two trivial solutions are \(Q=0\) and \(Q=\frac{1}{d}<1/2\) for \(d\geq 3\). We observe from (2.27) that \(Q(\xi)\) is a positive decreasing function on \((0,\infty)\), namely \[Q(\xi)>0,\;Q^{\prime}(\xi)<0\text{ for }\xi\in(0,\infty). \tag{2.28}\] As for the positivity, we assume there exists a first point \(\xi^{*}\in(0,\infty)\) such that \(Q(\xi^{*})=0\) and \(Q(\xi)>0\) for \(\xi\in(0,\xi^{*})\), then \(Q^{\prime}(\xi^{*})<0\). From (2.27), we have \(-\frac{1}{2}\xi^{*}Q^{\prime}(\xi^{*})=0\), hence \(Q^{\prime}(\xi^{*})=0\), which is a contradiction. As for the monotonicity, we assume there exists a first point \(\tilde{\xi}\) such that \(Q^{\prime}(\tilde{\xi})=0\) and \(Q^{\prime}(\xi)<0\) for all \(\xi\in(0,\tilde{\xi})\), then equation (2.27) gives either \(Q(\tilde{\xi})=0\) or \(Q(\tilde{\xi})=\frac{1}{d}\).
The first case is not possible because of the strict positivity of \(Q\); the second case gives \(Q(0)=Q(\tilde{\xi})=\frac{1}{d}\), hence \(Q(\xi)=\frac{1}{d}\) and \(Q^{\prime}(\xi)=0\) for all \(\xi\in(0,\tilde{\xi})\), which is a contradiction. We conclude that \(Q\) is decreasing and connects \(Q(0)=1/d\) to \(Q(\infty)=0\). By solving \(\xi=\xi(Q)\), we obtain the autonomous ODE \[\frac{\partial\xi}{\partial Q}+\frac{Q-1/2}{dQ(Q-1/d)}\xi=0\quad\text{with }\;\xi\big{(}\tfrac{1}{d}\big{)}=0,\] whose solution is implicitly given by \[c_{\ell}\xi^{2\ell}=\frac{1-dQ}{Q^{\ell}},\quad c_{\ell}\in\mathbb{R}_{+}. \tag{2.29}\] From the initial condition \(Q(0)=1/d\), we look for an expansion near \(\xi=0\), \[Q(\xi)=\frac{1}{d}-\frac{c_{\ell}}{d^{\ell+1}}\xi^{2\ell}+\frac{\ell c_{\ell}^{2}}{d^{2\ell+1}}\xi^{4\ell}+\mathcal{O}(\xi^{6\ell})\quad\text{for}\quad\xi\to 0. \tag{2.30}\] As for large \(\xi\), imposing the decay condition \(Q(\xi)\to 0\) as \(\xi\to\infty\), we see from (2.29) that \[Q(\xi)=c_{\ell}^{-\frac{1}{\ell}}\xi^{-2}+\mathcal{O}(\xi^{-4})\quad\text{for}\quad\xi\to\infty. \tag{2.31}\] From the relation \(F(\xi)=dQ+\xi Q^{\prime}\), we end up with \[F(\xi)=1-\frac{c_{\ell}(d+2\ell)}{d^{\ell+1}}\xi^{2\ell}+\frac{c_{\ell}^{2}\ell(d+4\ell)}{d^{2\ell+1}}\xi^{4\ell}+\mathcal{O}(\xi^{6\ell})\quad\text{as }\;\xi\to 0.\] Matching this expansion with (2.24) yields the value of \(c_{\ell}\): \[\frac{c_{\ell}(d+2\ell)}{d^{\ell+1}}=\frac{(2\alpha)^{\ell}}{B_{\ell}},\quad\text{hence},\quad c_{\ell}=\frac{(2\alpha)^{\ell}d^{\ell+1}}{B_{\ell}(d+2\ell)}. \tag{2.32}\] The correct value of \(c_{\ell}\) is crucial for many algebraic cancellations in the rigorous analysis later (for example, the improved estimates of the projection onto the null mode in Lemmas 4.2 and 4.4). ## 3.
Linearized problem and bootstrap regime In this section, the constants \(\ell\) and \(\alpha\) are fixed as \[\ell=3\ \ \text{for}\ \ d=3\quad\text{and}\quad\ell=2\ \ \text{for}\ \ d=4,\quad\alpha=\frac{d-2}{2d}=\frac{1}{2\ell}.\] We formulate the problem so as to show that there exists initial data for which problem (2.2) has a global-in-time solution satisfying \[\sup_{y\geq 0}\Big{|}w(y,s)-F\big{(}ys^{-\frac{1}{2\ell}}\big{)}\Big{|}\to 0\quad\text{as}\quad s\to\infty, \tag{3.1}\] where \(F(\xi)=\frac{1}{\xi^{d-1}}\partial_{\xi}(\xi^{d}Q(\xi))=dQ(\xi)+\xi\partial_{\xi}Q(\xi)\) and \(Q(\xi)\) is defined by (2.29). ### The partial mass setting Since we are working in the radial setting, it is convenient to work with the partial mass to simplify the analysis, namely we introduce \[m_{w}(y,s)=\int_{0}^{y}w(\zeta,s)\zeta^{d-1}d\zeta,\quad w(y,s)=\frac{\partial_{y}m_{w}}{y^{d-1}},\quad\partial_{y}\mathcal{K}_{w}=-\frac{m_{w}}{y^{d-1}}. \tag{3.2}\] We write from (2.2) the equation for \(m_{w}\), \[\partial_{s}m_{w}=\partial_{y}^{2}m_{w}-\frac{d-1}{y}\partial_{y}m_{w}-\frac{1}{2}y\partial_{y}m_{w}+\frac{d-2}{2}m_{w}+\frac{m_{w}\partial_{y}m_{w}}{y^{d-1}}. \tag{3.3}\] To keep the same scaling invariance as the original solution \(w(y,s)\), we further introduce \[v(y,s)=\frac{m_{w}(y,s)}{y^{d}},\quad w=dv+y\partial_{y}v=\frac{1}{y^{d-1}}\partial_{y}(y^{d}v), \tag{3.4}\] where we write from (3.3) the equation for \(v\): \[\partial_{s}v=\Delta_{d+2}v-\frac{1}{2}y\partial_{y}v-v+dv^{2}+yv\partial_{y}v, \tag{3.5}\] where \(\Delta_{d+2}\) stands for the Laplacian in dimension \(d+2\) acting on radial functions, i.e. \[\Delta_{d+2}=\partial_{y}^{2}+\frac{d+1}{y}\partial_{y}.\] We have found in the previous section, through a formal spectral analysis and matching asymptotic expansions, the following approximate blowup profile for (3.5), \[Q=Q(\xi),\quad\forall\xi=ys^{-\frac{1}{2\ell}}\geq 0,\] that solves (2.27) and is defined by (2.29).
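The algebra behind the passage from (3.3) to (3.5) can be checked symbolically; the following sketch (ours, a verification rather than part of the proof) substitutes \(m_{w}=y^{d}v\) into the right-hand side of (3.3) and compares with \(y^{d}\) times the right-hand side of (3.5):

```python
# Check (ours): substituting m = y^d v into the RHS of (3.3) gives exactly
# y^d times the RHS of the v-equation (3.5).
import sympy as sp

y, s, d = sp.symbols('y s d', positive=True)
v = sp.Function('v')(y, s)
m = y**d * v

rhs_m = (sp.diff(m, y, 2) - (d - 1)/y * sp.diff(m, y)       # RHS of (3.3)
         - y/2 * sp.diff(m, y) + (d - 2)/2 * m
         + m * sp.diff(m, y) / y**(d - 1))

rhs_v = (sp.diff(v, y, 2) + (d + 1)/y * sp.diff(v, y)       # Delta_{d+2} v
         - y/2 * sp.diff(v, y) - v                          # scaling terms
         + d * v**2 + y * v * sp.diff(v, y))                # nonlinearity

for dim in (3, 4):
    assert sp.simplify((rhs_m - y**d * rhs_v).subs(d, dim)) == 0
```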
We recall that the profile \(Q\) is strictly monotone and positive, \[Q(0)=\frac{1}{d},\quad\lim_{\xi\to\infty}Q(\xi)=0,\quad Q(\xi)>0,\quad Q^{\prime}(\xi)<0,\quad\forall\xi>0. \tag{3.6}\] The monotonicity of \(Q\) makes the perturbative analysis simpler for the linearized problem associated with (3.5). By contrast, a linearization of (2.2) around \(F\) would make the analysis technically more complicated due to the lack of monotonicity of \(F\). This is one of the reasons we choose to work with the partial mass setting (3.4). Nevertheless, the analysis in the setting (3.4) is equivalent to the one in the original variable \(w(y,s)\), up to some technicalities. ### Linearization A linearization of (3.5) around the profile \(Q\), namely \[v(y,s)=Q(\xi)+\varepsilon(y,s), \tag{3.7}\] leads to the linearized problem \[\partial_{s}\varepsilon=\mathscr{H}\varepsilon+NL(\varepsilon)+E, \tag{3.8}\] where \(\mathscr{H}\) is the second order linear operator \[\mathscr{H}=\Delta_{d+2}-V_{1}y\partial_{y}+V_{2}, \tag{3.9}\] with \[V_{1}(\xi)=\frac{1}{2}-Q(\xi),\quad V_{2}(\xi)=2dQ-1+\xi\partial_{\xi}Q, \tag{3.10}\] \(E\) is the generated error, \[E=\Delta_{d+2}Q-\partial_{s}Q, \tag{3.11}\] and \(NL\) is the nonlinear quadratic term \[NL(\varepsilon)=d\varepsilon^{2}+y\varepsilon\partial_{y}\varepsilon.
\tag{3.12}\] From the asymptotic behavior of the profile \(Q(\xi)\) given in (2.30) and (2.31), we observe \[V_{1}(\xi)=\frac{1}{2\ell}+\mathcal{O}(\xi^{2\ell}),\quad V_{2}(\xi)=1+ \mathcal{O}(\xi^{2\ell}),\quad\xi\ll 1,\] and \[V_{1}(\xi)=\frac{1}{2}+\mathcal{O}(\xi^{-2}),\quad V_{2}(\xi)=-1+\mathcal{O}( \xi^{-2}),\quad\xi\gg 1.\] Thus, the full linearized operator \(\mathscr{H}\) behaves differently depending on the region: \[\mathscr{H}\sim\mathscr{H}_{\frac{1}{2\ell}}+\mathrm{Id},\quad\text{where} \quad\mathscr{H}_{\frac{1}{2\ell}}=\Delta_{d+2}-\frac{1}{2\ell}y\partial_{y}, \quad\text{for}\;\;y\ll s^{\frac{1}{2\ell}}, \tag{3.13}\] and \[\mathscr{H}\sim\mathscr{H}_{\frac{1}{2}}-\mathrm{Id},\quad\text{where}\quad \mathscr{H}_{\frac{1}{2}}=\Delta_{d+2}-\frac{1}{2}y\partial_{y},\quad\text{ for}\;\;y\gg s^{\frac{1}{2\ell}}. \tag{3.14}\] We note that in _the outer region_ \(y\gg s^{\frac{1}{2\ell}}\), the operator \(\mathscr{H}\) behaves in the same way as the one considered for the classical semilinear heat equation (1.3). However, in _the inner region_ \(y\ll s^{\frac{1}{2\ell}}\) and _the intermediate region_ \(y\sim s^{\frac{1}{2\ell}}\), the operator behaves differently in comparison with what is known from the analysis of (1.3). To our knowledge, there is no complete spectral theory for the full linear operator \(\mathscr{H}\). We thus use a different approach to circumvent this missing piece in the analysis, especially in the intermediate region. We recall here the spectral properties of \(\mathscr{H}_{\frac{1}{2\ell}}\), which play an important role in the analysis whenever the inner region is concerned. 
The linear operator \(\mathscr{H}_{\frac{1}{2\ell}}:H_{\rho}^{2}(\mathbb{R}_{+})\to L_{\rho}^{2}( \mathbb{R}_{+})\) is self-adjoint with respect to the weight function \[\rho(y)=e^{-\frac{|y|^{2}}{4\ell}}y^{d+1}.\] In particular, we have from (2.8) and (3.4), \[\mathscr{H}_{\frac{1}{2\ell}}\varphi_{2n}=-\frac{n}{\ell}\varphi_{2n},\quad n \in\mathbb{N}, \tag{3.15}\] where the following relation holds \[\varphi_{2n}(y)=\frac{1}{y^{d}}\int_{0}^{y}\phi_{2n}(\zeta)\zeta^{d-1}d\zeta, \tag{3.16}\] with \(\phi_{2n}\) being defined explicitly in (2.9). We use the even index \(2n\) instead of \(n\) to indicate that we are in the radial setting, where the eigenfunctions are polynomials of even degrees only. Note that \(\varphi_{2n}\) has the same form as \(\phi_{2n}\). We also have the orthogonality \[\int_{0}^{\infty}\varphi_{2n}(y)\varphi_{2m}(y)\rho(y)dy=c_{n}\delta_{n,m}, \tag{3.17}\] and the family of eigenfunctions \(\{\varphi_{2n}\}_{n\in\mathbb{N}}\) forms a complete orthogonal basis of \(L^{2}_{\rho}(\mathbb{R}_{+})\), in the sense that any \(g\in L^{2}_{\rho}(\mathbb{R}_{+})\) can be decomposed as \[g(y)=\sum_{n\in\mathbb{N}}g_{n}\varphi_{2n}(y),\quad g_{n}=\langle g,\varphi_{ 2n}\rangle_{\rho}=\int_{0}^{\infty}g\varphi_{2n}\rho dy. \tag{3.18}\] ### Bootstrap regime Our aim is to construct a global-in-time solution \(\varepsilon(y,s)\) to (3.8) that satisfies \[\|\varepsilon(s)\|_{L^{\infty}(\mathbb{R}_{+})}\to 0\quad\text{as}\quad s \to\infty. \tag{3.19}\] This requirement implies that the dynamics of (3.8) mainly relies on the linear part \(\mathscr{H}\), since the nonlinear term is roughly quadratic. The construction is based on the following observations: **- the outer region \(y\gg s^{\frac{1}{2\ell}}\) (\(\xi\gg 1\)):** thanks to the decay of \(Q(\xi)\), the linear part \(\mathscr{H}\sim\mathscr{H}_{\frac{1}{2}}-\text{Id}\) has a fully negative spectrum. Thus, we can control the solution in this region without difficulties. 
In particular, let \(K\gg 1\) be a large fixed constant and define \[\varepsilon^{\text{ex}}(y,s)=\varepsilon(y,s)\big{(}1-\chi_{{}_{K}}(\xi) \big{)}, \tag{3.20}\] where \[\chi_{{}_{K}}(\xi)=\chi_{{}_{0}}\Big{(}\frac{\xi}{K}\Big{)}, \tag{3.21}\] and \(\chi_{{}_{0}}\) is the cut-off function \[\chi_{{}_{0}}\in\mathcal{C}^{\infty}\big{(}\mathbb{R}_{+},[0,1]\big{)},\quad \chi_{{}_{0}}(\xi)=1\text{ if }\xi\in[0,1]\quad\text{and}\quad\chi_{{}_{0}}(\xi)=0\text{ if }\xi\geq 2.\] From (3.8), we write the equation for \(\varepsilon^{\text{ex}}(y,s)\), \[\partial_{s}\varepsilon^{\text{ex}}=\big{(}\mathscr{H}_{\frac{1}{2}}-\text{Id }\big{)}\varepsilon^{\text{ex}}+\big{(}1-\chi_{{}_{K}}\big{)}\big{[}Qy \partial_{y}\varepsilon+(2dQ+\xi\partial_{\xi}Q)\varepsilon+NL(\varepsilon)+E \big{]}+\mathcal{E}^{bd}(\varepsilon), \tag{3.22}\] where \(\mathcal{E}^{bd}\) is the boundary term due to the cut-off, defined by \[\mathcal{E}^{bd}(\varepsilon)=\Big{(}-\partial_{s}\chi_{{}_{K}}+\Delta_{d+2} \chi_{{}_{K}}-\frac{1}{2}y\partial_{y}\chi_{{}_{K}}\Big{)}\varepsilon+2 \partial_{y}\chi_{{}_{K}}\partial_{y}\varepsilon. \tag{3.23}\] Here we need the information on \(\varepsilon\) at the boundary \(Ks^{\frac{1}{2\ell}}\leq y\leq 2Ks^{\frac{1}{2\ell}}\) (\(K\leq\xi\leq 2K\)), which we retrieve from the estimate of \(\varepsilon\) in the intermediate region, in order to close the estimate for \(\varepsilon^{\text{ex}}\). **- the intermediate region \(y\sim s^{\frac{1}{2\ell}}\) (\(\xi\sim 1\)):** As mentioned earlier, a complete description of the spectral properties of \(\mathscr{H}\) is lacking, so we are not able to use semigroup estimates as for the semilinear heat equation (1.3) (see, for example, [45], [47]). Thanks to the monotonicity of \(Q\) and the dissipation, we can nevertheless achieve the control of \(\varepsilon\) in this region through a standard energy estimate for (3.8). 
To obtain a sufficiently small error, we refine the approximate solution by introducing \[\Psi(y,s)=Q(\xi)+\hat{\Psi}(y,s),\quad\hat{\Psi}(y,s)=-\frac{1}{B_{\ell}s} \Big{(}\varphi_{2\ell}(y)-\frac{(2\alpha y^{2})^{\ell}}{2\ell+d}\Big{)}\chi_{ {}_{0}}(\xi), \tag{3.24}\] where \(B_{\ell}\) is the constant defined in (2.22) and (2.19) and \(2\alpha=\frac{1}{\ell}\). Recalling (2.24) and (3.4), we have an equivalent inner expansion of \(v(y,s)\) in \(L^{2}_{\rho}\): \[v(y,s)=\frac{1}{d}-\frac{1}{B_{\ell}s}\varphi_{2\ell}(y)+\cdots, \tag{3.25}\] and from (2.30) and (2.32), \[Q(\xi)=\frac{1}{d}-\frac{1}{B_{\ell}(2\ell+d)}(2\alpha\xi^{2})^{\ell}+\frac{ \ell d}{B_{\ell}^{2}(2\ell+d)^{2}}(2\alpha\xi^{2})^{2\ell}+\mathcal{O}(\xi^{ 6\ell})\quad\text{as}\quad\xi\ll 1. \tag{3.26}\] Hence, we have for \(y<s^{\frac{1}{2\ell}}\), \[\Psi(y,s)=\frac{1}{d}-\frac{\varphi_{2\ell}(y)}{B_{\ell}s}+\frac{\ell d}{B_{ \ell}^{2}(2\ell+d)^{2}}\frac{(2\alpha y^{2})^{2\ell}}{s^{2}}+\mathcal{O}\Big{(} \frac{\langle y\rangle^{6\ell}}{s^{3}}\Big{)}, \tag{3.27}\] which agrees with the expansion (3.25). We then linearize \[v(y,s)=\Psi(y,s)+\hat{\varepsilon}(y,s), \tag{3.28}\] and write from (3.8) and the relation \(\varepsilon=\hat{\Psi}+\hat{\varepsilon}\), \[\partial_{s}\hat{\varepsilon}=\mathscr{H}\hat{\varepsilon}+\hat{\mathcal{V}} \hat{\varepsilon}+NL(\hat{\varepsilon})+\hat{E}, \tag{3.29}\] where \(\mathscr{H}\) and \(NL\) are defined in (3.9) and (3.12), \(\hat{\mathcal{V}}\) is the small linear operator \[\hat{\mathcal{V}}\hat{\varepsilon}=-\hat{\Psi}y\partial_{y}\hat{\varepsilon} +(2d\hat{\Psi}+y\partial_{y}\hat{\Psi})\hat{\varepsilon}, \tag{3.30}\] and \(\hat{E}\) is the generated error \[\hat{E}(y,s)=-\partial_{s}(Q(\xi)+\hat{\Psi})+\Delta_{d+2}Q(\xi)+\mathscr{H} \hat{\Psi}+NL(\hat{\Psi}). 
\tag{3.31}\] We introduce the following norm: for a fixed large constant \(K\gg 1\), \[\|\hat{\varepsilon}(s)\|_{\flat}^{2}=\int_{0}^{\infty}(1-\chi_{{}_{K}}(y)) \Big{(}\frac{|\hat{\varepsilon}|^{2}}{|y|^{4\ell+2}}\Big{)}\frac{dy}{y}. \tag{3.32}\] In fact, we could replace the power \(4\ell+2\) by any even integer \(2k\) with \(k\geq 2\ell+1\); after some integration by parts, we get an estimate of the form \[\frac{d}{ds}\|\hat{\varepsilon}(s)\|_{\flat}^{2}\leq-\delta(k)\|\hat{ \varepsilon}(s)\|_{\flat}^{2}+\|\hat{E}(s)\|_{\flat}^{2}+\text{"boundary terms $y\sim K$''},\] where \(\delta(k)\) is strictly positive for \(k\geq 2\ell+1\). Due to the cut-off \((1-\chi_{{}_{K}}(y))\), we need information on \(\hat{\varepsilon}\) for \(K\leq y\leq 2K\) to estimate the boundary terms and to complete the estimate of \(\|\hat{\varepsilon}(s)\|_{\flat}\). This information is retrieved from the control in the inner region. **- the inner region \(y\ll s^{\frac{1}{2\ell}}\) (\(\xi\ll 1\)):** The linearized operator \(\mathscr{H}\) behaves like \(\mathscr{H}_{\frac{1}{2\ell}}+\text{Id}\), which has \(\ell\) positive eigenvalues, a zero mode and infinitely many negative ones. We need a further refinement to achieve the control of \(\hat{\varepsilon}\) in this region. More precisely, we decompose \[\hat{\varepsilon}(y,s)=\hat{\varepsilon}_{\natural}(y,s)+\tilde{ \varepsilon}(y,s), \tag{3.33}\] where \[\hat{\varepsilon}_{\natural}(y,s)=\sum_{k=0}^{2\ell-1}\hat{\varepsilon}_{k}(s )\varphi_{2k}(y),\quad\hat{\varepsilon}_{k}(s)=\langle\hat{\varepsilon}, \varphi_{2k}\rangle_{\rho}. \tag{3.34}\] The introduction of \(\hat{\varepsilon}_{\natural}\) is to obtain a spectral gap estimate once we perform the \(L^{2}_{\rho}\) estimate for \(\tilde{\varepsilon}\). 
In fact, we have the orthogonality condition \[\langle\tilde{\varepsilon},\varphi_{2k}\rangle_{\rho}=0\quad\text{for}\quad k=0, \cdots,2\ell-1, \tag{3.35}\] where \(\varphi_{2k}\) is the eigenfunction associated to the linear operator \(\mathscr{H}_{\frac{1}{2\ell}}\), from which we have the estimate \[\langle\mathscr{H}_{\frac{1}{2\ell}}\tilde{\varepsilon},\tilde{\varepsilon} \rangle_{\rho}+\langle\tilde{\varepsilon},\tilde{\varepsilon}\rangle_{\rho} \leq-\langle\tilde{\varepsilon},\tilde{\varepsilon}\rangle_{\rho}. \tag{3.36}\] From (3.8), we write the equation for \(\tilde{\varepsilon}\), \[\partial_{s}\tilde{\varepsilon}=\big{(}\mathscr{H}_{\frac{1}{2\ell}}+\text{Id }\big{)}\tilde{\varepsilon}+\tilde{\mathcal{V}}\tilde{\varepsilon}+NL(\tilde{ \varepsilon})+\tilde{E}(y,s), \tag{3.37}\] where \(\tilde{\mathcal{V}}\) is a small first order linear operator, \[\tilde{\mathcal{V}}=-\tilde{V}_{1}y\partial_{y}+\tilde{V}_{2}, \tag{3.38}\] with \[\tilde{V}_{1}=\frac{1}{d}-\Psi-\hat{\varepsilon}_{\natural},\quad\tilde{V}_{2 }=2d\Psi-2+y\partial_{y}\Psi+2d\hat{\varepsilon}_{\natural}+y\partial_{y}\hat {\varepsilon}_{\natural}, \tag{3.39}\] the nonlinear term \(NL(\tilde{\varepsilon})\) is defined by (3.12), and \(\tilde{E}\) is the total error term, \[\tilde{E}(y,s)=-\partial_{s}(\Psi+\hat{\varepsilon}_{\natural})+\Delta_{d+2}Q (\xi)+\mathscr{H}(\hat{\Psi}+\hat{\varepsilon}_{\natural})+NL(\hat{\Psi}+ \hat{\varepsilon}_{\natural}). \tag{3.40}\] We now define the bootstrap regime in which we fully control the solution to (3.8). 
**Definition 3.1** (Bootstrap regime).: Let \(s>1\) and \(A>1\), we define \(\mathcal{S}_{A}(s)\) as the set of all functions \(\varepsilon(s)\in W^{1,\infty}(\mathbb{R}_{+})\) such that \[|\hat{\varepsilon}_{k}(s)|\leq As^{-2}\quad\text{for}\quad 0\leq k\neq\ell\leq 2 \ell-1, \tag{3.41}\] \[|\hat{\varepsilon}_{\ell}(s)|\leq A^{2}s^{-2}\log s, \tag{3.42}\] \[\|\tilde{\varepsilon}(s)\|_{L^{2}_{\rho}(\mathbb{R}_{+})}\leq As^{-3}, \tag{3.43}\] \[j=0,1,2,\quad\|(y\partial_{y})^{j}\hat{\varepsilon}(s)\|_{\flat}\leq A^{1 +j}s^{-1-\frac{3}{2\ell}}, \tag{3.44}\] \[j=0,1,\quad\|(y\partial_{y})^{j}\varepsilon^{\text{ex}}(s)\|_{L^{\infty}( \mathbb{R}_{+})}\leq A^{4+j}s^{-\frac{1}{\ell}},\quad\|y\varepsilon^{\text{ ex}}(s)\|_{L^{\infty}(\mathbb{R}_{+})}\leq A^{4}s^{-\frac{1}{2\ell}}, \tag{3.45}\] where \(\hat{\varepsilon}\), \(\tilde{\varepsilon}\), \(\varepsilon^{\text{ex}}\) and \(\|\cdot\|_{\flat}\) are introduced in (3.33), (3.20) and (3.32). **Remark 3.2** (Order of estimates).: The bootstrap estimates defining the shrinking set \(\mathcal{S}_{A}\) show the order in which the control of \(\varepsilon\) on the whole of \(\mathbb{R}_{+}\) is achieved: we first obtain the \(L^{2}_{\rho}\)-estimate for \(\tilde{\varepsilon}\), which directly gives an \(L^{2}_{loc}\)-estimate; then a standard parabolic regularity argument yields an \(L^{\infty}(y\lesssim K)\) bound for \(\hat{\varepsilon}\) for any \(K>0\); this \(L^{\infty}_{loc}\)-estimate is then used in the energy estimate of \(\|\hat{\varepsilon}(s)\|_{\flat}\) to bound boundary terms (supported in \(y\in[K,2K]\)) due to the cut-off \(\chi_{{}_{K}}(y)\) (see (3.32)). A parabolic regularity type argument yields a similar estimate for \(\|y\partial_{y}\hat{\varepsilon}(s)\|_{\flat}\), from which and Sobolev embedding we get \(L^{\infty}\)-estimates for \(y\lesssim Ks^{\frac{1}{2\ell}}\). 
This \(L^{\infty}\)-bound for \(y\sim Ks^{\frac{1}{2\ell}}\) enters the estimate of \(\|\varepsilon^{\text{ex}}\|_{L^{\infty}}\) due to the cut-off \(\chi_{{}_{K}}(\xi)\). We thus obtain an \(L^{\infty}\) bound for \(\varepsilon\) on the whole of \(\mathbb{R}_{+}\). Since the nonlinearity is quadratic, a rough bound of \(\|\varepsilon\|_{L^{\infty}(\mathbb{R}_{+})}\) suffices to handle the nonlinear term in all estimates. ### Existence of solutions in the bootstrap regime The strategy is to show that if we start with initial data \(\varepsilon(s_{0})\in\mathcal{S}_{A}(s_{0})\), then the corresponding solution \(\varepsilon(s)\) to the equation (3.8) stays in \(\mathcal{S}_{A}(s)\) for all \(s\geq s_{0}\). In particular, we consider initial data of the form \[\varepsilon(y,s_{0})=\hat{\Psi}(y,s_{0})+\hat{\varepsilon}(y,s_{0}),\quad\hat{ \varepsilon}(y,s_{0})\equiv\hat{\psi}[\mathbf{d},A,s_{0}](y)=\frac{A}{s_{0}^{ 2}}\left(\sum_{i=0}^{\ell-1}d_{i}\varphi_{2i}\right)\chi_{{}_{0}}(\xi) \tag{3.46}\] where \(\hat{\Psi}\) is defined in (3.24), the \(\varphi_{2i}\)'s are the eigenfunctions of \(\mathscr{H}_{\frac{1}{2\ell}}\), \(\chi_{{}_{0}}\) is the cut-off function introduced right after (3.21), and \[s_{0}\gg 1,\ \ A\gg 1,\ \ \mathbf{d}=(d_{0},\cdots,d_{\ell-1})\in B_{1}( \mathbb{R}^{\ell}),\] are real parameters to be determined so that the corresponding solution \(\varepsilon(s)\) is trapped in \(\mathcal{S}_{A}(s)\) for all \(s\geq s_{0}\). Precisely, we aim at proving the following proposition, which is the core of our construction and leads to the conclusion of Theorem 1.1. 
**Proposition 3.3** (Existence of solutions in \(\mathcal{S}_{A}\)).: _There are \(s_{0}\gg 1,A\gg 1\) and \(\mathbf{d}\in B_{1}(\mathbb{R}^{\ell})\) such that the solution \(\varepsilon(s)\) to (3.8) with the initial data \(\varepsilon(s_{0})\) given by (3.46) is trapped in \(\mathcal{S}_{A}(s)\) for all \(s\geq s_{0}\)._ Proof.: By the definition of \(\hat{\psi}[\mathbf{d},A,s_{0}]\) and the projection of \(\hat{\psi}[\mathbf{d},A,s_{0}]\) onto \(\varphi_{2k}\), we obtain by a direct computation and the exponential decay of the weight function \(\rho\), \[\hat{\psi}_{k}=\frac{Ad_{k}}{s_{0}^{2}}+\mathcal{O}(s_{0}^{-2}e^{-\kappa s_{0} ^{1/\ell}})\quad\text{for}\ \ k=0,\cdots,\ell-1,\] and \[|\hat{\psi}_{k}|=\mathcal{O}(As_{0}^{-2}e^{-\kappa s_{0}^{1/\ell}})\quad\text {for}\ \ k\geq\ell,\] for some \(\kappa>0\), and \[|\tilde{\psi}(y)| =\big{|}\hat{\psi}(y)-\sum_{k=0}^{2\ell-1}\hat{\psi}_{k}\varphi_{ 2k}(y)\big{|}=\Big{|}\sum_{k=0}^{\ell-1}\hat{\psi}_{k}\varphi_{2k}(y)\big{(}1- \chi_{{}_{0}}(\xi)\big{)}-\sum_{k=\ell}^{2\ell-1}\hat{\psi}_{k}\varphi_{2k}(y) \Big{|}\] \[\lesssim\frac{A}{s_{0}^{2}}\langle y\rangle^{2\ell-2}\mathbf{1}_ {\{\xi\geq 1\}}+As_{0}^{-2}e^{-\kappa s_{0}^{1/\ell}}\langle y\rangle^{4\ell-2 },\quad\forall y>0.\] This yields the bounds \[\|\tilde{\psi}\|_{L^{2}_{\rho}}\lesssim As_{0}^{-2}e^{-\kappa s_{0}^{1/\ell}},\quad\sum_{j=0}^{2}\|(y\partial_{y})^{j}\hat{\psi}\|_{\flat}\lesssim As_{0}^ {-2}e^{-\kappa s_{0}^{1/\ell}}.\] By the definition (3.20), we have \(\chi_{{}_{0}}(\xi)\big{(}1-\chi_{{}_{K}}(\xi)\big{)}=0\) for \(K\geq 2\); thus \(\varepsilon^{\rm ex}(s_{0})\equiv 0\). We then conclude that for \(A\gg 1\) and \(s_{0}\gg 1\), the initial data \(\varepsilon(y,s_{0})\in\mathcal{S}_{A}(s_{0})\) with strict inequalities, except for the first \(\ell\) components \(\hat{\psi}_{k}\) with \(k=0,\cdots,\ell-1\). 
From the local Cauchy theory of (1.1) in the radial setting in \(L^{\infty}(\mathbb{R}^{d})\), for each initial data \(\varepsilon(s_{0})=\hat{\Psi}(s_{0})+\hat{\psi}[\mathbf{d},A,s_{0}]\in \mathcal{S}_{A}(s_{0})\), there is a unique solution \(\varepsilon(s)\), which remains in \(\mathcal{S}_{A}(s)\) for \(s\in[s_{0},s_{*})\) with \(s_{*}\) the maximal such time. If \(s_{*}=+\infty\), we are done. If \(s_{*}<\infty\), we have \(\varepsilon(s_{*})\in\partial\mathcal{S}_{A}(s_{*})\). We claim that \(\varepsilon(s_{*})\) touches the boundary \(\partial\mathcal{S}_{A}(s_{*})\) only through the first \(\ell\) components \(\hat{\varepsilon}_{k}(s_{*})\) with \(k=0,\cdots,\ell-1\). In particular, we claim the following: **Proposition 3.4** (Reduction to a finite dimensional problem).: _For \(A\gg 1\), \(s_{0}=s_{0}(A)\gg 1\), there exists \(\mathbf{d}=(d_{0},\cdots,d_{\ell-1})\in B_{1}(\mathbb{R}^{\ell})\) such that if the solution \(\varepsilon(s)\) to (3.8) with the initial data \(\varepsilon(y,s_{0})=\hat{\Psi}(y,s_{0})+\hat{\psi}[\mathbf{d},A,s_{0}](y)\) defined as in (3.46) satisfies \(\varepsilon(s)\in\mathcal{S}_{A}(s)\) for \(s\in[s_{0},s_{*}]\) and \(\varepsilon(s_{*})\in\partial\mathcal{S}_{A}(s_{*})\), then it holds_ \[\big{(}\hat{\varepsilon}_{0},\cdots,\hat{\varepsilon}_{\ell-1}\big{)}(s_{*}) \in\partial\Big{(}\Big{[}-\frac{A}{s_{*}^{2}},\frac{A}{s_{*}^{2}}\Big{]}^{\ell}\Big{)}. \tag{3.47}\] _Moreover, we have_ \[\frac{d}{ds}\hat{\varepsilon}_{k}^{2}(s_{*})>0\quad\text{for}\;\;k=0,\cdots, \ell-1. \tag{3.48}\] Assuming Proposition 3.4, we see from (3.47) that the map \[\Theta:[-1,1]^{\ell} \to\partial\big{(}[-1,1]^{\ell}\big{)},\] \[(d_{0},\cdots,d_{\ell-1}) \mapsto\frac{s_{*}^{2}}{A}\big{(}\hat{\varepsilon}_{0},\cdots, \hat{\varepsilon}_{\ell-1}\big{)}(s_{*})\] is well defined. 
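The outgoing condition (3.48) is what makes the exit time, and hence \(\Theta\), continuous. A sketch of the computation behind (3.48), assuming the mode dynamics \(\hat{\varepsilon}_{k}^{\prime}=\big(1-\frac{k}{\ell}\big)\hat{\varepsilon}_{k}+\mathcal{O}(s^{-2})\) for the unstable modes \(k=0,\cdots,\ell-1\) (established in Lemma 4.4 below): at the exit time, \(|\hat{\varepsilon}_{k}(s_{*})|=As_{*}^{-2}\) for some \(k\in\{0,\cdots,\ell-1\}\), so

```latex
\frac{d}{ds}\hat{\varepsilon}_{k}^{2}(s_{*})
  = 2\hat{\varepsilon}_{k}(s_{*})\,\hat{\varepsilon}_{k}^{\prime}(s_{*})
  = 2\Big(1-\frac{k}{\ell}\Big)\hat{\varepsilon}_{k}^{2}(s_{*})
    + \mathcal{O}\big(s_{*}^{-2}|\hat{\varepsilon}_{k}(s_{*})|\big)
  \geq \frac{2}{\ell}\,\frac{A^{2}}{s_{*}^{4}} - \frac{CA}{s_{*}^{4}} > 0
```

for \(A\) large, since \(1-\frac{k}{\ell}\geq\frac{1}{\ell}\) for \(0\leq k\leq\ell-1\): the unstable modes leave the box \([-As^{-2},As^{-2}]^{\ell}\) transversally.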
From the transverse crossing (3.48), we see that \((\hat{\varepsilon}_{0},\cdots,\hat{\varepsilon}_{\ell-1})(s)\) actually crosses its boundary at \(s=s_{*}\); hence the exit time \(s_{*}=s_{*}(\mathbf{d})\), and therefore the map \(\Theta\), depend continuously on \(\mathbf{d}\). This yields a contradiction, since \(\Theta\) is the identity map on the boundary sphere and a continuous retraction of the unit ball onto its boundary cannot exist. This concludes the proof of Proposition 3.3, assuming Proposition 3.4. ## 4. Control the solution in the bootstrap regime ### Properties of the shrinking set We claim the following. **Lemma 4.1** (Properties of the shrinking set).: _Let \(A\gg 1\) and \(s\geq s_{0}\gg 1\) and let \(\varepsilon(s)\in\mathcal{S}_{A}(s)\) be a solution to (3.8). We have_ _i) (Local_ \(L^{\infty}\)_-estimate) For all_ \(M>0\) _and_ \(j=0,1\)_,_ \[\|\partial_{y}^{j}\tilde{\varepsilon}(s)\|_{L^{\infty}(y\leq M)}\lesssim C(M) As^{-3},\quad\|\partial_{y}^{j}\hat{\varepsilon}(s)\|_{L^{\infty}(y\leq M)} \lesssim C(M)A^{2}s^{-2}|\log s|. \tag{4.1}\] _ii) (Pointwise estimate)_ \[\forall y>0,\quad|\hat{\varepsilon}(y,s)|+|y\partial_{y}\hat{\varepsilon}(y,s )|\lesssim A^{3}s^{-1-\frac{3}{2\ell}}\langle y\rangle^{2\ell+1}. \tag{4.2}\] _iii) (Global_ \(L^{\infty}\)_-estimate)_ \[\|\hat{\varepsilon}(s)\|_{L^{\infty}(\mathbb{R}_{+})}+\|(y\partial_{y})\hat{ \varepsilon}(s)\|_{L^{\infty}(\mathbb{R}_{+})}\lesssim A^{5}s^{-\frac{1}{ \ell}}. 
\tag{4.3}\] Proof.: (i) From the \(L^{2}_{\rho}\) bound (3.43) of \(\tilde{\varepsilon}\), we get \[\|\tilde{\varepsilon}(s)\|_{L^{2}(y\leq 2M)}\lesssim e^{\frac{M^{2}}{2\ell}} \|\tilde{\varepsilon}(s)\|_{L^{2}_{\rho}}\lesssim C(M)As^{-3}.\] A standard parabolic regularity argument then yields the estimate \[\|\tilde{\varepsilon}(s)\|_{L^{\infty}(y\leq M)}+\|\partial_{y}\tilde{ \varepsilon}(s)\|_{L^{\infty}(y\leq M)}\lesssim C(M)As^{-3}.\] We recall from the decomposition (3.33), \[\hat{\varepsilon}(y,s)=\sum_{k=0}^{2\ell-1}\hat{\varepsilon}_{k}(s)\varphi_{ 2k}(y)+\tilde{\varepsilon}(y,s). \tag{4.4}\] Using the bootstrap bounds (3.41), (3.42) and the local \(L^{\infty}\) bound of \(\tilde{\varepsilon}\) yields the desired estimates. (ii) We first claim that if \(\int_{y\geq 1}(u^{2}+|y\partial_{y}u|^{2})y^{-1}dy<+\infty\), then \[\|u\|_{L^{\infty}(y\geq 1)}^{2}\lesssim\int_{y\geq 1}(u^{2}+|y\partial_{y}u|^{2})y^{-1 }dy. \tag{4.5}\] By making the change of variable \(v(z)=u(Mz)\), we write \[\|u\|_{L^{\infty}([M,2M])}^{2} =\|v\|_{L^{\infty}([1,2])}^{2}\lesssim\int_{1}^{2}v^{2}(z)dz+\int _{1}^{2}|\partial_{z}v(z)|^{2}dz\] \[\lesssim\int_{M}^{2M}u^{2}(y)\frac{dy}{M}+\int_{M}^{2M}M|\partial _{y}u(y)|^{2}dy\] \[\lesssim\int_{M}^{2M}u^{2}(y)y^{-1}dy+\int_{M}^{2M}|y\partial_{y} u(y)|^{2}y^{-1}dy.\] Taking the supremum with respect to \(M\) yields the inequality (4.5). We then apply (4.5) with \(u=\frac{\hat{\varepsilon}}{y^{2\ell+1}}\) and \(u=\frac{y\partial_{y}\hat{\varepsilon}}{y^{2\ell+1}}\) to obtain from (3.44) \[\forall y\geq 1,\quad|\hat{\varepsilon}(y,s)|+|y\partial_{y}\hat{\varepsilon} (y,s)|\lesssim A^{3}s^{-1-\frac{3}{2\ell}}\langle y\rangle^{2\ell+1},\] from which and (4.1), we obtain (4.2). (iii) The estimate (4.3) follows from (4.2) and the bootstrap bound (3.45). ### Decomposition of the error We claim the following. 
**Lemma 4.2** (Estimate of the generated error).: _We have_ _(i)_ (\(L^{\infty}\)-bound of \(E\) and \(\hat{E}\))__ \[\|E(s)\|_{L^{\infty}(\mathbb{R}_{+})}+\|y\partial_{y}E(s)\|_{L^{\infty}( \mathbb{R}_{+})}+\|\hat{E}(s)\|_{L^{\infty}(\mathbb{R}_{+})}+\|y\partial_{y} \hat{E}(s)\|_{L^{\infty}(\mathbb{R}_{+})}\lesssim s^{-\frac{1}{\ell}}. \tag{4.6}\] _(ii)_ (Decomposition of \(\hat{E}\))__ \[\hat{E}(y,s)=\sum_{k=0}^{2\ell-1}\hat{E}_{k}(s)\varphi_{2k}(y)+\hat{R}(y,s), \tag{4.7}\] _where_ \[\sum_{k=0,k\neq\ell}^{2\ell-1}|\hat{E}_{k}(s)|\lesssim s^{-2},\qquad|\hat{E}_{ \ell}(s)|\lesssim s^{-3},\quad\|\hat{R}(s)\|_{L^{2}_{\rho}}\lesssim s^{-3}, \tag{4.8}\] _and_ \[\forall y\lesssim s^{\frac{1}{2\ell}},\quad\sum_{j=0}^{2}|(\langle y\rangle \partial_{y})^{j}\hat{R}(y,s)|\lesssim s^{-3}\langle y\rangle^{6\ell-2}. \tag{4.9}\] _In particular, we have_ \[\sum_{j=0}^{2}\int_{1}^{\infty}\frac{|(y\partial_{y})^{j}\hat{E}(y,s)|^{2}}{y ^{4\ell+2}}\frac{dy}{y}\lesssim s^{-2-\frac{3}{\ell}}. \tag{4.10}\] **Remark 4.3**.: The improved estimate for \(\hat{E}_{\ell}\), reaching the order \(s^{-3}\), comes from a crucial algebraic cancellation that removes the term of order \(s^{-2}\), thanks to the precise choice of (2.32). 
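The cancellation behind Remark 4.3 is the identity (4.13), verified in the proof below via a symbolic computation. As an independent sanity check, the following sketch reproduces the \((d,\ell)=(4,2)\) case with exact rational arithmetic (Python's `fractions`, an assumed substitute for the Matlab symbolic computation used here); the polynomials \(\varphi_{4}\) and \(P_{6}+B_{2}\varphi_{4}\) are quoted from the proof of (4.13).

```python
from fractions import Fraction as Fr
from math import factorial

D, ELL = 4, 2          # the case (d, l) = (4, 2)

# Even polynomials are stored as {power: coefficient}.
phi4 = {4: Fr(1, 32), 2: Fr(-2), 0: Fr(24)}              # eigenfunction phi_{2l}
R = {6: Fr(-1, 8), 4: Fr(33), 2: Fr(-480), 0: Fr(2304)}  # P_6 + B_2 phi_4

def H(poly):
    """H_{1/(2l)} = d^2/dy^2 + ((D+1)/y) d/dy - (1/(2*ELL)) y d/dy on monomials."""
    out = {}
    for n, c in poly.items():
        if n >= 2:   # Laplacian in dimension D+2: y^n -> n(n+D) y^{n-2}
            out[n - 2] = out.get(n - 2, Fr(0)) + c * n * (n + D)
        out[n] = out.get(n, Fr(0)) - c * Fr(n, 2 * ELL)   # drift: -(n/(2l)) y^n
    return {n: c for n, c in out.items() if c != 0}

# eigenvalue relation (3.15) with n = l: H phi_{2l} = -phi_{2l}
assert H(phi4) == {n: -c for n, c in phi4.items()}

# rho(y) = exp(-y^2/(4l)) y^{D+1}; for (4,2): exp(-y^2/8) y^5, and
# int_0^inf y^{2m+1} exp(-y^2/8) dy = 4 * 8^m * m!   (substitute t = y^2/8)
def moment(k):
    m = (k - 1) // 2
    return Fr(4) * 8**m * factorial(m)

def inner(p, q):       # <p, q>_rho for even polynomials
    return sum(a * b * moment(i + j + D + 1)
               for i, a in p.items() for j, b in q.items())

# projection of P_6 + B_2 phi_4 onto phi_4 is exactly B_2 = 576,
# i.e. <P_6, phi_4>_rho = -B_2 + 576 = 0, which is (4.13) for (d,l)=(4,2)
assert inner(R, phi4) / inner(phi4, phi4) == 576
```

For \((d,\ell)=(3,3)\) the analogous computation (with the even moments of \(\rho\), since there \(d+1=4\) is even) reproduces \(B_{3}=39360\), as stated in the proof of (4.13).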
Proof.: (i) From (3.11), we have the estimate, for all \(y\geq 0\), \[|E(y,s)| =|E(\xi,s)|=\big{|}-\partial_{s}Q(\xi)+\Delta_{d+2}Q(\xi)\big{|}\] \[\lesssim s^{-1}\Big{|}\xi Q^{\prime}(\xi)\Big{|}+s^{-\frac{1}{ \ell}}\Big{|}Q^{\prime\prime}(\xi)+\xi^{-1}Q^{\prime}(\xi)\Big{|}\lesssim s^{- \frac{1}{\ell}},\] and \[|y\partial_{y}E(y,s)|=|\xi\partial_{\xi}E(\xi,s)|\lesssim s^{-1}\Big{|}(\xi\partial_{ \xi})^{2}Q(\xi)\Big{|}+s^{-\frac{1}{\ell}}\Big{|}\xi\partial_{\xi}Q^{\prime \prime}(\xi)+\xi\partial_{\xi}(\xi^{-1}Q^{\prime}(\xi))\Big{|}\lesssim s^{- \frac{1}{\ell}}.\] As for \(\hat{E}\), we have by the definitions of \(\varphi_{2\ell}\) and \(\hat{\Psi}\), \[j=0,\cdots,3,\quad|(\langle y\rangle\partial_{y})^{j}\hat{\Psi}(y,s)|\lesssim s ^{-\frac{1}{\ell}},\quad|\partial_{s}(\langle y\rangle\partial_{y})^{j}\hat{ \Psi}(y,s)|\lesssim s^{-1-\frac{1}{\ell}},\quad\forall y\geq 0.\] Then, we have from (3.31), (3.9) and the bound \(Q+|\xi\partial_{\xi}Q|\lesssim 1\), \[|\hat{E}(y,s)| =|E(y,s)-\partial_{s}\hat{\Psi}+\mathscr{H}\hat{\Psi}+NL(\hat{ \Psi})|\] \[\lesssim|E(y,s)|+|\partial_{s}\hat{\Psi}|+|\Delta_{d+2}\hat{\Psi }|+|y\partial_{y}\hat{\Psi}|+|\hat{\Psi}|+|y\partial_{y}\hat{\Psi}\hat{\Psi}|+ |\hat{\Psi}|^{2}\lesssim s^{-\frac{1}{\ell}},\] together with a similar bound for \(|y\partial_{y}\hat{E}|\), which concludes the proof of \((i)\). 
(ii) From (3.31), (3.9) and (3.13), we rewrite \(\hat{E}\) as \[\hat{E}(y,s) =-\partial_{s}\Psi+\Delta_{d+2}Q(\xi)+\mathscr{H}_{\frac{1}{2\ell }}\hat{\Psi}+\hat{\Psi}\] \[-\Big{(}\frac{1}{d}-Q(\xi)\Big{)}y\partial_{y}\hat{\Psi}+\big{(}2 dQ-2+\xi\partial_{\xi}Q\big{)}\hat{\Psi}+NL(\hat{\Psi}).\] We use the expansions (3.27) and (3.26) of \(\Psi\) and \(Q\), and the cancellations \(\big{(}\mathscr{H}_{\frac{1}{2\ell}}+\mathrm{Id}\big{)}\varphi_{2\ell}=0\) and \(\big{(}-\frac{1}{2\ell}y\partial_{y}+\mathrm{Id}\big{)}y^{2\ell}=0\), to write for \(y\leq s^{\frac{1}{2\ell}}\) (\(\xi\leq 1\)), \[-\partial_{s}\Psi+ \Delta_{d+2}Q(\xi)+\mathscr{H}_{\frac{1}{2\ell}}\hat{\Psi}+\hat {\Psi}\] \[=\frac{1}{s^{2}}\left[-\frac{\varphi_{2\ell}}{B_{\ell}}+\frac{ \ell d}{B_{\ell}^{2}(2\ell+d)^{2}}\Delta_{d+2}(2\alpha y^{2})^{2\ell}\right]+ \mathcal{O}\Big{(}\frac{\langle y\rangle^{6\ell-2}}{s^{3}}\Big{)}.\] We notice that \[\tilde{\varphi}_{2\ell}(y)=\varphi_{2\ell}(y)-\frac{(2\alpha y^{2})^{\ell}}{2 \ell+d}=\mathcal{O}(\langle y\rangle^{2\ell-2}),\quad\hat{\Psi}(y,s)=-\frac{1 }{B_{\ell}s}\tilde{\varphi}_{2\ell}\chi_{0}(\xi)=\mathcal{O}\Big{(}\frac{ \langle y\rangle^{2\ell-2}}{s}\Big{)},\] and use again (3.26) to expand (keeping track only of the terms of order \(\mathcal{O}(s^{-2})\)), \[-\Big{(}\frac{1}{d}-Q(\xi)\Big{)}y\partial_{y}\hat{\Psi} =\frac{1}{s^{2}}\frac{(2\alpha y^{2})^{\ell}}{B_{\ell}^{2}(2\ell +d)}y\partial_{y}\tilde{\varphi}_{2\ell}+\mathcal{O}\Big{(}\frac{\langle y \rangle^{6\ell-2}}{s^{3}}\Big{)},\] \[\big{(}2dQ-2+\xi\partial_{\xi}Q\big{)}\hat{\Psi} =\frac{1}{s^{2}}\frac{(2d+2\ell)(2\alpha y^{2})^{\ell}}{B_{\ell}^{ 2}(2\ell+d)}\tilde{\varphi}_{2\ell}+\mathcal{O}\Big{(}\frac{\langle y\rangle^ {6\ell-2}}{s^{3}}\Big{)},\] \[NL(\hat{\Psi})=\frac{1}{B_{\ell}^{2}s^{2}}\left[d\tilde{\varphi}_{ 2\ell}^{2}+\frac{1}{2}y\partial_{y}\tilde{\varphi}_{2\ell}^{2}\right]+ \mathcal{O}\Big{(}\frac{\langle y\rangle^{6\ell-2}}{s^{3}}\Big{)}.\] A collection of these expansions yields 
\[\hat{E}(y,s)=\frac{1}{B_{\ell}^{2}s^{2}}P_{4\ell-2}(y)+\hat{R}(y,s)\quad\text{ with}\quad\sum_{j=0}^{2}|(\langle y\rangle\partial_{y})^{j}\hat{R}(y,s)|= \mathcal{O}\Big{(}\frac{\langle y\rangle^{6\ell-2}}{s^{3}}\Big{)}, \tag{4.11}\] where \[P_{4\ell-2}(y) =-B_{\ell}\varphi_{2\ell}+\frac{\ell d}{(2\ell+d)^{2}}\Delta_{d+2}( 2\alpha y^{2})^{2\ell}+\frac{(2\alpha y^{2})^{\ell}}{(2\ell+d)}y\partial_{y} \tilde{\varphi}_{2\ell}\] \[\qquad+\frac{(2d+2\ell)(2\alpha y^{2})^{\ell}}{(2\ell+d)}\tilde{ \varphi}_{2\ell}+d\tilde{\varphi}_{2\ell}^{2}+\frac{1}{2}y\partial_{y}\tilde{ \varphi}_{2\ell}^{2}. \tag{4.12}\] A projection of \(\hat{E}\) onto \(\varphi_{2k}\) immediately gives \[\hat{E}_{k}(s)=\langle\hat{E},\varphi_{2k}\rangle_{\rho}=\mathcal{O}(s^{-2}), \quad k\in\mathbb{N}.\] We claim that the projection of \(P_{4\ell-2}\) onto \(\varphi_{2\ell}\) is identically zero, i.e. \[\langle P_{4\ell-2},\varphi_{2\ell}\rangle_{\rho}=0, \tag{4.13}\] from which we get the improved bound \[\hat{E}_{\ell}(s)=\langle\hat{E},\varphi_{2\ell}\rangle_{\rho}=\langle P_{4 \ell-2},\varphi_{2\ell}\rangle_{\rho}+\langle\hat{R},\varphi_{2\ell}\rangle_{ \rho}=\mathcal{O}(s^{-3}).\] This concludes the proofs of (4.7), (4.8) and (4.9). The estimate (4.10) is straightforward from (4.11) and (4.6). Indeed, we have, for \(j=0,1,2\), \[\int_{1}^{\infty}\frac{|(y\partial_{y})^{j}\hat{E}|^{2}}{y^{4\ell +2}}\frac{dy}{y} \lesssim\int_{1}^{s^{\frac{1}{2\ell}}}\Big{(}\frac{|(y\partial_{y })^{j}P_{4\ell-2}|^{2}}{s^{4}y^{4\ell+2}}+\frac{|(y\partial_{y})^{j}\hat{R}|^{ 2}}{y^{4\ell+2}}\Big{)}\frac{dy}{y}+\int_{s^{\frac{1}{2\ell}}}^{\infty}\frac{ |(y\partial_{y})^{j}\hat{E}|^{2}}{y^{4\ell+2}}\frac{dy}{y}\] \[\lesssim\int_{1}^{s^{\frac{1}{2\ell}}}\Big{(}\frac{y^{4\ell-7}}{s ^{4}}+\frac{y^{8\ell-7}}{s^{6}}\Big{)}dy+s^{-\frac{2}{\ell}}\int_{s^{\frac{1} {2\ell}}}^{\infty}\frac{dy}{y^{4\ell+3}}\lesssim s^{-2-\frac{3}{\ell}}.\] It remains to prove (4.13) to complete the proof of Lemma 4.2. 
Proof of (4.13).: We use the exact values of \((d,\ell)\) to simplify the computation, which can easily be implemented with Matlab's symbolic toolbox. Recalling (2.9) and the relation \[\varphi_{2\ell}(y)=\frac{1}{y^{d}}\int_{0}^{y}\phi_{2\ell}(\zeta)\zeta^{d-1}d\zeta,\] we have for \((d,\ell)=(3,3)\): \[\varphi_{6}(y) =\frac{y^{6}}{243}-\frac{2}{3}y^{4}+28y^{2}-280,\quad\tilde{ \varphi}_{6}(y)=-\frac{2}{3}y^{4}+28y^{2}-280,\] \[P_{10}(y) =-B_{3}\varphi_{6}(y)-\frac{4\,y^{10}}{243}+\frac{1148\,y^{8}}{24 3}-\frac{19264\,y^{6}}{81}+\frac{17360\,y^{4}}{3}-62720\,y^{2}+235200,\] and recall from (2.19) that \(B_{3}=39360\) to get \[\frac{1}{\|\varphi_{6}\|_{\rho}^{2}}\langle P_{10},\varphi_{6}\rangle_{\rho}= -B_{3}+39360=0.\] Similarly, we have for \((d,\ell)=(4,2)\): \[\varphi_{4}(y) =\frac{y^{4}}{32}-2y^{2}+24,\quad\tilde{\varphi}_{4}(y)=-2y^{2}+24,\] \[P_{6}(y) =-B_{2}\varphi_{4}(y)-\frac{y^{6}}{8}+33y^{4}-480y^{2}+2304,\] and recall from (2.22) that \(B_{2}=576\) to get \[\frac{1}{\|\varphi_{4}\|_{\rho}^{2}}\langle P_{6},\varphi_{4}\rangle_{\rho}=-B _{2}+576=0.\] This concludes the proof of (4.13) and completes the proof of Lemma 4.2. ### Dynamics of the finite dimensional part We obtain in this section the ODE system satisfied by the finite dimensional part \(\hat{\varepsilon}_{\natural}\). We claim the following. 
**Lemma 4.4** (Dynamics of the finite dimensional part \(\hat{\varepsilon}_{\natural}\)).: _Let \(\varepsilon(s)\in\mathcal{S}_{A}(s)\). We have (i)_ (Decomposition of \(\tilde{E}\)) _The function \(\tilde{E}\) defined by (3.40) can be decomposed as_ \[\tilde{E}(y,s)=\sum_{k=0,k\neq\ell}^{2\ell-1}\Big{[}\hat{E}_{k}-\hat{ \varepsilon}_{k}^{\prime}+\big{(}1-\frac{k}{\ell}\big{)}\hat{\varepsilon}_{k} \Big{]}\varphi_{2k}(y)+\big{(}\hat{E}_{\ell}-\hat{\varepsilon}_{\ell}^{\prime} -\frac{2}{s}\hat{\varepsilon}_{\ell}\big{)}\varphi_{2\ell}+\tilde{R}(y,s), \tag{4.14}\] _where \(\hat{E}_{k}\) is introduced in (4.7) and satisfies the estimate (4.8), and \(\tilde{R}\) satisfies_ \[\|\tilde{R}(s)\|_{L^{2}_{\rho}}\lesssim s^{-3}. \tag{4.15}\] _(ii)_ (Dynamics of \(\hat{\varepsilon}_{\natural}\)) \[\sum_{k=0,k\neq\ell}^{2\ell-1}\left|\hat{\varepsilon}_{k}^{\prime}-\big{(}1- \frac{k}{\ell}\big{)}\hat{\varepsilon}_{k}\right|\lesssim s^{-2},\quad\left| \hat{\varepsilon}_{\ell}^{\prime}+\frac{2}{s}\hat{\varepsilon}_{\ell}\right| \lesssim s^{-3}. 
\tag{4.16}\] Proof.: (i) From the definitions (3.40) and (3.31) of \(\tilde{E}\) and \(\hat{E}\), the decomposition (4.7) and \(\mathscr{H}_{\frac{1}{2\ell}}\varphi_{2k}=-\frac{k}{\ell}\varphi_{2k}\), we have \[\tilde{E}(y,s) =\hat{E}(y,s)-\partial_{s}\hat{\varepsilon}_{\natural}+(\mathscr{ H}_{\frac{1}{2\ell}}+\mathrm{Id})\hat{\varepsilon}_{\natural}-\tilde{P}_{1}y \partial_{y}\hat{\varepsilon}_{\natural}+\tilde{P}_{2}\hat{\varepsilon}_{ \natural}+NL(\hat{\varepsilon}_{\natural})\] \[=\sum_{k=0}^{2\ell-1}\Big{[}\hat{E}_{k}-\hat{\varepsilon}_{k}^{ \prime}+\big{(}1-\frac{k}{\ell}\big{)}\hat{\varepsilon}_{k}\Big{]}\varphi_{2k}+\hat{R}-\tilde{P}_ {1}y\partial_{y}\hat{\varepsilon}_{\natural}+\tilde{P}_{2}\hat{\varepsilon}_{ \natural}+NL(\hat{\varepsilon}_{\natural}),\] where \(NL(\hat{\varepsilon}_{\natural})=d\hat{\varepsilon}_{\natural}^{2}+y\hat{ \varepsilon}_{\natural}\partial_{y}\hat{\varepsilon}_{\natural}\), and \(\tilde{P}_{1}\), \(\tilde{P}_{2}\) are defined by \[\tilde{P}_{1}=\frac{1}{d}-\Psi,\quad\tilde{P}_{2}=2d\Psi-2+y\partial_{y}\Psi.\] From the expansion (3.27) of \(\Psi\), we have the rough estimate \[\forall y\geq 0,\quad|\tilde{P}_{1}(y,s)|+|y\partial_{y}\tilde{P}_{1}(y,s)|+| \tilde{P}_{2}(y,s)|\lesssim\frac{\langle y\rangle^{2\ell}}{s}.\] From the bootstrap bounds (3.41) and (3.42), we have \[\forall y\geq 0,\quad|\hat{\varepsilon}_{\natural}(y,s)-\hat{\varepsilon}_{ \ell}\varphi_{2\ell}|\lesssim\frac{1}{s^{2}}\langle y\rangle^{4\ell-2},\quad| \hat{\varepsilon}_{\natural}(y,s)|\lesssim\frac{\log s}{s^{2}}\langle y \rangle^{4\ell-2}.\] Using these estimates, (4.8) and the Cauchy-Schwarz inequality, we end up with \[\|\hat{R}-\tilde{P}_{1}y\partial_{y}(\hat{\varepsilon}_{\natural}(y,s)-\hat{ \varepsilon}_{\ell}\varphi_{2\ell})+\tilde{P}_{2}(\hat{\varepsilon}_{ \natural}(y,s)-\hat{\varepsilon}_{\ell}\varphi_{2\ell})+NL(\hat{\varepsilon}_ {\natural})\|_{L^{2}_{\rho}}\lesssim s^{-3}. 
\tag{4.17}\] We claim the following: \[\frac{1}{\|\varphi_{2\ell}\|_{\rho}^{2}}\langle-\tilde{P}_{1}y\partial_{y}\varphi_{2\ell}+\tilde{P}_{2}\varphi_{2\ell },\varphi_{2\ell}\rangle_{\rho}\;\hat{\varepsilon}_{\ell}(s)=-\frac{2}{s}\hat {\varepsilon}_{\ell}(s)+\mathcal{O}(s^{-4}\log s). \tag{4.18}\] We recall from (3.27) the expansion \(\Psi(y,s)=\frac{1}{d}-\frac{\varphi_{2\ell}}{B_{\ell}s}+\mathcal{O}(s^{-2} \langle y\rangle^{2\ell})\), and write (keeping track only of the terms of order \(\mathcal{O}(s^{-1})\)) \[-\tilde{P}_{1}y\partial_{y}\varphi_{2\ell}+\tilde{P}_{2}\varphi_{2\ell}=-\frac{ 2}{B_{\ell}s}\Big{(}d\varphi_{2\ell}^{2}+y\varphi_{2\ell}\partial_{y}\varphi_{ 2\ell}\Big{)}+\mathcal{O}(s^{-2}\langle y\rangle^{6\ell}).\] A direct computation (Matlab symbolic) yields \[\frac{1}{\|\varphi_{2\ell}\|_{\rho}^{2}}\langle d\varphi_{2\ell}^{2}+y\varphi_{ 2\ell}\partial_{y}\varphi_{2\ell},\varphi_{2\ell}\rangle_{\rho}=\left\{\begin{array} []{ll}39360&\text{ if }(d,\ell)=(3,3)\\ 576&\text{ if }(d,\ell)=(4,2)\end{array}\right.\equiv B_{\ell},\] which agrees with the formal computation given on page 10, where the constant \(B_{\ell}\) is the projection of the nonlinear term (in the original setting) onto the eigenmode \(\phi_{2\ell}\). This proves (4.18) and concludes the proof of (4.14). 
(ii) We project the equation (3.37) onto the eigenmode \(\varphi_{2k}\) and use the orthogonality (3.35) to get \[0=\langle-\tilde{V}_{1}y\partial_{y}\tilde{\varepsilon}+\tilde{V}_{2}\tilde{\varepsilon}+NL(\tilde{\varepsilon})+\tilde{E},\varphi_{2k}\rangle_{\rho}.\] From (3.39), (3.27) and the bootstrap bounds (3.41), (3.42), we have the rough bound \[\forall y\geq 0,\quad|\tilde{V}_{1}(y,s)|+|y\partial_{y}\tilde{V}_{1}(y,s)|+|\tilde{V}_{2}(y,s)|\lesssim\frac{\langle y\rangle^{4\ell-2}}{s}.\] We use the Cauchy-Schwarz inequality, integration by parts and the fact that \(\rho\) has exponential decay to estimate \[\big{|}\langle-\tilde{V}_{1}y\partial_{y}\tilde{\varepsilon}+\tilde{V}_{2}\tilde{\varepsilon},\varphi_{2k}\rangle_{\rho}\big{|}\lesssim s^{-1}\|\tilde{\varepsilon}(s)\|_{L^{2}_{\rho}}.\] For the nonlinear term, we use the relation \(\tilde{\varepsilon}=\hat{\varepsilon}-\hat{\varepsilon}_{\natural}\), the pointwise estimate (4.2) and the bootstrap bounds (3.41), (3.42) to get \[\forall y\geq 0,\quad|\tilde{\varepsilon}(y,s)|+|y\partial_{y}\tilde{\varepsilon}(y,s)|\lesssim A^{6}s^{-1-\frac{3}{2\ell}}\langle y\rangle^{4\ell-2}.\] Then, using the Cauchy-Schwarz inequality and the exponential decay of \(\rho\) yields \[\big{|}\langle NL(\tilde{\varepsilon}),\varphi_{2k}\rangle_{\rho}\big{|}\lesssim A^{6}s^{-1-\frac{3}{2\ell}}\|\tilde{\varepsilon}(s)\|_{L^{2}_{\rho}}.\] Putting all these estimates together with (4.14) and the bootstrap bound (3.43) yields (4.16) and completes the proof of Lemma 4.4. ### \(L^{2}_{\rho}\)-estimate We now derive the energy estimate controlling the \(L^{2}_{\rho}\)-norm of \(\tilde{\varepsilon}\). The orthogonality (3.35), which provides the spectral gap (3.36), plays a crucial role in the improvement of the \(L^{2}_{\rho}\) bootstrap estimate (3.43). We claim the following. 
**Lemma 4.5** (Energy estimate in \(L^{2}_{\rho}\)).: _Let \(A\geq 1\), \(s\geq s_{0}=s_{0}(A)\gg 1\) and \(\varepsilon(s)\in\mathcal{S}_{A}(s)\). Then_ \[\frac{d}{ds}\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}\leq-\frac{1}{2}\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}+Cs^{-6}, \tag{4.19}\] _where \(C\) is independent of \(A\)._ Proof.: The proof is just a standard energy estimate in \(L^{2}_{\rho}\) from the equation (3.37). We take the scalar product of (3.37) with \(\tilde{\varepsilon}\) in \(L^{2}_{\rho}\) and use the spectral gap (3.36) to get \[\frac{1}{2}\frac{d}{ds}\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}\leq-\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}+\Big{|}\langle\tilde{\mathcal{V}}\tilde{\varepsilon}+NL(\tilde{\varepsilon})+\tilde{E},\tilde{\varepsilon}\rangle_{\rho}\big{|}.\] From the definition (3.38) of \(\tilde{\mathcal{V}}\) and integration by parts, we get \[\Big{|}\langle\tilde{\mathcal{V}}\tilde{\varepsilon},\tilde{\varepsilon}\rangle_{\rho}\Big{|}=\Big{|}-\frac{1}{2}\int_{0}^{\infty}\tilde{V}_{1}y\partial_{y}\tilde{\varepsilon}^{2}\rho dy+\int_{0}^{\infty}\tilde{V}_{2}\tilde{\varepsilon}^{2}\rho dy\Big{|}\lesssim\int_{0}^{\infty}\big{(}|y\partial_{y}\tilde{V}_{1}|+\langle y\rangle^{2}|\tilde{V}_{1}|+|\tilde{V}_{2}|\big{)}\tilde{\varepsilon}^{2}\rho dy.\] From the definition (3.39) of \(\tilde{V}_{1}\) and \(\tilde{V}_{2}\), we have the rough bound \[\forall y\geq 0,\quad|y\partial_{y}\tilde{V}_{1}|+\langle y\rangle^{2}|\tilde{V}_{1}|+|\tilde{V}_{2}|\lesssim s^{-1}\langle y\rangle^{C}, \tag{4.20}\] for some constant \(C>0\). 
Let \(0<\kappa\ll 1\) be a small constant such that \[\forall y\leq s^{\kappa},\quad s^{-1}\langle y\rangle^{C}\leq s^{-\kappa}.\] We also use (4.2) and the definition (3.34) of \(\hat{\varepsilon}_{\natural}\) to obtain the pointwise bound \[\forall y\geq 0,\quad|\tilde{\varepsilon}(y,s)|\lesssim|\hat{\varepsilon}(y,s)|+|\hat{\varepsilon}_{\natural}(y,s)|\lesssim A^{3}s^{-1-\frac{3}{2\ell}}\langle y\rangle^{C}. \tag{4.21}\] By splitting the integral and using (4.20), (4.21), we obtain \[\left|\langle\tilde{\mathcal{V}}\tilde{\varepsilon},\tilde{\varepsilon}\rangle_{\rho}\right| \lesssim s^{-1}\int_{0}^{s^{\kappa}}\langle y\rangle^{C}\tilde{\varepsilon}^{2}\rho dy+A^{6}s^{-3-\frac{3}{\ell}}\int_{s^{\kappa}}^{\infty}\langle y\rangle^{C}e^{-\frac{|y|^{2}}{2\ell}}dy\] \[\lesssim s^{-\kappa}\|\tilde{\varepsilon}\|_{L_{\rho}^{2}}^{2}+A^{6}e^{-\eta s^{2\kappa}},\] for some \(\eta>0\). Arguing in a similar way to estimate the nonlinear term by using (4.21) and integration by parts, we obtain \[\left|\langle NL(\tilde{\varepsilon}),\tilde{\varepsilon}\rangle_{\rho}\right| \lesssim\Big{|}\int_{0}^{\infty}d\tilde{\varepsilon}^{3}\rho dy+\frac{1}{3}\int_{0}^{\infty}y\partial_{y}\tilde{\varepsilon}^{3}\rho dy\Big{|}\lesssim\int_{0}^{\infty}\langle y\rangle^{2}|\tilde{\varepsilon}|^{3}\rho dy\] \[\lesssim A^{3}s^{-1-\frac{3}{2\ell}}\int_{0}^{s^{\kappa}}\langle y\rangle^{C}|\tilde{\varepsilon}|^{2}\rho dy+A^{9}s^{-3-\frac{9}{2\ell}}\int_{s^{\kappa}}^{\infty}\langle y\rangle^{3C+d+1}e^{-\frac{|y|^{2}}{2\ell}}dy\] \[\lesssim A^{3}s^{-\kappa}\|\tilde{\varepsilon}\|_{L_{\rho}^{2}}^{2}+A^{9}e^{-\eta s^{2\kappa}}.\] For the error term, we use the decomposition (4.14), the orthogonality (3.35), the Cauchy-Schwarz inequality and the estimate (4.15) to obtain \[\left|\langle\tilde{E},\tilde{\varepsilon}\rangle_{\rho}\right|=\left|\langle\tilde{R},\tilde{\varepsilon}\rangle_{\rho}\right|\leq\frac{1}{4}\|\tilde{
\varepsilon}\|_{L_{\rho}^{2}}^{2}+C\|\tilde{R}\|_{L_{\rho}^{2}}^{2}\leq\frac{1}{4}\|\tilde{\varepsilon}\|_{L_{\rho}^{2}}^{2}+Cs^{-6}.\] Putting all the estimates together and taking \(s_{0}=s_{0}(A)\gg 1\) yields the desired estimate and concludes the proof of Lemma 4.5. ### Estimate for the intermediate region We perform an energy estimate to control the solution in the intermediate region \(y\lesssim s^{\frac{1}{2\ell}}(\xi\lesssim 1)\). We claim the following. **Lemma 4.6** (Energy estimate in the intermediate region).: _Let \(A\geq 1\), \(s\geq s_{0}=s_{0}(A)\gg 1\) and \(\varepsilon(s)\in\mathcal{S}_{A}(s)\). We have_ \[\frac{d}{ds}\|\hat{\varepsilon}(s)\|_{\flat}^{2} \leq-\delta\|\hat{\varepsilon}(s)\|_{\flat}^{2}+Cs^{-2-\frac{3}{\ell}}, \tag{4.22}\] \[\frac{d}{ds}\|y\partial_{y}\hat{\varepsilon}(s)\|_{\flat}^{2} \leq-\delta\|y\partial_{y}\hat{\varepsilon}(s)\|_{\flat}^{2}+C\big{(}\|\hat{\varepsilon}(s)\|_{\flat}^{2}+s^{-2-\frac{3}{\ell}}\big{)}, \tag{4.23}\] \[\frac{d}{ds}\|(y\partial_{y})^{2}\hat{\varepsilon}(s)\|_{\flat}^{2} \leq-\delta\|(y\partial_{y})^{2}\hat{\varepsilon}(s)\|_{\flat}^{2}+C\big{(}\|y\partial_{y}\hat{\varepsilon}(s)\|_{\flat}^{2}+\|\hat{\varepsilon}(s)\|_{\flat}^{2}+s^{-2-\frac{3}{\ell}}\big{)}, \tag{4.24}\] _where \(\delta>0\) and \(C=C(K)>0\) is independent of \(A\)._ Proof.: We begin with (4.22). 
To ease the notation, we write in this proof \[\chi_{{}_{K}}(y)=\chi(y),\quad\int_{0}^{\infty}\square dy=\int\square dy.\] From the equation (3.29), we have \[\frac{1}{2}\frac{d}{ds}\|\hat{\varepsilon}(s)\|_{\flat}^{2}=\int\big{(}1-\chi \big{)}\big{(}\mathscr{H}\hat{\varepsilon}+\hat{\mathcal{V}}\hat{\varepsilon} +NL(\hat{\varepsilon})+\hat{E}\big{)}\frac{\hat{\varepsilon}}{y^{4\ell+3}}dy.\] We rewrite from the definition (3.9) of \(\mathscr{H}\) and \(\Psi=Q+\hat{\Psi}\), \[\mathscr{H}+\hat{\mathcal{V}}=\Delta_{d+2}+\Big{(}\Psi-\frac{1}{2}\Big{)}y \partial_{y}+\Big{(}2d\Psi-1+y\partial_{y}\Psi\Big{)}.\] We compute by integration by parts and use the fact that the compact support of \(\partial_{y}\chi\) and \(\partial_{y}^{2}\chi\) is in \((K,2K)\), \[\int(1-\chi)\frac{\hat{\varepsilon}\Delta_{d+2}\hat{\varepsilon}} {y^{4\ell+3}}dy \leq-\int(1-\chi)\frac{|\partial_{y}\hat{\varepsilon}|^{2}}{y^{4 \ell+3}}dy+C\int\hat{\varepsilon}^{2}\Big{(}\frac{|\partial_{y}\chi|}{y^{4 \ell+4}}+\frac{|\partial_{y}^{2}\chi|}{y^{4\ell+3}}\Big{)}dy+C\int\frac{\hat{ \varepsilon}^{2}(1-\chi)}{y^{4\ell+5}}dy\] \[\leq-\int(1-\chi)\frac{|\partial_{y}\hat{\varepsilon}|^{2}}{y^{4 \ell+3}}dy+\frac{C}{K^{4\ell+3}}\int_{K}^{2K}|\hat{\varepsilon}|^{2}dy+\frac{C }{K^{2}}\|\hat{\varepsilon}\|_{\flat}^{2}.\] Using the relation \(\hat{\varepsilon}=\tilde{\varepsilon}+\hat{\varepsilon}_{\natural}\) and the bootstrap bounds in Definition 3.1, we obtain \[\int_{K}^{2K}|\hat{\varepsilon}|^{2}dy\leq\frac{e^{\frac{K^{2}}{ \ell}}}{K^{d+1}}\|\hat{\varepsilon}\|_{L^{2}_{\rho}}^{2}\leq\frac{e^{\frac{K^{ 2}}{\ell}}}{K^{d+1}}\big{(}\|\tilde{\varepsilon}\|_{L^{2}_{\rho}}^{2}+\|\hat{ \varepsilon}_{\natural}\|_{L^{2}_{\rho}}^{2}\big{)}\] \[\leq\frac{e^{\frac{K^{2}}{\ell}}}{K^{d+1}}\Big{(}\frac{A^{6}}{s^ {6}}+\frac{A^{4}\log^{2}s}{s^{4}}\Big{)}\leq s^{-2-\frac{3}{\ell}}. 
\tag{4.25}\] Using integration by parts, we derive \[\int\frac{(1-\chi)\hat{\varepsilon}}{y^{4\ell+3}}\left[\Big{(}\Psi-\frac{1}{2}\Big{)}y\partial_{y}\hat{\varepsilon}+\Big{(}2d\Psi-1+y\partial_{y}\Psi\Big{)}\hat{\varepsilon}\right]dy\] \[=-\int\frac{(1-\chi)\hat{\varepsilon}^{2}}{y^{4\ell+3}}\Big{[}(2\ell+1)\big{(}\frac{1}{2}-\Psi\big{)}-\frac{1}{2}y\partial_{y}\Psi+1-2d\Psi\Big{]}dy+\int\frac{\hat{\varepsilon}^{2}\partial_{y}\chi}{2y^{4\ell+2}}dy.\] We now use the monotonicity of \(Q\) stated in (3.6) and the fact that \(\|\hat{\Psi}(s)\|_{\infty}=\mathcal{O}(s^{-\frac{1}{2\ell}})\) to have \[\forall y\geq 0,\quad\frac{1}{2}-\Psi(y,s)\geq\frac{1}{2\ell}-Cs^{-\frac{1}{2\ell}},\quad\Psi(y,s)\leq\frac{1}{d}+Cs^{-\frac{1}{2\ell}},\quad y\partial_{y}\Psi(y,s)\leq Cs^{-\frac{1}{2\ell}},\] hence, \[\forall y\geq 0,\quad(2\ell+1)\big{(}\frac{1}{2}-\Psi\big{)}-\frac{1}{2}y\partial_{y}\Psi+1-2d\Psi\geq\frac{2\ell+1}{2\ell}-1-Cs^{-\frac{1}{2\ell}}=\frac{1}{2\ell}-Cs^{-\frac{1}{2\ell}}\geq\frac{1}{4\ell}.\] The term with cutoff \(\partial_{y}\chi\) is simply estimated as in (4.25), \[\Big{|}\int\frac{\hat{\varepsilon}^{2}\partial_{y}\chi}{2y^{4\ell+2}}dy\Big{|}\lesssim K^{-4\ell-2}\int_{K}^{2K}|\hat{\varepsilon}|^{2}dy\lesssim s^{-2-\frac{3}{\ell}}.\] Hence, by taking \(K\gg 1\) large, the full linear term is estimated by \[\int(1-\chi)\frac{\hat{\varepsilon}(\mathscr{H}+\hat{\mathcal{V}})\hat{\varepsilon}}{y^{4\ell+3}}dy\leq-\int(1-\chi)\frac{|\partial_{y}\hat{\varepsilon}|^{2}}{y^{4\ell+3}}dy-\frac{1}{6\ell}\|\hat{\varepsilon}(s)\|_{\flat}^{2}+Cs^{-2-\frac{3}{\ell}}. 
\tag{4.26}\] As for the nonlinear term, we estimate by using (4.3), \[\Big{|}\int(1-\chi)\frac{\hat{\varepsilon}NL(\hat{\varepsilon})}{y^{4\ell+3}}dy\Big{|}=\Big{|}\int(1-\chi)\frac{\hat{\varepsilon}^{2}(d\hat{\varepsilon}+y\partial_{y}\hat{\varepsilon})}{y^{4\ell+3}}dy\Big{|}\\ \leq(\|\hat{\varepsilon}(s)\|_{\infty}+\|y\partial_{y}\hat{\varepsilon}(s)\|_{\infty})\|\hat{\varepsilon}(s)\|_{\flat}^{2}\lesssim A^{8}s^{-\frac{1}{\ell}}\|\hat{\varepsilon}(s)\|_{\flat}^{2}.\] For the error term, we use the Cauchy-Schwarz inequality and (4.10), \[\int(1-\chi)\frac{|\hat{\varepsilon}||\hat{E}|}{y^{4\ell+3}}dy\leq\frac{1}{8\ell}\|\hat{\varepsilon}(s)\|_{\flat}^{2}+C\|\hat{E}\|_{\flat}^{2}\leq\frac{1}{8\ell}\|\hat{\varepsilon}(s)\|_{\flat}^{2}+Cs^{-2-\frac{3}{\ell}}.\] Collecting all the above bounds and taking \(K\gg 1\) and \(s_{0}(A)\gg 1\), we end up with \[\frac{1}{2}\frac{d}{ds}\|\hat{\varepsilon}(s)\|_{\flat}^{2}\leq\Big{(}-\frac{1}{6\ell}+\frac{1}{8\ell}+\frac{CA^{8}}{s^{\frac{1}{\ell}}}\Big{)}\|\hat{\varepsilon}(s)\|_{\flat}^{2}+Cs^{-2-\frac{3}{\ell}}\leq-\delta\|\hat{\varepsilon}(s)\|_{\flat}^{2}+Cs^{-2-\frac{3}{\ell}},\] for some \(\delta>0\), which concludes the proof of (4.22). The derivation of (4.23) and (4.24) is similar to that of (4.22). 
Indeed, the equations satisfied by \[g_{1}=y\partial_{y}\hat{\varepsilon},\quad g_{2}=y\partial_{y}g_{1},\] have the same forms as for \(\hat{\varepsilon}\) with an extra commutator, \[\partial_{s}g_{1} =(\mathscr{H}+\hat{\mathcal{V}})g_{1}+[y\partial_{y},\mathscr{H}+\hat{\mathcal{V}}]\hat{\varepsilon}+y\partial_{y}(NL(\hat{\varepsilon})+\hat{E}),\] \[\partial_{s}g_{2} =(\mathscr{H}+\hat{\mathcal{V}})g_{2}+[y\partial_{y},\mathscr{H}+\hat{\mathcal{V}}]g_{1}+y\partial_{y}([\mathscr{H}+\hat{\mathcal{V}},y\partial_{y}]\hat{\varepsilon})+(y\partial_{y})^{2}(NL(\hat{\varepsilon})+\hat{E}).\] The linear part is estimated as in (4.26), which provides a dissipative term and a coercive estimate with the constant \(-\frac{1}{6\ell}\). The commutator term \([y\partial_{y},\mathscr{H}+\hat{\mathcal{V}}]\hat{\varepsilon}\) is then controlled either by the dissipation or by the norm \(\|\hat{\varepsilon}(s)\|_{\flat}^{2}\), and similarly for the term \([y\partial_{y},\mathscr{H}+\hat{\mathcal{V}}]g_{1}\). The nonlinear term and the error term are estimated by integration by parts, the Cauchy-Schwarz inequality and the dissipation, together with the estimate (4.10) of the error term; we omit the details here. This concludes the proof of Lemma 4.6. ### Estimate for the outer region This section is devoted to the control of the remainder \(\varepsilon\) in the outer region \(y\gg s^{\frac{1}{2\ell}}(\xi\gg 1)\) based on the well-known semigroup properties of the Hermite operator \(\mathscr{L}_{\eta}=\Delta-\eta z\cdot\nabla\) with \(\eta=\frac{1}{2}\). 
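For completeness, the Gaussian identity invoked below in the proof of Lemma 4.7 is a two-line check: with \(\rho_{0}(z)=e^{-\frac{\eta|z|^{2}}{2}}\),

```latex
\nabla\rho_{0}=-\eta z\,\rho_{0},\qquad
\Delta\rho_{0}=\nabla\cdot(-\eta z\,\rho_{0})=\big(\eta^{2}|z|^{2}-d\eta\big)\rho_{0},\qquad
\eta z\cdot\nabla\rho_{0}=-\eta^{2}|z|^{2}\rho_{0},
```

so that \(\Delta\rho_{0}+\eta z\cdot\nabla\rho_{0}+d\eta\rho_{0}=0\) on \(\mathbb{R}^{d}\), which is exactly the equation used there to verify the kernel formula (4.30).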
Let \(\chi_{{}_{K}}\) be the cut-off function defined by (3.21) and recall the definition of \(\varepsilon^{\rm ex}\), \[\varepsilon^{\rm ex}(y,s)=\varepsilon(y,s)(1-\chi_{{}_{K}}(\xi)),\quad\xi=ys^{-\frac{1}{2\ell}}.\] In what follows, we write without distinguishing \[\varepsilon=\varepsilon(z,s)\equiv\varepsilon(y,s),\quad\varepsilon^{\rm ex}=\varepsilon^{\rm ex}(z,s)\equiv\varepsilon^{\rm ex}(y,s),\quad y=|z|,\ \ z\in\mathbb{R}^{d},\] and notice that \[z\cdot\nabla_{z}\varepsilon\equiv y\partial_{y}\varepsilon,\quad\nabla\cdot(z\varepsilon)=z\cdot\nabla\varepsilon+d\varepsilon\equiv y\partial_{y}\varepsilon+d\varepsilon.\] From (3.8), we have the equation satisfied by \(\varepsilon^{\rm ex}\), \[\partial_{s}\varepsilon^{\rm ex}=\big{(}\mathscr{L}_{\eta}-{\rm Id}\big{)}\varepsilon^{\rm ex}+F+\mathcal{E}^{bd},\quad\eta=\frac{1}{2}, \tag{4.27}\] where \[F =\big{(}1-\chi_{{}_{K}}\big{)}\big{[}\big{(}2y^{-2}+Q\big{)}y\partial_{y}\varepsilon+(2dQ+y\partial_{y}Q)\varepsilon+NL(\varepsilon)+E\big{]},\] \[=\big{(}1-\chi_{{}_{K}}\big{)}\big{[}P_{1}y\partial_{y}\varepsilon+P_{2}\varepsilon+NL(\varepsilon)+E\big{]}=\big{(}1-\chi_{{}_{K}}\big{)}\hat{F}, \tag{4.28}\] \[\mathcal{E}^{bd} =\Big{(}-\partial_{s}\chi_{{}_{K}}+\Delta\chi_{{}_{K}}-\frac{1}{2\ell}y\partial_{y}\chi_{{}_{K}}\Big{)}\varepsilon+2\partial_{y}\chi_{{}_{K}}\partial_{y}\varepsilon. \tag{4.29}\] We restate some well-known semigroup properties of the Hermite operator \(\mathscr{L}_{\eta}=\Delta-\eta z\cdot\nabla\) acting on general functions (not necessarily radially symmetric) from \(\mathbb{R}^{d}\) to \(\mathbb{R}\). **Lemma 4.7** (Properties of the semigroup \(e^{s\mathscr{L}_{\eta}}\)).: _The kernel of the semigroup \(e^{s\mathscr{L}_{\eta}}\) is given by_ \[e^{s\mathscr{L}_{\eta}}(z,\xi)=\frac{1}{[2\pi\eta(1-e^{-s})]^{\frac{d}{2}}}\exp\Big{(}-\frac{\eta}{2}\frac{|ze^{-s/2}-\xi|^{2}}{(1-e^{-s})}\Big{)}. 
\tag{4.30}\] _The action of \(e^{s\mathscr{L}_{\eta}}\) on the function \(g:\mathbb{R}^{d}\to\mathbb{R}\) is defined by_ \[e^{s\mathscr{L}_{\eta}}g(z)=\int_{\mathbb{R}^{d}}e^{s\mathscr{L}_{\eta}}(z,\xi)g(\xi)d\xi.\] _We have the following properties: (i) \(\big{\|}e^{s\mathscr{L}_{\eta}}g\big{\|}_{\infty}\leq\|g\|_{\infty}\) for all \(g\in L^{\infty}(\mathbb{R}^{d})\). (ii) \(\big{\|}e^{s\mathscr{L}_{\eta}}\nabla g\big{\|}_{\infty}\leq\frac{C}{\sqrt{1-e^{-s}}}\|g\|_{\infty}\) for all \(g\in L^{\infty}(\mathbb{R}^{d})\)._ Proof.: The formula (4.30) can be verified by a direct check after a simple change of variable, thanks to the fact that the function \(\rho_{0}(z)=e^{-\frac{\eta|z|^{2}}{2}}\) satisfies \(\Delta\rho_{0}+\eta z\cdot\nabla\rho_{0}+d\eta\rho_{0}=0\) for all \(z\in\mathbb{R}^{d}\). The estimates in (i)-(ii) are straightforward from (4.30). **Lemma 4.8** (Estimates in the outer region).: _For \(A\geq 1\), \(s_{0}=s_{0}(A)\gg 1\) and \(\varepsilon(s)\in\mathcal{S}_{A}(s)\), we have for all \(\tau\in[s_{0},s]\),_ \[j=0,1,\quad\|(y\partial_{y})^{j}\varepsilon^{\rm ex}(s)\|_{L^{\infty}}\leq e^{-(s-\tau)}\|(y\partial_{y})^{j}\varepsilon^{\rm ex}(\tau)\|_{L^{\infty}}+\frac{C(K)A^{3+j}}{\tau^{\frac{1}{\ell}}}(1+s-\tau), \tag{4.31}\] _and_ \[\|y\varepsilon^{\rm ex}(s)\|_{L^{\infty}}\leq e^{-(s-\tau)}\|y\varepsilon^{\rm ex}(\tau)\|_{L^{\infty}}+\frac{C(K)A^{3}}{\tau^{\frac{1}{2\ell}}}(1+s-\tau). \tag{4.32}\] Proof.: We use Duhamel's formula and item (i) of Lemma 4.7 to write from (4.27) for all \(\tau\in[s_{0},s]\), \[\|\varepsilon^{\rm ex}(s)\|_{L^{\infty}}\leq e^{-(s-\tau)}\|\varepsilon^{\rm ex}(\tau)\|_{L^{\infty}}+\int_{\tau}^{s}e^{-(s-s^{\prime})}\big{(}\|F(s^{\prime})\|_{L^{\infty}}+\|\mathcal{E}^{bd}(s^{\prime})\|_{L^{\infty}}\big{)}ds^{\prime}.\] Due to the cut-off \(\chi_{{}_{K}}(\xi)\), the boundary term \(\mathcal{E}^{bd}\) is located in the zone \(Ks^{\frac{1}{2\ell}}\leq y\leq 2Ks^{\frac{1}{2\ell}}(K\leq\xi\leq 2K)\) 
and it is bounded using the estimates from the intermediate region. In particular, we have from (4.2) and the bootstrap bounds (3.41) and (3.42) the following estimate for \(j=0,1\), \[K\leq\xi\leq 2K,\quad|(y\partial_{y})^{j}\varepsilon(y,s)|\leq|(y\partial_{y})^{j}\hat{\varepsilon}(y,s)|+|(y\partial_{y})^{j}\hat{\varepsilon}_{\natural}(y,s)|\leq C(K)A^{3}s^{-\frac{1}{\ell}}.\] Hence, from the definition (4.29), we obtain \[\|\mathcal{E}^{bd}(s)\|_{L^{\infty}}\lesssim\|\varepsilon(s)\|_{L^{\infty}(K\leq\xi\leq 2K)}+\|y\partial_{y}\varepsilon(s)\|_{L^{\infty}(K\leq\xi\leq 2K)}\lesssim C(K)A^{3}s^{-\frac{1}{\ell}}.\] For the term \(F\), we use the decay of \(Q\), that is, \[\forall y\geq Ks^{\frac{1}{2\ell}},\quad|Q(\xi)|+|\xi\partial_{\xi}Q(\xi)|\lesssim\xi^{-2}\lesssim K^{-2}s^{-\frac{1}{\ell}},\] and from the definition (3.12) of \(NL(\varepsilon)\) and the bootstrap bound (3.45) and the bound (4.6), we get \[\|F\|_{L^{\infty}} \lesssim s^{-\frac{1}{\ell}}\big{(}\|\varepsilon^{\rm ex}\|_{L^{\infty}}+\|y\partial_{y}\varepsilon^{\rm ex}\|_{L^{\infty}}\big{)}+\|\varepsilon^{\rm ex}\|_{L^{\infty}}^{2}+\|\varepsilon^{\rm ex}\|_{L^{\infty}}\|y\partial_{y}\varepsilon^{\rm ex}\|_{L^{\infty}}+\|E\|_{L^{\infty}}\] \[\lesssim A^{9}s^{-\frac{2}{\ell}}+s^{-\frac{1}{\ell}}\lesssim s^{-\frac{1}{\ell}},\] for \(s_{0}(A)\gg 1\) so that \(A^{9}s_{0}^{-\frac{1}{\ell}}\lesssim 1\). We gather all these estimates and simply bound \(\int_{\tau}^{s}e^{-(s-s^{\prime})}s^{\prime-\frac{1}{\ell}}ds^{\prime}\lesssim\tau^{-\frac{1}{\ell}}(1+s-\tau)\) to conclude the estimate (4.31) for \(j=0\). The proof of (4.31) for the case \(j=1\) is similar to the case \(j=0\), using (ii) of Lemma 4.7. 
The only difference is due to the extra commutator term in the equation satisfied by \[g^{\rm ex}=z\cdot\nabla\varepsilon^{\rm ex}\equiv y\partial_{y}\varepsilon^{\rm ex},\] which reads as \[\partial_{s}g^{\rm ex}=\big{(}\mathscr{L}_{\eta}-\mathrm{Id}\big{)}g^{\rm ex}+[z\cdot\nabla,\Delta_{d+2}]\varepsilon^{\rm ex}+z\cdot\nabla(F+\mathcal{E}^{bd}), \tag{4.33}\] where \[[z\cdot\nabla,\Delta_{d+2}]\varepsilon^{\rm ex}=-2\Delta\varepsilon^{\rm ex}=-2\nabla\cdot\Big{(}\frac{zg^{\rm ex}}{y^{2}}\Big{)}.\] Let \[g=z\cdot\nabla\varepsilon\equiv y\partial_{y}\varepsilon,\] we write from the definition (4.28) of \(F\), \[z\cdot\nabla F=-y\partial_{y}\chi_{{}_{K}}(\hat{F}+yP_{1}g)+\partial_{y}(yP_{1}g(1-\chi_{{}_{K}}))\] \[\qquad\qquad+(1-\chi_{{}_{K}})\Big{[}(y\partial_{y}P_{1}+P_{2}-yP_{1})g+y\partial_{y}P_{2}\varepsilon+E+NL(\varepsilon)\Big{]},\] and from the definition (3.12) of \(NL\), \[(1-\chi_{{}_{K}})y\partial_{y}NL(\varepsilon)=(1-\chi_{{}_{K}})\Big{[}(2d-1)\varepsilon g+g^{2}\Big{]}-y\partial_{y}\chi_{{}_{K}}\varepsilon g+\partial_{y}(y\varepsilon^{\rm ex}g).\] Hence, \[z\cdot\nabla F=\partial_{y}\big{(}yP_{1}g(1-\chi_{{}_{K}})+y\varepsilon^{\rm ex}g\big{)}+G, \tag{4.34}\] where we can bound \(G\) in \(L^{\infty}\) from the bootstrap estimates (3.45), the decay of \(Q\), the support of \(\chi_{{}_{K}}\) and its derivatives, \[\|G(s)\|_{L^{\infty}}\lesssim s^{-\frac{1}{\ell}},\quad\|yP_{1}g(1-\chi_{{}_{K}})+y\varepsilon^{\rm ex}g\|_{L^{\infty}}\lesssim A^{9}s^{-\frac{3}{2\ell}}\lesssim s^{-\frac{1}{\ell}}.\] Similarly, we have \[z\cdot\nabla\mathcal{E}^{bd}=2\partial_{y}(\partial_{y}\chi_{{}_{K}}g)+G^{bd},\] where \(G^{bd}\) and \(\partial_{y}\chi_{{}_{K}}g\) have supports on \(\{Ks^{\frac{1}{2\ell}}\leq y\leq 2Ks^{\frac{1}{2\ell}}\}\) and can be bounded using the estimate (4.2), \[\|G^{bd}(s)\|_{L^{\infty}}+\|\partial_{y}\chi_{{}_{K}}g\|_{L^{\infty}}\lesssim A^{3}s^{-\frac{1}{\ell}}.\] We now use Duhamel's formula applied to (4.33), Lemma 4.7 and (4.6) 
to get \[\|g^{\rm ex}(s)\|_{L^{\infty}} \leq e^{-(s-\tau)}\|g^{\rm ex}(\tau)\|_{L^{\infty}}+\int_{\tau}^{s}\frac{e^{-(s-s^{\prime})}}{\sqrt{1-e^{-(s-s^{\prime})}}}\Big{[}\|y^{-1}g^{\rm ex}\|_{L^{\infty}}+\|yP_{1}g(1-\chi_{{}_{K}})+y\varepsilon^{\rm ex}g\|_{L^{\infty}}+\|\partial_{y}\chi_{{}_{K}}g\|_{L^{\infty}}\Big{]}ds^{\prime}\] \[\qquad\qquad+\int_{\tau}^{s}e^{-(s-s^{\prime})}\Big{[}\|G(s^{\prime})\|_{L^{\infty}}+\|G^{bd}(s^{\prime})\|_{L^{\infty}}+\|y\partial_{y}E(s^{\prime})\|_{L^{\infty}}\Big{]}ds^{\prime}\] \[\lesssim e^{-(s-\tau)}\|g^{\rm ex}(\tau)\|_{L^{\infty}}+A^{3}\int_{\tau}^{s}\frac{e^{-(s-s^{\prime})}}{\sqrt{1-e^{-(s-s^{\prime})}}}(s^{\prime})^{-\frac{1}{\ell}}ds^{\prime}+\int_{\tau}^{s}e^{-(s-s^{\prime})}(s^{\prime})^{-\frac{1}{\ell}}ds^{\prime}\] \[\lesssim e^{-(s-\tau)}\|g^{\rm ex}(\tau)\|_{L^{\infty}}+A^{3}\tau^{-\frac{1}{\ell}}(1+s-\tau).\] This concludes the proof of (4.31) for \(j=1\). The estimate (4.32) for \(\|y\varepsilon^{\rm ex}\|_{L^{\infty}}\) follows the same proof as for (4.31), except that we use the bound \(\|yE(1-\chi_{{}_{K}})\|_{L^{\infty}}\lesssim s^{-\frac{1}{2\ell}}\) on the error term. This completes the proof of Lemma 4.8. ### Proof of Proposition 3.4 and Theorem 1.1 In this section we give the proof of Proposition 3.4 to complete the proof of Proposition 3.3. Theorem 1.1 is a direct consequence of Proposition 3.3. Proof of Proposition 3.4.: The basic idea is to improve the bootstrap estimates given in Definition 3.1, except for the first \(\ell\) modes \((\hat{\varepsilon}_{k})_{0\leq k\leq\ell-1}\). Regarding the constants, we fix them in the following order: we fix \(K\gg 1\) a large constant independent of \(A\), then \(A=A(K)\gg 1\), then \(s_{0}=s_{0}(A)\gg 1\). 
We recall from the assumption that \[\varepsilon(s)\in\mathcal{S}_{A}(s)\quad\forall s\in[s_{0},s_{1}]\quad\text{and}\quad\varepsilon(s_{1})\in\partial\mathcal{S}_{A}(s_{1}).\] (i) (_Improve bootstrap estimates_) Let us begin with \(\hat{\varepsilon}_{\ell}\) and suppose by contradiction that there is \(\bar{s}\in[s_{0},s_{1}]\) such that \[|\hat{\varepsilon}_{\ell}(s)|<\frac{A^{2}\log s}{s^{2}}\;\forall s\in[s_{0},\bar{s}),\quad\hat{\varepsilon}_{\ell}(\bar{s})=\pm\frac{A^{2}\log\bar{s}}{\bar{s}^{2}}.\] Then, by equation (4.16) (we consider the case \(\hat{\varepsilon}_{\ell}(\bar{s})>0\); the negative case is similar), \[-\frac{2A^{2}\log\bar{s}}{\bar{s}^{3}}+\frac{C}{\bar{s}^{3}}\geq\hat{\varepsilon}_{\ell}^{\prime}(\bar{s})\geq\frac{d}{ds}\frac{A^{2}\log s}{s^{2}}\Big{|}_{\bar{s}}=\frac{A^{2}}{\bar{s}^{3}}-\frac{2A^{2}\log\bar{s}}{\bar{s}^{3}},\] which cannot happen for \(A\) large enough. Therefore, \(\hat{\varepsilon}_{\ell}(s)\) never touches its boundary, \[|\hat{\varepsilon}_{\ell}(s_{1})|<\frac{A^{2}\log s_{1}}{s_{1}^{2}}.\] As for the modes \(\hat{\varepsilon}_{k}\) with \(k=\ell+1,\cdots,2\ell-1\), we integrate the ODE (4.16) forward in time and use the fact that the eigenvalue is negative to conclude that \(\hat{\varepsilon}_{k}(s)\) cannot touch its boundary either. The same holds for \(\|\tilde{\varepsilon}(s)\|_{L^{2}_{\rho}}\) and \(\|(y\partial_{y})^{j}\hat{\varepsilon}\|_{\flat}\), thanks to the energy estimates derived in Lemmas 4.5 and 4.6. 
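Concretely, the way an energy inequality like (4.19) improves the corresponding bootstrap bound is a standard Gronwall argument; as a sketch (with the constants of Lemma 4.5), integrating (4.19) between \(s_{0}\) and \(s\) gives

```latex
\|\tilde{\varepsilon}(s)\|_{L^{2}_{\rho}}^{2}
\leq e^{-\frac{1}{2}(s-s_{0})}\,\|\tilde{\varepsilon}(s_{0})\|_{L^{2}_{\rho}}^{2}
+C\int_{s_{0}}^{s}e^{-\frac{1}{2}(s-s')}\,(s')^{-6}\,ds'
\leq e^{-\frac{1}{2}(s-s_{0})}\,\|\tilde{\varepsilon}(s_{0})\|_{L^{2}_{\rho}}^{2}
+C_{1}\,s^{-6},
```

and since \(C_{1}\) is independent of \(A\), the right-hand side stays strictly below the bootstrap threshold \(A^{6}s^{-6}\) of (3.43) once \(A\) is large and \(s_{0}=s_{0}(A)\) is chosen large enough.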
The improvement of \(\|(y\partial_{y})^{j}\varepsilon^{\mathrm{ex}}\|_{L^{\infty}}\) and \(\|y\varepsilon^{\mathrm{ex}}\|_{L^{\infty}}\) follows from Lemma 4.8 by taking \(\lambda=\log A\gg 1\) and \(s_{0}\geq\lambda\) such that for all \(\tau\geq s_{0}\) and \(s\in[\tau,\tau+\lambda]\), we have \[\tau\leq s\leq\tau+\lambda\leq\tau+s_{0}\leq 2\tau,\quad\text{hence},\quad\frac{1}{2\tau}\leq\frac{1}{s}\leq\frac{1}{\tau}\leq\frac{2}{s}.\] This gives us the bound \[\frac{C(K)A^{3+j}}{\tau^{\frac{1}{\ell}}}(1+s-\tau)\lesssim\frac{C(K)A^{3+j}\log A}{s^{\frac{1}{\ell}}}<\frac{A^{4+j}}{s^{\frac{1}{\ell}}},\] for \(A\) large enough. We conclude that \(\varepsilon(s_{1})\) can only touch its boundary \(\partial\mathcal{S}_{A}(s_{1})\) at the first \(\ell\) modes \((\hat{\varepsilon}_{k})_{0\leq k\leq\ell-1}\). (ii) (_Transverse crossing_) The estimate (3.48) follows from a direct computation thanks to (4.16), \[\frac{1}{2}\frac{d}{ds}\sum_{k=0}^{\ell-1}\hat{\varepsilon}_{k}^{2}(s_{1})=\sum_{k=0}^{\ell-1}\Big{[}(1-k/\ell)\hat{\varepsilon}_{k}^{2}(s_{1})+\mathcal{O}(s_{1}^{-2}|\hat{\varepsilon}_{k}(s_{1})|)\Big{]}\geq\frac{A^{4}-CA^{2}}{s_{1}^{4}}>0, \tag{4.35}\] for \(A\) large enough. This completes the proof of Proposition 3.4 as well as Proposition 3.3. Proof of Theorem 1.1.: (i) and (ii) follow from Definition 3.1 of the shrinking set and the relations \(w=dv+y\partial_{y}v\) and \(\phi_{2\ell}=d\varphi_{2\ell}+y\partial_{y}\varphi_{2\ell}\). As for (iii), we use the same argument as in Herrero-Velazquez [33] for the classical nonlinear heat equation (see also Bebernes-Bricher [1], Zaag [61], [26] for a similar approach); we only sketch the computation for the reader's convenience. 
We introduce the auxiliary function \[g(x_{0},\xi,\tau)=(T-t_{0})u(x,t),\quad x=x_{0}+\xi\sqrt{T-t_{0}},\quad t=t_{0}+ \tau(T-t_{0}),\] where \(t_{0}=t_{0}(x_{0})\) is uniquely determined by \[|x_{0}|=K_{0}\sqrt{T-t_{0}}|\log(T-t_{0})|^{\frac{1}{2\ell}},\quad K_{0}\gg 1.\] We have the relation \[\log(T-t_{0})\sim 2\log|x_{0}|,\quad T-t_{0}\sim\frac{|x_{0}|^{2}}{K_{0}^{2} \big{(}2|\log|x_{0}||\big{)}^{\frac{1}{\ell}}}.\] From (1.10), we have \[u^{*}(x_{0})=\lim_{t\to T}u(x,t)=(T-t_{0})^{-1}\lim_{\tau\to 1}g(x_{0},0,\tau)=(T-t_{0})^ {-1}\hat{g}_{K_{0}}(1).\] We compute from (2.31), \[\hat{g}_{K_{0}}(1)=F(K_{0})\sim(d-2)c_{\ell}^{-\frac{1}{\ell}}K_{0}^{-2},\] which gives \[u^{*}(x_{0})\sim(d-2)\left(\frac{2}{c_{\ell}}\right)^{\frac{1}{\ell}}\frac{| \log|x_{0}||^{\frac{1}{\ell}}}{|x_{0}|^{2}}\quad\text{as}\ \ |x_{0}|\to 0.\] This completes the proof of Theorem 1.1.
2303.18182
Parisi's hypercube, Fock-space frustration and near-AdS$_2$/near-CFT$_1$ holography
We consider a model of Parisi where a single particle hops on an infinite-dimensional hypercube, under the influence of a uniform but disordered magnetic flux. We reinterpret the hypercube as the Fock-space graph of a many-body Hamiltonian and the flux as a frustration of the return amplitudes in Fock space. We will identify the set of observables that have the same correlation functions as the double-scaled Sachdev-Ye-Kitaev (DS-SYK) model, and hence the hypercube model is an equally good quantum model for near-AdS$_2$/near-CFT$_{1}$ (NAdS$_2$/NCFT$_1$) holography. Unlike the SYK model, the hypercube Hamiltonian is not $p$ local. Instead, the SYK model can be understood as a Fock-space model with similar frustrations. Hence we propose this type of Fock-space frustration as the broader characterization for NAdS$_2$/NCFT$_1$ microscopics, which encompasses the hypercube and the DS-SYK models as two specific examples. We then speculate on the possible origin of such frustrations.
Micha Berkooz, Yiyang Jia, Navot Silberstein
2023-03-31T16:24:00Z
http://arxiv.org/abs/2303.18182v2
# Parisi's hypercube, Fock-space frustration and near-AdS\({}_{2}\)/near-CFT\({}_{1}\) holography ###### Abstract We consider a model of Parisi where a single particle hops on an infinite-dimensional hypercube, under the influence of a uniform but disordered magnetic flux. We reinterpret the hypercube as the Fock-space graph of a many-body Hamiltonian, and the flux as a frustration of the return amplitudes in Fock space. We will show that this model has the same correlation functions as the double-scaled Sachdev-Ye-Kitaev (DS-SYK) model, and hence is an equally good quantum model for near-AdS\({}_{2}\)/near-CFT\({}_{1}\) (NAdS\({}_{2}\)/NCFT\({}_{1}\)) holography. Unlike the SYK model, the hypercube Hamiltonian is not \(p\)-local. Instead, the SYK model can be understood as a Fock-space model with similar frustrations. Hence we propose this type of Fock-space frustration as the broader characterization for NAdS\({}_{2}\)/NCFT\({}_{1}\) microscopics, and speculate on the possible origin of such frustrations. Two-dimensional nearly-anti de Sitter (NAdS\({}_{2}\)) spacetime arises ubiquitously as the near-horizon geometry of near-extremal black holes in higher dimensions. In holographic theories, this means that in the appropriate near-extremal states an AdS\({}_{D+1}\)/CFT\({}_{D}\) (\(D>1\)) duality flows to a near-AdS\({}_{2}\)/near-CFT\({}_{1}\) (NAdS\({}_{2}\)/NCFT\({}_{1}\)) duality at low energy. In fact, considerable progress has been made by directly constructing microscopic models for NCFT\({}_{1}\), most notable of which is the Sachdev-Ye-Kitaev (SYK) model [1; 2; 3; 4; 5]. In the SYK model, a system of \(N\) Majorana fermions interact through a \(p\)-body interaction in which a fermion can couple to any of the rest. At low energy, the model's thermodynamics and correlators reproduce those of the Jackiw-Teitelboim gravity - a dilaton gravity theory which can arise by dimensionally reducing higher-dimensional gravity to NAdS\({}_{2}\) spacetime [6]. 
In addition, the SYK model is also important as a solvable model of quantum chaos in \(p\)-local systems per se (and can probably also be realized experimentally). Its low energy solution is obtained by using Schwinger-Dyson equations in the limit \(N\to\infty\) with \(p\) fixed. But the double-scaled SYK (DS-SYK) limit \(p,N\to\infty\) with \(\lambda=2p^{2}/N\) fixed can be solved exactly in \(\lambda\) for all energy scales using combinatorics. The latter technique also allows for the reconstruction of the AdS\({}_{2}\) dynamics (and generalizes it to a \(q\)-deformed AdS\({}_{2}\)) [7; 8; 9; 10]. There are, however, additional models that are not even \(p\)-local but have the same combinatorial solution. The simplest such example is the hypercube model of Parisi [11], made out of \(d\) qubits, along with a Hamiltonian with interactions which couple together all degrees of freedom in each term. It is therefore interesting to identify which microscopic aspects of the SYK model are essential and which are spurious for the NAdS\({}_{2}\)/NCFT\({}_{1}\) holography, as well as clarify whether quantum chaos is similar in these models. We will try to pinpoint exactly what the two models have in common, in terms of dynamics and in terms of the appropriate set of observables, and use it as a stepping stone toward a broader characterization of NAdS\({}_{2}\)/NCFT\({}_{1}\) microscopics. We can then hope it is this broader characterization that survives the examination from the AdS\({}_{D+1}\)/CFT\({}_{D}\) (\(D>1\)) viewpoint. More details will be covered in a companion paper [12]. Parisi introduced a \(d\)-dimensional hypercubic model where there are superconducting dots living on the hypercube vertices, whose energy is frustrated by a uniform (position-independent) but disordered magnetic flux. 
Here we remove the superconducting dots of the original theory, and the physics becomes that of a single particle hopping on the hypercube vertices under the influence of the same flux. By doing so we drastically change the role of the flux: the flux frustrates energies in the original theory but now frustrates the return amplitudes of a hopping particle. The former tends to increase the glassiness of a system, and the latter tends to delocalize wavefunctions and hence to thermalize a system. The hypercube has \(2^{d}\) vertices which we denote by \(\{-1/2,+1/2\}^{d}\), with \(\sigma_{\mu}^{3}/2\) (\(\mu=1,\ldots,d\)) being the position operators of the particle. We will use a gauge that is different from Parisi's original choice. The point is to use a rotationally covariant gauge so that the insertions of probe operators become much simpler. Our Hamiltonian is \[H=-\frac{1}{\sqrt{d}}\sum_{\mu=1}^{d}D_{\mu}:=-\frac{1}{\sqrt{d}}\sum_{\mu=1}^{d}(T_{\mu}^{+}+T_{\mu}^{-}), \tag{1}\] where \(T_{\mu}^{-}=(T_{\mu}^{+})^{\dagger}\) and \[T_{\mu}^{+}=\prod_{\nu=1,\nu\neq\mu}^{d}e^{\frac{i}{2}F_{\mu\nu}\sigma^{3}_{\nu}}\sigma_{\mu}^{+},\quad\sigma_{\mu}^{+}=\frac{\sigma_{\mu}^{1}+i\sigma_{\mu}^{2}}{2}. \tag{2}\] Here \(\sigma_{\mu}^{i}\) (\(i=1,2,3\)) is the \(i\)-th Pauli matrix acting on the \(\mu\)-th qubit, and \(F_{\mu\nu}\) is the antisymmetric tensor of the background flux which takes values in \((-\pi,\pi]\). We have chosen a normalization for \(H\) such that it has a compact spectral support at \(d=\infty\) (as we will also do for DS-SYK). The fluxes \(F_{\mu\nu}\) are quench-disordered, and independently and identically distributed with the additional requirement that (\(\left\langle\cdot\cdot\right\rangle\) stands for an ensemble average) \[\left\langle\sin F_{\mu\nu}\right\rangle=0. \tag{3}\] The distribution is otherwise completely general and \[q:=\left\langle\cos F_{\mu\nu}\right\rangle \tag{4}\] is a tunable parameter. 
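For a small number of qubits the Hamiltonian (1)-(2) is easy to build explicitly. The sketch below (our own illustration, not code from the paper) constructs \(H\) for \(d=4\) and verifies some structural properties that follow directly from the definitions: each \(D_{\mu}\) is Hermitian and squares to the identity, \(H\) is Hermitian, and the plaquette holonomy \(D_{\nu}D_{\mu}D_{\nu}D_{\mu}\) flips each of the two spins twice and is therefore diagonal (and unitary) in the \(\sigma^{3}\) product basis.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # a small hypercube for an explicit check; the model of interest has d -> infinity

# Single-qubit Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, mu):
    """Embed a single-qubit operator acting on qubit mu into the 2^d-dimensional space."""
    out = np.array([[1.0 + 0j]])
    for nu in range(d):
        out = np.kron(out, op if nu == mu else I2)
    return out

S3 = [embed(s3, mu) for mu in range(d)]
SP = [embed((s1 + 1j * s2) / 2, mu) for mu in range(d)]  # sigma^+_mu

# Antisymmetric disordered flux F_{mu nu}, drawn uniformly from (-pi, pi)
upper = np.triu(rng.uniform(-np.pi, np.pi, size=(d, d)), 1)
F = upper - upper.T

def T_plus(mu):
    """Forward hopping in direction mu with the disordered phase, Eq. (2)."""
    P = np.eye(2 ** d, dtype=complex)
    for nu in range(d):
        if nu == mu:
            continue
        # e^{(i/2) F_{mu nu} sigma^3_nu} is diagonal: cos(F/2) + i sin(F/2) sigma^3_nu
        P = P @ (np.cos(F[mu, nu] / 2) * np.eye(2 ** d)
                 + 1j * np.sin(F[mu, nu] / 2) * S3[nu])
    return P @ SP[mu]

D = [T_plus(mu) + T_plus(mu).conj().T for mu in range(d)]  # D_mu = T^+_mu + T^-_mu
H = -sum(D) / np.sqrt(d)  # Eq. (1)

# Structural checks: each D_mu squares to the identity and H is Hermitian
assert all(np.allclose(Dm @ Dm, np.eye(2 ** d)) for Dm in D)
assert np.allclose(H, H.conj().T)

# The holonomy W = D_nu D_mu D_nu D_mu returns every basis state to itself,
# so it is diagonal and unitary in the sigma^3 basis
W = D[1] @ D[0] @ D[1] @ D[0]
assert np.allclose(W, np.diag(np.diag(W)))
assert np.allclose(np.abs(np.diag(W)), 1.0)
```

These checks only probe properties that follow immediately from (1)-(2); the precise dependence of the plaquette phase on \(F_{\mu\nu}\) is the content of the holonomy formula discussed in the text.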
\(T_{\mu}^{+}\) is a hopping operator that transports the particle in the forward \(\mu\) direction, while assigning to it a random phase due to the disordered flux. The holonomy \(T_{\nu}^{-}T_{\mu}^{-}T_{\nu}^{+}T_{\mu}^{+}\) in the \(\mu\nu\) plane then gives the return amplitude of hopping counter-clockwise around this plaquette. However, it is more convenient to study the holonomy in terms of the \(D_{\mu}\) operators, which combine the forward and backward hoppings: \[\begin{split}&\mathcal{W}_{\mu\nu}=D_{\nu}D_{\mu}D_{\nu}D_{\mu}= \cos F_{\mu\nu}-i\sin F_{\mu\nu}\sigma_{\mu}^{3}\sigma_{\nu}^{3},\\ &\left\langle\mathcal{W}_{\mu\nu}\right\rangle=q.\end{split} \tag{5}\] We can also think of \(\mathcal{W}_{\mu\nu}\) as the mutual frustration of different terms in the Hamiltonian. We can view the Hamiltonian (1) as a many-body system of \(d\) interacting qubits with the hypercube being its Fock-space graph [13]: if we view each basis vector as a point, and connect two points whenever the corresponding basis vectors have a nonzero transition amplitude, then we get back to the picture of a single particle hopping on a hypercube. The many-bodyness is encoded in the requirement that a Fock-space graph should have a diverging vertex degree (\(d\rightarrow\infty\)). In this manner we have reinterpreted the hypercube as living in a Hilbert space rather than real space. The spectrum of the model is solved by the moment method via \(q\)-deformed oscillators [11; 14; 15]. The \(2k\)-th moment of the hypercube model can be written as \[2^{-d}\left\langle\mathrm{Tr}H^{2k}\right\rangle= 2^{-d}d^{-k}\sum_{\left\{\mu_{i}\right\}}\langle\mathrm{Tr}\ D_{\mu_{1}}D_{ \mu_{2}}\ \ldots D_{\mu_{2k}}\rangle. \tag{6}\] Since the trace is a sum of return amplitudes, a forward hopping must be paired with a backward hopping, which means the subscripts \(\mu_{1},\ldots,\mu_{2k}\) must form \(k\) pairs. 
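This pairing structure can be checked directly at small \(d\). The sketch below is our own minimal illustration, not code from the paper: the helper `hop_matrices` (a hypothetical name) builds the hopping operators of Eqs. (1)-(2) as dense \(2^{d}\times 2^{d}\) matrices, with \(\sigma^{3}_{\nu}\) eigenvalues \(\pm 1\) read off the bits of the basis-state label.

```python
import numpy as np

def hop_matrices(F):
    """Hopping operators D_mu = T_mu^+ + T_mu^- of the hypercube model.

    F is the antisymmetric d x d flux tensor; basis states are labeled by
    bit strings, with sigma^3_nu eigenvalue +1 when bit nu is set."""
    d = F.shape[0]
    dim = 2 ** d
    Ds = [np.zeros((dim, dim), dtype=complex) for _ in range(d)]
    for s in range(dim):
        spins = [1 if (s >> k) & 1 else -1 for k in range(d)]
        for mu in range(d):
            if spins[mu] == -1:                      # sigma_mu^+ flips qubit mu up
                s_up = s | (1 << mu)
                phase = np.exp(0.5j * sum(F[mu, nu] * spins[nu]
                                          for nu in range(d) if nu != mu))
                Ds[mu][s_up, s] = phase              # T_mu^+
                Ds[mu][s, s_up] = phase.conjugate()  # T_mu^- = (T_mu^+)^dagger
    return Ds
```

Because each \(D_{\mu}\) flips qubit \(\mu\) exactly once, a term in \(\mathrm{Tr}\,H^{2k}\) survives only when every index appears an even number of times; at \(k=2\) this gives \(D_{\mu}^{2}=1\), a second moment that is exactly \(1\), and a fourth moment that decomposes exactly into the three pairing patterns plus the coincident-index term.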
At leading order in \(1/d\), we can focus on the case where the \(k\) paired indices are all distinct (any further coincidence among the \(k\) pairs will be suppressed by \(1/d\)). We can use chord diagrams to represent such pairings: draw \(2k\) points on a circle representing the subscripts, and connect two points by a chord if the corresponding subscripts are paired. We illustrate one example in the left panel of Fig. 1. To evaluate a chord diagram, we can move the operators until the paired operators become adjacent to each other, and in the process we generate phase terms by applying Eq. (5) repeatedly (and using that \(D_{\mu}^{2}=1\)). The result is that we pick up an independent \(\cos F\) for each interlacing ordering of two pairs of hoppings, and the moments are \[2^{-d}\left\langle\mathrm{Tr}H^{2k}\right\rangle=\sum_{\mathrm{ diagrams}}q^{\mathrm{number\ of\ chord\ intersections}}. \tag{7}\] The corresponding spectral density is given by [16] (an efficient way of evaluating the sum using a transfer matrix is given in [7]): \[\rho(E)=\frac{\Gamma_{q^{2}}\left(\frac{1}{2}\right)}{\pi\sqrt{1+ q}}\left[1-\frac{E^{2}}{4}(1-q)\right]^{\frac{1}{2}}\prod_{l=1}^{\infty}\left[1- \frac{(1-q)q^{l}E^{2}}{(1+q^{l})^{2}}\right],\] \[\Gamma_{q^{2}}\left(\frac{1}{2}\right)=\sqrt{1-q^{2}}\prod_{j=0}^ {\infty}(1-q^{2j+2})(1-q^{2j+1})^{-1}. \tag{8}\] The double-scaled SYK model can be solved in a similar way [17; 18], and the coincidence with the hypercube model's spectral density was noted in [19; 20]. In this paper we will extend the similarity to correlation functions, and explain why the coincidence is not accidental at all. The SYK model is as follows. 
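For small \(k\), Eq. (7) can also be evaluated by brute force: enumerate all pairings of \(2k\) points and weight each by \(q\) raised to its number of chord crossings. A minimal sketch (the function name is ours, not from the paper):

```python
from itertools import combinations

def chord_moment(k, q):
    """Sum of q^(#crossings) over all pairings of 2k points -- Eq. (7)."""
    def pairings(points):
        if not points:
            yield []
            return
        first = points[0]
        for i in range(1, len(points)):
            rest = points[1:i] + points[i + 1:]
            for rec in pairings(rest):
                yield [(first, points[i])] + rec
    total = 0.0
    for matching in pairings(list(range(2 * k))):
        # chords (a, b) and (c, d) with a < c cross iff a < c < b < d
        crossings = sum(1 for (a, b), (c, d) in combinations(matching, 2)
                        if a < c < b < d)
        total += q ** crossings
    return total
```

At \(q=1\) every pairing counts once, giving \((2k-1)!!\); at \(q=0\) only non-crossing diagrams survive, giving the Catalan numbers; and \(k=2\) reproduces the leading-order fourth moment \(2+q\).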
Consider \(N\) Majorana fermions \(\{\psi_{i},\psi_{j}\}=2\delta_{ij},\ i,j=1,\ldots,N\) and the Hamiltonian \[H_{\mathrm{SYK}}=\sum_{I}J_{I}\Psi_{I} \tag{9}\] where \(I\) is a multi-index of length \(p\) (\(p\) is an even integer): \[\begin{split}& I=\{i_{1},i_{2},\ldots,i_{p}\},\quad 1\leq i_{1}<i_ {2}<\cdots<i_{p}\leq N,\\ &\Psi_{I}=i^{p/2}\psi_{i_{1}}\psi_{i_{2}}\cdots\psi_{i_{p}},\ \ (\Psi_{I}^{2}=1)\.\end{split} \tag{10}\] Moreover, \(J_{I}\) are Gaussian random variables that are independently and identically distributed with variance \[\left\langle J_{I}^{2}\right\rangle=\binom{N}{p}^{-1}. \tag{11}\] The main feature that the SYK shares with the Parisi model is a similar structure of holonomies encoding frustrations, given by \[\mathcal{W}_{IJ}=\Psi_{I}\Psi_{J}\Psi_{I}\Psi_{J}=(-1)^{|I\cap J|}, \tag{12}\] where \(|I\cap J|\) is the cardinality of the intersection of \(I\) and \(J\). The subscript \(I\) plays a similar role as \(\mu\) does in the hypercube model (which specifies there the direction of hopping). Comparing with Eq. (5), we see that the SYK frustrations are generated by uniform fluxes of \(0\) and \(\pi\). By uniformity we mean that the holonomy \(\mathcal{W}_{IJ}\) only depends on \(I\) and \(J\) but does not depend on which state it acts on. Namely, in the Fock space a general loop produces a phase that depends on its shape and orientation, but is independent of its position.

Figure 1: Left: a chord diagram contributing to \(2^{-d}\left\langle\mathrm{Tr}H^{6}\right\rangle\), which represents the hopping sequence \(D_{\nu}D_{\rho}D_{\mu}D_{\rho}D_{\nu}D_{\mu}\). This diagram has a value of \(q^{2}\). Right: a chord diagram contributing to a two-point insertion \(2^{-d}\left\langle\mathrm{Tr}H^{2}OH^{2}O\right\rangle\). The dashed line represents the \(O\) chord and the solid lines represent the \(H\) chords. This diagram has a value of \(q\tilde{q}^{2}\). 
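The frustration relation (12) is easy to verify with explicit Jordan-Wigner Majoranas (a small illustrative check of ours; the function names are not from the paper):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(n_qubits):
    """Jordan-Wigner Majoranas psi_1..psi_{2n} with {psi_a, psi_b} = 2 delta_ab."""
    psis = []
    for j in range(n_qubits):
        for op in (X, Y):
            factors = [Z] * j + [op] + [I2] * (n_qubits - j - 1)
            m = factors[0]
            for f in factors[1:]:
                m = np.kron(m, f)
            psis.append(m)
    return psis

def Psi(psis, idx):
    """Psi_I = i^{p/2} psi_{i1} ... psi_{ip} for an even-length index set, Eq. (10)."""
    m = (1j) ** (len(idx) // 2) * np.eye(psis[0].shape[0], dtype=complex)
    for i in idx:
        m = m @ psis[i]
    return m
```

For index sets of length \(p=2\) with a single shared Majorana, \(\mathcal{W}_{IJ}=-1\) times the identity, while disjoint index sets give \(+1\), as Eq. (12) states.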
To complete the analogy with the hypercube model, we would still need the holonomies on different plaquettes to be statistically independent and have a tunable average value. This is achieved by going to the double-scaled SYK limit: \[N,p\rightarrow\infty,\text{ with fixed }\frac{p^{2}}{N}. \tag{13}\] In this limit multi-index intersections become an independent random process for each pair of \(I\) and \(J\), and \(|I\cap J|\) is Poisson distributed with a mean value \(p^{2}/N\), giving an average holonomy \[q=\langle(-1)^{|I\cap J|}\rangle_{I,J}=e^{-2p^{2}/N}, \tag{14}\] where the average is over all possible values of \(I\) and \(J\)[17]. This \(q\) plays the same role in the DS-SYK as \(\langle\cos F\rangle\) does in the hypercube model. **\(p\)-local vs. frustrated Hamiltonians:** The SYK Hamiltonian (9) is manifestly \(p\)-local (\(p\) being the length of the interaction). The Parisi Hamiltonian is not of that form, as each term in (2) depends on all the available qubits. Nevertheless the solution is the same. The real criterion which allows for the same solution using chord diagrams is the fact that the frustrations satisfy \[[\mathcal{W}_{\mu\nu},D_{\rho}]=0\text{ or }[\mathcal{W}_{IJ},\Psi_{K}]=0 \tag{15}\] with probability 1 in the thermodynamic limit. The holonomies, or frustrations, are effectively short-range and do not interfere with most of the many-body interaction terms. This is another way of phrasing the uniformity requirement for the frustrations. **Observables:** To exhibit a full solution of the Parisi model at the same level as the DS-SYK model, we need a rich enough set of observables and show that their correlation functions are the same. As we shall see below, the chord combinatorics for probes in the hypercube model is again identical to that of the DS-SYK. 
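The Poisson statistics behind Eq. (14) can be checked against the exact finite-\(N\) average: for two independent uniformly random \(p\)-subsets of \(\{1,\ldots,N\}\), \(|I\cap J|\) is hypergeometric, and its alternating-sign average approaches \(e^{-2p^{2}/N}\) in the double-scaled limit. A small sanity check (the function name is ours):

```python
import math

def avg_holonomy(N, p):
    """Exact E[(-1)^{|I cap J|}] for independent uniform p-subsets I, J of {1..N}.

    |I cap J| is hypergeometric; the sum below is its alternating-sign mean."""
    numer = sum((-1) ** k * math.comb(p, k) * math.comb(N - p, p - k)
                for k in range(p + 1))
    return numer / math.comb(N, p)
```

For \(N=4000\), \(p=40\) (so \(p^{2}/N=0.4\)) this returns a value close to \(e^{-0.8}\approx 0.449\), in line with Eq. (14); the agreement improves as \(N\) grows at fixed \(p^{2}/N\).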
As a consequence they develop the same infrared behaviour, which implies the hypercube model also has a NCFT\({}_{1}\) limit and its Out-of-Time-Order correlator (OTOC) has an exponential growth in time with a maximal Lyapunov exponent (which matches the fast-scrambling nature of black holes [21; 22; 23]). We shall see that the operator conformal dimensions in both models can be understood as a ratio of frustrations. What are the appropriate probe operators in this model? Consider how we probe a near-extremal black hole in higher dimensional AdS\({}_{D+1}\). We expect single-trace operators of the dual higher-dimensional CFT\({}_{D}\) to become complicated by the time they flow to NCFT\({}_{1}\). Therefore our best chance is to give a statistical description for them. The Hamiltonian is one of the single-trace operators, so we may expect other single-trace operators to have a similar form. Since the Hamiltonian (1) is built from hopping operators \(D_{\mu}\), we suggest the following class of operators as suitable observables: \[O=-\frac{1}{\sqrt{d}}\sum_{\mu=1}^{d}\tilde{D}_{\mu}:=-\frac{1}{\sqrt{d}}\sum _{\mu=1}^{d}(\tilde{T}_{\mu}^{+}+\tilde{T}_{\mu}^{-}), \tag{16}\] where \(\tilde{T}_{\mu}^{+}\) is defined in the same manner as \(T_{\mu}^{+}\) in Eq. (2), but with a different uniform and disordered flux \(\tilde{F}_{\mu\nu}\) which may or may not correlate with \(F_{\mu\nu}\). Similar logic applies to SYK probes and suggests that they can be chosen to be sums of products of \(\tilde{p}\) fermions, \(O_{\text{SYK}}=\sum_{\tilde{I}}\tilde{J}_{\tilde{I}}\Psi_{\tilde{I}}\), where \(\tilde{I}\) is an index set of length \(\tilde{p}\)[7; 8]. We can generalize Eq. (16) further and take \(O\) to be a sum of products of a finite number of hoppings, twisted by random phases, but this does not add any new physics as far as observables are concerned. It does open new options for Fock-space dynamics as we will discuss later. 
Odd numbers of insertions of \(\tilde{D}_{\mu}\) are exponentially suppressed because \(\langle D\tilde{D}\rangle=\langle\cos[(F-\tilde{F})/4]\rangle^{d-1}\to 0\), and hence we only consider even numbers of insertions. Moments with two-point insertions have the form \[\left\langle\text{Tr}H^{k_{2}}OH^{k_{1}}O\right\rangle= \frac{1}{d^{\frac{k_{1}+k_{2}+2}{2}}}\sum_{\nu_{1},\nu_{2},\{\mu _{1}\}}\langle\text{Tr }D_{\mu_{1}}\ldots D_{\mu_{k_{2}}}\] \[\tilde{D}_{\nu_{1}}D_{\mu_{k_{2}+1}}\ldots D_{\mu_{k_{1}+k_{2}}} \tilde{D}_{\nu_{2}}\rangle. \tag{17}\] Due to the same exponential suppression, the two \(\tilde{D}\)'s must pair up and the \(D\)'s must pair up among themselves. Therefore we can obtain the two-point functions by chord diagrams where one type of chord (marked by dashed lines) connects the \(O\) insertions (\(O\)-chords), and another type connects the Hamiltonians (\(H\)-chords). We draw an example in the right panel of Fig. 1. Note also that \[\tilde{D}_{\nu}D_{\mu}\tilde{D}_{\nu}D_{\mu}=\cos\frac{F_{\mu\nu} +\tilde{F}_{\mu\nu}}{2}-i\sin\frac{F_{\mu\nu}+\tilde{F}_{\mu\nu}}{2}\sigma_{ \mu}^{3}\sigma_{\nu}^{3},\] \[\left\langle\tilde{D}_{\nu}D_{\mu}\tilde{D}_{\nu}D_{\mu}\right\rangle :=\tilde{q}=\left\langle\cos\frac{F_{\mu\nu}+\tilde{F}_{\mu\nu}}{2} \right\rangle, \tag{18}\] which is a generalization of Eq. (5). The remaining steps are entirely analogous to the discussion without insertions. 
The two-point moment at leading order is given by the sum of chord diagrams \[2^{-d}\left\langle\text{Tr}H^{k_{2}}OH^{k_{1}}O\right\rangle\] \[= \sum_{\text{diagrams}}q^{\#H\text{-}H\text{ intersections }}\tilde{q}^{\#O\text{-}H\text{ intersections}}, \tag{19}\] and the four-point insertion rule works out similarly: \[2^{-d}\left\langle\mathrm{Tr}H^{k_{4}}OH^{k_{3}}OH^{k_{2}}OH^{k_{1}}O\right\rangle\] \[=\sum_{\mathrm{diagrams}}q^{\#H\text{-}H\text{ intersections }}\tilde{q}^{\#O\text{-}H\text{ intersections}}\] \[\times\tilde{q}_{12}^{\#O\text{-}O\text{ intersections}}, \tag{20}\] where "# \(H\)-\(H\) intersections" means the total number of intersections among \(H\)-chords in a diagram and likewise for \(O\)-\(H\) and \(O\)-\(O\), where the weight for the latter is \(\tilde{q}_{12}:=\left\langle\cos\tilde{F}_{\mu\nu}\right\rangle\). These are exactly the same chord diagram rules as for the DS-SYK, and there the \(q\) parameters are \[q=e^{-2p^{2}/N},\quad\tilde{q}=e^{-2p\tilde{p}/N},\quad\tilde{q}_{12}=e^{-2 \tilde{p}^{2}/N}. \tag{21}\] The \(\mathrm{NCFT}_{1}\) limit of both models is given by [8] \[q,\tilde{q}\to 1^{-},\quad\log\tilde{q}/\log q\ \ \text{fixed} \tag{22}\] in the temperature range \[(-\log q)^{\frac{3}{2}}\ll T\ll(-\log q)^{\frac{1}{2}}. \tag{23}\] In this regime the correlation functions have a conformal form and the operator dimensions are given by \[\Delta_{O}=\log\tilde{q}/\log q, \tag{24}\] which in the DS-SYK implies \(\Delta_{O}=\tilde{p}/p\) and in the hypercube model implies \[\Delta_{O}=\frac{\left\langle(F_{\mu\nu}+\tilde{F}_{\mu\nu})^{2}\right\rangle }{4\left\langle F_{\mu\nu}^{2}\right\rangle}. \tag{25}\] **Operator growth and the Parisi model as a typified SYK model:** Next we will argue that the Parisi model is a quantum model for operator growth in the DS-SYK model (with one small tweak). 
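For small Gaussian fluxes, Eqs. (24)-(25) can be checked numerically: a centered Gaussian satisfies \(\langle\cos F\rangle=e^{-\langle F^{2}\rangle/2}\) exactly, so the holonomy ratio reduces to the flux ratio. The flux scales and correlation below are arbitrary illustrative choices of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
s_F, s_Ft, rho = 0.05, 0.03, 0.4                  # assumed small-flux parameters
cov = [[s_F ** 2, rho * s_F * s_Ft],
       [rho * s_F * s_Ft, s_Ft ** 2]]
F, Ft = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

q = np.cos(F).mean()                              # Eq. (4)
q_tilde = np.cos((F + Ft) / 2).mean()             # Eq. (18)
delta_holonomy = np.log(q_tilde) / np.log(q)      # Eq. (24)
delta_flux = ((F + Ft) ** 2).mean() / (4 * (F ** 2).mean())  # Eq. (25)
```

The two estimates of \(\Delta_{O}\) agree to sampling accuracy, illustrating that the conformal dimension is a ratio of frustrations.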
Consider taking some other random operator in SYK whose size scales as \(\sqrt{N}\) (but with different coefficients than the Hamiltonian) and denote it by \(\mathcal{O}_{\mathrm{base}}\). Under time evolution it evolves into an operator in \[\mathrm{span}\{\mathcal{O}_{\mathrm{base}}\mathcal{O}_{I_{1}}\ldots\mathcal{O}_{ I_{k}},\ k\geq 0\}. \tag{26}\] So it is described by evolution on the hypercube of operators \[d=\binom{N}{p},\ (Z_{2})^{d}\rightarrow\{\mathcal{O}_{\mathrm{base}}\prod_{I} \mathcal{O}_{I}^{n_{I}},\ (n_{I})\in\{0,1\}^{d}\} \tag{27}\] when we start the evolution at the origin. The right-hand side is an over-complete set of operators but this is a valid description for motions that start at the origin and make fewer than \(\mathcal{O}(N)\) hops (or we can go to sparse SYK [24; 25] where \(d\sim N\) and avoid the over-completeness). We can see explicitly how the dynamics on the hypercube arises. Consider the Heisenberg evolution of the operator and consider a plaquette whose corners are \(\mathcal{O}_{\mathrm{base}}\Psi_{S}\Psi_{I}^{0,1}\Psi_{J}^{0,1}\). We can move from a corner either by multiplying by a \(\Psi_{K}\) (\(K=I\) or \(J\)) from the left or from the right. So the Heisenberg evolution on a plaquette exactly moves us from one corner to an adjacent one. There is a slight modification of the earlier picture because there are two options for the phase on each edge depending on whether we act with \(\Psi_{K}\) on the left or on the right of the operator, but this does not change the discussion above by much. These phases are straightforward to compute. What is important is that the flux on a plaquette is uniform in the sense before, and depends only on \(I\cap J\). So the operator growth dynamics in the DS-SYK model in this case (before we start saturating the entire Hilbert space) is given by a Parisi model. **Fock space dynamics:** Clearly the model can be generalized by including more complicated patterns of hoppings in Fock space. 
In fact we can interpolate between the pure Parisi model and SYK-type models as actions in Fock space by taking \(H=\sum_{\alpha,A}W_{\alpha,A}O_{\alpha,A}\) where \(O_{\alpha,A}\) is of the form \[\alpha=\{\mu_{1},\ldots,\mu_{p}\},\ A=\{n_{1},\ldots,n_{p}\},\ n_{i}=\pm,\] \[O_{\alpha,A}=\prod_{j}\sigma_{\mu_{j}}^{n_{j}}\times(\text{phase terms}). \tag{28}\] For example the complex SYK and complex DS-SYK [26; 27] are precisely these models, with phase factors present in the Jordan-Wigner representation of fermions and with the constraint that \(\sum_{i}n_{i}=0\) to enforce the \(U(1)\) symmetry. This suggests some interesting generalizations of the SYK model relevant for physical situations. Consider a quantum dot of many fermions with a conserved charge. If the dot is tuned such that there are no \(\psi_{i}^{\dagger}\psi_{j}\) terms in the Hamiltonian, then we expect that the model is given by a \(U(1)\)-invariant SYK model. But now we can generalize the \(U(1)\)-invariant model to \[H=\sum J_{ij}^{kl}\psi^{i}\psi^{j}\psi_{k}^{\dagger}\psi_{l}^{\dagger}e^{i\sum _{m}\varphi_{ijkl,m}\psi_{m}^{\dagger}\psi^{m}}. \tag{29}\] The additional phases can all be small, but there are many of them - as in the Parisi model, one cannot expand in the phases; rather, they can modify the infrared behavior. **Discussions:** What is to be learnt from such a picture? Minimally we can say \(p\)-locality is not a broad enough characterization for \(\mathrm{NAdS}_{2}/\mathrm{NCFT}_{1}\) microscopics. Indeed, \(p\)-locality describes a large class of models whose double-scaled limit gives \(\mathrm{NAdS}_{2}/\mathrm{NCFT}_{1}\); an example other than the SYK is the \(p\)-quantum-spin model, where the double-scaled limit was first discovered [17]. However, the hypercube model is not \(p\)-local yet follows exactly the same combinatorics. Instead, the Fock-space frustration picture encompasses both and hence is the broader characterization, which should be useful for model-building purposes. 
This is particularly important if we want to realize the \(\mathrm{NAdS}_{2}/\mathrm{NCFT}_{1}\) relation as the infrared of a renormalization-group flow in a holographic CFT\({}_{D}\) (\(D>1\)) in an extremal black hole state - we should really be looking for signatures of frustrations rather than \(p\)-locality. To summarize, we get chord combinatorics (as in Eqs. (7), (19) and (20)), and therefore automatically a NAdS\({}_{2}\)/NCFT\({}_{1}\) duality, if a model has a Fock-space frustration which is 1. uniform and quench-disordered, 2. independently and identically distributed on different (non-parallel) plaquettes of the Fock-space graph, with a real and tunable average holonomy. The NAdS\({}_{2}\)/NCFT\({}_{1}\) limit emerges as the variance of the flux (\(\langle F_{\mu\nu}^{2}\rangle\) in Parisi, \(p^{2}/N\) in DS-SYK) is tuned to zero after the thermodynamic limit is taken. These criteria need to be understood as large-system-size statements, and deviations suppressed by sufficiently high powers in the system size should be allowed [28; 29]. Also, these criteria are sufficient but not necessary, as there are regimes that give NAdS\({}_{2}\)/NCFT\({}_{1}\) but are beyond the description of chord diagram combinatorics, such as the fixed \(p\) and \(N\to\infty\) limit of SYK, which violates the second criterion by having holonomies which are untunable at large \(N\). Nonetheless, these criteria should not be violated too severely. For example, if we strongly violate the uniformity requirement by assigning to each hypercube edge an independent random phase, we would end up with radically different chord combinatorics that does not deliver NAdS\({}_{2}\)/NCFT\({}_{1}\); a local spin chain model would strongly violate the second criterion by having frustrations only on a vanishingly small fraction of the graph faces. 
Tentatively, the uniformity requirement will be relaxed to some smooth-variation requirement in a broader setting, but we do not have a quantitative description at present. It is even less clear to us how the second criterion should be relaxed. Finally, since NAdS\({}_{2}\) appears as the long-throat part of a higher-dimensional geometry, we expect a large time scale separation in the dual CFT\({}_{D}\) (\(D>1\)), entailing an adiabatic scenario. We speculate that such random frustrations can arise as Berry curvatures when the fast degrees of freedom are integrated out [30; 31; 32]. We thank Jacobus Verbaarschot, Antonio Garcia-Garcia, Dario Rosa and Alexander Abanov for valuable discussions.
2309.05836
DKIST unveils the serpentine topology of quiet Sun magnetism in the photosphere
We present the first quiet Sun spectropolarimetric observations obtained with the Visible SpectroPolarimeter (ViSP) at the $4-$m Daniel K. Inouye Solar Telescope (DKIST). We recorded observations in a wavelength range that includes the magnetically sensitive Fe I $6301.5/6302.5$ $\AA$ doublet. With an estimated spatial resolution of 0.08'', this represents the highest spatial resolution full-vector spectropolarimetric observations ever obtained of the quiet Sun. We identified $53$ small-scale magnetic elements, including $47$ magnetic loops and $4$ unipolar magnetic patches, with linear and circular polarisation detected in all of them. Of particular interest is a magnetic element in which the polarity of the magnetic vector appears to change three times in only $400$ km and which has linear polarisation signals throughout. We find complex Stokes $V$ profiles at the polarity inversion lines of magnetic loops and discover degenerate solutions, as we are unable to conclusively determine whether these arise due to gradients in the atmospheric parameters or smearing of opposite polarity signals. We analyse a granule which notably has linear and circular polarisation signals throughout, providing an opportunity to explore its magnetic properties. On this small scale we see the magnetic field strength range from $25$ G at the granular boundary to $2$ kG in the intergranular lane (IGL), and sanity check the values with the weak and strong field approximations. A value of $2$ kG in the IGL is among the highest measurements ever recorded for the internetwork.
Ryan J. Campbell, P H. Keys, M. Mathioudakis, F. Woeger, T. A. Schad, A. Tritschler, A. G. de Wijn, H. N. Smitha, C. A. Beck, D J. Christian, D. B. Jess, R. Erdelyi
2023-09-11T21:34:00Z
http://arxiv.org/abs/2309.05836v2
# DKIST unveils the serpentine topology of quiet Sun magnetism in the photosphere ###### Abstract We present the first quiet Sun spectropolarimetric observations obtained with the Visible SpectroPolarimeter (ViSP) at the 4\(-\)m Daniel K. Inouye Solar Telescope (DKIST). We recorded observations in a wavelength range that includes the magnetically sensitive Fe I 6301.5/6302.5 A doublet. With an estimated spatial resolution of 0\(\farcs\)08, this represents the highest spatial resolution full-vector spectropolarimetric observations ever obtained of the quiet Sun. We identified 53 small-scale magnetic elements, including 47 magnetic loops and 4 unipolar magnetic patches, with linear and circular polarisation detected in all of them. Of particular interest is a magnetic element in which the polarity of the magnetic vector appears to change three times in only 400 km and which has linear polarisation signals throughout. We find complex Stokes \(V\) profiles at the polarity inversion lines of magnetic loops and discover degenerate solutions, as we are unable to conclusively determine whether these arise due to gradients in the atmospheric parameters or smearing of opposite polarity signals. We analyse a granule which notably has linear and circular polarisation signals throughout, providing an opportunity to explore its magnetic properties. On this small scale we see the magnetic field strength range from 25 G at the granular boundary to 2 kG in the intergranular lane (IGL), and sanity check the values with the weak and strong field approximations. A value of 2 kG in the IGL is among the highest measurements ever recorded for the internetwork. Sun: photosphere -- Sun: magnetic fields -- Sun: granulation 
## 1 Introduction The discovery of transient, transverse magnetism in the quiet Sun by Lites et al. (1996) provided observational evidence suggesting that the "magnetic carpet" (Title & Schrijver, 1998) may not be predominantly longitudinal in nature. The short-lived linear polarisation, observed with the Zeeman effect, was found to be located at the edge of granules, between opposite polarity circular polarisation signals, but in the absence of any accompanying measurable Stokes \(V\) signal at the polarity inversion line (PIL). Later, Lites et al. (2008) used the spatial resolution afforded by the Hinode spacecraft and the magnetically sensitive 6302.5 A line to unveil the spatially averaged transverse magnetic flux density (55 Mx cm\({}^{-2}\)) as five times greater than the longitudinal equivalent (11 Mx cm\({}^{-2}\)). However, circular polarisation remains much more prevalent than linear polarisation in these data. Long integrations with the Hinode/SpectroPolarimeter (SP) in sit-and-stare mode have revealed linear polarisation in as much as half of the field of view (FOV), but with compromised spatiotemporal resolution due to an integration time of 6.1 minutes (Bellot Rubio & Orozco Suarez, 2012). Whether the vector magnetic field is predominantly horizontal (transverse) or vertical (longitudinal) in the solar photosphere is still debated, with the impact of photon noise on the retrieval of the magnetic inclination angle a source of controversy (Borrero & Kobel, 2011; Danilovic et al., 2016). Therefore, geometric approaches were also sought to circumvent this problem (e.g., Stenflo, 2010, 2013; Jafarzadeh et al., 2014; Lites et al., 2017, to name but a few). The reader is directed to Bellot Rubio & Orozco Suarez (2019) for a review. 
The conversion and transport of quiet Sun magnetic energy in the photosphere to kinetic energy in the chromosphere and corona could have an important role to play in explaining the high temperature of the upper solar atmosphere (Schrijver et al., 1998; Trujillo Bueno et al., 2004). Statistical analyses of this phenomenon with the Zeeman effect suggest that magnetic bi-poles do not appear uniformly on the solar surface and there are regions of the very quiet Sun where no bi-poles seem to emerge (Martinez Gonzalez et al., 2012). Martinez Gonzalez & Bellot Rubio (2009) showed that 23% of magnetic loops studied reached the chromosphere, perhaps driven by convective up-flows and magnetic buoyancy (Steiner et al., 2008), on a timescale of 8 minutes. Gosic et al. (2021) demonstrated that small magnetic bi-poles that emerge in the photosphere with field strengths between 400 G and 850 G can reach the chromosphere, but on a much longer timescale of up to 1 hour. Wiegelmann et al. (2013) extrapolated photospheric magnetic field line measurements into the chromosphere and corona, and concluded that the energy released by magnetic reconnection could not fully account for the heating of these layers. However, the \(110-130\) km resolution of their observations means this analysis does not account for small-scale braiding of magnetic field lines and their potential to drive heating via reconnection and dissipation, a mechanism which could explain the relative weakness of the small-scale magnetic field in the solar photosphere (Parker, 1972). Near infrared spectropolarimetric observations have a demonstrated ability to measure linear polarisation more effectively compared to photospheric diagnostics in the visible (Martinez Gonzalez et al., 2008; Lagg et al., 2016; Campbell et al., 2021). Recently, Campbell et al. 
(2023) revealed an internetwork region with a relatively large fraction of linear and circular polarisation (23% versus 60%) such that a majority of magnetised pixels displayed a clear transverse component of the magnetic field, despite having set Stokes profiles with signal \(<5\sigma_{n}\) to zero before the inversion, where \(\sigma_{n}\) is the noise level as determined by the standard deviation in the continuum. Observations of sub-granular, small-scale magnetic loops, with lifetimes of a few minutes and sizes of \(1-2^{\prime\prime}\), with unambiguous transverse components at the PIL, were made possible with the use of an integral field unit such that the spatiotemporal resolution was not compromised. Simultaneous observations with visible and near infrared Zeeman diagnostics have also been demonstrated to produce conflicting results, due to a difference in formation heights or magnetic substructure (Socas-Navarro & Sanchez Almeida, 2003; Martinez Gonzalez et al., 2008). Indeed, Martinez Gonzalez et al. (2006) have shown that the different height of formation of the 6301.5/6302.5 A lines, and their sensitivities to temperature and magnetic field strengths, causes degenerate solutions from inversions of the Stokes vector or difficulty with the line ratio technique. This is predicted to persist even at the 20 km resolution of numerical simulations (Khomenko & Collados, 2007). The unprecedented spatial resolution made possible by the 4-m Daniel K. Inouye Solar Telescope (DKIST; Rimmele et al., 2020), combined with the high polarimetric sensitivity and spectral resolution of the Visible SpectroPolarimeter (ViSP; de Wijn et al., 2022), affords the opportunity to observe the fine-structure of the magnetic flux elements in the photosphere. 
In this study, we investigate the spectropolarimetric properties of small-scale magnetic features of the solar internetwork, as observed by DKIST, with inversions utilised to recover the physical properties from the observed Stokes vector. Studying the small-scale photospheric magnetic field is one of the key research aims of the DKIST Critical Science Plan (Rast et al., 2021). In Section 2 we describe the dataset, its reduction and provide an estimate of the spatial resolution. In Section 3 we present the analysis of the data, beginning with a description of the inversions. We define three case studies and examine the spectropolarimetric properties and spatial structure of the small-scale magnetic features in detail. Finally we discuss the results in Section 4 before drawing our conclusions in Section 5, with a look to future observations. ## 2 Observations & Data Reduction ### Observations On 26 May 2022 between 17:45 and 19:31 UT observations of a quiet Sun region were acquired with the ViSP at the DKIST during its first cycle of observations from the operations and commissioning phase (OCP1). The observing sequence was designed to obtain narrow, repeated scans of the solar surface at disk centre. The selected slit width was \(0\farcs 041\) with a slit step of \(0\farcs 041\) and \(104\) slit positions, giving a map of \(4\farcs 26\)\(\times 76\farcs 14\). The cadence between the commencement of a given frame to the start of the next was 13 minutes and 17 seconds, and 8 frames were recorded in total. The first arm of the ViSP captured a spectral region which includes the magnetically sensitive photospheric Fe I line pair at 6301.5 A and 6302.5 A (with effective Lande g-factors of 1.67 and 2.5, respectively), while the third arm captured a spectral region which includes the chro mospheric Ca II 8542 A line. The second arm of the ViSP did not record observations. In this paper we focus our analysis on the Fe I doublet. 
The spatial sampling along the slit is 0\(\farcs\)0298/pixel for the arm which recorded the Fe I 6300 A spectral region. At each slit step, 24 modulation cycles of 10 modulation states were acquired. The exposure time was 6 ms, yielding a corresponding 1.44 seconds total integration time per step. The seeing conditions during the scan were very good, with the adaptive optics (AO) locked during 99.2% of the scan step positions and a mean continuum intensity contrast of 6.2%. However, the maximum continuum intensity contrast is closer to 9%. The mean Fried parameter reported by the DKIST wavefront correction system was 12 cm during these observations. The mean standard deviation across all pixels (except those which did not have the AO locked) at a continuum wavelength in Stokes \(Q\), \(U\), and \(V\) in the spectral region adjacent to the Fe I 6302.5 A line is \(7.5\times 10^{-4}\)\(I_{\rm c}\), \(7.4\times 10^{-4}\)\(I_{\rm c}\), and \(7.5\times 10^{-4}\)\(I_{\rm c}\), respectively. The linear dispersion in the 6300 A region is 12.85 mA/pixel and the spectral resolution is twice this value. ### Data reduction The ViSP calibration and reduction pipeline was applied to the data for dark current removal, flat fielding, and polarimetric calibration 1. The polarisation amplitudes in the continuum should be negligible. Stokes \(Q\), \(U\), or \(V\) signal at continuum wavelengths at disk centre can be generated by cross-talk (henceforth referred to as environmental polarisation) with Stokes \(I\) when the atmosphere is not "frozen" during the modulation process (Collados, 1999). We followed the method of Sanchez Almeida & Lites (1992) to remove environmental polarisation (Stokes \(I\to Q,U,V\) cross-talk only). 
The two O\({}_{2}\) telluric lines close to the Fe I 6300 Å doublet provide a method for validating that this correction is appropriate, as there should be no residual signal in \(Q\), \(U\), and \(V\) at the wavelengths of these tellurics if the correction is successful. The mean correction amplitude across all pixels in Stokes \(Q\), \(U\), and \(V\), as determined in the continuum adjacent to the Fe I 6302.5 Å line, was \(2.8\times 10^{-3}\)\(I_{\rm c}\), \(1.8\times 10^{-3}\)\(I_{\rm c}\), and \(4.3\times 10^{-4}\)\(I_{\rm c}\), respectively. There remain some small-amplitude artefacts and residual cross-talk in the data; specifically, the continuum does not always have net-zero polarisation across the full spectrum. In order to correct this, a simple linear least-squares regression was applied individually to every Stokes \(Q\), \(U\), and \(V\) profile, with the wavelengths of the spectral and telluric lines masked from the fit. The linear fit is then subtracted from each profile, resulting in a continuum that is consistently zero at every wavelength, aside from residual photon noise.

Footnote 1: Version 2.0.2 of the ViSP level 0 to level 1 pipeline was used.

### Estimating the spatial resolution

To estimate the maximal spatial resolution in the data, we chose a spectral point in the blue wing of the line at 6302.47 Å, for which the contrast is increased relative to the continuum. We selected those slit positions for which the contrast is above the mean value of approximately 6.2%, and computed the sum of their one-dimensional spatial power at each spatial frequency point. We also applied a Hann filter to each slice before computing the one-dimensional fast Fourier transform (FFT), to ensure we do not introduce high spatial frequencies at the edges of each slice. The result is shown in Fig. 1.
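The summed power spectrum described above can be sketched as follows. This is a simplified stand-in for our actual procedure: `slices` is assumed to already hold the intensity along the slit at 6302.47 Å for the slit positions selected by contrast, and the spatial sampling of 0.0298 arcsec/pixel is taken from the description above.

```python
import numpy as np

def summed_power_spectrum(slices, sampling=0.0298):
    """Sum of one-dimensional spatial power spectra of the selected
    slit positions. Each slice is windowed by a Hann filter before the
    FFT to suppress high spatial frequencies introduced at the edges.

    slices : 2D array (n_positions, n_pixels_along_slit)
    sampling : spatial sampling along the slit [arcsec/pixel]
    """
    n = slices.shape[1]
    window = np.hanning(n)
    power = np.abs(np.fft.rfft(slices * window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=sampling)  # [cycles per arcsec]
    return freqs, power.sum(axis=0)
```

A sinusoidal test pattern at a known spatial frequency produces a power peak at the corresponding frequency bin, which is how the presence of real signal at a given spatial scale is identified.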
Assuming that the noise in the data is additive white Gaussian noise, the level at which the spatial power flattens can be used as an upper estimate for the noise floor at all spatial frequency points. Increased power above this level clearly indicates the presence of real signal in the data at that spatial frequency. We estimate the maximal spatial resolution as 0\(\farcs\)08 for this scan: there is significant power at the corresponding spatial frequency even when the Hann window is applied. We also point out that the effective spatial resolution is expected to vary across the scan, perhaps significantly, as a function of the quality of the atmospheric seeing conditions and the performance of the DKIST adaptive optics correcting it. From an inspection of the smallest structures visible in Stokes \(I\), we estimate a spatial resolution of at least \(0\,\farcs 1\) (about 72 km).

Figure 1: Summed power spectrum of the intensity, computed at a wavelength of 6302.47 Å for each step, summed across step positions with greater than the mean continuum intensity contrast of 6.2%. The _dotted, blue line_ shows the summed power spectrum windowed by a Hann filter before computing the FFT. The _solid, red line_ shows a median filter applied to the Hann windowed power spectrum. The _dashed, red vertical line_ indicates the frequency that equates to a spatial resolution of 0\(\farcs\)08.

## 3 Data Analysis & Results

### Inversion strategy

We employed the Stokes Inversion based on Response Functions (SIR) code (Ruiz Cobo & del Toro Iniesta, 1992) with its Python-based parallelised wrapper (Gafeira et al., 2021) to invert the full Stokes vector in the Fe I line pair for every pixel in the dataset. The first inversion setup we describe here allows us to produce approximate maps of the full dataset.
Figure 2: Three case studies of small-scale magnetism, including a serpentine-like magnetic configuration (_upper row_), two magnetic loops (_middle rows_), and a magnetic granule (_bottom row_). From _left to right_ is Stokes \(I\) at 6303.17 Å, \(L_{\rm tot}\), Stokes \(V\) at 6302.45 Å, \(v_{\rm LOS}\), \(\gamma\), \(\phi\), and \(\alpha_{m}B\). The last four parameters are derived from SIR inversions with one magnetic and one non-magnetic model atmosphere (1C configuration). The scanning direction (i.e. solar \(X\)) is shown on the \(x\)-axis and the slit direction (i.e. solar \(Y\)) is shown on the \(y\)-axis. The labelled (A.1, B.1-B.2, C.1-C.4) markers highlight the spatial location of the sample pixels: the full Stokes vector from pixel A.1 is shown in Fig. 3, B.1 is shown in Fig. 5, B.2 is shown in Fig. 6, C.1 is shown in Fig. 7, C.2 is shown in Fig. 8, and the circular polarisation profiles for C.3 and C.4 are shown in Fig. 9. The region outlined by the _cyan box_ in the upper row is shown in Fig. 4.

To achieve this we used a two-component inversion, with one magnetic and one non-magnetic model atmosphere per pixel element, with the contribution of each given by their respective filling factors, \(\alpha\). We inverted every pixel 30 times, with randomised initial values in the magnetic field strength, \(B\), line-of-sight (LOS) velocity, \(v_{\rm LOS}\), magnetic inclination, \(\gamma\), and magnetic azimuth, \(\phi\). The \(\gamma\) is defined as the angle between the magnetic field vector and the observer's LOS (or solar normal, at disk centre), while the \(\phi\) is the angle of the magnetic field vector in the plane perpendicular to the observer's LOS (i.e. in the plane of the solar surface). For each pixel we selected the solution which had the minimum \(\chi^{2}\). Apart from temperature, \(T\), which had four nodes, all other parameters, including \(B\), had only one node, and thus were forced to be constant in optical depth. In this case, where only
one component is magnetic, we explicitly refer to the filling factor of the magnetic component, \(\alpha_{m}\), and the magnetic flux density, \(\alpha_{m}B\). The \(T\) in both components was always forced to be the same assuming lateral radiative equilibrium. The microturbulent velocity, \(v_{\rm mic}\) was included as a free parameter independently in each component, and although the macroturbulent velocity, \(v_{\rm mac}\), was also included as a free parameter, it was forced to be the same in both components, assuming it to primarily account for the spectral resolution of the spectrograph. We do not explicitly include an unpolarised stray light component to the inversions. Instead, we adopt an inversion with two model atmospheres per pixel element, allowing the filling factor of a non-magnetic model to encapsulate the effect of an unpolarised stray light contribution. Unlike with GREGOR, we do not have an estimate of the unpolarised stray light fraction (Borrero et al., 2016). Nevertheless, Campbell et al. (2021) investigated the statistical impact a varying unpolarised stray light fraction has on the retrieval of \(T\), \(B\), \(v_{\rm LOS}\), and \(\phi\) values using synthetic observations and found only the \(T\) contrast was impacted, with the other parameters invariant. In select cases the inversion will require additional free parameters in order to fit Stokes profiles with asymmetries or abnormal or complex shapes (e.g. a Stokes \(V\) profile which does not only have one positive and one negative lobe of equal amplitude). Further, the number of free parameters required may increase significantly when two magnetic components are required to reproduce the observed Stokes vector. We approached these cases with a principle of minimising the number of free parameters as much as possible. 
Figure 3: Sample observed Stokes vector with the best-fit inverted profiles (_left panels_) and associated model atmospheres (_right panels_), whose spatial location is labelled in the serpentine magnetic element in the upper row of Fig. 2 (pixel A.1). The _horizontal_ (_dot-dashed_) _lines_ show the noise thresholds (\(\pm 4\sigma_{n}\)), while the _vertical_ (_dotted_) _lines_ indicate the rest wavelengths of the Fe I doublet. The retrieved atmospheric parameters are shown as a function of optical depth. The \(\phi\), \(\alpha_{m}\), and \(v_{\rm mac}\) values were \(107^{\circ}\), \(0.47\), and \(0.015\) km sec\({}^{-1}\), respectively. This inversion is a 1C configuration.

In order to reproduce a Stokes vector that is not adequately fit by the initial inversion, we enabled SIR to choose the optimum number of nodes in \(B\), \(v_{\rm LOS}\), and \(\gamma\) up to a maximum of 10, and up to 3 in \(\phi\). We then experimented by systematically reducing the number of nodes in each of these parameters until SIR was no longer able to fit the profiles, thereby identifying the minimum number of free parameters. In these pixel-specific inversions, we repeated the inversion up to 1000 times with randomised initial model atmospheres. For clarity, in later sections, we label inversions with two magnetic model atmospheres as 2-component (2C) inversions. Inversions with one magnetic and one non-magnetic model atmosphere are labelled as 1-component (1C) inversions. In 2C inversions the non-magnetic model atmosphere is replaced by a magnetic one. Although in forthcoming sections we show the atmospheric parameters at a range of optical depths, we stress that the Fe I 6300 Å doublet is only expected to be significantly sensitive to perturbations in \(\gamma\), \(\phi\), \(B\), and \(v_{\rm LOS}\) in the range between \(\log(\tau_{5000\rm\AA})=-2.0\) and \(\log(\tau_{5000\rm\AA})=0.0\).
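The outer node-reduction strategy described above can be illustrated with a standalone sketch. This is not how SIR itself is driven (SIR is configured through control files, and node selection happens inside the code); `invert` here is a hypothetical stand-in that takes a node count and returns the best \(\chi^{2}\) over its randomised restarts.

```python
def minimal_nodes(invert, max_nodes, chi2_tol):
    """Return the smallest node count whose best-fit chi^2 stays within
    chi2_tol of the fit obtained with the maximum permitted nodes.

    `invert` is a hypothetical stand-in for a full inversion run: it
    takes a node count and returns the best chi^2 over randomised
    restarts (in practice each call is expensive, so results would be
    cached rather than recomputed).
    """
    reference = invert(max_nodes)
    nodes = max_nodes
    while nodes > 1 and invert(nodes - 1) <= reference + chi2_tol:
        nodes -= 1
    return nodes
```

The loop stops at the first node count below which the fit quality degrades beyond the tolerance, mirroring the "reduce until SIR can no longer fit the profiles" procedure.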
A detailed analysis of the diagnostic potential of this line, in the context of other photospheric Fe I lines and their response functions, is provided by Cabrera Solana et al. (2005) and Quintero Noda et al. (2021).

Figure 4: Maps of \(\gamma\) (_upper left_) and \(\phi\) (_upper right_) in the serpentine structure shown in the first case study of Fig. 2. The \(\gamma\) map is shown at \(\log(\tau_{5000\rm\AA})=-1.0\). The direction of the \(\phi\) is marked with arrows whose value is given after binning by a \(3\times 3\) kernel for visual clarity. The azimuth is defined such that a value of 0 is aligned along the line from Solar East to West, increasing anti-clockwise. The variation in \(\gamma\) and \(\phi\) is also shown across slices labelled 1 (_lower left_) and 2 (_lower right_), whose initial and final locations are indicated by the _blue, dashed lines_. The _dotted lines_ show the variation in \(\gamma\) at optical depths between \(\log(\tau_{5000\rm\AA})=-2.0\) and \(\log(\tau_{5000\rm\AA})=0.0\), while the _solid, red lines_ show the variation in \(\phi\), across the slices. The _dashed, gray horizontal lines_ mark the \(90^{\circ}\) polarity inversion point for \(\gamma\).

### Case studies

An inspection of the scans revealed an abundance of circular polarisation of both positive and negative polarities, as well as small patches of linear polarisation. It is also clear that there are some particularly strong, large patches of circular polarisation with much weaker magnetic flux patches in their immediate vicinity. In the current work we focus on the weaker, smaller-scale magnetic structures, and curate a few case studies. We used the SIR Explorer (SIRE) tool (Campbell, 2023; Campbell et al., 2023) to locate and analyse these case studies in the full-scan inversions. We define a magnetic loop as two patches of opposite polarity circular polarisation in close contact (i.e. within only a few pixels of each other) with linear polarisation connecting them.
This definition indicates that we have two vertical fields (identified by the two opposite polarity circular polarisation signals) which are connected by a horizontal field (identified by the linear polarisation signals) to complete the loop. These structures could be \(\Omega\)- or U-shaped loops. In the absence of linear polarisation signals, we regard the structure as a bi-pole. In addition to magnetic loops, we also identify unipolar magnetic flux patches with linear polarisation signals but only one polarity of circular polarisation. Finally, we define a serpentine structure as one with more than one polarity inversion line (PIL) and clear linear polarisation signals at each PIL. We further classify serpentine magnetic configurations by the number of PILs they have. In total we identified 53 small-scale magnetic elements of interest, including at least 47 magnetic loops and 4 unipolar magnetic patches, with linear and circular polarisation detected in all of them. We located several candidate serpentine structures, but by examining the azimuth we were able to discern, from a sharp discontinuity in azimuth values, that they were in fact two magnetic loops in close contact. In two of the remaining serpentine-like magnetic elements, the polarity of the magnetic field appeared to change three times across the structures, and in one it changed only twice. However, only one serpentine structure had linear polarisation throughout and at all three PILs. In the following sections we define three case studies from this analysis. We present the potential detection of a serpentine magnetic structure in Section 3.2.1, discuss the problem of degenerate solutions at the PILs of magnetic loops in Section 3.2.2, and analyse a magnetic granule in Section 3.2.3. Overview maps of these case studies are shown in Fig. 2. The upper row shows the serpentine magnetic structure, the two middle rows show two magnetic loops with transverse magnetism at the PIL, and the bottom row shows the magnetic granule.
We define the total linear polarisation,

\[L_{\mathrm{tot}}=\frac{\int_{\lambda_{b}}^{\lambda_{r}}[Q^{2}(\lambda)+U^{2}(\lambda)]^{\frac{1}{2}}d\lambda}{I_{c}\int_{\lambda_{b}}^{\lambda_{r}}d\lambda}, \tag{1}\]

where \(\lambda_{r}\) and \(\lambda_{b}\) are the red (6302.7 Å) and blue (6302.3 Å) limits of integration in wavelength across the 6302.5 Å line, while \(I_{c}\) is the Stokes \(I\) signal measured in the neighbouring continuum and spatially averaged along the slit direction for each scan step. Here we also show maps of the azimuth, which we have post-processed to force the values to vary between \(0^{\circ}\) and \(180^{\circ}\). We stress that we have not disambiguated the data.

#### 3.2.1 Case study I: Small-scale serpentine magnetism

The first case study is shown in the upper row of Fig. 2. Upon initial examination at a wavelength of 6302.39 Å, this appeared to be a magnetic loop, where two patches of circular polarisation are in close contact and linear polarisation bridges the two patches. Upon closer inspection, a small adjustment in the wavelength position in SIRE reveals a circular polarisation pattern that is more complex than a single magnetic loop. When inspected at a wavelength of 6302.45 Å, closer to the rest wavelength of the line, as shown in Fig. 2, the complexity of the magnetic topology in this structure became apparent. If one examines the \(\gamma\) map, it is clear that the polarity of the magnetic field changes sign at least three times across the structure (as opposed to once, as would be expected in a simple U- or \(\Omega\)-shaped loop). Where the pixels in this structure have reasonably symmetrical, two-lobed Stokes \(V\) profiles and linear polarisation is abundant, the change in \(\gamma\) is unambiguous and SIR produces good fits to the observed Stokes vectors. However, there is a narrow location at which complex Stokes \(V\) profiles are found and the initial inversion is unable to fit these profiles.
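For reference, the quantity \(L_{\rm tot}\) of Eq. (1) can be evaluated numerically for a single pixel, e.g. with the trapezoidal rule; the array names here are illustrative.

```python
import numpy as np

def total_linear_polarisation(wl, stokes_q, stokes_u, i_cont, wl_blue, wl_red):
    """Evaluate Eq. (1): the wavelength-integrated sqrt(Q^2 + U^2)
    across the integration window, normalised by the continuum
    intensity and the width of the window (trapezoidal rule)."""
    sel = (wl >= wl_blue) & (wl <= wl_red)
    w = wl[sel]
    lp = np.hypot(stokes_q[sel], stokes_u[sel])
    integral = 0.5 * np.sum((lp[1:] + lp[:-1]) * np.diff(w))
    return integral / (i_cont * (wl_red - wl_blue))
```

For constant \(Q\) and \(U\) across the window, this reduces to \(\sqrt{Q^{2}+U^{2}}/I_{c}\), which provides a simple consistency check.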
A sample Stokes vector from this location, at the edge of the granule, whose spatial location is highlighted by the marker in Fig. 2, is shown in Fig. 3. This Stokes vector has all three polarisation parameters with signal greater than the \(4\sigma_{n}\) level, but the Stokes \(V\) profile has two negative lobes. We therefore increased the number of free parameters permitted in \(B\), \(\gamma\), and \(v_{\mathrm{LOS}}\) to 10 each, and in \(\phi\) to 3, repeated the inversion with 200 randomised initialisations per pixel, and systematically reduced the number of nodes in each parameter. We found that the best fits could be achieved with 1 node in \(B\), but 3 each in \(\gamma\) and \(v_{\mathrm{LOS}}\) for the magnetic component. For the non-magnetic component we found only 2 nodes in \(v_{\mathrm{LOS}}\) were required. The synthetic vector shown in Fig. 3 resulted from this process. We then ran the inversion with this configuration in a small area, shown in Fig. 4, so we could investigate how \(\gamma\) changed as a function of optical depth. Shown in Fig. 4 is the variation in \(\gamma\) across two \(1\farcs 22\) (882 km) slices generated by cubic interpolation between two points, as a function of optical depth between \(\log(\tau_{5000\mathrm{\AA}})=-2.0\) and \(\log(\tau_{5000\mathrm{\AA}})=0.0\), where the spectral line is expected to be responsive to changes in \(\gamma\) according to the response functions. Regardless of the optical depth considered, the polarity of the magnetic field changes three times across the slices. However, the spatial position at which this polarity inversion occurs can depend on the optical depth. We emphasise that this is occurring on a small, sub-granular scale: from the point where the first polarity inversion occurs to the last is a distance of about 400 km in slice 1. To truly characterise this magnetic element as serpentine we must also examine the azimuth. If the azimuth showed a sharp, discontinuous change between two distinct values, this could indicate that we have actually observed two separate magnetic loops that are not connected. Figure 4 also shows the variation in \(\phi\) across the slices. In the upper (lower) slice, \(\phi\) varies smoothly from a minimum of \(86^{\circ}\) (\(84^{\circ}\)) to a maximum of \(151^{\circ}\) (\(153^{\circ}\)). To confidently constrain the azimuth one needs to measure both Stokes \(Q\) and \(U\).

Figure 5: Two solutions to the Stokes vector belonging to pixel B.1, based on two magnetic models (2C) and one magnetic model (1C). The Stokes \(V\) signal has a complex shape. In the _left column_ the _red, solid lines_ show the synthetic Stokes vector for 2C, and the _blue, dotted lines_ show the same for 1C. The model parameters for 2C and 1C are shown in the _middle column_ and _right column_, respectively. The location of this pixel is indicated in Fig. 2. For 2C, the \(\alpha\) values of model 1 and 2 were 0.93 and 0.07, respectively, the \(\phi\) values were \(27^{\circ}\) and \(191^{\circ}\), respectively, and the \(v_{\rm mac}\) was 0.77 km sec\({}^{-1}\) for both models. For 1C, the \(\alpha\) values of model 1 and 2 were 0.17 and 0.83, respectively, the \(\phi\) value was \(22^{\circ}\), and the \(v_{\rm mac}\) was 0.94 km sec\({}^{-1}\) for both models.

Figure 6: As in Fig. 5, but for the Stokes vector belonging to pixel B.2. For 2C, the \(\alpha\) values of model 1 and 2 were 0.83 and 0.17, respectively, the \(\phi\) values were \(158^{\circ}\) and \(23^{\circ}\), respectively, and the \(v_{\rm mac}\) was 0.71 km sec\({}^{-1}\) for both models. For 1C, the \(\alpha\) values of model 1 and 2 were 0.93 and 0.07, respectively, the \(\gamma\) and \(\phi\) values for model 1 were \(90^{\circ}\) and \(203^{\circ}\), respectively, and the \(v_{\rm mac}\) was 0.94 km sec\({}^{-1}\) for both models. There is no significant Stokes \(V\) signal.
However, the amplitude of the noise in the weaker linear polarisation parameter places a limit on the value that \(\phi\) could take when only one is confidently measured. Therefore, in these cases, and especially when the amplitude of the stronger linear polarisation parameter is large, the \(\phi\) value can still be constrained. When heavily contaminated by noise, maps of \(\phi\) appear random. The map of \(\phi\) in Fig. 4 varies smoothly across the magnetic element, including in those pixels where only one linear polarisation parameter has a maximum amplitude greater than \(4\sigma_{n}\). This is in contrast to areas surrounding the magnetic element, where the map of \(\phi\) has a salt-and-pepper pattern. The spatial coherence of this map indicates that there is enough linear polarisation signal available in the magnetic element to constrain the azimuth and deduce that it varies across the slices. While a change of \(69^{\circ}\) in \(\phi\) is not an insignificant variation, we note that the change occurs gradually. This suggests the magnetic field vector is changing direction not just along the LOS but also in the plane of the solar surface across the slices, although not in a sharp, discontinuous way that would indicate the existence of two distinct magnetic elements. Indeed, after the \(\phi\) value reaches its peak in each slice, it continues to vary gradually. We also repeated the inversion in the small area in Fig. 4 with the maximum number of nodes in \(B\) in the magnetic component increased from 1 to 3, and the maximum number of nodes in \(v_{\rm LOS}\) in the non-magnetic component increased from 2 to 3. We found that the increased number of free parameters made a very marginal improvement in the quality of the fits in a small number of pixels, but made no significant difference to the \(\gamma\) or \(\phi\) variations in this magnetic element; most importantly, the polarity of the magnetic field changed three times across both slices.
We found no evidence that SIR made use of the additional free parameter in \(v_{\rm LOS}\).

#### 3.2.2 Case study II: Degenerate solutions at polarity inversion lines

One of the most likely places to find complex, abnormal Stokes profiles (i.e. with a larger or smaller number of lobes than expected, or with significant asymmetries, or both) in the quiet Sun is at, or near, the PIL in magnetic loops. In the middle rows of Fig. 2 we group together two simple magnetic loops for the second case study. These structures have in common that linear polarisation is found at the PIL; however, they differ significantly in terms of their circular polarisation signals at the same location. The magnetic loop shown in the second row in Fig. 2 is found in the intergranular lane (IGL) next to a tiny granule. From the azimuth map, we can immediately determine that this magnetic loop is a distinct structure compared to its surroundings. The sample Stokes vector B.1 shown in Fig. 5 demonstrates that the linear polarisation signals are strong, with small asymmetries, while the circular polarisation signal has a three-lobed shape, and SIR is unable to reproduce it without an increase in the number of free parameters. The middle panel in Fig. 5 shows the atmospheric parameters which resulted in a good fit to the observed Stokes vector with two magnetic model atmospheres (2C), including 1 node in \(B\), 2 in \(v_{\rm LOS}\), 2 in \(\gamma\), and 1 in \(\phi\) per model. SIR has selected one magnetic component with a larger filling factor (\(\alpha=0.93\)) and a weaker field (\(B=133\) G) co-existing with another magnetic component with a much smaller filling factor (\(\alpha=0.07\)) and a much stronger magnetic field (\(B=902\) G).
The superposition of the opposing linear stratifications in \(v_{\rm LOS}\) and \(\gamma\) is essential to produce the synthetic vector, in particular to produce the circular polarisation profile which could result both from the mixing of opposite polarity Stokes \(V\) signals within the spatiotemporal resolution element and from gradients in \(v_{\rm LOS}\). Reducing the number of nodes in either of these parameters significantly degraded the fit. This sample pixel is not unique in this magnetic loop, as many similar Stokes \(V\) profiles are found along the PIL. Furthermore, the solution provided by the inversion is not unique, and the mixing of two magnetic components in the spatiotemporal resolution element is not the only physical scenario that could be responsible for this Stokes vector. We encounter a degeneracy problem, where very different physical solutions provide equally plausible fits to the observed Stokes vector, when trying to fit complex profiles. In Fig. 5 we also present an alternative, degenerate solution that does not require a 2C configuration. In this case, a 1C configuration with a 638 G magnetic field that is constant in optical depth was sufficient, but both the magnetic and non-magnetic model atmospheres required strong linear gradients in \(v_{\rm LOS}\) achieved with 2 nodes each, and crucially the magnetic model atmosphere required a polynomial stratification in \(\gamma\) that was achieved with 5 nodes. The second magnetic loop shown in the third row in Fig. 2 is a narrow patch of magnetic flux along the granule-IGL boundary. The structure is about \(1\,\farcs 5\) long but only \(0\,\farcs 5\) wide. The Stokes vector from pixel B.2 is taken from the PIL, but in contrast to the former magnetic loop (pixel B.1) there is a complete absence of a Stokes \(V\) signal. 
However, despite this, we once again encounter degenerate solutions, as there are large asymmetries in the amplitudes of the \(\sigma\)-lobes of the linear polarisation profiles. Figure 6 shows solutions based on 1C and 2C configurations. These solutions have in common that SIR selected an almost perfectly transverse magnetic field. Indeed, as we have no circular polarisation, we also find there is no benefit to including a gradient in \(\gamma\). In the 1C solution, we required 3 nodes each in \(B\) and \(v_{\rm LOS}\) for the magnetic model atmosphere. The \(\alpha_{m}\) for this solution was very large (\(\alpha_{m}=0.93\)) and the magnetic model atmosphere is nearly stationary at optical depths where the response functions peak, while the non-magnetic model atmosphere has a strong down-flow (\(v_{\rm LOS}=3.8\) km sec\({}^{-1}\)). We find that the stratification in \(B\), decreasing with height until reaching zero near where the response functions peak, is essential. Additionally, a linear gradient in \(v_{\rm LOS}\) was insufficient to fully achieve the asymmetric amplitudes of Stokes \(Q\) and \(U\). As for the 2C solution, we found it to be essential that the model atmospheres have different \(v_{\rm LOS}\) stratifications and different \(\phi\) values. The primary model atmosphere, with the larger filling factor (\(\alpha=0.83\)), has a very weak magnetic field strength (\(B=28.0\) G) and is nearly stationary in the LOS (\(v_{\rm LOS}=\) -0.1 km sec\({}^{-1}\)), while the secondary model atmosphere has a polynomial stratification in \(v_{\rm LOS}\) and a much larger magnetic field strength (\(B=676\) G). We were able to reduce the number of nodes in \(B\), \(\gamma\), \(\phi\), and \(v_{\rm LOS}\) to 1 in both model atmospheres, with the exception of the \(v_{\rm LOS}\) of the secondary model atmosphere, which had 3 nodes and could not be reduced without losing the required asymmetries in the linear polarisation profiles. We tested the inversions for pixels B.1 and B.2 under 1C and 2C configurations where we allowed the temperature of both model atmospheres to vary independently. In all cases the increase in the number of free parameters did not improve the quality of fit, and, further, the key features of the parameter stratifications in \(B\), \(\gamma\), and \(v_{\rm LOS}\) were invariant. Ultimately we found no evidence that increasing the number of free parameters provided either a significantly different solution or an improvement to the quality of the fit to the observed Stokes vector, so we favour the simpler solutions.

Figure 7: As in Fig. 3, but for vector C.1. The filling factors of model 1 and 2 were 0.79 and 0.21, respectively, while the \(\phi\) value was \(146^{\circ}\). The \(v_{\rm mac}\) was 0.73 km sec\({}^{-1}\) for both models. This inversion is a 1C configuration. The Stokes \(V\) signal has a single lobe.

Figure 8: As in Fig. 3, but for vector C.2. The filling factors of model 1 and 2 were 0.31 and 0.69, respectively, while the \(\gamma\) and \(\phi\) values were \(78^{\circ}\) and \(152^{\circ}\). The \(v_{\rm mac}\) was 0.95 km sec\({}^{-1}\) for both models. This inversion is a 1C configuration. The Stokes \(V\) signal exhibits a regular double-lobed, anti-symmetric shape.

Figure 9: In the _lower panels_ the Stokes \(V\) profiles from pixels C.3 and C.4 are shown on the _left_ and the _right_, respectively. Profile C.3 has over-plotted the WFA fit obtained from the derivative of Stokes \(I\) (_blue, dashed line_). Both profiles have the synthetic SIR profile over-plotted (_red, solid line_). The _horizontal, grey, dashed-dotted lines_ show the \(4\sigma_{n}\) threshold and the _vertical, blue, dotted lines_ show the locations of the blue and red lobes used for the SFA estimate for \(B\).

#### 3.2.3 Case study III: Magnetic anatomy of a granule

The bottom row in Fig. 2 shows our final case study.
Notably, we find there is significant linear and circular polarisation signal throughout almost the entirety of the granule. We take this opportunity to explore the different spectropolarimetric and magnetic properties of the granule, the convective building block of the solar surface, and its surrounding IGLs, for the first time at 70 km resolution. To this end we select four representative pixels. To begin, in the lower, left-most patch of circular polarisation, linear polarisation is pervasive, while both the linear and circular polarisation profiles are highly asymmetric and the circular polarisation profile is even single-lobed. Fig. 7 shows a sample Stokes vector (pixel C.1) from this patch, with its location at the granule-IGL boundary indicated by the blue marker in Fig. 2. The initial 1C inversion was unable to accurately fit the asymmetric Stokes profiles, and so the number of free parameters was increased. The result is the very good fit shown in Fig. 7, which was achieved with 3 nodes in \(B\), 3 in \(v_{\rm LOS}\), 3 in \(\gamma\), and 1 in \(\phi\) for the magnetic model atmosphere, and only 1 node in \(v_{\rm LOS}\) for the non-magnetic model atmosphere. We found that a 1C configuration was sufficient and a 2C configuration was not required. The \(B\) stratification required to fit this profile is a gradient which decreases with increasing height in the atmosphere, from a very strong, kG magnetic field with \(B>2000\) G in the deepest layers. However, we stress that SIR is simply extrapolating to \(\log(\tau_{\rm 5000\AA})=1.0\). At \(\log(\tau_{\rm 5000\AA})=0.0\) the field strength is 851 G. The magnetic model atmosphere (\(\alpha_{m}=0.79\)) is highly transverse in the optical depth range the lines are responsive to, but has a gradient which changes polarity and becomes more vertical higher in the atmosphere.
In order to produce the asymmetries in all four Stokes parameters, the magnetic model atmosphere also required a strong linear gradient in \(v_{\rm LOS}\) while the non-magnetic component has a single, positive \(v_{\rm LOS}\) value throughout the atmosphere. Deep in the granule, in the central patch of circular polarisation, linear polarisation is again pervasive but the circular polarisation is much more symmetric and two-lobed. Fig. 8 shows a sample Stokes vector (pixel C.2) whose location is indicated by the cyan marker in Fig. 2. The synthetic vector provided by the initial 1C inversion is a good fit in this case, with the exception of small asymmetries in the linear polarisation profiles. By adding a gradient to the \(v_{\rm LOS}\) of the magnetic model atmosphere, SIR was able to obtain better fits. The \(B\) of this pixel is moderate but the pixel had a large \(\alpha_{m}\) such that the \(\alpha_{m}B\) is relatively large (\(B=380\) G, \(\alpha_{m}=0.31\), \(\alpha_{m}B=118\) Mx cm\({}^{-2}\)). In the lower, right patch of circular polarisation, most of the profiles do not have significant linear polarisation and the circular polarisation profiles are typically asymmetric. While the profiles may be asymmetric or not, the change in polarity is nevertheless unambiguous between these three patches of small-scale magnetism. It is perhaps not surprising that the profiles are asymmetric, and therefore indicative of velocity gradients, in pixel C.1 given that it is located at the granule-IGL boundary. Pixel C.2, on the other hand, is more firmly located in the granule. The Stokes vectors found in the IGLs surrounding the examined granule were also inspected. Stokes \(V\) profiles with varying degrees of asymmetries are found. We find that the field strengths at these locations differ by two orders of magnitude, and at the high end of these extremes the \(B\) exceeds 2 kG. 
We are motivated to sanity check these values with the weak field approximation (WFA) and strong field approximation (SFA) in order to ensure the inversions are well calibrated (Campbell et al., 2021, 2023), by estimating the longitudinal magnetic flux density with the WFA and the magnetic field strength with the SFA. In the weak regime, as Landi Degl'Innocenti & Landolfi (2004) demonstrate, the amplitude of Stokes \(V\) is proportional to the longitudinal magnetic flux density (where \(B_{\parallel}=B\cos\gamma\)), \[V(\lambda)\approx-\Delta\lambda_{B}\cos\gamma\frac{dI(\lambda)}{d\lambda}, \tag{2}\] where \[\Delta\lambda_{B}\approx 4.6686\times 10^{-13}\lambda_{0}^{2}g_{\rm eff}B, \tag{3}\] with \(\lambda_{0}\) the rest wavelength and \(g_{\rm eff}\) the effective Lande g-factor of the spectral line. Equation 2 is only valid if the magnetic Zeeman splitting is negligible relative to the Doppler width of a line (i.e. \(\Delta\lambda_{B}/\Delta\lambda_{d}\ll 1\)) and when \(\gamma\), \(\phi\), \(B\), \(v_{\rm LOS}\), the Doppler width, and any broadening mechanisms are invariant along the LOS in the formation region of the line. On the other hand, by measuring \(2\Delta\lambda_{B}\) as the separation of the lobes of Stokes \(V\) when the field is strong enough and when \(\Delta\lambda_{B}/\Delta\lambda_{d}\gg 1\), it is possible to obtain \(B\) directly through Eqn. 3 (Khomenko et al., 2003; Nelson et al., 2021; Campbell et al., 2023). As in Asensio Ramos (2011), writing the expressions for \(\Delta\lambda_{B}\) and \(\Delta\lambda_{d}\) allows us to say a line is in the weak field regime when the magnetic field strength fulfills: \[B\ll\frac{4\pi mc}{g_{\rm eff}\lambda_{0}e}\sqrt{\frac{2kT}{M}+v_{\rm mic}^{2}}, \tag{4}\] where \(m\) is the mass of an electron, \(c\) is the speed of light, \(\lambda_{0}\) is the rest wavelength of the spectral line, \(e\) is the charge of an electron, \(k\) is the Boltzmann constant, \(M\) is the mass of the species, and \(v_{\rm mic}\) is the microturbulent velocity.
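As a quick numerical cross-check of Eqn. 4 (a sketch added here, not part of the original analysis), the weak field limit for the Fe I 6302.5 A line can be evaluated in Gaussian units, assuming the standard \(g_{\rm eff}=2.5\) for this line and an iron mass of 55.845 amu:

```python
import math

# Physical constants in Gaussian (CGS) units
M_E = 9.1093837e-28        # electron mass [g]
C = 2.99792458e10          # speed of light [cm/s]
E_ESU = 4.80320425e-10     # electron charge [esu]
K_B = 1.380649e-16         # Boltzmann constant [erg/K]
AMU = 1.66053907e-24       # atomic mass unit [g]

def weak_field_limit(lambda0_cm, g_eff, mass_amu, T, v_mic):
    """Upper bound on B [G] for the weak field regime, Eqn. 4.
    T in K, v_mic in cm/s; the microturbulence enters quadratically
    through the Doppler width."""
    doppler_speed = math.sqrt(2.0 * K_B * T / (mass_amu * AMU) + v_mic**2)
    return 4.0 * math.pi * M_E * C / (g_eff * lambda0_cm * E_ESU) * doppler_speed

# Fe I 6302.5 A, T = 5800 K, v_mic = 1 km/s
B_lim = weak_field_limit(6302.5e-8, 2.5, 55.845, 5800.0, 1.0e5)
print(f"B << {B_lim:.0f} G")  # ~750 G, within a few percent of the quoted ~760 G
```

With \(T=7000\) K the same routine gives roughly 800 G, and with \(v_{\rm mic}=0\) roughly 600 G, tracking the trend of the quoted values; the few-percent offsets presumably come from rounded constants.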
For the 6302.5 A line, assuming \(v_{\rm mic}=1\) km sec\({}^{-1}\) and \(T=5800\) K, we estimate a limit of 760 G. Raising the \(T\) to 7000 K would increase the limit to 809 G. Eliminating the \(v_{\rm mic}\) would decrease the limit to 610 G. We sampled pixels at the upper-most granular boundary with low-amplitude, symmetrical Stokes \(V\) profiles, and applied the WFA. The circular polarisation profile from the pixel labelled C.3 in Fig. 2 is shown in Fig. 9. In the initial 1C inversion, SIR fit this profile with a model atmosphere with \(B=73.4\) G, \(\gamma=57.0^{\circ}\), \(\alpha_{m}=0.5\), and \(v_{\rm LOS}=0.1\) km sec\({}^{-1}\). Therefore, according to SIR, \(\alpha_{m}B\cos\gamma=21.2\) Mx cm\({}^{-2}\). The \(B\) value is an order of magnitude smaller than the limit we estimated using Eqn. 4, so we are satisfied that using the WFA on this pixel is appropriate. Meanwhile, the WFA estimate, obtained from the derivative of Stokes \(I\) through Eqn. 2, is 20.0 Mx cm\({}^{-2}\). The weakest \(B\) we found that satisfied the \(4\sigma_{n}\) amplitude threshold, in a profile with symmetrical lobes, was 25 G, or \(\alpha_{m}B\cos\gamma=12.8\) Mx cm\({}^{-2}\); for the same pixel, the WFA estimate was 11.7 Mx cm\({}^{-2}\). We sampled pixels in the right-most IGL, which has an extremely strong magnetic field, and applied the SFA by measuring the separation of the lobes of Stokes \(V\). The circular polarisation profile from the pixel labelled C.4 in Fig. 2 is shown in Fig. 9. In this case, the initial 1C inversion fit this profile with a very strong magnetic field (\(B=2048\) G, \(\alpha_{m}=0.07\), \(\alpha_{m}B=139.3\) Mx cm\({}^{-2}\)) and the positive and negative lobes of Stokes \(V\) are separated by 15 increments in wavelength. The SFA estimate is thus 2078 G, as a single increment in wavelength is equivalent to 138.55 G.
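The SFA arithmetic above can be reproduced by inverting Eqn. 3. In this sketch, the spectral sampling of \(\sim\)12.85 mA per pixel is an assumption inferred from the quoted conversion of 138.55 G per wavelength increment, not a value given in the text:

```python
# Invert Eqn. 3: the Stokes V lobe separation equals 2 * Delta_lambda_B
K_ZEEMAN = 4.6686e-13      # splitting constant for lambda in Angstrom, B in G

def sfa_field_strength(lobe_separation_A, lambda0_A, g_eff):
    """B [G] from the full wavelength separation of the Stokes V lobes."""
    return lobe_separation_A / (2.0 * K_ZEEMAN * lambda0_A**2 * g_eff)

DISPERSION_A = 12.85e-3    # assumed spectral sampling [Angstrom per pixel]
g_per_pixel = sfa_field_strength(DISPERSION_A, 6302.5, 2.5)   # ~138.6 G
B_sfa = sfa_field_strength(15 * DISPERSION_A, 6302.5, 2.5)    # ~2079 G
print(f"{g_per_pixel:.2f} G per pixel, B = {B_sfa:.0f} G")
```

A lobe separation of 15 pixels then recovers the \(\sim\)2078 G SFA estimate for pixel C.4 to within a gauss or so.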
We are satisfied both that using the SFA is appropriate and that the magnetic field strength is accurate, as there are many pixels in this IGL with field strengths greater than 1.8 kG.

## 4 Discussion

We have presented the first spectropolarimetric observations of the quiet Sun with DKIST, the first \(4-\)m class solar optical telescope and the largest ever built. With an estimated spatial resolution of \(0\farcs 08\), these observations represent the highest spatial resolution full-vector spectropolarimetric observations ever obtained of the quiet Sun; previous high resolution observations were achieved by the Swedish Solar Telescope (\(0\farcs 16\), e.g. Schnerr & Spruit, 2011), the Sunrise balloon-borne experiment (\(0\farcs 14-0\farcs 16\), e.g. Lagg et al., 2010; Wiegelmann et al., 2013), the GREGOR telescope (\(0\farcs 3-0\farcs 4\), e.g. Lagg et al., 2016; Campbell et al., 2023), and the Hinode spacecraft (\(0\farcs 32\), e.g. Lites et al., 2008; Bellot Rubio & Orozco Suarez, 2012). After locating at least 53 magnetic elements of interest, we curated three case studies and examined their physical properties in detail to showcase how the small-scale magnetic fields are spatially organised at this resolution. We examined a particular magnetic element where the polarity inverts three times across the structure (see Fig. 4). In addition to a complex topology in terms of the inclination angle, we also observe the azimuthal angle changing significantly across its structure. The evidence makes it plausible that the magnetic field lines of this small-scale magnetic structure are serpentine, with significant, coherent variation in the plane of the solar surface. We are not aware of any previous studies that have observed such complex magnetic structures in the quiet Sun on a sub-granular scale.
Hinode observations have demonstrated that in simple magnetic loops the azimuth can vary significantly in a few minutes (Martinez Gonzalez et al., 2010), and simulations of the twisting of magnetic field lines during small-scale reconnection events show that gradual azimuthal changes in the photosphere are plausible in more complex magnetic topologies (Danilovic, 2009; Hansteen et al., 2017). It is possible that these complex topologies have not been observed at facilities with lower diffraction-limited resolutions than DKIST precisely because they lack the required angular and spectral resolution. Similarly, the complex nature of the serpentine feature could be lost under poorer seeing conditions. However, we also emphasise that this is the only example of serpentine magnetism we discovered in the dataset which had three PILs and linear polarisation signals throughout the structure; in the other magnetic loops we see a much simpler variation in \(\gamma\), and typically insignificant variation in \(\phi\). An alternative interpretation would be that we observed two magnetic loops in very close proximity. The fact that the azimuth increases to reach a maximum in one part of the slice but then decreases somewhat less significantly in the remainder of the slice could support this argument. However, the evidence does not conclusively support this interpretation either, because there is no discontinuity in the variation of the azimuth such as that observed in the magnetic loop shown in the second row of Fig. 2. We note that without disambiguation of the azimuth we cannot be certain that the gradual, coherent variation we measure is correct. If this alternative explanation were true, we argue that we should not observe linear polarisation signals at the middle, apparent PIL between the magnetic loops. In other words, we should not observe linear polarisation signals at all three apparent PILs.
We note that linear polarisation signals are pervasive across the entire slice, and thus we are able to observe two crests in the magnetic field lines. However, we also note that we did discover a second serpentine structure with three PILs in our analysis, and its middle PIL did not have linear polarisation signals. Ultimately, only repeated observations with a high-cadence time series would allow us to conclusively determine the correct scenario. Finally, we point out that the temporal resolution may play a role due to the scanning speed of the observations. The step cadence of these scans is about 7.4 seconds. A horizontal flow would have to be on the order of 4 km sec\({}^{-1}\) for its motion to keep pace with the stepping cadence. However, it would be premature to say this is a factor which makes either scenario less or more plausible; for instance, the gradual variation in the azimuth that we measured could be explained by supposing that we observed the temporal evolution of a serpentine structure. A physical scenario that demands an increased number of free parameters commonly occurs at the PIL of magnetic elements, and detecting linear polarisation at these locations has been a major challenge (Kubo et al., 2010, 2014). We examined two magnetic loops which had very different properties at the PIL in terms of circular polarisation, but which both had strong linear polarisation signals. In one case, we uncovered a Stokes vector which had a complex Stokes \(V\) profile. Three-lobed Stokes \(V\) profiles were also observed in the quiet Sun by Viticchie & Sanchez Almeida (2011) with Hinode/SP and by Martinez Gonzalez et al. (2016), Kiess et al. (2018), and Campbell et al. (2021) with GREGOR. Initially we attempted a 2C inversion with gradients in \(\gamma\) as well as in \(B\) and \(v_{\rm LOS}\) (see Fig. 5).
We encountered a problem of degeneracy, as we found this profile could also be explained with a more extreme gradient in \(\gamma\) but without two magnetic model atmospheres (see Fig. 5). The second interpretation is consistent with Khomenko et al. (2005), who showed using magnetoconvection simulations that gradients in \(\gamma\) and \(v_{\rm LOS}\) can generate irregular Stokes \(V\) profiles. In this case it is not as simple as saying the approach with the lower number of free parameters is the correct one: the fact that this profile was located along the PIL means it is plausible that the 3-lobed Stokes \(V\) profile is the physical manifestation of a smearing of signals of opposite polarities, or even of mixing within the spatiotemporal resolution element of the observations that remains unresolved. We note that in the 1C case \(\gamma\) changes polarity along the LOS in the optical depth range in which these lines are sensitive to perturbations in \(\gamma\), and there are other examples in this study where this occurs (see Fig. 3 and Fig. 7). Previously, Rezaei et al. (2007) had shown this was possible when the Fe I 6301.5/6302.5 A doublet showed two-lobed Stokes \(V\) profiles with different polarities in a single spectrum, but in our observations we find the Stokes \(V\) profiles of both lines are compatible. In another case, we observe a long, narrow magnetic loop at the granule-IGL boundary which has an absence of Stokes \(V\) at the PIL. It is most plausible that in this case the magnetic field at the PIL is completely transverse; however, as in the former example, opposite polarity Stokes \(V\) signals may have mixed such that they have completely cancelled each other. The absence of a Stokes \(V\) signal cannot, however, be presented as evidence for the latter scenario. We also find degenerate solutions to this Stokes vector, based on a single magnetic model and two magnetic models (see Fig.
6), showing that degeneracy is an issue even in the absence of a Stokes \(V\) profile. This degeneracy is a serious problem which calls into question how accurately we can infer physical parameters from inversions, but we argue it could be significantly mitigated by observing more photospheric spectral lines. This is distinct from the degeneracy problem described by Martinez Gonzalez et al. (2006), because in all the inversions presented in this study we forced the temperature of each component to be the same. Ultimately, if the solutions requiring two magnetic models are correct, it suggests that, even at the spatiotemporal resolution of DKIST/ViSP, magnetic flux remains hidden from spectropolarimetric observations that use the Zeeman effect in magnetic loops (or bi-poles). Significant effort was made to balance the need for an increased number of free parameters to fit complex Stokes profiles against the risk of producing unphysical or unrealistic solutions from the inversions. In particular, velocity gradients along the LOS often manifest in Stokes \(Q\), \(U\), and \(V\) as an amplitude asymmetry that cannot be modelled with a \(v_{\rm LOS}\) stratification that is constant in optical depth. In the final case study we examined a magnetic feature spanning a granule which had linear polarisation signals throughout most of its structure. The magnetic nature of granules is important because it has implications for constraining magnetohydrodynamic simulations, particularly in terms of circulation models, emergence of magnetic flux, and the formation of IGLs (Rempel, 2018). It is notable that this granule has pervasive linear polarisation, in addition to circular polarisation, because magnetic flux emergence in granules was observed in early Hinode/SP observations to be longitudinal in nature (Orozco Suarez et al., 2008), although the emergence of horizontal magnetic flux has also been observed in a granule (Gomory et al., 2010).
In the granule, we found symmetric Stokes profiles (see Fig. 8), but at the granule-IGL boundary we sampled a Stokes vector from a pixel which had significant asymmetries in the polarisation parameters (see Fig. 7). We inverted this Stokes vector with an approach which sought to minimise the number of free parameters as much as possible, but were only able to adequately reproduce it using gradients in \(B\), \(\gamma\), and \(v_{\rm LOS}\). Single-lobed Stokes \(V\) profiles are common in quiet Sun Hinode observations, accounting for up to 34% of profiles (Viticchie & Sanchez Almeida, 2011). They cover about 2% of the surface, are associated with inclined magnetic fields and flux emergence and submergence processes, and are also reproduced by simulations (Sainz Dalda et al., 2012). There is an expectation that the existence of canopy fields means that the magnetic field becomes statistically more horizontal with height (Stenflo, 2013; Danilovic et al., 2016). However, in our analysis we found the opposite in individual cases (see Fig. 3 and 7). Although it is well understood that depth-averaged parameters can be well retrieved from inversions of photospheric diagnostics without the inclusion of gradients in the magnetic or kinetic parameters (see, for instance, Campbell et al., 2021; Quintero Noda et al., 2023), we offer a word of caution: at DKIST resolutions, it is clear that any future statistical analysis of these data will be inadequate unless the need for gradients in \(v_{\rm LOS}\) (and other parameters) is accounted for in the inversions, given their prevalence in these small-scale magnetic structures.
We sampled relatively symmetric circular polarisation profiles at the granular boundaries of the magnetic granule, with one side of the granule harbouring smaller amplitude profiles, indicative of a weak magnetic field, and the other with amplitudes twice their size and much larger lobe separations. We applied the weak and strong field approximations to retrieve manual estimates of \(\alpha_{m}B\) and \(B\), which allowed us to sanity check our measurements of vastly different magnetic field strengths spanning two orders of magnitude. We note, however, that the difference in \(\alpha_{m}B\) is not as large as the difference in \(B\), indicating that the determination of \(\alpha_{m}\) plays a role. Nevertheless, we expect that inversions of these lines are able to constrain both \(\alpha_{m}\) and \(B\) (Asensio Ramos, 2009), and that distinctions can be made between weak and strong fields in these lines when \(\alpha_{m}\) is a free parameter (del Toro Iniesta et al., 2010). We emphasise that such symmetrical Stokes \(V\) profiles are in the minority in this structure, especially in the IGLs, and so we do not recommend applying these approximations more broadly to these data except in limited, specific cases. The extremely large field strengths we recorded are at the high end of what is expected to result from magnetic field amplification in bright points based on high resolution spectropolarimetric observations (see, for instance, Beck et al., 2007; Utz et al., 2013; Keys et al., 2019). The analysis presented in this study is conducted under the assumption of local thermodynamic equilibrium (LTE). Smitha et al. (2020) have shown that for the Fe I 6300 A doublet there can be departures from LTE due to resonance line scattering and over-ionisation of Fe atoms by ultraviolet photons.
The former process makes the line stronger, while the latter makes it weaker, which means that for the Fe I 6300 A doublet these processes compensate each other to an extent. Nevertheless, departures from LTE can introduce errors or uncertainties in the derived atmospheric parameters.

## 5 Conclusions

We have analysed spectropolarimetric observations of the quiet Sun with the first 4\(-\)m class optical telescope in service to solar physics. This allowed us to reveal, for the first time, the serpentine nature of the sub-granular, small-scale magnetic field, as we demonstrated that the polarity of the magnetic field changed at least three times in only 400 km. The three polarity reversals are evident from an inspection of the Stokes \(V\) profiles and confirmed by the inversions. From the inversions we were also able to deduce that the azimuth changed in a significant but coherent way. While beyond the scope of this study, it is clear that a statistical analysis based on a non-LTE inversion, not just of the Ca II 8542 A line but also of the Fe I doublet, is the next logical step and will be the subject of future work. We were unable to trace the temporal evolution of these structures. Observations of these structures at high cadence (\(30-90\) seconds) should be a priority for DKIST in the future with the Diffraction Limited Near Infrared Spectropolarimeter (DL-NIRSP) instrument (Jaeggli et al., 2022). We found that, particularly at the PILs of magnetic loops, degenerate solutions in inversions remain a significant issue even at the spatial resolution provided by a 4 m telescope, and this is true even in the absence of a Stokes \(V\) signal. Although previous work involving simulations and synthetic observations has shown that atmospheric parameters can be well recovered on average (see, for instance, Campbell et al. (2021); Quintero Noda et al.
(2023)), in our case studies we found that to truly characterise the plasma on small scales we had to permit a larger number of free parameters to produce the required asymmetries in the Stokes profiles and abnormal Stokes \(V\) profiles. We are unable to distinguish between gradients in atmospheric parameters along the LOS and smearing of signals in the plane of the solar surface due to the spatial point spread function of the telescope. We concur with the advice provided by Cabrera Solana et al. (2005) and Quintero Noda et al. (2021) and suggest that observations of the Fe I 6302.5 A line together with the deep photospheric Fe I 15648.5 A line should be prioritised, as simultaneously inverted diagnostics which sample a range of heights in the photosphere could provide a method for distinguishing between these two otherwise plausible scenarios. These observations would additionally benefit from the demonstrated ability of the Fe I 15648.5 A line to reveal higher linear polarisation fractions (Martinez Gonzalez et al., 2008; Lagg et al., 2016; Campbell et al., 2023). We thank the anonymous referee for their suggestions which improved this manuscript. RJC, DBJ and MM acknowledge support from the Science and Technology Facilities Council (STFC) under grant Nos. ST/P000304/1, ST/T001437/1, ST/T00021X/1 and ST/X000923/1. DBJ acknowledges support from the UK Space Agency for a National Space Technology Programme (NSTP) Technology for Space Science award (SSc 009). DBJ and MM acknowledge support from The Leverhulme Trust grant RPG-2019-371. This research has received financial support from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 824135 (SOLARNET). The research reported herein is based in part on data collected with the Daniel K. Inouye Solar Telescope (DKIST), a facility of the National Solar Observatory (NSO).
NSO is managed by the Association of Universities for Research in Astronomy, Inc., and is funded by the National Science Foundation. Any opinions, findings and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Association of Universities for Research in Astronomy, Inc. DKIST is located on land of spiritual and cultural significance to Native Hawaiian people. The use of this important site to further scientific knowledge is done so with appreciation and respect. This material is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement No. 1852977. The observational data used during this research is openly available from the DKIST Data Centre Archive under proposal identifier pid_1_36.

Facilities: DKIST (Rimmele et al., 2020).

Software: SIR explorer (Campbell, 2023), Astropy (Astropy Collaboration et al., 2022), Matplotlib (Hunter, 2007), Numpy (Harris et al., 2020).
2307.16458
Cosmological First-Order Vacuum Phase Transitions in an Expanding Anisotropic Universe
We examine the anisotropy originating from a first-order vacuum phase transition through three-dimensional numerical simulations. We apply the Bianchi Type-I metric to our model, which has one scalar field minimally coupled to gravity. We calculate the time evolution of the energy density for the shear scalar and the directional Hubble parameters, as well as the power spectra for the scalar field and the gravitational radiation, although there are a number of caveats for the tensor perturbations in the Bianchi Type-I universe. We run simulations with different mass scales for the scalar field; therefore, in addition to investigating the anisotropy via the shear scalar, we also determine at which mass scales the phase transition completes successfully, i.e. at which scales neglecting the expansion of the Universe does not significantly affect the results. Finally, we show that such an event may contribute to the total anisotropy, depending on the mass scale of the scalar field and the initial population of nucleated bubbles.
A. Savaş Arapoğlu, A. Emrah Yükselci
2023-07-31T07:36:14Z
http://arxiv.org/abs/2307.16458v1
# Cosmological First-Order Vacuum Phase Transitions in an Expanding Anisotropic Universe

## 1 Introduction

The first direct detection of gravitational waves (GWs) [1] opened a new era in observation of the Universe, since GWs can carry information about their source phenomenon thanks to their weakly-interacting character. Although recent results of observations of low-frequency GWs may point to an astrophysical origin [2], as did the first detection, this is nevertheless another important step towards mapping the stochastic GW background, which may also contain contributions originating from events in the early stages of the Universe. The imprints of such GWs may be detected by the space-based GW detectors planned for the future [3; 4; 5; 6; 7; 8]. Cosmological first-order phase transitions (PTs), which may have occurred in the early Universe, are one type of phenomenon that can create GWs as an outcome [9; 10].
Although the well-known examples are the electroweak and quark-hadron PTs, first-order transitions may have taken place at any scale in the early Universe between the QCD [11] and GUT [12] scales. The standard model of particle physics does not predict first-order phase transitions [13; 14], yet there are many extensions of it that allow them (see e.g. [9; 10; 15] and references therein). This type of event could take place through the bubble nucleation mechanism, the theory of which was studied at zero [16; 17] and finite [18; 19] temperatures in flat space-time, whereas the gravitational effects were discussed later in Ref. [20]. It is known that the most probable initial profile for the bubbles has an \(O(4)\)-symmetric form [21], although there is no such proof for a curved space-time yet. The GWs originating from first-order PTs are created by the shear stress caused by the deformation of the symmetric structure of colliding bubbles. A theoretical approach to examine such an event was studied in Ref. [22], where the collided portions of the bubbles are not taken into account. However, it was shown through numerical simulations [23; 24; 25] that those parts should be considered, since the scalar field oscillates around its true vacuum and gives rise to another peak in the gravitational radiation power spectrum, related to its mass, in addition to the maximum associated with the mean separation between bubbles. This has the potential to determine the parameters of a model and/or even the model itself. Recently, in the context of scalar-tensor theories, a scalar field non-minimally coupled to gravity has also been studied through numerical simulations [26]. Another example is the study of the two-step phase transition related to electroweak symmetry breaking [27].
In addition to numerical approaches, it has been shown that it is also possible to analytically calculate, to some extent, the power spectra of the gravitational radiation formed during the bubble collision phase [28; 29; 30]. In order to investigate the anisotropy we implement the Bianchi Type-I metric, in which each direction has a different scale factor, unlike the Friedmann-Lemaitre-Robertson-Walker metric, which is in fact a particular case of the Bianchi Type-I model in this respect. This model has been widely used in the cosmological context (see e.g. [31] and references therein), since it is one of the simplest extensions of isotropic space-time, and it may even offer solutions to well-known problems such as the \(H_{0}\) tension [32]. Moreover, on the GW front, a possible detection of anisotropy in the stochastic GW background may even make it possible to distinguish between superimposed sources [33]. On the other hand, in light of recent observations, it has been reported that no anisotropy is encountered in the data [34]. However, the picture may change after the inclusion of more data into the analysis, and this may lead to an understanding of the birthplace of an observed GW signal, in other words, whether it has an astrophysical or cosmological origin. Although there is no concrete evidence yet to determine the origin of the signal [35; 36], the analysis in Ref. [37] of the NANOGrav data [2] indicates that some cosmological sources, e.g. strong first-order PTs, can provide results comparable with astrophysical ones such as supermassive black hole binaries. The paper is structured as follows: in Section (2), we describe the main equations of the model; in Section (3) we provide the modified equations in accordance with the numerical scheme; in Section (4) we define the quantities to be followed during the simulations; in Section (5) we present the outcomes of the simulations; and we discuss the results in Section (6).
## 2 Set-up

In this section we provide the main equations that will be used throughout the paper. To this end, we start with the Einstein field equations, given as \[\mathcal{R}_{\mu\nu}-\frac{1}{2}\mathcal{R}g_{\mu\nu}=M_{\rm Pl}^{-2}\,\mathcal{T}_{\mu\nu} \tag{1}\] where the Planck mass is defined via \(M_{\rm Pl}^{-2}\equiv 8\pi G\) and is equal to \(M_{\rm Pl}=2.435\times 10^{18}\) GeV in natural units, i.e. \(c=\hbar=1\), which are adopted in this work. In the presence of only one scalar field minimally coupled to gravity, the energy-momentum tensor is given by the following expression \[\mathcal{T}_{\mu\nu}=\nabla_{\!\mu}\,\phi\,\nabla_{\!\nu}\,\phi-\frac{1}{2}\;g_{\mu\nu}\,\nabla^{\sigma}\phi\,\nabla_{\!\sigma}\,\phi-g_{\mu\nu}\,V(\phi)\,. \tag{2}\] On the other hand, the equation of motion for the scalar field is obtained as \[\nabla^{\sigma}\nabla_{\!\sigma}\,\phi-\frac{\partial V(\phi)}{\partial\phi}=0 \tag{3}\] and we use the potential in the form of \[V(\phi)=\frac{1}{2}M^{2}\phi^{2}+\frac{1}{3}\delta\phi^{3}+\frac{1}{4}\lambda\phi^{4}-V_{\rm c} \tag{4}\] where \(M\), \(\delta\), \(\lambda\), and \(V_{\rm c}\) are constants. For the background evolution, we implement the Bianchi Type-I metric as \[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a_{1}^{2}(t)\,\mathrm{d}x^{2}+a_{2}^{2}(t)\,\mathrm{d}y^{2}+a_{3}^{2}(t)\,\mathrm{d}z^{2} \tag{5}\] where \(a_{1}(t)\), \(a_{2}(t)\), and \(a_{3}(t)\) are the scale factors in the \(x\), \(y\), and \(z\) directions, and they are functions of time only. For future convenience, we also define \[H_{i}\equiv\frac{\dot{a}_{i}}{a_{i}}\ \ (i=1,2,3)\:,\qquad\quad H\equiv\frac{1}{3}\big{(}H_{1}+H_{2}+H_{3}\big{)} \tag{6}\] where \(H_{i}\) are the directional Hubble parameters and \(H\) is their average. The metric given in Eq. (5) yields the \(tt\), \(xx\), \(yy\), and \(zz\) components of Eq.
(1), respectively, in the following forms \[\frac{\dot{a}_{1}}{a_{1}}\frac{\dot{a}_{2}}{a_{2}}+\frac{\dot{a}_{1}}{a_{1}}\frac{\dot{a}_{3}}{a_{3}}+\frac{\dot{a}_{2}}{a_{2}}\frac{\dot{a}_{3}}{a_{3}}=M_{\rm Pl}^{-2}\left[\frac{1}{2}\left(\langle\dot{\phi}^{2}\rangle+\frac{\langle\phi_{x}^{2}\rangle}{a_{1}^{2}}+\frac{\langle\phi_{y}^{2}\rangle}{a_{2}^{2}}+\frac{\langle\phi_{z}^{2}\rangle}{a_{3}^{2}}\right)+\langle V(\phi)\rangle\right] \tag{7}\] \[\frac{\ddot{a}_{2}}{a_{2}}+\frac{\ddot{a}_{3}}{a_{3}}+\frac{\dot{a}_{2}}{a_{2}}\frac{\dot{a}_{3}}{a_{3}}=M_{\rm Pl}^{-2}\left[\frac{1}{2}\left(-\langle\dot{\phi}^{2}\rangle-\frac{\langle\phi_{x}^{2}\rangle}{a_{1}^{2}}+\frac{\langle\phi_{y}^{2}\rangle}{a_{2}^{2}}+\frac{\langle\phi_{z}^{2}\rangle}{a_{3}^{2}}\right)+\langle V(\phi)\rangle\right] \tag{8}\] \[\frac{\ddot{a}_{1}}{a_{1}}+\frac{\ddot{a}_{3}}{a_{3}}+\frac{\dot{a}_{1}}{a_{1}}\frac{\dot{a}_{3}}{a_{3}}=M_{\rm Pl}^{-2}\left[\frac{1}{2}\left(-\langle\dot{\phi}^{2}\rangle+\frac{\langle\phi_{x}^{2}\rangle}{a_{1}^{2}}-\frac{\langle\phi_{y}^{2}\rangle}{a_{2}^{2}}+\frac{\langle\phi_{z}^{2}\rangle}{a_{3}^{2}}\right)+\langle V(\phi)\rangle\right] \tag{9}\] \[\frac{\ddot{a}_{1}}{a_{1}}+\frac{\ddot{a}_{2}}{a_{2}}+\frac{\dot{a}_{1}}{a_{1}}\frac{\dot{a}_{2}}{a_{2}}=M_{\rm Pl}^{-2}\left[\frac{1}{2}\left(-\langle\dot{\phi}^{2}\rangle+\frac{\langle\phi_{x}^{2}\rangle}{a_{1}^{2}}+\frac{\langle\phi_{y}^{2}\rangle}{a_{2}^{2}}-\frac{\langle\phi_{z}^{2}\rangle}{a_{3}^{2}}\right)+\langle V(\phi)\rangle\right] \tag{10}\] where the angle brackets denote the spatial average over the whole simulation box, the dot represents the derivative with respect to \(t\), and the subscript letters \(x,y,z\) stand for the spatial derivatives in the corresponding directions. Moreover, for the sake of simplification, we eliminate the time derivative of the scalar field from Eqs. (8), (9), (10) with the help of the constraint equation, i.e. Eq.
(7), and obtain the final form of the equations of motion for the scale factors as follows \[\frac{\ddot{a}_{1}}{a_{1}}+\frac{\dot{a}_{1}}{a_{1}}\bigg{(}\frac{ \dot{a}_{2}}{a_{2}}+\frac{\dot{a}_{3}}{a_{3}}\bigg{)} =M_{\rm Pl}^{-2}\left[\frac{\langle\phi_{x}^{2}\rangle}{a_{1}^{2}} +\langle V(\phi)\rangle\right] \tag{11}\] \[\frac{\ddot{a}_{2}}{a_{2}}+\frac{\dot{a}_{2}}{a_{2}}\bigg{(}\frac {\dot{a}_{1}}{a_{1}}+\frac{\dot{a}_{3}}{a_{3}}\bigg{)} =M_{\rm Pl}^{-2}\left[\frac{\langle\phi_{y}^{2}\rangle}{a_{2}^{2}} +\langle V(\phi)\rangle\right]\] (12) \[\frac{\ddot{a}_{3}}{a_{3}}+\frac{\dot{a}_{3}}{a_{3}}\bigg{(}\frac {\dot{a}_{1}}{a_{1}}+\frac{\dot{a}_{2}}{a_{2}}\bigg{)} =M_{\rm Pl}^{-2}\left[\frac{\langle\phi_{z}^{2}\rangle}{a_{3}^{2}} +\langle V(\phi)\rangle\right]\:. \tag{13}\] On the other hand, the equation of motion for the scalar field, Eq. (3), becomes \[\ddot{\phi}+3H\dot{\phi}-\sum_{k=1}^{3}\frac{\partial_{k}^{2}\phi}{a_{k}^{2}}+ \frac{\partial V(\phi)}{\partial\phi}=0 \tag{14}\] where the average Hubble parameter, \(H\), is defined in Eq. (6) and the potential for the scalar field is given in Eq. (4). Regarding the gravitational waves, the transverse-traceless (TT) part of the tensor perturbations, \(h_{ij}\), can be related to an auxiliary tensor, \(u_{ij}\), by defining a projection operator [38] as \[h_{ij}(t,{\bf k})=\Lambda_{ij,lm}({\bf\hat{k}})\,u_{lm}(t,{\bf k}) \tag{15}\] where \(u_{lm}(t,{\bf k})\) is the Fourier transform of \(u_{ij}(t,{\bf x})\) and the projection operator is defined as \[\Lambda_{ij,lm}(\mathbf{\hat{k}})=P_{im}(\mathbf{\hat{k}})P_{jl}(\mathbf{\hat{k}} )-\,\frac{1}{2}P_{ij}(\mathbf{\hat{k}})P_{lm}(\mathbf{\hat{k}})\,,\qquad P_{ ij}(\mathbf{\hat{k}})=\delta_{ij}-\,\frac{k_{i}k_{j}}{k^{2}}\;. \tag{16}\] Then, this method yields the equation of motion in the following form \[\ddot{u}_{ij}+3H\dot{u}_{ij}-\sum_{k=1}^{3}\frac{\partial_{k}^{2}u_{ij}}{a_{k} ^{2}} =M_{\rm Pl}^{-2}\,\frac{\partial_{i}\phi\,\partial_{j}\phi}{a_{i}\,a_{j}}\;. 
\tag{17}\] Although we use the TT gauge for the tensor perturbations here, this need not be entirely true, since there is a gauge-fixing problem in the Bianchi Type-I model [39; 40]. Therefore, we should note that the results related to the tensor perturbations, i.e. the gravitational waves, presented in this work are valid up to a gauge transformation. We will make some more comments on this issue in the following sections.

## 3 Numerical Procedure

The code used for this work is the same as that of Ref. [26]. It is written in the Python programming language with the help of the Cython [41] extension for the intensive iterations. Parallel processing is realized by pencil decomposition, and the communication between processes is handled by the mpi4py [42] package. We constructed algorithms similar to those given in Ref. [43] for the Fourier transforms. We implement the staggered leapfrog algorithm for the time advancement, with a 7-point stencil for the Laplacian operator. In accordance with this scheme, it is necessary to eliminate the first time derivatives of the variables in the equations of motion to achieve stability and consistency in the numerical calculations. To this end, we define the new variables and constants \[\psi\equiv\sqrt{a_{1}a_{2}a_{3}}\,\frac{\phi}{\phi_{\rm t}}\;,\qquad v_{ij}\equiv\frac{M_{\rm Pl}^{2}}{\phi_{\rm t}^{2}}\sqrt{a_{1}a_{2}a_{3}}\,u_{ij}\,,\qquad U\equiv\frac{1}{\phi_{\rm t}^{2}M^{2}}(a_{1}a_{2}a_{3})V\;, \tag{18}\] \[d\tau\equiv Mdt\,,\qquad{\bf r}\to M{\bf r}\,,\qquad m_{\rm Pl}\equiv M_{\rm Pl}/M\,,\qquad\alpha\equiv\delta/M\,,\qquad\beta\equiv\phi_{\rm t}/M\;. \tag{19}\] We will denote the derivative with respect to \(\tau\) by the prime symbol in the following sections, whereas we keep the same notation for the spatial derivatives; in other words, the subscript \(x\) should be understood as the derivative with respect to \(Mx\). 
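As a concrete illustration of the scheme just described, the following minimal sketch (our own toy code, not the production implementation of Ref. [26]) advances the rescaled field by one staggered-leapfrog step with a 7-point-stencil Laplacian on a periodic lattice:

```python
import numpy as np

# One staggered-leapfrog step for psi'' = -K psi + sum_k d_k^2 psi / a_k^2
# - dU/dpsi (Eq. (20) below). The velocity psi_v lives at half-integer times.
def directional_laplacians(psi, dx):
    """The three second differences of the 7-point stencil, one per axis."""
    return [(np.roll(psi, -1, axis=k) - 2.0 * psi + np.roll(psi, 1, axis=k)) / dx**2
            for k in range(3)]

def leapfrog_step(psi, psi_v, a, K, dU, dx, dt):
    lap = directional_laplacians(psi, dx)
    accel = -K * psi + sum(l / a[k]**2 for k, l in enumerate(lap)) - dU(psi)
    psi_v = psi_v + dt * accel      # kick: psi'(t - dt/2) -> psi'(t + dt/2)
    psi = psi + dt * psi_v          # drift: psi(t) -> psi(t + dt)
    return psi, psi_v
```

Because Eq. (20) contains no first-derivative term, no extra synchronization is needed for the field itself; the scale factors are treated separately below.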
In addition, we use a fixed spatial resolution of \(dx=0.44\) in general, and the Courant factor is taken to be \(c=0.4\) for all simulations.

### Numerical Equation Set

With the new variables defined above, the equation of motion for the scalar field given in Eq. (3) takes the following form \[\psi^{\prime\prime}+K\psi-\sum_{k=1}^{3}\frac{\partial_{k}^{2}\psi}{a_{k}^{2}}+\frac{\partial U\!(\psi)}{\partial\psi}=0 \tag{20}\] where \[K\equiv\frac{1}{4}\left[\frac{a_{1}^{\prime 2}}{a_{1}^{2}}+\frac{a_{2}^{\prime 2}}{a_{2}^{2}}+\frac{a_{3}^{\prime 2}}{a_{3}^{2}}\right]-\frac{1}{2}\left[\frac{a_{1}^{\prime\prime}}{a_{1}}+\frac{a_{2}^{\prime\prime}}{a_{2}}+\frac{a_{3}^{\prime\prime}}{a_{3}}+\frac{a_{1}^{\prime}}{a_{1}}\frac{a_{2}^{\prime}}{a_{2}}+\frac{a_{1}^{\prime}}{a_{1}}\frac{a_{3}^{\prime}}{a_{3}}+\frac{a_{2}^{\prime}}{a_{2}}\frac{a_{3}^{\prime}}{a_{3}}\right] \tag{21}\] and the redefined potential is given by \[U(\psi)=\frac{1}{2}\psi^{2}+\frac{1}{3}\,\frac{\alpha\beta}{\sqrt{a_{1}a_{2}a_{3}}}\psi^{3}+\frac{1}{4}\,\frac{\lambda\beta^{2}}{a_{1}a_{2}a_{3}}\psi^{4}-\frac{a_{1}a_{2}a_{3}}{\beta^{2}}V_{\rm c} \tag{22}\] where we set \(V_{\rm c}\) such that \(V(\phi_{\rm t})=0\). One may also choose a small constant, playing the role of a cosmological constant, instead of a vanishing potential value. However, this is beyond the scope of this paper and needs to be considered with more realistic setups for long-time simulations. 
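The choice of \(V_{\rm c}\) can be checked numerically: with \(M=1\) and the values \(\lambda\) and \(\delta/M\) quoted in Table 1, the deeper nonzero root of \(V^{\prime}(\phi)=\phi\,(M^{2}+\delta\phi+\lambda\phi^{2})=0\) reproduces the tabulated \(\phi_{\rm t}/M\) and \(\rho_{\rm vac}/M^{4}\) (a short sketch of ours, not part of the simulation code):

```python
import numpy as np

# Locate the true vacuum phi_t as the deeper nonzero root of V'(phi) = 0
# and fix V_c so that V(phi_t) = 0; units M = 1, parameters from Table 1.
M, delta, lam = 1.0, -1.632, 0.5

phi_t = (-delta + np.sqrt(delta**2 - 4.0 * lam * M**2)) / (2.0 * lam)

def V(phi, Vc=0.0):
    return 0.5 * M**2 * phi**2 + delta * phi**3 / 3.0 + lam * phi**4 / 4.0 - Vc

Vc = V(phi_t)                      # V_c such that V(phi_t) = 0
rho_vac = -Vc                      # energy difference V(0) - V(phi_t)
M_t = np.sqrt(M**2 + 2.0 * delta * phi_t + 3.0 * lam * phi_t**2)  # V''(phi_t)^(1/2)

assert abs(phi_t - 2.45) < 5e-3 and abs(rho_vac - 0.495) < 1e-3
```

With these inputs the true-vacuum mass comes out as \(M_{\rm t}/M\approx 1.41\).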
On the other hand, the equations for the scale factors become \[\frac{a_{1}^{\prime\prime}}{a_{1}}+\frac{a_{1}^{\prime}}{a_{1}}\bigg{(}\frac{a_{2}^{\prime}}{a_{2}}+\frac{a_{3}^{\prime}}{a_{3}}\bigg{)}=\frac{\beta^{2}}{m_{\rm Pl}^{2}}(a_{1}a_{2}a_{3})^{-1}\left[\frac{\langle\psi_{x}^{2}\rangle}{a_{1}^{2}}+\langle U(\psi)\rangle\right] \tag{23}\] \[\frac{a_{2}^{\prime\prime}}{a_{2}}+\frac{a_{2}^{\prime}}{a_{2}}\bigg{(}\frac{a_{1}^{\prime}}{a_{1}}+\frac{a_{3}^{\prime}}{a_{3}}\bigg{)}=\frac{\beta^{2}}{m_{\rm Pl}^{2}}(a_{1}a_{2}a_{3})^{-1}\left[\frac{\langle\psi_{y}^{2}\rangle}{a_{2}^{2}}+\langle U(\psi)\rangle\right] \tag{24}\] \[\frac{a_{3}^{\prime\prime}}{a_{3}}+\frac{a_{3}^{\prime}}{a_{3}}\bigg{(}\frac{a_{1}^{\prime}}{a_{1}}+\frac{a_{2}^{\prime}}{a_{2}}\bigg{)}=\frac{\beta^{2}}{m_{\rm Pl}^{2}}(a_{1}a_{2}a_{3})^{-1}\left[\frac{\langle\psi_{z}^{2}\rangle}{a_{3}^{2}}+\langle U(\psi)\rangle\right]\;. \tag{25}\] Finally, the equation of motion for the tensor perturbations is obtained as \[v_{ij}^{\prime\prime}+Kv_{ij}-\sum_{k=1}^{3}\,\frac{\partial_{k}^{2}v_{ij}}{a_{k}^{2}}=(a_{1}a_{2}a_{3})^{-1/2}\,\frac{\partial_{i}\psi\,\partial_{j}\psi}{a_{i}\,a_{j}} \tag{26}\] where \(K\) is defined in Eq. (21). These are the equations that will be solved numerically. The equations for the scalar field and the tensor perturbations are already in a suitable form for the leapfrog algorithm. However, the equations for the scale factors need a modification, since the first time derivatives are one half step behind the corresponding variables at each step. In order to synchronize the variables and their first time derivatives, we also keep their values from the previous step, meaning that we calculate the derivatives for this particular purpose as \(a_{1}^{\prime}(t)\approx[a_{1}^{\prime}(t+\Delta t/2)+a_{1}^{\prime}(t-\Delta t/2)]/2\), where \(\Delta t\) is the time step. We use those values to calculate the expression given in Eq. (21) as well. 
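The synchronization just described amounts to a single average of the two neighbouring half-step velocities (a trivial sketch, with names of our choosing):

```python
# In the staggered scheme the velocities a_i' live at half-integer times;
# the value needed at integer time t, e.g. inside Eq. (21) or on the
# left-hand sides of Eqs. (23)-(25), is the average of its neighbours.
def synced_velocity(v_prev_half, v_next_half):
    """a'(t) ~ [a'(t - dt/2) + a'(t + dt/2)] / 2."""
    return 0.5 * (v_prev_half + v_next_half)

assert synced_velocity(1.0, 3.0) == 2.0
```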
### Initial Conditions

In order to start the simulations we use the thin-wall approximation [16] to determine the initial profile of the scalar field, that is, we implement \[\psi(t=0,r)=\frac{1}{2}\biggl{[}1-\tanh\biggl{(}\frac{r-R_{\rm c}M}{l_{0}M}\biggr{)}\biggr{]} \tag{27}\] where \(R_{\rm c}\) and \(l_{0}\) are the critical radius and the bubble wall length, respectively, which can be found from the following expressions [23] \[\psi(R_{\rm c})=\frac{1}{2}\;,\qquad\psi(r^{\pm})=\frac{1}{2}\biggl{[}1-\tanh\biggl{(}\pm\frac{1}{2}\biggr{)}\biggr{]}\;,\qquad l_{0}M=r^{+}-r^{-}\;. \tag{28}\] Furthermore, the time derivative of the scalar field is taken initially to be zero, i.e. \(\psi^{\prime}(t=0)=0\). On the other hand, the bubble nucleation points in the lattice are randomly determined and the bubbles are nucleated simultaneously at the beginning of the simulations. As for the scale factors we choose \[a_{1}(t=0)=a_{2}(t=0)=a_{3}(t=0)=1 \tag{29}\] and in order to determine the initial values for the derivatives of the scale factors we use Eq. (7) written with the new variables as \[a_{1}^{\prime}a_{2}^{\prime}+a_{1}^{\prime}a_{3}^{\prime}+a_{2}^{\prime}a_{3}^{\prime}=\frac{\beta^{2}}{m_{\rm Pl}^{2}}\biggl{[}\frac{1}{2}\bigl{(}\langle\psi_{x}^{2}\rangle+\langle\psi_{y}^{2}\rangle+\langle\psi_{z}^{2}\rangle\bigr{)}+\frac{1}{4}\bigl{(}a_{1}^{\prime}+a_{2}^{\prime}+a_{3}^{\prime}\bigr{)}^{2}\langle\psi^{2}\rangle+\langle U(\psi)\rangle\biggr{]} \tag{30}\] with the help of Eq. (29). 
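The simultaneous nucleation of thin-wall bubbles at random lattice sites, Eq. (27), can be sketched as follows (a toy-sized box; the max rule for overlapping profiles and all names are our simplifications):

```python
import numpy as np

# Nucleate Nb bubbles with the tanh profile of Eq. (27) at random points of
# a periodic box; psi'(t=0) = 0 everywhere. R_c*M and l_0*M as in Table 1;
# the lattice here is much smaller than the production N = 640 runs.
rng = np.random.default_rng(0)
N, dx, Rc, l0, Nb = 64, 0.44, 7.15, 1.71, 4

coords = np.indices((N, N, N)) * dx
psi0 = np.zeros((N, N, N))
for center in rng.uniform(0.0, N * dx, size=(Nb, 3)):
    d = np.abs(coords - center.reshape(3, 1, 1, 1))
    d = np.minimum(d, N * dx - d)                  # periodic distance
    r = np.sqrt((d**2).sum(axis=0))
    bubble = 0.5 * (1.0 - np.tanh((r - Rc) / l0))
    psi0 = np.maximum(psi0, bubble)                # overlaps: keep larger value

psi0_v = np.zeros_like(psi0)                       # psi'(t=0) = 0
assert 0.0 <= psi0.min() and psi0.max() <= 1.0
```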
Moreover, assuming that \(a_{1}^{\prime}=a_{2}^{\prime}=a_{3}^{\prime}\) initially, we get \[a_{1}^{\prime}(t=0)=a_{2}^{\prime}(t=0)=a_{3}^{\prime}(t=0)=\pm\frac{1}{3I_{2}-1}\sqrt{\frac{1}{3}I_{1}(1-3I_{2})} \tag{31}\] where \[I_{1}=\frac{\beta^{2}}{m_{\rm Pl}^{2}}\biggl{[}\frac{1}{2}\Bigl{(}\langle\psi_{x}^{2}\rangle+\langle\psi_{y}^{2}\rangle+\langle\psi_{z}^{2}\rangle\Bigr{)}+\langle U(\psi)\rangle\biggr{]}\quad,\qquad I_{2}=\frac{\beta^{2}}{m_{\rm Pl}^{2}}\langle\psi^{2}\rangle\;. \tag{32}\] Finally, we set \(v_{ij}(t=0)=v_{ij}^{\prime}(t=0)=0\) for the tensor perturbations.

## 4 Densities, Power Spectra, and Shear Scalar

Here we give the definitions for the densities, the power spectra for both the scalar field and the gravitational waves, and the shear scalar, which will show the amount of anisotropy in simulations with different configurations. Starting with the densities for the scalar field, we have \[\bar{\rho}_{K}\equiv\frac{1}{2}\bigl{\langle}\dot{\phi}^{2}\bigr{\rangle}\;,\qquad\bar{\rho}_{G}\equiv\frac{1}{2}\biggl{\langle}\sum_{k=1}^{3}\frac{(\partial_{k}\phi)^{2}}{a_{k}^{2}}\biggr{\rangle}\;,\qquad\bar{\rho}_{V}\equiv\bigl{\langle}V(\phi)-V(\phi_{\rm t})\bigr{\rangle} \tag{33}\] which are the kinetic, the gradient, and the potential energy densities, respectively. On the other hand, for the energy density of the gravitational waves, the following expression is calculated \[\bar{\rho}_{\rm gw}({\bf x},t)=\frac{1}{8}M_{\rm Pl}^{2}\sum_{i,j}\Big{\langle}\dot{h}_{ij}({\bf x},t)\,\dot{h}_{ij}({\bf x},t)+\nabla h_{ij}({\bf x},t)\,\nabla h_{ij}({\bf x},t)\Big{\rangle}\,. \tag{34}\] Although we keep the gradient terms explicitly, we need to emphasize that they have almost no effect on the results. 
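A lattice version of the volume-averaged densities of Eq. (33) is straightforward; the centered-difference gradient below is our choice of stencil, not necessarily the one used in the actual code:

```python
import numpy as np

# Kinetic, gradient, and potential energy densities of Eq. (33), averaged
# over a periodic box; V_true stands for V(phi_t).
def densities(phi, phi_dot, a, dx, V, V_true):
    kinetic = 0.5 * np.mean(phi_dot**2)
    gradient = 0.5 * sum(
        np.mean(((np.roll(phi, -1, axis=k) - np.roll(phi, 1, axis=k))
                 / (2.0 * dx))**2) / a[k]**2
        for k in range(3))
    potential = np.mean(V(phi)) - V_true
    return kinetic, gradient, potential
```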
The power spectrum for the scalar field is expressed by \[{\cal P}_{\phi}({\bf k},t)=\frac{k^{3}}{2\pi^{2}}\big{\langle}\phi({\bf k},t)\,\phi^{*}({\bf k},t)\big{\rangle} \tag{35}\] and for the gravitational waves we use \[\frac{d\Omega_{\rm gw}}{d\ln k}=\frac{1}{3H^{2}}\,\frac{k^{3}}{16\pi^{2}}\big{(}P_{\dot{h}}({\bf k},t)+k^{2}P_{h}({\bf k},t)\big{)}\,. \tag{36}\] We also implement the following normalization \[\frac{d\Omega_{\rm gw}}{d\ln k}\longrightarrow\frac{1}{(H_{*}R_{*}\Omega_{\rm vac})^{2}}\,\frac{d\Omega_{\rm gw}}{d\ln k} \tag{37}\] where \(H_{*}\) is the average Hubble parameter at the time of the transition and \(R_{*}\) is the mean bubble separation, equal to \(({\cal V}/N_{\rm b})^{1/3}\), in which \({\cal V}\) and \(N_{\rm b}\) are the physical volume of the simulation box and the number of bubbles, respectively. We should emphasize that the above definitions for the power spectra are not entirely correct in the anisotropic case. However, we present them regardless and leave those calculations to future studies, since, in addition to investigating the mass scale for the scalar field, our main focus in this paper is to demonstrate the time evolution of the shear scalar, defined as \[\sigma^{2}=\frac{1}{6}\Big{[}(H_{1}-H_{2})^{2}+(H_{1}-H_{3})^{2}+(H_{2}-H_{3})^{2}\Big{]} \tag{38}\] in order to quantify the anisotropy in the background. 
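The shear scalar and its energy density parameter \(\Omega_{\sigma^{2}}=\sigma^{2}/(3H^{2})\) follow directly from the directional Hubble rates (a sketch of ours):

```python
# Shear scalar of Eq. (38) from the directional Hubble parameters, together
# with its energy density parameter Omega_sigma^2 = sigma^2 / (3 H^2).
def shear(H1, H2, H3):
    sigma2 = ((H1 - H2)**2 + (H1 - H3)**2 + (H2 - H3)**2) / 6.0
    H = (H1 + H2 + H3) / 3.0
    return sigma2, sigma2 / (3.0 * H**2)

sigma2, omega = shear(0.1, 0.1, 0.1)   # isotropic limit: the shear vanishes
assert sigma2 == 0.0 and omega == 0.0
```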
The Friedmann equation can then be written in the following form \[H^{2}=\frac{\sigma^{2}}{3}+\frac{1}{3}M_{\rm Pl}^{-2}\left[\frac{1}{2}\left(\langle\dot{\phi}^{2}\rangle+\sum_{k=1}^{3}\frac{\langle(\partial_{k}\phi)^{2}\rangle}{a_{k}^{2}}\right)+\langle V(\phi)\rangle\right] \tag{39}\] or, in terms of energy density parameters, \(1=\Omega_{\sigma^{2}}+\Omega_{\phi}\), where we have defined the energy density parameter for the shear scalar as \[\Omega_{\sigma^{2}}\equiv\frac{\sigma^{2}}{3H^{2}} \tag{40}\] and put all the terms of the scalar field into \(\Omega_{\phi}\), since we will not use it further.

\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
\hline\hline
\# & \(N\) & \(N_{\rm b}\) & \(M_{\rm Pl}/M\) & \(\lambda\) & \(\delta/M\) & \(M_{\rm t}/M\) & \(\phi_{\rm t}/M\) & \(\rho_{\rm vac}/M^{4}\) & \(R_{\rm c}M\) & \(l_{0}M\) \\
\hline\hline
1 & 640 & 320 & 1 & 0.5 & \(-1.632\) & 1.41 & 2.45 & 0.495 & 7.15 & 1.71 \\
2 & 640 & 320 & 10 & 0.5 & \(-1.632\) & 1.41 & 2.45 & 0.495 & 7.15 & 1.71 \\
3 & 640 & 320 & 100 & 0.5 & \(-1.632\) & 1.41 & 2.45 & 0.495 & 7.15 & 1.71 \\
4 & 640 & 10 & 100 & 0.5 & \(-1.632\) & 1.41 & 2.45 & 0.495 & 7.15 & 1.71 \\
5 & 640 & 40 & 100 & 0.5 & \(-1.632\) & 1.41 & 2.45 & 0.495 & 7.15 & 1.71 \\
6 & 640 & 600 & 100 & 0.5 & \(-1.632\) & 1.41 & 2.45 & 0.495 & 7.15 & 1.71 \\
7 & 1280 & 320 & 100 & 0.5 & \(-1.632\) & 1.41 & 2.45 & 0.495 & 7.15 & 1.71 \\
\hline
\end{tabular}
\end{table} Table 1: Parameter values used to create the different configurations for the simulations. \(N\) and \(N_{\rm b}\) are the number of grid points and the number of bubbles, respectively. From left to right, the first five constants are the free parameters, chosen to test the dependence of the results and in accordance with the previous works of Refs. [23; 26]. 
The mass (\(M_{\rm t}/M\)) and the scalar field value (\(\phi_{\rm t}/M\)) in the true vacuum, the vacuum energy density (\(\rho_{\rm vac}/M^{4}\)), the critical radius (\(R_{c}M\)), and the bubble wall thickness (\(l_{0}M\)) are calculated as explained in the text. For \(N=640\) (\(N=1280\)) we take \(dx=0.44\) (\(dx=0.22\)).

Figure 1: Two-dimensional slices of the simulations with \(M_{\rm Pl}/M=1,10,100\) from left to right, respectively, for two different times. For all simulations, we take \(N=640\) and \(N_{\rm b}=320\). Note that the physical scales are different in each slice due to the difference in expansion rate.

## 5 Simulation results

We have simulated several configurations of the model with different parameter values, as listed in Table (1). In order to distinguish their effects on the results, we have changed the number of nucleated bubbles (simulations 3-6) and the number of grid points (simulations 6, 7) as well as the mass scale (simulations 1, 2, 3), which is one of the main concerns of this study in the context of the free parameters. The values of the other constants that depend on the free parameters have also been calculated and are given in the same table. The outcomes that will be reported and discussed in detail below are the Hubble parameters, the densities, the power spectra, and the shear scalar, whose time evolution is another main interest of our work as it is one of the key indicators of anisotropy. First of all, we have run simulations to see the mass scale at which the phase transition can be completed successfully. We see from the two-dimensional slices of the simulations represented in Fig. (1) that the phase transition does not complete for the configurations with \(M_{\rm Pl}/M=1,10\). This is due to the fact that the expansion rate is higher for relatively small values of \(M_{\rm Pl}/M\), as one can deduce from the right-hand sides of Eqs. (23)-(25). 
The high expansion rate causes the universe to grow much faster than the bubbles expand; therefore, the bubble collision phase either cannot be completed entirely or does not happen at all. For \(M_{\rm Pl}/M=10\) the bubbles expand for a while at the start of the simulation and the ones close to each other collide partially, but the expansion rate of the universe eventually dominates the dynamics, whereas the bubble collision does not even occur in the case of \(M_{\rm Pl}/M=1\). On the other hand, for \(M_{\rm Pl}/M=100\) we notice that this kind of effect does not take place and the bubble collision phase is completed successfully in a time less than \(H_{*}^{-1}\). At this point it is necessary to emphasize that this mass value is beyond even the GUT scale, let alone the EW phase transition epoch, which are at around \(10^{15}\) GeV and 100 GeV, respectively. Therefore, the completion of a phase transition at either the GUT or the EW scale is not affected by the expansion of the Universe. This inference is also supported by the time evolution of the average scale factors and the average Hubble parameters represented in Fig. (2). Higher mass ratios correspond to lower expansion rates at the same time scale. However, we should note that the scale factor of the case with \(M_{\rm Pl}/M=10\) is less than that of \(M_{\rm Pl}/M=1\) at the initial stages of their time evolution, although it becomes slightly larger afterwards, as can be seen in the figure as well. Nevertheless, the time evolution of those two curves is very similar, and they differ from that of \(M_{\rm Pl}/M=100\) by almost two orders of magnitude.

Figure 2: Time evolution of the average Hubble parameters (left) and the scale factors (right) for the configurations with different mass scales. Here \(N=640\), \(N_{\rm b}=320\) for all simulations. 
We did not consider the cases with \(M_{\rm Pl}/M=1,10\) further, as they do not fulfill the requirements to complete the phase transition. Since we have confirmed that roughly \(M_{\rm Pl}/M\gtrsim 100\) is required for the transition to complete, we now investigate the effect of the other parameters on the outcomes, such as the number of initiated bubbles, \(N_{\rm b}\), and the number of grid points, \(N\). We examine the impact of \(N_{\rm b}\) through four different simulations with \(N_{\rm b}=10,40,320,600\), fixing \(M_{\rm Pl}/M=100\) and \(N=640\) for all runs. The results are shown in Fig. (3) for the differences in the directional Hubble parameters, in Fig. (4) for a sample of the gradient energy densities together with the differences in the directional components, and in Fig. (5) for the average Hubble parameters and the energy density parameter of the shear scalar. As seen from Fig. (3), although there is only one order of magnitude between them, the maximum value of the differences decreases with an increasing number of bubbles, except for the case of \(N_{\rm b}=10\), for which we have found that the transition does not complete and which gives results similar to the case with \(M_{\rm Pl}/M=10\) given in Fig. (1). Regarding the differences in the gradient energies, we have found that they are of the same order of magnitude for all four simulations, and we have represented one example of them in Fig. (4). As seen from the figure, the directional quantities peak at the early stages of the simulations, corresponding to the bubble collision phase, and then decrease smoothly throughout the run.

Figure 3: Time evolution of the absolute differences between the directional Hubble parameters for the configurations with different numbers of bubbles, as depicted in the figures. For all simulations we take \(N=640\) and \(M_{\rm Pl}/M=100\). 
Together with the corresponding Hubble parameters, the results for the shear scalar defined in Eq. (38) are represented in Fig. (5) in terms of its energy density parameter given in Eq. (40). The curves show that the value of the shear scalar increases to some extent with a decreasing number of bubbles and then starts to get smaller beyond some value, in accordance with the discussion of the differences between the components of the directional Hubble parameters in the previous paragraph; we should recall that for \(N_{\rm b}=10\) the transition is not accomplished. The shear scalar has almost the same shape throughout its time evolution in the different simulations, as if it were shifted depending on the number of bubbles. Nevertheless, the maximum values, around \(10^{-8}-10^{-10}\), occur right after the completion of the bubble collision phase, and then, within the time scale of our simulations, the shear scalar decreases gradually to \(10^{-11}-10^{-12}\). Moreover, we have also provided a result, drawn with a dashed line on the right panel of Fig. (5), in order to check the effect of the resolution of the simulation box. We see that the shear scalar gets slightly smaller for higher resolution with the same number of bubbles. Nevertheless, for all cases the shear scalar increases during the bubble collision phase and then decreases as the scalar field oscillates around its true vacuum.

Figure 4: Time evolution of the directional average gradient energies (left) and their absolute differences (right) for a sample configuration with \(N_{\rm b}=320\), \(N=640\), and \(M_{\rm Pl}/M=100\).

Figure 5: Time evolution of the Hubble parameters (left) and the shear scalar (right) for the simulations with different numbers of bubbles and grid points. Here we take \(M_{\rm Pl}/M=100\) for all simulations. For the solid lines, \(N=640\) and \(dx=0.44\), whereas \(N=1280\), \(dx=0.22\), and \(N_{\rm b}=320\) for the dashed line. 
Since we do not expect an anisotropic structure to develop in this configuration at late times, we can conclude that the maximum value for the shear scalar energy density parameter that we have found is around \(10^{-8}\). Additionally, we have also provided the result of a longer run for a simulation with \(N_{\rm b}=320\), \(N=640\), and \(M_{\rm Pl}/M=100\) in Fig. (6). We see that the energy density parameter of the shear scalar reaches values around \(10^{-14}\) at the end of that simulation. We need to note that this value is already close to one of the most stringent constraints on today's value of the energy density parameter of the shear scalar [32], which is of the order of \(10^{-15}\), and, moreover, it continues to decrease. The results for the power spectra of the scalar field and the GW energy density, defined in Eqs. (35) and (36) respectively, are represented in Fig. (7). We have shown only one example, for the case of \(N_{\rm b}=320\), because changing the number of bubbles does not affect the shape of the spectrum, either for the scalar field or for the GW energy density. We understand from the figures that the bubble collision phase is completed successfully before \(tH_{*}=1\), since the scalar field already oscillates around its true vacuum, corresponding to a peak of its power spectrum near the mass value \(M_{\rm t}\), and the power spectrum of the GW energy density develops a secondary peak there as well. The overall magnitude of both power spectra decreases due to the expansion while keeping the same shape. Therefore, the characteristic shapes of both power spectra are the same as in the results of previous works [23; 26]. However, as we have mentioned before, the tensor perturbations should be investigated in detail for the Bianchi Type-I model and, in accordance with the spirit of the model, possible anisotropies in the power spectra, with compatible definitions, need to be taken into consideration, which we leave for future studies. 
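For completeness, a spherically binned spectrum in the spirit of Eq. (35) can be sketched as follows; the binning scheme and normalization are our choices, and, as noted above, such an angular average is only approximate in the anisotropic case:

```python
import numpy as np

# Angle-averaged power spectrum of a real field on a periodic cubic lattice.
def power_spectrum(phi, dx):
    N = phi.shape[0]
    phik = np.fft.fftn(phi) * dx**3                 # continuum convention
    k1 = 2.0 * np.pi / (N * dx)                     # fundamental mode
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    pk = np.abs(phik.ravel())**2 / (N * dx)**3      # volume normalization
    nbins = int(kmag.max() / k1) + 1
    idx = np.minimum((kmag / k1).astype(int), nbins - 1)
    spec = np.bincount(idx, weights=pk, minlength=nbins)
    counts = np.maximum(np.bincount(idx, minlength=nbins), 1)
    kcent = (np.arange(nbins) + 0.5) * k1
    return kcent, kcent**3 * (spec / counts) / (2.0 * np.pi**2)
```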
Figure 6: Time evolution of the shear scalar for a simulation with \(M_{\rm Pl}/M=100\), \(N=640\), and \(N_{\rm b}=320\) for a longer run in comparison with the ones given in Fig. (5).

## 6 Conclusion

In this paper, we have examined cosmological first-order vacuum phase transitions in an anisotropic expanding universe modeled by the Bianchi Type-I metric. To do this we have used a model with a scalar field that is minimally coupled to gravity and has a typical potential for first-order phase transitions. After presenting the main equations in their analytical forms, we have cast them into a numerical scheme in accordance with the leapfrog algorithm. Then, we have integrated the equations of motion for the scalar field and for the directional scale factors, as well as for the tensor perturbations, the results of which are valid up to a gauge transformation due to the fact that in the Bianchi Type-I model the TT gauge should be modified [39; 40]. In addition, it is also important to check the anisotropy in the GW power spectrum, to either validate or eliminate a model or the source of a signal through possible upcoming observations, even taking the periodicity of the simulation box into account [44]. Nevertheless, the main purpose of this work was to find out the mass scale at which the bubble collision phase is accomplished and, additionally, to track the anisotropy by determining the behavior of the shear scalar defined in Eq. (38); in other words, to consider the anisotropy in the background evolution due to the scalar field responsible for the transition. We have run several simulations with different numbers of initiated bubbles, which determine the initial conditions and correspondingly have a major impact on the time evolution of all variables. In addition, due to the computational costs, we have simulated only one configuration with higher resolution and one with a longer run in comparison with the others. 
Before investigating the shear scalar, we have presented the results for three simulations with different mass scales, namely \(M_{\rm Pl}/M=1,10,100\), which have shown that the phase transition does not complete for runs with roughly \(M_{\rm Pl}/M\lesssim 100\). In those cases either the bubbles expand for a while and then the expansion of the universe prevents them from coalescing entirely, or they do not find a chance to collide at all because of the expansion of the universe.

Figure 7: Power spectrum of the scalar field (left) and the GW energy density (right) for a configuration with \(N_{\rm b}=320\). Here we take \(N=640\) and \(M_{\rm Pl}/M=100\).

We have given the results of examples for those two cases, with the mass scales of \(M_{\rm Pl}/M=10\) and \(M_{\rm Pl}/M=1\), respectively, in Fig. (1), together with the case of \(M_{\rm Pl}/M=100\) that was adopted for the rest of the simulations. We did not use mass scales greater than that because of the computational costs; moreover, this value is enough to examine the anisotropy in the first place, since higher ratios of \(M_{\rm Pl}/M\) suppress the expansion of the Universe further. After determining the order of the minimum mass scale at which the phase transition can be completed successfully, that is, \(M_{\rm Pl}/M\approx 100\), we have run simulations to determine the time evolution of the energy density parameter of the shear scalar by examining the effect of different initial conditions created through different numbers of initiated bubbles. But before this we have shown in Fig. (3) that the absolute differences in the directional Hubble parameters are of the order of \(10^{-5}\) for \(N_{\rm b}=10,40,320\), while they are at most around \(10^{-6}\) for \(N_{\rm b}=600\). Additionally, we have also provided the directional gradient energies and their differences in Fig. 
(4) for the same configurations with the numbers of bubbles mentioned, and have shown that their differences are of the order of \(10^{-6}\). In Fig. (5) we have presented the results for the Hubble parameters and the shear scalars. Moreover, we have also given the outcomes of a longer run of a specific configuration in Fig. (6). As indicated before by the results for the directional Hubble parameters, it seems that relatively small numbers of bubbles give rise to high values of the shear scalar; the case of \(N_{\rm b}=10\) seems to be a counterexample to this conclusion at first glance, but the bubble collision phase is not completed for that simulation. Therefore, as one might guess, in addition to the mass scale, the proportion of the number of initiated bubbles to the whole simulation box is another quantity that determines whether a phase transition can be completed. This can also be seen from Fig. (5) for the Hubble parameters, where the case of \(N_{\rm b}=10\) is different from the others at the beginning of the simulation. Nevertheless, we have found that, before decreasing smoothly, the energy density parameter of the shear scalar reaches a peak between \(10^{-8}\) and \(10^{-10}\), which occurs during the bubble collision phase. With the aforementioned longer run we have shown that \(\Omega_{\sigma^{2}}\) becomes close to one of the constraints obtained for its present-day value [32]. Additionally, it seems that the expansion of the Universe does not affect the phase transition for typical mass scales of \(M_{\rm Pl}/M\gtrsim 100\) with a fairly distributed number of initiated bubbles, since we have thereby also tested the impact of the expansion itself, besides the anisotropy.

## Acknowledgements

This work is supported by The Scientific and Technological Research Council of Turkiye (TUBITAK) through grant number 121F066. 
Computing resources used in this work were provided by the National Center for High Performance Computing of Turkiye (UHeM) under grant number 5013072022 and the simulations were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).
2302.14491
Formalization of $p$-adic $L$-functions in Lean 3
The Euler--Riemann zeta function is a largely studied number-theoretic object, and the birthplace of several conjectures, such as the Riemann Hypothesis. Different approaches are used to study it, including $p$-adic analysis: deriving information from $p$-adic zeta functions. A generalized version of $p$-adic zeta functions (resp. the Riemann zeta function) are $p$-adic $L$-functions (resp. Dirichlet $L$-functions). This paper describes the formalization of $p$-adic $L$-functions in the interactive theorem prover Lean 3. Kubota--Leopoldt $p$-adic $L$-functions are meromorphic functions emerging from the special values they take at negative integers in terms of generalized Bernoulli numbers. They also take twisted values of the Dirichlet $L$-function at negative integers. This work has never been done before in any theorem prover. Our work is done with the support of mathlib 3, one of Lean's mathematical libraries. It required formalization of a lot of associated topics, such as Dirichlet characters, Bernoulli polynomials etc. We formalize these first, then the definition of a $p$-adic $L$-function in terms of an integral with respect to the Bernoulli measure, proving that they take the required values at negative integers.
Ashvni Narayanan
2023-02-28T11:19:22Z
http://arxiv.org/abs/2302.14491v1
# Formalization of \(p\)-adic \(L\)-functions in Lean 3

###### Abstract

The Euler-Riemann zeta function is a largely studied number-theoretic object, and the birthplace of several conjectures, such as the Riemann Hypothesis. Different approaches are used to study it, including \(p\)-adic analysis: deriving information from \(p\)-adic zeta functions. A generalized version of \(p\)-adic zeta functions (resp. the Riemann zeta function) are \(p\)-adic \(L\)-functions (resp. Dirichlet \(L\)-functions). This paper describes the formalization of \(p\)-adic \(L\)-functions in the interactive theorem prover Lean 3. Kubota-Leopoldt \(p\)-adic \(L\)-functions are meromorphic functions emerging from the special values they take at negative integers in terms of generalized Bernoulli numbers. They also take twisted values of the Dirichlet \(L\)-function at negative integers. This work has never been done before in any theorem prover. Our work is done with the support of mathlib 3, one of Lean's mathematical libraries. It required formalization of a lot of associated topics, such as Dirichlet characters, Bernoulli polynomials etc. We formalize these first, then the definition of a \(p\)-adic \(L\)-function in terms of an integral with respect to the Bernoulli measure, proving that they take the required values at negative integers.

formal math, algebraic number theory, Lean, mathlib

## 2 \(p\)-adic \(L\)-functions

Section 3, introduce generalized Bernoulli numbers in Section 4, construct the \(p\)-adic \(L\)-function in Section 5, and evaluate it at negative integers in Section 6, finishing with a summary in Section 7.

### Mathematical overview

We give a brief overview of the mathematics formalized in this project. \(L\)-functions are a fundamental object, appearing almost everywhere in modern number theory. 
The Dirichlet \(L\)-function associated to a Dirichlet character \(\chi\) is given by \[L(s,\chi)=\sum_{n=1}^{\infty}\frac{\chi(n)}{n^{s}}=\prod_{p\text{ prime}}\frac{1}{1-\chi(p)p^{-s}}\] where \(s\) is a complex variable with \(Re(s)>1\). This can be analytically extended to the entire complex plane, with a simple pole at \(s=1\) when \(\chi=1\). Note also that \(L(s,1)\) is the same as the Riemann zeta function. Moreover, it is known that \(L(1-n,\chi)=-\frac{B_{n,\chi}}{n}\), where \(B_{n,\chi}\) are the generalized Bernoulli numbers. In this paper, we construct, for an integer prime \(p\), a \(p\)-adic analogue of \(L(s,\chi)\), called the Kubota-Leopoldt \(p\)-adic \(L\)-function, denoted \(L_{p}(s,\chi)\). This is generally done by continuously extending the function \(L_{p}(1-n,\chi):=(1-\chi(p)p^{n-1})L(1-n,\chi)\) to the complete \(p\)-adic space \(\mathbb{C}_{p}\). In fact, \(L_{p}(s,1)\) is analytic except for a pole at \(s=1\) with residue \(1-\frac{1}{p}\) (Theorem 5.11, [6]). Formalization of the \(p\)-adic \(L\)-functions via analytic continuation was hard, since \(\mathbb{C}_{p}\) did not exist in mathlib at the time. Following [6], we instead define it in terms of an "integral" with respect to the Bernoulli measure. We explain these terms below. A profinite space is a compact, Hausdorff and totally disconnected space. The \(p\)-adic integers \(\mathbb{Z}_{p}\), which are the completion of the integers \(\mathbb{Z}\) with respect to the valuation \(\nu_{p}(p^{\alpha}\prod_{p_{i}\neq p}p_{i}^{\alpha_{i}})=\alpha\) are a profinite space. One may also think of them as the inverse limit of the discrete topological spaces \(\mathbb{Z}/p^{n}\mathbb{Z}\), that is, \(\mathbb{Z}_{p}=\operatorname{proj}\lim_{n}\mathbb{Z}/p^{n}\mathbb{Z}\). Locally constant functions are those for which the preimage of any set is open. 
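For example, any function on \(\mathbb{Z}_{p}\) that factors through one of the finite quotients is locally constant: for each \(n\), the composite \[\mathbb{Z}_{p}\xrightarrow{\;proj_{n}\;}\mathbb{Z}/p^{n}\mathbb{Z}\hookrightarrow R\] (for any ring \(R\) with the discrete topology) is locally constant, since each fiber \(proj_{n}^{-1}(a)\) is open (indeed clopen) in \(\mathbb{Z}_{p}\).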
Given a profinite space \(X\) and a normed ring \(R\), one can show that the locally constant functions from \(X\) to \(R\) (denoted \(LC(X,R)\)) are dense in the space of continuous functions from \(X\) to \(R\) (denoted \(C(X,R)\)). Given an abelian group \(A\), a distribution is defined to be an \(A\)-linear map from \(LC(X,A)\) to \(A\). A measure \(\phi\) on a normed ring \(R\) is defined to be a bounded distribution, that is, \(\exists K>0\) such that \(\forall f\in LC(X,R)\), \(||\phi(f)||\leq K||f||\), where \(||f||=\sup_{x\in X}||f(x)||\). An example of a measure is the Bernoulli measure. Given a natural number \(d\) coprime to \(p\) and a clopen set \(U_{n,a}\) of \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\), the characteristic function \(\chi_{n,a}\) (defined to be \(1\) on \(U_{n,a}\) and \(0\) otherwise) is a locally constant function. Given a natural number \(c\) coprime to both \(d\) and \(p\), we then define the Bernoulli measure \(E_{c}\) by : \[E_{c}(\chi_{n,a}):=\left\{\frac{a}{dp^{n+1}}\right\}-c\left\{\frac{c^{-1}a}{dp^{n+1}}\right\}+\frac{c-1}{2}\] Given a measure \(\mu\), the integral with respect to \(\mu\) is defined by \(\int fd\mu:=\mu(f)\) for any locally constant function \(f\), and extending this definition to \(C(X,R)\). In fact, this is an \(R\)-linear map. Finally, the \(p\)-adic \(L\)-function is defined to be an integral with respect to the Bernoulli measure. The characterizing property of the \(p\)-adic \(L\)-function is its evaluation at negative integers : \[L_{p}(1-n,\chi)=-(1-\chi\omega^{-n}(p)p^{n-1})\frac{B_{n,\chi\omega^{-n}}}{n}\] for \(n\geq 1\). When defined as an integral, additional work is needed to prove this. Our contributions to this theory include a formalized definition of the \(p\)-adic \(L\)-function in greater generality, taking values in a normed complete non-Archimedean \(\mathbb{Q}_{p}\)-algebra, instead of just \(\mathbb{C}_{p}\). Further, it takes as input continuous monoid homomorphisms, also known as elements of the weight space. 
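Since \(LC(X,R)\) is dense in \(C(X,R)\), the extension step can be pictured, schematically, as computing the integral of a continuous \(f\) by Riemann-type sums over the clopen covers (this is the viewpoint of Chapter 12 of [6]) : \[\int_{X}f\,d\mu=\lim_{n\to\infty}\sum_{a}f(x_{a})\,\mu(\chi_{n,a}),\qquad x_{a}\in U_{n,a},\] where the choice of sample points \(x_{a}\) becomes irrelevant in the limit, because \(f\) is nearly constant on each \(U_{n,a}\).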
We have also developed an extensive theory of Dirichlet characters, Bernoulli numbers and polynomials, generalized Bernoulli numbers, properties of \(p\)-adic integers and modular arithmetic, making substantial contributions to the number_theory section of mathlib. We use non-traditional methods to define and prove classical results, often choosing to work with formulations which are easier to formalize, later proving their equivalence to the original.

### Lean and mathlib

Lean 3 is a functional programming language and interactive theorem prover based on dependent type theory. This project is built on Lean's mathematical library mathlib 3, which is characterized by its decentralized nature, with over 300 contributors; it is thus impossible to cite every author who contributed a piece of code that we used. We assume the reader is familiar with structures such as def, abbreviation, lemma and theorem, which are used constantly. An important feature of Lean is its typeclass inference system : Lean "remembers" properties given to a structure or class embedded in an instance structure. This is explained in detail in [5]. We shall also use several tactics in proofs, such as rw, apply, conv and refine (see [https://leanprover-community.github.io/mathlib_docs/tactics.html](https://leanprover-community.github.io/mathlib_docs/tactics.html) for a full list of tactics in Lean).

## 2 Preliminaries

### Filters and convergence

None of our mathematical proofs require filters on paper; however, we find that working with them makes formalizing our proofs significantly less cumbersome. Due to the efforts of Johannes Hölzl, Jeremy Avigad, Patrick Massot and several others, we have a vast API for filters in Lean. We shall not delve into the details of what a filter is, but instead explain how filters are used to formalize convergence and limits. 
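As a toy illustration of this idiom (a hedged sketch, not taken from the paper; tendsto_id is a mathlib lemma) :

```
import order.filter.basic

open filter

-- "n tends to infinity as n tends to infinity", phrased with filters
example : tendsto (λ n : ℕ, n) at_top at_top := tendsto_id
```

Here at_top plays the role of "neighborhoods of infinity", so no \(\epsilon\)-\(N\) bookkeeping is needed.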
For a sequence of functions \((f_{n})_{n\in\mathbb{N}}\), the expression \(\lim_{n\to\infty}f_{n}(x)=a\) is represented as :

```
tendsto (λ n, f n x) at_top (𝓝 a)
```

### Modular arithmetic and units

Some fundamental 
objects with which we shall work throughout are the finite spaces \(\mathbb{Z}/n\mathbb{Z}\). Note that proving properties for zmod n is equivalent to proving them for any finite cyclic group. Given a positive \(n\in\mathbb{N}\), \(\mathbb{Z}/n\mathbb{Z}\) is the same as fin n, the set of natural numbers less than \(n\) (and \(\mathbb{Z}\) for \(n=0\)). It is also the set of equivalence classes obtained via the relation on \(\mathbb{Z}\) : \(a\sim b\iff n\mid a-b\). It has a natural group structure, and is given the discrete topology, making it a topological group. Some maps used constantly include val : zmod n → ℕ, which takes any element to its smallest nonnegative representative less than n; and cast_hom : zmod n → R, a coercion to a ring, obtained by composing the canonical coercion with val. If R has characteristic dividing n, this map is a ring homomorphism. Given coprime naturals \(m,n\), an important equivalence is chinese_remainder : zmod (m * n) ≃+* zmod m × zmod n. About 45 additional lemmas were required, which have been put in a separate file, zmod/properties.lean.

Every monoid M has an associated space of invertible elements, or units, denoted units M or Mˣ. We frequently use the map units.coe_hom : Mˣ →* M to identify a unit in its parent space. Given a monoid_hom (abbreviated as →*) R →* S for monoids R and S, one can obtain a homomorphism Rˣ →* Sˣ by units.map.

## 3 Dirichlet characters and the Teichmüller character

An important task was to formalize Dirichlet characters, an integral part of the definition of the \(p\)-adic \(L\)-function. Dirichlet characters are usually not defined in this technical manner in the literature. Another addition is the definition of Dirichlet characters of level and conductor 0. The words character and Dirichlet character are used interchangeably. 
Dirichlet characters are usually defined as group homomorphisms from \((\mathbb{Z}/n\mathbb{Z})^{\times}\) to \(\mathbb{C}^{\times}\) for some natural number \(n\). Many properties traditionally known for groups hold more generally, and are stated in greater generality in mathlib for monoids. In the same spirit, we define Dirichlet characters to be monoid homomorphisms on any monoid :

```
abbreviation dirichlet_character (R : Type*) [monoid R] (n : ℕ) :=
  (zmod n)ˣ →* Rˣ

/-- The level of a Dirichlet character. -/
abbreviation lev {R : Type*} [monoid R] {n : ℕ}
  (χ : dirichlet_character R n) : ℕ := n
```

If we gave the definition of Dirichlet characters a def structure, dirichlet_character would become a Type distinct from (zmod n)ˣ →* Rˣ, making compositions with monoid_hom complicated; hence we used abbreviation instead. Note that the linter returns an extra unused argument warning (for χ) for the latter definition. Given a Dirichlet character χ, asso_dirichlet_character χ returns a monoid homomorphism from \(\mathbb{Z}/n\mathbb{Z}\) to \(R\), which is χ on the units and 0 otherwise.

```
noncomputable abbreviation asso_dirichlet_character {R : Type*}
  [monoid_with_zero R] {n : ℕ} (χ : dirichlet_character R n) : zmod n →* R :=
{ to_fun := function.extend (units.coe_hom (zmod n))
    ((units.coe_hom R) ∘ χ) 0, .. }
```

Lean requires us to tag this definition noncomputable, since we are producing data from an existential statement, classical.some (appearing in function.extend), which has no computational content (see Chapter 11 of [1]). One would like to shift between compatible Dirichlet characters of different levels. 
For this, we construct the following tools :

```
/-- Extends the Dirichlet character `χ` of level `n` to level `m`,
for `n ∣ m`. -/
def change_level {m : ℕ} (hm : n ∣ m) :
  dirichlet_character R n →* dirichlet_character R m :=
{ to_fun := λ ψ, ψ.comp (units.map (zmod.cast_hom hm (zmod n))), .. }

/-- `χ₀` of level `d` factors through `χ` of level `n` if `d ∣ n` and
`χ₀ = χ ∘ (zmod n → zmod d)`. -/
structure factors_through (d : ℕ) : Prop :=
(dvd : d ∣ n)
(ind_char : ∃ χ₀ : dirichlet_character R d, χ = χ₀.change_level dvd)
```

The notions of primitivity and conductor of a Dirichlet character follow easily :

```
/-- The set of numbers for which a Dirichlet character is periodic. -/
def conductor_set : set ℕ := {x : ℕ | χ.factors_through x}

/-- The minimum natural number `n` for which a character is periodic. -/
noncomputable def conductor : ℕ := Inf (conductor_set χ)

/-- A character is primitive if its level is equal to its conductor. -/
def is_primitive : Prop := χ.conductor = n

/-- The primitive character associated to a Dirichlet character. -/
noncomputable def asso_primitive_character :
  dirichlet_character R χ.conductor :=
classical.some (χ.factors_through_conductor).ind_char
```

Here, classical.some makes an arbitrary choice of an element from a nonempty space, and classical.some_spec lists the properties of this element coming from the space. When \(a=b\), while dirichlet_character R a and dirichlet_character R b are "mathematically" equal, Lean does not think of them as the same type. This gets complicated when additional layers, such as change_level, are added to the equation. A general method to resolve such problems is using the tactic subst, which would substitute \(a\) with \(b\); however, that failed here. Instead, we used the concept of heterogeneous equality (heq, or ==) to deal with this. 
The tactic congr' helped reduce goals to expressions of heterogeneous equality, which were then solved with the help of lemmas such as :

```
lemma change_level_heq {a b : ℕ} {S : Type*} [comm_monoid_with_zero S]
  (χ : dirichlet_character S a) (h : a = b) :
  change_level (show a ∣ b, from by rw h) χ == χ
```

This states that, for \(a=b\), changing the level of a Dirichlet character from \(a\) to \(b\) yields a character heterogeneously equal to the original. Multiplication of characters is traditionally defined only for primitive characters; our definition extends to any two characters. It takes as input characters χ₁ and χ₂ of levels n and m respectively, and returns the primitive character associated to χ₁′χ₂′, where χ₁′ and χ₂′ are obtained by changing the levels of χ₁ and χ₂ to lcm n m.

```
noncomputable def mul {m : ℕ} (χ₁ : dirichlet_character R n)
  (χ₂ : dirichlet_character R m) :=
asso_primitive_character (change_level (dvd_lcm_left n m) χ₁ *
  change_level (dvd_lcm_right n m) χ₂)
```

With respect to this definition, multiplication is not obviously commutative or associative. We also need the notion of odd and even characters : a character χ is odd if \(\chi(-1)=-1\), and even if \(\chi(-1)=1\). Over a commutative ring with no zero divisors, any character is either odd or even :

```
lemma is_odd_or_is_even {S : Type*} [comm_ring S] [no_zero_divisors S]
  {m : ℕ} (ψ : dirichlet_character S m) : ψ.is_odd ∨ ψ.is_even
```

### Teichmüller character

The initial effort was to formalize the definition of the Teichmüller character (denoted ω) directly. However, it was discovered that Witt vectors, and in particular Teichmüller lifts, had previously been added to mathlib by Johan Commelin and Robert Lewis. This reiterates the importance of the collaborative spirit of Lean, and of making definitions in the correct generality. 
It is beyond the scope of this text to define Witt vectors and do them justice; we refer interested readers to Section 2.4 of [3]. For a commutative ring \(R\) and a prime number \(p\), one can obtain a ring of Witt vectors \(\mathbb{W}(R)\). When we take \(R=\mathbb{Z}/p\mathbb{Z}\), we get

```
def equiv : 𝕎 (zmod p) ≃+* ℤ_[p]
```

One also obtains the Teichmüller lift \(R\to\mathbb{W}(R)\) : given \(r\in R\), the 0-th coefficient of its lift is \(r\), and the other coefficients are 0. This map is a multiplicative monoid homomorphism and is denoted teichmuller. Combining this with the previous two definitions, we obtain our definition of the Teichmüller character :

```
noncomputable abbreviation teichmuller_character_mod_p (p : ℕ)
  [fact (nat.prime p)] : dirichlet_character ℤ_[p] p :=
units.map ((witt_vector.equiv p).to_monoid_hom.comp (witt_vector.teichmuller p))
```

We use [fact p.prime] to make the primality of \(p\) an instance. This map takes \(x\in(\mathbb{Z}/p\mathbb{Z})^{\times}\) to a root of unity \(y\in\mathbb{Z}_{p}\) such that \(y\equiv x\ (\text{mod } p)\). Often we view it as taking values in a \(\mathbb{Q}_{p}\)-algebra \(R\), by composing it with algebra_map ℚ_[p] R, which identifies elements of \(\mathbb{Q}_{p}\) in \(R\). Since we mostly deal with \(\omega^{-1}\) taking values in \(R^{\times}\), we define this as teichmuller_character_mod_p'. We proved properties of Teichmüller characters in teichmuller_character.lean, such as : for odd primes \(p\), the Teichmüller character is odd, and it is trivial otherwise :

```
lemma eval_neg_one (hp : 2 < p) : teichmuller_character_mod_p p (-1) = -1
```

## 4 Bernoulli polynomials and the generalized Bernoulli number

The Bernoulli numbers \(B^{\prime}_{n}\) are defined by the generating function \(\sum B^{\prime}_{n}\frac{t^{n}}{n!}=\frac{t}{e^{t}-1}\). They appear in the computation of sums of powers of naturals, \(\sum_{n}n^{k}\). 
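For reference, expanding this generating function gives the first few Bernoulli numbers : \[\frac{t}{e^{t}-1}=1-\frac{t}{2}+\frac{t^{2}}{12}-\frac{t^{4}}{720}+\cdots,\qquad\text{so}\quad B^{\prime}_{0}=1,\;B^{\prime}_{1}=-\tfrac{1}{2},\;B^{\prime}_{2}=\tfrac{1}{6},\;B^{\prime}_{3}=0,\;B^{\prime}_{4}=-\tfrac{1}{30}.\]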
Note that several authors take the Bernoulli numbers \(B_{n}\) to be defined by \(\sum B_{n}\frac{t^{n}}{n!}=\frac{t}{1-e^{-t}}\). The difference between the two conventions is \(B_{n}=(-1)^{n}B^{\prime}_{n}\), with \(B^{\prime}_{1}=-\frac{1}{2}\). A reformulation gives : \[B^{\prime}_{n}=1-\sum_{k=0}^{n-1}\binom{n}{k}\frac{B^{\prime}_{k}}{n-k+1}\] In mathlib, \(B^{\prime}_{n}\) was already defined (by Johan Commelin) as above. However, we needed \(B_{n}\), which we then defined as :

```
def bernoulli (n : ℕ) : ℚ := (-1)^n * bernoulli' n
```

The Bernoulli polynomials \(B_{n}(X)\), a generalization of the Bernoulli numbers, are defined by the generating function \(\sum_{n=0}^{\infty}B_{n}(X)\frac{t^{n}}{n!}=\frac{te^{tX}}{e^{t}-1}\). This gives : \[B_{n}(X)=\sum_{i=0}^{n}\binom{n}{i}B_{i}X^{n-i}\] We defined the Bernoulli polynomials as :

```
def polynomial.bernoulli (n : ℕ) : polynomial ℚ :=
∑ i in range (n + 1), monomial (n - i) ((bernoulli i) * (choose n i))
```

Here, monomial n a translates to \(aX^{n}\), and ∑ i in s, f i translates to \(\sum_{i\in s}f(i)\), for a finset (finite set) s. A small aspect of this naming convention is that if the namespaces for Bernoulli numbers and polynomials are both open (which is often the case), then in order to use the Bernoulli numbers, one needs to write _root_.bernoulli. We shall use them interchangeably here when the context is clear. An important fact is that \(\forall n\), \((n+1)X^{n}=\sum_{k=0}^{n}\binom{n+1}{k}B_{k}(X)\) :

```
theorem sum_bernoulli (n : ℕ) : monomial n (n + 1 : ℚ) =
  ∑ k in range (n + 1), ((n + 1).choose k : ℚ) • bernoulli k
```

These proofs are relatively straightforward. Most of this work is now part of mathlib, and has been used to give a formalized proof of Faulhaber's theorem.

### Generalized Bernoulli numbers

Generalized Bernoulli numbers are integral to our work, since they are related to the special values of \(p\)-adic \(L\)-functions and Dirichlet \(L\)-functions. 
Given a primitive Dirichlet character \(\chi\) of conductor \(f\), the generalized Bernoulli numbers are defined by (Section 4.1, [6]) \[\sum_{n=0}^{\infty}B_{n,\chi}\frac{t^{n}}{n!}=\sum_{a=1}^{f}\frac{\chi(a)te^{at}}{e^{ft}-1}\] For any multiple \(F\) of \(f\), Proposition 4.1 of [6] gives us : \[B_{n,\chi}=F^{n-1}\sum_{a=1}^{F}\chi(a)B_{n}\bigg{(}\frac{a}{F}\bigg{)}\] This is much easier to work with, so we use it as our definition instead, taking \(F=f\) :

```
def general_bernoulli_number {S : Type*} [comm_semiring S] [algebra ℚ S]
  {n : ℕ} (ψ : dirichlet_character S n) (m : ℕ) : S :=
(algebra_map ℚ S ((ψ.conductor : ℚ) ^ (m - 1 : ℤ))) *
  ∑ a in finset.range ψ.conductor,
    asso_dirichlet_character (asso_primitive_character ψ) a.succ *
      algebra_map ℚ S ((bernoulli m).eval (a.succ / ψ.conductor : ℚ))
```

Contrary to the traditional definition, this is defined for all characters, and ψ takes values in any commutative ℚ-algebra instead of \(\mathbb{C}\). We also had to explicitly mark m - 1 as having type ℤ, since Lean would otherwise infer it to have type ℕ, which might have caused errors (subtraction on ℕ and ℤ differ).

### A special property of generalized Bernoulli numbers

An important property of these numbers is the following. Let \(\chi\) be an even Dirichlet character of level \(dp^{m}\) for \(d\) coprime to the odd prime \(p\), with \(m\) positive. Suppose \(R\) is a nontrivial commutative non-Archimedean normed \(\mathbb{Q}_{p}\)-algebra with no zero divisors. For \(k>1\), \[\lim_{n\to\infty}\frac{1}{dp^{n}}\sum_{0<a<dp^{n};(a,dp)=1}\chi\omega^{-k}(a)a^{k}=(1-\chi\omega^{-k}(p)p^{k-1})B_{k,\chi\omega^{-k}}\] Instead of giving R a non-Archimedean structure (which did not exist in mathlib when this project began), we give as input its consequences, the conditions na and na'. 
This is formulated in Lean as :

```
theorem lim_even_character'
  (na' : ∀ (n : ℕ) (f : (zmod n)ˣ → R),
    ∥∑ i : (zmod n)ˣ, f i∥ ≤ ⨆ (i : (zmod n)ˣ), ∥f i∥)
  (na : ∀ (n : ℕ) (f : ℕ → R),
    ∥∑ i in finset.range n, f i∥ ≤ ⨆ (i : zmod n), ∥f i.val∥) :
  tendsto (λ (n : ℕ), (1 / ↑(d * p ^ n)) •
    ∑ i in finset.range (d * p ^ n),
      asso_dirichlet_character
        (χ.mul (teichmuller_character_mod_p' p R ^ k)) ↑i * ↑i ^ k)
    at_top (𝓝 (general_bernoulli_number
      (χ.mul (teichmuller_character_mod_p' p R ^ k)) k))
```

The proof of this theorem follows the proof of Lemma 7.11 of [6], a point of difference being that our theorem holds more generally for \(R\) a non-Archimedean normed commutative \(\mathbb{Q}_{p}\)-algebra with no zero divisors, instead of \(\mathbb{C}_{p}\). Broadly, it equates the two sides modulo \(p^{n}\) for sufficiently large \(n\), and uses the fact that \[\lim_{n\to\infty}\sum_{0<a<dp^{n};(a,dp)=1}\chi\omega^{-k}(a)a^{k-1}=0\] The formalization is very calculation-intensive, and is a good example of a small proof on paper being magnified in Lean, because there are multiple coercions and arithmetic calculations to be dealt with. Unfortunately, tactics such as ring and simp that usually help with these fail here. The latter fact is translated into Lean as :

```
lemma sum_even_character_tendsto_zero_of_units :
  tendsto (λ n, ∑ (i : (zmod (d * p ^ n))ˣ),
    ((asso_dirichlet_character
      (χ.mul (teichmuller_character_mod_p' p R ^ k))) i * i ^ (k - 1)))
    at_top (𝓝 0)
```

The proof of this theorem is in tendsto_zero_of_sum_even_char.lean. 
## 5 Construction of the \(p\)-adic \(L\)-function

### Density of locally constant functions

For any compact Hausdorff totally disconnected space X and a commutative normed ring A, we have proved that \(LC(X,A)\) is a dense subset of \(C(X,A)\). Formalizing this took about 500 lines of code (now in mathlib), and is based on the fact that locally compact Hausdorff totally disconnected spaces have a clopen basis :

```
lemma loc_compact_Haus_tot_disc_of_zero_dim {H : Type*} [t2_space H]
  [locally_compact_space H] [totally_disconnected_space H] :
  is_topological_basis {s : set H | is_clopen s}
```

This turned out to be hard to formalize. Given a set \(s\) of \(H\), Lean gives a subset \(V\) of \(s\) the type V : set s; however, Lean does not recognize \(V\) as a subset of \(H\). As a result, to use compact_space s ↔ is_compact (s : set H), one must construct V' : set H as the image of \(V\) under the closed embedding coe : s → H. This process must be repeated each time a subset of \(H\) which is also a topological subspace is considered. Finally, it must be shown that all these coercions match up in the big topological space \(H\).

### Clopen sets of the \(p\)-adic integers

\(\mathbb{Z}_{p}\) is a profinite space (as shown in Section 2.4 of [3]). It is the inverse limit of the finite discrete topological spaces \(\mathbb{Z}/p^{n}\mathbb{Z}\) for all \(n\), and has a clopen basis of sets of the form \(U_{a,n}:=proj_{n}^{-1}(a)\) for \(a\in\mathbb{Z}/p^{n}\mathbb{Z}\), where \(proj_{n}\) is the canonical projection ring homomorphism to_zmod_pow n : ℤ_[p] →+* zmod (p ^ n). 
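For example, with \(p=3\), \(n=1\) and \(a=2\), the basic clopen set is \[U_{2,1}=proj_{1}^{-1}(2)=\{x\in\mathbb{Z}_{3}\mid x\equiv 2\ (\mathrm{mod}\ 3)\}=2+3\mathbb{Z}_{3},\] which is exactly the \(3\)-adic ball of radius \(3^{1-1}=1\) around \(2\).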
We first define the collection of sets \((U_{a,n})_{a,n}\) :

```
def clopen_basis : set (set ℤ_[p]) :=
{x : set ℤ_[p] | ∃ (n : ℕ) (a : zmod (p ^ n)),
  x = set.preimage (padic_int.to_zmod_pow n) {a}}
```

We show that clopen_basis forms a topological basis and that every element of it is clopen :

```
theorem clopen_basis_clopen : (clopen_basis p).is_topological_basis ∧
  ∀ x ∈ clopen_basis p, is_clopen x
```

The mathematical proof shows that any \(\epsilon\)-ball contains some \(U_{a,n}\). This is true because, given \(n\in\mathbb{N}\) and \(x\in\mathbb{Z}/p^{n}\mathbb{Z}\), the preimage of \(x\) under to_zmod_pow n is the same as the ball centered at \(x\) (now considered as an element of \(\mathbb{Z}_{p}\)) with radius \(p^{1-n}\). The following lemmas prove useful :

```
lemma appr_spec (n : ℕ) (x : ℤ_[p]) :
  x - appr x n ∈ (ideal.span {p ^ n} : ideal ℤ_[p])

lemma has_coe_t_eq_coe (x : ℤ_[p]) (n : ℕ) :
  (((appr x n) : zmod (p ^ n)) : ℤ_[p]) = ((appr x n) : ℤ_[p])
```

For x : ℤ_[p], appr x n is the smallest natural number congruent to x modulo \(p^{n}\). In the latter lemma, the RHS is the coercion of appr x n, which has type ℕ, to \(\mathbb{Z}_{p}\); the LHS is the coercion of appr x n first to zmod (p ^ n) and then to \(\mathbb{Z}_{p}\). This statement is not true in general : given an arbitrary natural number, its lift to \(\mathbb{Z}_{p}\) need not equal the composition of its lifts through \(\mathbb{Z}/p^{n}\mathbb{Z}\). It works here because the coercion from \(\mathbb{Z}/p^{n}\mathbb{Z}\) to \(\mathbb{Z}_{p}\) is not a canonical lift : it is the composition of the coercion from \(\mathbb{Z}/p^{n}\mathbb{Z}\) to \(\mathbb{N}\), which takes \(a\in\mathbb{Z}/p^{n}\mathbb{Z}\) to the smallest natural number in its equivalence class, with the canonical map \(\mathbb{N}\to\mathbb{Z}_{p}\). 
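As a small worked example : for \(p=2\) and \(x=-1\in\mathbb{Z}_{2}\) (the \(2\)-adic integer with all binary digits equal to \(1\)), the approximations are \[\mathrm{appr}\ (-1)\ n=2^{n}-1,\] since \(2^{n}-1\) is the smallest natural number congruent to \(-1\) modulo \(2^{n}\).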
One can similarly show that the sets \(U_{b,a,n}:=proj_{1}^{-1}(b)\times proj_{2,n}^{-1}(a)\) form a clopen basis for \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\), where \(proj_{1}\) is the first canonical projection onto \(\mathbb{Z}/d\mathbb{Z}\) and \(proj_{2,n}\) is the composition of the second projection onto \(\mathbb{Z}_{p}\) with \(proj_{n}\) described above. We call this set clopen_basis' p d. Its properties are formalized in padic_int.clopen_properties.lean.

### Distributions and measures

In this section, \(X=\varprojlim_{i\in\mathbb{N}}X_{i}\) denotes a profinite space with \(X_{i}\) finite, projection maps \(\pi_{i}:X\to X_{i}\), and surjective maps \(\pi_{ij}:X_{i}\to X_{j}\) for all \(i\geq j\). Henceforth, we use \(G\) to denote an abelian group, \(A\) a commutative normed ring, \(R\) a commutative complete normed ring which is also a \(\mathbb{Q}_{p}\)-algebra, and \(LC(X,Y)\) the space of locally constant functions from \(X\) to \(Y\). We fix a prime \(p\) and an integer \(d\) such that \(\gcd(d,p)=1\). The topology on \(C(X,A)\) comes from its normed group structure, induced by the norm on \(A\) : \(||f-g||=\sup_{x\in X}||f(x)-g(x)||\). In fact, this topology is the same as the topology defined on bounded functions on \(X\), since \(X\) is a compact space. Since the API for bounded continuous functions on compact spaces was developed at around the same time (created by Oliver Nash), we used existing lemmas such as equiv_bounded_of_compact. A distribution (from Section 12.1 of [6]) is a \(G\)-linear function \(\phi:LC(X,G)\to G\). Such linear maps are already a type in mathlib, hence we do not redefine them. 
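A simple example of a measure (standard, and not specific to this formalization) : for any point \(x_{0}\in X\), the Dirac distribution \[\delta_{x_{0}}(f):=f(x_{0})\] is linear, and \(||\delta_{x_{0}}(f)||=||f(x_{0})||\leq||f||\), so it is bounded with constant \(K=1\), hence a measure.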
Measures (not to be confused with measure-theoretic measures) are bounded distributions :

```
def measures := {φ : (locally_constant X A) →ₗ[A] A //
  ∃ K : ℝ, 0 < K ∧ ∀ f : locally_constant X A,
    ∥φ f∥ ≤ K * ∥inclusion X A f∥}
```

The map inclusion identifies the locally constant function f as a continuous function. The boundedness of the distribution makes the measure continuous.

### The Bernoulli measure

The Bernoulli measure is an essential example. We make a choice of an integer \(c\) with \(\gcd(c,dp)=1\), and let \(c^{-1}\) be an integer such that \(cc^{-1}\equiv 1\ (\text{mod }dp^{2n+1})\). For a clopen set \(U_{a,n}\), we define \[E_{c}(\chi_{U_{a,n}})=E_{c,n}(a)=\left\{\frac{a}{dp^{n+1}}\right\}-c\left\{\frac{c^{-1}a}{dp^{n+1}}\right\}+\frac{c-1}{2}\] In Lean, this translates to (note that fract x represents the fractional part of \(x\)) :

```
def bernoulli_distribution := λ (n : ℕ) (a : zmod (d * p ^ n)),
  fract ((a : ℤ) / (d * p ^ (n + 1))) -
    c * fract ((a : ℤ) / (c * (d * p ^ (n + 1)))) + (c - 1) / 2
```

The original plan was to define a set of the form :

```
def bernoulli_measure (hc : c.gcd p = 1) :=
{x : locally_constant (zmod d × ℤ_[p]) R →ₗ[R] R | ∀ (n : ℕ)
  (a : zmod (d * p ^ n)),
  x (char_fn R (clopen_from.is_clopen p d n a)) =
    (algebra_map ℚ R) (E_c p d hc n a)}
```

and to show that it is nonempty. char_fn is the locally constant characteristic function of a clopen set (1 on the set and 0 otherwise), taking as input the codomain of the function and a proof that the set is clopen. However, information is lost this way, since one then has to use classical.some to extract the underlying measure. 
We instead use a more elegant approach :

```
/-- A sequence `a` is eventually constant if all of its terms are
eventually the same. -/
def is_eventually_constant {α : Type*} (a : ℕ → α) : Prop :=
{n | ∀ m, n ≤ m → a (nat.succ m) = a m}.nonempty

structure eventually_constant_seq {α : Type*} :=
(to_seq : ℕ → α)
(is_eventually_const : is_eventually_constant to_seq)
```

Given a locally constant function \(f\) from \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\) to \(R\), we define the eventually constant sequence from_loc_const :

```
{ to_seq := λ (n : ℕ),
    ∑ a in zmod' (d * p ^ n) _,
      f a • ((algebra_map ℚ_[p] R) (bernoulli_distribution p d c n a)),
  is_eventually_const := _, }
```

Here zmod' n is the universal finset of zmod n, that is, the finset of all its elements. Given a locally constant function f : locally_constant ((zmod d)ˣ × ℤ_[p]ˣ) R, an element of the set bernoulli_measure is given by :

```
sequence_limit (from_loc_const p d R (loc_const_ind_fn _ p d f))
```

where loc_const_ind_fn is the locally constant function on \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\) that takes the value of \(f\) on the units of the domain, and 0 otherwise. The linearity properties follow easily. Notice that bernoulli_distribution takes locally constant functions on \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\), while bernoulli_measure takes locally constant functions on \((\mathbb{Z}/d\mathbb{Z})^{\times}\times\mathbb{Z}_{p}^{\times}\). This had to be done since our clopen basis was defined on \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\); while it is easy to transfer the results to the units on paper, it requires a fair amount of work in Lean. We now prove that bernoulli_measure is indeed a measure, that is, it is bounded. The bound we choose is \(K:=1+\parallel c\parallel+\parallel\frac{c-1}{2}\parallel\). The proof is as follows : let \(\phi\) denote loc_const_ind_fn. 
We want \(\parallel E_{c}(\phi(f))\parallel\leq K\parallel f\parallel\). It suffices to prove this for the characteristic functions \(\chi_{n,a}\), because one can find an \(n\) such that \(\phi(f)=\sum_{a}\phi(f)(a)\chi_{n,a}\) :

```
lemma loc_const_eq_sum_char_fn (f : locally_constant (zmod d × ℤ_[p]) R)
  (hd : d.gcd p = 1) : ∃ n : ℕ, f = ∑ a in finset.range (d * p ^ n),
    f a • char_fn R (clopen_from.is_clopen p d n a)
```

This proof is akin to proving that from_loc_const is eventually constant, using discrete quotients. A discrete quotient of a topological space is given by an equivalence relation all of whose equivalence classes are clopen :

```
structure discrete_quotient (X : Type*) [topological_space X] :=
(rel : X → X → Prop)
(equiv : equivalence rel)
(clopen : ∀ x, is_clopen (set_of (rel x)))
```

The last statement translates to : \(\forall x\in X\), \(\{y\mid y\sim x\}\) is clopen. Given two discrete quotients \(A\) and \(B\), \(A\leq B\) means \(\forall x,y\in X\), \(x\sim_{A}y\implies x\sim_{B}y\). Any locally constant function induces a discrete quotient via its clopen fibers :

```
def locally_constant.discrete_quotient : discrete_quotient X :=
{ rel := λ a b, f b = f a, .. }
```

We now define a function :

```
/-- A discrete quotient induced by `to_zmod_pow`. -/
def discrete_quotient_of_to_zmod_pow :
  ℕ → discrete_quotient (zmod d × ℤ_[p]) :=
λ n, ⟨λ a b, to_zmod_pow n a.2 = to_zmod_pow n b.2 ∧ a.1 = b.1, _, _⟩
```

For \(a=(a_{1},a_{2})\) and \(b=(b_{1},b_{2})\) in \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\), this represents the relation \(a\sim b\iff a_{2}\ (\text{mod }p^{n})=b_{2}\ (\text{mod }p^{n})\wedge a_{1}=b_{1}\). 
Then, given a locally constant function \(f\) on \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\), for \(N\) large enough, the fibers of \(f\) mod \(p^{N}\) are contained in the basic clopen sets of \(p^{N}\) :
```
lemma le : ∃ N : ℕ,
  discrete_quotient_of_to_zmod_pow p d N ≤ f.discrete_quotient
```
The proofs now follow from this fact : \(\exists N,\forall m\geq N\), \[\sum_{a\in\mathbb{Z}/dp^{m+1}\mathbb{Z}}f(a)E_{c,m+1}(a)=\sum_{a\in\mathbb{Z}/ dp^{m}\mathbb{Z}}f(a)E_{c,m}(a)\] The required \(N\) is classical.some (discrete_quotient_of_to_zmod_pow.le f) + 1. We also define the following :
```
/-- Set of all `b ∈ zmod (d * p^m)` such that `b = a mod (d * p^n)` for
`a ∈ zmod (d * p^n)`. -/
def equi_class (n m : ℕ) (a : zmod (d * p^n)) :=
{b : zmod (d * p^m) | (b : zmod (d * p^n)) = a}
```
Then, we have the following lemma :
```
lemma zmod'_succ_eq_bUnion :
  zmod' (d * p^(m+1)) = (zmod' (d * p^m)).bUnion
    (λ a : zmod (d * p^m), set.to_finset (equi_class m (m+1) a))
```
This lemma says that any element of \(\mathbb{Z}/dp^{m+1}\mathbb{Z}\) comes from equi_class m (m+1) b for some \(b\in\mathbb{Z}/dp^{m}\mathbb{Z}\). The proof is now complete with the following lemma :
```
lemma bernoulli_distribution_sum' (x : zmod (d * p^m)) :
  ∑ (y : zmod (d * p^m.succ)) in
    (λ a : zmod (d * p^m), ((equi_class m.succ) a).to_finset) x,
    bernoulli_distribution p d c m.succ y = bernoulli_distribution p d c m x
```
which says, for \(x\in\mathbb{Z}/dp^{m}\mathbb{Z}\), \(E_{c,m}(x)=\sum_{y}E_{c,m+1}(y)\), for y ∈ equi_class m (m+1) x.

### Integrals
The last piece in the puzzle is the integral. We use the same notation as in the previous section. Given a measure \(\mu\), and a function \(f\in LC(X,R)\), \(\int fd\mu:=\mu(f)\). As in Theorem 12.1 of [6], this can be extended to a continuous \(R\)-linear map \(\int_{X}fd\mu:C(X,R)\to R\).
This follows from the fact that \(LC(X,R)\) is dense in \(C(X,R)\); as a result, the map from \(LC(X,R)\) to \(C(X,R)\) is dense_inducing, that is, it has dense range and the topology on \(LC(X,R)\) is induced from the topology on \(C(X,R)\). The continuity of the extension of the integral follows from the fact that every measure \(\mu\) is uniformly continuous.

### Construction
There are several possible definitions for the \(p\)-adic \(L\)-function, the most common being a meromorphic function \(L_{p}(s,\chi)\) on \(\{s\in\mathbb{C}_{p}\mid|s|<p\}\) obtained by analytic continuation, such that \[L_{p}(1-n,\chi)=-(1-\chi\omega^{-n}(p)p^{n-1})\frac{B_{n,\chi\omega^{-n}}}{n}\] for \(n\geq 1\) (Theorem 5.11, [6]). Due to the absence of \(\mathbb{C}_{p}\) in mathlib at the time, and the difficulty of showing analytic continuation (even on paper), our definition is instead motivated by Theorem 12.2, [6], which states that, for \(s\in\mathbb{Z}_{p}\), and a Dirichlet character \(\chi\) with conductor \(dp^{m}\), with \(gcd(d,p)=1\) and \(m\geq 0\), for a choice of \(c\in\mathbb{Z}\) with \(gcd(c,dp)=1\) : \[(1-\chi(c)\langle c\rangle^{s+1})L_{p}(-s,\chi)=\int_{(\mathbb{Z}/d\mathbb{Z}) ^{\times}\times\mathbb{Z}_{p}^{\times}}\chi\omega^{-1}(a)\langle a\rangle^{s}dE _{c} \tag{1}\] where \(\langle a\rangle=\omega^{-1}(a)a\), and \(b^{s}=\exp(s\log_{p}(b))\) (the exponential and logarithm are defined in terms of power series expansions). Instead of using the variable \(s\) (which takes values in a subset of \(\mathbb{C}_{p}\)), we choose to use an element of the weight space, the set of continuous monoid homomorphisms from \((\mathbb{Z}/d\mathbb{Z})^{\times}\times\mathbb{Z}_{p}^{\times}\) to \(R\). We replace \(\langle a\rangle^{s}\) with w : continuous_monoid_hom A. The advantage is that our \(p\)-adic \(L\)-function can now be defined over a more general space : a nontrivial normed commutative complete non-Archimedean \(\mathbb{Q}_{p}\)-algebra with no zero divisors.
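Concretely, \(\omega(a)\) is the unique \((p-1)\)-st root of unity congruent to \(a\) mod \(p\), and \(\langle a\rangle=\omega^{-1}(a)a\) lies in \(1+p\mathbb{Z}_{p}\), which is exactly why \(\langle a\rangle^{s}=\exp(s\log_{p}\langle a\rangle)\) converges. A small Python illustration mod \(p^{N}\), using the standard fact that the Teichmüller representative is the limit of \(a^{p^{n}}\) (this is plain number theory, independent of the Lean code; the names `teichmuller` and `angle_bracket` are ours) :

```python
def teichmuller(a, p, N):
    """Teichmuller lift omega(a) mod p^N: omega(a) = lim_n a^{p^n}, and
    a^{p^{N-1}} already agrees with the limit mod p^N (standard fact)."""
    return pow(a, p**(N - 1), p**N)

def angle_bracket(a, p, N):
    """<a> = omega(a)^{-1} * a mod p^N, the projection of a to 1 + p*Z_p."""
    omega = teichmuller(a, p, N)
    return (pow(omega, -1, p**N) * a) % p**N
```

For example, with \(p=5\), \(N=2\) one gets \(\omega(2)\equiv 7\) and \(\langle 2\rangle\equiv 11\pmod{25}\), and \(11\equiv 1\pmod 5\) as expected.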
Given a Dirichlet character \(\chi\) of level \(dp^{m}\) with \(gcd(d,p)=1\) and \(m>0\), we now define the \(p\)-adic \(L\)-function to be : \[L_{p}(w,\chi):=\int_{(\mathbb{Z}/d\mathbb{Z})^{\times}\times\mathbb{Z}_{p}^{ \times}}\chi\omega^{-1}(a)wdE_{c}\]
```
def p_adic_L_function := measure.integral (bernoulli_measure R hc hc' hd na)
  ⟨(units.coe_hom R).comp (dirichlet_char_extend p d R m hd
      (change_level (χ.mul (teichmuller_character_mod_p' p R)))) * w.to_monoid_hom,
   cont_palf m hd _ w⟩
```
Here, dirichlet_char_extend extends \(\chi\) from \((\mathbb{Z}/dp^{m}\mathbb{Z})^{\times}\) to \((\mathbb{Z}/d\mathbb{Z})^{\times}\times\mathbb{Z}_{p}^{\times}\) via the restriction map. The last term cont_palf proves the continuity of the given function, since Lean takes an element of type C((zmod d)ˣ × ℤ_[p]ˣ, R). We have absorbed the constant term given in the LHS of (1). This was done because Theorem 12.2 lets \(L_{p}(-s,\chi)\) take values in \(\mathbb{C}_{p}\). In a general ring \(R\), as we have chosen, division need not exist. One would then need the factor to be a unit, which may not always happen (for example, consider \(R=\mathbb{Q}_{p}\)). Thus, our \(p\)-adic \(L\)-function differs from the original by a constant factor. This factor can be easily removed if one assumes \(R\) has division.

## 6 Evaluation at negative integers
We shall now prove that our chosen definition of the \(p\)-adic \(L\)-function is equivalent to the original one, that is, it takes the same values at negative integers : for \(n>1\), \[L_{p}(1-n,\chi)=-(1-\chi\omega^{-n}(p)p^{n-1})\frac{B_{n,\chi\omega^{-n}}}{n} \tag{2}\] For this section, we assume that \(R\) is a non-Archimedean normed commutative \(\mathbb{Q}_{p}\)-algebra, which is complete, nontrivial, and has no zero divisors. The scalar multiplication structures obtained from \(\mathbb{Q}\) and \(\mathbb{Q}_{p}\) are compatible, given by is_scalar_tower ℚ ℚ_[p] R (see Section 4.2 of [2]).
The prime \(p\) is odd, and we choose positive natural numbers \(d\) and \(c\) which are mutually coprime and are also coprime to \(p\). The Dirichlet character \(\chi\) has level \(dp^{m}\), where \(m\) is positive. We also assume \(\chi\) is even and \(d\) divides its conductor. Let us first explain why we need the latter condition.

### Factors of the conductor
We explain here why we need \(d\) to divide the conductor of \(\chi\). In this section, we do not differentiate between a Dirichlet character and its associated Dirichlet character. Recall that \(\chi\omega^{-1}\) actually denotes the Dirichlet character multiplication of \(\chi\) and \(\omega^{-1}\), as explained in Section 3. In order to translate between sums on \(\mathbb{Z}/dp^{n}\mathbb{Z}^{\times}\) and \(\mathbb{Z}/dp^{n}\mathbb{Z}\), one needs that, for all \(x\in\mathbb{Z}/dp^{n}\mathbb{Z}\) such that \(x\) is not a unit, \(\chi\omega^{-k}(x)=0\) for all \(k>0\). This is equivalent to saying that, for all \(y\in\mathbb{N}\) such that \(gcd(y,d)\neq 1\) or \(gcd(y,p)\neq 1\), we have \(gcd(y,(\chi\omega^{-k}).\texttt{conductor})\neq 1\). Given coprime natural numbers \(k_{1},k_{2}\) and a character \(\psi\) of level \(k_{1}k_{2}\), one can find primitive characters \(\psi_{1}\) and \(\psi_{2}\) of levels \(k_{1}\) and \(k_{2}\) respectively such that \(\psi=\psi_{1}\psi_{2}\) :
```
lemma eq_mul_of_coprime_of_dvd_conductor {m n : ℕ} [fact (0 < m * n)]
  (χ : dirichlet_character R (m * n)) (hχ : m ∣ χ.conductor)
  (hcop : m.coprime n) : ∃ (χ₁ : dirichlet_character R m)
  (χ₂ : dirichlet_character R n), χ₁.is_primitive ∧ χ =
  χ₁.change_level (dvd_mul_right m n) * χ₂.change_level (dvd_mul_left n m)
```
Thus, given \(k>0\), we can find primitive characters \(\chi_{1}\) and \(\chi_{2}\) with conductors \(z_{1}\) and \(z_{2}\) such that \(z_{1}\mid d\) and \(z_{2}\mid p^{m}\) and \(\chi_{1}\chi_{2}=\chi\omega^{-k}\).
The condition that \(d\) divides the conductor of \(\chi\) ensures that \(z_{1}=d\). As a result, if \(gcd(y,d)\neq 1\), then \(gcd(y,z_{1}z_{2})\neq 1\), so \(\chi\omega^{-k}(y)=0\) as needed.

### Main Result
Note that the same result holds when \(\chi\) is odd or when \(p=2\), though the proofs differ slightly. We shall skip most of the details of the proof, since these are heavily computational. We shall instead highlight the key concepts that are used. Our reformulation of (2) is :
```
theorem p_adic_L_function_eval_neg_int_new :
  (p_adic_L_function m _ (mul_inv_pow (n - 1))) =
  (algebra_map ℚ R) (1 / n : ℚ) *
  (1 - (χ (zmod.unit_of_coprime c _) *
    (mul_inv_pow (zmod.unit_of_coprime c hc', _)))) *
  (1 - ((asso_dirichlet_character
    (χ.mul ((teichmuller_character_mod_p' p R)^n))) p * p^(n - 1))) *
  (general_bernoulli_number
    (χ.mul ((teichmuller_character_mod_p' p R)^n)) n)
```
Here, mul_inv_pow is our translation of \(\langle a\rangle^{s}\). The proof consists of two steps : breaking up the integral in the LHS into three sums, and evaluating each of these sums. This is very calculation intensive, and was the longest part of the project. The proof is very similar to the proof of Theorem 12.2 in [6].
Since \(LC((\mathbb{Z}/d\mathbb{Z})^{\times}\times\mathbb{Z}_{p}^{\times},R)\) is dense in \(C((\mathbb{Z}/d\mathbb{Z})^{\times}\times\mathbb{Z}_{p}^{\times},R)\), we observe that the integral \(L_{p}(1-n,\chi)\) is the same as : \[L_{p}(1-n,\chi)=\lim_{j\to\infty}\sum_{a\in(\mathbb{Z}/dp^{j}\mathbb{Z})^{ \times}}\chi\omega^{-1}(a)\langle a\rangle^{n-1}E_{c,j}(a)\] \[=\lim_{j\to\infty}\bigg{(}\sum_{a\in(\mathbb{Z}/dp^{j}\mathbb{Z})^{\times}} \chi\omega^{-n}(a)a^{n-1}\bigg{\{}\frac{a}{dp^{j}}\bigg{\}} \tag{3}\] \[-\sum_{a\in(\mathbb{Z}/dp^{j}\mathbb{Z})^{\times}}\chi\omega^{-n}(a)a^{n-1} \bigg{(}c\bigg{\{}\frac{c^{-1}a}{dp^{j}}\bigg{\}}\bigg{)} \tag{4}\] \[+\bigg{(}\tfrac{c-1}{2}\bigg{)}\sum_{a\in(\mathbb{Z}/dp^{j}\mathbb{Z})^{\times}} \chi\omega^{-n}(a)a^{n-1}\bigg{)} \tag{5}\] Going from the first equation to the second took about 600 lines of code, which can be found in neg_int_eval.lean. While the proof (on paper) is only a page long, this is very calculation heavy in Lean, because one needs to shift between elements coerced to different types, such as \(\mathbb{Z}/(dp^{j})\mathbb{Z}\), \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}/p^{j}\mathbb{Z}\), \(\mathbb{Z}/d\mathbb{Z}\times\mathbb{Z}_{p}\), \(R\) and their units. Moreover, when each of these types occurs as locally constant or continuous functions, one needs to separately prove that each of these functions is also (respectively) locally constant or continuous. Other difficulties include several different ways to obtain the same term, such as equiv.inv_fun, equiv.symm, ring_equiv.symm and ring_equiv.to_equiv.inv_fun. We have constructed several lemmas to simplify traversing between these terms. Each of these sums is then evaluated separately. The first sum in (3) follows from Theorem 1, after translations between zmod (d * p^n)ˣ and finset.range (d * p^n).
This is done by the following lemma, which says \[\mathbb{Z}/dp^{k}\mathbb{Z}\simeq\{x\in\mathbb{N}\mid gcd(x,d)\neq 1\}\cup\{x \in\mathbb{N}\mid gcd(x,p)\neq 1\}\cup(\mathbb{Z}/dp^{k}\mathbb{Z})^{\times}\]
```
lemma helper_U_3 (x : ℕ) : range (d * p^x) =
  finite.to_finset (finite_of_finite_inter (range (d * p^x)) ({x | ¬ x.coprime d})) ∪
  finite.to_finset (finite_of_finite_inter (range (d * p^x)) ({x | ¬ x.coprime p})) ∪
  finite.to_finset (finite_of_finite_inter (range (d * p^x))
    ({x | x.coprime d} ∩ {x | x.coprime p}))
```
Each of these is made into a finset, since finset.sum requires the sum to be over a finset. We use this lemma to break our sum over finset.range (d * p^n) into units and non-units. The condition that \(d\) divides the conductor is then used to show that the associated Dirichlet character is 0 everywhere on the non-units. These calculations can be found in lim_even_character_of_units.lean. Evaluating the middle sum (4) is the most tedious. It is first broken into two sums, so that the previous result can be used. Then, a change of variable from \(a\) to \(c^{-1}a\) is applied. The variable \(c\) is coerced to \(\mathbb{Z}/dp^{2k}\mathbb{Z}\), increasing the number of coercions significantly, thus lengthening the calculations. This can be found in second_sum.lean. Finally, the last sum (5) is 0. This is where one uses that \(\chi\) is even. This follows from Theorem 2. On paper, it is a one-line proof, done by substituting \(a\) in the summand with \(-a\) and doing calculations mod \(p^{n}\). However, since we work in a more general setting, we must go through lengthy roundabout ways instead. Putting these sums together concludes the proof.

## 7 Conclusion
### Analysis
We list some of the observations that arose while working on this paper. The tactic rw does not always work inside sums. As a result, one must use the conv tactic to get to the expression inside the sum.
While using the conv tactic, one is said to be working in conv mode. Using the conv tactic not only lengthens the proof, but also limits the tactics one can use. Another way around sums is to use simp_rw; however, this increases the compilation time of the proof. Moreover, simp_rw rewrites the lemma as many times as applicable, and is an unsuitable choice if one wants to apply the lemma just once. Another recurring problem was the ratio of implicit to explicit variables. The \(p\)-adic \(L\)-function, for example, has 19 arguments, of which 7 are explicit, and \(p\), \(d\) and \(R\) are implicit. Leaving \(R\) implicit often means that either Lean guesses or abstracts the correct term, or it asks for it explicitly. In the latter case, one also gets as additional goals all the hypotheses that are dependent on \(R\) and implicit, such as normed_comm_ring R. The other alternative is to explicitly provide terms using @, however this leads to very large expressions. We also ran into some instance errors. For example, since char_zero is a class, we would like to give the lemma char_zero R an instance structure. However, the proof is dependent on R having the [algebra ℚ_[p] R] structure. Lean would then claim that this is a dangerous instance (for \(p\) being an explicit variable) and that \(p\) is a metavariable (for \(p\) being an implicit variable). Thus, we made it a lemma instead, and had to explicitly feed it into implicit arguments. While most properties regarding Bernoulli numbers and polynomials and locally constant functions have been put into mathlib, the rest of the work is in a private repository. The author hopes to push the work directly to Lean 4, once the required port is complete.

### Statistics
Given the decentralized nature of mathlib, it is quite difficult to calculate the number of lines of code already existing in mathlib which were used in this project. When initially completed, this project had about 15000 lines of code.
A major refactor was then conducted, in an effort to reduce the length of individual proofs. We tried to uphold the spirit of mathlib, constructing lemmas in as much generality as possible. The code currently consists of 28 files and about 7500 lines, grouped into appropriate categories where possible, according to the sections of this paper.

### Related work
There are several projects that require Dirichlet characters and properties of the \(p\)-adic integers. These include the project on the formalization of Fermat's last theorem for regular primes2. There is also an effort by Prof David Loeffler on the formalization of the classical Dirichlet \(L\)-function, which is somewhat dependent on this work. Our work on Bernoulli numbers has been used to give a formal proof of Faulhaber's theorem. Footnote 2: [https://github.com/leanprover-community/fft-regular](https://github.com/leanprover-community/fft-regular) In the future, the author hopes to be able to work on Iwasawa theory, for which the \(p\)-adic \(L\)-function is a key ingredient. She also hopes to formalize more properties of Bernoulli numbers, which are a fundamental component of number theory.
2301.03456
UB3: Best Beam Identification in Millimeter Wave Systems via Pure Exploration Unimodal Bandits
Millimeter wave (mmWave) communications have a broad spectrum and can support data rates in the order of gigabits per second, as envisioned in 5G systems. However, they cannot be used for long distances due to their sensitivity to attenuation loss. To enable their use in the 5G network, it requires that the transmission energy be focused in sharp pencil beams. As any misalignment between the transmitter and receiver beam pair can reduce the data rate significantly, it is important that they are aligned as much as possible. To find the best transmit-receive beam pair, recent beam alignment (BA) techniques examine the entire beam space, which might result in a large amount of BA latency. Recent works propose to adaptively select the beams such that the cumulative reward measured in terms of received signal strength or throughput is maximized. In this paper, we develop an algorithm that exploits the unimodal structure of the received signal strengths of the beams to identify the best beam in a finite time using pure exploration strategies. Strategies that identify the best beam in a fixed time slot are more suitable for wireless network protocol design than cumulative reward maximization strategies that continuously perform exploration and exploitation. Our algorithm is named Unimodal Bandit for Best Beam (UB3) and identifies the best beam with a high probability in a few rounds. We prove that the error exponent in the probability does not depend on the number of beams and show that this is indeed the case by establishing a lower bound for the unimodal bandits. We demonstrate that UB3 outperforms the state-of-the-art algorithms through extensive simulations. Moreover, our algorithm is simple to implement and has lower computational complexity.
Debamita Ghosh, Haseen Rahman, Manjesh K. Hanawal, Nikola Zlatanov
2022-12-26T09:24:22Z
http://arxiv.org/abs/2301.03456v1
# UB3: Best Beam Identification in Millimeter Wave Systems via Pure Exploration Unimodal Bandits ###### Abstract Millimeter wave (mmWave) communications have a broad spectrum and can support data rates in the order of gigabits per second, as envisioned in 5G systems. However, they cannot be used for long distances due to their sensitivity to attenuation loss. To enable their use in the 5G network, it requires that the transmission energy be focused in sharp pencil beams. As any misalignment between the transmitter and receiver beam pair can reduce the data rate significantly, it is important that they are aligned as much as possible. To find the best transmit-receive beam pair, recent beam alignment (BA) techniques examine the entire beam space, which might result in a large amount of BA latency. Recent works propose to adaptively select the beams such that the cumulative reward measured in terms of received signal strength or throughput is maximized. In this paper, we develop an algorithm that exploits the unimodal structure of the received signal strengths of the beams to identify the best beam in a finite time using pure exploration strategies. Strategies that identify the best beam in a fixed time slot are more suitable for wireless network protocol design than cumulative reward maximization strategies that continuously perform exploration and exploitation. Our algorithm is named Unimodal Bandit for Best Beam (UB3) and identifies the best beam with a high probability in a few rounds. We prove that the error exponent in the probability does not depend on the number of beams and show that this is indeed the case by establishing a lower bound for the unimodal bandits. We demonstrate that UB3 outperforms the state-of-the-art algorithms through extensive simulations. Moreover, our algorithm is simple to implement and has lower computational complexity. 
mmWave, Bandit learning, pure exploration ## I Introduction There is a growing demand for higher data rates with the advent of emerging data-intensive applications like virtual reality, mobile gaming, and HD quality video streaming. The wireless networks have improved in terms of data rates but are still constrained by the available bandwidth in the sub-6 GHz spectrum to meet the data rates required for emerging applications. The millimeter wave (mmWave) band, with a spectrum ranging from 30 GHz to 300 GHz, offers an abundant spectrum and can support data rates of gigabits per second that are envisioned in 5G networks. Significant efforts in the standardisation of mmWave systems, such as IEEE 802.11ad [1, 2] and ongoing IEEE 802.11ay [3], are underway, and the 5G networks with mmWave systems are on the path of commercialisation through extensive field trials. Though mmWave systems offer higher data rates, they come with a set of challenges--the small wavelengths of mmWaves make them suffer significant attenuation, resulting in a rapid deterioration of signal strength. Thus, unlike traditional terrestrial communication systems, mmWave communication requires highly directional communication with energy focused on narrow beams to achieve the required signal strengths at the receiver. Some challenges in using mmWave systems in mobile communications are highlighted in standards [1, 4]. However, on the brighter side, small wavelengths allow transmission antennas with small form factors to be packed closely and focus signal energy in specific directions, forming sharp beams. The other challenge in mmWave communication is that transmitter and receiver beams need to be aligned before data transfer; otherwise, any advantage of high data rates is not realised. 
A few degrees of misalignment in the beam directions between transmitter and receiver can reduce the data rates from gigabits per second to a few megabits per second, jeopardising the gain from the high spectrum of mmWave systems [5, 6, 7]. This gives rise to the problem of beam alignment (BA), where one needs to find the best transmitter and receiver beam pair that provides the best rates. The BA problem is critical to building better 5G communication networks with mmWave systems. Our goal in this paper is to learn the best beam pair in a given number of slots with a high probability. One naive approach to performing BA is an exhaustive search of all available beams at the base station (BS) and user equipment (UE). This strategy does not scale well because it has the complexity of order \(\mathcal{O}(K^{2})\), where \(K\) is the number of beams at BS and UE. The IEEE standard 802.11ad [1] decouples the BA by performing it in two stages. In the first stage, the BS uses a quasi-omnidirectional beam while the UE scans through its beams to identify the best one. In the next stage, the UE uses a quasi-omnidirectional beam while the BS scans through its beam to identify the best one. This search strategy has a complexity that is linear in \(K\). However, as discussed in [8, 9], this method can still take time in seconds. The initial access (IA) phase in 5G mmWave provides a mechanism to identify beam directions between a BS and UE [10, 11, 12, 13]. BSs periodically use the IA phase to discover new UEs and check if the best beam for already existing UEs has changed [14]. During the IA, BS transmits synchronization signals that UE can measure and report back the received signal strengths (RSS). IA can be used to explore and identify the best pair, as no data is transferred in this phase. Further, BS can adapt to the non-stationary environment by periodically rerunning the IA phase, where the periodicity can depend on the mobility rate and the atmospheric conditions. 
We address the BA and user tracking issues in mmWave using the fixed-budget pure exploration Multi-Armed Bandit (MAB) framework, where pure exploration is performed in the IA phase. The BA problem has already been addressed using the MAB framework, which uses cumulative regret minimization algorithms that balance exploration and exploitation to find the best beam pair [7, 15, 16]. However, the stopping time in these algorithms is random, leading to the following difficulties: 1) The learning phase may extend beyond the IA phase and reduce the number of time slots available for data transfer. 2) Due to continuous exploration, sub-optimal beams can be used for data transfer, resulting in outages. To overcome these issues, we complete the learning phase within a fixed number of time slots (budget) of the IA phase using adaptive exploration. Moreover, we exploit the structural properties of the RSS across the beams to accelerate the learning process. Several studies validate that the RSS of the beams in mmWave systems follows a multi-modal structure, with one peak corresponding to the line-of-sight path and others corresponding to the non-line-of-sight paths [7, 15]. Often, there is one dominant peak, and the multi-modal functions can be treated as unimodal. Bandits with a unimodal structure are well studied in the literature in the cumulative regret setting with optimal algorithms [17, 18, 19]. However, the fixed-budget pure exploration bandit with unimodal structure is not well studied, and optimal algorithms are not known. In this work, we develop a new fixed-budget pure exploration algorithm that exploits the unimodal structure. The new algorithm is named _Unimodal Bandit for Best Beam (UB3)_ and is based on the idea of sequential elimination of sub-optimal beams. 
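The sequential-elimination idea can be sketched concretely : probe arms inside an active interval are sampled, and unimodality is used to discard a third of the interval per phase (ternary-search style). The Python sketch below is an illustration only; the function names `ub3_style_elimination` and `pull` are ours, and the precise UB3 sampling and elimination rule is the one specified in Sec. III.

```python
import math
import random

def ub3_style_elimination(pull, K, budget):
    """Fixed-budget best-arm search over arms 0..K-1 whose mean rewards are
    unimodal. Each phase samples two interior probe arms of the active
    interval and, by unimodality, discards the third of the interval that
    cannot contain the peak. Illustrative sketch, not the exact UB3 rule."""
    lo, hi = 0, K - 1
    phases = max(1, math.ceil(math.log(K, 1.5)))   # rough phase count
    per_phase = max(1, budget // phases)
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        n = max(1, per_phase // 2)
        mean1 = sum(pull(m1) for _ in range(n)) / n
        mean2 = sum(pull(m2) for _ in range(n)) / n
        if mean1 < mean2:
            lo = m1 + 1      # by unimodality the peak lies right of m1
        else:
            hi = m2 - 1      # by unimodality the peak lies between lo and m2
    # playoff among the (at most 3) surviving arms
    best, best_mean = lo, float("-inf")
    for k in range(lo, hi + 1):
        n = max(1, per_phase // (hi - lo + 1))
        mean_k = sum(pull(k) for _ in range(n)) / n
        if mean_k > best_mean:
            best, best_mean = k, mean_k
    return best
```

With noiseless unimodal means the search homes in on the peak exactly; with noisy pulls each discard step fails with probability decaying exponentially in the per-phase sample size, which is the mechanism behind fixed-budget guarantees of this kind.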
_UB3_ achieves an error probability of the order of \(\mathcal{O}(\log K\exp(-T_{1}D_{L}^{2}))\) after \(T_{1}\) time slots of the IA phase, where \(D_{L}\) is the minimum gap between two successive means of the arms. When no unimodal structure is assumed (unstructured), the best known achievable error probability is \(\mathcal{O}(\log K\exp(-T_{1}/H\log K))\)[20], where \(K\) is the number of beams and \(H\) is a problem-dependent constant. Thus, by exploiting the unimodal structure, we achieve a better error probability where the error exponent does not depend on the number of beams, and demonstrate that this is anticipated by establishing a lower bound for unimodal bandits. Extensive simulations on realistic wireless networks demonstrate that _UB3_ identifies the best arm with a probability of more than 95% within \(100\) time slots for \(16\) beams, while the other state-of-the-art algorithms need \(500\) time slots. This translates into throughput gains of more than 15% compared to other algorithms. Moreover, _UB3_ does not require any prior knowledge of channel fluctuations. In summary, our contributions are as follows:

* We set up the problem of beam alignment in mmWave systems as a fixed-budget pure exploration multi-armed bandit problem in Sec. II.
* We exploit the unimodal structure of the RSS of the beams and develop an algorithm named _Unimodal Bandit for Best Beam (UB3)_ to identify the best beam with a high probability in fixed time slots, i.e., within the IA phase of the BA, in Sec. III.
* We provide an upper bound on the error probability of _UB3_ in identifying the best beam and show that the error exponent does not depend on the number of beams. We demonstrate that this is anticipated by establishing a lower bound for unimodal bandits in Sec. IV.
* We perform extensive simulations to validate the superior performance of _UB3_ compared to other state-of-the-art algorithms like _HOSUB_, _HBA_, _LSE_ and _Sequential Halving_ in Sec. V.
In agreement with the theoretical bounds, _UB3_ is not affected by the number of beams.

### _Related Work_
As there is growing interest in mmWave systems in academia and industry, various aspects of mmWave are being studied. For the recent advances in mmWave systems, we refer to the surveys [21, 22, 23]. Several approaches have been proposed to solve the BA problem. The compressive sensing-based signal processing methods [24, 25, 26] utilize the sparse characterization of mmWave to learn the best beam. They work better when accurate channel state information is available. [27] proposes an Optimized Two-Stage Search (OTSS) where a suitable candidate set of beams is identified in the first stage based on the received signal profile, and in the second stage, the best beam from the surviving set is selected with additional measurements. Codebook-based hierarchical methods are proposed in [28, 29, 30], which also require channel state information for BA. [31, 32] utilize the location information to perform fast BA, which is feasible only when the location information of the UE is available at the BS. [33, 34] use Kalman filters to detect the angles of arrival and departure to track the UE. Recently, machine learning [35] and deep learning [36, 37] methods have been used for BA, which require offline training of the models. Our work is closer to [7, 15, 16, 38], which use an online learning approach to optimize the BA problem. The authors in [7] develop an algorithm named _Unimodal Beam Alignment (UBA)_ that exploits the unimodal structure of the received power. The algorithm is built on the OSUB algorithm [17] by adding stopping criteria. The algorithm assumes that the mean powers are known, and the stopping criterion is based on these mean powers. The _Hierarchical Beam Alignment (HBA)_[15] algorithm also exploits the unimodal/multimodal structure of beam signal strengths to narrow down on the optimal beam. _HBA_ has shown better performance for beam identification than _UBA_.
However, for \(T\) time slots, the computational complexity of _HBA_ is \(O(T^{2})\), as in each time slot, the algorithm restarts the search, making the running time linear in each time slot. Moreover, _HBA_ requires knowledge of channel fluctuations, which is not practical. The _Hierarchical Optimal Sampling of Unimodal Bandits (HOSUB)_ [38] algorithm exploits the benefits of hierarchical codebooks and the unimodality structure of the beam signal strengths to achieve fast beam steering. Simulations show better performance of _HOSUB_ compared to _HBA_, as well as a large reduction in computational complexity. However, the authors in [38] did not provide any theoretical guarantees on their proposed algorithm. The authors in [16] develop an algorithm named _MAMBA_ that aims to maximize the cumulative rate obtained over a period using a Thompson sampling algorithm named _Adaptive Thompson Sampling (ATS)_. Unlike _UBA_ and _HBA_, the exploration never ends in _ATS_, which may keep selecting sub-optimal beams. Our work develops an online learning algorithm for BA using a fixed-budget pure exploration setup [20, 39]. We exploit the unimodal structure of beam RSS to eliminate the sub-optimal beams and narrow the beam search space quickly. Fixed-budget pure exploration strategies are more suitable for the BA problem, as the exploration can be completed in the IA phase, and no exploration is required during the data transfer, simplifying the design of protocols. To our knowledge, this has not been studied in 5G networks with mmWave systems.

## II Problem Setup
In this section, we discuss the system and channel model used in the mmWave system. We follow a setup and notation similar to [15].

### _System Model_
We consider a point-to-point mmWave wireless system between a transmitter, referred to as the mmWave BS, and a receiver, referred to as the mmWave UE, in a static environment as shown in Fig. 1.
We focus on analog beamforming and consider one ADC with an RF chain that focuses on one direction at a time. The transmitter has \(K\) phased-array antennas, where each antenna has a phase shift to form a narrow directional beam. The antennas are evenly spaced by a distance \(D\approx\lambda/2\), forming a uniform linear array, where \(\lambda\) is the carrier wavelength. As IEEE 802.11ad can decouple the BA, we consider that the receiver keeps a quasi-omnidirectional beam and the transmitter scans over the beam space to identify the best beam. This is a reasonable assumption due to the small form factor and fewer antennas on UEs. In the following, we focus on beam alignment (BA) at the BS side; the extension that involves both BS and UE is straightforward.

### _Channel Model_
Due to the sparse characteristics of the mmWave channel, we consider the Saleh-Valenzuela channel model [40]. Suppose there are \(L\) paths, where one is the dominant line-of-sight (LOS) path and the other \(L-1\) are non-line-of-sight (NLOS) paths. Let \(\mathbb{C}\) denote the set of complex numbers. The channel vector between the transmitter and the receiver is given by \[\mathbf{h}=g_{0}\mathbf{a}(v_{0})+\sum_{l=1}^{L-1}g_{l}\mathbf{a}(v_{l})\in \mathbb{C}^{K\times 1} \tag{1}\] where \(\mathbf{a}(v)=\left\{e^{j\frac{2\pi D}{\lambda}kv}:0\leq k\leq K-1\right\}\in\mathbb{C}^{K\times 1}\) denotes the vector of sinusoids at spatial angle \(v\), \(g_{0}\) is the channel gain of the LOS path, and \(g_{l}\), \(1\leq l\leq L-1\), is the channel gain of the \(l^{th}\) NLOS path. \(v=\cos\theta\) denotes the spatial angle of the channel associated with physical angle \(\theta\). We assume that the channel remains static for a duration of \(T\) time slots, so that the channel vector remains invariant during the BA process.
Let \(\mathbf{B}\in\mathbb{C}^{K\times K}\) denote the unitary discrete Fourier transform (DFT) codebook of the transmit beam space, where the \(k^{th}\) column corresponds to the \(k^{th}\) beam, i.e., \[\mathbf{B}=[\mathbf{b_{1}},\mathbf{b_{2}},\ldots,\mathbf{b_{K}}]=\frac{1}{ \sqrt{K}}[\mathbf{a}(w_{1}),\mathbf{a}(w_{2}),\ldots, \mathbf{a}(w_{K})] \tag{2}\] where \(w_{k}=\frac{2k-K}{K}\) denotes the spatial angle of the \(k^{th}\) beam. Then, the received signal of the \(k^{th}\) beam (with an omnidirectional receiver) is given by \[y_{k}=\sqrt{P}\mathbf{h}^{H}\mathbf{b_{k}}+n, \tag{3}\] where \(n\) denotes the additive white Gaussian noise with mean noise power \(N_{0}W\), with noise power density \(N_{0}\) and channel bandwidth \(W\). We denote the received signal strength (RSS) of the \(k^{th}\) beam as \(r_{k}=|y_{k}|^{2}\) and its mean value as \(\mu_{k}=\mathbb{E}\left[r_{k}\right]\). Let \(k^{*}=\operatorname*{arg\,max}_{k}\mu_{k}\). Then the beam \(b_{k^{*}}\) is the optimal beam with the best mean RSS. **Definition 1**.: _(Unimodality): The unimodality structure indicates that there exists a \(k^{*}\) such that \(\mu_{1}<\mu_{2}<\cdots<\mu_{k^{*}}\) and \(\mu_{k^{*}}>\mu_{k^{*}+1}>\cdots>\mu_{K}\)._ For the case when only the LOS path is present with channel gain \(g\) and spatial angle \(v\), the mean RSS is given as \(\mu_{k}=\frac{Pg^{2}}{K^{2}}\delta(w_{k}-v)+N_{0}W\) for all \(b_{k}\in\mathbf{B}\), where \(\delta(x)=\frac{\sin^{2}(K\pi Dx/\lambda)}{\sin^{2}(\pi Dx/\lambda)}\) denotes the antenna directivity function for angular misalignment \(x\). For \(b_{k}\in\mathbf{B}\), \(\mu_{k}\) is a function of the angular misalignment \(w_{k}-v\) and has the unimodal property [15, Thm. 1]. In the following, we only consider beams with the unimodal property as in [7] and later discuss how to extend our method to the multimodal case involving multiple NLOS paths.
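As a sanity check on the unimodality property, the mean RSS profile for a LOS-only channel can be computed directly from the directivity function. The following Python sketch is our own illustration (function names and default parameters are not from the paper); it assumes \(D/\lambda=1/2\), unit power and gain, and a small constant noise floor, then evaluates \(\mu_k\) over the DFT beam angles \(w_k=(2k-K)/K\) and checks Definition 1:

```python
import math

def directivity(x, K, D_over_lam=0.5):
    # Fejer-kernel antenna directivity delta(x) for angular misalignment x;
    # the x -> 0 limit gives delta(0) = K^2.
    s = math.sin(math.pi * D_over_lam * x)
    if abs(s) < 1e-12:
        return float(K * K)
    return math.sin(K * math.pi * D_over_lam * x) ** 2 / s ** 2

def mean_rss_profile(K, v, P=1.0, g=1.0, noise_floor=1e-6):
    # mu_k = (P g^2 / K^2) * delta(w_k - v) + noise floor, for k = 1, ..., K
    return [P * g * g / K ** 2 * directivity((2 * k - K) / K - v, K) + noise_floor
            for k in range(1, K + 1)]

def is_unimodal(mu):
    # strict rise up to the peak, strict fall after it (Definition 1)
    k = max(range(len(mu)), key=mu.__getitem__)
    return (all(mu[i] < mu[i + 1] for i in range(k)) and
            all(mu[i] > mu[i + 1] for i in range(k, len(mu) - 1)))
```

For example, with \(K=16\) and spatial angle \(v=0.13\), the profile is strictly unimodal and peaks at the beam whose angle \(w_k=0.125\) is closest to \(v\).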
**Remark 1**.: _We assumed that the Modulation and Coding Scheme (MCS) on each beam is fixed. However, we can easily accommodate a different MCS on each beam by treating the RSS as a vector indexed by MCS and considering the mean rate as done in [16]._

Fig. 1: A point-to-point mmWave communication system.

### _Problem Formulation_

We assume a slotted system where the length of the IA phase is \(T_{1}\) time slots. As specified in the 5G standards, we let the BS rerun the IA phase periodically every \(T\) time slots, where \(T>T_{1}\). We assume that during the period \(T\), the environment is stationary such that the best beam remains the same. We focus on one period between two successive reruns of the IA phase and index the time slots in that period from \(t=1\) to \(t=T\); see Fig. 2. In each time slot \(t\), the BS can select one of the beams from the set \(\mathbf{B}\) and obtain as feedback the RSS at the receiver. The feedback is obtained through the ACK/NACK sent back by the receiver: the BS measures the signal strength of the received ACK/NACK, which gives the RSS at the receiver [16]. During the IA duration of \(T_{1}\) time slots, no information signals are transmitted, and throughput or error probability is not a concern. However, during the remaining \(T-T_{1}\) time slots, data is transmitted, and it is desirable to obtain high throughput in this period using the best possible beam. We are thus interested in algorithms that output an optimal beam at the end of \(T_{1}\) time slots (IA phase). We model the problem as a fixed-budget pure exploration multi-armed bandit [41, 20], where the goal is to identify the optimal arm within a fixed budget with high confidence. Following the terminology of multi-armed bandits, we refer to beams as arms. A policy is any strategy that selects an arm in each time slot given the past observations. Let \(k_{t}\) denote the index of the arm selected by a policy at time \(t\).
By playing an arm \(k_{t}\), the policy observes the feedback \(r_{k_{t}}\), which is a noisy RSS. The choice of \(k_{t}\) can depend on the beams selected in the past and their associated RSS values. We assume that the RSS values observed in each time slot are independently and identically distributed across arms and time. The distribution of RSS is governed by channel fluctuations, such as shadow fading and disturbance effects, and follows unknown fixed distributions within a time period \(T\). Without loss of generality, we assume that the RSS values are bounded in some interval. For a given policy \(\pi\), let \(\hat{k}^{\pi}_{T_{1}}\) denote the index of the arm output by \(\pi\) at the end of \(T_{1}\) time slots. Let \(\Pi\) denote the set of all pure-exploration policies that output an arm within a fixed budget of \(T_{1}\) time slots. Our goal is then to find a policy in \(\Pi\) that minimizes the probability that the arm output at the end of \(T_{1}\) is not an optimal arm, i.e., \[\min_{\pi\in\Pi}\Pr\left(b_{\hat{k}^{\pi}_{T_{1}}}\neq b_{k^{*}}\right),\] where, for each policy, \(\Pr(\cdot)\) is calculated with respect to the samples induced by the policy. We note that our criterion is different from those in _UBA_[7] and _HBA_[15], which aim to minimize cumulative regret. Though both _UBA_ and _HBA_ have stopping criteria beyond which they play a fixed arm, the stopping time can be random, making their practical implementation challenging. In contrast, our policy completes the exploration phase deterministically after time \(T_{1}\), which makes its implementation easier in a wireless setup. Fig. 2 depicts the structure of our policy.

## III Algorithms

In this section, we propose an algorithm named _Unimodal Bandit for Best Beam (UB3)_ that finds the optimal beam by exploiting the unimodal structure of the mean RSS within \(T_{1}\) time slots.
The algorithm is based on the _Line Search Elimination (LSE)_ algorithm developed in [18], where arms are sampled and eliminated in multiple phases until one arm survives after \(L+1\) phases. The pseudo-code of _UB3_ is given in the listing _Unimodal Bandit for Best Beam (UB3)_. It is parameter-free and only takes \(K\) and \(T_{1}\) as inputs. _UB3_ runs in \(L+1\) phases. Arms are sampled and eliminated in each phase such that only one arm survives after the \(L+1\) phases. We first explain the number of rounds in each phase. Let \(N_{l}\), for \(l=1,2,\ldots,L+1\), denote the number of samples in phase \(l\). Then, \[N_{l}=\left\{\begin{array}{ll}\frac{2^{L-2}}{3^{L-1}}T_{1}&\quad\text{for $l=1,2$}\\ \frac{2^{L-(l-1)}}{3^{L-(l-2)}}T_{1}&\quad\text{for $l=3,4,\ldots,L+1$}\end{array}\right. \tag{4}\] which satisfies \[2\times\frac{2^{L-2}T_{1}}{3^{L-1}}+\sum_{l=3}^{L+1}\frac{2^{L-(l-1)}T_{1}}{3^ {L-(l-2)}}=T_{1}. \tag{5}\] After the first two phases, the number of samples increases by a factor of \(3/2\) in each subsequent phase. This increase in the number of samples helps to distinguish between the empirical means of the remaining arms, which are likely to be closer. _UB3_ works as follows. Let \(\mathbf{B_{l}}=\{b_{1},b_{2},\ldots,b_{j_{l}}\}\) denote the set of arms available in phase \(l\), where \(j_{l}:=|\mathbf{B_{l}}|\) is the number of arms in the set \(\mathbf{B_{l}}\). In phase \(l=1,2,\ldots,L\), the algorithm selects four arms \(\{k^{M},k^{A},k^{B},k^{N}\}\subset\mathbf{B_{l}}\), which include the two extremes and two middle arms uniformly spaced from them (lines 4-7). Each of these arms is sampled \(\frac{N_{l}}{4}\) times (line 8).

Fig. 2: Beam exploration (IA) phase followed by data transfer phase.

At the end of the phase, their empirical means
denoted \(\hat{\mu}_{i}^{l}\) (line 9), are obtained as follows: \[\hat{\mu}_{i}^{l}=\frac{1}{N_{l}/4}\sum_{s=1}^{N_{l}/4}r_{i_{s}}^{l},\quad\forall i \in\{k^{M},k^{A},k^{B},k^{N}\} \tag{6}\] where \(r_{i_{s}}^{l}\) denotes the \(s^{th}\) noisy RSS sample from the \(i^{th}\) arm in phase \(l\). Based on these empirical means, we eliminate at most \(1/3^{rd}\) of the arms from the remaining set1. More specifically, if arm \(k^{M}\) or \(k^{A}\) has the highest empirical mean, then we eliminate all the arms succeeding \(k^{B}\) in the set \(\mathbf{B_{l}}\) (lines 11 and 12). Similarly, if arm \(k^{B}\) or \(k^{N}\) has the highest empirical mean, then we eliminate all the arms preceding \(k^{A}\) in the set \(\mathbf{B_{l}}\) (lines 13 and 14). Fig. 3 gives a pictorial representation of the elimination of arms in two possible cases. The remaining set of arms is then transferred to the next phase. In phase \(L+1\), we are left with three arms. Each of them is sampled \(\frac{N_{L+1}}{3}\) times, and the one with the highest empirical mean is output by the algorithm as the optimal arm (lines 18-23). Footnote 1: If the number of arms in a phase is not a multiple of \(4\), then less than \(1/3^{rd}\) will be eliminated in that phase. **Remark 2**.: _Arms between \(k^{M}\ \&\ k^{A}\) or \(k^{B}\ \&\ k^{N}\) are eliminated in a phase, and the arms between \(k^{A}\ \&\ k^{B}\) always survive._ After phase \(l=1,2,\ldots,L\), \(\lfloor\frac{2}{3}j_{l}\rfloor\) of the arms survive. For ease of exposition, we will drop the floor function, since this will influence only a few constants in the analysis. Thus, after the end of \(L\) phases there will be three arms, as \[\left(2/3\right)^{L}K=3\implies L=\frac{\log_{2}K/3}{\log_{2}3/2}. \tag{7}\] Therefore, _UB3_ outputs the best beam \(\hat{k}_{L+1}\), which is equivalent to \(b_{\hat{k}_{T_{1}}}\), after exploring for \(T_{1}\) time slots.
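The phase structure described above can be summarized in a short sketch. The code below is our simplified illustration, not the paper's exact pseudo-code: it keeps a contiguous candidate interval of arm indices, probes the four arms \(k^M,k^A,k^B,k^N\) in each phase, applies the elimination rule of lines 11-14, and for simplicity splits the budget \(T_1\) equally across phases instead of using the exact schedule (4):

```python
import math

def ub3(pull, K, T1):
    # Simplified UB3 sketch (our own illustration): keep a contiguous candidate
    # interval [lo, hi] of 1-indexed arms, probe four arms per phase, and drop
    # roughly a third of the interval each phase. pull(k) returns one noisy
    # RSS sample of arm k.
    L = max(1, math.ceil(math.log(K / 3) / math.log(1.5)))
    per_phase = T1 // (L + 1)  # equal split; schedule (4) weights later phases more
    lo, hi = 1, K
    for _ in range(L):
        if hi - lo + 1 <= 3:
            break
        span = hi - lo
        kM, kA, kB, kN = lo, lo + span // 3, hi - span // 3, hi
        probes = sorted({kM, kA, kB, kN})
        n = max(1, per_phase // len(probes))
        emp = {k: sum(pull(k) for _ in range(n)) / n for k in probes}
        best = max(probes, key=emp.get)
        if best in (kM, kA):
            hi = kB  # eliminate all arms succeeding k^B (lines 11-12)
        else:
            lo = kA  # eliminate all arms preceding k^A (lines 13-14)
    # final phase: sample the survivors equally, output the empirical best
    survivors = list(range(lo, hi + 1))
    n = max(1, per_phase // len(survivors))
    emp = {k: sum(pull(k) for _ in range(n)) / n for k in survivors}
    return max(survivors, key=emp.get)
```

With a noiseless unimodal reward such as \(-(k-11)^2\) for \(K=16\), the sketch recovers the peak arm; with noisy rewards it succeeds with high probability once \(T_1\) is large enough.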
In the next section, we upper bound the error probability of _UB3_.

## IV Analysis

In this section, we derive upper and lower bounds on the probability of eliminating the best arm for fixed-budget pure-exploration bandits with a unimodal structure. We first upper bound the error probability of _Unimodal Bandit for Best Beam (UB3)_. For the analysis, we use the following assumption: **Assumption 1**.: _There exists a constant \(D_{L}>0\) such that \(|\mu_{k}-\mu_{k-1}|\geq D_{L}\) for \(2\leq k\leq K\)._ We note that this assumption is the same as that used in [18, Assumption 3.4] to analyze unimodal bandits in the regret minimization setting.

### _Upper Bound for Algorithm UB3_

**Theorem 1**.: _Let UB3 be run for \(T_{1}\) time slots in \(L+1\) phases, where \(L=\frac{\log_{2}K/3}{\log_{2}3/2}\), with output \(\hat{k}_{L+1}\). Then, the probability that \(\hat{k}_{L+1}\) output by UB3 is not the best arm after \(L+1\) phases is bounded as_ \[P(\hat{k}_{L+1}\neq k^{*})\leq 2\exp\left\{-\frac{T_{1}}{18}D_{L}^{2} \right\}+2\exp\left\{-\frac{T_{1}K}{32}D_{L}^{2}\right\}\] \[+2\exp\left\{-\frac{T_{1}K}{72}D_{L}^{2}\right\}+2(L-2)\exp\left\{ -\frac{T_{1}}{16}D_{L}^{2}\right\}. \tag{8}\] Proof.: The proof is given in Appendix A. Observe that the dominant first and last terms in the upper bound do not depend on \(K\). The error probability is thus of order \(\mathcal{O}\left(\log_{2}K\exp\left\{-\frac{T_{1}D_{L}^{2}}{16}\right\}\right)\), where the error exponent term \(\left(\exp\left\{-\frac{T_{1}D_{L}^{2}}{16}\right\}\right)\) does not depend on \(K\). The _Sequential Halving (Seq. Halv.)_ algorithm was proposed in [42] for non-unimodal (unstructured) bandits, along with an upper bound on the probability of not choosing the optimal arm. That error probability is shown to be \(O\left(\log_{2}(K)\exp\left\{-\frac{T_{1}}{\log_{2}(K)H_{2}}\right\}\right)\).

Fig. 3: Different cases of elimination in phase \(l\).
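The bound (8) is easy to evaluate numerically. The helper below (our own naming, a direct transcription of the right-hand side of (8)) computes the bound for given \(T_1\), \(K\) and gap \(D_L\), illustrating that it decays exponentially in \(T_1\) with an exponent that does not involve \(K\):

```python
import math

def ub3_error_bound(T1, K, D_L):
    # Right-hand side of the Theorem 1 bound (8) on P(k_hat != k*),
    # with L = log2(K/3) / log2(3/2) as in (7).
    L = math.log2(K / 3) / math.log2(1.5)
    return (2 * math.exp(-T1 * D_L ** 2 / 18)
            + 2 * math.exp(-T1 * K * D_L ** 2 / 32)
            + 2 * math.exp(-T1 * K * D_L ** 2 / 72)
            + 2 * (L - 2) * math.exp(-T1 * D_L ** 2 / 16))
```

For small \(T_1\) the bound is vacuous (above 1), but it drops rapidly as \(T_1\) grows, e.g. well below \(10^{-3}\) for \(T_1=2\times 10^4\), \(K=64\), \(D_L=0.1\).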
This bound matches the lower bound derived in [43] for unstructured bandits up to a multiplicative factor of \(\log_{2}(K)\), where \(H_{2}=\max_{k\neq k^{*}}\frac{k}{\Delta_{k}^{2}}\) is a complexity parameter dependent on the sub-optimality gaps. Note that the error exponent in this bound for _Seq. Halv._ has a \(\log_{2}(K)\) factor. By exploiting the unimodal property, we shave off this factor. We next consider the lower bound for fixed-budget pure exploration with the unimodal structure, which confirms that the error exponent should indeed be independent of the number of beams for any optimal algorithm.

### _Lower bound for pure exploration unimodal bandit_

A lower bound on the probability of error for the fixed-budget setting without assuming any structure is established in [43]. We adapt the proof to include the unimodal structure and derive a lower bound. To this end, we first define a set of bandit instances as follows. Consider \(K\) arms that follow the unimodal structure. Let \(\{p_{k}\}_{1\leq k\leq K}\) be \(K\) real numbers in the interval \([1/4,1/2]\) with \(p_{k^{*}}=\frac{1}{2}\) and \(p_{1}\leq p_{2}\leq\cdots\leq p_{k^{*}-1}\leq p_{k^{*}}\geq p_{k^{*}+1}\geq \cdots\geq p_{K}\). For any \(1\leq k\leq K\), let \(\nu_{k}\) be the Bernoulli distribution with mean \(p_{k}\), i.e., \(\nu_{k}:=\text{Ber}(p_{k})\), and let \(\nu_{k}^{\prime}\) be the Bernoulli distribution with mean \(1-p_{k}\), i.e., \(\nu_{k}^{\prime}:=\text{Ber}(1-p_{k})\). We define \(K\) bandit problems as follows. For \(i\in\{1,\ldots,K\}\), define the product distributions \(\mathcal{G}^{i}:=\nu_{1}^{i}\otimes\nu_{2}^{i}\otimes\cdots\otimes\nu_{K}^{i}\) where \[\nu_{k}^{i}:=\begin{cases}&\nu_{k}1\{k\neq i\}+\nu_{k}^{\prime}1\{k=i\},\text { if }k\in\{k^{*}-1,\\ &k^{*},k^{*}+1\}\\ &\nu_{k},\text{ otherwise}\end{cases}\] where \(1\{A\}\) denotes the indicator function.
It is easy to note that only the bandit instances \(\mathcal{G}^{i}\) with \(i\in\{k^{*}-1,k^{*},k^{*}+1\}\) satisfy the unimodality structure, and not the others: flipping the reward distribution of any other arm results in a non-unimodal problem. Thus, unlike [43], we have 3 bandit problems in the neighbourhood of \(k^{*}\). We define \(d_{k}:=p_{k^{*}}-p_{k}=\frac{1}{2}-p_{k}\) for any \(1\leq k\leq K\). Set \(\Delta_{k}^{i}=d_{i}+d_{k}\) if \(k\neq i\) and \(\Delta_{i}^{i}=d_{i}\), for any \(i\in\{k^{*}-1,k^{*}+1\}\) and any \(k\in\{1,\ldots,K\}\). Note that \(\{\Delta_{k}^{i}\}_{k}\) denotes the arm gaps of bandit problem \(i\). We also define, for any \(i\in\{k^{*}-1,k^{*}+1\}\), the quantities \[\bar{H}(i):=\sum_{k\in\{i-1,i+1\}}\frac{1}{(\Delta_{k}^{i})^{2}}\text{ and }\bar{h}=\sum_{i\in\{k^{*}-1,k^{*}+1\}}\frac{1}{d_{i}^{2}\bar{H}(i)}.\] Theorem 2 from [43] can be rephrased in this setting as follows. **Theorem 2**.: _For any bandit strategy that returns the arm \(\hat{k}_{T_{1}}\) at time \(T_{1}\), it holds that_ \[\max_{i\in\{k^{*}-1,k^{*}+1\}} P_{i}(\hat{k}_{T_{1}}\neq i)\] \[\geq\frac{1}{6}\exp\left(-60\frac{T_{1}}{\bar{H}(k^{*})}-2\sqrt{T _{1}\log(18T_{1})}\right), \tag{9}\] _and also_ \[\max_{i\in\{k^{*}-1,k^{*}+1\}} P_{i}(\hat{k}_{T_{1}}\neq i)\] \[\geq\frac{1}{6}\exp\left(-60\frac{T_{1}}{\bar{h}\bar{H}(i)}-2 \sqrt{T_{1}\log(18T_{1})}\right). \tag{10}\] The proof of this theorem follows along lines similar to [43, Thm. 2] after applying the change-of-measure rule on the restricted set of arms. We skip the details.
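The restriction to the three neighbouring instances can be checked numerically: flipping the Bernoulli mean \(p_i\mapsto 1-p_i\) preserves unimodality only for \(i\in\{k^{*}-1,k^{*},k^{*}+1\}\). A small sketch (the example means below are our own, chosen to satisfy \(p_k\in[1/4,1/2]\) and \(p_{k^{*}}=1/2\)):

```python
def is_unimodal(mu):
    # strict increase up to the peak, strict decrease afterwards
    k = max(range(len(mu)), key=mu.__getitem__)
    return (all(mu[i] < mu[i + 1] for i in range(k)) and
            all(mu[i] > mu[i + 1] for i in range(k, len(mu) - 1)))

def flipped_instance(p, i):
    # instance in which arm i's Bernoulli mean p_i is replaced by 1 - p_i
    q = list(p)
    q[i] = 1.0 - q[i]
    return q

# example means with K = 7 and k* = 3 (0-indexed), p[k*] = 1/2
p = [0.26, 0.30, 0.40, 0.50, 0.42, 0.33, 0.27]
unimodal_flips = [i for i in range(len(p))
                  if is_unimodal(flipped_instance(p, i))]
# only the flips at k*-1, k*, k*+1 keep the mean profile unimodal
```

Flipping any arm farther from \(k^{*}\) creates a second peak above \(1/2\) and breaks unimodality, which is why the construction above restricts the flips to the neighbourhood of \(k^{*}\).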
**Corollary 1**.: _Assume that_ \[T_{1}\geq\max\left(\bar{H}(k^{*}),\bar{H}(i)\bar{h}\right)^{2}\frac{4\log(6T_ {1}K)}{(60)^{2}}.\] _For any bandit strategy that returns the arm \(\hat{k}_{T_{1}}\) at time \(T_{1}\), it holds that_ \[\max_{i\in\{k^{*}-1,k^{*}+1\}} P_{i}(\hat{k}_{T_{1}}\neq i)\geq\frac{1}{6}\exp\left(-120 \frac{T_{1}}{\bar{H}(k^{*})}\right),\] _and also_ \[\max_{i\in\{k^{*}-1,k^{*}+1\}} P_{i}(\hat{k}_{T_{1}}\neq i)\geq\frac{1}{6}\exp\left(-120 \frac{T_{1}}{\bar{h}\bar{H}(i)}\right).\] We can establish a lower bound using this corollary. **Theorem 3**.: _For any unimodal bandit strategy that returns arm \(\hat{k}_{T_{1}}\) at time \(T_{1}\),_ \[\max_{i\in\{k^{*}-1,k^{*}+1\}} P_{i}(\hat{k}_{T_{1}}\neq i)\geq\frac{1}{6}\exp\left(-75 \frac{T_{1}}{\bar{H}(i)}\right). \tag{11}\] Proof.: The proof is given in Appendix B. We see that, unlike the lower bound found in [43], this lower bound on the error probability does not depend on \(\log_{2}(K)\). In addition, the complexity factor depends only on the sub-optimality gaps between the optimal arm and its neighbours. This observation is similar to the lower bound on cumulative regret established in [17].

## V Numerical Simulations

In this section, we corroborate our theoretical results using simulations. We first describe the simulation setup and parameters, and then present the results in the following subsections.

### _Simulation Parameters_

We use the IEEE \(802.11\)ad system with parameters as described in [15] for the numerical simulations. The carrier frequency (\(f\)) is set at \(60\) GHz and the bandwidth at \(2.16\) GHz. The transmit power \(P=50\) dBm is shared among \(K\) antennas, where \(K\) varies from \(16\) to \(128\).
For the line-of-sight (LOS) path, we use the path loss model \[PL(dB)=-27.5+20\log_{10}(f)+10\alpha\log_{10}(d)+\chi, \tag{12}\] where \(d\) is the transmission distance, the path loss exponent \(\alpha\) is taken as \(1.74\), and \(\chi\) is the shadow fading component, which follows a Normal distribution with zero mean and \(2\) dB variance. In (12), \(f\) is in MHz and \(d\) is in meters. Depending upon the beam selected, the signal strength varies in \([-80,-20]\) dBm. The _HBA_ algorithm parameters are kept at \(\rho_{1}=3\), \(\gamma=0.5\) and \(\zeta=0.1\). The channel parameters for the simulations are given in Table I. Simulation results are averaged over \(1000\) iterations and confidence intervals are shown (when significant). We compare the performance of _Unimodal Bandit for Best Beam (UB3)_ with the following algorithms: * **Sequential Halving (Seq. Halv.) [42]:** This algorithm is used for pure exploration in non-unimodal bandits with a fixed budget of \(T_{1}\) time slots. The algorithm was proved to be optimal [43], and hence a comparison indicates how the additional information of unimodality improves performance. * **Line Search Elimination (LSE) [18]:** Although this algorithm was proposed for continuous-arm unimodal bandit problems, we consider it for a fixed budget of \(T_{1}\) time slots and discrete arms. A comparison of _UB3_ with _LSE_ is pertinent as it is a well-known algorithm for unimodal bandits. * **Hierarchical Beam Alignment (HBA) [15]:** This algorithm was shown to have good performance for regret minimization compared to existing algorithms, given prior knowledge of channel fluctuations. Our comparison with _HBA_ will be in terms of throughput for the period after the best beam has been identified.
* **Hierarchical Optimal Sampling of Unimodal Bandits (HOSUB) [38]:** This algorithm exploits the benefits of hierarchical codebooks and the unimodality of the RSS to find the best arm within the fixed \(T_{1}\) time slots. Our comparison with _HOSUB_ will be in terms of throughput for the period after the best beam has been identified. We did not include the _UBA_ algorithm [7] for comparison since _HBA_ was shown to have better performance for beam identification. Also, the _ATS_ algorithm is not compared against, as it is designed to minimize the cumulative regret and does not stop the exploration process. We note that the computational complexity of _HBA_ scales quadratically in \(T_{1}\), whereas it is \(O(T_{1})\) for _UB3_, where \(T_{1}\) is the duration of the IA phase. In addition, _HBA_ requires prior knowledge of channel fluctuations, i.e., the variance of the noise parameter, which is not required in _UB3_.

### _Comparison with other pure exploration algorithms_

We define the probability of error as the probability of not identifying the best beam after sampling for \(T_{1}\) time slots. We consider \(T_{1}\) as the exploration (IA) phase. We first compare the probability of error of the _UB3_ algorithm with _LSE_ and _Seq. Halv._, which are used for pure exploration. We compare the probability-of-error performance of the algorithms for arm-set sizes \(K\in\{16,64,128\}\). The probability of error against the exploration time slots (\(T_{1}\)) is shown in Fig. 4. _LSE_, which samples all arms equally in every phase, has worse probability-of-error performance than _Seq. Halv._. The solid lines in the figure are for \(K=16\), the dashed lines for \(K=64\), and the dot-dashed lines for \(K=128\). The comparisons are done for distances of \(d=20,60\) and \(80\) m. The probability of error increases as we increase the beam size. However, for a small number of arms, both _Seq.
Halv._ and _UB3_ have comparable performance, but as the number of beams increases, _UB3_ has a lower probability of error than _Seq. Halv._, as is evident from the cases of \(K=64\) and \(K=128\). Moreover, _UB3_ can identify the best beam with a probability of more than 95% within \(100\) time slots for \(16\) beams, while the other state-of-the-art algorithms need at least \(200\) time slots to complete their execution. Note that _Seq. Halv._ requires at least \(200\), \(400\), and \(900\) rounds for \(K=16\), \(64\), and \(128\), respectively, to complete its execution; hence its graphs start after that many slots. Moreover, as the distance increases, the probability of error of all the algorithms increases for each beam size. It is to be noted that even though the minimum time-slot requirement (as a function of \(K\)) for _LSE_ is much smaller than that of both _UB3_ and _Seq. Halv._ for its feasible execution, the number of samples it allocates to arms neighbouring \(k^{*}\) is much smaller, resulting in a higher probability of error. _Seq. Halv._ needs at least \(K\log_{2}(K)\) time slots to complete one phase and samples all arms in every phase. Hence, it has far fewer time slots remaining when the algorithm reaches the neighborhood of \(k^{*}\), as compared to _UB3_. Thus, the minimum time-slot requirement for _UB3_ as a function of \(K\) is much smaller than that of _Seq. Halv._, in addition to its better probability-of-error performance. This demonstrates the advantage of exploiting the unimodality of the reward function.

### _Comparison of throughput performance_

In this subsection, we compare the throughput performance of _UB3_ with the _HBA_ and _HOSUB_ algorithms.
We look at the mean cumulative throughput, defined as the mean RSS of the beam selected at the end of exploration, normalized by the mean power of the best beam, multiplied by the remaining available time slots, i.e., \[\text{Throughput}=\mu_{b_{L+1}}^{\text{norm}}\times(T-T_{1}),\] where \(T\) is the total number of available time slots and \(T_{1}\) is the number of time slots available for exploration of the best beam. Note that _HBA_ does not have a fixed \(T_{1}\), and hence we compute the throughput after the expected exploration time \(E(T_{1})\), obtained as the average over many runs. Thus the throughput is fixed for _HBA_ for a fixed \(T\), while it varies with \(T_{1}\) for _UB3_ and _HOSUB_. \begin{table} \begin{tabular}{c c} \hline \hline **Parameter** & **Value** \\ \hline Carrier frequency (\(f\)) & \(60\) GHz \\ Bandwidth (\(W\)) & \(2.16\) GHz \\ Noise spectrum density (\(N_{0}\)) & \(-174\) dBm/Hz \\ Shadow fading variance (log-normal \(\sigma\)) & 2 dB \\ Number of beams (\(K\)) & 16-128 \\ HBA parameters (\(\rho_{1},\gamma,\zeta\)) & \((3,0.5,0.1)\) \\ Distance considered (\(d\)) & \((20,40,60,80)\) m \\ Path loss exponent (\(\alpha\)) & \(1.74\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Parameters for simulation. Fig. 5 compares the cumulative throughput of _UB3_, _HBA_ and _HOSUB_ for different \(K\) and distances of \(d=20,60\) and \(80\) m for \(T=3000\) time slots. As the number of beams (arms) increases, the beams become narrower, and hence the reward for the best beam also increases. In that case, _HOSUB_ requires more exploration time slots to learn the optimal beam, thereby exploiting sub-optimal beams in the data transmission phase for the given \(T_{1}\) time slots. On the other hand, for _HBA_, since the average time required to find the best arm increases, the average throughput decreases as the number of arms increases. This increase and decrease of throughput for _HBA_ is seen in Fig. 5 and Fig. 6.
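The throughput metric defined above is straightforward to compute; a minimal sketch (function and argument names are ours):

```python
def cumulative_throughput(mu, k_hat, k_star, T, T1):
    # mean RSS of the selected beam, normalized by the best beam's mean RSS,
    # times the number of remaining (data transfer) time slots
    return (mu[k_hat] / mu[k_star]) * (T - T1)
```

Selecting the optimal beam yields the full \(T-T_1\) normalized slots, while a sub-optimal beam scales this down by its normalized mean RSS.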
However, the _UB3_ algorithm is not much affected by the increase in the number of arms when finding the best arm, and hence its throughput only increases with the increasing gain of the best arm. _UB3_ can improve the throughput by more than 45% compared to _HBA_ and by more than 15% compared to _HOSUB_. This too is evident from both Fig. 5 and Fig. 6. However, the throughput decreases as we increase the transmission distance; see Fig. 6. Finally, we compare the throughput performance for varying path loss exponents \(\alpha\in\{1.74,1.94,2.14\}\), as shown in Fig. 7. The path loss exponent increases when there are more barriers, for example when the receiver moves from outdoors to indoors. The throughput indeed decreases for all of _HBA_, _HOSUB_ and _UB3_, but _UB3_ still outperforms _HBA_ and _HOSUB_.

Fig. 4: Error performance of _UB3_ vs \(T_{1}\) for different no. of beams (\(K\)) and distance (\(d\)).

Fig. 5: Throughput performance of _UB3_ vs \(T_{1}\) for different no. of beams (\(K\)) and \(d\).

Fig. 6: Throughput performance of _UB3_ vs \(d\) for different no. of beams (\(K\)) for \(T_{1}=500\).

## VI Conclusion and future work

We investigated the problem of beam alignment in mmWave systems using multi-armed bandits (MAB). While earlier works used the cumulative regret minimization setting to learn the best arm, we used the fixed-budget pure-exploration setting, exploiting the unimodal structure of the received signal strength of the beams. We developed an algorithm named Unimodal Bandit for Best Beam (UB3) that identifies the best beam with high probability. We gave an upper bound on the error probability of UB3 and established that it is optimal. Simulations validated the efficiency of _UB3_, which can identify the best beam using a smaller number of explorations, translating to an improvement in throughput of more than 15% compared to other state-of-the-art algorithms.
Due to its simple structure, _UB3_ is easy to implement and comes with a lower computational complexity: _UB3_ has a computational complexity of \(\mathcal{O}(T)\), whereas it is \(\mathcal{O}(T^{2})\) for _HBA_[15]. The _UBA_ algorithm in [7] needs to solve a convex optimization problem in each time slot, which is expensive. _UB3_ works well when only the LOS path is present and the RSS of the beams satisfies the unimodal property. However, when NLOS paths are present, we are faced with multimodal functions. _UB3_ can be adapted to handle multimodal functions by using the backtracking ideas proposed in [44]. In backtracking, eliminated arms are revisited to check whether the elimination was made by mistake, so the algorithm does not get stuck in a sub-optimal set of beams. It would be interesting to evaluate the _UB3_ algorithm with backtracking on multimodal functions and establish its performance guarantees.

## VII Appendix

In this section, we provide proofs of the main results.

### _Proof of Theorem 1_

Proof.: _UB3_ runs for a horizon of \(T_{1}\) in \(L+1\) phases satisfying (5), where \(L=\frac{\log_{2}K/3}{\log_{2}3/2}\), and outputs the arm \(\hat{k}_{L+1}\). We now upper bound the probability of error as \[P(\hat{k}_{L+1}\neq b_{k^{*}}) =\sum_{l=1}^{L+1}P(b_{k^{*}}\text{ elim. in }l,\ b_{k^{*}}\text{ not elim. in }<l)\] \[\leq\sum_{l=1}^{L+1}P(b_{k^{*}}\text{ elim. in }l). \tag{13}\] The best arm is eliminated in phase \(l\) in the following cases: 1. \(b_{k^{*}}\in\{k^{M},\dots,k^{A}\}\), and \(\hat{\mu}_{k^{B}}^{l}\) or \(\hat{\mu}_{k^{N}}^{l}\) is greater than both \(\hat{\mu}_{k^{M}}^{l}\) and \(\hat{\mu}_{k^{A}}^{l}\) 2. \(b_{k^{*}}\in\{k^{B},\dots,k^{N}\}\), and \(\hat{\mu}_{k^{M}}^{l}\) or \(\hat{\mu}_{k^{A}}^{l}\) is greater than both \(\hat{\mu}_{k^{B}}^{l}\) and \(\hat{\mu}_{k^{N}}^{l}\) The two cases are illustrated in Fig. 8. From Remark 2, \(b_{k^{*}}\) will not get eliminated if \(b_{k^{*}}\in\{k^{A},\dots,k^{B}\}\).
However, we will upper bound the probability of error by assuming that \(b_{k^{*}}\) always falls in the above two cases. Notice that Case 1 and Case 2 are symmetric; hence we can consider that \(b_{k^{*}}\) falls in either one of the cases. Without loss of generality, we consider Case 1. \[P(b_{k^{*}}\text{ elim. in }l)\] \[\leq P(\hat{\mu}_{k^{B}}^{l}\geq\hat{\mu}_{k^{M}}^{l}\text{ and }\hat{\mu}_{k^{A}}^{l}|b_{k^{*}}\in\{k^{M},\dots,k^{A}\})\] \[\quad+P(\hat{\mu}_{k^{N}}^{l}>\hat{\mu}_{k^{M}}^{l}\text{ and }\hat{\mu}_{k^{A}}^{l}|b_{k^{*}}\in\{k^{M},\dots,k^{A}\})\] \[\leq 2P(\hat{\mu}_{k^{B}}^{l}>\hat{\mu}_{k^{M}}^{l}\text{ and }\hat{\mu}_{k^{A}}^{l}|b_{k^{*}}\in\{k^{M},\dots,k^{A}\}), \tag{14}\] where the last inequality is due to the fact that, for Case 1, \(\mu_{k^{B}}\geq\mu_{k^{N}}\) by unimodality. Now for Case 1, \(\mu_{k^{A}}\) is always greater than \(\mu_{k^{B}}\), but \(\mu_{k^{M}}\) may not be greater than \(\mu_{k^{B}}\). Then, we can further upper bound (14) as \[P(b_{k^{*}}\text{ elim. in }l)\leq 2P(\hat{\mu}_{k^{B}}^{l}>\hat{\mu}_{k^{A}} ^{l}|b_{k^{*}}\in\{k^{M},..,k^{A}\}). \tag{15}\] Applying Hoeffding's inequality in (15), we have \[P(\hat{\mu}_{k^{B}}^{l}>\hat{\mu}_{k^{A}}^{l})\leq\exp\left\{-\frac{1}{2} \frac{N_{l}}{4}\left(\Delta_{A,B}\right)^{2}\right\}, \tag{16}\] where \(\Delta_{A,B}=\mu_{k^{A}}-\mu_{k^{B}}\), which is greater than \(0\) for Case 1. From Assumption 1, and the fact that there are at least \(\frac{j_{l}}{3}\) arms between \(k^{A}\) and \(k^{B}\), for Case 1 we have \(\Delta_{A,B}\geq(j_{l}/3)D_{L}\).

Fig. 7: Throughput performance of _UB3_ vs \(T_{1}\) for different path-loss exponent \(\alpha\) and \(d=20\).

Fig. 8: Different cases of elimination in any phase \(l\). \(b_{k^{*}}\) will not get eliminated if it is in between arms \(k^{A}\) and \(k^{B}\).
Thus, from (15) and (16), we have \[P(b_{k^{*}}\text{~{}elim.~{}in~{}}l)\leq 2\exp\left\{-\frac{N_{l}}{72}\bigg{(}j_{l }D_{L}\bigg{)}^{2}\right\}. \tag{17}\] Using \(j_{l}=\left(\frac{2}{3}\right)^{l}K\) in (17), we can bound the probability of the best arm getting eliminated in phases 1 and 2, phase \(L+1\), and the rest of the phases separately. Using (7), we have \[P(b_{k^{*}}\text{~{}elim.~{}in~{}}1\&2)\leq 2\exp\left\{-\frac{T_{1}K}{32}D_{L}^{2}\right\}\] \[+2\exp\left\{-\frac{T_{1}K}{72}D_{L}^{2}\right\}. \tag{18}\] For phase \(L+1\), since the best arm is selected among 3 arms, each sampled \(T_{1}/9\) times, we have \[P(b_{k^{*}}\text{~{}elim.~{}in~{}phase~{}}L+1)\leq 2\exp\left\{-\frac{T_{1}}{ 18}D_{L}^{2}\right\}. \tag{19}\] From (17), the error probability for the remaining phases is \[P(\text{best arm elim.~{}in~{}phase~{}}3\text{~{}to~{}phase~{}}L)\] \[\leq 2\sum_{l=3}^{L}\exp\left\{-\frac{T_{1}K^{2}}{72}\left(\frac{2}{3}\right)^{2(l-1)}\frac{2^{L-l+1}}{3^{L-l+2}}D_{L}^{2}\right\}\] \[\leq 2\sum_{l=3}^{L}\exp\left\{-\frac{T_{1}K}{48}\left(\frac{2}{3 }\right)^{l}D_{L}^{2}\right\}\] \[\leq 2(L-2)\exp\left\{-\frac{T_{1}}{16}D_{L}^{2}\right\}. \tag{20}\] By (13), (18), (19) and (20), we obtain the upper bound \[P(\hat{k}_{L+1}\neq b_{k^{*}})\] \[\leq 2\exp\left\{-\frac{T_{1}}{18}D_{L}^{2}\right\}+2\exp\left\{- \frac{T_{1}K}{32}D_{L}^{2}\right\}\] \[+2\exp\left\{-\frac{T_{1}K}{72}D_{L}^{2}\right\}+2(L-2)\exp\left\{ -\frac{T_{1}}{16}D_{L}^{2}\right\}.\qed\]

### _Proof of Theorem 3_

Proof.: We have \(p_{k}=\frac{1}{2}-d_{k}\) such that \(p_{k}\in[1/4,1/2]\), the \(p_{k}\) follow unimodality, and \(p_{k^{*}}=\frac{1}{2}\). Lower bounding \(\bar{h}\), we have \[\bar{h} =\sum_{i\in\{k^{*}-1,k^{*}+1\}}\frac{1}{d_{i}^{2}\bar{H}(i)}\] \[=\frac{1}{d_{k^{*}-1}^{2}\bar{H}(k^{*}-1)}+\frac{1}{d_{k^{*}+1}^{ 2}\bar{H}(k^{*}+1)}\] \[=(I)+(II).
\tag{21}\] We lower bound (I) and (II) by upper bounding their denominators: \[d_{k^{*}-1}^{2}\bar{H}(k^{*}-1)=d_{k^{*}-1}^{2}\sum_{k\in\{k^{*}-2,k^{*}\}} \frac{1}{(d_{k^{*}-1}+d_{k})^{2}}.\] Since \(d_{k^{*}}=0\) and \(d_{k^{*}-2}\geq d_{k^{*}-1},\) we get \[d_{k^{*}-1}^{2}\bar{H}(k^{*}-1) \leq 1+\frac{1}{4}=\frac{5}{4}. \tag{22}\] Similarly, \[d_{k^{*}+1}^{2}\bar{H}(k^{*}+1) =d_{k^{*}+1}^{2}\sum_{k\in\{k^{*},k^{*}+2\}}\frac{1}{(d_{k^{*}+1} +d_{k})^{2}}.\] Since \(d_{k^{*}}=0\) and \(d_{k^{*}+2}\geq d_{k^{*}+1},\) we get \[d_{k^{*}+1}^{2}\bar{H}(k^{*}+1) \leq 1+\frac{1}{4}=\frac{5}{4}. \tag{23}\] By (22) and (23) we get \[\bar{h}\geq\frac{4}{5}+\frac{4}{5}=\frac{8}{5}.\] Putting this value of \(\bar{h}\) in Corollary 1, we get \[\max_{i\in\{k^{*}-1,k^{*}+1\}}P_{i}(\hat{k}_{T_{1}}\neq i)\geq\frac{1}{6}\exp\left(-75\frac{T_{1}}{\bar{H}(i)}\right).\qed\]
2309.17201
Ghost channels and ghost cycles guiding long transients in dynamical systems
Dynamical descriptions and modeling of natural systems have generally focused on fixed points, with saddles and saddle-based phase-space objects such as heteroclinic channels/cycles being central concepts behind the emergence of quasi-stable long transients. Reliable and robust transient dynamics observed for real, inherently noisy systems is, however, not met by saddle-based dynamics, as demonstrated here. Generalizing the notion of ghost states, we provide a complementary framework that does not rely on the precise knowledge or existence of (un)stable fixed points, but rather on slow directed flows organized by ghost sets in ghost channels and ghost cycles. Moreover, we show that appearance of these novel objects is an emergent property of a broad class of models, typically used for description of natural systems.
Daniel Koch, Akhilesh Nandan, Gayathri Ramesan, Aneta Koseska
2023-09-29T12:52:06Z
http://arxiv.org/abs/2309.17201v2
# Beyond fixed points: transient quasi-stable dynamics emerging from ghost channels and ghost cycles ###### Abstract Dynamical description of natural systems has generally focused on fixed points, with saddles and saddle-based phase space objects such as heteroclinic channels and heteroclinic cycles being central concepts behind the emergence of quasi-stable dynamics or long transients. Reliable and robust quasi-stable dynamics observed for real, inherently noisy systems is, however, not met by saddle-based dynamics, as demonstrated here. Generalizing the notion of ghost states, we provide a complementary framework for emergence of sequential quasi-stable dynamics that does not rely on (un)stable fixed points, but rather on slow directed flows on ghost manifolds from which _ghost channels_ and _ghost cycles_ are generated. Moreover, we show that these novel phase space objects are an emergent property of a broad class of models, typically used for description of natural systems. ghost states; heteroclinic channels; heteroclinic cycles; saddle fixed points; metastability; quasi-stable dynamics; slow manifolds Living and man-made, but also ecological or climate systems are classically described to exhibit asymptotic behavior, implying that the observed dynamics is retained in absence of a perturbation. Mathematically, such dynamics corresponds to invariant sets that represent objects in phase space, the simplest being stable fixed points that are separated by unstable fixed points or saddles (Fig. 1(a)-(c)). However, a growing body of empirical evidence suggests that real-world systems are often characterized by long transients which are quasi-stable (with anomalously slow relaxation [1; 2]) with fast switching between them.
The duration of the quasi-stable dynamical patterns is much larger than one would expect from the characteristic elementary processes, whereas the switching is triggered by external signals or system-autonomously, and occurs on a timescale much shorter than the one of the preceding dynamical pattern. Examples include neuronal firing patterns during olfactory sensing or discrimination tasks [3; 4; 5], pattern matching during camouflage in animals [6], cellular signaling systems [7; 8], ecological systems [9; 10; 11], as well as earth and climate systems [12; 13]. Particularly in the context of neuronal systems, the described dynamics has been often referred to as _metastable_ [14; 15; 16]. Some forms of these observed long transients have been conceptualized by trapping of the system's dynamics ("crawl-by") in the vicinity of a saddle [10], whereas heteroclinic objects consisting of joined saddles are thought to explain the switching between different quasi-stable dynamical patterns [17; 18]. Recently, saddle-node "ghosts" have been additionally proposed to underlie transient cell signaling [7] or epigenetic memory [8], as well as regime-shifts in marine ecosystems [10]. Generalizing the concept of ghost states [1; 19; 20], we provide here a complementary theoretical framework that does not rely on (un)stable fixed points, but rather on transiently stable flows in phase space generated by ghost manifolds, from which _ghost channels_ and _ghost cycles_ can be created. The ghost manifold is represented by a very shallow slope in the quasi-potential landscape, which transiently captures the incoming trajectories, generating slow dynamics underlying quasi-stability (Fig. 1(d)). The ghost structures correspond to Lyapunov-unstable invariant set solutions which are bounded [21], enabling the dynamics to converge to specified areas in the system's state space.
By defining dynamical criteria for the characterization of slow dynamics on the ghost manifolds, we demonstrate that they are more suitable than saddles and heteroclinic objects for the description of long transients and for reliable guiding of trajectories in inherently noisy systems. _Properties of ghost manifolds._--To derive and generalize the basic dynamical characteristics of ghost manifolds, let us consider a conceptual 2D-system of first order differential equations (Eq. 1): \(\dot{\mathbf{x}}=\mathbf{F}(\mathbf{x})\), where \(\mathbf{x}=(x,y)\in R^{2}\) and \(\mathbf{F}(\mathbf{x})=(f_{x},f_{y})\); \(f_{x}=\alpha+x^{2}\), \(f_{y}=-y\). For \(\alpha<0\), the system has a stable fixed point and a dissipative saddle (saddle value: \(\nu=\frac{Re\lambda_{x}}{\lambda_{u}}\approx 1.11\)), whereas for \(\alpha\to 0^{+}\) (Supplementary Fig. 1(a)), a ghost state or a so-called bottleneck exists [20]. The phase space regions that are characterized by slow dynamics can be identified using an auxiliary scalar function \(q(x)=\frac{1}{2}|\mathbf{F}(\mathbf{x})|\) that is related to the kinetic energy of the system [22]. \(q(\mathbf{x}^{*})=\mathbf{0}\) if and only if \(x^{*}\) is a fixed point of the system, whereas low kinetic energy (\(q\approx 0\), identified by minimizing \(|\mathbf{F}(\mathbf{x})|\)) in turn corresponds to regions with slow dynamics beyond fixed points. Calculating the kinetic energy for Eq. 1 shows that \(q\) is not only minimized around the saddle fixed point (Fig. 2(a)
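The bottleneck slow-down behind ghost states can be checked with a few lines of numerics. The sketch below (integration interval, step size, and the \(\alpha\) values are illustrative choices, not taken from the text) Euler-integrates the normal form \(\dot{x}=\alpha+x^{2}\) and recovers the well-known \(\pi/\sqrt{\alpha}\) passage-time scaling as \(\alpha\to 0^{+}\):

```python
import numpy as np

def passage_time(alpha, x0=-3.0, x1=3.0, dt=1e-4):
    """Euler-integrate x' = alpha + x**2 and return the time needed to
    travel from x0 to x1 through the bottleneck near x = 0."""
    x, t = x0, 0.0
    while x < x1:
        x += dt * (alpha + x * x)
        t += dt
    return t

# The trajectory lingers near x = 0 for a time that scales like
# pi / sqrt(alpha) as alpha -> 0+, the hallmark of a ghost/bottleneck.
for alpha in (0.04, 0.01):
    print(alpha, passage_time(alpha), np.pi / np.sqrt(alpha))
```

Halving \(\sqrt{\alpha}\) roughly doubles the time spent in the bottleneck, which is the slow, quasi-stable dynamics the kinetic-energy criterion is designed to detect.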
2306.17484
Landmark Guided Active Exploration with State-specific Balance Coefficient
Goal-conditioned hierarchical reinforcement learning (GCHRL) decomposes long-horizon tasks into sub-tasks through a hierarchical framework and it has demonstrated promising results across a variety of domains. However, the high-level policy's action space is often excessively large, presenting a significant challenge to effective exploration and resulting in potentially inefficient training. In this paper, we design a measure of prospect for sub-goals by planning in the goal space based on the goal-conditioned value function. Building upon the measure of prospect, we propose a landmark-guided exploration strategy by integrating the measures of prospect and novelty which aims to guide the agent to explore efficiently and improve sample efficiency. In order to dynamically consider the impact of prospect and novelty on exploration, we introduce a state-specific balance coefficient to balance the significance of prospect and novelty. The experimental results demonstrate that our proposed exploration strategy significantly outperforms the baseline methods across multiple tasks.
Fei Cui, Jiaojiao Fang, Mengke Yang, Guizhong Liu
2023-06-30T08:54:47Z
http://arxiv.org/abs/2306.17484v2
# Landmark Guided Active Exploration with Stable Low-level Policy Learning ###### Abstract Goal-conditioned hierarchical reinforcement learning (GCHRL) decomposes long-horizon tasks into sub-tasks through a hierarchical framework and it has demonstrated promising results across a variety of domains. However, the high-level policy's action space is often excessively large, presenting a significant challenge to effective exploration and resulting in potentially inefficient training. Moreover, the dynamic variability of the low-level policy introduces non-stationarity to the high-level state transition function, significantly impeding the learning of the high-level policy. In this paper, we design a measure of _prospect_ for subgoals by planning in the goal space based on the goal-conditioned value function. Building upon the measure of prospect, we propose a landmark-guided exploration strategy by integrating the measures of prospect and novelty which aims to guide the agent to explore efficiently and improve sample efficiency. To address the non-stationarity arising from the dynamic changes of the low-level policy, we apply a state-specific regularization to the learning of low-level policy, which facilitates stable learning of the hierarchical policy. The experimental results demonstrate that our proposed exploration strategy significantly outperforms the baseline methods across multiple tasks. Hierarchical reinforcement learning (HRL), subgoal, exploration-exploitation, state-specific regularization. ## I Introduction Deep Reinforcement Learning (DRL) is a powerful approach for solving sequential decision-making problems, such as video games [1, 2] and robot navigation [3, 4, 5]. DRL models these problems as partially observable Markov decision processes (POMDPs), and learns the optimal policies by maximizing the cumulative discounted reward. However, in many complex tasks, agents often struggle to collect sufficient high-reward trajectories. 
Hierarchical reinforcement learning decomposes complex, long-horizon decision-making tasks into sub-tasks of different time scales and is a promising method for solving such long-horizon tasks. Goal-conditioned hierarchical reinforcement learning [6, 7, 8, 9] is a two-level hierarchical reinforcement learning paradigm, where the high-level policy decomposes the original task into a series of subgoals, and the low-level policy guides the agent to achieve these subgoals. The learning objective of the hierarchical policy is still to learn a decision function that maximizes the cumulative expected reward, so the learning of the high-level policy depends on the external rewards, while the learning of the low-level policy depends on the intrinsic rewards defined through the subgoals. Effective subgoals are crucial for achieving good performance and efficiency in goal-conditioned hierarchical reinforcement learning. Selecting reasonable subgoals that capture the task's semantics provides meaningful guidance for low-level policy learning. Pre-defined subgoal space [10, 11] and task-specific subgoal representation space [12, 13, 14, 15] learned online can be employed to better represent the action space of the high-level policy. Pre-defined subgoals can quickly supervise low-level policy learning via intrinsic rewards, while learned subgoal representations can be optimized for specific tasks. However, sampling actions in a large subgoal space can lead to inadequate exploration and inefficient training of the agent's high-level policy. To guide the agent to explore efficiently, some approaches [10, 11] reduce the complexity of the high-level action space by using adjacency constraints, which promotes the agent to explore reasonable states. 
To further avoid introducing additional non-stationarity in HRL, HESS [15] proposes an active exploration strategy that considers a combined measure of _novelty_ and _potential_ for subgoals after stabilizing the learning of the subgoal representation space. The novelty measure aims to enhance the agent's ability to explore new states, while the potential measure guides the agent to explore in the direction that is more likely to expand the explored area. HESS is effective in guiding the agent to unexplored areas, but it ignores the impact of the final goal on exploration. Therefore, relying solely on the measures of novelty and potential may not always guide the agent to the most promising areas. Our insight is that the agent not only needs to expand the exploration area but also needs to pay attention to the areas that are more likely to achieve the final task goal. To this end, we design a _prospect_ measure for subgoals through landmark-based planning in the subgoal representation space. This measure can reflect the likelihood of exploring in the direction of a subgoal leading the agent to states closer to achieving the final task goal. Considering the measures of prospect and novelty for subgoals, we propose a Landmark-guided active Exploration strategy with Stable low-level Policy learning (LESP). The strategy incorporates the measure of prospect to guide the agent to explore subgoals that are more likely to lead to the final task goal. Additionally, in goal-conditioned hierarchical reinforcement learning, the high-level state transition function is dependent not only on the physical environment but also on the online-learned low-level policy. Due to the dynamic changes of the low-level policy, the non-stationarity of the high-level state transition function greatly hinders the learning of the high-level policy.
Some prior works [16, 17] have attempted to re-label the actions (i.e., subgoals) of the high-level state transitions to make them adaptable to the dynamic changes in the low-level policy. In this paper, we apply state-specific regularization to the learning of the low-level policy to alleviate the non-stationarity caused by the dynamic changes in the low-level policy and facilitate the learning of the high-level policy. The proposed LESP strategy effectively balances the exploration-exploitation trade-off, and enables efficient learning of goal-conditioned hierarchical policies. We compare the proposed method LESP with the state-of-the-art baselines in the Mujoco experimental environment [18]. The experimental results demonstrate that LESP, which takes into account the guidance of the task goal for exploration, outperforms the baseline methods. Additionally, we conduct ablation experiments to verify the roles of different components of LESP. The article is organized as follows: Section II covers the relevant works. Section III discusses the essential preliminary concepts. Section IV describes the proposed method LESP. In Section V, experiments and results are discussed. Finally, this article is concluded in Section VI. ## II Related Work When dealing with long-horizon decision-making tasks, how to guide the agent to explore promising trajectories is a crucial problem. Hierarchical reinforcement learning (HRL) [19, 20, 21, 22, 23] improves sample efficiency by decomposing complex tasks. Selecting reasonable subgoals during interaction with the environment can also avoid blind exploration. Related works are described from these two aspects. ### _Hierarchical Reinforcement Learning_ Hierarchical Reinforcement Learning divides the original task into sub-tasks at different timescales using a hierarchical structure.
The high-level policy communicates with the low-level policy through subgoals, and the signal passed from the high-level policy to the low-level policy can vary across different tasks, ranging from using discrete values for options [24, 25, 26] to employing a pre-defined subgoal space [10, 11, 16] or a subgoal representation space learned online [12, 13, 14, 15, 27, 28]. The use of discrete-valued options naturally reduces the complexity of the high-level action space, but the limited rules decrease the adaptability of the solution to different tasks. On the other hand, learning subgoal representations often results in a high-dimensional action space, which can hinder the agent's ability to explore effectively. To explore effectively, HRAC [10] restricts the high-level action space within the \(k\)-step adjacency area through adjacency constraints, while HIGL [11] samples landmarks through coverage-based sampling and novelty-based sampling in the replay buffer, restricting the actions of the high-level policy within the domain of the most urgent landmark, but this introduces additional non-stationarity. In order to choose appropriate subgoals to guide exploration, HESS [15] proposes an exploration strategy by considering the measures of novelty and potential for subgoals but ignores the guidance of the task's final goal for exploration. In contrast, our approach plans landmarks in the subgoal representation space according to the task goal and designs a measure of _prospect_ that considers the influence of the task's final goal, proposing a more efficient hierarchical exploration strategy.
These approaches allow for planning in the goal space based on reachability, with states on the planned path selected as subgoals. With good perception of the environment map, some rule-based methods [3, 34, 35] utilize heuristic search methods to find an optimal trajectory, and sample subgoals along the trajectory based on physical priors. After utilizing a value function to learn the reachability between states in the state space, L3P [32] clusters the candidate states based on their reachability, with each cluster center representing a potential landmark. Then, a sparse topological graph is constructed with the potential landmarks as nodes and reachability as edges. Finally, subgoals are generated by conducting a graph search algorithm on the constructed graph. On the other hand, HIGL [11] uses coverage-based sampling and novelty to sample landmarks from the replay buffer. To facilitate effective exploration with reasonable subgoals, the latest work HESS [15] samples candidate landmarks in the neighborhood of the agent's current state and selects subgoals based on the measures of novelty and potential. This active exploration strategy avoids introducing additional non-stationarity and speeds up the learning process of the hierarchical policy. Like previous works [11, 32], we also utilize a goal-conditioned value function to plan landmarks in the goal space. We design a measure of _prospect_ based on the landmarks. Considering measures of _novelty_ and _prospect_ to select subgoals, we propose an active hierarchical exploration strategy.
## III Preliminaries Reinforcement learning formulates the sequential decision-making problem as a Markov decision process (MDP) [36], defined as a tuple \(M=\langle\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma\rangle\), where \(\mathcal{S}\) denotes the state space, \(\mathcal{A}\) denotes the action space, \(\mathcal{P}\) represents the state transition function that reflects the dynamics of the environment, \(r\) is the reward function typically designed by human experts for the task, and \(\gamma\in[0,1)\) is a discount factor. A policy \(\pi(a|s)\) maps a given state \(s\) to a probability distribution over the action \(a\). The goal of reinforcement learning is to learn a policy that maximizes the expected cumulative discounted reward \(\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}r_{t}]\), where \(r_{t}\) is the immediate reward that the agent receives from the environment after taking action \(a_{t}\) at state \(s_{t}\). Reinforcement learning is mainly divided into two categories: the value-based methods [37, 38] and the policy gradient methods [39, 40, 41]. Value-based methods compute the state-action value function and choose actions greedily based on the computed value function. Policy gradient methods optimize the policy directly by the policy gradient computed on the value function. To encourage the agent to explore the state space, we adopt the Soft Actor-Critic (SAC) [39] algorithm for both the high-level and the low-level policies in our experiments. In SAC, the standard value loss function is: \[L_{Q}(\theta)=\mathbb{E}_{s_{t},a_{t},s_{t+1}\sim\mathcal{B},\,a_{t+1}\sim\pi_{\psi}}\Big[\frac{1}{2}\big(Q_{\theta}(s_{t},a_{t})-(r(s_{t},a_{t})+\gamma(Q_{\theta}(s_{t+1},a_{t+1})-\alpha\log\pi_{\psi}(a_{t+1}|s_{t+1})))\big)^{2}\Big] \tag{1}\] where \(\gamma\) is the discount factor, \(\alpha\) denotes the temperature coefficient, \(\mathcal{B}\) is the replay buffer, and \(\pi_{\psi}\) represents the policy.
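For concreteness, the soft Bellman error of Eq. (1) reduces to a few lines of numpy once the sampling and gradient machinery is stripped away. This is only a schematic sketch (placeholder batch values, default \(\gamma\) and \(\alpha\), no target network), with the entropy term discounted inside the target as in standard SAC, not the paper's implementation:

```python
import numpy as np

def sac_value_loss(q, q_next, log_pi_next, r, gamma=0.99, alpha=0.2):
    """Schematic soft Bellman error of Eq. (1) for a batch of transitions.

    q           : Q(s_t, a_t) for sampled state-action pairs
    q_next      : Q(s_{t+1}, a_{t+1}) with a_{t+1} ~ pi(.|s_{t+1})
    log_pi_next : log pi(a_{t+1}|s_{t+1})
    """
    # Soft target: reward plus discounted, entropy-regularized next value.
    target = r + gamma * (q_next - alpha * log_pi_next)
    return 0.5 * np.mean((q - target) ** 2)

# Placeholder batch just to show the shapes involved; these numbers are
# illustrative, not the paper's hyper-parameters.
q = np.array([1.0, 0.5])
q_next = np.array([0.8, 0.2])
log_pi_next = np.array([-1.0, -0.5])
r = np.array([0.1, 0.0])
print(sac_value_loss(q, q_next, log_pi_next, r))
```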
The policy loss function can be expressed as follows: \[L_{\pi}(\psi)=\mathbb{E}_{s_{t}\sim\mathcal{B},a_{t}\sim\pi_{\psi}}[\log\pi_{\psi}(a_{t}|s_{t})-\frac{1}{\alpha}Q_{\theta}(s_{t},a_{t})] \tag{2}\] Goal-conditioned hierarchical reinforcement learning models long-horizon decision-making tasks as a goal-conditioned Markov decision process, \(M=\langle\mathcal{S},\mathcal{G},\mathcal{A},\mathcal{P},r,\gamma\rangle\), where \(\mathcal{G}\) represents the goal space. As illustrated in Figure 1, goal-conditioned reinforcement learning is a hierarchical framework consisting of two policies. The high-level policy \(\pi_{h}(g|s)\) operates at a lower temporal resolution, sampling a high-level action every \(c\) time steps (i.e., when \(t\equiv 0\ (\mathrm{mod}\ c)\)). When \(t\not\equiv 0\ (\mathrm{mod}\ c)\), a predefined function such as the identity function is used to specify a subgoal for the low-level policy. Given the current state \(s_{t}\) and subgoal \(g_{t}\), the low-level policy \(\pi_{l}(a_{t}|s_{t},g_{t})\) produces low-level actions that are executed by the agent to interact with the environment. During training, since the goal of the low-level policy is to enable the agent to achieve the subgoals specified by the high-level policy, the low-level policy is optimized using intrinsic rewards \(-\left\|\phi(s_{t+1})-g_{t}\right\|_{2}\) computed from the subgoal, where \(\phi\) is the subgoal representation function that maps the state to the goal space. Similar to the standard reinforcement learning paradigm, the learning objective of the hierarchical structure is still to enable the agent to interact efficiently with the external environment, so the reward of the high-level policy is defined as the sum of \(c\) external rewards \(\sum_{t=0}^{c-1}r_{t}^{env}\) after executing a high-level action. In goal-conditioned reinforcement learning, the high-level policy and the low-level policy can be trained simultaneously in an end-to-end manner.
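The two mechanics just described, the low-level intrinsic reward and the every-\(c\)-steps subgoal update, can be sketched as follows (function names here are illustrative, not from the paper):

```python
import numpy as np

def intrinsic_reward(phi_s_next, subgoal):
    """Low-level reward: negative Euclidean distance between the embedded
    next state phi(s_{t+1}) and the current subgoal g_t."""
    return -np.linalg.norm(phi_s_next - subgoal)

def subgoal_for_step(t, c, s_t, high_level_policy, prev_subgoal):
    """The high level acts every c steps; in between, the identity
    transition keeps the previous subgoal fixed."""
    if t % c == 0:
        return high_level_policy(s_t)
    return prev_subgoal
```

The reward is maximal (zero) exactly when the agent reaches the subgoal in the latent goal space, which is what drives the low-level policy toward the high-level commands.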
The high-level policy provides real-time subgoals to the low-level policy and guides its learning through intrinsic rewards. The dynamic low-level policy learned online influences the stationarity of the high-level state transitions. Therefore, a well-designed hierarchical exploration strategy can significantly enhance the learning efficiency of the hierarchical policy. ## IV Method In this section, we propose LESP: **L**andmark guided active **E**xploration with **S**table Low-level **P**olicy learning. We describe LESP from three aspects: measures for subgoals, stable low-level policy learning and hierarchical exploration strategy. ### _Measures for Subgoals_ In goal-conditioned hierarchical reinforcement learning, effectively guiding the agent's exploration is crucial for improving the algorithm performance. Previous count-based exploration methods [15, 11] define the novelty of subgoals based on the number of visits to states. However, solely relying on visit counts may not always guide the agent to explore promising states. The state-of-the-art exploration approach [15] introduces the measure of potential for subgoals, aiming to effectively guide the agent to explore the unexplored region. Our insight is that reasonable subgoals should not only guide the agent towards expanding the exploration area but also guide the agent to explore regions that are likely to lead to the ultimate goal. To address this, we design the measure of _prospect_ for subgoals, which reflects the subgoal's positivity towards achieving the final goal. The prospect measure requires specifying the exploration direction for the agent. Following prior work [11, 32], we choose to plan landmarks in the subgoal representation space. We aim to sample landmarks that cover a wide range of the goal space. To achieve this, we employ the Farthest Point Sampling (FPS) [43] algorithm to sample \(n_{cov}\) landmarks from the goal space. 
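For reference, greedy farthest point sampling over a candidate set can be sketched in a few lines (the deterministic seed point and the Euclidean metric are simplifying assumptions for illustration):

```python
import numpy as np

def farthest_point_sampling(points, n_cov):
    """Greedy FPS: start from an arbitrary seed point, then repeatedly add
    the point with the largest Euclidean distance to the current landmark
    set, until n_cov landmark indices are selected."""
    points = np.asarray(points, dtype=float)
    selected = [0]  # arbitrary deterministic seed point
    dist = np.linalg.norm(points - points[0], axis=1)
    while len(selected) < n_cov:
        nxt = int(np.argmax(dist))  # farthest from the selected set
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return selected
```

Each iteration costs one distance pass over the candidates, so the whole procedure is \(O(n_{cov}\cdot|S|)\), which is cheap enough to rerun whenever landmarks are replanned.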
Starting with an initial candidate set, we iteratively add the farthest landmark to the sampled landmark set until a sufficient number of landmarks are sampled, where the distance between two states is measured by the Euclidean distance in the goal space. This sampling process ensures that the sampled landmarks are spread out and cover a diverse range of regions in the goal space. To search for a promising path from the current state to the goal, we build a graph consisting of the current state, the goal, and the sampled landmarks as nodes; following previous works [11, 29, 30, 32] and [42], the edges between nodes are weighted based on the goal-conditioned value function \(V(s_{1},\phi(s_{2}))\). This value function reflects the reachability between two states \(s_{1}\) and \(s_{2}\). Once the graph is built, we can plan the landmarks on the graph. By applying a shortest path planning algorithm, we can find a feasible path from the current state to the goal, selecting the landmark \(l_{sel}\) on the path that is closest to the current state as the region the agent should explore. Algorithm 1 describes the process of landmark selection.

Fig. 1: The framework of goal-conditioned hierarchical reinforcement learning (GCHRL, [6, 7, 8, 9]). \(\phi\) is the subgoal representation function that maps the state to the goal space. The hierarchical framework consists of a high-level policy and a low-level policy. The reward of the high-level policy is a sum of \(c\) (the low-level policy length) external rewards, while the reward of the low-level policy is the negative distance between the state and subgoal in the latent space.
```
Input: current state \(s_{t}\), goal \(g\), threshold \(\tau\), \(\phi(s)\) and goal-conditioned value function \(V(s,g)\)
Output: selected landmark \(l_{sel}\)
Initial \(n\) transitions \(T=(s,a,s^{\prime})\) from buffer \(B_{pre}\)
\(N\leftarrow\textbf{FPS}(S=\{s\mid(s,a,s^{\prime})\in T\})\cup\{g\}\)
\(W_{i,j}\leftarrow\infty\)
for \(\forall(n_{i},n_{j})\in N\times N\) do
    \(w_{i,j}\leftarrow[-V(n_{i},\phi(n_{j}))]\)
    if \(w_{i,j}\leq\tau\) then
        \(W_{i,j}=w_{i,j}\)
    end if
end for
Trajectory \(K\leftarrow\textbf{Shortest Path Planning}(N,W)\)
\(k_{sel}\leftarrow\operatorname*{argmin}_{k_{i}\in K}-V(s_{t},\phi(k_{i}))\)
\(l_{sel}=\phi(k_{sel})\)
return \(l_{sel}\)
```
**Algorithm 1** Landmark Selection

In practice, before training the hierarchical policy, we allow the agent to walk randomly in the environment and collect trajectories stored in buffer \(B_{pre}\) (not the replay buffer \(\mathcal{B}\)). We then use these trajectories to train the goal-conditioned value function \(V(s,g)\). For the current state \(s_{t}\), after selecting landmark \(l_{sel}\) with Algorithm 1, the exploration strategy considers selecting a subgoal \(g_{t}\) near the current state \(s_{t}\), where the _prospect_ of the subgoal \(g_{t}\) is defined as follows: \[P(g_{t})=-\left\lVert g_{t}-l_{sel}\right\rVert_{2} \tag{3}\] The landmark selection and prospect calculation process are depicted in Figure 2. The prospect measure takes into account the influence of the goal on the subgoal selection by considering the feasible path to the goal in the latent space. In contrast to previous work, the prospect measure not only encourages the agent to explore unexplored regions but also guides the agent to explore regions that have a positive impact on achieving the task goal. To maintain the agent's ability to explore new states, similar to previous works [11, 15], we also consider the novelty measure for subgoals.
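The graph-and-shortest-path core of the landmark selection above can be approximated in a few lines; here `value_fn(i, j)` stands in for the learned \(V(n_{i},\phi(n_{j}))\), and the dense edge enumeration and Dijkstra search are implementation choices for illustration, not prescribed by the paper:

```python
import heapq
import math

def select_landmark(nodes, value_fn, tau):
    """Sketch of the landmark-selection idea: nodes[0] is the current
    state and nodes[-1] the goal. Pairs whose cost -V exceeds tau are
    treated as unreachable; Dijkstra finds the cheapest path from start
    to goal and the first node on that path is returned."""
    n = len(nodes)
    dist = [math.inf] * n
    prev = [-1] * n
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue  # stale queue entry
        for j in range(n):
            if j == i:
                continue
            w = -value_fn(i, j)  # reachability-based edge cost
            if w > tau:
                continue  # prune pairs deemed unreachable
            if d + w < dist[j]:
                dist[j], prev[j] = d + w, i
                heapq.heappush(heap, (d + w, j))
    # Backtrack from the goal to recover the first step on the path.
    path, i = [], n - 1
    while i != -1:
        path.append(i)
        i = prev[i]
    path.reverse()
    return path[1] if len(path) > 1 and path[0] == 0 else None
```

With a reliable value function this returns the on-path node nearest the current state, which is exactly the quantity the prospect measure is computed against.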
The novelty of a subgoal is measured by discrete counting in the replay buffer \(\mathcal{B}\), defined as: \[N(s_{i})=\mathbb{E}_{s_{i+jc}\sim\mathcal{B}}[-\sum_{j=0}^{\lfloor(T-i)/c\rfloor}\gamma^{j}n(\phi(s_{i+jc}))] \tag{4}\] where \(c\) denotes the horizon for the low-level policy, \(\gamma\) is the discount factor and \(n(\phi(s))\) indicates the immediate count of \(s\) in the replay buffer \(\mathcal{B}\). Following HESS [15], in order to reduce the computational cost, we discretize the state space into cells and estimate \(n(\phi(s))\) by counting the number of times each cell is visited.

Fig. 2: Landmark selection and prospect calculation process. The calculation of prospect involves four stages: 1) **Sampling**: An adequate number of sample points are randomly selected from the state space. Then, the FPS algorithm is employed to sample \(n_{cov}\) landmark points. 2) **Building a graph**: The sampled landmarks, current position, and goal are used as nodes to build a graph. The edges of the graph represent the reachability between two nodes. 3) **Path planning**: Using the shortest path planning algorithm, a feasible path from the current position to the goal is determined based on the constructed graph. 4) **Calculation**: A landmark is sampled along the trajectory and selected as \(l_{sel}\). The prospect of the subgoals (within the neighborhood of the current position) is then calculated based on the selected landmark.

### _Stable Low-level Policy Learning_ Goal-conditioned hierarchical reinforcement learning is an effective paradigm for solving complex sequential decision-making tasks. However, the dynamic changes in the low-level policy introduce non-stationarity in the high-level state transitions. Over time, taking the same high-level action in the same state can lead to completely different state transitions. This is because high-level state transitions are not solely determined by the environmental dynamics.
To mitigate the non-stationarity in learning the low-level policy, inspired by the stable learning of subgoal representations in HESS [15], we introduce a state-specific regularization term \(L_{r}\) in the loss of the low-level policy. \(L_{r}\) is designed to limit the estimation differences of the low-level Q-function at different time steps without compromising learning efficiency. The definition of \(L_{r}\) is as follows: \[L_{r}(\theta)=\mathbb{E}_{s,g\sim\mathcal{B},a\sim\pi_{l}}[\lambda(s,g)\left\|Q_{\theta}(s,g,a)-Q_{\theta_{old}}(s,g,a)\right\|_{2}] \tag{5}\] where \(\mathcal{B}\) is the replay buffer, \(\pi_{l}\) is the low-level policy and \(\lambda(s,g)\) is the regularization weight of the state-subgoal pair \((s,g)\). In practice, we first calculate the loss \(L_{Q}(\theta)\) for different state-subgoal pairs using Equation 1. Then, we set \(\lambda\) to 1 for the \(k\%\) of state-subgoal pairs with the smallest loss and to 0 for the rest. This is to limit the variation of the value function for states where the agent has already explored sufficiently, while maintaining the learning efficiency of the low-level policy in unknown states. So the overall loss for the value function of the low-level policy is represented by \(L_{Q}(\theta)+L_{r}(\theta)\). ### _Hierarchical Exploration Strategy_ An active exploration strategy can directly influence the behavioral policy, avoiding the introduction of additional non-stationarity through intrinsic rewards. Selecting subgoals with high potential and novelty can help the agent expand its exploration area. However, blindly exploring unknown regions may lead to an accumulation of ineffective experiences. In order to further enhance the efficiency of training the hierarchical policy, we propose an active hierarchical exploration strategy that takes into account the influence of the final goal.
The proposed exploration strategy aims to maintain the agent's ability to explore unknown regions while prioritizing subgoals \(g_{t}\) with high prospect, specified as follows: \[\begin{split} g_{t}=\operatorname*{argmax}_{\phi(s)}\;N(\phi(s))+\alpha P(\phi(s))\\ \text{subject to}\begin{cases}D(\phi(s),\phi(s_{t}))\leq r_{g}\\ s\in\mathcal{B}\end{cases}\end{split} \tag{6}\] where \(N\) and \(P\) are normalized measures of novelty and prospect, respectively, \(D\) represents the Euclidean distance, \(\mathcal{B}\) is the replay buffer, \(\phi\) is the subgoal representation function learned online, and \(\alpha\) is the balancing coefficient. Algorithm 2 describes our active hierarchical exploration strategy. In LESSON [13], it has been demonstrated that the triplet loss based on slow features can capture the relative positional relationships in the state space. Therefore, following LESSON, we train the subgoal representation function \(\phi\) using the triplet loss, as follows: \[\begin{split}L_{\phi}=\mathbb{E}_{(s_{t},s_{t+1},s_{t+c})\sim\mathcal{B}}[\left\|\phi(s_{t})-\phi(s_{t+1})\right\|_{2}\\ +\max(0,\delta-\left\|\phi(s_{t})-\phi(s_{t+c})\right\|_{2})]\end{split} \tag{7}\] where \(\mathcal{B}\) is the replay buffer, \(\delta\) is the margin parameter of the triplet loss, and \(c\) is the low-level policy length of the hierarchical framework.
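For a single \((s_{t},s_{t+1},s_{t+c})\) triple, the slow-feature triplet loss of Eq. 7 can be sketched as follows (the margin value below is illustrative, not the paper's setting):

```python
import numpy as np

def triplet_loss(phi_st, phi_st1, phi_stc, delta=2.0):
    """Triplet loss of Eq. 7 for one (s_t, s_{t+1}, s_{t+c}) triple:
    pull temporally adjacent embeddings together, and push embeddings
    c steps apart beyond the margin delta."""
    near = np.linalg.norm(phi_st - phi_st1)   # adjacent states: minimize
    far = np.linalg.norm(phi_st - phi_stc)    # c-step-apart states
    return near + max(0.0, delta - far)
```

The first term enforces slowness (consecutive states embed nearby), while the hinge term keeps the representation from collapsing by spreading states that are \(c\) steps apart.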
```
Initialize: \(\pi_{h}(g|s)\), \(\pi_{l}(a|s,g)\) and \(\phi(s)\)
for \(i=1..episodeNum\) do
    for \(t=0..T-1\) do
        if \(t\equiv 0\) (mod \(c\)) then
            Uniformly sample a number \(n\) in the range (0,1)
            if \(n<p\ (p\in(0,1))\) then
                Select landmark \(l_{sel}\) with Algorithm 1
                Generate a candidate set of subgoals
                Calculate the prospect and novelty of subgoals by Eq.3 and Eq.4
                Select subgoal \(g_{t}\) by Eq.6
            else
                Execute \(g_{t}\sim\pi_{h}(\cdot|s_{t})\)
                Update \(\pi_{h}\)
            end if
        else
            \(g_{t}=g_{t-1}\)
        end if
        Execute \(a_{t}\sim\pi_{l}(\cdot|s_{t},g_{t})\)
        Store experiences in replay buffer \(\mathcal{B}\)
        Update \(\pi_{l}\) by Eq.1 and Eq.5
    end for
    if \(i\equiv 0\) (mod \(I\)) then
        Update \(\phi\) using the triplet loss defined by Eq.7
    end if
end for
return: \(\pi_{h},\pi_{l},\phi\)
```
**Algorithm 2** LESP algorithm

## V Experiments

In order to evaluate the performance of the landmark-guided hierarchical exploration scheme, we conduct experiments on challenging sparse-reward tasks in the MuJoCo environment. The experimental design primarily aims to answer the following questions: 1) How does the landmark-guided active exploration strategy compare to the state-of-the-art active exploration method HESS in terms of performance? 2) How does the Prospect measure guide exploration and improve performance? 3) What is the importance of each component in the proposed hierarchical exploration strategy, and what is the impact of the hyper-parameters on the experiments?

### _Experimental Setup_

We conduct experiments on several long-horizon decision-making tasks based on the MuJoCo engine, as illustrated in Figure 3. **Point Maze**: A simulated ball is initially positioned in the bottom-left corner of a U-shaped maze, with the objective of reaching the top-left corner of the maze. **Ant Maze**: Similar to Point Maze, but the agent is a simulated ant navigating through the maze environment. **Ant FourRooms**: The task environment is a maze with four rooms.
A simulated ant starts from the bottom-left corner and aims to reach a distant goal located in the top-right corner. **Ant Push**: In this task, there is a movable block obstructing the path of the ant towards the target. The ant needs to move the block aside in order to reach the goal. **Ant Maze (W-shape)**: In this environment, the maze structure is larger, 32 \(\times\) 32. The start position is (14, 0), and the goal is (14, 14). In the experiments, all of these task environments are designed with sparse rewards, making the tasks more challenging and suitable for evaluating the exploratory capabilities of the algorithms. The adoption of sparse rewards ensures that the agents must actively explore and discover effective strategies to achieve their goals. This setup provides a rigorous evaluation of the algorithms' performance in handling tasks with limited feedback and encourages the development of efficient exploration techniques. In all the experiments, the actor network for both the low-level and the high-level policies is a Multi-Layer Perceptron (MLP) with two hidden layers of dimension 256. The critic network structure is the same as that of the actor network. The subgoal representation function is an MLP with one hidden layer of dimension 100. The activation function of the MLPs is the ReLU function. All the experiments are carried out on an NVIDIA RTX 2080 Ti GPU and optimized using the Adam optimizer. In all the experiments, the radius \(r_{g}\) of subgoal selection is set to 20. The discount factor \(\gamma\) is set to 0.99. The subgoal representation function is updated every 100 episodes. The batch size for both level policies is set to 128.

### _Comparative Analysis_

We conduct a comparative analysis of our proposed algorithm with several existing hierarchical reinforcement learning methods, including HICM, HSR, HESS, LESSON, and SAC.
(i) HICM utilizes curiosity as an intrinsic reward to guide the agent's exploration in the environment and learn useful skills. (ii) HESS introduces a potential measure for subgoals and proposes an active exploration strategy with stable subgoal representation learning. (iii) HSR introduces a count-based exploration method that utilizes implicit state visit counting to guide the exploration process. (iv) LESSON utilizes a triplet loss to learn subgoal representations online and achieves effective hierarchical exploration by selecting slow features as the subgoal space representation. (v) SAC is a reinforcement learning algorithm based on the actor-critic mechanism. In the experiments, both the high-level policy and the low-level policy of the hierarchical framework are trained by the SAC algorithm. The experimental results depicted in Figure 4 demonstrate that our proposed exploration strategy LESP outperforms all the baseline methods in handling hard-exploration tasks. The superiority of LESP can be attributed to its comprehensive consideration of both novelty and prospect in the exploration strategy. The calculation of prospect involves planning feasible trajectories based on reachability between landmarks, which takes into account the influence of the task goal on exploration. Unlike the potential-based exploration approach in HESS, which only focuses on expanding the exploration area, LESP effectively guides the agent towards regions that have a positive impact on reaching the task goal. Additionally, we find that in more complex tasks such as Ant FourRooms and Ant Maze (W-shape), which place greater demands on the exploration capability of the agent, the advantage of LESP in terms of sample utilization efficiency becomes more pronounced. It can be observed that HICM and HSR fall short compared to our proposed method.
HICM's performance depends on the learning error of the dynamics model; the instability of the intrinsic rewards restricts the learning of the high-level policy. HSR calculates state visit counts using the \(L_{1}\) norm of the successor representation, but relying solely on the successor representation proves inadequate in promoting effective exploration. The experimental results demonstrate that SAC exhibits the poorest performance when compared to the goal-conditioned hierarchical framework LESSON, which incorporates online learning of subgoal representations. This finding serves as strong evidence supporting the effectiveness of the goal-conditioned hierarchical framework.

### _Qualitative Analysis on Measures for Subgoals_

To assess the impact of subgoal measures on subgoal selection, we visualize the Prospect and Novelty measures in the AntMaze task, as shown in Figure 5. The visualization reveals that subgoals with high Novelty are distributed throughout the candidate subgoal set, indicating that the agent can explore in various directions to expand the explored area. Figure 5 provides insightful observations regarding the Prospect measure. It indicates that subgoals guiding the agent closer to the goal exhibit higher Prospect. In contrast, subgoals that do not contribute significantly to task completion have lower Prospect. This suggests that the Prospect measure effectively captures the potential impact of subgoals on the agent's ability to reach the ultimate goal.

Fig. 3: The MuJoCo environments we used for the hierarchical exploration experiments. (a) Point Maze. (b) Ant Maze. (c) Ant Maze (W-shape). (d) Ant Push. (e) Ant FourRooms. In our experiments, all the task environments are designed with sparse rewards. This setup adds to the challenge of the task, as the agents must explore and discover effective strategies to achieve the desired goal despite the scarcity of rewards.
As depicted in Figure 5, after considering both the novelty and prospect measures, the exploration strategy tends to prioritize subgoals with high Prospect. Meanwhile, the exploration strategy also maintains the ability to explore unknown regions, thereby striking a balance between exploiting promising subgoals and continuing to explore uncharted areas. After training for 300,000 time steps, the agent has made progress and can reach positions closer to the target. Furthermore, as the buffer expands, the counting-based novelty measure becomes more accurate. Subgoals with high novelty tend to concentrate in directions where exploration has been insufficient.

### _Ablation Studies_

**Ablative analysis of various components**: In order to investigate the importance of different components in LESP, we conduct an ablation study on the proposed exploration strategy. We evaluate the performance of the algorithm under various experimental settings: (i) the original exploration strategy (LESP) proposed in our work; (ii) LESP without the stable value function, i.e., employing Equation 2 as the loss function to update the parameters of the value network in the low-level policy, without incorporating state-specific regularization; (iii) LESP without prospect, i.e., the weight coefficient in Equation 6 is set to 0; (iv) replacing prospect with potential, i.e., the prospect measure is replaced with the potential measure used in HESS; and (v) HESS, the state-of-the-art active exploration method. We conduct experiments in three different tasks: AntMaze (Images), AntMaze (W-shape), and AntFourRooms. The experimental results are depicted in Figure 6. As observed in Figure 6, replacing the prospect measure with potential leads to a significant decline in the performance of the exploration strategy. This highlights the superiority of the prospect measure over potential.
While the potential measure encourages the agent to expand the exploration area, prospect effectively guides the agent towards regions that positively contribute to accomplishing the task. In the experimental setting without prospect, the algorithm performs poorly across multiple complex tasks. This emphasizes the crucial role of the prospect measure in the exploration process. LESP demonstrates better performance compared to the setting without a stable value function; in particular, in the AntMaze (W-shape) task, LESP discovers effective trajectories at an earlier stage. Furthermore, even after replacing prospect with potential, the performance of the exploration strategy remains superior to that of HESS. This further highlights the importance of stable learning of the low-level state-action value function in promoting the stability of hierarchical policies.

**Ablation studies on the hyper-parameter selection**: To evaluate the effectiveness of the hyper-parameters, including the number of landmark samples (\(n_{\text{cov}}\)), the balance coefficient (\(\alpha\)), and the low-level policy length (\(c\)), we conduct an ablation study. All tests are performed in the challenging Ant FourRooms task, and the experimental results are presented in Figure 7.

Fig. 4: Learning curves of LESP and baselines on all environments. (a) Point Maze. (b) Ant Maze. (c) Ant Maze (W-shape). (d) Ant Push. (e) Ant FourRooms. (f) Ant Maze (Images). The x-axis represents the training time steps, while the y-axis represents the average success rate over 50 episodes. The experiments are evaluated for each algorithm using five different random seeds. The shaded area represents the \(95\%\) confidence interval.

**Balance coefficient \(\alpha\)**: In the proposed exploration strategy, the balance coefficient \(\alpha\) is used to balance the importance of the novelty and prospect measures.
A larger \(\alpha\) value indicates that the exploration strategy prioritizes subgoals that have a more positive impact on guiding the agent towards the goal, while potentially diminishing the ability to explore new states. In complex scenarios, choosing an appropriate balance coefficient is crucial. From Figure 7 (a), it can be observed that larger values of \(\alpha\) within a reasonable range tend to yield better results. For all the experiments in Section 5.2, we set \(\alpha\) to 0.1.

**Number of landmark samples \(n_{cov}\)**: LESP utilizes the FPS algorithm to sample \(n_{cov}\) candidate landmarks from the initial set of landmarks. It then selects the nearest landmark on the planned trajectory as \(l_{sel}\) (the selected landmark). The number of sampled landmarks, \(n_{cov}\), influences the efficiency of trajectory planning. If \(n_{cov}\) is too small, it may be impossible to plan a feasible path to the goal. On the other hand, if \(n_{cov}\) is too large, the selected \(l_{sel}\) may be too close to the current state \(s_{t}\), which can hinder the effectiveness of prospect in guiding the exploration process. Selecting an appropriate \(n_{cov}\) is therefore crucial for choosing subgoals that provide meaningful guidance during exploration. In the experiments conducted in Section 5.2, for the Ant FourRooms task, we set \(n_{cov}\) to 60, indicating a larger number of candidate landmarks to ensure comprehensive coverage of the environment. For the Ant Maze (W-shape) task, \(n_{cov}\) is set to 40. For the other tasks, \(n_{cov}\) is set to 20.

**Low-level policy length \(c\)**: Hierarchical reinforcement learning (HRL) decomposes long-horizon tasks into sub-tasks with a finite horizon, denoted as \(c\). Selecting an appropriate value for \(c\) allows for a proper allocation of task difficulty between the high-level policy and the low-level policy. In active exploration approaches, the selected sub-goals are fed to the low-level policy with a finite horizon. Therefore, an inappropriate value for \(c\) can leave the selected sub-goals unable to effectively guide exploration, leading to the failure of the active exploration strategy. The experimental results are shown in Figure 7 (c). For all the experiments in Section 5.2, we set \(c\) to 20.

Fig. 5: The visualization of subgoal measures in the AntMaze task. (a) Visualization at 200,000 time steps. (b) Visualization at 300,000 time steps. The circular markers represent the candidate subgoal set sampled by the agent. The color intensity of the markers, ranging from red to blue, indicates the corresponding measure values.

Fig. 6: Ablation studies on the components of the proposed exploration strategy. Three task scenarios are examined: (a) Ant Maze (Images). (b) Ant Maze (W-shape). (c) Ant FourRooms. For each of these tasks, the experiments are evaluated using five different random seeds.

## VI Conclusion

Goal-conditioned hierarchical reinforcement learning (HRL) is a paradigm that effectively addresses complex long-horizon problems. However, improving sample efficiency is a crucial and yet unresolved issue in reinforcement learning. To tackle challenging sparse-reward tasks, we proposed an active exploration strategy, LESP, which takes into account prospect and novelty measures for subgoals. We designed a prospect measure for subgoals in LESP. LESP generated promising trajectories by planning landmarks in the goal space and then computed prospect measures for subgoals based on the selected landmark. Unlike HIGL, which guided exploration by constraining the high-level policy, LESP sampled subgoals in the vicinity of the current state to guide exploration. This active exploration approach avoided introducing additional non-stationarity to the high-level policy.
In addition, to mitigate the impact of dynamic changes in the low-level policy on the high-level state transitions, we incorporated state-specific regularization into the training of the low-level policy. Experimental results demonstrated that LESP outperformed the state-of-the-art approach HESS. Additionally, one may point out that LESP requires an extra buffer to store samples of the prior state space, as well as a goal-conditioned value function that reflects the reachability between two states. Our view is that, since an expert policy is not required (random walks in the environment suffice) and the goal-conditioned value function only focuses on neighboring states, the additional computational cost brought by LESP is acceptable. For complex, long-horizon tasks such as robot navigation and robotic arm control, LESP is of significant importance for efficient interaction between agents and the environment.
2303.18169
Dynamical fluctuations of random walks in higher-order networks
Although higher-order interactions are known to affect the typical state of dynamical processes giving rise to new collective behavior, how they drive the emergence of rare events and fluctuations is still an open problem. We investigate how fluctuations of a dynamical quantity of a random walk exploring a higher-order network arise over time. In the quenched case, where the hypergraph structure is fixed, through large deviation theory we show that the appearance of rare events is hampered in nodes with many higher-order interactions, and promoted elsewhere. Dynamical fluctuations are further boosted in an annealed scenario, where both the diffusion process and higher-order interactions evolve in time. Here, extreme fluctuations generated by optimal higher-order configurations can be predicted in the limit of a saddle-point approximation. Our study lays the groundwork for a wide and general theory of fluctuations and rare events in higher-order networks.
Leonardo Di Gaetano, Giorgio Carugno, Federico Battiston, Francesco Coghi
2023-03-31T16:03:37Z
http://arxiv.org/abs/2303.18169v2
# Dynamical fluctuations in a minimal model of higher-order networks

###### Abstract

Although higher-order interactions are known to affect the typical state of dynamical processes giving rise to new collective behavior, how they drive the emergence of rare events and fluctuations is still an open problem. We investigate how fluctuations of a dynamical quantity of a random walk exploring a higher-order network arise over time. By focusing on a minimal model, we show that higher-order interactions always hamper the appearance of rare events, although the same structure facilitates visits of certain nodes, an event considered atypical on the corresponding system with only pairwise interactions. If the structure of interactions is not fixed but is optimally selected to favour a particular fluctuation, a phase transition emerges where a random walk is typically both homogeneously spread over the network and localised on a portion of it. Our study lays the groundwork for a wider and more general theory of fluctuations and rare events in higher-order networks.

The appearance of fluctuations in dynamical processes is pivotal in determining the future evolution of many real-world systems [1]. The emergence of rare events may be bolstered or hindered by the hosting complex environment, which can often be conveniently modeled as a complex network [2; 3; 4]. Large fluctuations in complex networks have been studied across a variety of processes, including percolation [5; 6; 7; 8], spreading [9; 10], and transport [11; 12; 13; 14]. A stream of research has focused on random walks as a versatile model of diffusion in discrete spaces [15; 16; 17; 18; 19] and on their rare-event properties [20; 21; 22]. Large deviation theory has revealed that low-degree nodes are more susceptible than hubs to the appearance of atypical loads, possibly leading to dynamical phase transitions [23; 24; 25; 26].
Despite their success, graphs can only provide a constrained description of real-world systems, as links are inherently limited to model pairwise interactions only [27; 28]. Yet, from social [29; 30; 31; 32] to biological [33; 34; 35; 36] networks, in a wide variety of real-world systems interactions may occur among three or more units at a time. In recent years, hypergraphs [37] have emerged as a versatile tool to model systems with such higher-order interactions. Interestingly, taking into account higher-order interactions has been shown to lead to new collective phenomena in a variety of dynamical processes [38], including diffusion [39; 40], contagion [41; 42; 43], synchronization [44; 45; 46; 47; 48] and evolutionary games [49; 50; 51]. While such studies have focused on characterising dynamical behavior at the typical state, understanding fluctuations and rare-event statistics driven by the presence of higher-order interactions is to this day still an open problem. To this end, in this work we propose a study of fluctuations and rare events on higher-order networks using large-deviation theory tools. We focus on random walks on higher-order networks and on a particular time-additive observable that monitors the time the random walker spends in a certain region (_core_) of the hypergraph. Our study reveals how fluctuations arise in time for a random walk on a fixed hypergraph structure (_quenched_ case), and which higher-order structure is optimal to achieve them (_annealed_ case). We show that in the quenched case fluctuations of the core occupation time are always hampered by the higher-order structure, while the same higher-order interactions make it easier to visit nodes that would not typically be visited on the corresponding system with only pairwise interactions.
In the annealed case, when the structure of interactions is not _a-priori_ fixed, a phase transition arises between a typically homogeneously-spread random walk and a random walk more localised on subportions of the hypergraph.

_Model_ We introduce a hypergraph \(G=(V,E)\), where \(V\) represents the set of nodes, and \(E=\{E_{1},E_{2},\ldots,E_{M}\}\) the set of hyperedges, i.e., \(E_{l}\) is an unordered collection of nodes belonging to the same hyperedge \(l\). We consider in particular an illustrative structure consisting of a _core_ node, labelled \(0\), connected with _peripheral_ nodes, labelled by \(i\in\{1,\ldots,N-1\}\), through a varying number of higher-order connections. As shown in Fig. 1, the graph is composed of \(|V|=N\) nodes, a fully connected pairwise structure, i.e., \(\binom{N}{2}\) binary edges \(E_{i(2N-i-1)/2+j}=\{i,j\}\) for \((i,j)\in[0,N-1]^{2}\) and \(i<j\), and a binomially distributed random number \(H\), with parameter \(p\in[0,1]\), of three-body interactions \(E_{N(N-1)/2+i}=\{0,i,j\}\) where \(i\) is an odd node and \(j-i=1\), i.e., all triangular interactions are centered in \(0\). Intuitively, the higher the number of higher-order interactions, the better connected the core node is with the periphery of the hypergraph. For simplicity, in the following we constrain the higher-order structure so that each peripheral node can participate in at most one three-body interaction. As we will show, for this symmetric model, non-pairwise interactions affect the statistics of the core occupation time only through their total number \(\eta\). In particular, the probability of drawing a hypergraph with a number of three-body interactions \(H=\eta\) is given by \[\mathbb{P}(\eta)\coloneqq\mathbb{P}(H=\eta)=\binom{N_{\triangle}}{\eta}p^{ \eta}(1-p)^{N_{\triangle}-\eta}\quad. \tag{1}\] This minimal model is not special with regard to our findings below; it only simplifies calculations.
We have checked this numerically with hypergraphs that allow for overlapping three-body interactions and, since the results do not change qualitatively, we have decided not to include them. In summary, for the model we consider here, \(G\) comes as an instance of an ensemble of hypergraphs whose higher-order structure is fully described by two parameters only, namely, \(N\) and \(p\). We consider on \(G\) a discrete-time random walk \(X=\{X_{l}\}_{l=1}^{n}\), where \(X_{l}\) is the node where the random walk sits at time \(l\)[40]. The random walk is characterized by an unbiased dynamics encoded in the transition matrix \(\Pi=\{\pi_{ij}\}\) whose entries are \[\pi_{ij}=\frac{k_{ij}^{H}}{\sum_{l=1}^{N}k_{il}^{H}}\, \tag{2}\] where \(k_{ij}^{H}\) is the hyperedge weight between \(i\) and \(j\), i.e., the number of nodes, excluding \(i\), that are present in the hyperedges common to \(i\) and \(j\). As the random walk explores the graph, it collects information in the form of the time-additive observable \[T_{n}=\frac{1}{n}\sum_{l=1}^{n}\delta_{X_{l},0}\, \tag{3}\] which measures the fraction of time the random walk has spent on the core node \(0\) up to time \(n\). In the long-time limit, for a number \(H=\eta\) of higher-order interactions, the typical fraction of time the walker spends in \(0\) can be calculated [40] and reads \[T_{\eta,\text{typ}}=\frac{4\eta+N-1}{8\eta+(N-1)^{2}}. \tag{4}\] The higher the number of triangular interactions, the better connected the core is with the periphery of the graph, and the longer the time the random walk will spend on \(0\). We consider dynamical fluctuations in two different physical scenarios. First, we study the mean behavior of rare events of \(T_{n}\) over the ensemble of hypergraphs (quenched case). Then, at the expense of an entropic cost associated with the logarithm of \(\mathbb{P}(\eta)\) in (1), we let the random walk choose the optimal hypergraph that generates a particular atypical fluctuation of \(T_{n}\) (annealed case).
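The minimal model and the walk of Eq. (2) are straightforward to reproduce numerically. The sketch below is our own construction: we adopt the convention \(k^{H}_{ij}=\sum_{e\ni i,j}(|e|-1)\), one common choice for hypergraph random walks, so the resulting core occupation may differ from Eq. (4) in its normalization details.

```python
import numpy as np

def transition_matrix(N, eta):
    """Pi for the minimal model: complete pairwise backbone on N nodes plus
    eta triangles {0, 2m+1, 2m+2} centered on the core node 0 (requires
    2 * eta <= N - 1). Convention: k^H_ij = sum over common hyperedges e
    of (|e| - 1), so each pair edge contributes 1 and each triangle 2."""
    k = np.ones((N, N)) - np.eye(N)            # pairwise part: k^H_ij = 1
    for m in range(eta):
        i, j = 2 * m + 1, 2 * m + 2
        for a, b in [(0, i), (0, j), (i, j)]:  # triangle adds |e| - 1 = 2
            k[a, b] += 2.0
            k[b, a] += 2.0
    return k / k.sum(axis=1, keepdims=True)    # row-normalize: Eq. (2)

def core_occupation(N, eta, iters=2000):
    """Typical fraction of time at node 0, i.e. the stationary probability
    of the core, estimated by iterating mu <- mu Pi from uniform."""
    P = transition_matrix(N, eta)
    mu = np.full(N, 1.0 / N)
    for _ in range(iters):
        mu = mu @ P
    return float(mu[0])
```

Under this convention the core occupation equals \(1/N\) at \(\eta=0\) (pure complete graph) and grows monotonically with the number of triangles, in line with the discussion above.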
_Quenched fluctuations_ Here, we study fluctuations of \(T_{n}\), which play a crucial role in the finite-time evolution of real-world systems, using large deviation theory [52; 53; 54]. This is possible because the leading scaling behavior of the probability distribution \(\mathbb{P}_{\eta,n}(t)\coloneqq\mathbb{P}_{\eta,n}(T_{n}=t)\) is exponential in time, i.e., \[\mathbb{P}_{\eta,n}(t)=e^{-nI_{\eta}(t)+o(n)}\, \tag{5}\] where \(I_{\eta}(t)\) is the non-negative large-deviation rate function and \(o(n)\) denotes sub-linear corrections in \(n\). Further notice the subscript \(\eta\), which refers to the fact that, at this stage, we are studying fluctuations of \(T_{n}\) on a fixed graph with \(\eta\) higher-order interactions. To understand rare events, we need to calculate \(I_{\eta}\) in (5). However, evaluating \(I_{\eta}\) directly is often non-trivial, hence we resort to a change of ensemble in order to get more meaningful information on fluctuations. To this end, we introduce the Scaled Cumulant Generating Function (SCGF) \[\Psi_{\eta}(s)=\lim_{n\to\infty}\frac{1}{n}\ln G_{\eta,n}(s)=\lim_{n\to\infty}\frac{1}{n}\ln\mathbb{E}\left[e^{nsT_{n}}\right]\, \tag{6}\] which characterizes the leading exponential behavior of the moment generating function \(G_{\eta,n}(s)\) associated with \(T_{n}\). For finite and connected hypergraphs, \(\Psi_{\eta}(s)\) is analytic, and one can calculate \(I_{\eta}(t)\) with the procedure described in the Gärtner-Ellis theorem [52; 53; 54; 55], which makes use of the Legendre-Fenchel transform \[I_{\eta}(t)=\sup_{s\in\mathbb{R}}\left(st-\Psi_{\eta}(s)\right)\, \tag{7}\] which links the Laplace parameter \(s\) with a fluctuation \(T_{n}=t\) via \[t=\Psi_{\eta}^{\prime}(s). \tag{8}\] Because the random walk \(X\) is ergodic, the SCGF can be obtained spectrally as \[\Psi_{\eta}(s)=\ln\zeta_{s}\, \tag{9}\] where \(\zeta_{s}\), computed numerically, is the dominant eigenvalue of the so-called tilted matrix \[\Pi_{s}=\{(\pi_{s})_{ij}\}=\left\{\pi_{ij}e^{s\delta_{0,j}}\right\}. \tag{10}\] To account for average properties of the ensemble of hypergraphs considered, one can take a quenched average over the disorder (here characterized by the number \(\eta\) of higher-order interactions) of the function \(\Psi_{\eta}\). Recalling that \(H\) is a binomially distributed random variable with parameter \(p\) and that the maximum number of higher-order interactions is \(N_{\triangle}=\text{ceil}\left[(N-2)/2\right]\), the quenched average can explicitly be written as \[\Psi_{\text{q}}(s)=\sum_{\eta=0}^{N_{\triangle}}\mathbb{P}(\eta)\Psi_{\eta}(s)\, \tag{11}\] where q stands for quenched. [56] Given \(\Psi_{\text{q}}(s)\) in (11), the quenched rate function \(I_{\text{q}}(t)\) can be obtained via a Legendre-Fenchel transform of \(\Psi_{\text{q}}\) (rather than \(\Psi_{\eta}\)) in (7).

Figure 1: Sketch of a realisation of the hypergraph \(G\). Dashed lines representing binary interactions define the fully-connected binary backbone of the hypergraph. In pink, two higher-order interactions connect the core node \(0\) with the peripheral nodes \(1\), \(2\), \(3\), and \(4\). The stochastic dynamics is represented by arrows departing from certain nodes and pointing onto others (different thicknesses refer to different jump probabilities and depend on whether nodes share only pairwise or also higher-order interactions).

To understand the role of higher-order interactions, we first look at whether fluctuations of the same relative magnitude are more or less likely to appear on higher-order networks generated with different values of \(p\).
To understand this, we re-scale \(t\) in \(I_{\rm q}(t)\) with the typical fraction of time spent in \(0\) by the random walk at a fixed parameter \(p\), namely \(T_{\rm typ}\), obtained by averaging (4) over \(\mathbb{P}(\eta)\). In Fig. 2(a) we plot the rate functions \(I_{\rm q}(\bar{t}=t/T_{\rm typ})\) for different values of \(p\) and compare them with the rate function for a graph with no higher-order interactions (the \(p=0\) case). Because of the re-scaling, all rate functions are \(0\) at the typical value \(\bar{t}=1\). The likelihood is encoded in the shape of the rate function branches: the higher (lower) the branch, the exponentially less (more) likely a fluctuation \(\bar{t}\neq 1\) is to appear. We note that for \(p>0\), higher-order interactions always hinder fluctuations away from the typical value, making it harder for the random walk to visit a core-localized or periphery-delocalized phase in the fluctuations. This is because with increasing \(p\) the average number of higher-order interactions pointing to node \(0\) grows, generating a 'containment' effect on the dynamics. As a direct consequence, escaping from node \(0\) is made harder and dynamical fluctuations are (exponentially) suppressed. In Fig. 2(b), we also look at how, for a fluctuation \(t\) of \(T_{n}\) of the random walk on the graph with only pairwise interactions, the large deviation rate function \(I_{\rm q}(t)\) changes with increasing \(p\). On the one hand, what is typical for the case \(p=0\) becomes more and more atypical with increasing \(p\). On the other hand, rare values of \(T_{n}\) greater than the typical one for the case \(p=0\) can become typical just by increasing the number of higher-order interactions. Conversely, rare values of \(T_{n}\) smaller than the typical one become even more atypical when introducing higher-order interactions.
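The spectral route to \(\Psi_{\eta}(s)\) in Eqs. (9)-(10) can be sketched numerically: tilt the transition matrix and extract its dominant eigenvalue by power iteration. The model builder below repeats our own assumed \(k^{H}\) convention (each pairwise edge contributes 1, each triangle 2), so exact values are convention-dependent while the qualitative behavior matches the text.

```python
import numpy as np

def tilted_scgf(N, eta, s, iters=3000):
    """Psi_eta(s) = ln zeta_s, Eqs. (9)-(10): zeta_s is the dominant
    eigenvalue of the tilted matrix (Pi_s)_ij = pi_ij * exp(s * delta_{0,j}),
    obtained here by power iteration. Pi is built for the minimal model:
    complete pairwise backbone plus eta triangles through the core node 0."""
    k = np.ones((N, N)) - np.eye(N)
    for m in range(eta):
        i, j = 2 * m + 1, 2 * m + 2
        for a, b in [(0, i), (0, j), (i, j)]:
            k[a, b] += 2.0
            k[b, a] += 2.0
    Pi = k / k.sum(axis=1, keepdims=True)
    tilt = np.exp(s * (np.arange(N) == 0))     # e^{s delta_{0,j}} per column
    Pi_s = Pi * tilt[None, :]
    v = np.full(N, 1.0 / N)
    zeta = 1.0
    for _ in range(iters):                     # Perron-Frobenius eigenvalue
        w = Pi_s @ v
        zeta = np.linalg.norm(w)
        v = w / zeta
    return float(np.log(zeta))
```

At \(s=0\) the matrix is stochastic, so \(\Psi_{\eta}(0)=\ln 1=0\); since \(T_{n}\in[0,1]\) and the walker cannot sit on node \(0\) forever, \(0<\Psi_{\eta}(s)<s\) for \(s>0\), and \(\Psi_{\eta}(s)\) grows with \(\eta\), reflecting the 'containment' effect discussed above.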
In other words, by introducing higher-order interactions on a fully-pairwise network we make it easier for the random walk to spend more (less) time on the core (peripheral) node(s).

_Annealed fluctuations_ We have hitherto considered the higher-order interaction structure of the hypergraph and the dynamics as two distinct objects and have studied how the dynamics itself is influenced by a fixed underlying structure. However, in a variety of real-world systems, structure and dynamics are characterised by a tight interplay [57]. In such a scenario, fluctuations arising for a dynamical observable, such as \(T_{n}\), will be sustained by an optimal (i.e., the most likely in probability terms) underlying structure. Motivated by this, in the following we make use of our model to investigate how and which higher-order interactions can be considered optimal for realising a fluctuation of our dynamical observable. In practice, we calculate _annealed_ averages of the moment generating function \(G_{\eta,n}\) over the disorder as follows: \[G_{n}(s)=\sum_{\eta=0}^{N_{\Delta}}\mathbb{P}(\eta)G_{\eta,n}(s)\, \tag{12}\] where we remind the reader that fixing \(s\) corresponds to fixing \(t\) (on average) according to (8). By considering both the long-time and large-graph (and hence large \(N_{\triangle}\)) asymptotics, and introducing the re-scaled parameter \(h=\eta/N_{\triangle}\in[0,1]\), we approximate \(G_{n}(s)\) in (12) with its saddle-point solution for \(h\) as \[G_{n}(s)\approx e^{n\left(\ell^{-1}\log\mathbb{P}(h^{*})+\Psi_{\eta^{*}}(s) \right)}\, \tag{13}\] where we call \(\ell=n/N_{\triangle}\) the annealing parameter and \(h^{*}\) is the saddle-point solution, i.e., the most probable fraction of higher-order interactions that generates the fluctuation \(\Psi^{\prime}_{\eta^{*}}(s)=t\).
We only consider the leading exponential behaviour in \(n\) of (13), i.e., \[\hat{\Psi}_{\ell}(s):=\ell^{-1}\log\mathbb{P}(h^{*})+\Psi_{\eta^{*}}(s)\, \tag{14}\] whose Legendre-Fenchel transform \(\hat{I}_{\ell}(t)\) is obtained by replacing \(\Psi_{\eta}\) with \(\hat{\Psi}_{\ell}\) in (7). The annealing parameter \(\ell\) plays a key role. On the one hand, in the limit \(\ell\to 0\) the hypergraph size blows up and, because the disorder is self-averaging, all probability concentrates around the typical number of higher-order interactions, hence recovering the quenched average (11) for a fixed \(p\). [Notice however that it is only in the limit that the quenched average is recovered, because the disorder is self-averaging. Indeed, for small but finite \(\ell\), spurious contributions of \(O(\ell^{-1})\) appear in the exponent of (13) and shift \(\hat{I}_{\ell}\) upwards.] On the other hand, for \(\ell\to\infty\) the hypergraph size, although large, does not blow up and therefore disorder and dynamics 'interact' at the saddle-point solution of (13), selecting the most likely structure that realises the fluctuation associated with \(s\). We refer to \(\Psi_{a}(s)\coloneqq\hat{\Psi}_{\ell\to\infty}(s)\) as the _annealed_ SCGF and to \(I_{a}(t)\) as the annealed rate function, which characterises the leading exponential behaviour of the probability distribution of \(T_{n}\) in such a scenario. Notice that intermediate values of \(\ell\) do not necessarily define a good SCGF [58]. In particular, for \(\ell\) finite and small, one has \(N>n\) and therefore the ergodicity assumption necessary to derive \(\Psi_{\eta^{*}}(s)\) fails. It is only for \(n>N\) and \(n\) large that we get a good approximation--exact in the limit \(n\to\infty\)--for the distribution of \(T_{n}\). In Fig. 3(a) we plot \(\hat{I}_{\ell}\) for several values of \(\ell\). 
As expected, for small \(\ell\) we retrieve the quenched rate function \(I_{\rm q}\) (for the same parameter \(p=0.5\) used here), which is realised by the typical number of higher-order interactions \(\eta^{*}=h^{*}N_{\triangle}\sim\text{ceil}\left[N_{\triangle}/2\right]\) throughout all fluctuations (see Fig. 3(b)). As we increase \(\ell\), the
Figure 2: (a) Rate functions \(I_{\rm q}(\bar{t})\) for different probabilities \(p\) of having higher-order interactions in the hypergraph. The higher the \(p\), the narrower the rate functions for \(|\bar{t}|>1\). (b) Plot representing how the rate function \(I_{\rm q}(t)\) behaves as a function of \(t\) and \(p\) (for visualization purposes \(\sqrt{I_{\rm q}}\) is plotted). The light-blue line represents the typical value \(T_{\rm typ}\), which increases linearly with \(p\). Plots obtained for a hypergraph with \(N=1000\) nodes.
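The Legendre-Fenchel transform used throughout to pass from SCGFs to rate functions can also be evaluated numerically. Below is a minimal sketch, assuming the standard convention \(I(t)=\sup_{s}\left[st-\Psi(s)\right]\); the function names are illustrative, not from the paper:

```python
import numpy as np

def legendre_fenchel(s_grid, psi_vals, t_grid):
    """Numerically evaluate I(t) = sup_s [s*t - Psi(s)] on a grid of s values."""
    # Outer product gives s*t for every (t, s) pair; subtract Psi(s) and
    # maximize over the s axis to approximate the supremum.
    vals = np.outer(t_grid, s_grid) - psi_vals[None, :]
    return vals.max(axis=1)

# Toy check: for the Gaussian SCGF Psi(s) = s^2/2 the transform is I(t) = t^2/2.
s = np.linspace(-5.0, 5.0, 2001)
t = np.array([0.0, 1.0, 2.0])
I = legendre_fenchel(s, s**2 / 2.0, t)
```

The grid-based supremum is accurate as long as the maximizing \(s\) lies well inside the sampled interval.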
2309.15416
The Design and Implementation of an Extensible System Meta-Programming Language
System programming languages are typically compiled in a linear pipeline process, which is completely opaque and isolated from end-users. This limits the possibilities of performing meta-programming in the same language and environment, and the extensibility of the compiler itself by end-users. We propose a novel redefinition of the compilation process in terms of interpreting the program definition as a script. This evaluation is performed in an environment where the full compilation pipeline is implemented and exposed to the user via a meta-object protocol, which forms the basis for a meta-circular definition and implementation of the programming language itself. We demonstrate the feasibility of this approach by bootstrapping a self-compiling implementation of Sysmel, a statically and dynamically typed Smalltalk- and C++-inspired programming language.
Ronie Salgado
2023-09-27T05:46:41Z
http://arxiv.org/abs/2309.15416v1
# The Design and Implementation of an Extensible System Meta-Programming Language ###### Abstract. System programming languages are typically compiled in a linear pipeline process, which is completely opaque and isolated from end-users. This limits the possibilities of performing meta-programming in the same language and environment, and the extensibility of the compiler itself by end-users. We propose a novel redefinition of the compilation process in terms of interpreting the program definition as a script. This evaluation is performed in an environment where the full compilation pipeline is implemented and exposed to the user via a meta-object protocol, which forms the basis for a meta-circular definition and implementation of the programming language itself. We demonstrate the feasibility of this approach by bootstrapping a self-compiling implementation of Sysmel, a statically and dynamically typed Smalltalk- and C++-inspired programming language. _CCS Concepts:_ **Software and its engineering \(\rightarrow\) Compilers; Interpreters; Translator writing systems and compiler generators; Dynamic compilers; Semantics; Syntax; Extensible languages.** _Keywords:_ metacircular language, meta programming, meta object protocol, extensible compiler ## 1. Introduction _System vs Non-System Language._ An important dichotomy in the classification of programming languages is whether a programming language is meant for low-level, close-to-the-machine _System_ programming or not. System programming languages such as C and C++ tend to have semantics with a direct translation towards unoptimized machine operations. These semantics allow programmers using these languages to maintain a mental model of the underlying machine operations. This cognitive mental model facilitates learning and debugging activities (Bang et al., 2016). 
It allows system programmers to have direct control of the machine, which facilitates writing high-performance code by avoiding unneeded abstraction layers such as a bytecode interpreter, JIT, or garbage collector, which introduce latency and non-determinism in execution times. On the other hand, non-system programming languages such as Java, C# and Python are languages that do not have a direct correspondence with machine operations. These non-system programming languages facilitate the software development activity by providing abstractions such as automatic memory management, faster iteration cycles via interpretation, dynamic and duck typing, _etc._ The presence of these abstractions increases the runtime cost of program execution, and they also sacrifice the capability of having this close-to-the-metal mental model. However, these abstractions are desirable because they improve software development productivity, and they are used when execution performance can be sacrificed. _Language Impedance Mismatch._ In multiple application domains, the simultaneous usage of a system and a non-system programming language is required. A high-performance critical core is written in the low-level system programming language. The UI and non-performance-critical sections are commonly written in higher-level languages which are typically used for scripting purposes. This also facilitates the extensibility of an application by people without programming expertise, like an end user of a scriptable application such as a spreadsheet editor (_e.g.,_ VBA in Microsoft Excel (Krizhevsky et al., 2014)). The usage of at least two completely different programming languages is a common practice in the videogame industry. Commercial game programming is an activity where high performance and productivity are important (Krizhevsky et al., 2014), and striving for a balance between them is a necessity. 
Game engines such as Unreal Engine (Han et al., 2015) and Unity (Han et al., 2015) typically have a high-performance core written in C++, and a high-level language like C# or Blueprint used for scripting and game design. Using multiple languages facilitates productivity in terms of reducing game testing and design iteration cycles by programming and non-programming people. However, the connection between two completely different languages such as C++ and Blueprint, the visual scripting language used by Unreal, requires the maintenance or generation of wrapper code. These wrappers are typically maintained by hand or generated by a specialized offline tool that imposes restrictions on the programming language features that can be used. There are some general-purpose tools like SWIG (Bang et al., 2016) for solving this problem, but their usage might be precluded by project-specific constraints. _Fixed Compilation Pipeline._ The fixed compilation pipeline of C and C++ does not provide extension points in the language compiler itself. Access to the compiler data structures in a scriptable way might be an ideal mechanism for generating custom application-specific reflection metadata required for supporting garbage collection and automatic scripting language connections. Extensible compilation also facilitates metaprogramming, and the construction of DSLs embedded directly in the host language (Dong et al., 2019). _Unified Programming Language and Environment._ We propose the design and construction of a programming language that can be used simultaneously in both contexts. We propose using this language as a script that defines how to build a program, whose execution constructs a _Program Entity Graph._ Different subgraphs can be obtained by tracing a subset of the program entities from a user-specified root set. In the case of system programming, this root set is typically composed of only the main entry point. 
For a fully reflective environment, where the language is not used for system programming, the root set is composed of the main entry point and the global namespace object. By changing the set of program entities traced, we can compile down or up different features of the programming language, which facilitates adapting its usage for system and non-system programming. In Section 2 we describe the design of Sysmel, a Smalltalk, Lisp and C++ inspired System Metaprogramming Language. In Section 3 we describe our bootstrapping process along with the challenges faced by it. ## 2. Sysmel language ### Design _Influences._ In this section we describe the design and implementation of Sysmel, a System Metaprogramming Language, with a design inspired mostly by Smalltalk, Lisp and C/C++. With the objective of unifying system and non-system programming, with an extensible compiler and metaprogramming facilities, we take strong inspiration from these three important historical languages: 1) from Lisp, we take the important concepts of macros as functions from AST to AST (Krishnan et al., 2017), meta-circular evaluation (Krishnan et al., 2017), and the Meta Object Protocol used for defining an Object Oriented Programming environment (Han et al., 2017); 2) from Smalltalk, we take object-oriented programming via message passing, blocks as closures, the importance of a minimalistic syntax, and reflection as a mechanism for meta-circular definition; 3) and from C/C++, we take primitive types, pointers and direct memory access. We introduce the concept of using static primitive types as a mechanism for defining translational semantics. These type-defined semantics facilitate a direct-to-machine-operation mental model. _Design Wishlist._ From these influences we distill the following feature wishlist that we want to support in a single programming language, even if there are conflicts between them: 1. A minimalistic, convenient and flexible syntax. 2. 
Everything should _look_ like an object at the syntax level. 3. Everything must be typed. Dynamic typing is realized via static typing. 4. Type inference support. The minimum is supporting local type inference like the C++ _auto_ keyword. Stronger type inference algorithms like Hindley-Milner (Han et al., 2017) are desirable, but not required. 5. Block closures. 6. Arbitrary compile-time evaluation support. 7. Lisp-style macros, which are functions from AST to AST. 8. Primitive types which are directly supported by the target machine. 9. Pointers and direct memory accesses. 10. Manual memory management support. 11. Optional automatic memory management via garbage collection support. 12. Compile-time and optional runtime reflection. 13. The ability to extend and modify all of the compilation pipeline stages. _Syntax in a postcard._ The Sysmel syntax is strongly based on the syntax of Smalltalk, but there are additions and changes taken from C/C++ to facilitate supporting different programming styles. Sysmel syntax is minimalistic, and it is based around the concepts of message sending, function application, and the construction of commonly used data structures. Everything is an expression and returns a value with a specific type. See Listing 1 for a postcard that illustrates the whole Sysmel base syntax. Higher-level syntactical constructs are realized by composing these base syntax elements, and by using metaprogramming techniques that manipulate the AST during compilation time. ### Semantic Analysis Meta-Object Protocol _AST Protocol._ The scanning and parsing stages can be done by using traditional approaches such as manually written recursive descent parsing, or by using more extensible approaches like parser combinators. Parsing produces nodes which conform to the Meta Object Protocol. The top-level nodes respond to the _#analyzeAndEvaluateWithEnvironment_: message. 
This single message is used for simultaneous analysis and evaluation of the top-level parsed source code. The environment received by it is used for looking up identifier values. Lambda nodes are evaluated into closure objects, which are composed of two parts: a capture vector, and a function definition object which contains analyzed argument and body nodes. The analysis of the elements of a function definition is performed by sending the _#analyzeWithEnvironment_: message onto the argument, result type and body nodes. This message is responsible for returning a newly analyzed node where its value type is solved, and its children nodes are also analyzed recursively. Once a function definition is analyzed, it can be evaluated via two different mechanisms: 1) direct interpretation of the analyzed node, supported through the _#evaluateWithEnvironment_: message; 2) compilation into a bytecode or an IR that can be further optimized and compiled into machine code. We support these two mechanisms. In summary, the AST node semantic analysis MOP is composed of the following messages: _#analyzeAndEvaluateWithEnvironment_:; _#analyzeWithEnvironment_:; _#evaluateWithEnvironment_:; _#compileBytecodesDirectlyWith_: and _#generateSSAValueWith_:. **Extensible AST.** AST nodes are defined as ordinary class instances inside of Sysmel. New AST nodes can be defined by just subclassing from ASTNode and then overriding the required methods. New AST nodes can be exposed in the language through macros. In fact, the local variable definition AST node is not present in the Sysmel language syntax, but we expose it through two different mechanisms: 1) macros like _#let:with:_ and _#let:type:with_; and 2) the _let_ metabuilder (See Section 2.3). **Function Application Analysis.** Function applications are analyzed in two phases: first as an unexpanded function application, where the functional object can be a macro. The macro is invoked with the non-analyzed parameter nodes as arguments. 
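The macro-expansion step just described can be sketched in Python. This is an illustrative analogy only: the node classes, the `unless` macro, and the `expand` function are invented names, not Sysmel's actual API.

```python
from dataclasses import dataclass, field

# Minimal AST: literal values and message sends (illustrative names).
@dataclass
class Literal:
    value: object

@dataclass
class MessageSend:
    receiver: object
    selector: str
    args: list = field(default_factory=list)

# A macro is simply a function from AST nodes to an AST node. It receives the
# *unanalyzed* argument nodes and returns a new node for further analysis.
def unless_macro(condition, body):
    # unless(cond, body)  ~>  cond ifFalse: body
    return MessageSend(condition, "ifFalse:", [body])

MACROS = {"unless": unless_macro}

def expand(node):
    """Expand unexpanded applications whose functional object is a macro."""
    if isinstance(node, MessageSend) and node.selector in MACROS:
        # The macro result is itself expanded (analyzed) recursively.
        return expand(MACROS[node.selector](node.receiver, *node.args))
    return node

tree = MessageSend(Literal(False), "unless", [Literal(42)])
expanded = expand(tree)
```

The key property mirrored here is that the macro never sees evaluated values, only syntax, and its result re-enters the analysis loop.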
The node returned by the macro is analyzed recursively. In the case of expanded function applications, the analysis of the application node is delegated onto the functional object type. This allows treating any object or value as a functional object. In the case of ordinary functions, evaluation is performed by constructing an activation environment with the evaluated arguments. In other cases, the _#applyWithArguments_: message is sent to the functional object. One important optimization is performed whenever possible. We define functionally _pure functions_ in terms of _observable external side effects_, so we allow programmers to perform internal definitions through impure imperative mechanisms. For this reason any function can be marked as a _pure_ function. A pure function application that only uses literal nodes is _always evaluated at compile time_, and the application node is replaced by a literal node with the evaluation result. This application of referential transparency for pure functions is mandatory, and we use it for constructing derived type literals. **Message Send Analysis.** In the case of message sends, there are also multiple analysis phases. First, the receiver node is analyzed, and the actual message send analysis is delegated onto the receiver node type. The receiver type first analyzes the message selector. If the analyzed selector is a literal, then the corresponding macro or method is looked up through multiple dictionaries. If the found method is a macro, then it is expanded by receiving the receiver and argument nodes as parameters. If the method is not a macro, then there are two cases: if the method does not require dynamic dispatch (_i.e.,_ it cannot be overridden by a subclass), then the message send node is converted into a function application node which is analyzed recursively. If dynamic dispatch is required, then the remaining analysis of the message send is delegated onto the method type. 
If no method is found, a semantic error is raised unless the receiver has a dynamic type, in which case a generic analysis is performed for the arguments of the dynamic message send. _Type System Protocol._ The type system is another important side of the MOP. Types are also objects, and they are in fact instances of the class _Type_. The analysis of some AST nodes is delegated onto specific types. This facilitates defining type-specific translational semantics, binding message sends to methods statically, and defining type-specific macros. We also use the type system for constructing pointer and reference types. C++-style references are used for constructing mutable variables. A reference type is constructed by responding to almost no message, except for _#:=_, used for assignments, and _address_, used for converting a reference into a pointer. Pointers can be converted into references through the _ message. With these messages we support the semantics of the C pointer operators (&, *). The type system MOP is much larger than the MOP used for AST nodes. The following is a non-exhaustive list of some messages that are part of the MOP exposed in Type: _#methodDictionary_, _#lookupSelector_, _#analyzeAndEvaluateMessageSendNode:forReceiver:withEnvironment_:, _#analyzeAndTypeCheckFunctionApplicationNode:withEnvironment_:, _#analyzeAndTypeCheckSolvedMessageSendNode:withEnvironment_:, _#ref_, _#pointer_. ### Metabuilders The _Builder_ pattern in Smalltalk is a common pattern for the construction of objects through successive message sends in a chain. We extend this pattern onto the meta-level by defining the concept of a metabuilder. A metabuilder is a builder that operates on syntactic language elements, and metabuilders can be seen as stateful macros. Metabuilders are instantiated by invoking a macro function or macro identifier known as the metabuilder factory. 
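As a rough Python analogy of the metabuilder idea (all names here are invented for illustration; real Sysmel metabuilders intercept message-send *analysis*, not plain method calls): a stateful object accumulates a definition through a chain of sends and only then constructs the final program entity.

```python
# Illustrative analogy of a stateful builder accumulating a class definition
# through chained "message sends", mirroring Smalltalk-style cascades.
class ClassMetabuilder:
    def __init__(self, name):
        self.name = name
        self.state = {"superclass": None, "fields": []}

    # Each intercepted send mutates the builder state and returns the builder,
    # so sends can be chained like messages in a cascade.
    def superclass(self, parent):
        self.state["superclass"] = parent
        return self

    def field(self, field_name, field_type):
        self.state["fields"].append((field_name, field_type))
        return self

    def build(self):
        # In Sysmel the final program entity would be constructed here.
        return {"name": self.name, **self.state}

entity = (ClassMetabuilder("SampleClass")
          .superclass("Object")
          .field("first", "Int32")
          .build())
```

The contrast with a stateless macro is that the intermediate sends do not each expand into syntax; they accumulate state that is consumed once, at the end of the chain.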
Metabuilders are ordinary objects where their classes override the _#analyzeAndEvaluateMessageSendNode:forReceiver:withEnvironment_: and _#analyzeMessageSendNode:withEnvironment_: methods by delegating them onto the metabuilder instance. This delegation is always possible for the simultaneous analysis and evaluation case, and it is only possible if the metabuilder instance is present on a literal node for the AST analysis case. We use metabuilders for making higher-level syntactic constructs which look familiar to C++ and Java programmers. We also use them for hiding the actual manipulation of the underlying program entities, which are also constructed through ordinary message sends. See Listing 2 for an example of how code can look like a different language when using metabuilders, even though the base syntax from Listing 1 is still the same. ```
public class SampleClass superclass: Object; definition: {
    public field first => Int32.
    public method add: (x: Int32) ::=> Int32
        := first + x.
}.

function sampleFunction(x: Int32, y: Int32) => Int32
    := SampleClass new first: x; add: y.

printLine(sampleFunction(2132, 3132)).
``` Listing 2: Metabuilder Usage ### Optimization and code generation pipeline _High-Level IR._ The optimization and code generation pipeline, unlike the semantic analysis, is a much more traditional process. We perform code generation and optimization through successive translations from the analyzed AST into different intermediate representations (IR). We first translate the AST into a high-level SSA-based IR with a design inspired by LLVM (Levy, 2017), where we model our base language operation semantics like function applications, message sends, local variable allocation (alloc), and pointer and object slot loads and stores. At this level we represent primitive type intrinsics as function calls. In our current implementation we perform some optimizations like constant propagation, control flow simplification and inlining. We are planning on having many more optimizations at this level. 
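A toy version of the constant-propagation pass mentioned above can be written over a minimal three-address-style IR. The instruction encoding below is invented for illustration and is not Sysmel's actual IR:

```python
# Toy constant propagation over an SSA-like list of (op, dst, a, b) tuples.
def constant_propagate(instructions):
    """Fold ('add', dst, a, b) into a constant when both operands are known."""
    env = {}   # SSA name -> known constant value
    out = []
    for op, dst, a, b in instructions:
        if op == "const":
            env[dst] = a
            out.append((op, dst, a, b))
        elif op == "add" and a in env and b in env:
            # Both operands are constants: fold the addition at compile time.
            env[dst] = env[a] + env[b]
            out.append(("const", dst, env[dst], None))
        else:
            out.append((op, dst, a, b))
    return out

ir = [
    ("const", "%1", 2, None),
    ("const", "%2", 3, None),
    ("add",   "%3", "%1", "%2"),   # foldable: both operands are constants
    ("add",   "%4", "%3", "%x"),   # not foldable: %x is unknown
]
folded = constant_propagate(ir)
```

Because the IR is in SSA form, a single forward pass suffices: each name is defined exactly once, so `env` never needs invalidation.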
_Middle-Level IR._ The high-level SSA form is translated into a mostly portable middle-level three-address-code IR which is also in SSA form. The instructions of this IR are tuples that contain a single machine primitive operation and its operands. We use this IR for performing lower-level optimizations like combining comparisons with branches, and register allocation. We also perform the stack frame layout computation, and some phases of debug information generation, during this stage before generating the next stage, which is assembly. _Low-Level IR._ Our low-level IR is assembly code. We use this IR for final machine code generation, and also for generating object files with included debug information. A subset of the program object graph is serialized into the data segment of the resulting object file, and references between objects are annotated with the required relocations. We are capable of generating platform-specific relocatable ELF, COFF and Mach-O object files. Since we do not implement a linker, we rely on the standard linker provided by each operating system for constructing actual standalone executable programs. ## 3. Bootstrapping _Minimal C Implementation._ Due to the circularity of the language definition, performing a proper bootstrap is a tricky and complicated process. We have attempted multiple times to construct and bootstrap this system. In our current implementation, we perform a phase-0 compilation by constructing a minimal implementation in C. This minimal implementation takes care of parsing and base language semantic analysis, and it uses the LISP2 compacting garbage collection algorithm (Bordes and Komodel, 2015). To reduce bootstrapping development iteration cycles we implemented a register-based bytecode, and a simplistic x86_64 JIT. The bootstrap environment uses a logical object model where raw native pointers are not exposed. 
In this object model we have three kinds of objects: immediates (encoded as tagged pointers), byte tuples, and pointer tuples. All of the objects can be seen as a tuple that contains a type (another tuple), an identity hash value, and the size of the tuple itself in bytes. With this simplistic object model we construct a complete Smalltalk-style image environment in Sysmel. The base objects and types are defined by hand in C, and the intrinsic operations are exposed as functions which have the name of the primitive annotated. We also implemented a minimal bootstrap parser and semantic analyzer in C. _Metastability Problems._ When writing the actual semantic analyzer of Sysmel in Sysmel we had to be extra careful. Each time a new MOP method is implemented, its new definition is immediately used for subsequent semantic analysis. Running into metastability issues when performing these kinds of definitions is a big problem. We solved these problems by introducing the new definitions in a specific order, and by annotating the new methods as functions that require eager semantic analysis before they are installed as the new semantically analyzed and executable methods. _Self-Feeding AST and Program Graph._ The traditional compiler bootstrapping process is performed by feeding the source code representation to subsequent versions of the compiler. In our case, we are compiling from the already analyzed in-memory AST which is stored as part of the program entity graph. The required program entities are traced from the global namespace object, and the _main_ entry point function. The analyzed AST is used for constructing the SSA-based IR and subsequent lower-level IRs before generating machine code. The program entity object graph is serialized onto the data section of the object files, and references are annotated with relocations. ## 4. 
Limitations _Frontend not validated by bootstrap._ One important limitation of our bootstrapping approach is that it only validates the quality of our middle-end and backend. The frontend of our compiler is not being validated by the bootstrapping process. Unlike a traditional bootstrap that starts clean from source code, we start from already analyzed in-memory AST nodes. We completely skip the source code parsing and semantic analysis stages during bootstrapping after the first phase. For this reason the frontend implementation is not being validated by the bootstrap. _Memory usage._ Our bootstrapping process serializes a copy of the fully semantically analyzed AST and metaobjects that compose the program entity definitions. This incurs a larger memory usage, since a lot of compilation-only metadata is kept by the process. However, this same metadata might be used to construct development and debugging tools. ## 5. Related Work _Bootstrapping Reflective Systems._ Polito et al. describe the complexities and the metastability issues that happen when bootstrapping a highly reflective system like Pharo (2015); Pharo et al. (2016). _Embedding Languages in Pharo._ Helvetia by Renggli _et al._ (Renggli et al., 2017) is a framework for embedding languages inside of Pharo. We take inspiration from Helvetia for multiple elements, such as quasi-quoting by preceding the standard Common Lisp operators with an extra backquote. Our monadic parser framework is based on PetitParser by Kurs _et al._ (Kurs et al., 2017), another component used by Helvetia. _Bee Smalltalk._ Bee is a Smalltalk implementation which is also completely defined in itself. Bee is also capable of constructing native executables through a similar reflection-based serialization process. Instead of relying on supporting C-style primitive types for constructing the base runtime, they use different mechanisms which they call "undermethods, underprimitives and inline nativization of bytecodes." 
(Renggli et al., 2017) ## 6. Conclusions and future work We described the central concepts behind the metacircular definition of Sysmel, a programming language designed for system and non-system programming. We also proved the feasibility of constructing this system by bootstrapping a self-compiling version of Sysmel capable of compiling and optimizing itself through three full self-compilation cycles. In the future, we would like to continue improving our compilation and optimization infrastructure. We would like to perform benchmarks with a much more optimized version of Sysmel on realistic applications, to further validate the language. Along these lines, it would also be desirable to validate the usage of Sysmel with actual users.
2309.12379
Wormhole solution free of ghosts in Einstein's gravity with two scalar fields
In this paper, we construct models that admit traversable wormhole geometries in the framework of Einstein's gravity with two scalar fields. As is well known, the energy conditions are broken, and we show that there appears a ghost. The ghost can, however, be eliminated by imposing a constraint on the ghost field, which is a scalar. The constraint is similar to the mimetic one proposed by Chamseddine and Mukhanov to construct an alternative description of cold dark matter. We explicitly show that there does not appear any unstable mode although the energy conditions are broken. Therefore we obtain a model that realizes a traversable and stable wormhole.
Shin'ichi Nojiri, G. G. L. Nashed
2023-09-21T12:40:35Z
http://arxiv.org/abs/2309.12379v2
# Wormhole solution free of ghosts in Einstein's gravity with two scalar fields ###### Abstract In this paper, we construct models that admit traversable wormhole geometries in the framework of Einstein's gravity with two scalar fields. As is well known, the energy conditions are broken, and we show that there appears a ghost. The ghost can, however, be eliminated by imposing a constraint on the ghost field, which is a scalar. The constraint is similar to the mimetic one proposed by Chamseddine and Mukhanov to construct an alternative description of cold dark matter. We explicitly show that there does not appear any unstable mode although the energy conditions are broken. Therefore we obtain a model that realizes a traversable and stable wormhole. Introduction Recently, observations involving the relativistic collision of two compact objects have resulted in the production of gravitational waves (GWs). These waves serve as invaluable instruments for examining the properties of the colliding entities. Moreover, the recent findings reported by LIGO [1; 2; 3; 4; 5] have provided compelling evidence that the field of gravitational wave astronomy will play a substantial role in advancing our understanding of gravitational interactions and extreme-gravity astrophysical phenomena. However, despite these notable advancements, recent observations have not ventured into the intricacies of spacetime beyond the photon sphere. Forthcoming gravitational wave (GW) observations are expected to be in a position to record the ringdown phase. This phase is recognized by a series of damped oscillatory modes in the initial stages, commonly known as quasinormal modes (QNMs) [6; 7; 8; 9; 10; 11; 12]. This stage has the capacity to provide vital information about the composition of compact objects [13], particularly elucidating the physics in the vicinity of the event horizons of black holes (BHs) and the potential presence of unexpected structural characteristics. 
Future gravitational wave (GW) observations hold the promise of providing us with a deeper understanding of compact objects that differ from black holes (BHs). These distinct compact entities, which lack event horizons, are commonly known as exotic compact objects (ECOs) [9; 11; 14]. Among the notable exotic compact object (ECO) solutions, wormholes (WHs) stand out. They are solutions to the Einstein equations that enable connections between different regions of the Universe or even between entirely separate Universes [15; 16]. Although they have distinct causal structures when compared to black holes (BHs), wormholes (WHs) can possess photon spheres. As a result, in gravitational wave (GW) data, the early stage of the ringdown signal has the potential to mask the ability to distinguish between wormholes (WHs) and black holes (BHs). Previous research has delved into the examination of Lorentzian wormholes within the framework of General Relativity (GR), as documented in prior studies [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27]. In these investigations, the establishment of conditions for traversable wormholes was accomplished by introducing a static spherically symmetric metric. Significantly, these conditions necessitate the inclusion of exotic or phantom matter, which violates the null energy condition (NEC). Wormholes that incorporate ordinary matter adhering to the null energy condition (NEC)[28; 29; 27] have been introduced and explored within the framework of modified gravity theories. These theories encompass Brans-Dicke theory[30; 31; 32; 33; 34], \(f(R)\) gravity [35; 36], Einstein-Gauss-Bonnet theory [37; 38], Einstein-Cartan theory, and general scalar-tensor theories [39; 40; 41; 42]. It's worth highlighting that wormhole (WH) solutions within the context of \(f(R)\) theories were extensively investigated in the work presented in [43]. 
Furthermore, wormholes (WHs) with a self-interacting scalar field were the focus of scrutiny in the study presented in [44]. Another pivotal element concerning a WH involves the breach of energy conditions within the context of GR, at least in the vicinity of the WH's throat [45; 46]. This implies that it is necessary to possess a quantity of exotic matter (where the stress-energy tensor (SET) of matter contravenes the null energy condition (NEC)) in order to maintain the stability of the WH throat. As a result, in certain reference frames, the energy density of matter can be perceived as negative. It is evident that there has been significant interest in constructing WHs while minimizing the requirement for exotic matter [47]. In reference [47], the authors have theoretically demonstrated that it is possible to minimize the exotic matter requirement to an infinitesimal level and confine it precisely at the throat of the WH by carefully selecting the WH's geometry. The mathematical procedure is referred to as the "cut and paste technique", and the resulting WH is termed a "thin-shell WH". Research on thin-shell WHs can be found in references [48; 49; 50; 51]. Nandi et al. [52] subsequently introduced an enhanced quantification method to precisely determine the amount of exotic matter present in a specific spacetime. Consequently, numerous arguments have been presented to substantiate the breach of energy conditions. In this context, the use of the phantom energy equation of state (EoS) has been employed to maintain traversable WHs, as demonstrated in references [53; 54; 55; 56]. An interpretation of half-wormholes in the bulk with a gauge field was also investigated in [57]. One intriguing aspect to contemplate is that within the phantom regime, the energy density grows over time, offering a conceptual basis for the existence of WHs. The stability analysis of a phantom WH geometry has been explored in reference [58].
An array of WH solutions has been identified for the generalized Chaplygin gas [59; 60], changing cosmological constants [61], polytropic phantom energy [62], and ghost scalar fields [63]. Subsequently, the phantom energy EoS was employed to formulate precise, evolving WH structures in reference [64]. In this theoretical framework, it was determined that phantom energy can facilitate the existence of evolving WHs. Nonetheless, physicists consistently strive to evade energy condition violations or provide appropriate justifications for them. However, up to the present time, constructing a static WH geometry that complies with the energy conditions remains an unattained goal. Consequently, scientists are exploring various strategies to address this challenge. This observation prompted consideration of the potential existence of WH solutions within alternative theories of gravity. Examples include higher-order gravity theories [65; 66], cosmological WHs in higher dimensions [67], and the Einstein-Gauss-Bonnet theory [68; 69; 70; 37]. When examining \(f(R)\) gravity, it is conceivable to theoretically create traversable WHs without the necessity of exotic matter [71; 72] or by utilizing dark matter as a source [73]. Alternatively, researchers can explore the quest for WHs within the framework of third-order Lovelock gravity [38; 74], hybrid metric-Palatini gravity [75; 76], \(f(Q)\) gravity [77; 78; 79], and extended theories of gravity [80; 81]. The investigation of traversable WHs within the realm of \(f(R,T)\) gravity was conducted in references [82; 83; 84]. Simultaneously, in references [85; 86], authors identified precise WH solutions in \(f(R,T)\) gravity without the need for exotic matter. The aim of the present study is to derive WH solutions in the framework of Einstein's gravity coupled with two scalar fields.
In [87], a general formulation was given, in the framework of Einstein's gravity coupled with two scalar fields, for constructing a model that admits an arbitrarily given spherically symmetric and time-dependent geometry as a solution. We apply this formulation to the static wormhole geometry. As expected, the model includes a ghost mode. The ghost mode often plays the role of the phantom and is consistent with the breakdown of the energy conditions. The ghost mode has, however, negative kinetic energy classically and generates negative-norm states in the quantum theory. Therefore the existence of the ghost mode indicates that the model is physically inconsistent. In this paper, we eliminate such ghosts using the mimetic constraint and make the ghost mode non-dynamical. We show that no unstable mode corresponding to the ghost mode appears, although the energy conditions are still broken. This suggests that the breakdown of the energy conditions does not always imply physical inconsistency. The organization of this paper is as follows: In the next section, based on the formulation in [87], we construct a model whose solutions include a well-known wormhole geometry. We show that a ghost appears in the model. In Section III, we show that one of the two scalar fields can be canonical and not a ghost. Because the other is a ghost in general, we propose a model that makes this scalar field non-dynamical by imposing the mimetic constraint, and as a result, the unstable modes corresponding to the existence of the ghost disappear. In Section IV, we confirm the absence of the ghost mode by studying perturbations around the wormhole geometry, although the energy conditions are broken. The breakdown of the energy conditions may not always imply any physical inconsistency. The last section is devoted to the summary and discussion.
## II Wormhole based on Einstein's theory with two scalar fields

Einstein's GR with two scalar fields \(\phi\) and \(\chi\) is described by the action as follows [87] \[S_{\text{GR}\phi\chi}=\int d^{4}x\sqrt{-g}\left[\frac{R}{2\kappa^{2}}-\frac{1}{2}\,A(\phi,\chi)\partial_{\mu}\phi\partial^{\mu}\phi-B(\phi,\chi)\,\partial_{\mu}\phi\partial^{\mu}\chi\right.\] \[\left.-\frac{1}{2}\,C(\phi,\chi)\partial_{\mu}\chi\partial^{\mu}\chi-V(\phi,\chi)\right]\,. \tag{1}\] In this context, \(g\) represents the determinant of the metric tensor \(g_{\mu\nu}\), \(R\) denotes the Ricci scalar, and \(V(\phi,\chi)\) represents the potential of the scalar doublet. The values of the coefficients \(A\), \(B\), and \(C\) are contingent upon the properties of the scalars. Upon varying the action (1) with respect to the metric \(g_{\mu\nu}\), we derive the ensuing Einstein equation: \[\frac{1}{\kappa^{2}}\left(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\right)= A(\phi,\chi)\partial_{\mu}\phi\partial_{\nu}\phi+B(\phi,\chi)\left(\partial_{\mu}\phi\partial_{\nu}\chi+\partial_{\nu}\phi\partial_{\mu}\chi\right)+C(\phi,\chi)\partial_{\mu}\chi\partial_{\nu}\chi\] \[-g_{\mu\nu}\left[\frac{1}{2}\,A(\phi,\chi)\partial_{\rho}\phi\partial^{\rho}\phi+B(\phi,\chi)\partial_{\rho}\phi\partial^{\rho}\chi+\frac{1}{2}\,C(\phi,\chi)\partial_{\rho}\chi\partial^{\rho}\chi+V(\phi,\chi)\right]\,. \tag{2}\] Through the variation of action (1) concerning the scalar fields \(\phi\) and \(\chi\), we acquire the following expressions: \[0 =\frac{A_{\phi}}{2}\,\partial_{\mu}\phi\partial^{\mu}\phi+A\nabla^{\mu}\partial_{\mu}\phi+A_{\chi}\partial_{\mu}\phi\partial^{\mu}\chi+\left(B_{\chi}-\frac{1}{2}\,C_{\phi}\right)\partial_{\mu}\chi\partial^{\mu}\chi+B\nabla^{\mu}\partial_{\mu}\chi-V_{\phi}\,, \tag{3}\] \[0 =\left(-\frac{1}{2}\,A_{\chi}+B_{\phi}\right)\partial_{\mu}\phi\partial^{\mu}\phi+B\nabla^{\mu}\partial_{\mu}\phi+\frac{1}{2}\,C_{\chi}\partial_{\mu}\chi\partial^{\mu}\chi+C\nabla^{\mu}\partial_{\mu}\chi+C_{\phi}
\partial_{\mu}\phi\partial^{\mu}\chi-V_{\chi}\,. \tag{4}\] Here, let us define \(A_{\phi}\) as \(\partial A(\phi,\chi)/\partial\phi\), and similarly for other derivatives. It is worth noting that Eqs.(3) and (4) can be derived from the Bianchi identity in conjunction with Eq.(2). In the following, we identify \[\phi=t\,,\quad\chi=r\,. \tag{5}\] As explained in reference [87], making the assumption (5) does not result in any loss of generality. In the case of a spacetime with a general spherically symmetric yet time-dependent solution, the scalar fields \(\phi\) and \(\chi\) exhibit dependencies on both the time coordinate, denoted as \(t\), and the radial coordinate, denoted as \(r\). In the context of a given solution, the specific dependencies of \(\phi\) and \(\chi\) on both the time variable \(t\) and the radial variable \(r\) are determined as functions: \(\phi=\phi(t,r)\) and \(\chi=\chi(t,r)\). We may redefine the scalar fields to replace \(t\) and \(r\) with new scalar fields, \(\tilde{\phi}\) and \(\tilde{\chi}\), \(\phi\left(\tilde{\phi},\tilde{\chi}\right)\equiv\phi\left(t=\tilde{\phi},r=\tilde{\chi}\right)\) and \(\chi\left(\tilde{\phi},\tilde{\chi}\right)\equiv\chi\left(t=\tilde{\phi},r=\tilde{\chi}\right)\). Subsequently, we can associate the new scalar fields with the time and radial coordinates in (5). The transformation of variables from \(\left(\phi,\chi\right)\) to \(\left(\tilde{\phi},\tilde{\chi}\right)\) can be integrated into the redefinitions of \(A\), \(B\), \(C\), and \(V\) within the action (1). This demonstrates that making the assumption (5) does not lead to any loss of generality. Furthermore, as we will observe later, even when \(\phi\) is identified with \(t\), a static spacetime can still be achieved. Now we consider the following spherically symmetric line-element \[ds^{2}=-\mathrm{e}^{2\Phi\left(r\right)}dt^{2}+\left(1-\frac{b\left(r\right)}{r}\right)^{-1}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\,.
\tag{6}\] Here, \(\Phi\left(r\right)\) and \(b\left(r\right)\) represent arbitrary functions of the radial coordinate, and they are referred to as the redshift and shape functions, respectively. At the wormhole throat, denoted as \(r_{0}\) and defined by the condition \(b(r_{0})=r_{0}\), the wormhole serves as a connection between two distinct Universes, and the radial coordinate range follows the inequality \(0<r_{0}\leq r\leq\infty\). To prevent the formation of an event horizon or any singularities at the wormhole throat \(r_{0}\), the redshift function should remain well-defined across all points. To ensure the traversability of the wormhole, two conditions must be met: \(b\left(r\right)-rb^{\prime}\left(r\right)>0\), and \(1-\frac{b\left(r\right)}{r}>0\) (the second condition is required everywhere except at the throat \(r=r_{0}\); here \({}^{\prime}\) represents the derivative with respect to \(r\)). Additionally, for asymptotically flat wormhole solutions, the condition \(\frac{b\left(r\right)}{r}\to 0\) as \(r\rightarrow\infty\) is imposed, as outlined in [88].
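As a quick numerical sanity check, the traversability conditions above can be verified for the exponential shape function \(b(r)=r\mathrm{e}^{-\gamma(r-r_{0})}\) adopted below in Eq. (8). The sketch uses illustrative values \(r_{0}=1\) and \(\gamma=0.5\), which are assumptions for the example and not values prescribed by the paper:

```python
import math

# Illustrative parameters (assumptions, not from the paper): throat radius and decay rate.
r0, gamma = 1.0, 0.5

def b(r):
    # Exponential shape function b(r) = r * exp(-gamma * (r - r0)), cf. Eq. (8).
    return r * math.exp(-gamma * (r - r0))

def b_prime(r):
    # Analytic derivative: b'(r) = (1 - gamma * r) * exp(-gamma * (r - r0)).
    return (1.0 - gamma * r) * math.exp(-gamma * (r - r0))

# Throat condition b(r0) = r0.
assert abs(b(r0) - r0) < 1e-12

# Flare-out and no-horizon conditions sampled over r > r0.
for i in range(1, 2000):
    r = r0 + 0.01 * i
    assert b(r) - r * b_prime(r) > 0.0   # flare-out: b - r b' > 0
    assert 1.0 - b(r) / r > 0.0          # 1 - b/r > 0 away from the throat

# Asymptotic flatness: b(r)/r -> 0 as r -> infinity.
assert b(1e3) / 1e3 < 1e-12
```

In fact, for this shape function \(b-rb^{\prime}=\gamma r^{2}\mathrm{e}^{-\gamma(r-r_{0})}\), so the conditions hold for any positive \(\gamma\) and \(r_{0}\); the numerics merely illustrate one choice.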
Applying the field equations derived from the action (1) to the line element (6), we obtain, \[\frac{b^{\prime}\left(r\right)}{r^{2}}= \frac{\left(\left(\left(r-b\left(r\right)\right)C\left(r\right)+2V\left(r\right)r\right)\mathrm{e}^{2\Phi\left(r\right)}+A\left(r\right)r\right)\kappa^{2}}{2\mathrm{e}^{2\Phi\left(r\right)}r}\,,\] \[B\left(r\right)= 0\,,\] \[\frac{2r\left(r-b\left(r\right)\right)\Phi^{\prime}\left(r\right)-b\left(r\right)}{r^{3}}= \frac{\left(\left(\left(r-b\left(r\right)\right)C\left(r\right)-2V\left(r\right)r\right)\mathrm{e}^{2\Phi\left(r\right)}+A\left(r\right)r\right)\kappa^{2}}{2\mathrm{e}^{2\Phi\left(r\right)}r}\,,\] \[\frac{1}{r^{3}}\Big{\{}2r^{2}\left(r-b\left(r\right)\right)\Phi^{\prime\prime}\left(r\right)+2\Big{(}r\left(r-b\left(r\right)\right)\Phi^{\prime}\left(r\right)-\frac{b^{\prime}\left(r\right)r}{2}+\frac{b\left(r\right)}{2}\Big{)}\left(1+\Phi^{\prime}\left(r\right)r\right)\Big{\}}\] \[=\frac{\left(\left(\left(-r+b\left(r\right)\right)C\left(r\right)-2V\left(r\right)r\right)\mathrm{e}^{2\Phi\left(r\right)}+A\left(r\right)r\right)\kappa^{2}}{\mathrm{e}^{2\Phi\left(r\right)}r}\,. \tag{7}\] Assuming the WH geometry to have the form [89; 90] \[\Phi\left(r\right)=\frac{r_{0}}{2r}\,,\quad b\left(r\right)=r\mathrm{e}^{-\gamma\left(r-r_{0}\right)}\,, \tag{8}\] with positive constants \(r_{0}\) and \(\gamma\), and substituting these expressions into Eq.
(7), we obtain \[A\left(r\right) = \frac{\mathrm{e}^{\frac{r_{0}}{r}}}{4\kappa^{2}r^{4}\mathrm{e}^{ \gamma r}}\left(-2r_{0}\mathrm{r}\mathrm{e}^{\gamma r_{0}}+4r^{2}\mathrm{e}^{ \gamma r_{0}}+2r_{0}r\mathrm{e}^{\gamma r}-2\gamma r^{3}\mathrm{e}^{\gamma r_{ 0}}-r_{0}{}^{2}\mathrm{e}^{\gamma r_{0}}-\gamma r_{0}r^{2}\mathrm{e}^{\gamma r _{0}}+r_{0}{}^{2}\mathrm{e}^{\gamma r}\right)\,,\] \[B\left(r\right) = 0\,,\] \[C\left(r\right) = -\frac{4r^{2}\mathrm{e}^{\gamma r_{0}}-6r_{0}r\mathrm{e}^{\gamma r _{0}}+6r_{0}r\mathrm{e}^{\gamma r}+2\gamma r^{3}\mathrm{e}^{\gamma r_{0}}-r_{0} {}^{2}\mathrm{e}^{\gamma r_{0}}-\gamma r_{0}r^{2}\mathrm{e}^{\gamma r_{0}}+r_{0 }{}^{2}\mathrm{e}^{\gamma r}}{4\kappa^{2}r^{4}\left(\mathrm{e}^{\gamma r}- \mathrm{e}^{\gamma r_{0}}\right)}\,,\] \[V\left(r\right) = \frac{-r_{0}\mathrm{e}^{\gamma r_{0}}+2r\mathrm{e}^{\gamma r_{0}}+ r_{0}\mathrm{e}^{\gamma r}-\gamma r^{2}\mathrm{e}^{\gamma r_{0}}}{2\kappa^{2}r^{3} \mathrm{e}^{\gamma r}}\,. \tag{9}\] We are assuming \(r\geq r_{0}\). Eq. 
(9) tells that if we consider the model, \[A\left(\chi\right) = \frac{\mathrm{e}^{\frac{r_{0}}{\chi}}}{4\kappa^{2}\chi^{4}\mathrm{e}^{\gamma\chi}}\left(-2r_{0}\chi\mathrm{e}^{\gamma r_{0}}+4\chi^{2}\mathrm{e}^{\gamma r_{0}}+2r_{0}\chi\mathrm{e}^{\gamma\chi}-2\gamma\chi^{3}\mathrm{e}^{\gamma r_{0}}-r_{0}{}^{2}\mathrm{e}^{\gamma r_{0}}-\gamma r_{0}\chi^{2}\mathrm{e}^{\gamma r_{0}}+r_{0}{}^{2}\mathrm{e}^{\gamma\chi}\right)\,,\] \[B\left(\chi\right) = 0\,,\] \[C\left(\chi\right) = -\frac{4\chi^{2}\mathrm{e}^{\gamma r_{0}}-6r_{0}\chi\mathrm{e}^{\gamma r_{0}}+6r_{0}\chi\mathrm{e}^{\gamma\chi}+2\gamma\chi^{3}\mathrm{e}^{\gamma r_{0}}-r_{0}{}^{2}\mathrm{e}^{\gamma r_{0}}-\gamma r_{0}\chi^{2}\mathrm{e}^{\gamma r_{0}}+r_{0}{}^{2}\mathrm{e}^{\gamma\chi}}{4\kappa^{2}\chi^{4}\left(\mathrm{e}^{\gamma\chi}-\mathrm{e}^{\gamma r_{0}}\right)}\,,\] \[V\left(\chi\right) = \frac{-r_{0}\mathrm{e}^{\gamma r_{0}}+2\chi\mathrm{e}^{\gamma r_{0}}+r_{0}\mathrm{e}^{\gamma\chi}-\gamma\chi^{2}\mathrm{e}^{\gamma r_{0}}}{2\kappa^{2}\chi^{3}\mathrm{e}^{\gamma\chi}}\,, \tag{10}\] the wormhole spacetime given by (6) with (8) and (5) is an exact solution of the model. We should note that both \(A\left(r\right)\) and \(C\left(r\right)\) become negative in some region of \(r\), as shown in Fig. 1 (a); for example, when \(r\to r_{0}\), we obtain, \[A\left(r\right)\rightarrow\frac{\mathrm{e}^{\frac{r_{0}}{r}}}{4\kappa^{2}{r_{0}}^{2}}\left(4-3\gamma r_{0}\right)\,,\quad C\left(r\right)\rightarrow-\frac{4+\gamma r_{0}}{4\kappa^{2}\gamma{r_{0}}^{2}\left(r-r_{0}\right)}\,. \tag{11}\] Therefore \(C\left(r\right)\) is negative and \(A\left(r\right)\) becomes negative if \(4-3\gamma r_{0}<0\). When \(A\left(r\right)\) or \(C\left(r\right)\) is negative, the scalar field \(\phi\) or \(\chi\) becomes a ghost, which generates the breakdown of the energy conditions in general. Using Eq.
(10), we obtain the energy density \(\rho\) and the radial and tangential components of the pressure, \(p_{\rm r}\) and \(p_{\rm t}\), as follows, 1 Footnote 1: We may express Eq. (2) as \(G_{\mu\nu}=\kappa^{2}T_{\mu\nu}^{\rm sc}\) by using the Einstein tensor \(G_{\mu\nu}:=R_{\mu\nu}-g_{\mu\nu}R/2\) and the energy-momentum tensor of the scalar fields \(T_{\mu\nu}^{\rm sc}\). By writing the energy-momentum tensor as \(T^{\rm sc\,\nu}_{\ \mu}={\rm diag}\left(-\rho,p_{\rm r},p_{\rm t},p_{\rm t}\right)\), we can extend the energy conditions to Einstein’s gravity with two scalar fields as follows. \[\rho = \frac{-{\rm e}^{\gamma r_{0}}+{\rm e}^{\gamma r_{0}}\gamma r+{\rm e}^{-\gamma\left(r-2r_{0}\right)}-{\rm e}^{-\gamma\left(r-2r_{0}\right)}r\gamma}{r^{2}\left({\rm e}^{\gamma r}-{\rm e}^{\gamma r_{0}}\right)}\,,\] \[p_{\rm r} = -\frac{r{\rm e}^{\gamma r_{0}}+r_{0}{\rm e}^{\gamma r}-2r_{0}{\rm e}^{\gamma r_{0}}-{\rm e}^{-\gamma\left(r-2r_{0}\right)}r+{\rm e}^{-\gamma\left(r-2r_{0}\right)}r_{0}}{r^{3}\left({\rm e}^{\gamma r}-{\rm e}^{\gamma r_{0}}\right)}\,,\] \[p_{\rm t} = \frac{\left({r_{0}}^{2}+\left(\gamma r^{2}+2r\right)r_{0}-2\gamma r^{3}\right){\rm e}^{-\gamma\left(r-2r_{0}\right)}+\left(2\gamma r^{3}-2{r_{0}}^{2}-\left(\gamma r^{2}+4r\right)r_{0}\right){\rm e}^{\gamma r_{0}}+r_{0}\left(2r+r_{0}\right){\rm e}^{\gamma r}}{4r^{4}\left({\rm e}^{\gamma r}-{\rm e}^{\gamma r_{0}}\right)}\,. \tag{12}\] Then we find the energy conditions are broken when \(r\gtrsim r_{0}\) as shown in Fig. 1 (b) and Fig. 1 (c).

Figure 1: (a) The general behaviors of the two functions \(A\) and \(C\) as \(r\to r_{0}\); (b) the behavior of the density and of the radial and tangential pressures; (c) some components of the energy conditions.

## III Eliminating Ghosts

We consider the conditions under which \(A\left(r\right)\) becomes non-negative.
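As a numerical aside before analyzing \(A(r)\): the energy-condition violation stated at the end of Sec. II can be checked directly from Eq. (12). The following sketch uses illustrative values \(r_{0}=1\), \(\gamma=1.3\) and units with \(\kappa=1\) (assumptions for the example, not values taken from the paper's figures), and confirms that the null energy condition \(\rho+p_{\rm r}\geq 0\) fails for \(r\gtrsim r_{0}\):

```python
import math

# Illustrative values (assumptions, not from the paper); units with kappa = 1.
r0, gamma = 1.0, 1.3

def rho(r):
    # Energy density of Eq. (12) as printed.
    num = (-math.exp(gamma * r0) + math.exp(gamma * r0) * gamma * r
           + math.exp(-gamma * (r - 2.0 * r0))
           - math.exp(-gamma * (r - 2.0 * r0)) * gamma * r)
    return num / (r**2 * (math.exp(gamma * r) - math.exp(gamma * r0)))

def p_r(r):
    # Radial pressure of Eq. (12) as printed.
    num = (r * math.exp(gamma * r0) + r0 * math.exp(gamma * r)
           - 2.0 * r0 * math.exp(gamma * r0)
           - math.exp(-gamma * (r - 2.0 * r0)) * r
           + math.exp(-gamma * (r - 2.0 * r0)) * r0)
    return -num / (r**3 * (math.exp(gamma * r) - math.exp(gamma * r0)))

# The null energy condition rho + p_r >= 0 fails just outside the throat and beyond.
for r in (1.01, 1.1, 1.5, 2.0, 5.0):
    assert rho(r) + p_r(r) < 0.0
```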
We define a function \(N\left(r\right)\) by \(A\left(r\right)=\frac{{\rm e}^{\frac{r_{0}}{r}}N\left(r\right)}{4\kappa^{2}r^{4}{\rm e}^{\gamma r}}\), that is \[N\left(r\right)\equiv -2r_{0}r{\rm e}^{\gamma r_{0}}+4r^{2}{\rm e}^{\gamma r_{0}}+2r_{0}r{\rm e}^{\gamma r}-2\gamma r^{3}{\rm e}^{\gamma r_{0}}-{r_{0}}^{2}{\rm e}^{\gamma r_{0}}-\gamma r_{0}r^{2}{\rm e}^{\gamma r_{0}}+{r_{0}}^{2}{\rm e}^{\gamma r}\,. \tag{13}\] Then we find \[N^{\prime\prime\prime\prime}\left(r\right)= 8\gamma^{3}r_{0}{\rm e}^{\gamma r}+2\gamma^{4}r_{0}r{\rm e}^{\gamma r}+\gamma^{4}{r_{0}}^{2}{\rm e}^{\gamma r}>0\,, \tag{14}\] and \[N(r_{0}) =\left(4-3\gamma r_{0}\right){r_{0}}^{2}{\rm e}^{\gamma r_{0}}\,,\quad N^{\prime}(r_{0})=\left(8-5\gamma r_{0}\right){r_{0}}{\rm e}^{\gamma r_{0}}\,,\quad N^{\prime\prime}(r_{0})=\left(8-10\gamma r_{0}+3\gamma^{2}{r_{0}}^{2}\right){\rm e}^{\gamma r_{0}}\,,\] \[N^{\prime\prime\prime}(r_{0}) =\gamma\left(-12+6\gamma r_{0}+3\gamma^{2}{r_{0}}^{2}\right){\rm e}^{\gamma r_{0}}\,. \tag{15}\] The conditions \(N(r_{0})\geq 0\), \(N^{\prime}(r_{0})\geq 0\), \(N^{\prime\prime}(r_{0})\geq 0\), and \(N^{\prime\prime\prime}(r_{0})\geq 0\) give \(\gamma r_{0}\leq\frac{4}{3}\approx 1.33\), \(\gamma r_{0}\leq\frac{8}{5}=1.6\), \(\gamma r_{0}\leq\frac{4}{3}\) or \(\gamma r_{0}\geq 2\), and \(\gamma r_{0}\leq-1-\sqrt{5}\) or \(\gamma r_{0}\geq-1+\sqrt{5}\approx 1.236\), respectively. Therefore if \[\frac{4}{3}\geq\gamma r_{0}\geq-1+\sqrt{5}\,, \tag{16}\] \(N\left(r\right)\), and therefore \(A\left(r\right)\), are non-negative, which tells us that \(\phi\) is not a ghost but a canonical scalar field. For the WH to be traversable, \(r_{0}\) should be large enough. Such \(r_{0}\) can be realized by choosing \(\gamma\) small enough to satisfy the condition (16). On the other hand, \(\chi\) is a ghost in general.
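The window (16) can be spot-checked numerically. The sketch below (illustrative throat radius \(r_{0}=1\) and units \(\kappa=1\), both assumptions for the example) verifies that \(N(r)\), and hence \(A(r)\), stays non-negative over a sampled range when \(\gamma r_{0}=1.3\) lies inside the window, while \(N(r_{0})<0\), and hence \(A(r_{0})<0\), when \(\gamma r_{0}=1.5>4/3\):

```python
import math

def N(r, r0, gamma):
    # N(r) of Eq. (13), i.e. the bracket of Eq. (9): A = exp(r0/r) N / (4 kappa^2 r^4 exp(gamma r)).
    e0, er = math.exp(gamma * r0), math.exp(gamma * r)
    return (-2.0 * r0 * r * e0 + 4.0 * r**2 * e0 + 2.0 * r0 * r * er
            - 2.0 * gamma * r**3 * e0 - r0**2 * e0
            - gamma * r0 * r**2 * e0 + r0**2 * er)

r0 = 1.0

# Inside the window sqrt(5) - 1 <= gamma*r0 <= 4/3: N (hence A) is non-negative.
gamma = 1.3
for i in range(0, 1000):
    r = r0 + 0.01 * i
    assert N(r, r0, gamma) >= 0.0

# Throat value of Eq. (15): N(r0) = (4 - 3 gamma r0) r0^2 exp(gamma r0).
assert abs(N(r0, r0, gamma)
           - (4.0 - 3.0 * gamma * r0) * r0**2 * math.exp(gamma * r0)) < 1e-9

# Outside the window (gamma*r0 = 1.5 > 4/3): N(r0) < 0, so A < 0 at the throat.
assert N(r0, r0, 1.5) < 0.0
```

The sampling only illustrates the analytic argument of the text: since \(N^{\prime\prime\prime\prime}>0\), non-negativity of \(N\) and its first three derivatives at the throat propagates to all \(r\geq r_{0}\).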
In order to avoid the ghost, we propose a model where the scalar field \(\chi\) becomes non-dynamical by imposing the mimetic constraint on \(\chi\) as follows2, Footnote 2: An alternative conceptualization to the concept of cold dark matter emerges through the mimetic modification of General Relativity (GR), as originally introduced by Chamseddine and Mukhanov [91]. Subsequent investigations into this theoretical framework have been conducted in a series of works [92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104]. In their paper [91], Chamseddine and Mukhanov isolated the conformal degree of freedom of Einstein-Hilbert gravity in a covariant way, and in the resulting theory, the physical metric is defined in terms of an auxiliary scalar field, which appears through its first derivatives. In this sense, the addition of the term (18) to the action (1) may be regarded as a modification of Einstein’s gravity. \[\left(1-\frac{b(\chi)}{\chi}\right)g^{\mu\nu}\partial_{\mu}\chi\partial_{\nu}\chi=1\,, \tag{17}\] whose solution is consistently \(\chi=r\). The constraint can be realized by introducing a multiplier field \(\lambda\) and adding the following term \(S_{\rm mim}\) to the action (1), \(S_{\rm GR\phi\chi}\to S_{\rm GR\phi\chi}+S_{\rm mim}\), \[S_{\rm mim}\equiv\int d^{4}x\sqrt{-g}\lambda\left\{\left(1-\frac{b(\chi)}{\chi}\right)\partial_{\rho}\chi\partial^{\rho}\chi-1\right\}\,. \tag{18}\] By the term (18), Eq.
(2) is modified as \[\frac{1}{\kappa^{2}}\left(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\right)=g _{\mu\nu} \left[-\frac{1}{2}\,A(\phi,\chi)\partial_{\rho}\phi\partial^{\rho} \phi-B(\phi,\chi)\partial_{\rho}\phi\partial^{\rho}\chi-\frac{1}{2}\,C(\phi, \chi)\partial_{\rho}\chi\partial^{\rho}\chi-V(\phi,\chi)\right]\] \[+A(\phi,\chi)\partial_{\mu}\phi\partial_{\nu}\phi+B(\phi,\chi) \left(\partial_{\mu}\phi\partial_{\nu}\chi+\partial_{\nu}\phi\partial_{\mu} \chi\right)+C(\phi,\chi)\partial_{\mu}\chi\partial_{\nu}\chi\] \[+\frac{1}{2}g_{\mu\nu}\lambda\left\{\left(1-\frac{b(\chi)}{\chi} \right)\partial_{\rho}\chi\partial^{\rho}\chi-1\right\}-\lambda\left(1-\frac{ b(\chi)}{\chi}\right)\partial_{\mu}\chi\partial_{\nu}\chi\,, \tag{19}\] but we can always consider the solution with \(\lambda=0\) and therefore the spacetime with the wormhole (6) becomes an exact solution even if we add the term in (18). We may consider more general models where \(\lambda\) does not vanish. Applying the field equations (19) to the line element (6), we obtain, \[\frac{b^{\prime}\left(r\right)}{r^{2}}= \frac{\left(\left(r\left(r-b\left(r\right)\right)C\left(r\right) +2V\left(r\right)r^{2}+b\left(r\right)\lambda\left(r\right)\left(2r-b\left(r \right)\right)\right)\mathrm{e}^{2\Phi\left(r\right)}+A\left(r\right)r^{2} \right)\kappa^{2}}{2\mathrm{e}^{2\Phi\left(r\right)}r^{2}}\,,\] \[\frac{2r\left(r-b\left(r\right)\right)\Phi^{\prime}\left(r\right) -b\left(r\right)}{r^{3}}= \frac{\left(\left(r\left(r-b\left(r\right)\right)C\left(r\right) -2V\left(r\right)r^{2}-\lambda\left(r\right)\left(2r^{2}-2rb\left(r\right)+b^{ 2}\right)\right)\mathrm{e}^{2\Phi\left(r\right)}+A\left(r\right)r^{2}\right) \kappa^{2}}{2\mathrm{e}^{2\Phi\left(r\right)}r}\,,\] \[\frac{1}{r^{3}}\left\{2r^{2}\left(r-b\left(r\right)\right)\Phi^{ \prime\prime}\left(r\right)+2\left(r\left(r-b\left(r\right)\right)\Phi^{ \prime}\left(r\right)-\frac{b^{\prime}\left(r\right)r}{2}+\frac{b\left(r\right) 
}{2}\right)\left(1+\Phi^{\prime}\left(r\right)r\right)\right\}\] \[= \frac{\left(\left(r\left(b\left(r\right)-r\right)C\left(r\right)-2V\left(r\right)r^{2}+b\left(r\right)\lambda\left(r\right)\left(2r-b\left(r\right)\right)\right)\mathrm{e}^{2\Phi\left(r\right)}+A\left(r\right)r^{2}\right)\kappa^{2}}{\mathrm{e}^{2\Phi\left(r\right)}r}\,. \tag{20}\] The solution of the above system takes the form: \[A\left(r\right)= -\frac{\mathrm{e}^{\frac{r_{0}}{r}}}{4\mathrm{e}^{\gamma r}\kappa^{2}r^{4}}\left(2\mathrm{e}^{\gamma r_{0}}\gamma r^{3}+2r_{0}\mathrm{e}^{\gamma r_{0}}r-4r^{2}\mathrm{e}^{\gamma r_{0}}-2r_{0}r\mathrm{e}^{\gamma r}+r_{0}\,\mathrm{e}^{\gamma r_{0}}\gamma r^{2}+r_{0}^{2}\mathrm{e}^{\gamma r_{0}}-r_{0}{}^{2}\mathrm{e}^{\gamma r}\right)\,,\] \[B\left(r\right)= 0\,,\] \[C\left(r\right)= \frac{\left(\mathrm{e}^{\gamma r}-\mathrm{e}^{\gamma r_{0}}\right)\lambda\left(r\right)}{\mathrm{e}^{\gamma r}}-\frac{2\mathrm{e}^{\gamma r_{0}}\gamma r^{3}-6r_{0}\mathrm{e}^{\gamma r_{0}}r+4r^{2}\mathrm{e}^{\gamma r_{0}}+6r_{0}r\mathrm{e}^{\gamma r}-r_{0}\mathrm{e}^{\gamma r_{0}}\gamma r^{2}-r_{0}^{2}\mathrm{e}^{\gamma r_{0}}+r_{0}{}^{2}\mathrm{e}^{\gamma r}}{4\kappa^{2}r^{4}\left(\mathrm{e}^{\gamma r}-\mathrm{e}^{\gamma r_{0}}\right)}\,,\] \[V\left(r\right)= -\frac{\lambda\left(r\right)}{2}-\frac{r_{0}\mathrm{e}^{\gamma r_{0}}-2\mathrm{e}^{\gamma r_{0}}r-r_{0}\mathrm{e}^{\gamma r}+\mathrm{e}^{\gamma r_{0}}r^{2}\gamma}{2\mathrm{e}^{\gamma r}\kappa^{2}r^{3}}\,, \tag{21}\] where \(\lambda\) can take any value. Of course, the solution (21) gives the same energy density and pressure components as those in Eq. (12), because the geometry is not changed. By putting \(r=\chi\) in (21), we obtain a class of models where the wormhole spacetime given by (6) with (8) and (5) is an exact solution of the model.
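As a consistency check, setting \(\lambda=0\) in Eq. (21) should reproduce the coefficient functions of Eq. (9) term by term. The numerical sketch below does this comparison with illustrative values \(r_{0}=1\), \(\gamma=1.3\), \(\kappa=1\) (assumptions for the example):

```python
import math

# Illustrative values (assumptions, not from the paper); kappa = 1.
r0, gamma, kappa = 1.0, 1.3, 1.0

def coeffs_eq9(r):
    # A(r), C(r), V(r) of Eq. (9), the solution without the mimetic term.
    e0, er = math.exp(gamma * r0), math.exp(gamma * r)
    A = math.exp(r0 / r) / (4 * kappa**2 * r**4 * er) * (
        -2 * r0 * r * e0 + 4 * r**2 * e0 + 2 * r0 * r * er
        - 2 * gamma * r**3 * e0 - r0**2 * e0 - gamma * r0 * r**2 * e0 + r0**2 * er)
    C = -(4 * r**2 * e0 - 6 * r0 * r * e0 + 6 * r0 * r * er + 2 * gamma * r**3 * e0
          - r0**2 * e0 - gamma * r0 * r**2 * e0 + r0**2 * er) / (
          4 * kappa**2 * r**4 * (er - e0))
    V = (-r0 * e0 + 2 * r * e0 + r0 * er - gamma * r**2 * e0) / (2 * kappa**2 * r**3 * er)
    return A, C, V

def coeffs_eq21(r, lam=0.0):
    # A(r), C(r), V(r) of Eq. (21), which carry the multiplier lambda.
    e0, er = math.exp(gamma * r0), math.exp(gamma * r)
    A = -math.exp(r0 / r) / (4 * er * kappa**2 * r**4) * (
        2 * e0 * gamma * r**3 + 2 * r0 * e0 * r - 4 * r**2 * e0 - 2 * r0 * r * er
        + r0 * e0 * gamma * r**2 + r0**2 * e0 - r0**2 * er)
    C = (er - e0) * lam / er - (
        2 * e0 * gamma * r**3 - 6 * r0 * e0 * r + 4 * r**2 * e0
        + 6 * r0 * r * er - r0 * e0 * gamma * r**2 - r0**2 * e0 + r0**2 * er) / (
        4 * kappa**2 * r**4 * (er - e0))
    V = -lam / 2 - (r0 * e0 - 2 * e0 * r - r0 * er + e0 * r**2 * gamma) / (
        2 * er * kappa**2 * r**3)
    return A, C, V

# With lambda = 0, Eq. (21) reduces to Eq. (9) term by term.
for r in (1.1, 1.5, 2.0, 5.0):
    for x, y in zip(coeffs_eq9(r), coeffs_eq21(r, lam=0.0)):
        assert abs(x - y) < 1e-12
```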
Because \(\lambda(\chi)=\lambda(r=\chi)\) is an arbitrary function, we may choose \(\lambda(r)\) so that \(C(\chi)=0\), that is \[\lambda\left(r\right)=\frac{2\mathrm{e}^{\gamma r_{0}}\gamma r^{3}-6r_{0} \mathrm{e}^{\gamma r_{0}}r+4r^{2}\mathrm{e}^{\gamma r_{0}}+6r_{0}r\mathrm{e}^{ \gamma r}-r_{0}\mathrm{e}^{\gamma r_{0}}\gamma r^{2}-r_{0}{}^{2}\mathrm{e}^{ \gamma r_{0}}+r_{0}{}^{2}\mathrm{e}^{\gamma r}}{4\kappa^{2}r^{4}\left(\mathrm{e }^{\gamma r}-\mathrm{e}^{\gamma r_{0}}\right)^{2}\mathrm{e}^{-\gamma r}}\,, \tag{22}\] or so that \(V(\chi)=0\), \[\lambda\left(r\right)=\frac{r_{0}\mathrm{e}^{\gamma r_{0}}-2\mathrm{e}^{\gamma r _{0}}r-r_{0}\mathrm{e}^{\gamma r}+\mathrm{e}^{\gamma r_{0}}r^{2}\gamma}{ \mathrm{e}^{\gamma r}\kappa^{2}r^{3}}\,. \tag{23}\] We should note that the ghost can be eliminated regardless of the choice (22) or (23) as confirmed in the next section and therefore the wormhole geometry in the model could be stable. ## IV Absence of ghost In order to investigate the (in)stability of the wormhole geometry, we focus on the model where \(\lambda=0\) is a solution. We now consider the perturbation around the solution (6) with (8) and (5) as follows, \[g_{\mu\nu}\to g_{\mu\nu}+h_{\mu\nu}\,,\quad\phi\rightarrow\phi+\tau\,,\quad \chi\rightarrow\chi+\xi\,,\quad\lambda\rightarrow\lambda+\zeta\,. \tag{24}\] Then Eq. 
(19) in the background where \(A=A(\chi)\), \(B=0\), \(C=C(\chi)\), \(V=V(\chi)\), and \(\lambda=0\) gives, \[\frac{1}{\kappa^{2}} \left[\frac{1}{2}\left\{\nabla_{\mu}\nabla^{\rho}h_{\nu\rho}+\nabla _{\nu}\nabla^{\rho}h_{\mu\rho}-\nabla^{2}h_{\mu\nu}-\nabla_{\mu}\nabla_{\nu} \left(g^{\rho\sigma}h_{\rho\sigma}\right)-2R^{\sigma}_{\ \nu\ \mu}h_{\sigma\rho}+R^{\rho}_{\ \mu}h_{\rho\nu}+R^{\rho}_{\ \nu}h_{\rho\mu}\right\} \tag{25}\] \[-\frac{1}{2}h_{\mu\nu}R-\frac{1}{2}g_{\mu\nu}\left\{-h_{\rho \sigma}R^{\rho\sigma}+\nabla^{\rho}\nabla^{\sigma}h_{\rho\sigma}-\nabla^{2} \left(g^{\rho\sigma}h_{\rho\sigma}\right)\right\}\right]\] \[= h_{\mu\nu}\left[-\frac{1}{2}A(\chi)\partial_{\rho}\phi\partial^ {\rho}\phi-\frac{1}{2}C(\chi)\partial_{\rho}\chi\partial^{\rho}\chi-V(\chi) \right]-g_{\mu\nu}\left[-\frac{1}{2}A(\chi)\partial^{\rho}\phi\partial^{\sigma }\phi-\frac{1}{2}C(\chi)\partial^{\rho}\chi\partial^{\sigma}\chi\right]h_{\rho\sigma}\] \[-g_{\mu\nu}A(\chi)\partial_{\rho}\phi\partial^{\rho}\tau+A(\chi )\left(\partial_{\mu}\tau\partial_{\nu}\phi+\partial_{\mu}\phi\partial_{\nu} \tau\right)-g_{\mu\nu}C(\chi)\partial_{\rho}\chi\partial^{\rho}\xi+C(\chi) \left(\partial_{\mu}\xi\partial_{\nu}\chi+\partial_{\mu}\chi\partial_{\nu}\xi\right)\] \[+\left[g_{\mu\nu}\left\{-\frac{1}{2}A^{\prime}(\chi)\partial_{ \rho}\phi\partial^{\rho}\phi-\frac{1}{2}C^{\prime}(\chi)\partial_{\rho}\chi \partial^{\rho}\chi-V^{\prime}(\chi)\right\}+A^{\prime}(\chi)\partial_{\mu} \phi\partial_{\nu}\phi+C^{\prime}(\chi)\partial_{\mu}\chi\partial_{\nu}\chi \right]\xi\] \[-\zeta\left(1-\frac{b(\chi)}{\chi}\right)\partial_{\mu}\chi \partial_{\nu}\chi\,.\] Here we also used the constraint (17). Under the perturbation (24), the constraint (17) has the following form, \[0=\left(1-\frac{b(\chi)}{\chi}\right)g^{\mu\nu}\partial_{\mu}\chi\partial_{\nu }\xi\,. 
\tag{26}\] By using the background solution (6) with (8) and (5), the constraint (26) gives, \[0=\partial_{r}\xi\,, \tag{27}\] whose solution is \(\xi=\xi(t,\theta,\phi)\) and \(\xi\) does not depend on \(r\). Therefore if we put the boundary condition that \(\xi\to 0\) when \(r\to\infty\), we find \(\xi\) identically vanishes, \[\xi=0\,. \tag{28}\] This is because \(\chi\) is not dynamical due to the mimetic constraint (17). We now choose a condition to fix the gauge as follows, \[0=\nabla^{\mu}h_{\mu\nu}\,. \tag{29}\] Then Eq. (25) with (28) has the following form, \[\frac{1}{\kappa^{2}} \left[\frac{1}{2}\left\{-\nabla^{2}h_{\mu\nu}-\nabla_{\mu}\nabla_ {\nu}\left(g^{\rho\sigma}h_{\rho\sigma}\right)-2R^{\sigma}_{\ \nu\ \mu}h_{\sigma\rho}+R^{\rho}_{\ \mu}h_{\rho\nu}+R^{\rho}_{\ \nu}h_{\rho\mu}\right\}-\frac{1}{2}h_{\mu\nu}R- \frac{1}{2}g_{\mu\nu}\left\{-h_{\rho\sigma}R^{\rho\sigma}-\nabla^{2}\left(g^{ \rho\sigma}h_{\rho\sigma}\right)\right\}\right]\] \[-g_{\mu\nu}A(\chi)\partial_{\rho}\phi\partial^{\rho}\tau+A(\chi )\left(\partial_{\mu}\tau\partial_{\nu}\phi+\partial_{\mu}\phi\partial_{\nu} \tau\right)-\zeta\left(1-\frac{b(\chi)}{\chi}\right)\partial_{\mu}\chi \partial_{\nu}\chi\,. \tag{30}\] By multiplying Eq. (30) with \(g^{\mu\nu}\) and using the mimetic constraint (17), we obtain \[\zeta= -\frac{1}{\kappa^{2}}\left[\nabla^{2}\left(g^{\mu\nu}h_{\mu\nu} \right)-\frac{1}{2}\left(g^{\mu\nu}h_{\mu\nu}\right)R+2h_{\mu\nu}R^{\mu\nu}\right]\] \[+(g^{\mu\nu}h_{\mu\nu})\left[-\frac{1}{2}A(\chi)\partial_{\rho} \phi\partial^{\rho}\phi-\frac{1}{2}C(\chi)\partial_{\rho}\chi\partial^{\rho} \chi-V(\chi)\right]-4\left[-\frac{1}{2}A(\chi)\partial^{\rho}\phi\partial^{\sigma }\phi-\frac{1}{2}C(\chi)\partial^{\rho}\chi\partial^{\sigma}\chi\right]h_{\rho\sigma}\] \[-2A(\chi)\partial_{\rho}\phi\partial^{\rho}\tau\,. \tag{31}\] This tells that \(\zeta\) is not an independently propagating mode. 
We may consider the perturbation of Eq. (3), \[0=A(\chi)\nabla^{2}\tau-A(\chi)h_{\mu\nu}\nabla^{\mu}\nabla^{\nu}\phi-\frac{1}{2}A(\chi)\nabla^{\rho}\left(g^{\mu\nu}h_{\mu\nu}\right)\partial_{\rho}\phi+A_{\chi}\partial_{\mu}\tau\partial^{\mu}\chi\,. \tag{32}\] Therefore \(\tau\) behaves as a massless mode. As long as the condition (16) is satisfied, so that \(A\) is positive, the scalar mode \(\tau\) is not a ghost but a canonical scalar. The only remaining mode is the massless spin-two mode corresponding to the standard graviton. Therefore there is no mode generating a ghost instability. This result is valid whether \(\lambda=0\) or \(\lambda\neq 0\) because the constraint (17) or (26) is always obtained regardless of the background value of \(\lambda\). In order to investigate causality, one often uses the speed of sound in a fluid. The radial and tangential speeds of sound could be defined by \[v_{r}^{2}=\frac{dp_{r}}{d\rho}=\frac{p_{r}^{\prime}}{\rho^{\prime}}\,,\quad v_{t}^{2}=\frac{dp_{t}}{d\rho}=\frac{p_{t}^{\prime}}{\rho^{\prime}}\,. \tag{33}\] As we have seen, however, nothing in this model corresponds to a sound wave. The existing waves are gravitational waves and massless scalar waves, whose propagation speeds are different from the above sound speeds. Therefore the arguments based on the sound speed are not applicable here. In other words, the dynamics of the fields cannot, in general, be approximated by the dynamics of a fluid. Even if the field(s) could be approximated by some fluid, the EoS would not be so simple. In the simple case, the pressure \(p\) depends only on the energy density \(\rho\), but in general \(p\) depends on other parameters, and therefore the expressions in (33) are not valid in general. Even in our model, because \(p_{\rm r}\neq p_{\rm t}\), the pressure depends on the direction, and therefore the field cannot be described by a perfect fluid, which has no direction dependence.
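The last point, that the source is necessarily anisotropic, is easy to confirm numerically from Eq. (12). With illustrative values \(r_{0}=1\), \(\gamma=1.3\) and units \(\kappa=1\) (assumptions for the example), the radial and tangential pressures differ at generic \(r\):

```python
import math

# Illustrative values (assumptions, not from the paper); units with kappa = 1.
r0, gamma = 1.0, 1.3

def p_r(r):
    # Radial pressure from Eq. (12).
    e0, er = math.exp(gamma * r0), math.exp(gamma * r)
    em = math.exp(-gamma * (r - 2.0 * r0))
    return -(r * e0 + r0 * er - 2.0 * r0 * e0 - em * r + em * r0) / (r**3 * (er - e0))

def p_t(r):
    # Tangential pressure from Eq. (12).
    e0, er = math.exp(gamma * r0), math.exp(gamma * r)
    em = math.exp(-gamma * (r - 2.0 * r0))
    num = ((r0**2 + (gamma * r**2 + 2.0 * r) * r0 - 2.0 * gamma * r**3) * em
           + (2.0 * gamma * r**3 - 2.0 * r0**2 - (gamma * r**2 + 4.0 * r) * r0) * e0
           + r0 * (2.0 * r + r0) * er)
    return num / (4.0 * r**4 * (er - e0))

# p_r != p_t: the pressure is direction-dependent, so no perfect-fluid description exists.
for r in (1.2, 1.5, 2.0, 3.0):
    assert abs(p_r(r) - p_t(r)) > 1e-3
```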
## V Summary and discussions

Studying stable wormholes in the context of Einstein's GR with two scalar fields is a complex and challenging endeavor. Such research involves a deep dive into theoretical physics, differential geometry, and advanced mathematical techniques. Stable wormholes have been a popular subject in science fiction, often depicted as portals to other parts of the Universe or alternate dimensions. While these portrayals are speculative, the idea of traversable wormholes has captured the imagination of both scientists and the general public. Investigating the theoretical possibility of stable wormholes can be seen as a step toward understanding the Universe's potential intricacies. In this study, we constructed a model whose exact solutions include standard traversable wormhole geometries. We have used the formulation in [87], where it has been shown how a model reproducing general spherically symmetric, and even time-dependent, solutions can be constructed. The model based on the original formulation in [87], however, includes ghosts. In order to eliminate the ghosts, we have imposed the mimetic constraint on the scalar field so that the ghost fields become non-dynamical. As a result, although the energy conditions are broken, we have obtained models without the instability due to the ghosts. This could tell us that there are stable models even if the energy conditions are broken. In this study, we succeeded in constructing a realistic, stable wormhole using Einstein's GR with two scalar fields. Can this procedure be applied in the framework of \(f(R)\) gravity with two scalar fields, or in the framework of Gauss-Bonnet theory with two scalar fields? These questions may be answered elsewhere.
2308.02525
Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions?
Self-supervised learning in computer vision aims to leverage the inherent structure and relationships within data to learn meaningful representations without explicit human annotation, enabling a holistic understanding of visual scenes. Robustness in vision machine learning ensures reliable and consistent performance, enhancing generalization, adaptability, and resistance to noise, variations, and adversarial attacks. Self-supervised paradigms, namely contrastive learning, knowledge distillation, mutual information maximization, and clustering, have been considered to have shown advances in invariant learning representations. This work investigates the robustness of learned representations of self-supervised learning approaches focusing on distribution shifts and image corruptions in computer vision. Detailed experiments have been conducted to study the robustness of self-supervised learning methods on distribution shifts and image corruptions. The empirical analysis demonstrates a clear relationship between the performance of learned representations within self-supervised paradigms and the severity of distribution shifts and corruptions. Notably, higher levels of shifts and corruptions are found to significantly diminish the robustness of the learned representations. These findings highlight the critical impact of distribution shifts and image corruptions on the performance and resilience of self-supervised learning methods, emphasizing the need for effective strategies to mitigate their adverse effects. The study strongly advocates for future research in the field of self-supervised representation learning to prioritize the key aspects of safety and robustness in order to ensure practical applicability. The source code and results are available on GitHub.
Prakash Chandra Chhipa, Johan Rodahl Holmgren, Kanjar De, Rajkumar Saini, Marcus Liwicki
2023-07-31T13:07:56Z
http://arxiv.org/abs/2308.02525v2
# Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions?

###### Abstract

Self-supervised representation learning (SSL) in computer vision aims to leverage the inherent structure and relationships within data to learn meaningful representations without explicit human annotation, enabling a holistic understanding of visual scenes. Robustness in vision machine learning ensures reliable and consistent performance, enhancing generalization, adaptability, and resistance to noise, variations, and adversarial attacks. Self-supervised representation learning paradigms, namely contrastive learning, knowledge distillation, mutual information maximization, and clustering, have been considered to have shown advances in invariant learning representations. This work investigates the robustness of learned representations of SSL approaches focusing on distribution shifts and image corruptions in computer vision. Detailed experiments have been conducted to study the robustness of SSL methods on distribution shifts and image corruptions. The empirical analysis demonstrates a clear relationship between the performance of learned representations within SSL paradigms and the severity of distribution shifts and corruptions. Notably, higher levels of shifts and corruptions are found to significantly diminish the robustness of the learned representations. These findings highlight the critical impact of distribution shifts and image corruptions on the performance and resilience of SSL methods, emphasizing the need for effective strategies to mitigate their adverse effects. The study strongly advocates for future research in the field of self-supervised representation learning to prioritize the key aspects of safety and robustness in order to ensure practical applicability. The source code and results are available on GitHub.
Footnote 1: [https://github.com/prakashchhipa/Robustness-Evaluation-of-Self-supervised-Methods-Distribution-Shifts-and-Corruptions](https://github.com/prakashchhipa/Robustness-Evaluation-of-Self-supervised-Methods-Distribution-Shifts-and-Corruptions)

## 1 Introduction

Safety and robustness are crucial in computer vision as they ensure the accurate and reliable perception of the visual world, enabling applications such as autonomous driving [31] and surveillance systems to make informed and trustworthy decisions and to reduce environmental noise [22], ultimately enhancing overall human safety and well-being. In recent years, self-supervised representation learning (SSL) methods [18] have garnered interest in computer vision applications. The current state of the art of SSL is competitive even against supervised counterparts, with invariant representation learning at its core, as stated in [17]. SSL is a well-explored representation learning approach, with many studies on its performance on large datasets such as ImageNet-2012 and also on multi-modality [41]. In addition, SSL has been well explored in combination with other learning approaches, including active learning [3], graphs [35], lifelong learning [36], and more. Recent advances in self-supervised representation learning can be broadly categorized into multiple paradigms, namely contrastive learning [9, 23], knowledge distillation [21, 7, 12], mutual information maximization [39, 2], and clustering [5]. Despite these advancements, the robustness and safety aspects of SSL paradigms have not been extensively explored, which hinders their applicability in real-world use cases. This study is one of the early attempts to highlight the above-stated research gap on a large-scale dataset [25], focusing on distribution shifts and image corruptions.
Self-supervised representation learning approaches for computer vision can be majorly categorized as (i) Joint Embedding Architecture & Method (JEAM) ([10], [20], [6], [40]), (ii) Prediction Methods ([37], [33], [16]), and, loosely, (iii) Reconstruction Methods ([30], [19]). Specifically, JEAM can be divided further, with each subdivision providing many interesting works: (i) Contrastive Methods (PIRL [32], SimCLR [10], SimCLRv2 [11], MoCo [24]), (ii) Distillation (BYOL [20], SimSiam [13]), (iii) Quantization (SwAV [6], DeepCluster [4]), and (iv) Information Maximization (Barlow Twins [40], VICReg [1]). Robustness is critical in real-life computer vision applications, as deployed models face distribution shift over time; understanding how existing models behave under distribution shift is a crucial consideration in developing newer, more robust models. Ericsson et al. [17] explore the impact of different augmentation strategies on the transferability of self-supervised representation learning models to downstream tasks. The authors show that CNNs trained contrastively do learn invariances corresponding to the augmentations used, and that specializing CNNs to particular appearance/spatial augmentations can lead to greater corresponding invariances. Furthermore, learning invariances to synthetic transforms does provide a degree of invariance to corresponding real-world transforms. This work establishes the correspondence between synthetic transforms and learned invariances for knowledge transfer, limited to [15], without focusing on robustness and distribution shift. Another significant work, by Jiang et al. [28], focuses on improving the robustness of self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations, leveraging contrastive learning to enhance adversarial robustness via self-supervised pre-training.
They discuss several options for injecting adversarial perturbations to reduce adversarial fragility. Through experiments in both supervised fine-tuning and semi-supervised learning settings, they demonstrate that the proposed adversarial contrastive learning can lead to models that are both label-efficient and robust. The paper does not specifically focus on corruption, but rather on improving the model's ability to handle adversarial attacks. This work shows notable improvement in robustness performance but remains limited to a small-scale CIFAR dataset, with correspondingly limited generalizability. Research is needed to learn invariant SSL representations capable of handling distribution shifts and corruptions; this study provides ground in this direction by sharing insights into robustness performance on a large-scale dataset. The identified research gaps raise several research questions, addressed in later sections. For the detailed investigation, we considered the most popular SSL paradigms, namely contrastive learning, knowledge distillation, mutual information maximization, and clustering. Next, we exhaustively evaluated the corruptions and their severity levels present in the ImageNet-C dataset [25] to understand the resilience of each method. Further, we compare robustness performance across multiple metrics, including qualitative analysis. To the best of our knowledge, this is one of the early works in this direction. **Q1**: _How do self-supervised representation learning (SSL) paradigms (contrastive learning, knowledge distillation, mutual information maximization, clustering) perform in terms of robustness when exposed to distribution shifts and image corruptions?_ A1: Distribution shifts and image corruptions have an effect on the robustness performance of the well-known SSL paradigms.
The empirical analysis in this study shows that the error rates (averaged over all distribution shifts and image corruptions) increase with the severity levels of the distribution shifts and image corruptions. (Figure 1, and Section 3.Q1). **Q2**: _To what extent can self-supervised representation learning methods maintain their robustness in the presence of distribution shifts, and what are the factors that limit their ability to do so?_ A2: Extensive experiments reveal that SSL methods sustain robustness performance when subjected to lower levels of corruption, and that performance degrades at higher corruption levels. Higher corruption levels may induce larger distribution shifts, which affect the robustness of the learned representations. (Figure 2, Table 3, and Section 3.Q2). **Q3**: _What is the relationship between the robustness of different SSL paradigms and common categories of corruptions?_ A3: Generally, robustness performance decreases with increased corruption severity; in particular, robustness on the weather group is poorer than on the other groups. (Figure 5, and Section 3.Q3). **Q4**: _Do self-supervised representation learning methods deviate from the observed trend of error increase for certain corruptions, and what factors contribute to their robustness in the face of these corruptions?_ A4: Yes; a few corruptions, namely _snow_, _elastic transform_, and _saturate_, deviate from the observed trend, as supported by visual-quality analysis. (Table 4, and Section 3.Q4). **Q5**: _To what extent does the presence of corruptions shift the focus of classifiers from overall representation to specific features?_ A5: Grad-CAM [34] analysis reveals a significant shift in the attention maps when the image is subjected to higher levels of corruption. (Figures 3 and 6, and Section 3.Q5).
**Q6**: _Do different backbones, such as Convolutional Neural Networks (CNNs) and Transformers, influence the behavior and robustness?_ A6: Yes; the self-attention mechanism in transformers, in contrast to CNNs, does not embed any visual inductive bias of spatial locality [27]. (Figure 4, and Section 3.Q6).

## 2 Methodology

Comparative performance evaluation against robustness is carried out in two steps. In the first step, self-supervised representation learning methods are chosen from each major self-supervised representation learning approach (based on JEAM), including contrastive learning, knowledge distillation, mutual information maximization, and clustering. In the second step, evaluation measures are chosen for quantitative and qualitative comparisons on distribution-shifted and corrupted data samples from ImageNet-C.

_Reason for measuring robustness of learnt representations with corruptions and severity_ - This study focuses specifically on the robustness of representations where domain shift is simulated in a controlled manner through corruptions and their varying severity levels. Corruptions and perturbations in ImageNet-C [25] are meticulously curated and carefully designed to closely simulate natural phenomena in vision, related to geometric distortions, visual noise, and other explicit factors. The five severity levels further represent increasing difficulty, aiding the study of robustness at scale. Corruptions are applied across multiple severity levels, thereby altering the original data distribution in a controlled manner [25]. Each severity level shifts the distribution progressively: the corruptions cause variations in texture, color, and spatial coherence, effectively expanding the data manifold towards the shift.

### Self-supervised Representation Learning Methods

Methods from different self-supervised representation learning approaches are considered for analysis on the ImageNet-C dataset.
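The severity-controlled corruption scheme described above can be illustrated with a minimal sketch: each severity level maps to a stronger corruption parameter (here, the standard deviation of additive Gaussian pixel noise; the particular values are illustrative, not taken from ImageNet-C).

```python
import random

# Illustrative severity-to-parameter mapping: each severity level scales
# the corruption strength (here, the std of additive Gaussian pixel noise).
SEVERITY_STD = {1: 0.08, 2: 0.12, 3: 0.18, 4: 0.26, 5: 0.38}

def gaussian_noise(pixels, severity, rng):
    # Add Gaussian noise and clip back to the [0, 1] pixel range.
    std = SEVERITY_STD[severity]
    return [min(1.0, max(0.0, p + rng.gauss(0.0, std))) for p in pixels]

rng = random.Random(0)
image = [0.5] * 10_000  # flat gray "image"
for s in (1, 3, 5):
    noisy = gaussian_noise(image, s, rng)
    mad = sum(abs(a - b) for a, b in zip(noisy, image)) / len(image)
    print(s, round(mad, 3))  # distortion grows with severity
```

Each level perturbs the same clean distribution more strongly, which is the sense in which the benchmark shifts the data distribution progressively.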
The self-supervised representation learning techniques considered for this work are categorized into four main categories based on their methodology.

**Contrastive Learning**: Contrastive learning is a self-supervised representation learning approach used in computer vision and other machine learning domains. The principle behind it is to learn useful representations by encouraging similarity between semantically similar data points while maintaining dissimilarity between unrelated or contrasting data points. In computer vision, this approach helps in learning features and representations from images without relying on labeled data; instead, it exploits the inherent structure in the data to learn meaningful representations that can be used for various downstream tasks. Specifically, the SimCLR method [9] minimizes a temperature-scaled contrastive loss, which penalizes the network when positive-pair similarity is low and negative-pair similarity is high.

**Knowledge Distillation**: Distillation-based self-supervision structures student- and teacher-style encoders that share learning weights through specific arrangements such as exponential moving averages. Typically, similarity learning is performed by inducing architectural dissimilarity, such as adding a prediction MLP network to only one of the branches. In this work, SimSiam [12], a self-distillation method, and the dual-encoder knowledge distillation methods BYOL [21] and DINO (with a ResNet encoder) [7] are employed.

**Mutual Information Maximization**: This principle is used in self-supervised representation learning to learn useful and meaningful representations from data without explicit labels. The idea is to maximize the mutual information between different views or transformations of the input data, assuming that the learned representations should be invariant or robust to these transformations.
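The temperature-scaled contrastive loss mentioned above for SimCLR can be sketched in plain Python for a single anchor; the embeddings and temperature here are toy values, not the paper's settings.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(anchor, positive, negatives, temperature=0.5):
    # NT-Xent loss for one anchor: minus the log of the softmax weight
    # that the positive pair receives among all candidate pairs.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Toy example: an aligned positive yields a smaller loss than a
# positive that is no more similar than the negatives.
anchor = [1.0, 0.0]
loss_easy = nt_xent(anchor, [1.0, 0.0], [[0.0, 1.0], [-1.0, 0.0]])
loss_hard = nt_xent(anchor, [0.0, 1.0], [[1.0, 0.0], [-1.0, 0.0]])
print(loss_easy < loss_hard)
```

Lowering the temperature sharpens the softmax, penalizing hard negatives more strongly; that is the "temperature-scaled" part of the loss.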
Barlow Twins [39] and VICReg [2] are the two self-supervised representation learning methods employed in this work that follow the principle of mutual information maximization, learning visual representations by applying redundancy reduction.

**Clustering**: SwAV [5] combines contrastive learning and clustering-based approaches to learn meaningful and invariant features from images. The main idea behind SwAV is to use a clustering mechanism to enforce consistency between different views of the same image while promoting diversity in the learned representations.

**Robustness Evaluation Criteria**: The error-rate metrics, namely corruption error (\(CE\)), mean corruption error (\(mCE\)), clean error, average error, and average relative error, were introduced as standardized measures to benchmark the robustness of machine learning models on ImageNet-C. The two-step evaluation is described by Hendrycks et al. [25], and the same procedure has been followed in this study.

### Dataset and Experimental Setup

The **ImageNet-C** dataset [25] contains 19 types of corruption, each with five algorithmically generated severity levels. The main objective is to analyze the performance of different self-supervised representation learning methods across these corruptions and severity levels. By conducting detailed experiments considering all corruptions and severity levels, this research aims to gain deeper insights into how different self-supervised representations handle various types of corruption; our findings are presented in Section 3.

**Experimental details** for evaluating the robustness of self-supervised representations are as follows. We extracted the encoder from an ImageNet-pre-trained self-supervised representation learning model and added a classification layer at the end of the network.
This allows the model to be fine-tuned on the ImageNet-2012 dataset for a classification task. Evaluation is performed on the ImageNet-C dataset [25]. For this work, we considered six state-of-the-art SSL algorithms, with the configurations shown in Table 1. We first initialized the classifier layer randomly and froze all parameters of the pre-trained encoder. Next, we trained the classifier using the labeled training set. The models used were trained with mmselfsup [14], except for DINO, which came from its original repository [7]. ResNet-50 was chosen across all methods to keep the analysis uniform, and all subsequent experiments were conducted using this architecture. The SSL models were tested on ImageNet-C [25], and \(mCE\) [25] is used as the performance measure. The results are shown in Tables 2 and 3.

| | Barlow Twins | BYOL | SimSiam | SimCLR | DINO | SwAV |
|---|---|---|---|---|---|---|
| Batch Size | 2048 | 4096 | 256 | 4096 | 1024 | 256 |
| Epochs | 300 | 200 | 100 | 200 | 800 | 200 |
| Linear-Eval % | 71.8 | 71.8 | 68.3 | 66.9 | 75.3 | 70.5 |
| Epoch | 90 | 90 | 90 | 90 | 100 | 100 |
| Batch Size | 256 | 512 | 512 | 512 | 256 | 256 |

Table 1: Configuration used (refer to [14, 7] for implementation details).

## 3 Can SSL methods endure shifts in data distribution and image corruptions?

The raised research questions are discussed in this section.

**Q1: How do self-supervised representation learning (SSL) paradigms (contrastive learning, knowledge distillation, mutual information maximization, clustering) perform in terms of robustness when exposed to distribution shifts and image corruptions?** The average error rates against all corruptions (per severity level) of all the SSL methods are depicted in Figure 1. The general trend is that SimCLR and SimSiam have higher error rates compared to the other methods.
While contrastive learning has previously been reported to perform well on ImageNet-C [29], we noticed that SimCLR is not comparably robust against these corruptions. A pattern observed (Figure 1) is that, in general, knowledge distillation methods seem to outperform contrastive learning, and clustering outperforms the other methods, indicating robust representations. From Figure 1, one important observation is that at lower severity levels the six SSL methods form three sets: SwAV and DINO perform best, followed by BYOL and Barlow Twins; finally, SimCLR and SimSiam have relatively lower performance. However, at the highest severity level, all the methods have similarly high error rates. This is likely because most images in this group are heavily distorted and challenging even for the human visual system to comprehend. From Table 2, we observe that SwAV outperforms all the competing methods in terms of corruption error and mean corruption error; however, DINO has a better robustness performance. **Q2: To what extent can self-supervised representation learning methods maintain their robustness in the presence of distribution shifts, and what are the factors that limit their ability to do so?** Table 3 presents a detailed analysis using the mean corruption error \(mCE\) for each corruption. Here, we report the average \(mCE\) for each corruption in the ImageNet-C dataset. One of the findings is that glass blur significantly impacts the robustness of these models, specifically at higher severity levels. Most of these models have demonstrated good robustness to brightness-based corruptions. As corroborated by Figure 2, for most corruptions model robustness suffers as severity levels increase.
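The \(CE\) and \(mCE\) metrics used in Tables 2 and 3 follow Hendrycks et al. [25]: a model's summed error over the five severity levels of a corruption is normalized by a baseline model's summed error, and \(mCE\) averages this ratio over corruption types. A minimal sketch with made-up error rates:

```python
def corruption_error(model_errors, baseline_errors):
    # CE for one corruption type: the model's summed error over the five
    # severity levels, normalized by the baseline's (AlexNet in [25]).
    return sum(model_errors) / sum(baseline_errors)

def mean_corruption_error(model, baseline):
    # mCE: average the per-corruption CE over all corruption types.
    ces = [corruption_error(model[c], baseline[c]) for c in model]
    return sum(ces) / len(ces)

# Illustrative error rates (fractions) at severity levels 1..5; these
# numbers are made up, not taken from the paper.
baseline = {"gaussian_noise": [0.4, 0.5, 0.6, 0.7, 0.8],
            "snow":           [0.3, 0.5, 0.5, 0.6, 0.7]}
model    = {"gaussian_noise": [0.2, 0.3, 0.4, 0.5, 0.6],
            "snow":           [0.3, 0.4, 0.4, 0.5, 0.6]}
print(round(mean_corruption_error(model, baseline), 3))
```

A value below 1 (or 100 when expressed as a percentage) means the model is more robust than the baseline on that corruption suite.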
**Q3: What is the relationship between the robustness performance of different SSL paradigms and common categories of corruptions?** As the severity levels of corruptions increase, all self-supervised representation learning (SSL) methods demonstrate a decline in robustness, as shown in Figure 5. The _noise_ and _blur_ groups follow a similar trend, whereas the _digital_ group shows comparatively strong resilience at intermediate severity levels. SSL methods are least robust against the _weather_ group.

**Q4: Do self-supervised representation learning methods deviate from the observed trend of error increase for certain corruptions, and what factors contribute to their robustness in the face of these corruptions?** We observed (Figure 2) that three corruptions, namely _snow_, _saturate_, and _elastic transform_ (last row), deviate from the expected trend that error increases with severity: SSL models perform worse at severity level 2 than at severity level 3. Given these intriguing deviations, we delved deeper, employing the well-known Structural Similarity Index Measure (SSIM) [38], one of the metrics popularly used by image-quality researchers for reference-based quality assessment. We computed the SSIM between the original image (from ImageNet) and the corresponding corrupted image (from ImageNet-C) for all test images and averaged it at each severity level (Table 4); this gives an estimate of the visual quality.

Figure 1: Error rates vs. severity levels across ImageNet-C [25] corruptions.

| | mCE | Snow | Frost | Fog | Bright | Gauss. Noise | Shot | Impulse | Speckle | Gauss. Blur | Spatter | Saturate | Pixelate | Contrast | Elastic | JPEG | Zoom | Defocus | Motion | Glass |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Barlow Twins | 73.8 | 84 | 78 | 87 | 46 | 73 | 75 | 83 | 70 | 73 | 69 | 47 | 66 | 85 | 72 | 62 | 86 | 79 | 80 | 88 |
| BYOL | 73.3 | 85 | 78 | 86 | 46 | 73 | 76 | 81 | 71 | 73 | 70 | 48 | 66 | 84 | 73 | 63 | 86 | 78 | 80 | 86 |
| SimCLR | 76.0 | 85 | 78 | 89 | 48 | 76 | 79 | 85 | 76 | 77 | 70 | 50 | 67 | 80 | 75 | 68 | 88 | 83 | 82 | 85 |
| SimSiam | 75.8 | 85 | 80 | 90 | 50 | 74 | 76 | 81 | 71 | 74 | 73 | 50 | 65 | 88 | 75 | 63 | 88 | 82 | 84 | 89 |
| SwAV | 70.5 | 80 | 73 | 79 | 41 | 71 | 73 | 82 | 68 | 70 | 65 | 44 | 67 | 73 | 71 | 60 | 83 | 77 | 75 | |
| DINO | 72.9 | 83 | 75 | 82 | 41 | 74 | 75 | 84 | 68 | 70 | 65 | 44 | 68 | 79 | 70 | 60 | 82 | 76 | 76 | 85 |
| Supervised [25] | 76.7 | 78 | 75 | 66 | 57 | 80 | 82 | 83 | 76 | 74 | 76 | 58 | 77 | 71 | 85 | 77 | 80 | 75 | 78 | 89 |

Table 3: \(mCE\) for each corruption type against the baseline. The error rates in each corruption-type column are averaged over all severity levels.

Figure 2: Model performance against specific corruptions by severity. For the corruptions snow, saturate, and elastic (last row), SSL models perform worse at severity level 2 than at severity level 3.

Figure 3: Glass blur on dogs; markers in the images show correct (green) and incorrect (red) classifications. In ImageNet, with many dog breed classes, misclassification doesn't necessarily indicate a bad model if the representation is adequate. In the twin-dog example, at low blur severity both dogs have good activations for all models, suggesting good representations. However, at high blur severity the model struggles to classify, resulting in distorted activations and difficulty in distinguishing between the dogs, leading to poor results.

Figure 4: Comparison between different backbones, ResNet-50 and ViT-S/8, for the DINO SSL method over ImageNet-C [25] corruptions. Severity levels (left), corruptions (right).

_Snow_ corruption occludes the object by adding whitish pixels as snowflakes with motion blur. It produces more visually challenging images at severity level 2 than at the other levels; therefore, the SSIM at level 2 is lower than at the other severity levels. Similarly, for _elastic transform_, the SSIM at level 2 is lower than at the other severity levels: at low severity (levels 1 and 2), the affine transform is more noticeable in some cases, causing artifacts, as can also be seen for elastic transform in Figure 6. The _saturate_ corruption has very low saturation at low severity levels, rendering images nearly grayscale; this can prevent accurate prediction of classes for which color information is crucial. In a nutshell, only for snow, elastic, and saturate does increasing the severity level (2 to 4), which increases the respective artifacts, fail to translate into increased visible noise in the image examples, which explains the deviation observed for all SSL methods.

**Q5: To what extent does the presence of corruptions shift the focus of classifiers from overall representation to specific features?** To gain more insight into how the different self-supervised representation learning methods pick a label in the classification task, we use Grad-CAM [34] to compare the methods qualitatively. Grad-CAMs explain the model's decision by providing heatmaps of where in the image the model is focusing.
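The SSIM analysis described above can be illustrated with a simplified, single-window version of the index; real SSIM [38] averages a locally windowed version over the whole image, and the pixel lists below are toy data.

```python
def ssim_global(x, y, c1=(0.01 ** 2), c2=(0.03 ** 2)):
    # Single-window SSIM on two equal-length lists of pixel values in
    # [0, 1]: a luminance term times a contrast/structure term. Real SSIM
    # averages a local, Gaussian-windowed version of this over the image.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = [i / 99 for i in range(100)]          # clean "image"
light = [min(1.0, p + 0.02) for p in ref]   # mild corruption
heavy = [min(1.0, p + 0.3) for p in ref]    # strong corruption
print(ssim_global(ref, ref))                # identical images score 1.0
```

A heavier distortion pushes the score further below 1, which is the sense in which the averaged SSIM in Table 4 tracks perceived visual quality per severity level.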
In Figure 6, we show the Grad-CAM visualizations of an image for all SSL methods under different corruptions of varying severity levels. The differences among the Grad-CAMs give an understanding of how model behavior changes in the presence of a particular corruption. From Table 3, we noticed that _glass blur_ corruption caused the highest misclassification for all competing SSL methods; to understand how different methods respond to different severities of _glass blur_, we provide the corresponding Grad-CAMs in Figure 3.

**Q6: Do different backbones, such as Convolutional Neural Networks (CNNs) and Transformers, influence the behavior and robustness?** There has been analysis [26] of adversarial robustness for transformer and CNN architectures, but to specifically analyze robustness against corruptions and distribution shifts, we chose the most robust SSL method from the previous analysis (i.e., DINO) and compared the ViT-S/8 [8] transformer backbone with the standard CNN ResNet-50. The transformer architecture clearly outperformed the CNN backbone across severity levels and for each image corruption. A detailed trend is shown in Figure 4.

## 4 Conclusion

The primary objective of this investigation was to conduct an in-depth analysis of the diverse paradigms employed in current self-supervised representation learning, focusing on their robustness characteristics when subjected to the varying corruptions present in the ImageNet-C database. The aim was to gain a comprehensive understanding of how these self-supervised representation learning paradigms perform and behave in the face of diverse corruptions, thereby contributing to the advancement of robust representation learning in the computer vision domain. Through empirical analysis, we have presented various analytical trends and demonstrated that self-supervised representation learning methods exhibit decreased robustness as distributional shifts intensify.
Notably, our findings indicate that the DINO method, employing the distillation approach, and the SwAV method, utilizing clustering, exhibit relatively higher levels of robustness compared to the other methods investigated in this study. While DINO is associated with knowledge distillation, SwAV employs a contrastive assignment quantization approach, indicating their dissimilarity in methodology. These results suggest that multiple SSL methods originating from diverse SSL paradigms display enhanced robustness when evaluated on ImageNet-C. However, it is essential to view these empirical findings as a starting point for further exploration rather than definitive conclusions. The comparative study conducted in this research serves to enhance the computer vision community's comprehension of the strengths and limitations of various self-supervised representation learning approaches. Furthermore, it helps researchers develop robust representations in future endeavors. A significant finding from our analysis is that the SwAV method, which employs a clustering approach, exhibits higher robustness than popular methods such as SimCLR and Barlow Twins. This result offers valuable insights for future research directions aimed at further improving self-supervised representation learning methodologies.

| Severity | Snow | Elastic | Saturate |
|---|---|---|---|
| 1 | 0.218 | 0.276 | 0.288 |
| 2 | 0.179 | 0.237 | 0.283 |
| 3 | 0.194 | 0.315 | 0.273 |
| 4 | 0.186 | 0.312 | 0.234 |
| 5 | 0.189 | 0.305 | 0.210 |

Table 4: SSIM metric for snow-, elastic-, and saturate-based corruptions.

Figure 5: Group-wise comparison. (a) Noise (b) Blur (c) Weather (d) Digital (left to right).
Considering the findings of this study, it becomes imperative to address the challenges associated with the performance degradation of self-supervised representation learning methods under distribution shifts and image corruptions. By prioritizing safety and robustness, researchers can contribute to the development of more reliable and trustworthy self-supervised representation learning techniques that can effectively handle real-world scenarios and enhance the practical utility of these methods. In this work, we are dedicated to the methodical revelation of empirical evidence rather than hypothesizing, remaining steadfast in illuminating numerous enigmas through rigorous examination. We firmly believe that this pioneering work will pave the way for future inquiries, enabling the formulation and evaluation of cogent hypotheses.
2310.00178
Contextual Biasing with the Knuth-Morris-Pratt Matching Algorithm
Contextual biasing refers to the problem of biasing the automatic speech recognition (ASR) systems towards rare entities that are relevant to the specific user or application scenarios. We propose algorithms for contextual biasing based on the Knuth-Morris-Pratt algorithm for pattern matching. During beam search, we boost the score of a token extension if it extends matching into a set of biasing phrases. Our method simulates the classical approaches often implemented in the weighted finite state transducer (WFST) framework, but avoids the FST language altogether, with careful considerations on memory footprint and efficiency on tensor processing units (TPUs) by vectorization. Without introducing additional model parameters, our method achieves significant word error rate (WER) reductions on biasing test sets by itself, and yields further performance gain when combined with a model-based biasing method.
Weiran Wang, Zelin Wu, Diamantino Caseiro, Tsendsuren Munkhdalai, Khe Chai Sim, Pat Rondon, Golan Pundak, Gan Song, Rohit Prabhavalkar, Zhong Meng, Ding Zhao, Tara Sainath, Pedro Moreno Mengibar
2023-09-29T22:50:10Z
http://arxiv.org/abs/2310.00178v1
# Contextual Biasing with the Knuth-Morris-Pratt Matching Algorithm ###### Abstract Contextual biasing refers to the problem of biasing the automatic speech recognition (ASR) systems towards rare entities that are relevant to the specific user or application scenarios. We propose algorithms for contextual biasing based on the Knuth-Morris-Pratt algorithm for pattern matching. During beam search, we boost the score of a token extension if it extends matching into a set of biasing phrases. Our method simulates the classical approaches often implemented in the weighted finite state transducer (WFST) framework, but avoids the FST language altogether, with careful considerations on memory footprint and efficiency on tensor processing units (TPUs) by vectorization. Without introducing additional model parameters, our method achieves significant word error rate (WER) reductions on biasing test sets by itself, and yields further performance gain when combined with a model-based biasing method. ## 1 Introduction Recent years have seen a tremendous explosion in voice user interfaces (VUIs), like voice search, assistant, and control applications. The success of VUI-based applications depends on the ability of the underlying Automatic Speech Recognition (ASR) system to properly transcribe phrases that are contextually relevant to the speaker, the listener, or both. Examples of contextually-relevant phrases include names of the speaker's contacts and geographically-close points of interest. Contextually-relevant phrases are inherently hard to recognize because they represent instances of domain shift. For example, generally, it is much more likely for a single user to speak the name of one of their contacts than for any given contact name to occur in a given training data set; indeed, a given name or phrase may not appear at all in an ASR system's training set in the case of unorthodox spellings (Ke$ha) or novel words (COVID-19).
Further, contextually-relevant phrases may not be known until inference time, e.g., since the user of a voice assistant can add contact names any time before speaking. ASR _contextual biasing_ is a set of techniques which enables ASR systems to recognize contextually-relevant words without retraining. Contextual biasing can generally be grouped into model-based and inference-based approaches. Model-based methods typically incorporate a biasing component into an end-to-end (E2E) ASR system (Graves, 2012; Chorowski et al., 2015; Chan et al., 2016), which takes in biasing contexts as additional input to the E2E model. An attention mechanism (Vaswani et al., 2017) is typically used to condition the model outputs on biasing contexts (Munkhdalai et al., 2021; Chang et al., 2021; Han et al., 2022) (see Sec 3 for more discussions). The more classical inference-based approach, dating back to the pre-E2E era, injects biasing contexts to boost the decoding scores of the words or phrases they contain, increasing the probability of recognizing those words (Aleksic et al., 2015; Hall et al., 2015). A compact search graph, based on Weighted Finite State Transducers (WFSTs, Mohri et al., 2002), is built to encompass the set of biasing phrases, and incorporated into the normal search graph which then transduces acoustic model outputs to word-level hypotheses. Weights are distributed along edges of the biasing search graph, so that when the acoustic model output extends the matching of the phrases, a bonus score is added to the hypothesis to help it survive beam search and increase its likelihood of becoming the top hypothesis. The approach was later extended to E2E models (Zhao et al., 2019) where bonuses are incorporated at the subword level.
While E2E ASR systems have greatly simplified modeling and deployment, and most components are readily implemented on GPUs or TPUs to enjoy parallel processing, FST-based biasing poses significant challenges for an efficient TPU-based implementation, due to the inherently sparse nature of FSTs (their adjacency matrices are typically very sparse). **Our contributions.** In this work, we propose a TPU-friendly implementation of search-based biasing, leveraging the equivalence between the biasing FST and the efficient matching algorithm by Knuth et al. (1977), with careful considerations on memory complexity and efficiency through vectorization. Our algorithms can be incorporated into the beam search of any ASR system, in both the on-the-fly rescoring and shallow fusion manner. On large voice search datasets, our method achieves significant word error rate (WER) reductions on biasing test sets by itself, without introducing additional model parameters. And when plugged into a model-based biasing method, namely neural associative memory (NAM, Munkhdalai et al., 2021), our method leads to further improved biasing accuracy. Our method enables learning with the discrete structure of ASR biasing, and can be potentially useful for other sequence transduction tasks. ## 2 Our Method An intuitive and classical idea for biasing is to check iteratively, at each beam search step, if the suffixes of partial hypotheses are partially or fully matching any of the biasing phrases, and give score bonuses to those with matches. This helps a partial hypothesis survive beam search pruning if it has the potential to develop into a full match of a biasing phrase. In this section, we develop the algorithms for efficiently performing pattern matching for multiple biasing phrases, and properly assigning a biasing bonus for each beam search expansion, based on the classical KMP algorithm for string/pattern matching.
We review the classical algorithm in Sec 2.1, describe its usage for biasing in Sec 2.2, discuss the two variants for beam search in Sec 2.3, and present an extension in Sec 2.4. **Notations.** Below we use \(\mathcal{P}\) to denote the pattern sequence to be searched/matched, and \(\mathcal{T}\) to denote the sequence to be searched from; both are strings in the context of the classical matching algorithm or token sequences in the context of speech recognition. The length of \(\mathcal{P}\) is denoted \(len(\mathcal{P})\). We use \(\mathcal{P}[i]\) to denote the element at (0-based) index \(i\), and use \(\mathcal{P}[s,\ldots,t]:=[\mathcal{P}[s],\mathcal{P}[s+1],\ldots,\mathcal{P}[t]]\) to denote the sub-sequence of \(\mathcal{P}\) with start index \(s\) and end index \(t\). Two sequences are equal if they have the same length and corresponding elements match for all indices. ### The Knuth-Morris-Pratt matching algorithm For searching the occurrences of a string \(\mathcal{P}\) of length \(m\) within another string \(\mathcal{T}\) of length \(n\), the most naive solution is perhaps to loop over the set of indices \(j=0,1,\ldots,n-m\), and check if the sub-string \(\mathcal{T}[j,\ldots,j+m-1]=\mathcal{P}\), which requires another loop over the elements of \(\mathcal{P}\). Clearly, this algorithm has a worst-case time complexity of \(\mathcal{O}(mn)\). There exists, however, a much more efficient linear-time Knuth-Morris-Pratt (KMP) matching algorithm (Knuth et al., 1977) for this problem, with a worst-case complexity of \(\mathcal{O}(m+n)\). We extract two major components out of KMP below, which are used for efficiently maintaining the status of matching, as needed by biasing.
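For concreteness, the naive quadratic baseline described above takes only a couple of lines; the sketch below is our own illustrative Python over plain sequences (the function name is an assumption, not from the paper):

```python
def naive_search(P, T):
    """All start indices j with T[j:j+len(P)] == P; worst-case O(m*n) comparisons."""
    m = len(P)
    return [j for j in range(len(T) - m + 1) if T[j:j + m] == P]
```

Each candidate start index triggers up to \(m\) element comparisons, which is exactly the redundancy the failure function eliminates.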
#### 2.1.1 The failure function The key insight behind the KMP algorithm is to not waste comparisons: if during matching we have a partial matching of length \(i\) and \(\mathcal{T}[j]\neq\mathcal{P}[i]\), then instead of moving back to index \(j-i+1\) for \(\mathcal{T}\) and moving back to index 0 for \(\mathcal{P}\), and restarting the matching (by checking whether \(\mathcal{T}[j-i+1,\ldots,j-i+m]=\mathcal{P}\)), we may continue by comparing \(\mathcal{T}[j]\) against \(\mathcal{P}[\pi(i)]\) with some \(\pi(i)<i\), without backtracking in \(\mathcal{T}\). Here \(\pi(i)\) specifies the index of the _potential_ next match in \(\mathcal{P}\) when we have a mismatch for \(\mathcal{P}[i]\), and is called the _failure function_. The failure function is originally defined as follows (Cormen et al., 2001): set \(\pi(0)=-1\), and for \(i=1,\ldots,m-1\), \[\pi(i)=\max\ \left\{k<i:\ \mathcal{P}[0,\ldots,k-1]=\mathcal{P}[i-k,\ldots,i-1] \right\}.\] That is, for \(i>0\), \(\pi(i)\) is the length of the longest proper prefix that matches a proper suffix of the sequence \(\mathcal{P}[0,\ldots,i-1]\); the value is \(0\) if no such prefix exists. The special value \(-1\) indicates that there is no possible match starting at the current index of \(\mathcal{T}\) and we must move to the next index to restart matching: if \(\mathcal{T}[j]\neq\mathcal{P}[0]\), we must move to index \(j+1\) in \(\mathcal{T}\) to compare again with \(\mathcal{P}[0]\). To see why this definition helps save unnecessary comparisons, consider the scenario where we have a partial match of length \(i>0\), but then the mismatch \(\mathcal{T}[j]\neq\mathcal{P}[i]\) occurs. 
Since \(\mathcal{T}[j-i,\ldots,j-1]=\mathcal{P}[0,\ldots,i-1]\), we must have \[\mathcal{T}[j-\pi(i),\ldots,j-1]=\mathcal{P}[i-\pi(i),\ldots,i-1]=\mathcal{P}[ 0,\ldots,\pi(i)-1].\] Therefore, without backtracking in \(\mathcal{T}\), we already have a partial match of length \(\pi(i)<i\), and we then check if \(\mathcal{T}[j]=\mathcal{P}[\pi(i)]\) to determine whether we can extend the partial match; in case of further mismatch, we repeat the process and backtrack to \(\pi(\pi(i))\), \(\pi^{3}(i)\), ..., etc., until we reach \(-1\). The failure function we use in this work, denoted as \(\bar{\pi}(\cdot)\), is based on the above definition, and has an additional "shortcut" logic (Aho & Corasick, 1975): for \(i=1,\ldots,m-1\), \[\bar{\pi}(i)=\left\{\begin{array}{lll}\bar{\pi}(\pi(i))&\text{if }\mathcal{P}[\pi(i)]= \mathcal{P}[i],&\text{(shortcut)}\\ \pi(i)&\text{otherwise}.\end{array}\right.\] The rationale behind the shortcut is that, as we are backtracking due to \(\mathcal{T}[j]\neq\mathcal{P}[i]\), in the case of \(\mathcal{P}[\pi(i)]=\mathcal{P}[i]\) we deduce \(\mathcal{T}[j]\neq\mathcal{P}[\pi(i)]\), and thus \(\pi(i)\) cannot be the next possible match and we shall keep backtracking. We provide the algorithm for computing \(\bar{\pi}(\cdot)\) in Algorithm 4 (Appendix A). The time complexity for building the failure function of a pattern with length \(m\) is \(\mathcal{O}(m)\).
```
Input:  pattern P with length m and failure function Π, current partial matching length i, new token x.
Output: updated partial matching length q, and whether we obtain a full match, after consuming x.
procedure Forward(P, i, x)
    full_match ← False
    if P[i] = x then
        q ← i + 1
        if q = m then
            full_match ← True;  q ← 0
        end if
    else
        k ← Π[i]
        while k ≥ 0 and P[k] ≠ x do      ▷ determinization loop
            k ← Π[k]
        end while
        q ← k + 1                        ▷ either k = -1 or P[k] = x
    end if
    return (q, full_match)
end procedure
```
**Algorithm 1** Forward a search pattern with an input token.

An example of a search pattern and its failure function is as follows. \[\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline i&0&1&2&3&4&5&6&7&8\\ \hline\mathcal{P}[i]&\text{A}&\text{B}&\text{A}&\text{C}&\text{A}&\text{B}& \text{A}&\text{B}&\text{A}\\ \hline\Pi[i]&\text{-1}&0&\text{-1}&1&\text{-1}&0&\text{-1}&3&\text{-1}\\ \hline\end{array} \tag{1}\]

#### 2.1.2 The forward function

With the failure function defined above, we can define a forward function. Given the matching state, defined as the current partial matching length \(i\) (i.e., we have matched \(\mathcal{P}[0,\ldots,i-1]\) so far, and \(i\) is the next index to match in \(\mathcal{P}\)), and a new token \(x\) from the string \(\mathcal{T}\) to be searched, the forward function returns the updated partial matching length (the new position in \(\mathcal{P}\) to match), after _consuming_ \(x\). Here by "consuming" we mean either we have a match for \(x\) and we move to \(i+1\) in \(\mathcal{P}\), or we determine that it is impossible to match \(x\) and restart the matching; in both cases we move beyond \(x\) in \(\mathcal{T}\). The logic is sketched in Algorithm 1.
```
Input:  biasing phrases {P^b}, b = 1..B, current partial matching lengths I = (i^1, ..., i^B), new token x.
Output: updated partial matching lengths, and biasing bonus.
procedure ComputeBonus({P^b}, I, x)
    any_match ← False                    ▷ track if there is any full match
    for b = 1, ..., B do
        (u^b, match^b) ← Forward(P^b, i^b, x)
        if match^b then
            any_match ← True
            v^b ← len(P^b)               ▷ for a full match, use the pattern length for the potential
        else
            v^b ← u^b
        end if
    end for
    bonus ← μ(v^1, ..., v^B) - μ(i^1, ..., i^B)
    if any_match then
        u^b ← 0 for b = 1, ..., B        ▷ on any full match, restart matching for all phrases
    end if
    return ((u^1, ..., u^B), bonus)
end procedure
```
**Algorithm 2** Compute bonus score of a token extension.

The complexity of Algorithm 1 mainly lies in the "determinization loop", where we keep backtracking until we find a match of \(x\) in \(\mathcal{P}\); when no such match is possible, we exit the loop with \(k=-1\) and restart matching at the next token in \(\mathcal{T}\). Additionally, we check whether we obtain a full match of \(\mathcal{P}\) after matching token \(x\), in which case we also restart matching at the next token in \(\mathcal{T}\) (we are not interested in overlapping matches of patterns in this work). If we add another loop on top of Algorithm 1 over the tokens in \(\mathcal{T}\), we recover the KMP search algorithm, which has a time complexity of \(\mathcal{O}(n)\) after the failure function is computed (with \(\mathcal{O}(m)\) complexity). Note how similar the determinization loop of Algorithm 1 is to the inner loop of Algorithm 4; in fact, the latter can be seen as searching \(\mathcal{P}\) over itself.
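To make Algorithms 1 and 4 and the search loop just described concrete, here is a minimal Python sketch over plain token sequences (function names are our own, and this is illustrative code, not the paper's vectorized TPU implementation):

```python
def failure_function(P):
    """Shortcut failure function (pi-bar) of pattern P, cf. Algorithm 4."""
    m = len(P)
    # Standard border array: f[i] = longest proper prefix of P[0..i]
    # that is also a proper suffix of P[0..i].
    f = [0] * m
    for i in range(1, m):
        k = f[i - 1]
        while k > 0 and P[i] != P[k]:
            k = f[k - 1]
        if P[i] == P[k]:
            k += 1
        f[i] = k
    pi = [-1] + f[:-1]  # pi(i): border length of P[0..i-1]; pi(0) = -1
    # Shortcut: skip states whose next comparison is guaranteed to fail again.
    bar = [-1] * m
    for i in range(1, m):
        p = pi[i]
        bar[i] = bar[p] if p >= 0 and P[p] == P[i] else p
    return bar

def forward(P, Pi, i, x):
    """Algorithm 1: consume token x from state i; return (new state, full_match)."""
    if P[i] == x:
        q = i + 1
        return (0, True) if q == len(P) else (q, False)
    k = Pi[i]
    while k >= 0 and P[k] != x:  # determinization loop
        k = Pi[k]
    return k + 1, False  # either k == -1 (restart) or P[k] == x

def kmp_search(P, T):
    """Loop Forward over T: end indices of non-overlapping occurrences of P."""
    Pi, state, ends = failure_function(P), 0, []
    for j, x in enumerate(T):
        state, full = forward(P, Pi, state, x)
        if full:
            ends.append(j)  # matching restarts at the next token
    return ends
```

On the example pattern of (1), `failure_function("ABACABABA")` reproduces the \(\Pi\) row of the table; since matching restarts after every full match, `kmp_search` reports non-overlapping occurrences only.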
We can design a finite state automaton (FSA) \(\mathcal{A}(\mathcal{P})\) with \(m\) states, where state \(i=0,\ldots,m-1\) denotes the state for partially matching \(i\) tokens of \(\mathcal{P}\), and the forward function provides the transition function for this automaton, i.e., for an arc that starts at state \(i\) with input \(x\), it ends at the state specified by \(\textsc{FORWARD}(\mathcal{P},i,x)\). With the determinization loop, each transition consumes a non-epsilon token on its edge, ensuring that \(\mathcal{A}(\mathcal{P})\) is deterministic and epsilon-free. See Cormen et al. (2001, Chapter 32.4) for more detailed discussions on the equivalence between KMP and FSA. One could run Algorithm 1 for all \(x\) in the vocabulary (all characters in the alphabet in the case of string matching) for \(i=0,\ldots,m-1\); this yields a table of size \(m\times|V|\) where \(|V|\) is the vocabulary size. While we could in principle use this table for biasing, the memory cost may be too high when we have on the order of thousands of patterns or more to search, each with some number of tokens (up to \(16\) in our experiments), while \(|V|\) is also in the thousands (4096 for our ASR system). It is therefore much more memory efficient to store the failure function, which only takes \(\mathcal{O}(m)\) memory, and we pay the cost of the determinization loop. For any \(x\), the number of times we have to backtrack in the determinization loop is bounded by \[\gamma(\mathcal{P})=\max_{i}\ e_{i},\quad\text{where $e_{i}$ is the integer satisfying $\pi^{e_{i}}(i)=-1$}. \tag{2}\] As an example, for the pattern in (1), we have \(\gamma(\mathcal{P})=3\) with maximum achieved at \(i=7\). ### Contextual biasing with KMP For biasing in ASR, each utterance is associated with \(B\) biasing phrases, denoted as \((\mathcal{P}^{1},\ldots,\mathcal{P}^{B})\), and we attempt to match all of them at each beam search step.
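As a sanity check on the bound in (2), the worst case of the determinization loop can be computed directly from a stored failure table by following failure links until \(-1\); the helper below is our own illustrative sketch, applied to the example table in (1):

```python
def max_determinization_iters(Pi):
    """Worst-case iterations of the determinization loop over all states:
    the longest chain of failure links reachable from any entry Pi[i]."""
    def chain(k):
        steps = 0
        while k >= 0:  # each iteration follows one failure link
            k = Pi[k]
            steps += 1
        return steps
    return max(chain(p) for p in Pi)

# Failure function of the example pattern "ABACABABA" from table (1).
Pi_example = [-1, 0, -1, 1, -1, 0, -1, 3, -1]
```

For the example pattern this yields 3, attained from state \(i=7\) (chain \(3\to 1\to 0\to -1\)), in line with \(\gamma(\mathcal{P})=3\) quoted above.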
Another task is to assign a _bonus_, either positive or negative, to each new token expansion proposed by beam search. We achieve this goal by defining a _potential_ function based on the state of matching. For each phrase \(\mathcal{P}^{b}\), \(b=1,\ldots,B\), we first define a _scoring_ function for partial matching of length \(i\) (i.e., we have matched \(\mathcal{P}^{b}[0,\ldots,i-1]\) so far). In this work, we simply parameterize the function to be linear in \(i\): \[f(\mathcal{P}^{b},i)=i\cdot\delta,\qquad\text{for}\quad i=0,\ldots,\,len( \mathcal{P}^{b}),\] where \(\delta\geq 0\) is the per-token bonus and is a hyper-parameter to be tuned. It is future work to explore more sophisticated scoring functions for biasing phrases. Let the partial matching lengths for the \(B\) biasing phrases be \(\mathcal{I}=(i^{1},\ldots,i^{B})\). We define the potential function as the maximum scoring function over phrases: \[\mu(i^{1},\ldots,i^{B})=\max_{b=1,\ldots,B}\,f(\mathcal{P}^{b},i^{b}).\] After consuming a new token \(x\) for each biasing phrase with the forward function, the partial matching lengths are updated, based on which we compute the new potential function; the difference between the potentials is the bonus for \(x\). We sketch this algorithm in Algorithm 2. We additionally track if we finish matching any phrase fully, in which case we restart matching for all phrases as we do not want overlapping matches. Note it is possible that \(x\) extends matching for multiple phrases, especially if these phrases share a prefix. If the hypothesis was partially matching a phrase and then becomes non-matching after consuming a new token, the previously added biasing bonus is canceled (Zhao et al., 2019). To summarize, we maintain a total of \(B\) integers as states for tracking the progress on each phrase. Consuming a new token and computing its bonus boils down to running the forward function.
We vectorize the **for**-loop in Algorithm 2, and compute the forward functions for all \(B\) phrases in parallel, which further reduces to looking up the failure function table and running the determinization loop in parallel. Therefore, the time complexity for Algorithm 2 is \(\mathcal{O}(\bar{\gamma}B)\), where \(\bar{\gamma}=\max_{b=1,\ldots,B}\,\gamma(\mathcal{P}^{b})\) with the \(\gamma\) function defined in (2). Note \(\gamma\) is a worst-case bound for the number of iterations in the determinization loop. ### Integrating biasing into beam search We propose two ways to incorporate biasing bonus computation into beam search, with trade-offs between accuracy and efficiency. We collectively refer to them as _KMP biasing_.

* **Shallow fusion**. In this approach, we perform biasing before pruning: for each hypothesis, we consider a number of top expansions according to the ASR model scores, and compute a biasing bonus for each of them, which is combined with the ASR scores used for pruning; this is similar to the shallow fusion approach for applying language models to ASR inference (Gulcehre et al., 2015; Chorowski and Jaitly, 2017; Zhao et al., 2019).
* **On-the-fly (OTF) rescoring**. In this approach, after expansions and pruning, we compute biasing bonuses for the expansion tokens of surviving hypotheses, and incorporate the bonus into the total score of each hypothesis, to influence future steps. Note this is different from offline rescoring, which only modifies total scores for re-ranking final hypotheses.

The two approaches are sketched in Algorithm 3. Let the beam size be \(K\), which is the number of hypotheses maintained at the end of each beam search step. If we consider \(F\) expansions for biasing, the complexity of shallow fusion is \(\mathcal{O}(\bar{\gamma}KFB)\) per beam search step. Typically the biasing accuracy improves with \(F\), at the cost of heavier computation.
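The shallow-fusion variant can be sketched as one beam-search step; the code below is a schematic of our own (the bonus computation is abstracted behind a callback, only the top-\(F\) expansions per hypothesis are kept, and it simplifies the paper's Algorithm 3 considerably):

```python
def shallow_fusion_step(beams, log_probs, bonus_fn, K, F):
    """One beam-search step with shallow-fusion biasing.

    beams: list of (tokens, total_score, bias_state) tuples.
    log_probs[h][v]: model log-probability of token v extending beams[h].
    bonus_fn(state, v) -> (new_state, bonus), e.g. ComputeBonus from Sec 2.2.
    """
    candidates = []
    for h, (tokens, score, state) in enumerate(beams):
        # Consider only the top-F expansions by model score for biasing.
        top_f = sorted(range(len(log_probs[h])), key=lambda v: -log_probs[h][v])[:F]
        for v in top_f:
            new_state, bonus = bonus_fn(state, v)
            candidates.append((tokens + [v], score + log_probs[h][v] + bonus, new_state))
    # Prune to beam size K with the bonuses already folded into the scores.
    candidates.sort(key=lambda c: -c[1])
    return candidates[:K]
```

With \(F\) large enough, a biasing bonus can promote a token that the model alone would prune away; with \(F\) too small, the relevant expansion may never be scored at all, which is the accuracy/efficiency trade-off discussed above.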
On the other hand, since we consider a total of \(K\) expansions in on-the-fly biasing, its time complexity is \(\mathcal{O}(\bar{\gamma}KB)\), cheaper than shallow fusion biasing by a factor of \(F\). As we have multiple hypotheses, each of which considers multiple extensions, our implementation of ComputeBonus is parallelized in the (hyp, extension, phrase) combination dimension. The failure functions are computed for all phrases once before beam search starts. One can implement the loops using the while statement, and table lookups using gather or einsum functions provided by tensor-based learning platforms. ### Boosting biasing strength with prefixes In many applications, the biasing phrase frequently follows a set of prefixes (also known as carrier phrases). For example, when using smart devices to initiate communication, the user typically speaks "call", "text", or "send a message to" before the contact name. It is natural to bias the ASR system more heavily towards the user's contact list, conditioned on recognizing such prefixes (Zhao et al., 2019). A naive way to extend our method to leverage prefixes is to augment the original biasing phrases (contact names in the above use case) with all combinations of prefix and biasing phrase ("call John", "text Joe", etc). If we have \(C\) prefixes and \(B\) biasing phrases, this approach leads to \(B+CB\) phrases, significantly increasing the cost of KMP biasing. We propose an alternative and more time-efficient approach, with minimal cost increase in state maintenance. For each new token, we perform matching for both prefixes and biasing phrases simultaneously (although the hypothesis receives no bonus from matching prefixes), with a time complexity of \(\mathcal{O}(C+B)\).
If a new token is not extending the partial matching of any biasing phrase, but leads to a full matching of some prefix, we restart matching of biasing phrases for the extended hypothesis, which is marked as prefix-matching for all biasing phrases. And if a hypothesis is prefix-matching for some biasing phrase, we boost the scoring function of partial matches _of that biasing phrase_ by a factor \(\lambda>1\). A hypothesis stays prefix-matching if it was prefix-matching, and the new token extends the partial matching of the same biasing phrase. Compared with the case without prefixes, we pay an additional cost of maintaining the states of partial matching lengths for prefixes, with a memory cost of \(\mathcal{O}(C)\), and whether each hypothesis is prefix-matching for each biasing phrase, with a memory cost of \(\mathcal{O}(B)\). We sketch the implementation in Algorithm 5 (Appendix B). Our approach can be interpreted in the WFST framework, as having one FST for the set of prefixes and another for the set of biasing phrases, and we transition from the prefix FST to the biasing FST upon detecting a full match of some prefix, so that the two FSTs are concatenated. ## 3 Related works **WFST-based biasing.** Initial WFST (Mohri et al., 2002) approaches to contextual ASR (Aleksic et al., 2015; Hall et al., 2015) performed on-the-fly rescoring (Hori et al., 2007) during beam-search, for classical ASR systems that use a CLG decoder graph (Mohri et al., 2008). The contextual phrases are encoded separately from the CLG in a word-level deterministic WFST with failure transitions. Arbitrary word-level rescoring functions can be used, including CLG score replacement and various forms of interpolation. In Vasserman et al. (2016), the approach was extended to efficiently handle dynamic classes, by encoding non-terminal labels in the contextual models.
Classes are dynamically inserted in the CLG graph, instead of being inserted in the contextual WFST, avoiding its exponential growth during determinization. Search errors caused by the late integration of contextual models at word labels were reduced by Williams & Aleksic (2017). Later End-to-End (E2E) ASR systems most often do not have an external LM and require an alternative WFST approach that uses shallow fusion (Zhao et al., 2019) instead of on-the-fly rescoring. In this approach, the contextual information is encoded as a subword-level deterministic WFST with failure transitions, which is used to directly modify the acoustic scores, before pruning is done by beam-search. The search space of E2E ASR systems tends to be sparser than the search space of classic ASR systems, so earlier integration is necessary to reduce search errors. WFST contextual modeling can also be approached as a lattice-augmentation problem (Serrino et al., 2019; Huang et al., 2020). These techniques identify spans in the word lattice where rare entities are likely to occur and search for acoustically confusable alternatives that are contextually relevant. The span identification and fuzzy matching are done using flexible and efficient WFST-based techniques. We note that FSTs are good at encoding domain knowledge and complex matching rules compactly. While they can be represented as graphs with sparse adjacency matrices, in general FSTs are not efficient to use on TPUs, which are optimized for dense operations. Our work is one step towards incorporating FST functionalities into a TPU-friendly implementation. **Model-based biasing.** Context can be utilized by adding trainable parameters to the ASR model and performing _model-based biasing_ (Fu et al., 2023; Harding et al., 2023; Xu et al., 2023). Learning such parameters in an end-to-end fashion was first considered in the CLAS model (Pundak et al., 2018), which augmented the LAS (Chan et al., 2016) decoder with a suitable attention mechanism.
CLAS is trained by sampling random n-grams (playing the role of bias phrases) from the reference transcripts. CLAS sampling was later improved in Alon et al. (2019) by emphasizing proper nouns, and considering hard phonetically-similar distractors (anti-bias terms). A notable drawback of CLAS is that the full-context nature of the decoder limits it to non-streaming applications. The above limitation was addressed in Neural Associative Memory (NAM, Munkhdalai et al., 2021), a _streaming_ model-based biasing method that utilizes an external associative memory module (Munkhdalai et al., 2019; Ramsauer et al., 2020) as an intermediate representation of biasing contexts, and augments the RNN-T architecture. Given a trained ASR model, let \(\mathbf{x}\) be the audio feature sequence extracted by the encoder, and \(\mathbf{y}\) be the label sequence. NAM learns a modified conditional probability \(p(\mathbf{y}|\mathbf{x}+\Delta)\) by incorporating into the ASR model an additional feature sequence \(\Delta\). To compute \(\Delta\), NAM utilizes an additional text encoder to extract embeddings of biasing phrases \(\{\mathcal{P}^{b}\}_{b=1}^{B}\), which are used to construct the associative memory, and another Multi-Head Attention (Vaswani et al., 2017) module that uses \(\mathbf{x}\) as query and the associative memory as keys and values, whose output context vector becomes \(\Delta\). Essentially, the attention module is used to detect the presence of biasing phrases in the audio. NAM is trained as part of the E2E model (typically with the base ASR model frozen), so that the likelihood of the ground truth, including the biasing phrase present in the audio, is maximized. At inference time, NAM introduces a biasing strength parameter \(s\geq 0\) to control the effect of external biasing phrases (Wu et al., 2023), and uses \(p(\mathbf{y}|\mathbf{x}+s\cdot\Delta)\) for decoding.
Given that NAM injects biasing information at the encoder output, while KMP biasing works at beam search, they can be complementary to each other, as is observed in our experiments. ## 4 Experiments We use a large RNN-Transducer (RNN-T, Graves, 2012) as our base ASR model. Our training set contains 520M utterances of English voice search queries; the total amount of audio is 490K hours. A small percentage of the training data is human transcribed while the rest is pseudo-labeled by a teacher (Hwang et al., 2022). We tokenize training transcripts, as well as biasing phrases, using a word-piece model (Schuster and Nakajima, 2012) with an inventory of 4096 tokens. All acoustic and text training data is anonymized and adheres to Google AI Principles (Google, 2023). We use 128-dimensional log Mel-filterbank energies, extracted from 32ms window and 10ms shift, as frontend features. After two 2D-convolution layers, both with strides 2, the resulting feature sequence has a frame rate of 40ms and becomes the input to a conformer encoder (Gulati et al., 2020). The encoder consists of 16 Conformer layers of attention dimension 1536, where each attention module has 8 heads, and each feedforward network has a hidden dimension of 6144. The RNN-T decoder uses a \(|V|^{2}\) embedding prediction network (Botros et al., 2021), which computes text features based on two previous non-blank tokens. The ASR model has a total of 870M parameters. For decoding, we perform label synchronous beam search with beam size \(K=8\). RNN-T has a special blank token which indicates non-emission and does not alter decoder and biasing states. The word error rate (WER) of our RNN-T system on an in-house test set of voice search queries is 3.8%. **Evaluation.** We use both voice-assistant based real-audio data and TTS synthetic data as described in Munkhdalai et al. (2023) for evaluation.
The real-audio test set Contact-Tag contains 7.6K utterances focusing on contact recognition (i.e., call $CONTACTS); each utterance is associated with 265 biasing entities, one of which is the true contact. The TTS data contains three categories:

1. **Anti-Biasing**: Utterances simulating general voice-assistant traffic (e.g., what's the weather); we use a super-set of that used in Munkhdalai et al. (2023) containing 10K utterances;
2. **With-Prefix**: 2.6K utterances with patterns such as: open $APPS, call $CONTACT, play $SONG;
3. **Without-Prefix**: 1.3K utterances with prefix-less patterns such as $APPS, $CONTACTS, or $SONGS.

The real-audio test set is anonymized and adheres to Google AI Principles (Google, 2023). The utterances are associated with up to 3K biasing entities in total, and the maximum number of tokens in a biasing phrase is 16. With-Prefix and Without-Prefix evaluate in-domain performance (one of the biasing entities appears in the transcript truth), while Anti-Biasing evaluates out-of-domain performance (biasing entities are irrelevant to the transcript truth). In general, a larger set of biasing entities leads to more confusion in the ASR model and worse WER. We tune the hyper-parameters of methods based on averaged WERs on Anti-Biasing and With-Prefix; Without-Prefix and Contact-Tag are treated as test sets.
\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multicolumn{4}{c|}{KMP Biasing} \\ \cline{2-5} & OTF Rescoring & Fusion \(F=10\) & Fusion \(F=50\) & Fusion \(F=4096\) \\ & \(\delta=2.4\) & \(\delta=2.2\) & \(\delta=2.3\) & \(\delta=2.3\) \\ \hline \hline \multicolumn{5}{|c|}{Anti-Biasing, without-biasing WER: **1.7**} \\ \hline \(B=150\) & 1.7 & 1.7 & 1.7 & 1.7 \\ \(B=600\) & 1.8 & 1.8 & 1.8 & 1.8 \\ \(B=3000\) & 2.1 & 2.2 & 2.3 & 2.3 \\ \hline \hline \multicolumn{5}{|c|}{With-Prefix, without-biasing WER: 9.6} \\ \hline \(B=150\) & 4.1 & 3.7 & 2.6 & **2.4** \\ \(B=600\) & 4.5 & 4.0 & 2.9 & **2.7** \\ \(B=3000\) & 5.1 & 4.6 & 3.8 & **3.6** \\ \hline \hline \multicolumn{5}{|c|}{Without-Prefix, without-biasing WER: 20.9} \\ \hline \(B=150\) & 7.9 & 7.7 & 5.5 & **4.8** \\ \(B=600\) & 8.4 & 8.0 & 5.8 & **5.3** \\ \(B=3000\) & 10.1 & 9.6 & 7.9 & **7.4** \\ \hline \hline \multicolumn{5}{|c|}{Contact-Tag, without-biasing WER: 14.7} \\ \hline \(B=265\) & 8.7 & 8.3 & 7.8 & **7.7** \\ \hline \end{tabular} \end{table} Table 1: WER (%) results obtained by KMP biasing. ### Results by KMP biasing We first present the results of KMP biasing by itself, without combining with NAM or score boosting by prefixes. In both the OTF rescoring and shallow fusion modes, we tune the hyper-parameter \(\delta\), which is the biasing bonus per token along phrases. We observe that, as we increase \(\delta\), WERs on biasing sets first decrease, then stay low for a range of values, and eventually increase. We provide WER curves of varying \(\delta\) in Figure 1 (Appendix C) for both modes. We provide WERs together with the optimal \(\delta\) in Table 1, for OTF rescoring and three settings of \(F\) for shallow fusion. Our method achieves significant WER reduction over the base ASR model on all biasing sets, e.g., by 50% to over 70% relative on the With-Prefix set with \(B=150\) phrases, while not degrading the Anti-Biasing set by much.
We observe that shallow fusion consistently outperforms OTF rescoring as expected, and in general a larger \(F\) leads to better WER. From \(F=50\) to the full vocabulary size \(4096\), improvements start to saturate; we find \(F=50\) to offer a good balance between accuracy and efficiency and use it in the experiments below. ### Combining KMP biasing with model-based biasing Given that KMP biasing is applied during beam search and is agnostic to the base ASR model, one may wonder if its performance gain is additive to that of a strong model-based biasing method. We train a state-of-the-art NAM model on top of our base model, and its performance with normal beam search is provided in Table 2 (left column). We observe that model-based biasing does achieve superior results by itself, especially on Without-Prefix and Contact-Tag. We then perform KMP biasing on top of NAM, and the results are given in Table 2 (mid columns). We find it better to slightly tune down the biasing strength of NAM when combining it with KMP biasing, and now the optimal \(\delta\) is much smaller than those used in Section 4.1, as the output of NAM already contains strong biasing information. Nonetheless, KMP provides additional 20%-40% relative WER improvements over NAM across With-Prefix and Without-Prefix, and 8%-10% relative improvements on Contact-Tag, with small degradation on Anti-Biasing. ### Boosting KMP with prefixes Finally, we verify the effectiveness of prefixes and Algorithm 5 on top of NAM + KMP biasing. We provide three prefixes {call, open, play} as additional inputs to KMP biasing while NAM only uses biasing phrases as before. For With-Prefix and Contact-Tag, each test utterance comes from one of the App, Contact, and Song domains, and so it contains one of the prefixes while the other two act as distractors; Without-Prefix does not contain prefixes before biasing phrases by design. 
\begin{table} \begin{tabular}{|l||c||c|c||c|c|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{\begin{tabular}{c} NAM \\ \(s=0.6\) \\ \end{tabular} } & \multicolumn{2}{c||}{\begin{tabular}{c} NAM (\(s=0.5\)) + KMP biasing \\ \end{tabular} } & \multicolumn{2}{c|}{\begin{tabular}{c} NAM (\(s=0.5\)) + KMP w. boost \\ \end{tabular} } \\ \cline{3-6} & & \begin{tabular}{c} OTF Rescoring \\ \(\delta=0.6\) \\ \end{tabular} & \begin{tabular}{c} Fusion \(F=50\) \\ \(\delta=0.8\) \\ \end{tabular} & \begin{tabular}{c} OTF Rescoring \\ \(\delta=0.6,\lambda=2.0\) \\ \end{tabular} & \begin{tabular}{c} Fusion \(F=50\) \\ \(\delta=0.9,\lambda=1.5\) \\ \end{tabular} \\ \hline \hline \multicolumn{6}{|c||}{Anti-Biasing, without-biasing WER: **1.7**} \\ \hline \(B=150\) & 1.9 & 1.9 & 2.0 & 1.9 & 2.1 \\ \(B=600\) & 2.1 & 2.1 & 2.3 & 2.1 & 2.3 \\ \(B=3000\) & 2.2 & 2.2 & 2.4 & 2.2 & 2.5 \\ \hline \hline \multicolumn{6}{|c||}{With-Prefix, without-biasing WER: 9.6} \\ \hline \(B=150\) & 1.5 & 1.0 & 0.9 & 0.9 & **0.8** \\ \(B=600\) & 1.8 & 1.3 & 1.2 & 1.0 & **0.9** \\ \(B=3000\) & 2.8 & 2.2 & 2.1 & 1.9 & **1.7** \\ \hline \hline \multicolumn{6}{|c||}{Without-Prefix, without-biasing WER: 20.9} \\ \hline \(B=150\) & 1.8 & 0.9 & 0.8 & 1.0 & **0.8** \\ \(B=600\) & 2.1 & 1.2 & 1.0 & 1.3 & **1.0** \\ \(B=3000\) & 4.0 & 3.1 & 2.5 & 3.1 & **2.3** \\ \hline \hline \multicolumn{6}{|c||}{Contact-Tag, without-biasing WER: 14.7} \\ \hline \(B=265\) & 3.8 & 3.5 & 3.4 & 3.1 & **3.0** \\ \hline \end{tabular} \end{table} Table 2: WER (%) results obtained by NAM + KMP biasing. \(s\) denotes the NAM biasing strength, \(\lambda\) denotes the score boosting factor with prefixes. We fix the NAM biasing strength to \(s=0.5\), tune the score boosting factor \(\lambda\) over {1.5, 2.0, 2.5}, and search \(\delta\) locally around the optimal values found in Section 4.2. Final results are shown in Table 2 (right columns). We obtain further gains on With-Prefix and Contact-Tag, while not degrading much on the other sets. 
In particular, we achieve a final 21% relative WER reduction on Contact-Tag with shallow fusion over NAM itself. We also observe that OTF rescoring prefers a larger \(\lambda\) than shallow fusion, as it has less chance to be confused by mis-recognized prefixes and wrongly boosted bonuses. Conducting full-fledged experiments with more complex prefixes is left for future work. ## 5 Conclusions We have proposed a TPU-friendly implementation of pattern-matching based biasing, and demonstrated the effectiveness of its variants on large-scale voice search queries. Our method achieves significant WER reduction on biasing sets without introducing additional learning parameters, and is complementary to a strong model-based biasing method. There are several directions for future research. To scale up our method to more than thousands of biasing phrases, we may study deep integration with NAM+ (Munkhdalai et al., 2023), which performs a quick filtering of a large number of unlikely biasing phrases before conducting more careful search over the remaining ones. Our current implementation uses a fixed per-token score, and it is straightforward to incorporate an external neural language model for improved bonus computation (Le et al., 2021). Finally, our on-TPU implementation enables training with matching-based biasing, and it is an interesting research question to design a suitable learning objective for further performance gain.
2309.08447
A theory of phonon induced friction on molecular adsorbates
In this manuscript, we provide a general theory for how surface phonons couple to molecular adsorbates. Our theory maps the extended dynamics of a surface's atomic vibrational motions to a generalized Langevin equation, and by doing so captures these dynamics in a single quantity: the non-Markovian friction. The different frequency components of this friction are the phonon modes of the surface slab weighted by their coupling to the adsorbate degrees of freedom. Using this formalism, we demonstrate that physisorbed species couple primarily to acoustic phonons while chemisorbed species couple to dispersionless local vibrations. We subsequently derive equations for phonon-adjusted reaction rates using transition state theory and demonstrate that these corrections improve agreement with experimental results for CO desorption rates from Pt(111).
Ardavan Farahvash, Adam P. Willard
2023-09-15T14:52:20Z
http://arxiv.org/abs/2309.08447v3
# A theory of phonon induced friction on molecular adsorbates ###### Abstract In this paper we provide a theory for how surface phonons couple to molecular adsorbates. Our theory maps the extended dynamics of a solid's vibrational motions to a generalized Langevin equation, and by doing so, encapsulates these dynamics in a single quantity: the non-Markovian friction. The different frequency components of this friction are the phonon modes of the solid weighted by their coupling to the adsorbate degrees of freedom. Using this formalism, we demonstrate that physisorbed species couple primarily to the acoustic phonons while chemisorbed species couple to dispersionless local vibrations. We subsequently derive equations for phonon-adjusted reaction rates using transition state theory and demonstrate that these corrections improve agreement with experimental results for CO desorption rates from Pt(111). ## 1 Introduction When a molecule binds to a solid surface, its properties and dynamics become intertwined with those of the solid. This interaction affects chemical reaction rates and mechanisms via changes to both the electronic and nuclear properties of the bound molecule. Recent advances in theoretical techniques have given a powerful perspective into the salient effects of surface binding on molecular electronic structure [1, 2, 3, 4]. However, despite numerous experimental and computational studies demonstrating the significant influences surface vibrations sometimes have on reaction rates and mechanisms [5, 6, 7, 8, 9], there remains comparatively little general insight into the role of solid nuclear vibrations in surface chemical processes. In this paper, we provide a general theory of how surface phonons couple to molecular adsorbates. Our method extends our previous approach of utilizing the generalized Langevin equation (GLE) to describe the influence of surface vibrational modes on molecular adsorbates. 
In our previous method [10], the GLE was employed to describe the effective dynamics of a surface binding site, while in the method we present here the surface is integrated over so that the GLE can be employed to describe the effective dynamics of the adsorbate itself. Using this technique, we demonstrate that phonons interact in a very different manner with physisorbed and chemisorbed species. Specifically, physisorbed species couple primarily to acoustic phonons across a broad range of frequencies, while chemisorbed species couple primarily to dispersionless local vibrations. By combining these observations with harmonic transition state theory, we derive equations explicitly showing how phonons alter reaction rates at solid surfaces and demonstrate that these phononic corrections agree with experimental measurements of desorption rate constants. Over the past four decades, a multiplicity of experimental and theoretical studies have demonstrated how surface vibrations modulate reaction dynamics at metal interfaces. For example, dissociation of both methane and N\({}_{2}\) on a wide variety of metal catalysts has been shown to be deeply intertwined with surface atom motion.[5, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] Another example is the enhancement of many reactions (such as the oxidation of CO or ethanol) by surface acoustic waves.[23, 24, 25, 26, 27, 28, 29] In these experiments, a piezoelectric driver is used to apply surface waves at a catalytic interface at specific frequencies and polarizations. For certain choices of frequency and polarization, rate enhancement is observed, while for others no noticeable change is seen. 
While several explanations have been given for this effect, there remains no clear consensus.[7] Much of the lack of consensus arises from the inability of current theory to connect the length and timescales of atomistic dynamics with the mesoscopic scale of experimentally relevant surface acoustic waves.[30] Inspired greatly by these experiments, we have sought to develop a theory which can capture how surface phonons couple to molecular adsorbates and bridge the gap between mesoscopic and molecular scales. In order to accomplish this task, we will integrate over the solid nuclear coordinates to derive effective equations of motion for the adsorbate. As we demonstrate in Section 2, such a procedure naturally leads to a generalized Langevin equation (GLE), \[\ddot{\textbf{x}}_{A}=-\frac{\partial\tilde{V}}{\partial\textbf{x}_{A}}-\int_{ 0}^{t}\textbf{K}(t-\tau)\dot{\textbf{x}}_{A}(\tau)d\tau+\textbf{R}(t), \tag{1}\] where \(\textbf{x}_{A}\) is a vector of the mass-weighted displacements of the adsorbate degrees of freedom, \(\tilde{V}\) is the effective potential energy surface for the adsorbate, **K** is the non-Markovian friction or memory kernel, and **R** is the conjugate random force. The friction kernel, \(\textbf{K}(t)\), is central to determining the dynamics of the GLE and thus the dynamics of the adsorbate degrees of freedom. It encodes all relevant information about the solid's phonon frequencies, displacements, and how they couple to the adsorbate. Naturally, the friction kernel also plays a significant role in determining how phonons modulate reaction rates of adsorbed species. The primary results of this paper are two-fold. 
First, using our GLE formalism, we demonstrate a general pattern for how surface phonons interact with adsorbates, wherein species that are weakly coupled to the surface (physisorbed) interact primarily with acoustic phonons, and species that are strongly coupled to the surface (chemisorbed) interact primarily with dispersionless local modes. The key value that separates these regimes is the ratio of the solid's Debye frequency to the frequency of the adsorbate-surface bond. Figure 1 provides a schematic of this result. Second, using harmonic transition state theory, we derive a general formula for how phonons alter desorption rates from solid surfaces, \[k_{d}=\frac{\sqrt{\frac{\mu}{m}\omega_{\rm as}^{2}-K(t=0)}}{\omega_{\rm as}}\times \frac{\tilde{\omega}_{\rm D}}{\omega_{\rm D}}\times k_{d0} \tag{2}\] where \(k_{d0}\) is the rate constant for a static surface, \(\omega_{\rm as}\) is the frequency of the adsorbate-surface bond, \(K(t=0)\) is the instantaneous friction along the desorption coordinate, \(\mu\) is the reduced mass of the adsorbate and adsorption site, \(m\) is the adsorbate mass, \(\omega_{\rm D}\) is the solid's natural Debye frequency, and \(\tilde{\omega}_{\rm D}\) is the adjusted maximum phonon frequency due to adsorption. We demonstrate that the adjustments from Eq. 2 improve agreement with experimental results for CO desorption from Pt(111). The remainder of the paper is organized as follows. In Section 2, we present the formal theory behind the phonon-induced GLE and highlight the characteristics of the friction kernel. We then analyze results for the friction kernel across a range of values for \(\omega_{\rm as}\) for a variety of metal surfaces. In Section 3 we provide further insight into the results presented in Section 2 by analyzing the role dispersion plays in the phonon-induced friction. Finally, in Section 4, we use transition state theory to examine how phonons affect reaction rates at solid surfaces. 
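Eq. 2 is a direct algebraic correction and can be transcribed as a short function; the sketch below is a minimal illustration (argument names are ours, and all frequencies and the friction must be supplied in mutually consistent units).

```python
import math

def adjusted_desorption_rate(k_d0, omega_as, K0, mu, m, omega_D, omega_D_tilde):
    """Phonon-adjusted desorption rate constant, per Eq. 2.
    k_d0: static-surface rate constant; omega_as: adsorbate-surface bond
    frequency; K0: instantaneous friction K(t=0) along the desorption
    coordinate; mu: reduced mass of adsorbate and adsorption site;
    m: adsorbate mass; omega_D / omega_D_tilde: bare / adsorption-adjusted
    maximum (Debye) phonon frequency."""
    prefactor = math.sqrt((mu / m) * omega_as**2 - K0) / omega_as
    return prefactor * (omega_D_tilde / omega_D) * k_d0
```

In the limit of vanishing friction (\(K(t=0)=0\), \(\mu=m\), \(\tilde{\omega}_{\rm D}=\omega_{\rm D}\)) the correction factor reduces to unity and the static-surface rate is recovered.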
## 2 Coupling surface vibrations to adsorbates ### Theory We seek to understand the effect of a solid's surface vibrations on adsorbate degrees of freedom through integrating out the solid's nuclear degrees of freedom to derive effective equations of motion for the adsorbate. In order to do so we must make two critical assumptions. First, we assume that the nuclear degrees of freedom of the adsorbate and solid can be described classically and adiabatically. There are cases where adsorbate dynamics are non-adiabatic, and in such cases a mixed quantum-classical approach has been shown to be effective [31, 32], but such a theory is beyond the scope of this paper. Figure 1: 2D schematic illustrating the dominant phonon modes in terms of their coupling to the adsorbate or contribution to the friction kernel. \(\omega_{\rm as}\) is the frequency of the adsorbate-surface bond and \(\omega_{\rm D}\) is the solid’s Debye frequency. Second, we assume that the solid vibrations and surface-adsorbate interaction are both harmonic. Clearly, this assumption is an oversimplification, and therefore its consequences and limitations should be thoroughly tested. In this paper, we will develop theory within this harmonic approximation, and intend to subsequently publish results examining the role of anharmonicity. To begin, let \(\mathbf{x}_{A}\) and \(\mathbf{x}_{S}\) refer to the mass-weighted displacements of the adsorbate and solid nuclei respectively from their equilibrium positions. We decompose the total potential energy into contributions from the adsorbate and solid, \[V(\mathbf{x}_{A},\mathbf{x}_{S})=V_{A}(\mathbf{x}_{A})+V_{AS}(\mathbf{x}_{A}, \mathbf{x}_{S})+V_{S}(\mathbf{x}_{S}), \tag{3}\] where \(V_{A}\) is the adsorbate potential energy containing all intramolecular and intermolecular interactions, \(V_{S}\) is the solid potential energy surface, and \(V_{AS}\) is the adsorbate-solid interaction. 
By expanding \(V_{AS}\) and \(V_{S}\) to second order around the equilibrium displacements we may define \(\mathbf{G}_{AS}\), \(\mathbf{G}_{A}\), \(\mathbf{G}_{S}\), and \(\mathbf{H}_{S}\) whose elements are given by, \[G_{AS;ij} =\frac{\partial^{2}V_{AS}}{\partial x_{A,i}\partial x_{S,j}}, \tag{4}\] \[G_{A;ij} =\frac{\partial^{2}V_{AS}}{\partial x_{A,i}^{2}}\delta_{ij},\] (5) \[G_{S;ij} =\frac{\partial^{2}V_{AS}}{\partial x_{S,i}^{2}}\delta_{ij},\] (6) \[H_{S;ij} =G_{S;ij}+\frac{\partial^{2}V_{S}}{\partial x_{S,i}\partial x_{S,j}}, \tag{7}\] where \(\delta_{ij}\) is the Kronecker delta, \(\mathbf{H}_{S}\) is the solid's mass-weighted Hessian, \(\mathbf{G}_{AS}\) is the coupling between adsorbate and solid degrees of freedom, and \(\mathbf{G}_{A}\) and \(\mathbf{G}_{S}\) are diagonal matrices of oscillation frequencies for the adsorbate and solid degrees of freedom respectively. All derivatives in Eqns. 4-7 are evaluated at zero, per the definition of \(\mathbf{x}_{A}\) and \(\mathbf{x}_{S}\). Using this expansion, the equations of motion for adsorbate and solid atoms are, \[\ddot{\mathbf{x}}_{A} =-\frac{\partial V_{A}}{\partial\mathbf{x}_{A}}-\mathbf{G}_{A} \mathbf{x}_{A}-\mathbf{G}_{AS}\mathbf{x}_{S}, \tag{8}\] \[\ddot{\mathbf{x}}_{S} =-\mathbf{G}_{AS}^{T}\mathbf{x}_{A}-\mathbf{H}_{S}\mathbf{x}_{S}, \tag{9}\] respectively. We now introduce solid phonon coordinates \(\mathbf{u}_{S}=\mathbf{U}^{T}\mathbf{x}_{S}\) where \(\mathbf{U}\) is a matrix with the eigenvectors of \(\mathbf{H}_{S}\) as columns. 
In terms of these coordinates the equations of motion may be expressed as, \[\ddot{\mathbf{x}}_{A} =-\frac{\partial V_{A}}{\partial\mathbf{x}_{A}}-\mathbf{G}_{A} \mathbf{x}_{A}-\mathbf{C}\mathbf{u}_{S}, \tag{10}\] \[\ddot{\mathbf{u}}_{S} =-\mathbf{C}^{T}\mathbf{x}_{A}-\boldsymbol{\omega}^{2}\mathbf{u}_ {S}, \tag{11}\] where \(\mathbf{C}=\mathbf{G}_{AS}\mathbf{U}\) is a matrix of couplings between each adsorbate coordinate and solid phonon mode, and \(\mathbf{\omega}^{2}=\mathbf{U}^{T}\mathbf{H}_{S}\mathbf{U}\) is a diagonal matrix of the squared frequencies of the solid phonons. Solving Eq. 11 in terms of \(\mathbf{x}_{A}\) and plugging back into Eq. 10 yields a GLE for the adsorbate degrees of freedom, \[\ddot{\mathbf{x}}_{A}=-\frac{\partial V_{A}}{\partial\mathbf{x}_{A}}-[ \mathbf{G}_{A}-\mathbf{K}(t=0)]\,\mathbf{x}_{A}(t)-\int_{0}^{t}\mathbf{K}(t- \tau)\dot{\mathbf{x}}_{A}(\tau)d\tau+\mathbf{R}(t), \tag{12}\] where the friction kernel \(\mathbf{K}(t)\) is given by, \[\mathbf{K}(t)=\mathbf{C}\frac{\cos(\mathbf{\omega}t)}{\mathbf{\omega}^{2}}\mathbf{C}^ {T}, \tag{13}\] and statistics of the random force \(\mathbf{R}(t)\) are related to \(\mathbf{K}(t)\) by the second fluctuation-dissipation theorem, \[\frac{\left\langle\mathbf{R}(t)\mathbf{R}^{T}(0)\right\rangle}{k_{\mathbf{B}} T}=\mathbf{K}(t). \tag{14}\] The positive frequency components of the Fourier transform of the friction kernel are (up to a multiplicative constant), \[\bar{K}_{ab}(\omega)=\sum_{j}\frac{C_{aj}C_{bj}}{\omega_{j}^{2}}\delta(\omega -\omega_{j}), \tag{15}\] where \(\omega_{j}\) are the solid's phonon frequencies, which in the case of a continuous spectrum of frequencies leads to the expression, \[\bar{K}_{ab}(\omega)=\frac{C_{a}(\omega)C_{b}(\omega)}{\omega^{2}}\rho(\omega). \tag{16}\] Eqs. 12 through 16 contain a great deal of insight. Eq. 
12 illustrates that, in the harmonic approximation, the influence of phonon modes on adsorbates can be entirely captured in terms of the friction \(\mathbf{K}(t)\), as it determines both the properties of \(\mathbf{R}(t)\) via the fluctuation-dissipation theorem and the deviation of the adsorbate potential from a fixed solid. Eq. 15 and Eq. 16 reveal that the Fourier transform of this friction kernel is equivalent to the phonon density of states of the solid \(\rho(\omega)=\sum_{j}\delta(\omega-\omega_{j})\) weighted by the adsorbate-phonon coupling \(\mathbf{C}\). The Fourier transform of the friction kernel is often called the _spectral density_ of the environment. For any given model of an adsorbate and metal, such quantities may be readily calculated. In many cases Eq. 13 and Eq. 15 may further be simplified by realizing that adsorbates couple only to a handful of surface sites and thus \(\mathbf{G}_{AS}\) and \(\mathbf{C}\) are quite sparse. Consider the limit where a single atom per adsorbate couples only to a single surface atom with a force constant, \(\mu\omega_{\mathrm{as}}^{2}\), where \(\mu\) is the reduced mass of the adsorbate and surface atom. In such a case, Eq. 13 and 15 may be simplified to, \[K(t)=\frac{\mu^{2}}{mM}\omega_{\mathrm{as}}^{4}\sum_{j}\frac{U_{sj}^{2}}{ \omega_{j}^{2}}\cos(\omega_{j}t), \tag{17}\] \[\bar{K}(\omega)=\frac{\mu^{2}}{mM}\omega_{\mathrm{as}}^{4}\sum_{j}\frac{U_{sj }^{2}}{\omega_{j}^{2}}\delta(\omega-\omega_{j}), \tag{18}\] where \(U_{sj}\) is the expansion coefficient of the surface displacement \(s\) in the \(j\)th normal mode, \(m\) is the mass of the adsorbate atom, and \(M\) is the mass of the surface atom. ### Results and discussion We have computed the phonon-induced friction for a simple model of CO on Pt(111). The CO was assumed to interact with a single adsorption site, as experimental structures show that CO adsorbs primarily atop Pt(111) sites at low surface coverages [33, 34]. 
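Eq. 17 can be evaluated directly from a model Hessian. The sketch below is illustrative (function and variable names are ours, and zero-frequency rigid-body modes are crudely regularized by a numerical floor): diagonalize \(\mathbf{H}_{S}\), take the surface-site row of the eigenvector matrix, and sum the weighted cosines.

```python
import numpy as np

def single_site_kernel(H_S, s, omega_as, mu, m, M, times):
    """Single-site friction kernel of Eq. 17:
    K(t) = (mu^2/(m*M)) * omega_as^4 * sum_j (U_sj^2 / omega_j^2) cos(omega_j t).
    H_S: mass-weighted Hessian of the solid slab; s: index of the surface-site
    coordinate the adsorbate couples to; times: array of t values."""
    w2, U = np.linalg.eigh(H_S)              # squared phonon frequencies, modes
    w2 = np.clip(w2, 1e-12, None)            # guard rigid-body zero modes
    omega = np.sqrt(w2)
    weights = U[s, :] ** 2 / w2              # U_sj^2 / omega_j^2
    pref = (mu**2 / (m * M)) * omega_as**4
    return pref * np.array([np.sum(weights * np.cos(omega * t)) for t in times])
```

The Fourier transform of the returned array recovers the spectral density of Eq. 18 up to the broadening implied by the finite time grid.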
Both theory and experiment show that the frequency of the Pt-CO stretch, which we define as \(\omega_{\text{as}}\), is around 480 cm\({}^{-1}\)[34, 35, 36]. In this section we vary \(\omega_{\text{as}}\) in order to assess how the magnitude of the surface-adsorbate coupling affects the phonon-induced friction kernel \(K(t)\). The metal potential energy surface \(V_{S}\) was modeled using an effective medium theory (EMT) forcefield developed by Norskov et al. [37] More details about the calculations can be found in Section 5. Figure 2 illustrates results for the phonon friction kernel in the direction normal to the surface plane, \(K(t)\), its Fourier transform \(\bar{K}(\omega)\), and the phonon density of states \(\rho(\omega)\), for three values of \(\omega_{\text{as}}\) using 4x4x8 and 8x8x8 Pt(111) surface slabs. These three values represent three distinct physical limits of the adsorbate-surface coupling. Figure 2: (A) Friction kernel, (B) spectral density, and (C) density of states for Pt(111) and three values of \(\omega_{\text{as}}\). The density of states in (C) was calculated using the 4x4x8 surface slab. The leftmost column of Figure 2 -- \(\omega_{\rm as}=100\) cm\({}^{-1}\) -- represents a weak coupling limit where the adsorbate-surface interaction frequency is less than platinum's Debye frequency \(\omega_{\rm D}=156\) cm\({}^{-1}\). We identify this limit with physisorption, as noble gases have been shown to have surface interaction frequencies of \(100\) cm\({}^{-1}\) or less with Pt(111). The rightmost column -- \(\omega_{\rm as}=500\) cm\({}^{-1}\) -- represents a strong coupling limit where the adsorbate-surface frequency is much greater than the solid's Debye frequency. We identify this limit with chemisorption as it is near the aforementioned interaction frequency for CO. The center column -- \(\omega_{\rm as}=300\) cm\({}^{-1}\) -- represents an intermediate case. 
Delta function peaks in \(\bar{K}(\omega)\) and \(\rho(\omega)\) were broadened to thin Lorentzians of width \(1\) cm\({}^{-1}\) for ease of visualization. For all cases of \(\omega_{\rm as}\), the adsorbate couples strongly to a surface acoustic mode around \(\omega=10\) cm\({}^{-1}\). The frequency of this peak is observed to be highly dependent on the dimensions of the surface slab, consistent with the behavior of an acoustic phonon. In Section 3 we will give a more detailed theoretical and numerical analysis of the size effects observed here, fully accounting for phonon dispersion. In the physisorbed limit, this acoustic mode dominates over the others, resulting in highly non-Markovian friction. The mode associated with this peak can be shown to arise from the flexing of the lattice in the direction perpendicular to the surface. Snapshots of this mode are provided in Figure 3A, and entire movies are provided in the electronic SI. In contrast to the physisorbed limit, in the chemisorbed limit the adsorbate couples most strongly to a high frequency mode around \(\omega=210\) cm\({}^{-1}\). This mode arises from the local oscillations of the surface site the adsorbate is bound to. The frequency of this mode is seen to be independent of unit cell size, suggesting that it is dispersionless. The large amplitude of this mode signifies that in the chemisorbed limit, the collective motions of the bulk phonons of the solid do not contribute significantly to the adsorbate's motion. Instead, the surface site's local oscillations (which are shifted in frequency due to the presence of the adsorbate) dominate the contribution to the forces on the adsorbate. Many published models for reactive scattering on solid surfaces describe solid vibrations using only a single, harmonically bound surface atom [38, 39, 40, 12, 41]. These results explain why such a method is successful for strongly coupled species. 
Despite this large shift in the spectral density when varying \(\omega_{\rm as}\), in Figure 2C we demonstrate that the phonon density of states is nearly identical to that of a bare surface. The only significant change is the presence of the aforementioned surface site local mode at high values of \(\omega_{\rm as}\). These results seem physically valid given that the low frequency acoustic modes of a solid should be unaffected by gaseous species, especially at low pressures and surface coverages. Figure 3: Illustration of the modes with the largest amplitude contributions to \(K(t)\) and \(\bar{K}(\omega)\) in the (A) \(\omega_{\rm as}=100\) cm\({}^{-1}\) and (B) \(\omega_{\rm as}=500\) cm\({}^{-1}\) cases. In addition to Pt(111) with EMT, we have tested several other surfaces with different crystal structures, facets, and elemental compositions, using different forcefields. These results are presented in Section S1 of the SI. While the details of the friction kernel and spectral density certainly vary (e.g. Ru has a much higher Debye frequency than Pt), the behavior in the limits of physisorption (\(\omega_{\rm as}<\omega_{\rm D}\)) and chemisorption (\(\omega_{\rm as}>>\omega_{\rm D}\)) remains the same as in Figure 2. This independence with respect to the details of the atomistic model is appealing and can be understood from a perturbative perspective. Specifically, in the physisorbed limit, the motion of the adsorption site(s) is a small perturbation to the bulk phonon modes, while in the chemisorbed limit, the bulk phonon modes are a small perturbation to the motion of the adsorption sites. In the SI, we rigorously examine this statement by comparing perturbative schemes to the exact results for \(K(t)\) and \(\bar{K}(\omega)\). 
Finally, we note that in this section we have only studied the effects of phonons for a clean surface, and presumably surface heterogeneities, such as steps and defects, may add further richness and complexity to the picture we have provided here. We also note that we have only studied elemental solids, and the optical modes of polyatomic crystals could add another interesting dimension. While we believe such studies may prove fruitful, for the purposes of brevity we do not discuss such results in this paper. ## 3 Phonon dispersion The size-dependence of the acoustic peak in Figure 2 is troubling. In many studies, atomistic models are used as convenient substitutes when simulating macroscopic solids, particularly in application to molecular adsorption. Figure 2 demonstrates that such simulations suffer from finite size errors due to the maximum wavelength allowed by the boundary conditions of the surface unit cell. We emphasize that these finite-size artifacts are present even when using periodic boundary conditions to evaluate interatomic interactions/forces, as we did in the previous section, and that they can have a significant impact when computing measurable quantities such as sticking coefficients.[10] In this section, we illustrate how to generalize the theory presented in Section 2 in order to compute phonon friction kernels in the limit of an infinite surface by integrating the frequencies and eigenmodes across the first Brillouin zone. In crystalline solids, displacement by a lattice vector returns the same solid. This symmetry can be used to calculate the phonon frequencies and displacements of the bulk solid using a spatial Fourier transform of the mass-weighted Hessian. This result is known as Bloch's theorem. Let \(a\) and \(b\) be the indices of two primitive unit cells, and let \({\bf H}_{S}(a,b)\) be the mass-weighted Hessian of the crystal \(H_{S;ij}(a,b)=\frac{\partial^{2}V_{S}}{\partial x_{S,i}(a)\partial x_{S,j}(b)}\). 
The Fourier transform of this matrix may be expressed as, \[{\bf D}({\bf k})=\sum_{a,b}{\bf H}_{S}(a,b)e^{i{\bf k}\cdot({\bf r}_{a}-{\bf r }_{b})}, \tag{19}\] where \({\bf r}_{a}\) is the origin of the \(a\)th cell. \({\bf D}({\bf k})\) is known as the _dynamical matrix_ satisfying, \[{\bf D}({\bf k}){\bf U}_{j}({\bf k})=\omega_{j}^{2}({\bf k}){\bf U}_{j}({\bf k }). \tag{20}\] where \(\omega_{j}\) is the \(j\)th phonon band frequency and \({\bf U}_{j}\) is the corresponding polarization vector. For 3D monatomic crystals, the primitive (Wigner-Seitz) cell consists of a single atom with three degrees of freedom, thus producing three phonon bands. These bands are the transverse and longitudinal acoustic modes, and are characterized by linear dispersion at low wavenumbers. They are illustrated for FCC platinum in Figure 4A. In a surface slab, the presence of anisotropy breaks the symmetry of the 3D crystal. This symmetry breaking leads to additional surface mode bands, which can be acoustic (Rayleigh waves), but non-acoustic bands are also often observed [42, 43, 44]. In Figures 4B and 4C we demonstrate the phonon dispersion of a Pt(111) surface calculated via EMT. These results were calculated using a 4x4x8 primitive surface replicated in a 6x6 super-cell. We verified convergence with respect to supercell size. Of course, the number of surface phonon bands depends on the size of the surface unit cell used in computing \(\mathbf{D}(\mathbf{k})\); however, we demonstrate in Figure S6 that results are similar across different unit cell sizes. Figure 4B illustrates the phonon dispersion for a bare surface, while Figure 4C illustrates the dispersion for a surface with an adsorbed CO. The bands are colored based on their mean slope between the \(\Gamma\) and \(K\) high-symmetry points. The three lowest frequency bands are all surface acoustic modes. The flexing mode depicted in Figure 3A. 
is precisely one of these bands, confirming that the corresponding peak in the phonon spectral density in Figure 2 arises from an acoustic phonon. The remaining modes are nearly dispersionless -- especially the highest frequency mode in Figure 4C, which corresponds to the surface site local vibration depicted in Figure 3B. The dispersionless nature of this mode explains why it was not seen to depend on surface slab size in Figure 2. Using these dispersion relations, we can average the friction kernel across the first Brillouin zone, \[K(t)=\frac{\mu^{2}}{mM}\omega_{\text{as}}^{4}\sum_{j}\sum_{\mathbf{k}}\ \frac{\left|U_{sj}(\mathbf{k})\right|^{2}}{\omega_{j}(\mathbf{k})^{2}}\cos( \omega_{j}(\mathbf{k})t), \tag{21}\] \[\bar{K}(\omega)=\frac{\mu^{2}}{mM}\omega_{\text{as}}^{4}\sum_{j}\sum_{\mathbf{ k}}\ \frac{\left|U_{sj}(\mathbf{k})\right|^{2}}{\omega_{j}(\mathbf{k})^{2}}\delta( \omega-\omega_{j}(\mathbf{k})), \tag{22}\] Figure 4: Phonon dispersion curves. (Left) Bulk Pt dispersion curves calculated using an EMT forcefield and a 10x10x10 atom supercell. (Middle) Dispersion curves for a 4x4x8 atom surface slab replicated in a 6x6 surface cell. (Right) Same as middle but with an adsorbed CO molecule, corresponding to \(\omega_{\text{as}}=480\) cm\({}^{-1}\). where the outer sum goes over all phonon bands, and the inner sum over the first Brillouin zone. The outer sum and inner sum together sum over all phonon modes. Eqns. 21 and 22 simply separate the sum in Eqns. 17 and 18 into two parts: the outer sum varying intra-cell displacements and the inner sum varying inter-cell displacements. In principle, this procedure eliminates the dependence on boundary conditions and generalizes results from a surface unit cell to an infinite surface. However, the depth of the surface (corresponding to the non-periodic dimension) is still limited to the depth of the surface unit cell. The results in the previous section essentially correspond to only taking the value of Eqns. 
21 and 22 at \(\mathbf{k}=0\). Figure 5 demonstrates the results of Eqns. 21 and 22 and compares them to previous results for a single unit cell slab. Naturally, the strong coupling of the adsorbate to a single acoustic mode is broadened, resulting in a much flatter spectral density in the low frequency range. Such a flat spectral density is characteristic of Markovian (white) noise. The high frequency range of the spectral density is largely unaltered, due to the dispersionless nature of the high frequency modes. In the SI, we discuss how a flat spectral density for acoustic phonons is consistent with predictions from continuum elastic theory. ## 4 Phonon effects on reaction rates The most salient motivation for the study of phononic friction is to elucidate the role of phonons in reaction rates at catalytic interfaces. As was first noted by Kramers [45], and expanded on by many others [46, 47, 48, 49], the friction of the environment plays a critical role in determining reaction rates, especially in determining rate prefactors. In this section, we use transition state theory (TST) to demonstrate how surface phonons affect reaction rates near thermal equilibrium, emphasizing the role of the phonon friction kernel. We then specialize to the case of desorption rates and compare the results of our model to experimental temperature-dependent rate constants for CO and Xe desorption from Pt(111). Many details of the formalism given in this section have been omitted for the purpose of providing a concise presentation of the theory that focuses on physical ramifications. A thorough presentation of the theory is provided in the SI for interested readers. Figure 5: Spectral density for Pt(111) surface phonons. The black line is equivalent to the 4x4x8 results presented in Figure 1. The blue dashed line was calculated by averaging the results for the black line across \(k\)-space (Eqns. 21 and 22).
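The flattening produced by the Brillouin-zone average in Eqns. 21 and 22 can be illustrated with a minimal numpy sketch. This is a toy 1D monatomic chain with a single acoustic band and a uniform coupling constant to every mode -- not the EMT Pt(111) surface used in the text -- with all physical prefactors absorbed into a single constant:

```python
import numpy as np

# Toy model: 1D monatomic chain with nearest-neighbor springs.
# Single acoustic band: omega(k) = 2*sqrt(kappa/m)*|sin(k*a/2)|.
m, kappa, a = 1.0, 1.0, 1.0
Nk = 400
k = np.linspace(-np.pi / a, np.pi / a, Nk, endpoint=False)
omega = 2.0 * np.sqrt(kappa / m) * np.abs(np.sin(0.5 * k * a))

# Analogue of Eq. 21: average cos(omega*t)/omega^2 over the Brillouin
# zone, with prefactors and couplings absorbed into a constant c.
c = 1.0
nonzero = omega > 1e-12          # drop the k = 0 translation mode
t = np.linspace(0.0, 50.0, 501)
K = np.array([np.mean(c**2 / omega[nonzero]**2 * np.cos(omega[nonzero] * ti))
              for ti in t])

# The instantaneous friction K(0) bounds the kernel; averaging over k
# dephases the modes and the kernel decays from its initial value.
assert np.isclose(K[0], np.mean(c**2 / omega[nonzero]**2))
assert np.all(np.abs(K[1:]) < K[0])
```

The same dephasing across the continuum of acoustic modes is what broadens the single sharp low-frequency peak of a finite slab into the flat spectral density of Figure 5.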
### Transition State Theory #### 4.1.1 General theory The Eyring TST equation for the rate constant may be expressed as, \[k=\frac{1}{\beta h}\frac{Q^{\ddagger}}{Q_{R}}e^{-\beta\Delta E^{\ddagger}}, \tag{23}\] where \(h\) is Planck's constant, \(\Delta E^{\ddagger}\) is the potential energy difference between the reactant state and the transition state, \(Q^{\ddagger}\) is the partition function at the transition state, and \(Q_{R}\) is the partition function in the reactant basin. By expanding these partition functions to second order about the transition state and the reactant basin, we can analytically evaluate \(Q_{R}\) and \(Q^{\ddagger}\), yielding \[k=\frac{1}{2\pi}\times\frac{\prod\limits_{i=0}^{N-1}f_{i}}{\prod\limits_{i=1}^{N-1}f_{i}^{\ddagger}}\times e^{-\beta\Delta E^{\ddagger}}, \tag{24}\] where \(N\) is the total number of modes, \(f_{i}\) are the vibrational frequencies in the reactant basin, and \(f_{i}^{\ddagger}\) are the _non-imaginary_ vibrational frequencies at the transition state. Note that since the transition state is defined to be a saddle point, it always has one imaginary frequency mode, and this mode corresponds to the reaction coordinate. Pollak illustrated that Eq. 24 is equivalent to Kramers and Grote-Hynes rate theories [50]. In a gas phase molecular reaction, \(f_{i}\) are the eigenfrequencies of the molecular mass-weighted Hessian.
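As a concrete illustration of Eq. 24, the sketch below evaluates the harmonic TST rate from toy reactant and transition-state mass-weighted Hessians. The matrices, barrier height, and temperature are invented for illustration (arbitrary units), not taken from the systems studied in the text:

```python
import numpy as np

beta = 1.0 / 0.0259          # inverse temperature, ~300 K in eV^-1
dE = 0.5                     # assumed barrier Delta E^ddagger (eV)

# Toy mass-weighted Hessians (arbitrary units):
H_R = np.array([[2.0, 0.3],
                [0.3, 1.0]])         # reactant basin: positive definite
H_TS = np.array([[-0.5, 0.0],
                 [0.0, 1.2]])        # saddle point: one negative eigenvalue

w2_R = np.linalg.eigvalsh(H_R)
w2_TS = np.linalg.eigvalsh(H_TS)

f_R = np.sqrt(w2_R)                  # all reactant frequencies
f_TS = np.sqrt(w2_TS[w2_TS > 0])     # keep only non-imaginary TS modes

# Eq. 24: ratio of frequency products times the Boltzmann factor.
k_tst = np.prod(f_R) / np.prod(f_TS) / (2.0 * np.pi) * np.exp(-beta * dE)

assert len(f_TS) == len(f_R) - 1     # exactly one imaginary mode dropped
assert k_tst > 0.0
```

The single negative eigenvalue of the saddle-point Hessian is discarded, exactly as the restriction to non-imaginary \(f_{i}^{\ddagger}\) in Eq. 24 prescribes.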
However, in a reaction at a solid interface, both phonon modes and molecular/adsorbate modes will contribute to the rate constant, \[k=\frac{1}{2\pi}\times\prod\limits_{i=0}^{N_{S}-1}\frac{\omega_{i}}{\omega_{i}^{\ddagger}}\times\frac{\prod\limits_{i=0}^{N_{A}-1}\tilde{f}_{i}}{\prod\limits_{i=1}^{N_{A}-1}\tilde{f}_{i}^{\ddagger}}\times e^{-\beta\Delta E^{\ddagger}}, \tag{25}\] where \(N_{S}\) and \(N_{A}\) are the number of solid and adsorbate degrees of freedom respectively, \(\omega_{i}\) and \(\omega_{i}^{\ddagger}\) are the phonon frequencies in the reactant and transition state respectively, and \(\tilde{f}_{i}\) and \(\tilde{f}_{i}^{\ddagger}\) are effective molecular frequencies in the reactant and transition state respectively. Let \(\mathbf{H}_{A}\) be the mass-weighted Hessian of the adsorbate degrees of freedom; then \(\tilde{f}_{i}\) and \(\tilde{f}_{i}^{\ddagger}\) are the eigenfrequencies of an effective Hessian, \[\mathbf{\tilde{H}}_{A}=\mathbf{H}_{A}-\mathbf{C}\boldsymbol{\omega}^{-2}\mathbf{C}^{T}, \tag{26}\] where the shift term is equal to the instantaneous (zero-time) friction, \(\mathbf{K}(t=0)=\mathbf{C}\boldsymbol{\omega}^{-2}\mathbf{C}^{T}\). Comparing Eqns. 24 and 25 reveals that phonons introduce two distinct corrections to the rate constant: the ratio of phonon frequencies between the reactant and transition state, and the shift in adsorbate frequencies. The ratio of phonon frequencies is a kinetic correction factor. It is largest when the reactant molecules are strongly coupled to the surface and the transition state is not. In Figure 2C we demonstrated that an adsorbate strongly coupled to a surface affects only the highest frequency phonon mode, leaving the bulk of the phonon density of states unchanged.
Thus, we can approximate the ratio of phonon frequencies as, \[\prod_{i=0}^{N_{S}-1}\frac{\omega_{i}}{\omega_{i}^{\ddagger}}\approx\frac{\tilde{\omega}_{\text{D}}}{\omega_{\text{D}}}, \tag{27}\] where \(\tilde{\omega}_{\text{D}}\) is the highest frequency phonon mode when the reactants are bound to the surface and \(\omega_{\text{D}}\) is the bare solid Debye frequency. In contrast, the shift in the adsorbate frequencies is a thermodynamic correction arising from phonons shifting the free energy surface along the reaction coordinate. Indeed, if we denote \(\tilde{f}_{0}\) as the adsorbate normal mode along the reaction coordinate, Eq. 25 can be simplified to \[k=\frac{\tilde{f}_{0}}{2\pi}\times\prod_{i=0}^{N_{S}-1}\frac{\omega_{i}}{\omega_{i}^{\ddagger}}\times e^{-\beta(\Delta E^{\ddagger}+T\Delta\tilde{S}^{\ddagger})}, \tag{28}\] where \(\Delta\tilde{S}^{\ddagger}\) is the effective barrier entropy, \[\Delta\tilde{S}^{\ddagger}=k_{\text{B}}\sum_{i=0}^{N_{A}-1}\ln\left(\frac{\tilde{f}_{i}^{\ddagger}}{\tilde{f}_{i}}\right). \tag{29}\] In the harmonic approximation this barrier entropy is independent of temperature, but in a more general context it can be shown to be temperature dependent (see SI). #### 4.1.2 Surface desorption In surface desorption, the details of the adsorbate degrees of freedom are unimportant, as the reaction coordinate can be defined to be the distance between the center of mass of the adsorbate and the surface. Furthermore, for most desorption processes the transition state is not coupled to the surface [51], and we can use Eq. 27.
With these assumptions the desorption rate constant is, \[k_{d}=\frac{\tilde{\omega}_{\text{as}}}{2\pi}\times\frac{\tilde{\omega}_{\text{D}}}{\omega_{\text{D}}}\times e^{-\beta\Delta E^{\ddagger}}, \tag{30}\] where \(\Delta E^{\ddagger}\) is the desorption barrier and \(\tilde{\omega}_{\text{as}}\) is the effective adsorbate-surface interaction frequency satisfying, \[\tilde{\omega}_{\text{as}}=\sqrt{\frac{\mu}{m}\omega_{\text{as}}^{2}-K(t=0)}. \tag{31}\] While transition state theory is fundamentally a classical theory [52, 53, 54, 55], studies have demonstrated improved agreement with experiment when introducing quantum corrections, such as adding a correction for the rotational motion of the molecule or using quantum harmonic oscillator partition functions instead of classical oscillator partition functions [51, 56]. In the subsequent section, we will compare four different models for the desorption rate constant to experimental results: (1) a fixed-surface model using classical harmonic oscillator partition functions and a rotation factor, \[k_{d1}=\frac{\omega_{\text{as}}}{2\pi}\times\frac{2I}{\hbar^{2}\beta}\times e^{-\beta\Delta E^{\ddagger}}, \tag{32}\] (2) a phonon-corrected model using classical harmonic oscillator partition functions and a rotation factor, \[k_{d2}=\frac{\tilde{\omega}_{\text{as}}}{2\pi}\times\frac{\tilde{\omega}_{\text{D}}}{\omega_{\text{D}}}\times\frac{2I}{\hbar^{2}\beta}\times e^{-\beta\Delta E^{\ddagger}}, \tag{33}\] (3) a fixed-surface model using quantum harmonic oscillator partition functions and a rotation factor, \[k_{d3}=\frac{1-e^{-\beta\hbar\omega_{\text{as}}}}{2\pi\beta\hbar}\times\frac{2I}{\hbar^{2}\beta}\times e^{-\beta\Delta E^{\ddagger}}, \tag{34}\] and (4) a phonon-corrected model using quantum harmonic oscillator partition functions and a rotation factor, \[k_{d4}=\frac{1-e^{-\beta\hbar\tilde{\omega}_{\text{as}}}}{2\pi\beta\hbar}\times\frac{1-e^{-\beta\hbar\tilde{\omega}_{\text{D}}}}{1-e^{-\beta\hbar\omega_{\text{D}}}}\times\frac{2I}{\hbar^{2}\beta}\times e^{-\beta\Delta E^{\ddagger}}. \tag{35}\] In the equations above \(I\) denotes the moment of inertia of the adsorbate. Note that by "fixed-surface" we do not mean a surface at absolute zero, but rather a surface that acts as an ideal, structureless thermal environment. ### Results and discussion We have compared results from Eqns. 32 to 35 to experimental temperature-dependent desorption rate constants for CO and Xe from a Pt(111) surface. Parameters used in computing Eqns. 32 to 35 are presented in Table 1. Note that because Xe is an atomic species, its moment of inertia is 0, and thus we ignore the rotational partition function factors for the Xe calculations. The phonon corrections in Eq. 33 and Eq. 35 were computed using a 4x4x8 EMT surface slab and subsequently averaged across the first Brillouin zone as described in Section 3. In the SI, we illustrate that the rate constant corrections we present here are not sensitive to the size of the surface slab used, or even to whether or not one accounts for phonon dispersion. The CO desorption rate constants were taken from Ref. [56] and the Xe desorption rates were taken from Ref. [57]. In Ref. [56], desorption rate constants were calculated by fitting the time-dependent flux from a beam scattering experiment to two models: a single exponential model and a biexponential model. The single exponential model fit the entire flux signal, mixing contributions from terraces and steps. Meanwhile, the biexponential model separated the flux into a fast component, arising from terrace desorption, and a slow component, arising from step-to-terrace diffusion followed by terrace desorption. While our TST calculations do not include the role of steps, we compare the results of our models to data both from the single exponential model and from the fast component of the biexponential model for thoroughness and transparency. In Figure 6A, we see that the phonon corrected models (Eqns.
33 and 35) give improved agreement with experimental results for CO desorption. In particular, \(k_{d4}\), the quantum flexible-surface model, together with the terrace desorption rate constants, yields the best agreement. \begin{table} \begin{tabular}{l l l l l} \hline \hline & \(\Delta E^{\ddagger}\) (eV) & \(\omega_{\text{as}}\) (cm\({}^{-1}\)) & \(\tilde{\omega}_{\text{as}}\) (cm\({}^{-1}\)) & \(\tilde{\omega}_{\text{D}}\) (cm\({}^{-1}\)) \\ \hline CO & 1.47[56] & 480[34] & 164 & 203 \\ Xe & 0.245[57] & 28[58] & 21 & 156 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters used for computing desorption rate constants shown in Figure 6. The improved agreement when using Eqns. 33 and 35 versus Eqns. 32 and 34 arises from the reduced adsorbate-surface frequency \(\tilde{\omega}_{\text{as}}\). Physically, the flexible surface relaxes the stiffness of the adsorbate-surface bond, leading to a lower frequency of attempts over the barrier and a lower rate prefactor. In Figure 6B, we demonstrate results for Xe. Here, all the TST models lie essentially on top of each other and are lower in value than the experimental rate constants, although by a small margin. The smaller phonon corrections for Xe versus CO are a natural result of the weaker interactions with the surface. Weaker coupling means a small \(K(t=0)\), leading to \(\tilde{\omega}_{\rm as}\approx\omega_{\rm as}\). Furthermore, weak coupling also results in a phonon density of states that is unchanged from a bare lattice, implying \(\tilde{\omega}_{\rm D}=\omega_{\rm D}\). In general, the theory presented in this section suggests the phonon corrections to the rate constant are much larger for chemisorbed species than for physisorbed species. Of course, it is worth emphasizing that using slightly different values for the surface binding energy, \(\Delta E^{\ddagger}\), can substantially shift the quality of agreement of theoretical calculations with experiment.
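To make the four desorption models concrete, the sketch below evaluates them for CO at a single temperature using the Table 1 parameters. The bare-Pt Debye frequency (156 cm\({}^{-1}\), quoted in the SI) and the CO moment of inertia are standard literature values rather than table entries, the temperature is chosen arbitrarily from the desorption-experiment range, and the quantum phonon factor is written with the dressed Debye frequency in the numerator, the form the classical model suggests:

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34    # J s
kB = 1.380649e-23         # J / K
c_cm = 2.99792458e10      # speed of light in cm/s, for cm^-1 -> rad/s
eV = 1.602176634e-19      # J

def omega_of(nu_cm):
    # wavenumber (cm^-1) -> angular frequency (rad/s)
    return 2.0 * np.pi * c_cm * nu_cm

# CO/Pt(111) parameters (Table 1); omega_D for bare Pt is the SI value,
# and I_CO is a standard literature moment of inertia, not from the table.
dE = 1.47 * eV
w_as, wt_as = omega_of(480.0), omega_of(164.0)
w_D, wt_D = omega_of(156.0), omega_of(203.0)
I_CO = 1.46e-46           # kg m^2

T = 700.0                 # K, an arbitrary temperature in the TPD range
beta = 1.0 / (kB * T)
q_rot = 2.0 * I_CO / (hbar**2 * beta)        # classical rotor factor
boltz = np.exp(-beta * dE)

k_d1 = w_as / (2 * np.pi) * q_rot * boltz                       # Eq. 32
k_d2 = wt_as / (2 * np.pi) * (wt_D / w_D) * q_rot * boltz       # Eq. 33
k_d3 = ((1 - np.exp(-beta * hbar * w_as)) / (2 * np.pi * beta * hbar)
        * q_rot * boltz)                                        # Eq. 34
k_d4 = ((1 - np.exp(-beta * hbar * wt_as)) / (2 * np.pi * beta * hbar)
        * (1 - np.exp(-beta * hbar * wt_D)) / (1 - np.exp(-beta * hbar * w_D))
        * q_rot * boltz)                                        # Eq. 35

# The softened effective bond lowers the prefactor of both corrected models.
assert k_d2 < k_d1 and k_d4 < k_d3
```

The final assertion reproduces the trend discussed above: the reduced \(\tilde{\omega}_{\text{as}}\) dominates the correction, so the phonon-corrected prefactors are smaller than their fixed-surface counterparts.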
The major impediment to the first principles calculation of chemical rates is still the calculation of the barrier energy, and the phonon corrections to the rate constants seem to be a comparatively minor factor, even for chemisorbed species. Indeed, the purpose of this section was not to demonstrate that the magnitude of phonon corrections to reaction rates is large, but rather to illustrate that the theoretical models we developed in Sections 2 and 3, when combined with transition state theory, produce physically interpretable results which correspond well with existing experimental measurements. Figure 6: Rate constants for desorption from a Pt(111) surface. (A) CO desorption. Grey squares are experimental data which mix contributions from both steps and terraces. Black circles refer to experimental data where the kinetics of terrace desorption was isolated. (B) Xe desorption. ## 5 Methods All calculations were performed using the Atomic Simulation Environment (ASE) Python package [59, 60, 61, 62, 63, 64]. Mass-weighted Hessians were computed using a finite-difference scheme in the vibrations module of ASE. Dynamical matrices were computed using the phonon module. The displacement size used in finite-difference calculations was 0.01. 100 points were used to uniformly sample the first Brillouin zone. When computing the friction kernels and spectral densities, all adsorbates were treated as an effective adatom, with a given coupling between the center of mass of the adsorbate and a surface atom. Friction kernels were computed for the three Cartesian degrees of freedom of this adatom. Additionally, a single atom in the bottom layer of each surface slab was constrained in order to remove center of mass motion. Without a constraint on the solid, center of mass diffusion of the entire lattice would dominate the contribution to the friction kernel. Results for friction kernels were computed using individual surface atoms as adsorption sites and subsequently averaged.
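The finite-difference construction of a mass-weighted Hessian described in the Methods can be sketched in a few lines. The toy 1D diatomic with a harmonic bond below stands in for the ASE vibrations workflow; the potential, masses, and step size are illustrative, not the paper's systems:

```python
import numpy as np

# Finite-difference mass-weighted Hessian for a toy 1D diatomic.
masses = np.array([1.0, 2.0])
k_bond = 4.0

def forces(x):
    # Forces for a 1D harmonic bond V = 0.5*k*(x1 - x0)^2.
    ext = k_bond * (x[1] - x[0])
    return np.array([ext, -ext])

delta = 0.01                     # displacement step
x0 = np.array([0.0, 1.0])
H = np.zeros((2, 2))
for i in range(2):
    xp, xm = x0.copy(), x0.copy()
    xp[i] += delta
    xm[i] -= delta
    # H_ij = d^2V/dx_i dx_j = -dF_j/dx_i, central difference
    H[i] = -(forces(xp) - forces(xm)) / (2.0 * delta)

# Mass weighting: H_mw = M^{-1/2} H M^{-1/2}
inv_sqrt_m = 1.0 / np.sqrt(masses)
H_mw = H * np.outer(inv_sqrt_m, inv_sqrt_m)

w2 = np.linalg.eigvalsh(H_mw)
# One zero (translation) mode plus one vibration at
# sqrt(k*(1/m0 + 1/m1)), i.e. sqrt(k/mu) with mu the reduced mass.
assert abs(w2[0]) < 1e-8
assert np.isclose(np.sqrt(w2[1]), np.sqrt(k_bond * (1 / 1.0 + 1 / 2.0)))
```

The zero eigenvalue is the center-of-mass translation mode; this is the same degree of freedom the Methods remove by constraining one atom of the slab.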
## 6 Conclusions In this manuscript we have developed a theory for how surface phonons couple to molecular adsorbates based on the generalized Langevin equation. By integrating out the solid degrees of freedom (assuming they can be described harmonically) we derived a GLE for the adsorbate, wherein the friction is a sum over the phonon modes of the solid weighted by their coupling to the adsorbate. We demonstrated that this friction kernel depends sensitively on the frequency of the adsorbate-surface bond. When the frequency of this bond is smaller than the Debye frequency of the solid, adsorbates couple primarily to the acoustic phonons of the solid. When the frequency of the bond is much larger than the Debye frequency of the solid, adsorbates couple primarily to the dispersionless local vibrations of the adsorption site. Subsequently, we used harmonic transition state theory to derive phononic corrections to reaction rate constants. We showed that these corrections improve agreement between theory and experiment for CO desorption rates from Pt(111). In subsequent publications we intend to further elaborate on the results presented in this paper by: (1) analyzing how anharmonicity affects the phononic friction, (2) examining the interplay between phonon and solvent fluctuations in solvated surface reactions, and (3) using the models derived in this paper to examine how surface acoustic waves can activate chemical reactions. ## Supplementary Material See the supplementary material for friction kernels for surfaces other than Pt(111), a perturbative analysis of the physisorbed and chemisorbed limits, more details on convergence of the friction kernel with respect to supercell size, a discussion of continuum elastic theory, and more details on the derivation of rate constants using transition state theory. ## Data Availability Data that support the findings of this study are available from the corresponding author upon reasonable request.
The code used to process memory kernels is available on Github [https://github.com/afarahva/glepy](https://github.com/afarahva/glepy). ## Acknowledgements AF and APW were supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-SC0019441. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Ardavan Farahvash acknowledges support from the National Science Foundation Graduate Research Fellowship program. ## References * Chen et al. 2021 Chen, B. W. J.; Xu, L.; Mavrikakis, M. Computational Methods in Heterogeneous Catalysis. _Chemical Reviews_**2021**, _121_, 1007-1048, Publisher: American Chemical Society. * Norskov et al. 2011 Norskov, J. K.; Abild-Pedersen, F.; Studt, F.; Bligaard, T. Density functional theory in surface chemistry and catalysis. _Proceedings of the National Academy of Sciences_**2011**, _108_, 937-943. * Koch et al. 2008 Koch, N.; Gerlach, A.; Duhm, S.; Glowatzki, H.; Heimel, G.; Vollmer, A.; Sakamoto, Y.; Suzuki, T.; Zegenhagen, J.; Rabe, J. P.; Schreiber, F. Adsorption-Induced Intramolecular Dipole: Correlating Molecular Conformation and Interface Electronic Structure. _Journal of the American Chemical Society_**2008**, _130_, 7300-7304, Publisher: American Chemical Society. * Michalsky et al. 2014 Michalsky, R.; Zhang, Y.-J.; Medford, A. J.; Peterson, A. A. Departures from the Adsorption Energy Scaling Relations for Metal Carbide Catalysts. _The Journal of Physical Chemistry C_**2014**, _118_, 13026-13034, Publisher: American Chemical Society. * Shakouri et al. 2018 Shakouri, K.; Behler, J.; Meyer, J.; Kroes, G.-J. Analysis of Energy Dissipation Channels in a Benchmark System of Activated Dissociation: N \({}_{2}\) on Ru(0001). _The Journal of Physical Chemistry C_**2018**, _122_, 23470-23480. * Alducin et al. 
2019 Alducin, M.; Camillone, N.; Hong, S.-Y.; Juaristi, J. I. Electrons and Phonons Cooperate in the Laser-Induced Desorption of CO from Pd(111). _Physical Review Letters_**2019**, _123_, 246802. * von Boehn et al. 2020 von Boehn, B.; Foerster, M.; von Boehn, M.; Prat, J.; Macia, F.; Casals, B.; Khaliq, M. W.; Hernandez-Minguez, A.; Aballe, L.; Imbihl, R. On the Promotion of Catalytic Reactions by Surface Acoustic Waves. _Angewandte Chemie International Edition_**2020**, _59_, 20224-20229. * Li et al. 2021 Li, W.-L.; Lininger, C. N.; Chen, K.; Vaissier Welborn, V.; Rossomme, E.; Bell, A. T.; Head-Gordon, M.; Head-Gordon, T. Critical Role of Thermal Fluctuations for CO Binding on Electrocatalytic Metal Surfaces. _JACS Au_**2021**, jacsau.1c00300. * Tetenoire et al. 2022 Tetenoire, A.; Ehlert, C.; Juaristi, J. I.; Saalfrank, P.; Alducin, M. Why Ultrafast Photoinduced CO Desorption Dominates over Oxidation on Ru(0001). _The Journal of Physical Chemistry Letters_**2022**, _13_, 8516-8521. * Farahvash et al. 2023 Farahvash, A.; Agarwal, M.; Peterson, A.; Willard, A. P. On using the generalized Langevin equation to model substrate phonons and their role in surface adsorption and desorption. 2023; [http://arxiv.org/abs/2301.04873](http://arxiv.org/abs/2301.04873). * Luntz and Bethune 1989 Luntz, A. C.; Bethune, D. S. Activation of methane dissociation on a Pt(111) surface. _The Journal of Chemical Physics_**1989**, _90_, 1274-1280. * Luntz and Harris 1991 Luntz, A. C.; Harris, J. CH4 dissociation on metals: a quantum dynamics model. _Surface Science_**1991**, _258_, 397-426. * Henkelman and Jonsson 2001 Henkelman, G.; Jonsson, H. Theoretical Calculations of Dissociative Adsorption of CH 4 on an Ir(111) Surface. _Physical Review Letters_**2001**, _86_, 664-667. * Nave and Jackson 2007 Nave, S.; Jackson, B. Methane Dissociation on Ni(111): The Role of Lattice Reconstruction. _Physical Review Letters_**2007**, _98_, 173003. * Nave and Jackson 2009 Nave, S.; Jackson, B.
Methane dissociation on Ni(111) and Pt(111): Energetic and dynamical studies. _The Journal of Chemical Physics_**2009**, _130_, 054701. * Nave et al. 2010 Nave, S.; Tiwari, A. K.; Jackson, B. Methane dissociation and adsorption on Ni(111), Pt(111), Ni(100), Pt(100), and Pt(110)-(1\(\times\)2): Energetic study. _The Journal of Chemical Physics_**2010**, _132_, 054705. * Tiwari et al. 2009 Tiwari, A. K.; Nave, S.; Jackson, B. Methane Dissociation on Ni(111): A New Understanding of the Lattice Effect. _Physical Review Letters_**2009**, _103_, 253201, Publisher: American Physical Society. * Tiwari et al. 2010 Tiwari, A. K.; Nave, S.; Jackson, B. The temperature dependence of methane dissociation on Ni(111) and Pt(111): Mixed quantum-classical studies of the lattice response. _The Journal of Chemical Physics_**2010**, _132_, 134702. * Egeberg et al. 2001 Egeberg, R. C.; Larsen, J. H.; Chorkendorff, I. Molecular beam study of N2 dissociation on Ru(0001). _Physical Chemistry Chemical Physics_**2001**, \(3\), 2007-2011. * Diekhoner et al. 2001 Diekhoner, L.; Mortensen, H.; Baurichter, A.; Jensen, E.; Petrunin, V. V.; Luntz, A. C. N2 dissociative adsorption on Ru(0001): The role of energy loss. _The Journal of Chemical Physics_**2001**, _115_, 9028-9035. * Nattino et al. 2015 Nattino, F.; Costanzo, F.; Kroes, G.-J. N2 dissociation on W(110): An ab initio molecular dynamics study on the effect of phonons. _The Journal of Chemical Physics_**2015**, _142_, 104702. * Shakouri et al. 2017 Shakouri, K.; Behler, J.; Meyer, J.; Kroes, G.-J. Accurate Neural Network Description of Surface Phonons in Reactive Gas-Surface Dynamics: N2 + Ru(0001). _The Journal of Physical Chemistry Letters_**2017**, \(8\), 2131-2136. * Mitrelias et al. 1998 Mitrelias, T.; Kelling, S.; Kvon, R. I.; Ostanin, V. P.; King, D. A. Effect of acoustic excitation on the rate of CO oxidation over Pt{110}. _Surface Science_**1998**, _417_, 97-106. * Kelling et al. 1998 Kelling, S.; Cerasari, S.; Rotermund, H.
H.; Ertl, G.; King, D. A. A photoemission electron microscopy (PEEM) study of the effect of surface acoustic waves on catalytic CO oxidation over Pt{110}. _Chemical Physics Letters_**1998**, _293_, 325-330. * Kelling et al. 1999 Kelling, S.; Saito, N.; Inoue, Y.; King, D. A. Surface morphological changes induced in catalysts by acoustic waves. _Applied Surface Science_**1999**, _150_, 47-57. * Nishiyama et al. 2000 Nishiyama, H.; Rattana, N.; Saito, N.; Sato, K.; Inoue, Y. Effects of Rayleigh Surface Acoustic Wave upon Adsorptive and Surface Properties of a Thin NiO Film. _The Journal of Physical Chemistry B_**2000**, _104_, 10602-10607, Publisher: American Chemical Society. * Nishiyama and Inoue 2005 Nishiyama, H.; Inoue, Y. IRAS study of surface acoustic wave effects on CO adsorbed on Cu surfaces. _Surface Science_**2005**, _594_, 156-162. * Nishiyama and Inoue 2006 Nishiyama, H.; Inoue, Y. PEEM study of work function changes in Cu, Au and Pd metal surfaces with surface acoustic wave propagation. _Surface Science_**2006**, _600_, 2644-2649. * Inoue 2007 Inoue, Y. Effects of acoustic waves-induced dynamic lattice distortion on catalytic and adsorptive properties of metal, alloy and metal oxide surfaces. _Surface Science Reports_**2007**, _62_, 305-336. * An et al. 2016 An, Q.; Qian, J.; Nielsen, R. R.; Sementa, L.; Barcaro, G.; Negreiros, F. R.; Fortunelli, A.; Goddard Iii, W. A. The quantum mechanics derived atomistic mechanism underlying the acceleration of catalytic CO oxidation on Pt(110) by surface acoustic waves. _Journal of Materials Chemistry A_**2016**, \(4\), 12036-12045. * Head-Gordon and Tully 1995 Head-Gordon, M.; Tully, J. C. Molecular dynamics with electronic frictions. _The Journal of Chemical Physics_**1995**, _103_, 10137-10145, Publisher: American Institute of Physics. * Hertl et al. 2021 Hertl, N.; Martin-Barrios, R.; Galparsoro, O.; Larregaray, P.; Auerbach, D. J.; Schwarzer, D.; Wodtke, A. M.; Kandratsenka, A. 
Random Force in Molecular Dynamics with Electronic Friction. _The Journal of Physical Chemistry C_**2021**, _125_, 14468-14473. * Ertl et al. 1977 Ertl, G.; Neumann, M.; Streit, K. M. Chemisorption of CO on the Pt(111) surface. _Surface Science_**1977**, _64_, 393-410. * Steininger et al. 1982 Steininger, H.; Lehwald, S.; Ibach, H. On the adsorption of CO on Pt(111). _Surface Science_**1982**, _123_, 264-282. * Lahee et al. 1986 Lahee, A. M.; Toennies, J. P.; Woll, C. Low energy adsorbate vibrational modes observed with inelastic helium atom scattering: CO on Pt(111). _Surface Science_**1986**, _177_, 371-388. * Gunasooriya and Saeys 2018 Gunasooriya, G. T. K. K.; Saeys, M. CO Adsorption on Pt(111): From Isolated Molecules to Ordered High-Coverage Structures. _ACS Catalysis_**2018**, \(8\), 10225-10233. * Norskov and Lang 1980 Norskov, J. K.; Lang, N. D. Effective-medium theory of chemical binding: Application to chemisorption. _Physical Review B_**1980**, _21_, 2131-2136. * Nattino et al. 2016 Nattino, F.; Galparsoro, O.; Costanzo, F.; Diez Muino, R.; Alducin, M.; Kroes, G.-J. Modeling surface motion effects in N2 dissociation on W(110): Ab initio molecular dynamics calculations and generalized Langevin oscillator model. _The Journal of Chemical Physics_**2016**, _144_, 244708. * Rittmeyer et al. 2018 Rittmeyer, S. P.; Bukas, V. J.; Reuter, K. Energy dissipation at metal surfaces. _Advances in Physics: X_**2018**, \(3\), 1381574. * Zhou and Jiang 2019 Zhou, X.; Jiang, B. A modified generalized Langevin oscillator model for activated gas-surface reactions. _The Journal of Chemical Physics_**2019**, _150_, 024704. * Zhou et al. 2020 Zhou, Y.; Zhou, L.; Hu, X.; Xie, D. Dynamics Studies of O2 Collision on Pt(111) Using a Global Potential Energy Surface. _The Journal of Physical Chemistry C_**2020**, _124_, 10573-10583. * Hong et al. 2005 Hong, S.; Rahman, T. S.; Heid, R.; Bohnen, K. P. 
First-principles calculations of the dispersion of surface phonons on unreconstructed and reconstructed Pt(110). _Physical Review B_**2005**, _72_, 205424. * Benedek et al. 2010 Benedek, G.; Bernasconi, M.; Chis, V.; Chulkov, E.; Echenique, P. M.; Hellsing, B.; Toennies, J. P. Theory of surface phonons at metal surfaces: recent advances. _Journal of Physics: Condensed Matter_**2010**, _22_, 084020. * Bortolani et al. 1989 Bortolani, V.; Franchini, A.; Santoro, G.; Toennies, J. P.; Woll, C.; Zhang, G. Surface phonons on the Pt(111) surface: A comparison of He-scattering experiments with lattice-dynamical calculations. _Physical Review B_**1989**, _40_, 3524-3545. * Kramers 1940 Kramers, H. A. Brownian motion in a field of force and the diffusion model of chemical reactions. _Physica_**1940**, \(7\), 284-304. * Grote and Hynes 1980 Grote, R. F.; Hynes, J. T. The stable states picture of chemical reactions. II. Rate constants for condensed and gas phase reaction models. _The Journal of Chemical Physics_**1980**, _73_, 2715-2732. * Carmeli and Nitzan 1983 Carmeli, B.; Nitzan, A. Non-Markovian theory of activated rate processes. I. Formalism. _The Journal of Chemical Physics_**1983**, _79_, 393-404. * Pollak et al. 1989 Pollak, E.; Grabert, H.; Hanggi, P. Theory of activated rate processes for arbitrary frequency dependent friction: Solution of the turnover problem. _The Journal of Chemical Physics_**1989**, _91_, 4073-4087. * Kappler et al. 2018 Kappler, J.; Daldrop, J. O.; Brunig, F. N.; Boehle, M. D.; Netz, R. R. Memory-induced acceleration and slowdown of barrier crossing. _The Journal of Chemical Physics_**2018**, _148_, 014903. * Pollak 1986 Pollak, E. Theory of activated rate processes: A new derivation of Kramers' expression. _The Journal of Chemical Physics_**1986**, _85_, 865-867. * Tully 1994 Tully, J. C. The dynamics of adsorption and desorption. _Surface Science_**1994**, _299-300_, 667-677. * Garrett and Truhlar 1979 Garrett, B. C.; Truhlar, D. G.
Generalized transition state theory. Classical mechanical theory and applications to collinear reactions of hydrogen molecules. _The Journal of Physical Chemistry_**1979**, _83_, 1052-1079. * Pollak and Talkner 2005 Pollak, E.; Talkner, P. Reaction rate theory: What it was, where is it today, and where is it going? _Chaos: An Interdisciplinary Journal of Nonlinear Science_**2005**, _15_, 026116. * Chandler 1978 Chandler, D. Statistical mechanics of isomerization dynamics in liquids and the transition state approximation. _The Journal of Chemical Physics_**1978**, _68_, 2959-2970. * Miller 2014 Miller, W. H. A Journey Through Chemical Dynamics. _Annual Review of Physical Chemistry_**2014**, _65_, 1-19. * Golibrzuch et al. 2015 Golibrzuch, K.; Shirhatti, P. R.; Geweke, J.; Werdecker, J.; Kandratsenka, A.; Auerbach, D. J.; Wodtke, A. M.; Bartels, C. CO Desorption from a Catalytic Surface: Elucidation of the Role of Steps by Velocity-Selected Residence Time Measurements. _Journal of the American Chemical Society_**2015**, _137_, 1465-1475. * Rettner et al. 1990 Rettner, C. T.; Bethune, D. S.; Schweizer, E. K. Measurement of Xe desorption rates from Pt(111): Rates for an ideal surface and in the defect-dominated regime. _The Journal of Chemical Physics_**1990**, _92_, 1442-1457. * Chen et al. 2012 Chen, D.-L.; Al-Saidi, W. A.; Karl Johnson, J. The role of van der Waals interactions in the adsorption of noble gases on metal surfaces. _Journal of Physics: Condensed Matter_**2012**, _24_, 424211. * Larsen et al. 2017 Larsen, A. H. et al. The atomic simulation environment--a Python library for working with atoms. _Journal of Physics: Condensed Matter_**2017**, _29_, 273002. * Tadmor et al. 2011 Tadmor, E. B.; Elliott, R. S.; Sethna, J. P.; Miller, R. E.; Becker, C. A. The potential of atomistic simulations and the Knowledgebase of Interatomic Models. _JOM_**2011**, _63_, 17. * Schiotz 1996 Schiotz, J. 
EMT potential for Ni developed by Jacobsen, Stolze, and Norskov (1996) v001. OpenKIM, [https://doi.org/10.25950/ef22cbc3](https://doi.org/10.25950/ef22cbc3), 2019. * Schiotz 2019 Schiotz, J. Effective Medium Theory (EMT) potential driver v004. OpenKIM, [https://doi.org/10.25950/7e5b8be7](https://doi.org/10.25950/7e5b8be7), 2019. * Tadmor 1986 Tadmor, E. EAM potential (LAMMPS cubic hermite tabulation) for Pt (Universal3) developed by Foiles, Baskes, and Daw (1986) v000. OpenKIM, [https://doi.org/10.25950/24de6537](https://doi.org/10.25950/24de6537), 2018. * Elliott 2018 Elliott, R. S. EAM Model Driver for tabulated potentials with cubic Hermite spline interpolation as used in LAMMPS v005. OpenKIM, [https://doi.org/10.25950/68defa36](https://doi.org/10.25950/68defa36), 2018. # Supplementary information for "A theory of phonon induced friction on molecular adsorbates" Ardavan Farahvash Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Adam P. Willard [email protected] Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA ## SI Friction kernels and spectral densities for various solid surfaces In Figures S1 through S4 we present phonon friction kernels and spectral densities for various 4x4x4 surface slabs. Figure S1 demonstrates results for a Pt(111) slab modeled with a Lennard-Jones (LJ) forcefield [1] and compares the results to the EMT forcefield [2] used in the main text. Figure S2 compares the results from adsorption on (111) surface facets to (100) surface facets. Figure S3 presents results for a BCC Fe(110) lattice. Figure S4 presents results for HCP Ru(0001). The Fe and Ru results were calculated using embedded atom forcefields from Ref. [3] and Ref. [4] respectively.
Note that since Ru has a much higher Debye frequency than Pt (225 cm\({}^{-1}\) for Ru and 156 cm\({}^{-1}\) for Pt), the dependence of the spectral density on \(\omega_{\text{as}}\) is shifted as well. All cases show the same qualitative dependence on \(\omega_{\text{as}}\) as presented for the Pt(111) surfaces. Figure S2: (A) Friction kernel and (B) spectral density for Pt(110) surface. ## SII Perturbation theory In this section, we compare the results for the phonon friction kernels and spectral densities calculated using exact diagonalization to two different perturbative schemes: one which agrees well with exact results in the physisorbed limit (when \(\omega_{\text{as}}\) is small), and the other which agrees well with exact results in the chemisorbed limit (when \(\omega_{\text{as}}\) is large). The formulas for the perturbative corrections are the familiar Rayleigh-Schrödinger perturbation theory equations. In this section, we will only use 1st and 2nd order corrections to the eigenvalues and 1st order corrections to the eigenvectors. We provide the formulas below for completeness. For the eigenvalues we have, \[\lambda_{i}\approx\lambda_{i}^{(0)}+\lambda_{i}^{(1)}+\lambda_{i}^{(2)}\] (S1) where \(\lambda_{i}\) is equal to the square phonon frequency \(\omega_{i}^{2}\) and \(\lambda_{i}^{(0)}\), \(\lambda_{i}^{(1)}\), and \(\lambda_{i}^{(2)}\) are the 0th, 1st, and 2nd order terms respectively. \[\lambda_{i}^{(1)}=\mathbf{P}_{i}^{(0)}\cdot\delta\mathbf{H}\cdot\mathbf{P}_{i }^{(0)},\] (S2) where \(\mathbf{P}_{i}^{(0)}\) is the \(i\)th eigenvector of the unperturbed Hessian and \(\delta\mathbf{H}\) is the perturbation term. 
\[\lambda_{i}^{(2)}=\sum_{j\neq i}\frac{\left(\mathbf{P}_{j}^{(0)}\cdot\delta \mathbf{H}\cdot\mathbf{P}_{i}^{(0)}\right)^{2}}{\lambda_{i}^{(0)}-\lambda_{j} ^{(0)}}.\] (S3) For the eigenvectors we have, \[\mathbf{P}_{i}\approx\mathbf{P}_{i}^{(0)}+\mathbf{P}_{i}^{(1)},\] (S4) where, \[\mathbf{P}_{i}^{(1)}=\sum_{j\neq i}\frac{\mathbf{P}_{j}^{(0)}\cdot\delta \mathbf{H}\cdot\mathbf{P}_{i}^{(0)}}{\lambda_{i}^{(0)}-\lambda_{j}^{(0)}} \mathbf{P}_{j}^{(0)}.\] (S5) Before proceeding with the application of perturbation theory, we separate the solid's mass-weighted Hessian \(\mathbf{H}_{S}\) into blocks corresponding to the adsorption site(s), the bulk atoms, and the coupling between the two, \[\mathbf{H}_{S}=\left(\begin{array}{c|c}\mathbf{H}_{X}&\mathbf{H}_{XB}\\ \hline\\ \mathbf{H}_{BX}&\mathbf{H}_{B}\\ \end{array}\right),\] (S6) where \(\mathbf{H}_{X}\) is the adsorption site(s) block, \(\mathbf{H}_{B}\) is the bulk block, and \(\mathbf{H}_{XB}\) is the coupling between blocks. Note that the size of \(\mathbf{H}_{B}\) is much larger than \(\mathbf{H}_{X}\), as most atoms in the solid can be treated as not interacting with the adsorbate. We may diagonalize \(\mathbf{H}_{B}\) to arrive at the following form, \[\mathbf{H}_{S}=\left(\begin{array}{c|c}\mathbf{H}_{X}&\mathbf{C}_{XB}\\ \hline\\ \mathbf{C}_{XB}^{T}&\mathbf{\Omega}_{B}^{2}\\ \end{array}\right),\] (S7) where \(\mathbf{C}_{XB}\) is the coupling between the adsorption site(s) and each bulk phonon mode and \(\mathbf{\Omega}_{B}^{2}\) is a diagonal matrix containing the square frequencies of these modes. With this setup we are ready to perform perturbation theory. We will illustrate the results of perturbation theory on a 4x4x4 Pt(111) lattice with the Hessian evaluated using the EMT forcefield [2]. 
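The Rayleigh-Schrödinger formulas above (Eqs. S1-S3) are straightforward to check against exact diagonalization. Below is a minimal numerical sketch in which a small random symmetric matrix stands in for the mass-weighted Hessian; the matrix size, eigenvalue gaps, and perturbation strength are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Unperturbed "Hessian": symmetric, with well-separated eigenvalues (gaps ~ 1).
B = rng.normal(size=(n, n))
H0 = np.diag(np.arange(1.0, n + 1)) + 0.05 * (B + B.T)

# Small symmetric perturbation delta-H, weak relative to the eigenvalue gaps.
C = rng.normal(size=(n, n))
dH = 0.02 * (C + C.T)

lam0, P0 = np.linalg.eigh(H0)            # 0th-order eigenvalues / eigenvectors
lam_exact = np.linalg.eigh(H0 + dH)[0]   # exact diagonalization

V = P0.T @ dH @ P0                       # V[j, i] = P_j^(0) . dH . P_i^(0)
lam1 = np.diag(V).copy()                 # Eq. (S2): 1st-order corrections
lam2 = np.array([sum(V[j, i] ** 2 / (lam0[i] - lam0[j])
                     for j in range(n) if j != i)
                 for i in range(n)])     # Eq. (S3): 2nd-order corrections

err0 = np.abs(lam_exact - lam0).max()
err1 = np.abs(lam_exact - (lam0 + lam1)).max()
err2 = np.abs(lam_exact - (lam0 + lam1 + lam2)).max()
```

For a perturbation weak relative to the gaps, the maximum eigenvalue error shrinks sharply with each added order, which is the accuracy hierarchy the two perturbative schemes rely on.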
#### SII.1 Weak-coupling/physisorbed limit In the physisorbed limit, we treat \(\mathbf{H}_{X}\) as the perturbation, \(\delta\mathbf{H}=\mathbf{H}_{X}\), and the remaining Hessian as the reference, \[\mathbf{H}_{0}=\left(\begin{array}{c|c}0&\mathbf{C}_{XB}\\ \hline\\ \mathbf{C}_{XB}^{T}&\mathbf{\Omega}_{B}^{2}\\ \end{array}\right).\] (S8) This scheme assumes that the contribution of a handful of adsorption sites to the bulk phonon modes is small. Figure S5 illustrates the spectral density calculated using this scheme, to first and second order, and compares it to the results from exact diagonalization. The 1st order results for \(K(\omega)\) use 1st order corrections for both the eigenvectors and eigenvalues, while the 2nd order results add 2nd order corrections to the eigenvalues while keeping the eigenvectors at 1st order. Both 1st and 2nd order results agree well with the exact results when \(\omega_{\text{as}}\) is less than platinum's Debye frequency \(\omega_{\text{D}}=156\) cm\({}^{-1}\), as expected, and even qualitatively capture results at \(\omega_{\text{as}}\approx\omega_{\text{D}}\). However, as the effective frequency of motion of the adsorption site, \[\tilde{\omega}_{s}=\sqrt{\frac{\mu}{M}\omega_{\text{as}}^{2}+\omega_{s}^{2}}\] (S9) approaches \(\omega_{\text{D}}\), this weak-coupling perturbative scheme fails. 
#### SII.2 Strong-coupling/chemisorbed limit In the chemisorbed limit, we treat the reference Hessian as one where the adsorption site(s) and the phonon modes of the bulk are uncoupled, \[\mathbf{H}_{0}=\left(\begin{array}{c|c}\mathbf{H}_{X}&0\\ \hline\\ 0&\mathbf{\Omega}_{B}^{2}\\ \end{array}\right),\] (S10) and therefore the perturbation is the coupling, \[\delta\mathbf{H}=\left(\begin{array}{c|c}0&\mathbf{C}_{XB}\\ \hline\\ \mathbf{C}_{XB}^{T}&0\\ \end{array}\right).\] (S11) In this scheme, to 0th order, the adsorbate and adsorption site(s) are completely independent of the other modes of the solid, which results in the phonon friction kernel being a single sinusoid with frequency \(\tilde{\omega}_{s}\). The perturbation introduces coupling between the adsorption site(s) and the bulk solid, allowing bulk phonons to contribute to the friction. Figure S6 illustrates the spectral density calculated using this scheme to first and second order, and compares it to the results from exact diagonalization. The 1st order results qualitatively match the form of the spectral density; however, they underestimate the frequency of the surface-site phonon mode. One can show that this mismatch arises because the first order corrections to the eigenvalues of the Hessian are zero in this perturbative scheme. Introducing second order corrections removes this discrepancy, leading to excellent agreement with the exact results when \(\tilde{\omega}_{s}>\omega_{\text{D}}\). As the effective frequency of motion of the adsorption site approaches \(\omega_{\text{D}}\) (now from above), this perturbative scheme qualitatively fails. The success of these two perturbative schemes in these separate physical regimes is characteristic of many non-equilibrium theories of a system in contact with a thermal bath, wherein systems with strong coupling and weak coupling to the environment are shown to exhibit significantly different behavior. 
The success of these perturbative schemes at reproducing the exact results within the appropriate regime suggests that the behavior we have observed is not a feature of a particular atomistic model, but a fundamental consequence of the response of phonon modes in different physical regimes. ## SIII Phonon dispersion ## SIV Elastic continuum theory In this section, we take an alternative approach to determining the phonon-induced friction in the limit of a macroscopic solid by using continuum elastic theory instead of atomistic models. Such an approach was originally explored by Ref. [5] and Ref. [6]. It is theoretically appealing due to the minimal, experimentally accessible free parameters used in elastic theory. However, it also suffers from several limiting assumptions, which we explicitly delineate. Elastic (energy conserving) acoustic waves in a material may be modeled via the Navier-Cauchy equation, \[\mathbf{\ddot{u}}(\mathbf{r},t)=c_{t}^{2}\vec{\nabla}^{2}\mathbf{u}(\mathbf{r},t)+ (c_{l}^{2}-c_{t}^{2})\nabla\left(\nabla\cdot\mathbf{u}(\mathbf{r},t)\right)+ \mathbf{F}(\mathbf{r},t),\] (S12) where \(\mathbf{u}(\mathbf{r},t)\) is the displacement of the solid at position \(\mathbf{r}=(x,y,z)\) and time \(t\), \(\vec{\nabla}^{2}\) is the vector Laplacian, \(\nabla\left(\nabla\cdot\right)\) is the gradient of the divergence, and \(\mathbf{F}\) is the external force. Solutions to Eq. S12 can generally be separated into zero-divergence and zero-curl components corresponding to the transverse and longitudinal modes respectively, \[\mathbf{u}(\mathbf{r},t)=\mathbf{u}_{t}(\mathbf{r},t)+\mathbf{u}_{l}( \mathbf{r},t),\] (S13) each of which satisfies a 3D wave equation, \[\mathbf{\ddot{u}}_{l/t}(\mathbf{r},t)=c_{l/t}^{2}\vec{\nabla}^{2}\mathbf{u}_{l/t}( \mathbf{r},t).\] (S14) Consider a single adsorbate degree of freedom whose displacement is denoted by \(q\). 
If this degree of freedom is harmonically coupled to the surface-normal (z-axis) displacement of the solid at position \(\mathbf{r}_{0}=(x_{0},y_{0},L_{z})\), then the coupled adsorbate-solid equations are, \[\ddot{q}(t)=-\frac{1}{m}\frac{dU_{A}}{dq}(t)-\frac{\mu}{m}\omega_{as}^{2}(q(t) -u_{z}(\mathbf{r}_{0},t))\] (S15) \[\mathbf{\ddot{u}}(\mathbf{r},t)-c_{t}^{2}\vec{\nabla}^{2}\mathbf{u}( \mathbf{r},t)-(c_{l}^{2}-c_{t}^{2})\nabla\left(\nabla\cdot\mathbf{ u}(\mathbf{r},t)\right)\\ =\frac{\mu\omega_{as}^{2}}{M}(q(t)-u_{z}(\mathbf{r}_{0},t))a^{3} \delta\left(\mathbf{r}-\mathbf{r}_{0}\right)\hat{z},\] (S16) where \(\hat{z}=(0,0,1)\) is the unit vector in the z-direction, \(a\) is the spacing between atoms in the crystal, and \(M\) is the mass of a solid atom. The \(a^{3}\) factor arises from taking the continuum limit of a force on a single lattice point and offsets the inverse-volume units of the 3D delta function \(\delta\left(\mathbf{r}-\mathbf{r}_{0}\right)\). The forces from the adsorbate on the solid can be separated into two contributions: a static contribution, \[\frac{\mu\omega_{as}^{2}}{M}u_{z}(\mathbf{r}_{0},t)a^{3}\delta\left(\mathbf{r}- \mathbf{r}_{0}\right),\] (S17) which accounts for the shift in the solid's vibrational spectrum due to the presence of the adsorbate, and a dynamic contribution, \[f(\mathbf{r},t)=\frac{\mu\omega_{as}^{2}}{M}q(t)a^{3}\delta\left(\mathbf{r}- \mathbf{r}_{0}\right),\] (S18) representing the time-dependent external force of the adsorbate on the solid. We can rearrange Eq. S16 as, \[\left[\frac{d^{2}}{dt^{2}}-c_{t}^{2}\vec{\nabla}^{2}-(c_{l}^{2}-c_{t}^{2}) \nabla\nabla\cdot+\frac{\mu\omega_{as}^{2}}{M}a^{3}\delta(\mathbf{r}-\mathbf{r }_{0})\hat{z}\right]\mathbf{u}(\mathbf{r},t)=f(\mathbf{r},t)\hat{z}.\] (S19) The operator on the left-hand side (LHS) of this equation is linear; therefore, Eq. 
S19 may be solved using the method of Green's functions, \[\mathbf{u}(\mathbf{r},t)=\mathbf{u}_{0}(\mathbf{r},t)+\int_{0}^{t}d\tau\int d \mathbf{r^{\prime}}\mathbf{G}(\mathbf{r},t;\mathbf{r^{\prime}},\tau)\cdot f (\mathbf{r^{\prime}},\tau)\hat{z}\] (S20) where \(\mathbf{u}_{0}(\mathbf{r},t)\) is the solution to the homogeneous case (\(f(\mathbf{r},t)=0\)) and \(\mathbf{G}(\mathbf{r},t;\mathbf{r^{\prime}},\tau)\) is a 3x3 tensor Green's function corresponding to the operator on the LHS of Eq. S19. If we substitute this solution into Eq. S15, we arrive at a GLE where the friction kernel is proportional to the antiderivative of the Green's function, \[K(t)=\frac{\mu^{2}\omega_{as}^{4}}{mM}a^{3}\int dtG_{zz}(\mathbf{r}_{0},t; \mathbf{r}_{0},0).\] (S21) Henceforth we will denote \(G_{zz}(\mathbf{r}_{0},t;\mathbf{r}_{0},0)\) as simply \(G(t)\) for simplicity. This Green's function may be decomposed in the following form, \[G(t)=\sum_{\alpha}\sum_{\mathbf{k}}\frac{\sin(c_{\alpha}\left|\mathbf{k} \right|t)}{c_{\alpha}\left|\mathbf{k}\right|}R_{z,\alpha}(\mathbf{r}_{0}, \mathbf{k})R_{z,\alpha}^{*}(\mathbf{r}_{0},\mathbf{k})\] (S22) where \(\alpha\) denotes the phonon polarizations (i.e., transverse or longitudinal), and \(R_{z,\alpha}\) are the \(z\)th spatial components of the normalized eigenfunctions of the operator on the LHS of Eq. S19. The spectrum of \(\mathbf{k}\) values as well as the specific form of the spatial eigenfunctions depend on the choice of boundary conditions. Due to the delta function in the adsorbate shift term (Eq. S17), the allowed \(\mathbf{k}\) values cannot be computed exactly. Indeed, this term makes Eq. S19 very similar to the Schrödinger equation with a delta-function well, in which the spectrum must be computed numerically as a solution to a system of transcendental equations. 
However, perturbation theory, physical intuition, and the numerical results presented in Figure 1 of the main text all suggest that the low-frequency acoustic modes of a solid should not depend on the presence of an adsorbate. Therefore, we proceed by ignoring the adsorbate shift term, reducing the problem of finding the phonon-induced friction to finding the Green's function of the bare Navier-Cauchy equation. _However, we emphasize that this method is only valid for the low-frequency acoustic modes._ Setting periodic boundary conditions in the \(xy\) plane, fixed boundary conditions at \(z=0\), and Neumann boundary conditions at \(z=L_{z}\), we have, \[R_{z,\alpha}^{*}(\mathbf{r},\mathbf{k})=\frac{2}{\sqrt{L_{x}L_{y}L_{z}}}e^{ik_{x}x}e^{ik_{y}y}\sin(k_{z}z)\] (S23) where \(L_{x}\), \(L_{y}\), and \(L_{z}\) are the sizes of the solid in the \(x\), \(y\), and \(z\) directions respectively. The allowed values of \(k\) are \(k_{x}=\frac{2\pi n_{x}}{L_{x}}\), \(k_{y}=\frac{2\pi n_{y}}{L_{y}}\), and \(k_{z}=\frac{(n_{z}+\frac{1}{2})\pi}{L_{z}}\), where \(n_{x}\) and \(n_{y}\) are any integers and \(n_{z}\) is any integer greater than or equal to zero. Taking the limit as \(L_{x}\), \(L_{y}\), and \(L_{z}\) become very large, we find that the Green's function becomes, \[G(t)=\sum_{\alpha}\frac{1}{8\pi^{3}}\int d\mathbf{k}\frac{\sin(c_{\alpha} \left|\mathbf{k}\right|t)}{c_{\alpha}\left|\mathbf{k}\right|}.\] (S24) It is well known that the integral in Eq. S24 diverges if it is taken over all of \(k\)-space, due to the contribution of wavelengths smaller than the inter-atomic spacing. Therefore, the integral in Eq. S24 should only be taken over the first Brillouin zone. 
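As a toy illustration of how a wavevector cutoff behaves in such mode integrals, consider the one-dimensional integral \(I(t)=\int_{0}^{k_{\max}}dk\,\sin(ckt)/(ck)\) for a single polarization. Substituting \(x=ckt\) gives \(I(t)=\mathrm{Si}(ck_{\max}t)/c\), which rises on the timescale \(1/(ck_{\max})\) set by the cutoff and saturates at \(\pi/(2c)\) once \(ck_{\max}t\gg1\). The values of \(c\), \(k_{\max}\), and \(t\) below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

# One-polarization cutoff integral I(t) = int_0^kmax dk sin(c k t) / (c k).
# c (sound speed), k_max (wavevector cutoff), and t are illustrative values.
c, k_max, t = 1.0, 1.0, 2000.0

k = np.linspace(1e-9, k_max, 400_001)
integrand = np.sin(c * k * t) / (c * k)
# Trapezoidal rule, written out so no deprecated helper is needed.
I = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(k))

# For c * k_max * t >> 1 the sine integral saturates at pi / (2 c).
print(I, np.pi / (2 * c))
```

The grid must resolve the oscillation period \(2\pi/(ct)\) in \(k\); here the spacing gives over a thousand points per period, so the trapezoidal estimate is accurate to well below the oscillatory tail of the sine integral.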
Taking inspiration from the Debye model, we may approximate the first Brillouin zone by a sphere with a maximum radius \(k_{\mathrm{D}}\), \[G(t)=\frac{1}{2\pi^{2}}\sum_{\alpha}\int_{0}^{k_{\mathrm{D}}}dk\frac{\sin(c_{ \alpha}kt)}{c_{\alpha}k}.\] (S25) Carrying out the integration over \(k\) and subsequently integrating over time \(t\) leads to the following formulas for the friction kernel and spectral density, \[K(t)=\frac{\mu^{2}\omega_{\mathrm{as}}^{4}}{mM}\frac{a^{3}}{2\pi^{2}}\left( \frac{2}{c_{t}^{3}}+\frac{1}{c_{l}^{3}}\right)\frac{\sin\left(\omega_{ \mathrm{D}}t\right)}{t},\] (S26) \[\bar{K}(\omega)=\frac{\mu^{2}\omega_{\mathrm{as}}^{4}}{mM}\frac{a^{3}}{2\pi^{2 }}\left(\frac{2}{c_{t}^{3}}+\frac{1}{c_{l}^{3}}\right)\Theta(\omega_{\mathrm{D}}-\omega),\] (S27) where \(\Theta\) is the Heaviside step function. Eq. S27 illustrates that the friction is flat (Ohmic) with a high-frequency cutoff at the Debye frequency. We can use Eq. S26 to compute the effective adsorbate-surface bond frequency, \(\tilde{\omega}_{\mathrm{as}}\). Figure S9 illustrates results for \(\tilde{\omega}_{\mathrm{as}}^{2}\) as a function of the Debye frequency and \(\omega_{\mathrm{as}}\). One can see that for many physically reasonable choices of parameters (including those for CO on Pt(111)) the effective frequency becomes imaginary, signifying that there is no stable adsorption state. This "ultraviolet catastrophe" is characteristic of classical continuum models and highlights the weakness of the continuum approach. ## SV Transition state theory Transition state theory calculates the rate constant of a chemical reaction as the equilibrium flux of trajectories through a dividing surface separating reactants and products. 
Formulaically, such a rate constant may be expressed as, \[k=\frac{1}{Q_{R}}\int d\mathbf{r}d\mathbf{p}\ e^{-\beta H(\mathbf{r},\mathbf{p })}\delta\left[f(\mathbf{r})\right]\left(\nabla f\cdot\mathbf{p}\right)\Theta \left(\nabla f\cdot\mathbf{p}\right),\] (S28) where \(\mathbf{r}\) and \(\mathbf{p}\) are mass-weighted positions and momenta respectively, \(H\) is the Hamiltonian, \(f(\mathbf{r})=0\) is the dividing surface, \(\nabla f\) is the surface normal, \(\Theta\) is the Heaviside step function, and \(Q_{R}\) is the reactant partition function. \(Q_{R}\) is defined as, \[Q_{R}=\int_{R}d\mathbf{r}d\mathbf{p}\ e^{-\beta H(\mathbf{r},\mathbf{p})},\] (S29) where the subscript \(R\) denotes that the integral is taken only over the reactant region in position space. The transition state is the saddle point on this dividing surface and, in the vicinity of the transition state, the reaction coordinate corresponds to the unstable (imaginary frequency) normal mode. Carrying out the momentum integrals in Eq. S28 (assuming a one-dimensional reaction coordinate), we have, \[k=\frac{1}{\sqrt{2\pi\beta}}\frac{\int d\mathbf{r}\;e^{-\beta V(\mathbf{r})} \delta[f(\mathbf{r})]}{\int_{R}d\mathbf{r}\;e^{-\beta V(\mathbf{r})}}.\] (S30) The integral in the numerator is largest in the vicinity of the transition state while the integral in the denominator is largest near the minimum of \(V\). Expanding around these two points gives, \[V(\mathbf{r}\approx\mathbf{r}_{0})=E_{R}+\frac{1}{2}(\mathbf{r}-\mathbf{r}_{0 })^{T}\mathbf{H}_{R}(\mathbf{r}-\mathbf{r}_{0}),\] (S31) for the reactant basin and, \[V(\mathbf{r}\approx\mathbf{r}^{\ddagger})=E^{\ddagger}+\frac{1}{2}(\mathbf{r} -\mathbf{r}^{\ddagger})^{T}\mathbf{H}^{\ddagger}(\mathbf{r}-\mathbf{r}^{ \ddagger}),\] (S32) for the transition state, where \(\mathbf{H}_{R}\) is the mass-weighted Hessian evaluated at the minimum of the reactant basin and \(\mathbf{H}^{\ddagger}\) is the mass-weighted Hessian evaluated at the transition state. 
Using Eq. S31 and Eq. S32 we may evaluate the integrals in Eq. S30, giving, \[k=\frac{\lambda^{\ddagger}}{2\pi}\sqrt{\frac{\det(\mathbf{H}_{R})}{\det( \mathbf{H}^{\ddagger})}}e^{-\beta(E^{\ddagger}-E_{R})},\] (S33) where \(\lambda^{\ddagger}\) is the frequency of the unstable mode of \(\mathbf{H}^{\ddagger}\), and "det" denotes the matrix determinant. For a reaction at a surface, these Hessians can be organized into a block structure corresponding to the molecular/adsorbate degrees of freedom, the solid degrees of freedom, and the coupling between them, \[\mathbf{H}=\left(\begin{array}{c|c}\mathbf{H}_{A}&\mathbf{G}_{AS}\\ \hline\\ \mathbf{G}_{AS}^{T}&\mathbf{H}_{S}\\ \end{array}\right).\] (S34) The determinant of such a block matrix may be evaluated as, \[\det(\mathbf{H})=\det(\mathbf{H}_{S})\times\det(\mathbf{H}_{A}-\mathbf{G}_{AS }\mathbf{H}_{S}^{-1}\mathbf{G}_{AS}^{T}).\] (S35) Using this determinant identity together with Eq. S33 gives Eq. 25 of the main text. In the main text, we presented phonon-corrected desorption rates using a model where we computed the friction kernel in the limit of an infinite surface slab by averaging across \(k\)-space. In Figure S10, we illustrate results from a single 4x4x8 surface slab without accounting for phonon dispersion. The results illustrated in Figure S10 are extremely similar to those of the main text, and the deviation between the two is smaller than the intrinsic uncertainty in the experimental measurements of the rate constant. Parameters used in computing the reaction rates shown in Figure S10 are given in Table SI.
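The determinant identity used above is the standard Schur-complement factorization, \(\det(\mathbf{H})=\det(\mathbf{H}_{S})\det(\mathbf{H}_{A}-\mathbf{G}_{AS}\mathbf{H}_{S}^{-1}\mathbf{G}_{AS}^{T})\). It is easy to verify numerically with random symmetric blocks standing in for the Hessians; the block sizes and coupling scale below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
na, ns = 3, 6                      # adsorbate and solid block sizes (illustrative)

def spd(n):
    """Random symmetric positive-definite block (stand-in for a Hessian block)."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

HA, HS = spd(na), spd(ns)
G = 0.3 * rng.normal(size=(na, ns))        # adsorbate-solid coupling block

H = np.block([[HA, G], [G.T, HS]])         # full block Hessian

# Schur-complement determinant identity:
# det(H) = det(HS) * det(HA - G HS^{-1} G^T)
lhs = np.linalg.det(H)
rhs = np.linalg.det(HS) * np.linalg.det(HA - G @ np.linalg.inv(HS) @ G.T)
print(lhs, rhs)
```

The practical payoff of the factorization is that the large solid block \(\mathbf{H}_{S}\) cancels between the reactant and transition-state determinants, leaving a small adsorbate-sized determinant dressed by the coupling to the solid.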
2305.19847
How Does Pretraining Improve Discourse-Aware Translation?
Pretrained language models (PLMs) have produced substantial improvements in discourse-aware neural machine translation (NMT), for example, improved coherence in spoken language translation. However, the underlying reasons for their strong performance have not been well explained. To bridge this gap, we introduce a probing task to interpret the ability of PLMs to capture discourse relation knowledge. We validate three state-of-the-art PLMs across encoder-, decoder-, and encoder-decoder-based models. The analysis shows that (1) the ability of PLMs on discourse modelling varies from architecture and layer; (2) discourse elements in a text lead to different learning difficulties for PLMs. Besides, we investigate the effects of different PLMs on spoken language translation. Through experiments on IWSLT2017 Chinese-English dataset, we empirically reveal that NMT models initialized from different layers of PLMs exhibit the same trends with the probing task. Our findings are instructive to understand how and when discourse knowledge in PLMs should work for downstream tasks.
Zhihong Huang, Longyue Wang, Siyou Liu, Derek F. Wong
2023-05-31T13:36:51Z
http://arxiv.org/abs/2305.19847v1
# How Does Pretraining Improve Discourse-Aware Translation? ###### Abstract Pretrained language models (PLMs) have produced substantial improvements in discourse-aware neural machine translation (NMT), for example, improved coherence in spoken language translation. However, the underlying reasons for their strong performance have not been well explained. To bridge this gap, we introduce a probing task to interpret the ability of PLMs to capture discourse relation knowledge. We validate three state-of-the-art PLMs across encoder-, decoder-, and encoder-decoder-based models. The analysis shows that (1) the ability of PLMs on discourse modelling varies with architecture and layer; (2) discourse elements in a text lead to different learning difficulties for PLMs. Besides, we investigate the effects of different PLMs on spoken language translation. Through experiments on the IWSLT2017 Chinese-English dataset, we empirically reveal that NMT models initialized from different layers of PLMs exhibit the same trends as the probing task. Our findings are instructive for understanding how and when discourse knowledge in PLMs should work for downstream tasks. Zhihong Huang\({}^{1*}\) Longyue Wang\({}^{2*}\) Siyou Liu Derek F. Wong\({}^{1}\)\({}^{1}\)NLP\({}^{2}\)CT Lab, University of Macau \({}^{2}\)Tencent AI Lab [email protected], [email protected], [email protected], [email protected] **Index Terms**: spoken language, discourse, pretrained language models, machine translation, linguistic probing ## 1 Introduction Translating spoken language is a significantly challenging task due to its inherent characteristics such as irregular expressions and discourse properties [1, 2, 3]. In recent years, discourse-aware neural machine translation (NMT) has performed better by initializing Transformer-based [4] models with pretrained language models (PLMs) in the encoder [5], the decoder [6], both [7], or as a whole [8]. 
The common assumption is that NMT models can utilize rich knowledge from PLMs to tackle complex discourse phenomena [9]. For example, some works found that better-translated results often contain more connective words [8], which can be classified as explicit under the non-tree-structured shallow discourse relation framework [10]. Table 1 shows an example of discourse-aware translation. However, it is still unclear how discourse knowledge is embedded in PLMs, and how it takes effect when PLMs are leveraged in discourse-level NMT. Towards understanding PLMs, probing tasks have been exploited to provide fine-grained analyses of model abilities [11]. Related works either probed different PLMs for tree-structured discourse knowledge based on rhetorical structure theory (RST) [12, 13], or probed multilingual PLMs for a subset of discourse relations [14]. Therefore, a more comprehensive study on the effects of discourse knowledge in PLMs on NMT is needed, particularly for spoken language translation. To bridge this gap, we propose a method to probe the ability of advanced PLMs (i.e., BERT [15], BART [16], and GPT-2 [17]) to capture discourse knowledge. Analysis results on the PDTB dataset [18] demonstrate that encoder-decoder-based PLMs perform best, especially in higher layers (except for GPT-2). For the translation tasks, we leverage PLMs for discourse-aware NMT by utilizing all or part of their parameters. Experimental results on the Chinese-English IWSLT2017 dataset [19] show that (i) PLMs in the source language help more than those in the target language; (ii) NMT prefers PLMs with the same architecture (i.e., BART); (iii) NMT initialized with the single discourse-aware layer can achieve performance close to using all layers; (iv) translation performance exhibits the same trends as the probing task at the layer level. 
## 2 Methodology ### Probing Discourse Knowledge Our probing tasks mainly focus on the shallow discourse relation in a sentence with two semantic arguments [20], rather than considering the RST relations across several sentences. The shallow discourse relations can be characterized into five types: (1) _Explicit_ relation means that the connective words in the sentence are visible; (2) _Implicit_ relation means that the sentence has no connective words but can be annotated manually; when the sentence has no connective word but shows a discourse relation through its expressions or entities, it contains an (3) _AltLex_ or (4) _EntRel_ relation; (5) _NoRel_ relation means there is no discourse relation in the sentence. Further, _Explicit_, _Implicit_, and _AltLex_ relations \begin{table} \begin{tabular}{p{34.1pt} p{34.1pt}} \hline \hline \multirow{2}{*}{**Imp.**} & \# \\ & \\ \hline \multirow{4}{*}{**Out.**} & This is one such volunteer and this is a device that he had built in the village where he worked. \\ \cline{2-3} & **And** the idea was that you could take waste paper; you could compress it and make briquettes that could be used for fuel. \\ \cline{2-3} & **But** this device was very slow. \\ \hline \hline \end{tabular} \end{table} Table 1: _An example of discourse-aware translation from the IWSLT2017 dataset. One source token is an explicit connective word, while another is implicit and thus invisible to models. "**Imp.**" and "**Out.**" represent the Chinese input and English translation, respectively. As seen, a coherent translation should accurately transfer the discourse relation between sentences from the source to the target language (e.g., connective words)._ can be annotated with senses at three levels: class, type, and subtype. We list three examples below, where words underlined are connective words, words in italics are argument-1, and words in boldface are argument-2. 1. _It was a far safer deal for lenders_ since **NWA had a healthier cash flow and more collateral on hand.** 2. 
_Some have raised their cash positions to record levels_. (Implicit: BECAUSE) **High cash positions help buffer a fund when the market falls.** 3. _Ms. Bartlet's previous work, which earned her an international reputation in the non-horticultural art world, often took gardens as its nominal subject._ (AltLex) **Mayhap this metaphorical connection made the BPC Fine Arts Committee think she had a literal green thumb.** In sentence (2), the manually annotated connective word BECAUSE has an _Implicit_ relation, a _Contingency_ class, a _Cause_ type, and a _Reason_ subtype. This kind of fine-grained sense is the discourse relation label of the sentences in our task dataset. Based on the shallow discourse relations, we propose two probing tasks to assess PLMs' ability to encode discourse relations from two different views: (1) To interpret the overall ability of PLMs, we probe the whole-sentence embeddings on the complete dataset, including the four discourse relations. In particular, we adopt two forms of BERT representation to extract the sentences: _[CLS]_ embeddings and average-pooling embeddings of all tokens. (2) To determine whether PLMs distribute discourse knowledge across different linguistic elements, we individually probe the embeddings of connective words and semantic arguments after obtaining the whole-sentence embeddings on the explicit data. The whole-sentence embeddings of implicit and AltLex data are also probed to compare with the explicit data. The overview of the probing tasks' structure is shown in Figure 1. First, we extract the representations of the whole sentences with the frozen-parameter PLMs. Then we feed the different combinations of embeddings into two-layer MLPs (multilayer perceptrons) to train probing models with their parameters updated. The probing models are classifiers, and the labels are the fine-grained sense relations of the input sentences. All layers of the three PLMs (i.e., BERT, GPT-2, and BART) are probed separately. 
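The probing pipeline (frozen PLM embeddings fed to a two-layer MLP classifier over sense labels) can be sketched as below. Since the actual PLM features and PDTB labels are not reproduced here, synthetic embeddings and labels stand in for them; the sample count, embedding dimension, number of classes, hidden width, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for frozen-PLM sentence embeddings X and discourse
# relation labels y (5 shallow relation types). In the real task, X would be
# extracted from one layer of BERT/GPT-2/BART with its parameters frozen.
n, dim, n_cls, hidden, lr = 500, 32, 5, 64, 1.0
X = rng.normal(size=(n, dim))
y = (X @ rng.normal(size=(dim, n_cls))).argmax(axis=1)  # labels recoverable from X

# Two-layer MLP probe; only the probe's parameters are updated.
W1 = 0.1 * rng.normal(size=(dim, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.normal(size=(hidden, n_cls)); b2 = np.zeros(n_cls)

def forward(X):
    a = np.maximum(X @ W1 + b1, 0.0)        # ReLU hidden layer
    return a, a @ W2 + b2                   # activations, logits

for _ in range(1000):                       # full-batch gradient descent
    a, logits = forward(X)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = p; g[np.arange(n), y] -= 1.0; g /= n        # d(cross-entropy)/d(logits)
    ga = (g @ W2.T) * (a > 0.0)                     # backprop through ReLU
    W2 -= lr * (a.T @ g);  b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ ga); b1 -= lr * ga.sum(axis=0)

acc = (forward(X)[1].argmax(axis=1) == y).mean()    # this "layer's" probing accuracy
```

Repeating the fit once per layer's embeddings, and comparing the resulting accuracies, is the per-layer comparison the probing task performs; a layer with higher probe accuracy encodes more of the discourse-relation signal.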
We evaluated the classification accuracy of the probing models. Specifically, we treat BART's encoder layers as its first to sixth model layers, and its decoder layers as its seventh to twelfth model layers. A layer with higher accuracy is more capable of embedding discourse knowledge. ### Discourse-Aware NMT with Pretraining Inspired by the work of Rothe et al. [6], we adopt the following strategies to leverage PLMs in Transformer-based NMT models: (1) For BERT models, we initialize the encoder with the PLMs and randomly initialize the decoder. (2) For GPT-2 models, we initialize the decoder with the PLMs and randomly initialize the encoder. (3) For BART models, we use them directly as sequence-to-sequence models. The three models are trained on Chinese data. We also exploit Chinese and English versions of all three types of PLMs, as well as multilingual BERT, to investigate whether discourse knowledge in the source or target language is more significant for discourse-aware translation. For a fair comparison, all models have a similar number of parameters. We employ a document-level NMT model for the translation task, which can utilize context in the source language. Following [21], we use one previous source sentence in the document as context information when translating each current sentence. Taking Table 1 for instance, the first Chinese sentence can be used as context for translating the second one. When the translation model makes better use of context information, that is, when the translation of the current text contains more complete discourse knowledge, its quality will be better. We consider such \begin{table} \begin{tabular}{l l l} \hline \hline **Task** & **Probing** & **Translation** \\ \hline Dataset & PDTB2.0 [18] & IWSLT2017 [19] \\ \hline Train & 32,535 (18,459) & 231,266 \\ Valid & 1,436 (812) & 879 \\ Test & 1,928 (1,090) & 6,046 \\ \hline \hline \end{tabular} \end{table} Table 2: _Data statistics of the datasets for the discourse probing and machine translation tasks. We report the sizes of the training, validation, and testing sets in terms of sentence number. The number in brackets denotes the size of the subset with explicit discourse relations._ Figure 1: _The framework of our proposed probing task. The input is two adjacent sentences in a document, which are split into two spans (i.e., Argument 1 and 2) with explicit/implicit connective words. Second, we extract corresponding embeddings by feeding the input to a pretrained model. Third, the embeddings are used to train an MLP model for learning to classify the discourse relation between Argument 1 and 2. The classification accuracy is used to reflect the ability of the PLM to model discourse._ 
We consider such \begin{table} \begin{tabular}{l l l} \hline \hline **Task** & **Probing** & **Translation** \\ \hline Dataset & PDTB2.0 [18] & IWSLT2017 [19] \\ \hline Train & 32,535 (18,459) & 231,266 \\ Valid & 1,436 (812) & 879 \\ Test & 1,928 (1,090) & 6,046 \\ \hline \hline \end{tabular} \end{table} Table 2: _Data statistics of the datasets on discourse probing and machine translation tasks. We calculate the sizes of training, validation, and testing sets in terms of sentence number. The number in brackets denotes the size of instances with explicit discourse relation._ Figure 1: _The framework of our proposed probing task. The input is two adjacent sentences in a document, which are split into two spans (i.e., Argument 1 and 2) with explicit/implicit connective words. Second, we extract corresponding embeddings by feeding the input to a pretrained model. Third, the embeddings are used to train an MLP model for learning to classify the discourse relation between Argument 1 and 2. The classification accuracy is used to reflect the ability of PLM on modelling discourse._ a translation task to be a discourse-aware NMT. Note that the sentence number of translation input is also consistent with that in the probing task, making the results of the two experiments comparable. We use BLEU [22], TER [23], and METEOR [24] scores to measure the performance of the translation systems. Besides, in a fine-grained view, we propose a method to investigate how PLMs perform in each layer when they are utilized for NMT. We train special discourse-aware NMT tasks by only updating the parameters of a specific layer each time, while the other layers' are frozen. Based on the results of the probing tasks, we select the first, the middle, the last layer (i.e., layers 1, 6, 12), and the discourse-aware as the specific layer. We wonder if the performance of NMT models with a single updated layer is related to the probing results. 
In this additional task, we only leverage the English-version PLMs and keep the rest of the experimental settings the same as in the normal discourse-aware translation task. ## 3 Experiments ### Experimental Setup We summarize all data used in the experiments in Table 2. For the probing tasks, we conduct experiments on PDTB2.0 [18], which only contains English data. We simplify the discourse relation labels from 35 to 19 based on the strategy from [25]. _EntRel_ and _NoRel_ are also included as individual labels to cover as many shallow discourse phenomena as possible. For discourse-aware NMT, we conduct experiments on the IWSLT2017 Chinese-English dataset. IWSLT2017 is generated from TED talks; it is a spoken language dataset with coherent sentences. According to our translation tasks, adjacent sentences in the dataset can be combined into translation units given their discourse relations. To make the model understand which sentence is context, a break token _[SEP]_ is inserted between the two source sentences of every unit during training. We use dev2010 for development and tst2010-2013 for testing from the IWSLT dataset. We train all NMT models for up to 200K steps with a batch size of 16. The length penalty is set to 1. Adam [26] is used to optimize parameters with a learning rate of 2e-5. The beam size for decoding is 4. ### Results of Probing Task In Figure 2(a), we present the performance on the first probing task for the three basic models across their 12 layers. Since the accuracy of all models' layers is under 0.6, it seems that PLMs have a weaker ability to capture discourse than other types of linguistic knowledge [27]. It is clear that BART, as an encoder-decoder model, is the best among the three models. BART shows a gradual increase in discourse knowledge across all encoder layers and across decoder layers 7-10. In BERT, the average pooling of embeddings can extract more discourse knowledge than the _[CLS]_ embeddings. 
Both BART and BERT have their discourse-aware layer in the ninth layer and show a significant decline after the discourse-aware layer. Different from the other two PLMs, GPT-2 has its strongest discourse-aware capability in the first layer, which then continually decreases in the subsequent layers. In Figure 2(b), we show the results of BART in the second probing task. The curves of EXP and IMP both follow the same trend as BART (all data) in Figure 2(a). This indicates that the relative capability between the different layers of a PLM is the same across different kinds of discourse manifestations. We can see that the accuracy on implicit data is much lower than that of the various embeddings of explicit data. Without connective words, pretrained language models cannot embed discourse knowledge well. As the number of encoder layers increases, the discourse knowledge declines in both CON and ARG. But as the number of decoder layers increases, the discourse knowledge rises in ARG, while it declines sharply in CON. CON carries more discourse knowledge than EXP and ARG before layer 11 but is overtaken by ARG at layer 11.

Figure 2: Results of probing tasks for evaluating discourse properties embedded in different layers of representations in different pretrained models. (a) shows the overall results of three PLMs on the complete dataset. (b) shows the fine-grained results of BART by feeding parts of embeddings into the probing model. Note that “CON” means only connective words while “ARG” indicates the two arguments without connective words. We also report results on the parts of the data with explicit (“EXP”) and implicit (“IMP”) discourse relations, respectively, including connective words.

### Fine-grained Analysis

As observed, the embeddings of connective words within most layers of PLMs encompass discourse knowledge. In Figure 3, we further investigate the effects of three pretrained models on three types of discourse relations. As seen, the accuracies of the Implicit (IMP) and AltLex (ALT) relations are comparable and notably lower than that of the Explicit (EXP) relation. Based on these consistent phenomena observed across the three models, we can reaffirm the conclusion drawn in Section 3.2 that connective words serve as essential elements for PLMs in comprehending discourse relations. As demonstrated in [8], mBART exhibits a preference for translating implicit discourse relations into explicit connective words, which in turn enhances the performance of document-level translation. This finding underscores the significant role of explicit discourse knowledge in PLMs in facilitating discourse-aware translation.

### Results of Translation Task

Table 3 shows the overall performance of PLMs leveraged in NMT models. Except for English BART's TER score, which is weaker than that of English BERT, BART performs best in both language versions overall, while GPT-2 is the worst. As BART is an encoder-decoder model, we consider that the document-level NMT task prefers PLMs with the same architecture. Besides, all three Chinese PLMs get better scores than the English PLMs. Multilingual BERT even shows a huge improvement over all the other PLMs. We consider that a PLM with the discourse properties of the source language will perform better than one with those of the target language. Table 4 shows the fine-grained results of the translation tasks. According to all three scores, the discourse-aware layer of PLMs revealed in the probing tasks still performs better than the other layers in NMT tasks. During the experiments, we found that updating only one layer's parameters can save around 13% of training time, while the performance of the discourse-aware layer is close to that of updating all layers. The conclusions from Table 3 and Table 4 show the same trend as the probing tasks.

## 4 Conclusion and Future Work

This paper analyzes discourse knowledge in pretrained language models for spoken language translation.
The coincident trend in the probing tasks and the machine translation tasks reveals how discourse knowledge changes in the different layers of different pretrained language models when the model is leveraged in NMT. Our work complements existing probing tasks about discourse knowledge in PLMs and provides a fine-grained interpretive perspective for the application of PLMs in document-level NMT. In future work, we intend to delve deeper into the challenges of discourse-aware MT. Our plan is to leverage large language models (LLMs), which have shown great potential in dealing with complex tasks, to tackle this particular challenge [28, 29]. Moreover, we plan to evaluate our approach on more discourse-aware tasks [30, 31, 32, 33].

## 5 Acknowledgements

This work was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ, FDCT/0070/2022/AMJ) and the Multi-year Research Grant from the University of Macau (Grant No. MYR62020-00054-FST). This work was performed in part at SICC, which is supported by SKL-IOTSC, and HPCC, supported by ICTO of the University of Macau.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Layer** & **1** & **6** & **9** & **12** & **ALL** \\ \hline \multicolumn{7}{c}{_BLEU\({}^{\dagger}\)_} \\ BERT & 4.86 & 4.89 & **5.04** & 4.84 & **5.67** \\ BART & 5.27 & 5.46 & **6.03** & 5.66 & **6.53** \\ GPT-2 & **4.52** & 3.69 & 3.51 & 3.37 & **4.60** \\ \hline \multicolumn{7}{c}{_TER\({}^{\downarrow}\)_} \\ BERT & 85.12 & 84.97 & **84.92** & **84.92** & **83.52** \\ BART & 85.84 & 86.28 & **84.33** & 84.68 & **83.96** \\ GPT-2 & **85.55** & 88.35 & 88.67 & 88.75 & **86.07** \\ \hline \multicolumn{7}{c}{_METEOR\({}^{\dagger}\)_} \\ BERT & 0.2002 & 0.2011 & **0.2019** & 0.2011 & **0.2150** \\ BART & 0.2287 & 0.2215 & **0.2424** & 0.2374 & **0.2479** \\ GPT-2 & **0.1932** & 0.1690 & 0.1676 & 0.1650 & **0.1916** \\ \hline \hline \end{tabular} \end{table} Table 4: Fine-grained results of translation task in Table 3. We use only a specific layer of PLMs for initializing NMT models. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Lang.** & **Model** & **BLEU\({}^{\dagger}\)** & **TER\({}^{\downarrow}\)** & **METEOR\({}^{\dagger}\)** \\ \hline \multirow{3}{*}{EN} & BERT & 5.67 & **83.52** & 0.21 \\ & BART & **6.53** & 83.96 & **0.25** \\ & GPT-2 & 4.60 & 86.07 & 0.19 \\ \hline \multirow{3}{*}{ZH} & BERT & 6.72 & 81.74 & 0.23 \\ & BART & **7.60** & **78.57** & **0.27** \\ & GPT-2 & 4.64 & 86.18 & 0.19 \\ \hline Multi & BERT & **12.90** & **75.82** & **0.32** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of Chinese\(\rightarrow\)English translation task in terms of different evaluation metrics. We compare three pretrained models in different languages, including English (“EN”), Chinese (“ZH”), or multilingual (“Multi”). Figure 3: Results of probing tasks for evaluating different discourse relations, including explicit (“EXP”), implicit (“IMP”), and AltLex (“ALT”) discourse relation, embedded in different layers of representations in different pretrained models. 
We label the results of all data, including five discourse relations, as “ALL”.
2309.08453
Eguchi-Hanson harmonic spinors revisited
We revisit the problem of determining the zero modes of the Dirac operator on the Eguchi-Hanson space. It is well known that there are no normalisable zero modes, but such zero modes do appear when the Dirac operator is twisted by a $U(1)$ connection with $L^2$ normalisable curvature. The novelty of our treatment is that we use the formalism of spin-$c$ spinors (or spinors as differential forms), which makes the required calculations simpler. In particular, to compute the Dirac operator we never need to compute the spin connection. As a result, we are able to reproduce the known normalisable zero modes of the twisted Eguchi-Hanson Dirac operator by relatively simple computations. We also collect various different descriptions of the Eguchi-Hanson space, including its construction as a hyperk\"ahler quotient of $\mathbb{C}^4$ with the flat metric. The latter illustrates the geometric origin of the connection with $L^2$ curvature used to twist the Dirac operator. To illustrate the power of the formalism developed, we generalise the results to the case of Dirac zero modes on the Ricci-flat K\"ahler manifolds obtained by applying Calabi's construction to the canonical bundle of $\mathbb{C} P^n $.
Guido Franchetti, Kirill Krasnov
2023-09-15T14:57:05Z
http://arxiv.org/abs/2309.08453v1
# Eguchi-Hanson harmonic spinors revisited

###### Abstract.

We revisit the problem of determining the zero modes of the Dirac operator on the Eguchi-Hanson space. It is well known that there are no normalisable zero modes, but such zero modes do appear when the Dirac operator is twisted by a \(U(1)\) connection with \(L^{2}\) normalisable curvature. The novelty of our treatment is that we use the formalism of spin-\(c\) spinors (or spinors as differential forms), which makes the required calculations simpler. In particular, to compute the Dirac operator we never need to compute the spin connection. As a result, we are able to reproduce the known normalisable zero modes of the twisted Eguchi-Hanson Dirac operator by relatively simple computations. We also collect various different descriptions of the Eguchi-Hanson space, including its construction as a hyperkahler quotient of \(\mathbb{C}^{4}\) with the flat metric. The latter illustrates the geometric origin of the connection with \(L^{2}\) curvature used to twist the Dirac operator. To illustrate the power of the formalism developed, we generalise the results to the case of Dirac zero modes on the Ricci-flat Kahler manifolds obtained by applying Calabi's construction to the canonical bundle of \(\mathbb{C}P^{n}\).
###### Contents

* 1 Introduction
* 2 Eguchi-Hanson space as a Kahler and hyperkahler manifold
  * 2.1 EH as a line bundle
  * 2.2 Equal standing coordinates
  * 2.3 Harmonic 2-forms
  * 2.4 \(U(1)\) connection with \(L^{2}\) harmonic curvature
  * 2.5 Hyperkahler quotient
* 3 Dirac zero modes on the Eguchi-Hanson space
  * 3.1 No Dirac zero modes on EH
  * 3.2 The canonical spin-\(c\) structure
  * 3.3 Computing the action of the Dirac operator on spin-\(c\) spinors
  * 3.4 Computation of \(\bar{\partial}^{*}\)
  * 3.5 Spin-\(c\) Dirac operator on a Kahler manifold
  * 3.6 Practical way of computing the Dirac operator on a Kahler manifold
  * 3.7 Zero modes of the twisted Dirac operator on EH
  * 3.8 Normalisability
* 4 The general case
  * 4.1 Calabi's metric on \(\mathcal{O}(-n-1)\)
  * 4.2 Rewriting in terms of "symmetrical" coordinates
  * 4.3 \(U(1)\) connection with \(L^{2}\) harmonic curvature
  * 4.4 Zero modes of the twisted Dirac operator on \(\mathcal{O}(-n-1)\)
* 5 Discussion
* A The geometry of \(\mathbb{C}^{2}\)
  * A.1 The geometry of \(SU(2)\)
  * A.2 The geometry of \(\mathbb{C}^{2}\)
* B Calabi's construction
  * B.1 The Chern connection
  * B.2 Calabi's construction

## 1. Introduction

The Dirac operator and its spectrum, in particular the spectrum of its zero modes, are of great importance and interest in both differential geometry and theoretical physics. In relation to the former, the vanishing theorem due to Lichnerowicz states that there are no harmonic spinors on compact manifolds with positive scalar curvature. When the dimension of the space of harmonic spinors is not zero, it turns out to depend on the metric [12], and is not a topological invariant of the manifold. There are also beautiful results pertaining to eigenvalue estimates for the Dirac operator and the relation between this and Killing spinors, see e.g. [6] for an authoritative exposition of all these results. In physics, harmonic spinors are important in the context of the Kaluza-Klein programme, see e.g.
[18], [19], where harmonic spinors on the internal space correspond to massless particles in the physical space. For all these reasons, harmonic spinors have been studied extensively, and there is a great body of literature available on the subject. In the context of Riemannian geometry, one is usually interested in spinors and harmonic spinors defined on compact Riemannian manifolds. However, one can also consider non-compact spaces, in particular gravitational instantons, as is the case in this paper. In this case the relevant notion is that of normalisable, more precisely square integrable, also known as \(L^{2}\), harmonic spinors. The non-compact case has also been studied, and appropriate index formulae have been developed, see in particular [16]. More recently, there has been a renewed interest in such a setup, see [13] and [4], in the context of geometric models of matter [1]. In this paper we revisit the problem of determining the zero modes of the (twisted) Dirac operator on the Eguchi-Hanson (EH) space [3]. This problem has been explicitly solved, as a subcase of a more general computation, in [4]. The novelty of our treatment in this paper is that, given that EH is a Kahler manifold, we can describe spinors as spin-\(c\) spinors. Indeed, any almost complex manifold \(M\) carries a canonical spin-\(c\) structure, in which the bundle of spinors is identified with the space of differential forms \(\Lambda^{0,\bullet}(M)\). The natural Dirac operator on \(M\) is \(\sqrt{2}(\bar{\partial}+\bar{\partial}^{*})\). On a Kahler manifold, this operator coincides with the Dirac operator defined by the lift of the Levi-Civita connection to the spinor bundle. This fact allows us to reduce the necessary computations to elementary operations involving the action of \(\bar{\partial},\bar{\partial}^{*}\) on the space \(\Lambda^{0,\bullet}(M)\). 
The operator \(\sqrt{2}(\bar{\partial}+\bar{\partial}^{*})\) on EH admits no normalisable zero modes, so to get an interesting problem it needs to be further twisted by a \(U(1)\) connection. Up to integer multiples, there is a unique \(U(1)\) connection with square integrable curvature, and it is this connection that we use for the twisting. To elucidate the geometric meaning of this connection, we illustrate how it arises from the construction of EH as a hyperkahler reduction of \(\mathbb{H}^{2}\), which we review and further detail in this paper. One of the main results of this paper is the simple explicit expression (3.41) for the EH zero modes. This should be compared with the much more involved expressions in the literature [4]. The computations that lead to this result are also quite elementary, and take not more than a page of text, again in a favourable contrast with the other existing treatments. To illustrate the power of the formalism that we have developed, we study the generalisation of the problem to the case of the Calabi metric on the total space of the line bundle \(\mathcal{O}(-n-1)\) over \(\mathbb{CP}^{n}\). As it is well known, the EH metric is the simplest metric in this family and corresponds to \(n=1\). We find it interesting to observe that most of the properties of EH extend to the general case, and that the latter can be handled with essentially the same techniques as those used for EH. Another intriguing result obtained here is that, for all values of \(n\), the \(U(1)\) connection with \(L^{2}\) curvature used to twist the Dirac operator coincides with the simplest Dirac zero mode of the untwisted operator. The corresponding relation in the EH case is (2.49). Of course it is only possible to appreciate this fact by making use of the formalism of spinors as differential forms as we do here. The organisation of this paper is as follows. 
We start, in Section 2, by giving several descriptions of EH as a Kahler and eventually as a hyperkahler manifold. We first describe it as the space obtained from the Calabi construction of a Ricci flat metric on the total space of the canonical line bundle over a Kahler manifold with non-zero scalar curvature. We then perform a change of coordinates that puts the base and fibre coordinates of the Calabi construction on equal footing. This puts the EH metric in the form (2.25), which is the most convenient one for spinor computations. We then discuss various harmonic 2-forms on EH, and in particular the unique (up to scale) harmonic 2-form with \(L^{2}\) curvature, as well as its potential. We show that this 2-form is intimately related to the exterior derivative of the (metric dual of the) Killing vector field generating the \(U(1)\) isometric action on the fibres. Finally, we describe the EH metric as the hyperkahler quotient of the flat metric on \(\mathbb{H}^{2}=\mathbb{C}^{4}\). We were unable to find the details of this construction, in the amount necessary to compare to our other description (2.25), in the literature, so we spell them out here. We find that the \(U(1)\) connection on EH with \(L^{2}\) curvature arises naturally as the \(U(1)\) connection on the total space of a circle bundle over EH obtained as the level set of the hyperkahler reduction. In Section 3 we determine the zero modes of the twisted Dirac operator on the EH space. We first recall why the untwisted operator admits no normalisable zero modes and review, to the extent necessary for our purposes, results about spin-\(c\) spinors and their relation to the usual Dirac spinors. We then explain how to calculate the spin-\(c\) Dirac operator in practice. The key point here is that on a Kahler manifold, by using the spinors as differential forms approach, it is never necessary to compute derivatives of the metric. 
Instead, computations only involve taking exterior derivatives and metric contractions of the relevant differential forms. This method is much more efficient than the usual approach, where spinors are treated as column vectors on which the \(\gamma\)-matrices act, and which requires the explicit computation of the spin connection. Finally, we solve the problem of finding the Dirac zero modes on the EH space, and obtain the explicit simple expression (3.41) for the resulting harmonic spinors. In section 4 we generalise our results to the case of spinors on the total space of the canonical bundle over \(\mathbb{CP}^{n}\). We close the paper with some concluding remarks as well as two Appendices. In the first one we review the description of \(\mathbb{C}^{2}\) as the total space of a line bundle over \(\mathbb{CP}^{1}\). The second one reviews Calabi's construction.

## 2. Eguchi-Hanson space as a Kahler and hyperkahler manifold

The Eguchi-Hanson (EH) metric is a hyperkahler metric defined on a 4-manifold diffeomorphic to the cotangent bundle of \(S^{2}\). It was introduced in [3], where the metric is given in bi-axial Bianchi IX form \[g=\left(1-\frac{\kappa}{r^{4}}\right)^{-1}\mathrm{d}r^{2}+\frac{r^{2}}{4}\left(1-\frac{\kappa}{r^{4}}\right)\eta_{3}^{2}+\frac{r^{2}}{4}(\eta_{1}^{2}+\eta_{2}^{2}). \tag{2.1}\] Here \((\eta_{i})\) are left-invariant 1-forms on \(SU(2)\), see (A.1), and the parameter \(\kappa\) is a positive constant. Substituting \(u^{2}=r^{2}\left(1-\frac{\kappa}{r^{4}}\right)\) shows that the metric is regular at \(r=\kappa^{1/4}\) provided that \(\psi\in[0,2\pi)\), \(\theta\in[0,\pi]\), \(\phi\in[0,2\pi)\). The topology of a hypersurface \(\Sigma_{r}\) of fixed \(r>\kappa^{1/4}\) is that of a circle bundle, with the circle fibre parametrised by \(\psi\) and the base \(S^{2}\) parametrised by \((\theta,\phi)\).
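The regularity claim can be made explicit by a short near-bolt expansion (a sketch using only (2.1); not part of the original text). Write \(V(r)=1-\kappa/r^{4}\) and \(r_{0}=\kappa^{1/4}\):

```latex
% Near-bolt expansion of (2.1). Since V(r_0) = 0 and V'(r_0) = 4\kappa/r_0^5 = 4/r_0,
V(r) \approx \frac{4}{r_0}\,(r - r_0), \qquad
u^2 = r^2 V(r) \approx 4 r_0 (r - r_0),
\quad\text{so}\quad
V^{-1}\,\mathrm{d}r^2 \approx \tfrac{1}{4}\,\mathrm{d}u^2 .
% Using r^2 V = u^2 (exact), the metric near the bolt becomes
g \approx \tfrac{1}{4}\left(\mathrm{d}u^2 + u^2\,\eta_3^2\right)
        + \tfrac{r_0^2}{4}\left(\eta_1^2 + \eta_2^2\right),
% i.e. a flat plane in polar coordinates (u,\psi) fibred over the bolt S^2,
% which is smooth at u = 0 precisely when \psi has period 2\pi.
```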
If \(\psi\) had its usual range of \(4\pi\), \(\Sigma_{r}\) would be the total space \(S^{3}\) of the Hopf fibration, but due to the reduced range we have instead \(\Sigma_{r}\simeq S^{3}/\mathbb{Z}_{2}\). The level set \(r=\kappa^{1/4}\) is a 2-sphere known as a bolt [8]. Asymptotically \(g\) approaches the (\(\mathbb{Z}_{2}\)-quotient of the) metric of Euclidean 4-space. Therefore, EH gives a resolution of the singularity of the \(\mathbb{R}^{4}/\mathbb{Z}_{2}=\mathbb{C}^{2}/\mathbb{Z}_{2}\) orbifold. In general, a metric with an asymptotic volume growth equal to that of \(E^{4}\) is known as asymptotically locally Euclidean, or ALE. The form (2.1) emphasises the structure of EH as a real manifold with cohomogeneity one under the action of \(SU(2)/\mathbb{Z}_{2}\). There are several other useful descriptions of EH, which emphasise its complex structure. We are going to review the ones that are important for our purposes. First, EH is a Kahler-Einstein manifold. One description that makes this clear describes EH as the total space of a complex line bundle over \(\mathbb{CP}^{1}\), with complex coordinates \((w,\zeta)\) parametrising the base and fibre. Such a construction is a special case of the more general construction in [2] of a Ricci-flat Kahler metric on the total space of the canonical line bundle over a Kahler-Einstein manifold. A second description that we need trades the "asymmetrical" \((w,\zeta)\) coordinates for coordinates \((z_{1},z_{2})\) of equal standing, which play a similar role to the global coordinates of \(\mathbb{C}^{2}\) and make the similarities between the two manifolds apparent. For comparison, the analogous descriptions of \(\mathbb{C}^{2}\) as a real cohomogeneity one manifold and as a line bundle are reviewed in Appendix A. Finally we review (and make explicit) the description of EH as a hyperkahler quotient of \(\mathbb{C}^{4}\) with its flat metric.
This is of interest to us because it exhibits the \(U(1)\) connection with \(L^{2}\) curvature that we use for the twist of the Dirac operator as arising geometrically, in the total space of a certain line bundle over EH. The 5-dimensional total space in question is the level set of the hyperkahler quotient construction.

### EH as a line bundle

The paper [2] describes two constructions leading to Kahler-Einstein metrics on the total space of a complex line bundle over a Kahler manifold. Applied to \(\mathbb{C}P^{n}\), the first construction leads to a family of Ricci-flat Kahler metrics on the total space of the canonical line bundle \(K=\mathcal{O}(-n-1)\to\mathbb{C}P^{n}\), the second one to a family of hyperkahler metrics on \(T^{*}\mathbb{C}P^{n}\). For \(n=1\), \(\mathcal{O}(-2)\simeq T^{*}\mathbb{C}P^{1}\) and both constructions result in the same metric, which is in fact the EH one. The first construction is reviewed in Appendix B.2. Here we apply it to the case of the canonical bundle \(K\) over \(M=\mathbb{C}P^{1}\) with the Fubini-Study metric. In this case \(K\) is simply the cotangent bundle \(\Lambda^{1,0}(\mathbb{C}P^{1})\). It is well known that line bundles over \(\mathbb{C}P^{1}\) are classified by their Chern number and \(K\) has Chern number \(-2\). In terms of the inhomogeneous coordinate \(w\), the Fubini-Study metric on \(\mathbb{C}P^{1}\) takes the form \[g_{\mathbb{C}P^{1}}=\frac{4|\mathrm{d}w|^{2}}{(1+|w|^{2})^{2}}, \tag{2.2}\] and is isometric to the round metric on the 2-sphere of unit radius. The corresponding Kahler form is \[\omega_{\mathbb{C}P^{1}}=\frac{2i\mathrm{d}w\wedge\mathrm{d}\bar{w}}{(1+|w|^{2})^{2}}=\frac{i}{2}e\wedge\bar{e}=\mathrm{vol}, \tag{2.3}\] where \(\mathrm{vol}\) is the Riemannian volume element with the natural orientation as a complex manifold, and \(e\) is the unitary section of \(K\) given by \[e=\frac{2\mathrm{d}w}{1+|w|^{2}}. 
\tag{2.4}\] The metric (2.2) is Einstein with scalar curvature \(s=2\), hence the Ricci form \(\rho_{\mathbb{C}P^{1}}\), Kahler form \(\omega_{\mathbb{C}P^{1}}\) and curvature \(\mathrm{d}\alpha\) of the Chern connection \(\alpha\) on \(K\), see Appendix B.1, are related by \[\rho_{\mathbb{C}P^{1}}=\omega_{\mathbb{C}P^{1}}=-i\mathrm{d}\alpha. \tag{2.5}\] It is convenient to set \[\alpha=2i\,a, \tag{2.6}\] where \[a=\frac{1}{2i}\frac{\bar{w}\mathrm{d}w-w\mathrm{d}\bar{w}}{1+|w|^{2}}=\mathrm{ Im}\left(\frac{\bar{w}\mathrm{d}w}{1+|w|^{2}}\right). \tag{2.7}\] We now apply the construction of [2], as described in [17]. Thus, as shown in Appendix B.2, if \(\zeta\) is a coordinate on the fibres of \(K\) and we define \[\theta =\mathrm{d}\zeta+2i\zeta a, \tag{2.8}\] \[\omega =2(u\omega_{\mathbb{C}P^{1}}+iu^{\prime}\theta\wedge\bar{\theta}), \tag{2.9}\] where \(u\) is a function of \(|\zeta|^{2}\) only, then \(K\) with Kahler form (2.9) is Ricci-flat Kahler provided that \[2uu^{\prime}=1\quad\Rightarrow\quad u=\sqrt{\kappa+|\zeta|^{2}}, \tag{2.10}\] for \(\kappa\) an integration constant. The associated Kahler form and metric are \[\omega =2u\,\omega_{\mathbb{C}P^{1}}+iu^{-1}\theta\wedge\bar{\theta}, \tag{2.11}\] \[g =2u\,g_{\mathbb{C}P^{1}}+2u^{-1}|\theta|^{2}. \tag{2.12}\] It is convenient to rescale \(\zeta\to\zeta/8\), \(\kappa\to\kappa/64\), getting \[\omega =\frac{i}{8}\left(\left(|\zeta|^{2}+\kappa\right)^{1/2}e\wedge \bar{e}+\left(|\zeta|^{2}+\kappa\right)^{-1/2}\theta\wedge\bar{\theta}\right), \tag{2.13}\] \[g =\left(|\zeta|^{2}+\kappa\right)^{1/2}\frac{|\mathrm{d}w|^{2}}{(1 +|w|^{2})^{2}}+\left(|\zeta|^{2}+\kappa\right)^{-1/2}\frac{1}{4}|\mathrm{d} \zeta+2i\zeta a|^{2}. \tag{2.14}\] Equation (2.14) clearly displays EH as a non-trivial complex line bundle over \(\mathbb{C}P^{1}\) with a twisted product metric on the total space. 
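The ODE in (2.10) is elementary to verify. The following sketch (not part of the original text) checks \(2uu^{\prime}=1\) for \(u=\sqrt{\kappa+t}\), with \(t=|\zeta|^{2}\), by a central finite difference, independently of any symbolic algebra:

```python
import math

def u(t, kappa):
    # Calabi profile function u(|zeta|^2) = sqrt(kappa + |zeta|^2), cf. (2.10)
    return math.sqrt(kappa + t)

def deriv(f, t, h=1e-6):
    # central finite difference, O(h^2) accurate
    return (f(t + h) - f(t - h)) / (2 * h)

kappa = 1.7
for t in [0.3, 1.0, 5.0]:
    lhs = 2 * u(t, kappa) * deriv(lambda x: u(x, kappa), t)
    assert abs(lhs - 1.0) < 1e-6, (t, lhs)
print("2 u u' = 1 verified at sample points")
```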
The \(SU(2)\) and line bundle structure of equations (2.1), (2.14) should be compared with the corresponding expressions for \(\mathbb{C}^{2}\), given by (A.10) and (A.28). We now check that (2.14) is indeed the same metric as (2.1). Switching to polar coordinates \(\zeta=Re^{i\chi}\) in the fibres gives \[g=\left(R^{2}+\kappa\right)^{1/2}\frac{|\mathrm{d}w|^{2}}{(1+|w|^{2})^{2}}+ \frac{1}{\left(R^{2}+\kappa\right)^{1/2}}\frac{R^{2}}{4}\left(\frac{\mathrm{d }R^{2}}{R^{2}}+(\mathrm{d}\chi+2a)^{2}\right). \tag{2.15}\] Making the coordinate change \(r^{2}=R\) shows that (2.15) asymptotically becomes the flat metric on \(\mathbb{C}^{2}/\mathbb{Z}_{2}\), cfr. (A.33). This suggests defining \[r^{2}=\left(R^{2}+\kappa\right)^{1/2}, \tag{2.16}\] which gives \[g=\left(1-\frac{\kappa}{r^{4}}\right)^{-1}\mathrm{d}r^{2}+r^{2}\frac{| \mathrm{d}w|^{2}}{(1+|w|^{2})^{2}}+\frac{r^{2}}{4}\left(1-\frac{\kappa}{r^{4} }\right)(\mathrm{d}\chi+2a)^{2}. \tag{2.17}\] Further setting \(w=\cot\left(\frac{\theta}{2}\right)\mathrm{e}^{i\phi}\), \[\chi=\psi-\phi, \tag{2.18}\] gives \[\frac{|\mathrm{d}w|^{2}}{(1+|w|^{2})^{2}}=\frac{\eta_{1}^{2}+\eta_{2}^{2}}{4}, \quad a=\left(\frac{1+\cos\theta}{2}\right)\mathrm{d}\phi,\quad\mathrm{d} \chi+2a=\eta_{3} \tag{2.19}\] so that we recover (2.1). The difference between (2.18) and (A.35) is due to the different range \(\psi\in[0,4\pi)\) for the \(SU(2)\) orbits in \(\mathbb{C}^{2}\) and \(\psi\in[0,2\pi)\) for the \(SU(2)/\mathbb{Z}_{2}\) orbits in EH. 
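Both claims in this passage, the radial coefficient produced by \(r^{2}=(R^{2}+\kappa)^{1/2}\) and the formula for \(a\) in (2.19), can be spot-checked numerically (a sketch using finite differences, not from the original paper):

```python
import cmath
import math

kappa = 2.0
h = 1e-6

# (i) r^2 = (R^2 + kappa)^{1/2}: the radial part (R^2+kappa)^{-1/2} dR^2/4
# of (2.15) must become (1 - kappa/r^4)^{-1} dr^2, cf. (2.17).
def r_of_R(R):
    return (R**2 + kappa) ** 0.25

R = 1.3
dr_dR = (r_of_R(R + h) - r_of_R(R - h)) / (2 * h)
r = r_of_R(R)
radial_2_15 = (R**2 + kappa) ** -0.5 / 4 * (1 / dr_dR) ** 2  # coefficient of dr^2
radial_2_17 = 1 / (1 - kappa / r**4)
assert abs(radial_2_15 - radial_2_17) < 1e-5

# (ii) w = cot(theta/2) e^{i phi}: the d(phi) coefficient of
# a = Im( conj(w) dw / (1+|w|^2) ) must equal (1+cos(theta))/2, cf. (2.19).
def w(theta, phi):
    return (math.cos(theta / 2) / math.sin(theta / 2)) * cmath.exp(1j * phi)

theta, phi = 0.9, 0.4
dw_dphi = (w(theta, phi + h) - w(theta, phi - h)) / (2 * h)
a_phi = ((w(theta, phi).conjugate() * dw_dphi) / (1 + abs(w(theta, phi)) ** 2)).imag
assert abs(a_phi - (1 + math.cos(theta)) / 2) < 1e-6
print("coordinate-change checks passed")
```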
If we introduce the frame \[e_{1}=\frac{1}{2}(\kappa+|\zeta|^{2})^{1/4}e,\qquad e_{2}=\frac{1}{2}(\kappa+|\zeta|^{2})^{-1/4}\theta, \tag{2.20}\] the metric and Kahler form take the flat-space form \[\omega =\frac{i}{2}(e_{1}\wedge\bar{e}_{1}+e_{2}\wedge\bar{e}_{2}), \tag{2.21}\] \[g =|e_{1}|^{2}+|e_{2}|^{2}.\]

### Equal standing coordinates

The simplest expression for the \(\mathbb{C}^{2}\) metric is of course \(|\mathrm{d}z_{1}|^{2}+|\mathrm{d}z_{2}|^{2}\) where the two coordinates \((z_{1},z_{2})\) have equal standing. In the case of \(\mathbb{C}^{2}\) the bundle coordinates \((w,\zeta)\) are related to \((z_{1},z_{2})\) by (A.26). Essentially, \(w=z_{1}/z_{2}\) is an inhomogeneous coordinate on the base of the line bundle \(\mathbb{C}^{2}\setminus\{0\}\to\mathbb{C}P^{1}\) while \(\zeta\) parametrises the \(\mathbb{C}\) fibre. In the case of \(\mathbb{C}^{2}\) we have \(|\zeta|=\sqrt{|z_{1}|^{2}+|z_{2}|^{2}}=r\), while for EH \(|\zeta|=R\approx r^{2}\). This suggests defining \[(z_{1},z_{2})=\frac{\sqrt{\zeta}}{\sqrt{1+|w|^{2}}}(w,1),\qquad(w,\zeta)=\left(\frac{z_{1}}{z_{2}},(|z_{1}|^{2}+|z_{2}|^{2})\frac{z_{2}}{\bar{z}_{2}}\right). \tag{2.22}\] One calculates \[2\mathrm{d}z_{1}=\frac{\sqrt{\zeta}}{\sqrt{1+|w|^{2}}}e+\frac{w}{\sqrt{\zeta}\sqrt{1+|w|^{2}}}\theta,\qquad 2\mathrm{d}z_{2}=\frac{1}{\sqrt{\zeta}\sqrt{1+|w|^{2}}}\theta-\frac{\bar{w}\sqrt{\zeta}}{\sqrt{1+|w|^{2}}}e, \tag{2.23}\] with inverse \[e=\frac{2}{|\zeta|}\frac{\bar{z}_{2}}{z_{2}}(z_{2}dz_{1}-z_{1}dz_{2}),\qquad\theta=2\frac{z_{2}}{\bar{z}_{2}}(\bar{z}_{1}dz_{1}+\bar{z}_{2}dz_{2}), \tag{2.24}\] leading to \[g=\frac{1}{s}\Big{(}F|z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}|^{2}+F^{-1}|\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}|^{2}\Big{)}, \tag{2.25}\] where we have set \[s=|\zeta|=|z_{1}|^{2}+|z_{2}|^{2},\quad F(s)=\sqrt{1+\frac{\kappa}{s^{2}}}. 
\tag{2.26}\] The metric (2.25) should be compared with the \(\mathbb{C}^{2}\) metric in the form (A.22), that is \[g_{\mathbb{C}^{2}}=\frac{1}{s}\left[|z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1} |^{2}+|\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}|^{2}\right], \tag{2.27}\] which (2.25) reduces to for \(\kappa=0\). For later usage we note that in components (2.25) becomes \[\begin{split}& g_{z_{1}\bar{z}_{1}}=\frac{1}{s}(F|z_{2}|^{2}+F^{-1}|z _{1}|^{2}),\qquad g_{z_{2}\bar{z}_{2}}=\frac{1}{s}(F|z_{1}|^{2}+F^{-1}|z_{2}|^{ 2}),\\ & g_{z_{1}\bar{z}_{2}}=\bar{g}_{z_{2}\bar{z}_{1}}=\frac{1}{s}(F^{ -1}-F)z_{2}\bar{z}_{1},\end{split} \tag{2.28}\] and, since \(g\) has unit determinant, \[\begin{split}& g_{z_{1}\bar{z}_{1}}^{-1}=\frac{1}{s}(F|z_{1}|^{2}+F^{ -1}|z_{2}|^{2}),\qquad g_{z_{2}\bar{z}_{2}}^{-1}=\frac{1}{s}(F|z_{2}|^{2}+F^{- 1}|z_{1}|^{2}),\\ & g_{z_{1}\bar{z}_{2}}^{-1}=\bar{g}_{z_{2}\bar{z}_{1}}^{-1}= \frac{1}{s}(F-F^{-1})\bar{z}_{1}z_{2}.\end{split} \tag{2.29}\] Writing \[Z=\begin{pmatrix}z_{1}\\ z_{2}\end{pmatrix},\quad J=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}, \tag{2.30}\] we obtain \[g=\frac{1}{s}\Big{(}F|Z^{T}J\mathrm{d}Z|^{2}+F^{-1}|Z^{T}\mathrm{d}\bar{Z}|^{2 }\Big{)}. \tag{2.31}\] Clearly \(s\) and \(Z^{T}\mathrm{d}\bar{Z}\) are invariant under \(Z\mapsto GZ\) for any \(G\in U(2)\), while \(Z^{T}J\mathrm{d}Z\) is invariant for \(G\in Sp(2,\mathbb{C})=SL(2,\mathbb{C})\). However \(|Z^{T}J\mathrm{d}Z|\) is invariant for any \(G\in GL(2,\mathbb{C}):|\det G|=1\), hence (2.31) shows that the isometry group of EH is \(U(2)\). 
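The unit-determinant claim used to pass from (2.28) to (2.29) is easy to verify numerically; the identity \(\det[g_{i\bar{j}}]=1\) in fact holds for any value of \(F\) (a sketch, not from the original paper):

```python
import math

kappa = 3.0
z1, z2 = 1.0 + 2.0j, 0.5 - 1.0j

s = abs(z1) ** 2 + abs(z2) ** 2
F = math.sqrt(1 + kappa / s**2)

# Hermitian metric components of (2.28)
g11 = (F * abs(z2) ** 2 + abs(z1) ** 2 / F) / s
g22 = (F * abs(z1) ** 2 + abs(z2) ** 2 / F) / s
g12 = (1 / F - F) * z2 * z1.conjugate() / s

det = g11 * g22 - abs(g12) ** 2
assert abs(det - 1.0) < 1e-12  # det [g_{i jbar}] = 1, independently of F
print("unit determinant verified")
```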
### Harmonic 2-forms

With respect to the \((z_{1},z_{2})\) coordinates, the frame (2.20) becomes \[\begin{split}& e_{1}=\sqrt{\frac{F}{s}}\frac{\bar{z}_{2}}{z_{2}}(z_{2}dz_{1}-z_{1}dz_{2})=\frac{1}{2}(Fs)^{1/2}e,\\ & e_{2}=\frac{1}{\sqrt{Fs}}\frac{z_{2}}{\bar{z}_{2}}(\bar{z}_{1}dz_{1}+\bar{z}_{2}dz_{2})=\frac{1}{2}(Fs)^{-1/2}\theta.\end{split} \tag{2.32}\] Since \[\begin{split}&*e_{1}=\frac{1}{2}e_{1}\wedge e_{2}\wedge\bar{e}_{2},\quad*e_{2}=\frac{1}{2}e_{2}\wedge e_{1}\wedge\bar{e}_{1},\\ &*(e_{1}\wedge\bar{e}_{1})=e_{2}\wedge\bar{e}_{2},\quad*(e_{2}\wedge\bar{e}_{2})=e_{1}\wedge\bar{e}_{1},\end{split} \tag{2.33}\] the combination \(e_{1}\wedge\bar{e}_{1}+e_{2}\wedge\bar{e}_{2}\) is self-dual. In fact, it is essentially the Kahler form \(\omega\) on EH, see (2.21). Of course \(\omega\) is also closed, hence harmonic, but not \(L^{2}\). Using (2.32) \(\omega\) can be rewritten in the form \[\omega=2i\left[F(\mathrm{d}z_{1}\wedge\mathrm{d}\bar{z}_{1}+\mathrm{d}z_{2}\wedge\mathrm{d}\bar{z}_{2})+\frac{1}{s}(F^{-1}-F)(\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2})\wedge(z_{1}\mathrm{d}\bar{z}_{1}+z_{2}\mathrm{d}\bar{z}_{2})\right]. \tag{2.34}\] Since \(F\) satisfies \[F^{\prime}s=F^{-1}-F=-\frac{\kappa}{s\sqrt{\kappa+s^{2}}}=-\frac{\kappa}{Fs^{2}}, \tag{2.35}\] we can write the Kahler form as the exterior derivative of a local potential, \[\omega=2i\mathrm{d}\left(F(z_{1}\mathrm{d}\bar{z}_{1}+z_{2}\mathrm{d}\bar{z}_{2})\right). \tag{2.36}\] Consider now the anti self-dual combination \(e_{1}\wedge\bar{e}_{1}-e_{2}\wedge\bar{e}_{2}\). It is not closed so we look for a closed multiple of it, \[\tilde{\omega}=\frac{i}{2}f(e_{1}\wedge\bar{e}_{1}-e_{2}\wedge\bar{e}_{2}), \tag{2.37}\] where \(f\) is a function of \(s\) only. 
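The identity (2.35) can be checked by finite differences (a numerical sketch, independent of the algebra above and not part of the original text):

```python
import math

kappa = 2.5

def F(s):
    # F(s) = sqrt(1 + kappa/s^2), cf. (2.26)
    return math.sqrt(1 + kappa / s**2)

h = 1e-6
for s in [0.7, 1.5, 4.0]:
    Fp = (F(s + h) - F(s - h)) / (2 * h)          # F'(s), central difference
    lhs = Fp * s
    rhs = 1 / F(s) - F(s)
    assert abs(lhs - rhs) < 1e-6, (s, lhs, rhs)
    # the same quantity in closed form: -kappa / (s * sqrt(kappa + s^2))
    assert abs(rhs + kappa / (s * math.sqrt(kappa + s**2))) < 1e-12
print("F' s = 1/F - F verified")
```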
Since \(\omega\) is closed, \({\rm d}(sFe\wedge\bar{e})=-{\rm d}((sF)^{-1}\theta\wedge\bar{\theta})\), so using (B.18) we have \[{\rm d}\tilde{\omega}=(2f(sF)^{\prime}+(sF)f^{\prime})\wedge(\zeta\bar{\theta}+\bar{\zeta}\theta)\wedge e\wedge\bar{e}, \tag{2.38}\] showing that \(\tilde{\omega}\) is closed provided that \[f=\frac{1}{(sF)^{2}}. \tag{2.39}\] Therefore the form \[\tilde{\omega}=\frac{1}{(sF)^{2}}\frac{i}{2}(e_{1}\wedge\bar{e}_{1}-e_{2}\wedge\bar{e}_{2}) \tag{2.40}\] is anti self-dual and closed, hence harmonic. It is also \(L^{2}\) since \[|\tilde{\omega}|^{2}{\rm vol}=-\tilde{\omega}\wedge\tilde{\omega}=-\frac{1}{4}(sF)^{-4}e_{1}\wedge\bar{e}_{1}\wedge e_{2}\wedge\bar{e}_{2}\quad\Rightarrow\quad|\tilde{\omega}|^{2}=(sF)^{-4} \tag{2.41}\] which has a finite integral over EH. Being closed, \(\tilde{\omega}\) can also be written as the exterior derivative of a local potential, and a computation shows that \[\tilde{\omega}=2i{\rm d}\left[\frac{1}{Fs^{2}}(z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2})\right]. \tag{2.42}\] It is well known that the harmonic cohomology of EH is non-trivial only in dimension two, and that the space of harmonic \(L^{2}\) 2-forms is 1-dimensional [9]. As we have just seen it is generated by \(\tilde{\omega}\). It is interesting to compare the expressions of \(\omega\), \(\tilde{\omega}\) with that of \({\rm d}\theta_{3}\), for \(\theta_{3}\) the metric dual with respect to the EH metric of the Killing vector field \(X_{3}\), \[\theta_{3}=X_{3}^{\flat}=\frac{i}{2F}\left(z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2}-\bar{z}_{1}{\rm d}z_{1}-\bar{z}_{2}{\rm d}z_{2}\right)=\frac{1}{F}\mathop{\rm Im}(\bar{z}_{1}{\rm d}z_{1}+\bar{z}_{2}{\rm d}z_{2}). \tag{2.43}\] Since \(X_{3}\) is a Killing vector field and EH is Ricci-flat, \({\rm d}\theta_{3}\) is harmonic. 
One calculates \[2{\rm d}\theta_{3}=\frac{i}{2}\left[e_{1}\wedge\bar{e}_{1}+e_{2}\wedge\bar{e}_{2}-\kappa\left(\frac{e_{1}\wedge\bar{e}_{1}-e_{2}\wedge\bar{e}_{2}}{(sF)^{2}}\right)\right]=\omega-\kappa\tilde{\omega}, \tag{2.44}\] hence \(\kappa\tilde{\omega}\) represents the same cohomology class as \(\omega\), which generates \(H^{2}_{\rm dR}({\rm EH})\). We can also see that \({\rm d}\theta_{3}\) is harmonic but not \(L^{2}\) and, interestingly, that \(\omega\), \(\kappa\tilde{\omega}\) are the self-dual and anti self-dual parts of \({\rm d}\theta_{3}\), \[\omega=*{\rm d}\theta_{3}+{\rm d}\theta_{3},\qquad\kappa\tilde{\omega}=*{\rm d}\theta_{3}-{\rm d}\theta_{3}. \tag{2.45}\] Using (2.36) and (2.42) we see that \({\rm d}\theta_{3}\) can also be written in the form \[{\rm d}\theta_{3}=i{\rm d}\left[\frac{1}{F}(z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2})\right]. \tag{2.46}\] ### \(U(1)\) connection with \(L^{2}\) harmonic curvature In Section 3.1 we will consider the Dirac operator on EH twisted by a \(U(1)\) connection \(\mathcal{A}\) with \(L^{2}\) harmonic curvature \({\rm d}\mathcal{A}\). As we just discussed, \({\rm d}\mathcal{A}\) is necessarily some constant multiple of \(\tilde{\omega}\) and by (2.42) we can take \(\mathcal{A}\) to be some multiple of \[\frac{1}{s^{2}F}(z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2}-\bar{z}_{1}{\rm d}z_{1}-\bar{z}_{2}{\rm d}z_{2}). \tag{2.47}\] In order for \({\rm d}\mathcal{A}\) to be the curvature of a connection we need to impose the quantisation condition \[\frac{i}{2\pi}\int_{\mathbb{C}P^{1}}{\rm d}\mathcal{A}=\ell\in\mathbb{Z}, \tag{2.48}\] obtaining \[\mathcal{A}=\ell(A-\bar{A}),\quad A=\frac{\sqrt{\kappa}}{2s^{2}F}(z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2})=-\frac{1}{2}\,\overline{\partial}\sinh^{-1}(\sqrt{\kappa}/s). \tag{2.49}\] ### Hyperkahler quotient It is well known, see e.g. 
[11], that EH can be obtained as the hyperkahler quotient of \(\mathbb{H}^{2}\), but we were unable to find anywhere in the literature the details of this construction in a form suited for comparison with the two descriptions given above. We spell it out here.1 Footnote 1: One of us (KK) benefited from a discussion with Daniel Platt in relation to the material described in this subsection. The first part of this construction is standard and appears in many references. Identify \(\mathbb{H}^{2}\) with \(\mathbb{C}^{4}\) via the isomorphism \(\mathbb{H}=\mathbb{C}+\mathbb{C}\mathbf{j}\), \(q_{i}=Z_{i}+W_{i}\mathbf{j}\), \(i=1,2\), \(q_{i}\in\mathbb{H}\), \(Z_{i},W_{i}\in\mathbb{C}\), and equip \(\mathbb{H}^{2}\) with the flat metric and the Kahler forms \[\omega_{R} =\frac{i}{2}(\mathrm{d}Z_{1}\wedge\mathrm{d}\bar{Z}_{1}+\mathrm{d}Z_{2}\wedge\mathrm{d}\bar{Z}_{2}+\mathrm{d}W_{1}\wedge\mathrm{d}\bar{W}_{1}+\mathrm{d}W_{2}\wedge\mathrm{d}\bar{W}_{2})=\frac{i}{2}\partial\overline{\partial}(|Z|^{2}+|W|^{2}), \tag{2.50}\] \[\omega_{C} =\omega_{2}+i\omega_{3}=\mathrm{d}W_{1}\wedge\mathrm{d}Z_{1}+\mathrm{d}W_{2}\wedge\mathrm{d}Z_{2},\] where \(Z=(Z_{1},Z_{2})^{T}\), \(W=(W_{1},W_{2})^{T}\), \(|Z|^{2}=|Z_{1}|^{2}+|Z_{2}|^{2}\). The right \(U(1)\) action \[(Z,W)\mapsto(Z,W)\mathrm{e}^{\mathrm{i}t}=(\mathrm{e}^{\mathrm{i}t}Z,\mathrm{e}^{-\mathrm{i}t}W), \tag{2.51}\] which corresponds to translation along the \(U(1)\) fibres of the Hopf fibration \(S^{1}\hookrightarrow S^{7}\to\mathbb{C}P^{3}\), is Hamiltonian and isometric. It is convenient to take the associated moment maps to be \[\mu_{R}=|Z|^{2}-|W|^{2}-2\sqrt{\kappa},\quad\mu_{C}=Z^{T}W=Z_{1}W_{1}+Z_{2}W_{2}. \tag{2.52}\] The level set \(\mu^{-1}(0)=\mu_{R}^{-1}(0)\cap\mu_{C}^{-1}(0)\) is a smooth real 5-manifold. Further quotienting by the \(U(1)\) action (2.51) we get the hyperkahler quotient \[\mathbb{H}^{2}\mathbin{/\!\!/}U(1)=\mu^{-1}(0)/U(1). 
\tag{2.53}\] To compare to our previous description (2.25) of EH, we need to parametrise this level set by coordinates related to \(z_{1},z_{2}\), as well as some coordinate for the \(S^{1}\) fibre. This is something we were unable to find in the literature. To proceed, let \(h\in U(2)\) act on \(\mathbb{C}^{2}\) by ordinary matrix multiplication. Then the isometric left \(U(2)\) action on \(\mathbb{C}^{4}\) given by \[h\cdot(Z,W)=(hZ,\bar{h}W) \tag{2.54}\] commutes with (2.51) and preserves \(\mu\) level sets, so it descends to an isometric action on the quotient. In quaternionic notation, the moment map is \[\mu(q_{1},q_{2})=\frac{1}{2}\sum_{a=1,2}q_{a}\mathbf{i}\bar{q}_{a}-\sqrt{ \kappa}\,\mathbf{i} \tag{2.55}\] and (2.54) corresponds to \[h\cdot(q_{1},q_{2})=(q_{1},q_{2})h^{T} \tag{2.56}\] where on the rhs we have ordinary matrix multiplication. The isometric \(U(2)\) action on the quotient just introduced should match the one described in the paragraph following (2.31). This is achieved by setting \[Z_{1} =z_{1}e^{i\psi}f^{1/2}, Z_{2} =z_{2}e^{i\psi}f^{1/2}, \tag{2.57}\] \[W_{1} =-z_{2}e^{-i\psi}f^{-1/2}, W_{2} =z_{1}e^{-i\psi}f^{-1/2},\] where \(f\) is some function of \(z_{1},z_{2}\) to be determined. This ansatz automatically satisfies \(\mu_{C}(Z,W)=0\), while the condition \(|Z|^{2}-|W|^{2}=2\sqrt{\kappa}\) becomes \[(|z_{1}|^{2}+|z_{2}|^{2})(f-f^{-1})=2\sqrt{\kappa}, \tag{2.58}\] which is solved by \[f=\sqrt{1+\frac{\kappa}{s^{2}}}+\frac{\sqrt{\kappa}}{s}. \tag{2.59}\] We note the useful identities \[f-f^{-1}=\frac{2\sqrt{\kappa}}{s},\qquad f+f^{-1}=2\sqrt{1+\frac{\kappa}{s^{2 }}}=2F, \tag{2.60}\] as well as \[\frac{1}{f^{1/2}}\frac{\mathrm{d}f^{1/2}}{\mathrm{d}s}=\frac{1}{2f}\frac{\mathrm{d }f}{\mathrm{d}s}=-\frac{\sqrt{\kappa}}{2s^{2}F}. \tag{2.61}\] We now pull-back the flat metric on \(\mathbb{C}^{4}\) to \(\mu^{-1}(\mathbf{i}\sqrt{\kappa})\) as parametrised by (2.57). 
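Before pulling back the metric, we note that the identities (2.58)–(2.61) satisfied by \(f\) can be spot-checked numerically; a minimal sketch (plain Python, arbitrary sample values; helper names are ours):

```python
import math

kappa = 0.9

def F(s):
    # F = sqrt(1 + kappa/s^2), cf. (2.60)
    return math.sqrt(1.0 + kappa / s**2)

def f(s):
    # the solution (2.59) of the moment map condition (2.58)
    return math.sqrt(1.0 + kappa / s**2) + math.sqrt(kappa) / s

s = 2.2
# (2.58)/(2.60): s (f - 1/f) = 2 sqrt(kappa)  and  f + 1/f = 2F
assert abs(s * (f(s) - 1.0 / f(s)) - 2.0 * math.sqrt(kappa)) < 1e-12
assert abs(f(s) + 1.0 / f(s) - 2.0 * F(s)) < 1e-12

# (2.61): (f^{1/2})'/f^{1/2} = -sqrt(kappa)/(2 s^2 F), checked by finite differences
h = 1e-6
g = lambda s: math.sqrt(f(s))
lhs = (g(s + h) - g(s - h)) / (2.0 * h) / g(s)
assert abs(lhs + math.sqrt(kappa) / (2.0 * s**2 * F(s))) < 1e-6
```

Any other positive values of \(\kappa\) and \(s\) pass the same assertions.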
We have \[\mathrm{d}Z_{1} =z_{1}e^{i\psi}f^{1/2}\left(\frac{\mathrm{d}z_{1}}{z_{1}}+i \mathrm{d}\psi-\frac{\sqrt{\kappa}}{2s^{2}F}\mathrm{d}s\right),\quad\mathrm{d}Z _{2}=z_{2}e^{i\psi}f^{1/2}\left(\frac{\mathrm{d}z_{2}}{z_{2}}+i\mathrm{d}\psi- \frac{\sqrt{\kappa}}{2s^{2}F}\mathrm{d}s\right),\] \[\mathrm{d}W_{1} =-z_{2}e^{-i\psi}f^{-1/2}\left(\frac{\mathrm{d}z_{2}}{z_{2}}-i \mathrm{d}\psi+\frac{\sqrt{\kappa}}{2s^{2}F}\mathrm{d}s\right),\quad\mathrm{d} W_{2}=z_{1}e^{-i\psi}f^{-1/2}\left(\frac{\mathrm{d}z_{1}}{z_{1}}-i\mathrm{d} \psi+\frac{\sqrt{\kappa}}{2s^{2}F}\mathrm{d}s\right), \tag{2.62}\] giving \[|\mathrm{d}Z_{1}|^{2}+|\mathrm{d}Z_{2}|^{2} =f(|\mathrm{d}z_{1}|^{2}+|\mathrm{d}z_{2}|^{2})+sf\mathrm{d}\psi^ {2}+if\mathrm{d}\psi(z_{1}\mathrm{d}\bar{z}_{1}-\bar{z}_{1}\mathrm{d}z_{1}+z_{ 2}\mathrm{d}\bar{z}_{2}-\bar{z}_{2}\mathrm{d}z_{2})\] \[+f\frac{\kappa(\mathrm{d}s)^{2}}{4s^{3}F^{2}}-f\frac{\sqrt{\kappa }(\mathrm{d}s)^{2}}{2s^{2}F},\] \[|\mathrm{d}W_{1}|^{2}+|\mathrm{d}W_{2}|^{2} =f^{-1}(|\mathrm{d}z_{1}|^{2}+|\mathrm{d}z_{2}|^{2})+sf^{-1} \mathrm{d}\psi^{2}-if^{-1}\mathrm{d}\psi(z_{1}\mathrm{d}\bar{z}_{1}-\bar{z}_{ 1}\mathrm{d}z_{1}+z_{2}\mathrm{d}\bar{z}_{2}-\bar{z}_{2}\mathrm{d}z_{2})\] \[+f^{-1}\frac{\kappa(\mathrm{d}s)^{2}}{4s^{3}F^{2}}+f^{-1}\frac{ \sqrt{\kappa}(\mathrm{d}s)^{2}}{2s^{2}F}. 
\tag{2.63}\] This means that the pull-back of (half) the flat metric on \(\mathbb{C}^{4}\) is \[\frac{1}{2}(|\mathrm{d}Z_{1}|^{2}+|\mathrm{d}Z_{2}|^{2}+|\mathrm{d}W_{1}|^{2}+|\mathrm{d}W_{2}|^{2})=F(|\mathrm{d}z_{1}|^{2}+|\mathrm{d}z_{2}|^{2})+sF\mathrm{d}\psi^{2}+\frac{i\sqrt{\kappa}}{s}\mathrm{d}\psi(z_{1}\mathrm{d}\bar{z}_{1}-\bar{z}_{1}\mathrm{d}z_{1}+z_{2}\mathrm{d}\bar{z}_{2}-\bar{z}_{2}\mathrm{d}z_{2})-\frac{\kappa}{4s^{3}F}(\mathrm{d}s)^{2}. \tag{2.64}\] We now complete the square getting \[\frac{1}{2}(|\mathrm{d}Z_{1}|^{2}+|\mathrm{d}Z_{2}|^{2}+|\mathrm{d}W_{1}|^{2}+|\mathrm{d}W_{2}|^{2})=g+sF(\mathrm{d}\psi+i(A-\bar{A}))^{2}, \tag{2.65}\] where \[g =F(|\mathrm{d}z_{1}|^{2}+|\mathrm{d}z_{2}|^{2})+\frac{1}{s}(F^{-1}-F)|\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}|^{2}, \tag{2.66}\] \[A =\frac{\sqrt{\kappa}}{2s^{2}F}(z_{1}\mathrm{d}\bar{z}_{1}+z_{2}\mathrm{d}\bar{z}_{2}). \tag{2.67}\] Using the identity \[s\Big{(}|\mathrm{d}z_{1}|^{2}+|\mathrm{d}z_{2}|^{2}\Big{)}=|z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}|^{2}+|\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}|^{2} \tag{2.68}\] we see that (2.66) is the EH metric in the form (2.25). We recognise \(A\) as the \((0,1)\) part of the \(\mathrm{U}(1)\) connection (2.49), whose curvature is anti-self-dual and \(L^{2}\). Therefore, the natural connection \(A-\bar{A}\) on EH arises as the connection on the 5-dimensional fibered space \(\mu^{-1}(\mathbf{i}\sqrt{\kappa})\) obtained in the process of the hyperkahler reduction of \(\mathbb{H}^{2}\). ## 3. Dirac zero modes on the Eguchi-Hanson space ### No Dirac zero modes on EH Let \(D\) be the Dirac operator on EH and \(\sigma\) be a spinor. Since the scalar curvature \(s\) of EH vanishes, Lichnerowicz's identity \[D^{2}\sigma=\nabla^{*}\nabla\sigma+\frac{s}{4}\sigma \tag{3.1}\] implies that EH admits no \(L^{2}\) Dirac zero modes. In order to obtain a non-trivial problem it is necessary to twist the spinor bundle by a complex line bundle equipped with a connection \(\mathcal{A}\). Equivalently, we need to replace the spin structure on EH by a spin-\(c\) structure. 
Lichnerowicz's identity then becomes \[D^{2}_{\mathcal{A}}\sigma=\nabla^{*}\nabla\sigma+\frac{s}{4}\sigma+\mathrm{d}\mathcal{A}\cdot\sigma, \tag{3.2}\] where \(\cdot\) denotes Clifford multiplication, and for a suitable choice of \(\mathcal{A}\) it is possible to obtain non-trivial solutions of the twisted Dirac equation \(D_{\mathcal{A}}\sigma=0\). Twisting by an arbitrary connection does not make for an interesting problem, but taking \(\mathcal{A}\) so that \(\mathrm{d}\mathcal{A}\) is \(L^{2}\) harmonic is a natural choice. As we already discussed, in the case of EH any such connection \(\mathcal{A}\) takes the form (2.49). Therefore, we want to find \(L^{2}\) solutions of the equation \(D_{\mathcal{A}}\sigma=0\) for \(\mathcal{A}\) given by (2.49). Working on the spinor bundle, viewed as a rank-four \(\mathrm{Spin}(4)\) module, this problem has been considered in [4]. ### The canonical spin-\(c\) structure However, since EH is a Kahler manifold, there is a more convenient approach in terms of complex differential forms. In fact, any almost complex manifold carries a canonical spin-\(c\) structure. Let us recall the necessary background. We follow [6], section 3.4. We have the following proposition: **Proposition 1**.: _The spinor bundle \(S\) of a Hermitian manifold of complex dimension \(k\) (with respect to an arbitrary spin-\(c\) structure) is isomorphic to_ \[S=(\Lambda^{0,0}\oplus\ldots\oplus\Lambda^{0,k})\otimes S_{0}=(\Lambda^{0,0}\oplus\ldots\oplus\Lambda^{k,0})\otimes S_{k},\] _where_ \[S_{0}=\{\sigma\in S:\omega\sigma=ik\sigma\},\quad S_{k}=\{\sigma\in S:\omega\sigma=-ik\sigma\},\] _and \(\omega\) is the Kahler form, acting on a spinor by Clifford multiplication. 
In particular \(S_{0}=\Lambda^{k,0}\otimes S_{k}\) and \(S_{k}=\Lambda^{0,k}\otimes S_{0}\)._ The _anti-canonical spin-\(c\) structure_ on a Hermitian manifold \((M,J)\) is the one for which the bundle \(S_{0}\) is trivial and \(S_{k}\) coincides with the canonical bundle \(K=\Lambda^{k}(T^{*})\) of \(M\). We have the following proposition, see [6] and also [15]. **Proposition 2**.: _Let \((M,J)\) be a Kahler manifold equipped with the anti-canonical spin-\(c\) structure. Then_ \[S\simeq\Lambda^{0,0}\oplus\ldots\oplus\Lambda^{0,k},\] _and the Dirac operator defined by the Levi-Civita connection coincides with_ \[\sqrt{2}(\overline{\partial}+\overline{\partial}^{*}). \tag{3.3}\] For reasons that will become clear in the next subsections, we will consider instead the Dirac operator \[D=\overline{\partial}-\overline{\partial}^{*}, \tag{3.4}\] which has the same kernel as \(\overline{\partial}+\overline{\partial}^{*}\), with the Clifford action given by \[\upsilon\cdot\sigma=\upsilon^{0,1}\wedge\sigma+\iota_{\upsilon^{\sharp}}\sigma. \tag{3.5}\] We discussed above how the spin structure on EH admits no \(L^{2}\) harmonic spinors. The same holds for the (anti-)canonical spin-\(c\) structure. In fact the spinor bundle is isomorphic to \(W=\Lambda^{0}\oplus\Lambda^{0,1}\oplus\Lambda^{0,2}\) and harmonic spinors correspond to forms \(\sigma\in W\) such that \[D\sigma=(\overline{\partial}-\overline{\partial}^{*})\sigma=0. \tag{3.6}\] On a complete Kahler manifold \[D\sigma=0\Leftrightarrow D^{2}\sigma=0\Leftrightarrow(\mathrm{d}\mathrm{d}^{*}+\mathrm{d}^{*}\mathrm{d})\sigma=0\Leftrightarrow\mathrm{d}\sigma=0=\mathrm{d}^{*}\sigma. \tag{3.7}\] Thus harmonic spinors in \(\Lambda^{0}\) are constant functions and \(\alpha\in\Lambda^{0,2}\) is harmonic if and only if \(*\alpha\) is a constant function. Neither is \(L^{2}\). As for 1-forms, by the Bochner identity \[\triangle\sigma=\nabla^{*}\nabla\sigma+\mathrm{Ric}(\sigma). 
\tag{3.8}\] Since EH is Ricci-flat, any harmonic form has to be parallel, hence cannot be \(L^{2}\). The result is not surprising since the Chern connection on the canonical bundle of a Ricci-flat manifold has zero curvature, so the Dirac operator associated to the canonical spin-\(c\) structure on EH is equivalent to the one associated to the spin structure. ### Computing the action of the Dirac operator on spin-\(c\) spinors In four dimensions, the two chiralities of spin-\(c\) spinors are identified with \[W^{+}=\Lambda^{0}+\Lambda^{0,2},\qquad W^{-}=\Lambda^{0,1}, \tag{3.9}\] and the spin-\(c\) Dirac operator is \[D=\bar{\partial}-\bar{\partial}^{*}. \tag{3.10}\] Here \(\bar{\partial}\) is the \((0,1)\) projection of the exterior derivative d, and \(\bar{\partial}^{*}\) its adjoint. The use of the minus sign here rather than plus is a convention that is more suitable for our purposes, because it leads to a very simple way of computing \(D\), which we will explain below. The action of \(\bar{\partial}\) on \(\Lambda^{0},\Lambda^{0,1}\) is very simple \[\bar{\partial}:\Lambda^{0}\ni\alpha\to\partial_{\bar{z}_{1}} \alpha\,d\bar{z}_{1}+\partial_{\bar{z}_{2}}\alpha\,d\bar{z}_{2}\in\Lambda^{0, 1}, \tag{3.11}\] \[\bar{\partial}:\Lambda^{0,1}\ni\beta\,d\bar{z}_{1}+\gamma\,d\bar {z}_{2}\to(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)d\bar{z}_ {1}\wedge d\bar{z}_{2}\in\Lambda^{0,2}.\] Here \(\alpha,\beta,\gamma\) are functions of \(z_{1},\bar{z}_{1},z_{2},\bar{z}_{2}\). The action of \(\bar{\partial}^{*}\) involves the metric and is more complicated. However, on a Kahler manifold there is a very simple set of rules to be followed. We derive these rules after \(\bar{\partial}^{*}\) is computed. ### Computation of \(\bar{\partial}^{*}\) We first compute the action of \(\bar{\partial}^{*}\) on \(\Lambda^{0,1}\). For the sake of generality, we do the computation for a general Hermitian metric on \(\mathbb{C}^{2}\). 
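The rules (3.11), together with the fact that \(\bar{\partial}\) squares to zero, can be illustrated numerically using Wirtinger derivatives, \(\partial_{\bar{z}}=\frac{1}{2}(\partial_{x}+i\partial_{y})\). A sketch with finite differences (plain Python; the sample function and helper names are ours):

```python
def wirtinger_bar(f, z, k, h=1e-5):
    # \partial_{\bar z_k} = (1/2)(\partial_x + i \partial_y), by central differences
    zp, zm = list(z), list(z)
    zp[k] += h; zm[k] -= h
    fx = (f(zp) - f(zm)) / (2 * h)
    zp, zm = list(z), list(z)
    zp[k] += 1j * h; zm[k] -= 1j * h
    fy = (f(zp) - f(zm)) / (2 * h)
    return 0.5 * (fx + 1j * fy)

alpha = lambda z: z[0] * z[1].conjugate()**2 + abs(z[0])**2   # a non-holomorphic sample
z0 = [0.7 + 0.2j, -0.3 + 0.5j]

# components of \bar\partial\alpha as in the first line of (3.11)
beta = lambda z: wirtinger_bar(alpha, z, 0)    # coefficient of d zbar_1
gamma = lambda z: wirtinger_bar(alpha, z, 1)   # coefficient of d zbar_2
assert abs(beta(z0) - z0[0]) < 1e-8                            # equals z_1 here
assert abs(gamma(z0) - 2 * z0[0] * z0[1].conjugate()) < 1e-8   # equals 2 z_1 zbar_2

# \bar\partial^2 = 0: the d zbar_1 ^ d zbar_2 coefficient of \bar\partial(\bar\partial\alpha)
val = wirtinger_bar(gamma, z0, 0) - wirtinger_bar(beta, z0, 1)
assert abs(val) < 1e-4
```

The tolerances only reflect the finite-difference approximation; the identities are exact.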
The Hermitian pairing of \(\bar{\partial}\alpha\in\Lambda^{0,1}\) with an arbitrary element \(\beta\,d\bar{z}_{1}+\gamma\,d\bar{z}_{2}\in\Lambda^{0,1}\) is given by \[\langle\bar{\partial}\alpha,\beta\,d\bar{z}_{1}+\gamma\,d\bar{z}_{2}\rangle=g^{-1}(\partial_{\bar{z}_{1}}\alpha\,d\bar{z}_{1}+\partial_{\bar{z}_{2}}\alpha\,d\bar{z}_{2},\bar{\beta}\,dz_{1}+\bar{\gamma}\,dz_{2})= \tag{3.12}\] \[\partial_{\bar{z}_{1}}\alpha\,\bar{\beta}\,g^{-1}_{z_{1}\bar{z}_{1}}+\partial_{\bar{z}_{1}}\alpha\,\bar{\gamma}\,g^{-1}_{z_{2}\bar{z}_{1}}+\partial_{\bar{z}_{2}}\alpha\,\bar{\beta}\,g^{-1}_{z_{1}\bar{z}_{2}}+\partial_{\bar{z}_{2}}\alpha\,\bar{\gamma}\,g^{-1}_{z_{2}\bar{z}_{2}}.\] We now multiply this by the volume factor, which is \(v_{g}|dz_{1}|^{2}|dz_{2}|^{2}\), and integrate by parts. This gives \[v_{g}\langle\alpha,\bar{\partial}^{*}(\beta\,d\bar{z}_{1}+\gamma\,d\bar{z}_{2})\rangle=-\alpha\overline{\partial_{z_{1}}\left(g^{-1}_{z_{1}\bar{z}_{1}}v_{g}\beta\right)}-\alpha\overline{\partial_{z_{1}}\left(g^{-1}_{z_{1}\bar{z}_{2}}v_{g}\gamma\right)}-\alpha\overline{\partial_{z_{2}}\left(g^{-1}_{z_{2}\bar{z}_{1}}v_{g}\beta\right)}-\alpha\overline{\partial_{z_{2}}\left(g^{-1}_{z_{2}\bar{z}_{2}}v_{g}\gamma\right)},\] which is an identity that only holds modulo surface terms (which we assume to vanish). We can simplify this expression using the fact that the components of the inverse metric times the volume form give the components of the original metric. Doing so we get \[v_{g}\bar{\partial}^{*}(\beta\,d\bar{z}_{1}+\gamma\,d\bar{z}_{2})=-\partial_{z_{1}}\left(g_{z_{2}\bar{z}_{2}}\beta\right)+\partial_{z_{1}}\left(g_{z_{2}\bar{z}_{1}}\gamma\right)+\partial_{z_{2}}\left(g_{z_{1}\bar{z}_{2}}\beta\right)-\partial_{z_{2}}\left(g_{z_{1}\bar{z}_{1}}\gamma\right)\in\Lambda^{0}. \tag{3.13}\] The computation of \(\bar{\partial}^{*}\) on \(\Lambda^{0,2}\) is similar. 
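The simplification used in the last step is the statement that, in complex dimension two, the inverse of a Hermitian metric is its adjugate divided by \(v_{g}=\det g\), so inverse-metric components times \(v_{g}\) reproduce metric components up to signs. A minimal numerical illustration (plain Python; the matrix conventions are ours):

```python
# Sample positive Hermitian 2x2 metric, with g12 playing the role of g_{z1 zbar2}
g11, g22 = 2.0, 3.0
g12 = 0.4 + 0.7j
g21 = g12.conjugate()           # g_{z2 zbar1}
vg = (g11 * g22 - g12 * g21).real   # det g, real since the matrix is Hermitian

# inverse = adjugate / det:
inv11, inv12 = g22 / vg, -g12 / vg
inv21, inv22 = -g21 / vg, g11 / vg

# check that this really is the inverse: G * Ginv = identity
assert abs(g11 * inv11 + g12 * inv21 - 1) < 1e-12
assert abs(g11 * inv12 + g12 * inv22) < 1e-12
assert abs(g21 * inv11 + g22 * inv21) < 1e-12
assert abs(g21 * inv12 + g22 * inv22 - 1) < 1e-12

# the sign pattern used between the integrated-by-parts expression and (3.13):
assert abs(inv11 * vg - g22) < 1e-12    # g^{-1}_{11} v_g =  g_{22}
assert abs(inv12 * vg + g12) < 1e-12    # off-diagonal entries pick up a minus sign
```

This is the special feature of complex dimension two that makes (3.13) free of explicit inverse-metric components.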
We have \[\langle(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)d\bar{z}_{1}\wedge d\bar{z}_{2},\delta d\bar{z}_{1}\wedge d\bar{z}_{2}\rangle= \tag{3.14}\] \[g^{-1}((\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)(d\bar{z}_{1}\otimes d\bar{z}_{2}-d\bar{z}_{2}\otimes d\bar{z}_{1}),\bar{\delta}(dz_{1}\otimes dz_{2}))=\] \[(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)v_{g}^{-1}\bar{\delta}.\] We now multiply by \(v_{g}\) and integrate by parts to get \[v_{g}\langle\beta d\bar{z}_{1}+\gamma d\bar{z}_{2},\bar{\partial}^{*}(\delta d\bar{z}_{1}\wedge d\bar{z}_{2})\rangle=-\gamma\overline{\partial_{z_{1}}\delta}+\beta\overline{\partial_{z_{2}}\delta}. \tag{3.15}\] Since we want to rewrite \(\bar{\partial}^{*}(\delta d\bar{z}_{1}\wedge d\bar{z}_{2})\) in the form \[v_{g}\bar{\partial}^{*}(\delta d\bar{z}_{1}\wedge d\bar{z}_{2})=Ad\bar{z}_{1}+Bd\bar{z}_{2}, \tag{3.16}\] we need to match \[-\gamma\overline{\partial_{z_{1}}\delta}+\beta\overline{\partial_{z_{2}}\delta}=\beta\bar{A}g^{-1}_{z_{1}\bar{z}_{1}}+\beta\bar{B}g^{-1}_{z_{2}\bar{z}_{1}}+\gamma\bar{A}g^{-1}_{z_{1}\bar{z}_{2}}+\gamma\bar{B}g^{-1}_{z_{2}\bar{z}_{2}}, \tag{3.17}\] which gives \[A=g_{z_{1}\bar{z}_{1}}\partial_{z_{2}}\delta-g_{z_{2}\bar{z}_{1}}\partial_{z_{1}}\delta,\qquad B=g_{z_{1}\bar{z}_{2}}\partial_{z_{2}}\delta-g_{z_{2}\bar{z}_{2}}\partial_{z_{1}}\delta. \tag{3.18}\] Collecting the above results, we obtain the two chiral halves of the Dirac operator. Abusing the notation, we still denote by \(D:W^{+}\to W^{-}\) one of the two chiral parts, and by \(D^{\dagger}:W^{-}\to W^{+}\) the other one. 
Their action is \[D(\alpha+\delta d\bar{z}_{1}\wedge d\bar{z}_{2}) =\left(\partial_{\bar{z}_{1}}\alpha-v_{g}^{-1}g_{z_{1}\bar{z}_{1 }}\partial_{z_{2}}\delta+v_{g}^{-1}g_{z_{2}\bar{z}_{1}}\partial_{z_{1}}\delta \right)d\bar{z}_{1} \tag{3.19}\] \[+\left(\partial_{\bar{z}_{2}}\alpha-v_{g}^{-1}g_{z_{1}\bar{z}_{2} }\partial_{z_{2}}\delta+v_{g}^{-1}g_{z_{2}\bar{z}_{2}}\partial_{z_{1}}\delta \right)d\bar{z}_{2},\] \[D^{\dagger}(\beta d\bar{z}_{1}+\gamma d\bar{z}_{2})=(\partial_{\bar{z}_{1}} \gamma-\partial_{\bar{z}_{2}}\beta)d\bar{z}_{1}\wedge d\bar{z}_{2} \tag{3.20}\] \[+v_{g}^{-1}\partial_{z_{1}}\left(g_{z_{2}\bar{z}_{2}}\beta\right) -v_{g}^{-1}\partial_{z_{1}}\left(g_{z_{2}\bar{z}_{1}}\gamma\right)-v_{g}^{-1} \partial_{z_{2}}\left(g_{z_{1}\bar{z}_{2}}\beta\right)+v_{g}^{-1}\partial_{z_ {2}}\left(g_{z_{1}\bar{z}_{1}}\gamma\right).\] ### Spin-\(c\) Dirac operator on a Kahler manifold So far we have computed the Dirac operator only assuming that the metric is Hermitian. If the metric is also Kahler, thanks to the identities \[\partial_{z_{1}}g_{z_{2}\bar{z}_{1}}=\partial_{z_{2}}g_{z_{1}\bar{z}_{1}}, \qquad\partial_{z_{1}}g_{z_{2}\bar{z}_{2}}=\partial_{z_{2}}g_{z_{1}\bar{z}_{2 }}, \tag{3.21}\] the terms involving derivatives of the metric in (3.20) cancel among each other. As a result, we obtain the following simplified expression for the chiral Dirac operator on \(W^{-}\), \[D^{\dagger}(\beta d\bar{z}_{1}+\gamma d\bar{z}_{2})=(\partial_{ \bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)d\bar{z}_{1}\wedge d\bar{z}_{2} \tag{3.22}\] \[+v_{g}^{-1}g_{z_{2}\bar{z}_{2}}\partial_{z_{1}}\beta-v_{g}^{-1}g _{z_{2}\bar{z}_{1}}\partial_{z_{1}}\gamma-v_{g}^{-1}g_{z_{1}\bar{z}_{2}} \partial_{z_{2}}\beta+v_{g}^{-1}g_{z_{1}\bar{z}_{1}}\partial_{z_{2}}\gamma.\] ### Practical way of computing the Dirac operator on a Kahler manifold The computations above illustrate that there is a very simple method for computing the Dirac operator on a Kahler manifold. 
The method consists in taking any differential form in question, and then first computing its **full** exterior derivative, including the terms not belonging to \(\Lambda^{0,k+1}\). One then uses the metric pairing to map the latter terms from \(\Lambda^{1,k}\) to \(\Lambda^{0,k-1}\). Let us see how this works in practice. We start with the spinor in \(W^{-}=\Lambda^{0,1}\). We have \[\mathrm{d}(\beta\mathrm{d}\bar{z}_{1}+\gamma\mathrm{d}\bar{z}_{2})=\partial_{z_{1}}\beta\mathrm{d}z_{1}\wedge\mathrm{d}\bar{z}_{1}+\partial_{z_{2}}\beta\mathrm{d}z_{2}\wedge\mathrm{d}\bar{z}_{1}+\partial_{z_{1}}\gamma\mathrm{d}z_{1}\wedge\mathrm{d}\bar{z}_{2}+\partial_{z_{2}}\gamma\mathrm{d}z_{2}\wedge\mathrm{d}\bar{z}_{2} \tag{3.23}\] \[+(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}.\] The second line here is in \(\Lambda^{0,2}\), and so a spinor in \(W^{+}\), but the first line lies in \(\Lambda^{1,1}\) and is not a spinor. However, we can use the metric pairing to map it into \(\Lambda^{0}\). 
Denoting this projection by \(g^{-1}\) we have \[g^{-1}\left(\mathrm{d}(\beta\mathrm{d}\bar{z}_{1}+\gamma\mathrm{d}\bar{z}_{2})\right)=g_{z_{1}\bar{z}_{1}}^{-1}\partial_{z_{1}}\beta+g_{z_{2}\bar{z}_{1}}^{-1}\partial_{z_{2}}\beta+g_{z_{1}\bar{z}_{2}}^{-1}\partial_{z_{1}}\gamma+g_{z_{2}\bar{z}_{2}}^{-1}\partial_{z_{2}}\gamma \tag{3.24}\] \[+(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}.\] Taking into account the relation between the metric and its inverse we can write this as \[g^{-1}\left(\mathrm{d}(\beta\mathrm{d}\bar{z}_{1}+\gamma\mathrm{d}\bar{z}_{2})\right)=v_{g}^{-1}g_{z_{2}\bar{z}_{2}}\partial_{z_{1}}\beta-v_{g}^{-1}g_{z_{1}\bar{z}_{2}}\partial_{z_{2}}\beta-v_{g}^{-1}g_{z_{2}\bar{z}_{1}}\partial_{z_{1}}\gamma+v_{g}^{-1}g_{z_{1}\bar{z}_{1}}\partial_{z_{2}}\gamma \tag{3.25}\] \[+(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta)\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2},\] which is the correct expression (3.22) for the chiral Dirac operator on \(W^{-}\). Completely analogous computations give the other chiral Dirac operator: we first apply the full exterior derivative to \(\Lambda^{0,2}\) \[\mathrm{d}(\delta\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2})=\partial_{z_{1}}\delta\mathrm{d}z_{1}\wedge\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}+\partial_{z_{2}}\delta\mathrm{d}z_{2}\wedge\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}. \tag{3.26}\] We now do all possible metric pairings to map this into \(\Lambda^{0,1}\). We have \[g^{-1}\left(\mathrm{d}(\delta\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2})\right)=g_{z_{1}\bar{z}_{1}}^{-1}\partial_{z_{1}}\delta\,\mathrm{d}\bar{z}_{2}-g_{z_{1}\bar{z}_{2}}^{-1}\partial_{z_{1}}\delta\,\mathrm{d}\bar{z}_{1}+g_{z_{2}\bar{z}_{1}}^{-1}\partial_{z_{2}}\delta\,\mathrm{d}\bar{z}_{2}-g_{z_{2}\bar{z}_{2}}^{-1}\partial_{z_{2}}\delta\,\mathrm{d}\bar{z}_{1}. 
\tag{3.27}\] Again using the relation between the metric and its inverse and collecting terms we have \[g^{-1}\left(\mathrm{d}(\delta\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2})\right)=v_{g}^{-1}(g_{z_{2}\bar{z}_{1}}\partial_{z_{1}}\delta-g_{z_{1}\bar{z}_{1}}\partial_{z_{2}}\delta)\mathrm{d}\bar{z}_{1}+v_{g}^{-1}(g_{z_{2}\bar{z}_{2}}\partial_{z_{1}}\delta-g_{z_{1}\bar{z}_{2}}\partial_{z_{2}}\delta)\mathrm{d}\bar{z}_{2},\] which is exactly the \(\delta\)-dependent part of (3.19). All in all, this is a very simple way of computing the spin-\(c\) Dirac operator for a Kahler metric, which involves nothing more complicated than taking the exterior derivative, and then doing metric contractions. It is in order to have such a simple recipe for computing \(D\) that we have taken the minus sign in our definition \(D=\overline{\partial}-\overline{\partial}^{*}\) of the Dirac operator. To summarise, the spinor \(D\sigma\) can be obtained by first calculating \(\mathrm{d}\sigma\in\Lambda^{0,q+1}\oplus\Lambda^{1,q}\). The \(\Lambda^{0,q+1}\) component corresponds to \(\overline{\partial}\sigma\) while contracting with the (inverse) metric maps the \((1,q)\) component to \(-\overline{\partial}^{*}\sigma\in\Lambda^{0,q-1}\). Similarly, when considering the twisted operator \[D_{\mathcal{A}}=D+\mathcal{A}, \tag{3.28}\] the action of \(\mathcal{A}\) is given by (3.5) so we first calculate \(\mathcal{A}\wedge\sigma\in\Lambda^{1,q}\oplus\Lambda^{0,q+1}\) and then contract the \((1,q)\) part with the inverse metric to obtain a form of degree \((0,q-1)\). This gives very simple computational rules, allowing one to compute the twisted Dirac operator with minimal effort. In particular, we never need to compute the spin connection for EH. Nor do we ever need to compute the derivatives of the metric components. This is, of course, part of the magic of Kahler geometry. 
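For the EH metric (2.25) the contractions require the inverse-metric components, which are the coefficients appearing in (3.33) below. A numerical sketch checking that they indeed invert the metric (plain Python; the index conventions are ours):

```python
import math

kappa = 1.3
z = [0.8 + 0.3j, -0.5 + 1.1j]       # sample point on C^2 minus the origin
s = abs(z[0])**2 + abs(z[1])**2
F = math.sqrt(1.0 + kappa / s**2)

# metric components g_{z_m zbar_n} read off from (2.25)/(2.66)
g = [[F * (m == n) + ((1.0/F - F) / s) * z[m].conjugate() * z[n]
      for n in range(2)] for m in range(2)]

# claimed inverse components, e.g.
# g^{-1}_{z1 zbar1} = (F|z1|^2 + F^{-1}|z2|^2)/s, g^{-1}_{z2 zbar1} = (F - F^{-1}) zbar1 z2 / s
ginv = [[(1.0/F) * (m == n) + ((F - 1.0/F) / s) * z[m] * z[n].conjugate()
         for n in range(2)] for m in range(2)]

# with our conventions the matrices contract to the identity as
# sum_n g_{z_m zbar_n} g^{-1}_{z_p zbar_n} = delta_{mp}
for m in range(2):
    for p in range(2):
        val = sum(g[m][n] * ginv[p][n] for n in range(2))
        assert abs(val - (1.0 if m == p else 0.0)) < 1e-12
```

The check passes at any sample point, reflecting the eigenvalue structure \(F^{-1}\), \(F\) of the metric along and orthogonal to the \(z\) direction.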
### Zero modes of the twisted Dirac operator on EH We now proceed with the calculation using the EH metric in the form (2.25) and the connection (2.49). Write \[W^{+}=\mathbb{C}\oplus\Lambda^{0,2},\quad W^{-}=\Lambda^{0,1} \tag{3.29}\] for the even and odd part of \(\Lambda^{0,\bullet}\). A generic spinor \(\sigma=\sigma_{+}+\sigma_{-}\), \(\sigma_{\pm}\in W^{\pm}\), has the form \[\sigma_{+}=\alpha+\delta\,\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2},\quad\sigma_{-}=\beta\,\mathrm{d}\bar{z}_{1}+\gamma\,\mathrm{d}\bar{z}_{2}, \tag{3.30}\] with \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) functions of \(z_{i}\), \(\bar{z}_{i}\), \(i=1,2\). However, we are twisting by a connection whose curvature is anti-self-dual, see (2.42). Such an anti-self-dual curvature can act non-trivially by Clifford multiplication in (3.2) only on spinors of one chirality; it can be checked that these are the spinors in \(W^{-}\). This means that there are no non-trivial \(L^{2}\) harmonic spinors in \(W^{+}\). Hence we take \[\sigma=\sigma_{-}=\beta\,\mathrm{d}\bar{z}_{1}+\gamma\,\mathrm{d}\bar{z}_{2}. 
\tag{3.31}\] We calculate \[\begin{split}\mathrm{d}\sigma_{-}&=\partial_{z_{1} }\beta\mathrm{d}z_{1}\wedge\mathrm{d}\bar{z}_{1}+\partial_{z_{2}}\beta \mathrm{d}z_{2}\wedge\mathrm{d}\bar{z}_{1}+\partial_{z_{1}}\gamma\mathrm{d}z _{1}\wedge\mathrm{d}\bar{z}_{2}+\partial_{z_{2}}\gamma\mathrm{d}z_{2}\wedge \mathrm{d}\bar{z}_{2}\\ &+(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta) \mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}.\end{split} \tag{3.32}\] The second line belongs to \(W^{+}\) while, as discussed, we need to contract the first line with the inverse metric, getting \[D\sigma_{-}=g_{z_{1}\bar{z}_{1}}^{-1}\partial_{z_{1}}\beta+g_{z_{2}\bar{z}_{ 1}}^{-1}\partial_{z_{2}}\beta+g_{z_{1}\bar{z}_{2}}^{-1}\partial_{z_{1}}\gamma +g_{z_{2}\bar{z}_{2}}^{-1}\partial_{z_{2}}\gamma+(\partial_{\bar{z}_{1}} \gamma-\partial_{\bar{z}_{2}}\beta)\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{ z}_{2}.\] Substituting (2.29) we obtain \[\begin{split} D\sigma_{-}&=\frac{1}{s}(F|z_{1}|^{2 }+F^{-1}|z_{2}|^{2})\partial_{z_{1}}\beta+\frac{1}{s}(F-F^{-1})(\bar{z}_{1}z_{ 2}\partial_{z_{2}}\beta+\bar{z}_{2}z_{1}\partial_{z_{1}}\gamma)+\frac{1}{s}(F |z_{2}|^{2}+F^{-1}|z_{1}|^{2})\partial_{z_{2}}\gamma\\ &+(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta) \mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2},\end{split} \tag{3.33}\] which can be rewritten in the form \[D\sigma_{-}=\left(\frac{F-F^{-1}}{s}\right)(z_{1}\partial_{z_{1}}+z_{2} \partial_{z_{2}})(\bar{z}_{1}\beta+\bar{z}_{2}\gamma)+F^{-1}(\partial_{z_{1}} \beta+\partial_{z_{2}}\gamma)+(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z} _{2}}\beta)\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}.\] Consider now the action of the connection (2.49). We calculate \[\mathcal{A}\cdot\sigma_{-}=\frac{\ell\sqrt{\kappa}}{2Fs^{2}}(z_{1}\gamma-z_{2 }\beta)\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}-\frac{\ell\sqrt{\kappa }}{2s^{2}}(\bar{z}_{1}\beta+\bar{z}_{2}\gamma). 
\tag{3.34}\] Putting it all together, the twisted Dirac equation is \[\begin{split} D_{\mathcal{A}}\sigma_{-}&=\left(\left(\frac{F-F^{-1}}{s}\right)(z_{1}\partial_{z_{1}}+z_{2}\partial_{z_{2}})-\frac{\ell\sqrt{\kappa}}{2s^{2}}\right)(\bar{z}_{1}\beta+\bar{z}_{2}\gamma)+F^{-1}\left(\partial_{z_{1}}\beta+\partial_{z_{2}}\gamma\right)\\ &+\left(\partial_{\bar{z}_{1}}\gamma-\partial_{\bar{z}_{2}}\beta+\frac{\ell\sqrt{\kappa}}{2Fs^{2}}(z_{1}\gamma-z_{2}\beta)\right)\mathrm{d}\bar{z}_{1}\wedge\mathrm{d}\bar{z}_{2}=0.\end{split} \tag{3.35}\] We take the ansatz, which solves the \(\Lambda^{0,2}\) part of (3.35), \[\beta=z_{1}^{N-m+1}z_{2}^{N+m}h(s),\qquad\gamma=z_{1}^{N-m}z_{2}^{N+m+1}h(s) \tag{3.36}\] for \(h(s)\) a function of \(s\) to be determined below. Here \(N\geq 0\) is such that \(2N\in\mathbb{Z}\), \(m\in\{-N,-N+1,\ldots,N\}\). The reason for this particular ansatz is that, since the left \(SU(2)\) action on \((z_{1},z_{2})\) is an isometry, spinors can be decomposed into irreducible \(SU(2)\) representations. By taking \(\beta\), \(\gamma\) as in (3.36) we have \[\sigma_{-}=hz_{1}^{N-m}z_{2}^{N+m}(z_{1}\mathrm{d}\bar{z}_{1}+z_{2}\mathrm{d}\bar{z}_{2}), \tag{3.37}\] where \(h(z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2})\) is \(SU(2)\)-invariant and the space of homogeneous polynomials in \(z_{1},z_{2}\) of degree \(2N\) gives the \(SU(2)\) irrep of dimension \(2N+1\). The \(\Lambda^{0}\) part of (3.35) now reduces to \[F^{2}(2Nh+(hs)^{\prime})+h=\frac{\ell}{2}\sqrt{\kappa}\,\frac{hF}{s}, \tag{3.38}\] which is solved by \[h=\left(\frac{1}{Fs^{2N+2}}\right)\frac{1}{f^{\ell}}, \tag{3.39}\] where \[f=\sqrt{1+\frac{\kappa}{s^{2}}}+\frac{\sqrt{\kappa}}{s} \tag{3.40}\] is the same function as (2.59). Thus, we have found the following zero modes \[\sigma_{-}=\frac{z_{1}^{N-m}z_{2}^{N+m}}{Fs^{2N+2}f^{\ell}}(z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2}). \tag{3.41}\] ### Normalisability Let us discuss normalisability. 
Using (2.29) we calculate \[|z_{1}{\rm d}\bar{z}_{1}+z_{2}{\rm d}\bar{z}_{2}|^{2}=|z_{1}|^{2}\frac{F|z_{1}|^{2}+F^{-1}|z_{2}|^{2}}{s}+|z_{2}|^{2}\frac{F|z_{2}|^{2}+F^{-1}|z_{1}|^{2}}{s}+2\frac{F-F^{-1}}{s}|z_{1}|^{2}|z_{2}|^{2}=Fs, \tag{3.42}\] hence \[|\sigma_{-}|^{2}=\frac{|z_{1}|^{2(N-m)}|z_{2}|^{2(N+m)}}{Fs^{4N+3}f^{2\ell}}. \tag{3.43}\] As discussed previously, EH asymptotically approaches the flat metric on \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) with radial coordinate \(r=\sqrt{s}=\sqrt{|z_{1}|^{2}+|z_{2}|^{2}}\) and volume element \(\sim r^{3}{\rm d}r\). We need to check the \(L^{2}\) condition for small and large \(r\). For large \(r\), \(f\sim F\sim 1\), hence \[|\sigma_{-}|^{2}r^{3}\sim\frac{1}{r^{4N+3}} \tag{3.44}\] and the \(L^{2}\) condition gives \(N>-1/2\), which is satisfied by any non-negative half-integer \(N\). For small \(r\), \(f\sim F\sim r^{-2}\), hence \[|\sigma_{-}|^{2}r^{3}\sim r^{2\ell-4N-1}, \tag{3.45}\] so a normalisable spinor needs to satisfy \(2N<\ell\). In conclusion, harmonic spinors belong to \(SU(2)\) representations of dimension \(2N+1\). A spinor is normalisable if and only if \[1\leq 2N+1\leq\ell \tag{3.46}\] where \(\ell\in\mathbb{Z}\) is the flux of the curvature \(2\)-form \({\rm d}\mathcal{A}\). Here we have assumed \(\ell>0\); the case \(\ell<0\) can be treated similarly. As we already knew, in the untwisted case \(\ell=0\) there are no normalisable zero modes. For \(\ell=1\) we have only the singlet zero mode, for \(\ell=2\) we have both the singlet and the doublet, and so on, for a total of \(\ell(\ell+1)/2\) zero modes for general \(\ell\). ## 4. The general case ### Calabi's metric on \(\mathcal{O}(-n-1)\) We now apply Calabi's construction to the canonical bundle \(K=\mathcal{O}(-n-1)\to\mathbb{C}P^{n}\), with the base \(\mathbb{C}P^{n}\) equipped with the Fubini-Study metric. 
Applying the construction of Appendix B.2 to \(M=\mathbb{C}P^{n}\) we get, for \(\lambda\neq 0\), \(\kappa>0\) arbitrary constants, \[\omega =\lambda u\,\omega_{\mathbb{C}P^{n}}+i(n+1)(\lambda u)^{-n} \theta\wedge\bar{\theta}, \tag{4.1}\] \[g =\lambda u\,g_{\mathbb{C}P^{n}}+2(n+1)(\lambda u)^{-n}|\theta|^{2}. \tag{4.2}\] Here \[u=\left(c|\zeta|^{2}+\kappa\right)^{\frac{1}{n+1}}, \tag{4.3}\] and, for \(\zeta\) a complex coordinate on the fibres, \[\theta={\rm d}\zeta+\alpha\zeta \tag{4.4}\] with \(\alpha\) the Chern connection on \(K\). Its curvature \(\mathrm{d}\alpha\) satisfies \[-i\mathrm{d}\alpha=\frac{s_{\mathbb{C}P^{n}}}{2n}\omega_{\mathbb{C}P^{n}}, \tag{4.5}\] where \(s_{\mathbb{C}P^{n}}\) is the scalar curvature of the FS metric. The constants \(c\) and \(\lambda\) are related by \[c=\frac{s_{\mathbb{C}P^{n}}(n+1)^{2}}{2n\lambda^{n+1}}. \tag{4.6}\] Let \((w_{i})\), \(i=1,\ldots,n\), be inhomogeneous coordinates on \(\mathbb{C}P^{n}\). With respect to the local Kahler potential \[\mathcal{K}=\frac{C}{2}\log(1+|w|^{2}), \tag{4.7}\] where \(|w|^{2}=|w_{1}|^{2}+\cdots+|w_{n}|^{2}\), \(C\in\mathbb{R}^{\times}\) is some constant, the FS metric and Kahler form take the form \((g_{\mathbb{C}P^{n}})_{\mu\bar{\nu}}=\partial_{\mu}\partial_{\bar{\nu}} \mathcal{K}\), \((\omega_{\mathbb{C}P^{n}})_{\mu\bar{\nu}}=i(g_{\mathbb{C}P^{n}})_{\mu\bar{ \nu}}\). The scalar curvature is related to \(C\) by \[s_{\mathbb{C}P^{n}}=\frac{4n(n+1)}{C}. 
\tag{4.8}\] Since \[\partial\mathcal{K}=\frac{C}{2}\frac{\bar{w}\,\mathrm{d}w}{1+|w|^{2}},\quad\overline{\partial}\mathcal{K}=\overline{\partial\mathcal{K}}, \tag{4.9}\] the Kahler form of \(\mathbb{C}P^{n}\) can be written \[\omega_{\mathbb{C}P^{n}}=\frac{i}{2}\left(\partial\overline{\partial}\mathcal{K}-\overline{\partial}\partial\mathcal{K}\right)=\frac{i}{2}\mathrm{d}\left(\overline{\partial}\mathcal{K}-\partial\mathcal{K}\right), \tag{4.10}\] so that \[\mathrm{d}\alpha=-\left(\frac{n+1}{C}\right)\mathrm{d}(\overline{\partial}\mathcal{K}-\partial\mathcal{K}). \tag{4.11}\] Hence up to the addition of a closed form \[\alpha=2i\left(\frac{n+1}{2}\right)\frac{\mathrm{Im}(\bar{w}\mathrm{d}w)}{1+|w|^{2}}. \tag{4.12}\] Note that \(\alpha\) is purely imaginary. As we did for \(n=1\) we write \[\alpha=2i\,a,\quad a=\frac{1}{2i}\left(\frac{n+1}{2}\right)\left(\frac{\bar{w}\mathrm{d}w-w\mathrm{d}\bar{w}}{1+|w|^{2}}\right). \tag{4.13}\] ### Rewriting in terms of "symmetrical" coordinates Introducing the homogeneous coordinates \((z_{i})\), \(i=1,\ldots,n+1\), related to \((w_{i})\) by \(w_{i}=\frac{z_{i}}{z_{n+1}}\), the FS metric on \(\mathbb{C}P^{n}\) can also be written as (the pullback along a holomorphic section of) \[g_{\mathbb{C}P^{n}}=\frac{1}{s^{2}}\sum_{1\leq i<j\leq n+1}|z_{i}\mathrm{d}z_{j}-z_{j}\mathrm{d}z_{i}|^{2}, \tag{4.14}\] where \[s=|z_{1}|^{2}+\cdots+|z_{n+1}|^{2}. \tag{4.15}\] We will also need the analogue of the identity (2.68), which for general \(n\) reads \[s|\mathrm{d}z|^{2}=|\bar{z}\mathrm{d}z|^{2}+\sum_{1\leq i<j\leq n+1}|z_{i}\mathrm{d}z_{j}-z_{j}\mathrm{d}z_{i}|^{2}. \tag{4.16}\] In analogy with the \(n=1\) case we introduce the coordinates \[w_{i}=\frac{z_{i}}{z_{n+1}},\ i=1,\ldots,n,\qquad\zeta=s^{\frac{n+1}{2}}\left(\frac{z_{n+1}}{|z_{n+1}|}\right)^{n+1}, \tag{4.17}\] so that \[s=|\zeta|^{\frac{2}{n+1}}, \tag{4.18}\] 
with inverse \[z_{i}=w_{i}\frac{\zeta^{\frac{1}{n+1}}}{\sqrt{1+|w|^{2}}},\ i=1,\ldots,n,\qquad z_{n+1}=\frac{\zeta^{\frac{1}{n+1}}}{\sqrt{1+|w|^{2}}}. \tag{4.19}\] One calculates \[\bar{z}\mathrm{d}z=|\zeta|^{\frac{2}{n+1}}\frac{1}{2}\left(\frac{\bar{w}\mathrm{d}w-w\mathrm{d}\bar{w}}{1+|w|^{2}}\right)+\left(\frac{1}{n+1}\right)\frac{\bar{\zeta}\mathrm{d}\zeta}{|\zeta|^{\frac{2n}{n+1}}}, \tag{4.20}\] so that, using (4.13), we have \[(n+1)\bar{z}\mathrm{d}z=\frac{\bar{\zeta}}{|\zeta|^{\frac{2n}{n+1}}}(\mathrm{d}\zeta+\alpha\zeta)=\frac{\bar{\zeta}}{|\zeta|^{\frac{2n}{n+1}}}\theta. \tag{4.21}\] It follows \[|\theta|^{2}=(n+1)^{2}s^{n-1}|\bar{z}\mathrm{d}z|^{2}. \tag{4.22}\] Note that, since \(\alpha\) is purely imaginary, taking the real and imaginary part of (4.21) we get \[\mathrm{d}s=\mathrm{d}(|\zeta|^{2})^{\frac{1}{n+1}}, \tag{4.23}\] in agreement with (4.18), and \[(n+1)\operatorname{Im}(\bar{z}\mathrm{d}z)=|\zeta|^{\frac{2}{n+1}}\alpha+\frac{\operatorname{Im}(\bar{\zeta}\mathrm{d}\zeta)}{|\zeta|^{\frac{2n}{n+1}}}. \tag{4.24}\] We can now rewrite (4.2) in terms of the coordinates \((z_{i})\). Using (4.16) we have \[\begin{split} g&=\lambda u\,g_{\mathbb{C}P^{n}}+2(n+1)(\lambda u)^{-n}|\theta|^{2}=\frac{2\lambda u}{s^{2}}\sum_{1\leq i<j\leq n+1}|z_{i}\mathrm{d}z_{j}-z_{j}\mathrm{d}z_{i}|^{2}+2(n+1)(\lambda u)^{-n}|\theta|^{2}\\ &=\frac{2\lambda u}{s^{2}}\left(s|\mathrm{d}z|^{2}-|\bar{z}\mathrm{d}z|^{2}\right)+2(n+1)(\lambda u)^{-n}|\theta|^{2}.\end{split} \tag{4.25}\] We now rescale \(\zeta\to\frac{\zeta}{(n+1)^{3/2}}\), so that by (4.22), \[|\theta|^{2}\to\frac{|\theta|^{2}}{(n+1)^{3}}=\frac{s^{n-1}}{n+1}|\bar{z}\mathrm{d}z|^{2}, \tag{4.26}\] hence \[g=2\left[\frac{\lambda u}{s}|\mathrm{d}z|^{2}+\left(\frac{s^{n-1}}{(\lambda u)^{n}}-\frac{\lambda u}{s^{2}}\right)|\bar{z}\mathrm{d}z|^{2}\right]. \tag{4.27}\] We are going to drop the overall factor of 2. 
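The identity (4.16) is a Lagrange-type identity, polynomial in \(z_i\), \(\mathrm{d}z_i\) and their conjugates, so it can be verified mechanically. The following sympy sketch checks it for \(n=2\), modelling \(\mathrm{d}z_i\) by an arbitrary vector \(v_i\) and the conjugates by independent symbols (a verification aside, not part of the derivation):

```python
import sympy as sp

N = 3  # N = n + 1 with n = 2; spot-check only, the identity is polynomial for any N
z  = sp.symbols(f'z1:{N+1}')
zb = sp.symbols(f'zb1:{N+1}')  # stands in for the conjugates \bar z_i
v  = sp.symbols(f'v1:{N+1}')   # stands in for dz_i
vb = sp.symbols(f'vb1:{N+1}')  # stands in for \bar dz_i

s     = sum(z[i]*zb[i] for i in range(N))          # |z|^2
vv    = sum(v[i]*vb[i] for i in range(N))          # |dz|^2
zbv   = sum(zb[i]*v[i] for i in range(N))          # \bar z . dz
zvb   = sum(z[i]*vb[i] for i in range(N))          # conjugate of the above
cross = sum((z[i]*v[j] - z[j]*v[i]) * (zb[i]*vb[j] - zb[j]*vb[i])
            for i in range(N) for j in range(i + 1, N))

# (4.16): s |dz|^2 = |z̄ . dz|^2 + sum_{i<j} |z_i dz_j - z_j dz_i|^2
assert sp.expand(s*vv - zbv*zvb - cross) == 0
```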
Using the relation \(|\zeta|=s^{\frac{n+1}{2}}\) to rewrite \(u\) as a function of \(s\) we obtain \[\frac{\lambda u}{s}=\lambda c^{\frac{1}{n+1}}\left(1+\frac{\kappa}{cs^{n+1}}\right)^{\frac{1}{n+1}}. \tag{4.28}\] Rescaling \(\kappa\to c\kappa\) and choosing \(c=\lambda^{-(n+1)}\) we define \[F(s)=\frac{\lambda u}{s}=\left(1+\frac{\kappa}{s^{n+1}}\right)^{\frac{1}{n+1}}, \tag{4.29}\] so that \[F^{\prime}=\frac{1-F^{n+1}}{F^{n}s}=\frac{s^{n-1}}{(\lambda u)^{n}}-\frac{\lambda u}{s^{2}}. \tag{4.30}\] Therefore, the Calabi metric on \(\mathcal{O}(-n-1)\) is given in terms of the coordinates \((z_{i})\) by \[g=F|\mathrm{d}z|^{2}+F^{\prime}\,|\bar{z}\mathrm{d}z|^{2}. \tag{4.31}\] Since \(F\) only depends on \(s\), it is clear that the metric is invariant under the left action of \(U(n+1)\) on \(\mathbb{C}^{n+1}\). The Kahler form corresponding to (4.31) is \[\omega=2i\left(F\mathrm{d}z\wedge\mathrm{d}\bar{z}+F^{\prime}\bar{z}\mathrm{d}z\wedge z\mathrm{d}\bar{z}\right)=2i\mathrm{d}\left(Fz\mathrm{d}\bar{z}\right). \tag{4.32}\] For small \(s\), \(F\sim\kappa^{\frac{1}{n+1}}s^{-1}\) and \(\mathrm{Re}(Fz{\mathrm{d}}\bar{z})\sim\frac{\kappa^{\frac{1}{n+1}}}{2}{\mathrm{d}}\log s\), so \(Fz{\mathrm{d}}\bar{z}\) is not well-defined for \(s=0\) and \(\omega\) is not exact. Note that for \(n=1\) (4.31) reduces to \[g=F|{\mathrm{d}}z|^{2}+\frac{1}{s}\left(\frac{1}{F}-F\right)|\bar{z}{\mathrm{d}}z|^{2},\quad F=\sqrt{1+\frac{\kappa}{s^{2}}} \tag{4.33}\] and we recover (2.66). The function \(F\) satisfies the identity \[F^{\prime}=\frac{1}{s}\left(\frac{1}{F^{n}}-F\right)\quad\Leftrightarrow\quad F^{n}(sF)^{\prime}=1. \tag{4.34}\] In components \[g_{\mu\bar{\nu}}=\partial_{\mu}(Fz_{\nu})=F^{\prime}\bar{z}_{\mu}z_{\nu}+F\delta_{\mu\nu}, \tag{4.35}\] with determinant \[\det(g_{\mu\bar{\nu}})=F^{n}(Fs)^{\prime}=1, \tag{4.36}\] as it should be since the metric is Ricci-flat. The inverse of the matrix \(g_{\mu\bar{\nu}}\) has components \[g^{\rho\bar{\nu}}=\frac{\delta^{\rho\nu}}{F}+\frac{\kappa}{s^{n+2}F}\bar{z}^{\rho}z^{\nu}. 
\tag{4.37}\] ### \(U(1)\) connection with \(L^{2}\) harmonic curvature The canonical spin-\(c\) structure on \(K\) admits no non-trivial \(L^{2}\) harmonic spinor, hence to get non-trivial zero modes we need to twist by a line bundle equipped with a \(U(1)\) connection \(\mathcal{A}\). The curvature \({\rm d}\mathcal{A}\) is a purely imaginary form of degree \((1,1)\) which we also want to be \(L^{2}\) harmonic. The space of \(L^{2}\) harmonic forms on \(K\) is \(1\)-dimensional. In fact, in the setting of [5], the Calabi metric on \(K\) is a scattering metric, with \(K\) viewed as a fibration with trivial fibre and \(X=\overline{L}\). By Theorem 1A of [5], the space \(L^{2}\mathcal{H}^{k}\) of \(L^{2}\) harmonic \(k\)-forms is \[L^{2}\mathcal{H}^{k}(K)=\begin{cases}H^{k}(K,\partial K)&\text{if }k<n+1,\\ \mathrm{Im}(H^{k}(K,\partial K)\to H^{k}(K))&\text{if }k=n+1,\\ H^{k}(K)&\text{if }k>n+1,\end{cases} \tag{4.38}\] where \(H^{k}(K,\partial K)\to H^{k}(K)\) is the map induced by inclusion. We have \[H^{k}(K)\simeq H^{k}(\mathbb{C}P^{n})=\begin{cases}\mathbb{R}&\text{if }k=0,2,4,\dots,2n,\\ 0&\text{otherwise},\end{cases} \tag{4.39}\] and, using Poincare duality, \[H^{k}(K,\partial K)\simeq H^{k}_{c}(K)\simeq H^{2n+2-k}(K)=\begin{cases}\mathbb{R}&\text{if }k=2,4,\dots,2n+2,\\ 0&\text{otherwise}.\end{cases} \tag{4.40}\] Finally, the map \[\iota:H^{n+1}_{c}(K)\to H^{n+1}(K) \tag{4.41}\] is the zero map for \(n\) even, as \(H^{n+1}_{c}(K)=0\), and an isomorphism for \(n\) odd. In conclusion \[L^{2}\mathcal{H}^{k}(K)=\begin{cases}\mathbb{R}&\text{if }k=2,4,\dots,2n,\\ 0&\text{otherwise}.\end{cases} \tag{4.42}\] In particular, \(L^{2}\mathcal{H}^{2}(K)\) is \(1\)-dimensional. We now show that we can write a generator \(\tilde{\omega}\) of \(L^{2}\mathcal{H}^{2}(K)\) in the form \(\tilde{\omega}=2i{\rm d}\beta\) for \(\beta\in\Lambda^{0,1}(K)\) a Dirac zero mode. 
This extends the phenomenon observed in the case of the Calabi metric for \(\mathbb{C}\mathbb{P}^{1}\), where we had \(\tilde{\omega}=2i{\rm d}\beta\) with \(\beta\) given by (2.42). Furthermore, the explicit form (3.41) of the zero modes of the twisted Dirac operator shows that \(\beta\) is the \(N=0\) zero mode of the untwisted Dirac operator, something which also holds for general \(n\). By the local \(\partial\overline{\partial}\) lemma we can write \(\tilde{\omega}=2i\partial\overline{\partial}\phi\) for some locally defined real function \(\phi\). Set \[\beta=\overline{\partial}\phi. \tag{4.43}\] Clearly \(\overline{\partial}\beta=0\), so \(\beta\) is a Dirac zero mode provided that \[0=D\beta=-\overline{\partial}^{\ast}\beta=-\overline{\partial}^{\ast}\overline{\partial}\phi=g^{\mu\bar{\nu}}\partial_{\mu}\partial_{\bar{\nu}}\phi=\triangle\phi. \tag{4.44}\] Take \(\phi\) to be a real function of \(s\) and set \(\chi=\mathrm{d}\phi/\mathrm{d}s\), so that \[\beta=\chi(s)\overline{\partial}s. \tag{4.45}\] We have \(\partial\beta=\chi^{\prime}\partial s\wedge\overline{\partial}s+\chi\partial\overline{\partial}s\), so contracting with the inverse metric we get \[-\overline{\partial}^{\ast}\beta=g^{\nu\bar{\mu}}\left(\chi^{\prime}\bar{z}_{\mu}z_{\nu}+\chi\delta_{\nu\mu}\right). \tag{4.46}\] Using (4.37) we calculate \[\mathrm{Tr}(g^{-1})=\frac{1}{F}\left(n+1+\frac{\kappa}{s^{n+1}}\right)=\frac{n}{F}+F^{n},\qquad\bar{z}_{\mu}z_{\nu}g^{\nu\bar{\mu}}=\frac{1}{F}\left[s+\frac{\kappa}{s^{n}}\right]=sF^{n}. \tag{4.47}\] Therefore, \(\beta\) is a zero mode if \[\chi^{\prime}\left[s+\frac{\kappa}{s^{n}}\right]+\chi\left(n+1+\frac{\kappa}{s^{n+1}}\right)=0, \tag{4.48}\] which integrates to \[\chi=\frac{1}{s^{n+1}\left(1+\frac{\kappa}{s^{n+1}}\right)^{\frac{n}{n+1}}}=\frac{1}{s^{n+1}F^{n}}. 
\tag{4.49}\] Thus \[\beta =\chi(s)z\mathrm{d}\bar{z}=\frac{z\mathrm{d}\bar{z}}{s^{n+1}F^{n}}, \tag{4.50}\] \[\tilde{\omega} =2i\mathrm{d}\left(\frac{z\mathrm{d}\bar{z}}{s^{n+1}F^{n}}\right). \tag{4.51}\] Note that for \(n=1\) we recover (2.42). The 2-form \(\tilde{\omega}=2i\mathrm{d}\beta\) has components \[\tilde{\omega}_{\mu\bar{\mu}}=-2i\frac{n}{s^{n+1}F^{2n+1}},\qquad\tilde{\omega}_{\mu\bar{\nu}}=-2i\left(1+\frac{n}{F^{n+1}}\right)\frac{z_{\mu}\bar{z}_{\nu}}{s^{n+2}F^{2n+1}},\ \mu\neq\nu. \tag{4.52}\] One can see that while \(\beta\) is not defined for \(s=0\), \(\tilde{\omega}\) is a globally defined 2-form. To check the \(L^{2}\) condition we need to integrate \(|\tilde{\omega}|^{2}\) with respect to the volume element \(r^{2n+1}\,\mathrm{d}r\), where \(r=\sqrt{s}\). For large \(r\) we have \(F\sim 1\), \(g^{\rho\bar{\nu}}\sim\delta^{\rho\nu}\), \(|\tilde{\omega}|^{2}\sim r^{-4(n+1)}\), hence \(|\tilde{\omega}|^{2}\) is integrable. On the other hand \(\beta\) is not \(L^{2}\). In fact, since \[(\partial_{z_{\mu}})^{\flat}=F^{\prime}\bar{z}_{\mu}z_{\nu}\mathrm{d}\bar{z}^{\nu}+F\mathrm{d}\bar{z}^{\mu}\quad\Rightarrow\quad z^{\mu}(\partial_{z_{\mu}})^{\flat}=\frac{z_{\nu}\mathrm{d}\bar{z}^{\nu}}{F^{n}}, \tag{4.53}\] the metric dual \(\beta^{\sharp}\) of \(\beta\) is \[\beta^{\sharp}=\beta^{\nu}\partial_{z_{\nu}},\qquad\beta^{\nu}=g^{\bar{\mu}\nu}\beta_{\bar{\mu}}=\chi\,z^{\nu}F^{n}=\frac{z^{\nu}}{s^{n+1}}. \tag{4.54}\] Thus, the squared norm of \(\beta\) is \[|\beta|^{2}=\beta^{\nu}\bar{\beta}_{\nu}=\frac{1}{s^{2n+1}F^{n}}, \tag{4.55}\] which is not \(L^{2}\) due to the logarithmic divergence near \(r=0\). Finally, we want to normalise \(\tilde{\omega}\) so that it is the curvature of a connection \(\mathcal{A}\). To do so, we need to impose the quantisation condition \[\frac{i}{2\pi}\int_{\Sigma}\mathrm{d}\mathcal{A}=\ell\in\mathbb{Z}, \tag{4.56}\] where \(\Sigma\) is any generator of \(H^{2}(K,\mathbb{Z})=\mathbb{Z}\). 
We can take \(\Sigma\) to be the \(\mathbb{C}P^{1}\) obtained setting \(\zeta=0=w_{2}=\cdots=w_{n}\). To compute the flux of \(\tilde{\omega}\) over \(\Sigma\) we switch to \((w_{i},\zeta)\) coordinates. Using (4.21) we find \[\tilde{\omega}=\frac{2i}{n+1}\,\mathrm{d}\left(\frac{\mathrm{d}\log\zeta+\alpha}{(\kappa+|\zeta|^{2})^{\frac{n}{n+1}}}\right), \tag{4.57}\] for \(\alpha\) given by (4.12). It follows that \[\tilde{\omega}|_{\Sigma}=\frac{2i\mathrm{d}\alpha|_{\Sigma}}{(n+1)\kappa^{\frac{n}{n+1}}}. \tag{4.58}\] Since \(\frac{i}{2\pi}\frac{\mathrm{d}\alpha|_{\Sigma}}{(n+1)}\) has unit flux over \(\Sigma\), the required normalisation is \[\mathcal{A}=\ell(A-\bar{A}),\qquad A=\kappa^{\frac{n}{n+1}}\frac{z\mathrm{d}\bar{z}}{2s^{n+1}F^{n}}, \tag{4.59}\] which for \(n=1\) gives back (2.49). The analogue of the \(n=1\) Killing vector field \(X_{3}\) generating translation along the circles \(|\zeta|=\mathrm{const}\) of the fibres is \[\xi=\frac{i}{2}(z^{\nu}\partial_{\nu}-\bar{z}^{\nu}\partial_{\bar{\nu}}). \tag{4.60}\] Using (4.53) we see that \[\xi^{\flat}=\frac{i}{2}\left(\frac{z\mathrm{d}\bar{z}-\bar{z}\mathrm{d}z}{F^{n}}\right)\quad\Rightarrow\quad 2\mathrm{d}\xi^{\flat}=2i\mathrm{d}\left(\frac{z\mathrm{d}\bar{z}}{F^{n}}\right). \tag{4.61}\] Since \(F\) satisfies the identity \[F-\frac{\kappa}{F^{n}s^{n+1}}=\frac{1}{F^{n}}, \tag{4.62}\] it follows that \[\omega-\kappa\tilde{\omega}=2i\mathrm{d}\left(\frac{z\mathrm{d}\bar{z}}{F^{n}}\right)=2\mathrm{d}\xi^{\flat}. \tag{4.63}\] ### Zero modes of the twisted Dirac operator on \(\mathcal{O}(-n-1)\) As already mentioned, by the same argument used for EH, the canonical spin-\(c\) structure on \(K=\mathcal{O}(-n-1)\) admits no non-trivial \(L^{2}\) harmonic spinor. In this section we consider the problem of normalisable zero modes with respect to the Dirac operator twisted by the connection (4.59). We will limit ourselves to the study of zero modes in \(\Lambda^{0,1}(K)\). 
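Before turning to the ansatz, the algebraic identities satisfied by \(F\) — (4.30), (4.34) and (4.62) — and the fact that (4.49) solves the zero-mode equation (4.48) can be spot-checked with a computer algebra system. A sympy sketch, with \(n=3\) fixed purely for concreteness:

```python
import sympy as sp

s, kappa = sp.symbols('s kappa', positive=True)
n = 3  # spot-check a fixed value; any positive integer works the same way

F = (1 + kappa / s**(n + 1))**sp.Rational(1, n + 1)

# (4.30)/(4.34): F' = (1 - F^{n+1})/(F^n s), equivalently F^n (sF)' = 1
assert sp.simplify(sp.diff(F, s) - (1 - F**(n + 1)) / (F**n * s)) == 0
assert sp.simplify(F**n * sp.diff(s * F, s) - 1) == 0

# (4.62): F - kappa/(F^n s^{n+1}) = 1/F^n
assert sp.simplify(F - kappa / (F**n * s**(n + 1)) - 1 / F**n) == 0

# chi of (4.49) solves (4.48)
chi = 1 / (s**(n + 1) * F**n)
ode = sp.diff(chi, s) * (s + kappa / s**n) + chi * (n + 1 + kappa / s**(n + 1))
assert sp.simplify(ode) == 0
```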
Take the ansatz \[\sigma=P(z_{i})h(s)\overline{\partial}s, \tag{4.64}\] where \(P\) is a function of \(z_{1},\dots,z_{n+1}\) only, so that the equation \(\overline{\partial}\sigma=0\) is automatically satisfied. The remaining equation is the projection onto \(\Lambda^{0}(K)\) of \[\partial\sigma=-\ell\bar{A}\wedge\sigma. \tag{4.65}\] One has \[\partial\sigma=h\partial_{i}P\,\mathrm{d}z^{i}\wedge\overline{\partial}s+Ph^{\prime}\partial s\wedge\overline{\partial}s+Ph\partial\overline{\partial}s. \tag{4.66}\] The projections of \(\partial\overline{\partial}s\), \(\partial s\wedge\overline{\partial}s\) onto \(\Lambda^{0}(K)\) are given by (4.47), and \[hz_{\mu}\partial_{\nu}P\,g^{\mu\bar{\nu}}=hz_{\mu}\partial_{\nu}P\left(\frac{\delta^{\nu\mu}}{F}+\frac{\kappa}{s^{n+2}F}\bar{z}^{\mu}z^{\nu}\right)=hF^{n}z_{\mu}\partial_{\mu}P. \tag{4.67}\] Now take \(P\) to be a homogeneous polynomial in \((z_{1},\dots,z_{n+1})\) of degree \(\delta\), so that \(z_{\mu}\partial_{\mu}P=\delta P\). Then \[-\overline{\partial}^{*}\sigma=\frac{P}{F}(F^{n+1}(\delta h+h^{\prime}s+h)+nh). \tag{4.68}\] We also need to compute the \(\Lambda^{0}(K)\) projection of \(\mathcal{A}\wedge\sigma=\bar{A}\wedge\sigma\). Since \[\bar{A}=\kappa^{\frac{n}{n+1}}\frac{\partial s}{2s^{n+1}F^{n}}, \tag{4.69}\] we obtain \[-\bar{A}_{\mu}\sigma_{\nu}g^{\mu\bar{\nu}}=-\kappa^{\frac{n}{n+1}}\frac{Phz_{\nu}\bar{z}_{\mu}g^{\nu\bar{\mu}}}{2s^{n+1}F^{n}}=-\kappa^{\frac{n}{n+1}}\frac{Ph}{2s^{n}}. \tag{4.70}\] Therefore, the twisted Dirac equation becomes the following ODE for \(h\), \[F^{n+1}((\delta+1)h+h^{\prime}s)+nh=\kappa^{\frac{n}{n+1}}\ell\frac{hF}{2s^{n}}. \tag{4.71}\] Equivalently \[(\log h)^{\prime}=\frac{\ell\kappa^{\frac{n}{n+1}}}{2F^{n}s^{n+1}}-\frac{\delta+1}{s}-\frac{n}{sF^{n+1}}, \tag{4.72}\] which has solution \[h=h_{0}f,\qquad h_{0}=\frac{1}{F^{n}s^{\delta+n+1}}. 
\tag{4.73}\] Here \(h_{0}\) solves (4.72) for \(\ell=0\), and \(f\) satisfies \[(\log f)^{\prime}=\frac{\ell\kappa^{\frac{n}{n+1}}}{2F^{n}s^{n+1}}. \tag{4.74}\] To discuss normalisability we only need the behaviour of \(f\) for small and large \(s\), which is \[f\sim\begin{cases}s^{\ell/2}&\text{for $s\ll 1$},\\ \exp\left(-\frac{\ell\kappa^{\frac{n}{n+1}}}{2n}\frac{1}{s^{n}}\right)&\text{for $s\gg 1$}.\end{cases} \tag{4.75}\] Since \(|P|^{2}=s^{\delta}\), \(|\overline{\partial}s|^{2}=F^{n}s\), the squared norm of \(\sigma\) is \[|\sigma|^{2}=|P|^{2}h^{2}|\overline{\partial}s|^{2}=\frac{f^{2}}{F^{n}s^{\delta+2n+1}}, \tag{4.76}\] which is to be integrated with respect to the volume element \(r^{2n+1}\,\mathrm{d}r\) for \(r=\sqrt{s}\). For large \(r\) we have \(F\sim 1\), so \(|\sigma|^{2}r^{2n+1}\mathrm{d}r\sim r^{-(2\delta+2n+1)}\mathrm{d}r\), which always gives a finite contribution. For small \(r\) we have \(f\sim r^{\ell}\), \(F\sim\kappa^{\frac{1}{n+1}}r^{-2}\), hence \(|\sigma|^{2}r^{2n+1}\mathrm{d}r\sim r^{2\ell-2\delta-1}\mathrm{d}r\) and square-integrability requires \[\ell>\delta. \tag{4.77}\] Note the similarity with the condition (3.46) obtained for \(n=1\), where we took \(\delta=2N\). Note also how the (untwisted, not \(L^{2}\)) spinor \(h_{0}\,\overline{\partial}s\) obtained for \(\ell=0\), \(\delta=0\) is, up to scale, equal to the connection (4.59) with harmonic \(L^{2}\) curvature, again in complete analogy with the \(n=1\) case. ## 5. Discussion The main result of this work is the explicit description of the \(L^{2}\) zero modes of the (twisted) Dirac operator on both the Eguchi-Hanson metric, see (3.41), and its higher dimensional generalisation given by the Calabi metric on \(\mathcal{O}(-n-1)\). As expected, for \(n=1\) the EH zero modes organise themselves into multiplets of the EH isometry group \(SU(2)\). The dimension of the space of zero modes is controlled by the integer \(\ell\), see (2.49), which determines the twist. 
The Dirac operator index analysis in [4] confirms that the zero modes obtained are all the zero modes, and so for the EH space the problem of finding the harmonic spinors is completely solved. Far from arbitrary, the \(U(1)\) connection used for the twist is preferred geometrically for multiple reasons. First, its curvature is the unique (up to scale) harmonic \(L^{2}\) 2-form on EH. Second, it is a connection on the total space of the \(S^{1}\) bundle over EH arising as a level set in the process of the hyperkahler reduction from \(\mathbb{H}^{2}\). In passing, this suggests that the twisted Dirac operator may also be understood as an appropriate dimensional reduction of the untwisted Dirac operator on \(\mathbb{H}^{2}\). It would be very interesting to see whether this is the case, and whether the harmonic spinors (3.41) can also be understood as arising from the dimensional reduction of some very simple spinors on \(\mathbb{C}^{4}\). We leave this to further work. Third, the \(U(1)\) connection with \(L^{2}\) curvature agrees with the lowest lying (untwisted, not \(L^{2}\)) Dirac zero mode. The EH metric is the \(n=1\) case of Calabi's family of Ricci-flat Kahler metrics on \(\mathcal{O}(-n-1)\), \(n\in\mathbb{N}\). As we have shown, many of the EH results generalise to higher \(n\). First, the description (4.31) of the metric in terms of "symmetrical" coordinates completely parallels the case of EH. Second, for all values of \(n\) there is a unique \(L^{2}\) harmonic 2-form \(\tilde{\omega}\), see (4.51), which also arises as the curvature of the lowest lying zero mode of the untwisted Dirac operator, and whose connection can be used to twist the Dirac operator. Moreover, \(\tilde{\omega}\) differs from the Kahler form \(\omega\) by a constant multiple of \(\mathrm{d}\xi^{\flat}\), see (4.63), for \(\xi\) a Killing vector field generating the isometric \(U(1)\) action on the fibre. 
Interestingly, the forms \(\omega\), \(\tilde{\omega}\), \(\mathrm{d}\xi^{\flat}\) are all harmonic although only \(\tilde{\omega}\) is \(L^{2}\). Third, the EH zero modes (3.41) of the Dirac operator twisted by this preferred \(U(1)\) connection have analogues (4.64) in the general case. These general zero modes again fall into irreducible representations of the \(U(n+1)\) isometry group, and the number of allowed zero modes is controlled by the integer \(\ell\) that determines the curvature flux. The cases \(n=1\) and \(n>1\) also present an important difference: the EH metric is hyperkahler while for \(n>1\) Calabi's metric is Calabi-Yau but not hyperkahler. For this reason, for \(n>1\) there can be no analogue of the hyperkahler quotient derivation of the EH metric reviewed in section 2.5, but one may still wonder if the metric could be obtained as the Kahler reduction of some simpler higher-dimensional metric. In the general case, we cannot claim that the Dirac zero modes that we have found exhaust all \(L^{2}\) harmonic spinors. For EH it is possible to come to this conclusion thanks to the fact that, since the curvature of the twisting connection is self-dual, there are no zero modes in \(W^{+}\) and the index of the Dirac operator is equal to the number of \(L^{2}\) zero modes in \(W^{-}\). This self-duality argument does not extend to higher \(n\), and we have no alternative argument showing that there are no \(L^{2}\) zero modes in \(W^{+}\). In fact, we have only analysed zero modes belonging to \(\Lambda^{0,1}\) and we also do not know if there are additional zero modes in the spaces \(\Lambda^{0,2k+1}\), \(2k\leq n\). A complete answer to these questions is left for future work, along with the interesting problem of studying zero modes of the Dirac operator twisted by \(L^{2}\) harmonic forms of degree other than two. Another outcome of this work is the development of a set of rules for computing the spin-\(c\) Dirac operator on a Kahler manifold. 
Indeed, we have shown that the computation of the action of this Dirac operator on spinors is no more complicated than the computation of the exterior derivative of differential forms. The only additional operation needed is the application of the metric contraction to the result of the exterior derivative, to map the latter into the space \(\Lambda^{0,\bullet}\) where spinors live. This gives extremely simple computational rules and makes the formalism of spinors as differential forms extremely convenient for explicit calculations involving the Dirac operator. We hope this work will lead to a better familiarity of the community with the very efficient computational tool that spin-\(c\) spinors provide. We close with some remarks on what motivated us to embark on the present investigation. The Calabi construction can be applied to an arbitrary Kahler manifold \(M\) with non-zero scalar curvature, and gives a Ricci-flat Kahler metric on the total space of the canonical bundle of \(M\). Applying this construction to \(\mathbb{CP}^{1}\times\mathbb{CP}^{2}\) is particularly interesting because the resulting Ricci-flat metric has the Standard Model gauge group as its isometry group. The resulting space is Calabi-Yau with holonomy group \(SU(4)\). Its metric is asymptotically conical, with the metric on the base of the cone being the nearly parallel \(G_{2}\) metric \(M(3,2)\) discussed in particular in [7]. We are interested in determining the \(L^{2}\) harmonic spinors of (the appropriately twisted) Dirac operator on this space. The present work can be considered as setting the stage for this more involved computation. ### Acknowledgements The authors are grateful to Bernd Schroers for introducing them to each other, and for participating in the early stages of this collaboration. GF thanks the Simons Foundation for its support under the Simons Collaboration on Special Holonomy in Geometry, Analysis and Physics [grant number 488631]. 
## Appendix A The geometry of \(\mathbb{C}^{2}\) In this appendix we want to relate the descriptions of Euclidean \(\mathbb{R}^{4}=\mathbb{C}^{2}\) as a cohomogeneity-one space with respect to the action of \(SU(2)\), as a complex manifold and as a complex line bundle. To that end it is convenient to first review the geometry of \(SU(2)\). ### The geometry of \(SU(2)\) As is well known, \(S^{3}\simeq SU(2)\). A possible parametrisation of left-invariant 1-forms on \(SU(2)\) is \[\begin{split}\eta_{1}&=+\sin\psi\,\mathrm{d}\theta-\cos\psi\sin\theta\,\mathrm{d}\phi,\\ \eta_{2}&=-\cos\psi\,\mathrm{d}\theta-\sin\psi\sin\theta\,\mathrm{d}\phi,\\ \eta_{3}&=\mathrm{d}\psi+\cos\theta\,\mathrm{d}\phi,\end{split}\] (A.1) where \(\theta\in[0,\pi]\), \(\phi\in[0,2\pi)\), \(\psi\in[0,4\pi)\). Taking the same range for \(\theta,\phi\) with \(\psi\in[0,2\pi)\) gives instead a parametrisation of \(SU(2)/\mathbb{Z}_{2}=SO(3)\). Note that the forms (A.1) satisfy \[\mathrm{d}\eta_{i}=+\frac{1}{2}\epsilon_{ijk}\eta_{j}\wedge\eta_{k}.\] (A.2) The left-invariant vector fields on \(SU(2)\) satisfying \(\eta_{i}(X_{j})=\delta_{ij}\) are \[X_{1} =+\sin\psi\,\frac{\partial}{\partial\theta}+\frac{\cos\psi}{\sin\theta}\left(\cos\theta\,\frac{\partial}{\partial\psi}-\frac{\partial}{\partial\phi}\right),\] \[X_{2} =-\cos\psi\,\frac{\partial}{\partial\theta}+\frac{\sin\psi}{\sin\theta}\left(\cos\theta\,\frac{\partial}{\partial\psi}-\frac{\partial}{\partial\phi}\right),\] (A.3) \[X_{3} =+\frac{\partial}{\partial\psi}.\] They satisfy the \(\mathfrak{su}(2)\) Lie algebra relation \[[X_{i},X_{j}]=-\epsilon_{ijk}X_{k}.\] (A.4) In fact it can be checked that \(X_{i}\) is the left-invariant vector field associated to \(\frac{i}{2}\sigma_{i}\), for \((\sigma_{i})\) the Pauli matrices. 
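The relations (A.4) can be verified directly from the coordinate expressions (A.3). A sympy verification sketch, representing each vector field by its component triple on \((\partial_{\theta},\partial_{\phi},\partial_{\psi})\):

```python
import sympy as sp

th, ph, ps = sp.symbols('theta phi psi')
coords = (th, ph, ps)

# components of the left-invariant fields (A.3) on (d/dtheta, d/dphi, d/dpsi)
X1 = ( sp.sin(ps), -sp.cos(ps)/sp.sin(th), sp.cos(ps)*sp.cos(th)/sp.sin(th))
X2 = (-sp.cos(ps), -sp.sin(ps)/sp.sin(th), sp.sin(ps)*sp.cos(th)/sp.sin(th))
X3 = (sp.Integer(0), sp.Integer(0), sp.Integer(1))

def bracket(X, Y):
    """Commutator of vector fields: [X,Y]^i = X^j dY^i/dx^j - Y^j dX^i/dx^j."""
    return tuple(sp.simplify(sum(X[j]*sp.diff(Y[i], coords[j])
                                 - Y[j]*sp.diff(X[i], coords[j]) for j in range(3)))
                 for i in range(3))

# the su(2) relations (A.4): [X_i, X_j] = -eps_{ijk} X_k
assert all(sp.simplify(a + b) == 0 for a, b in zip(bracket(X1, X2), X3))
assert all(sp.simplify(a + b) == 0 for a, b in zip(bracket(X2, X3), X1))
assert all(sp.simplify(a + b) == 0 for a, b in zip(bracket(X3, X1), X2))
```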
We can also introduce right-invariant 1-forms \((\zeta_{i})\) and vector fields \((Y_{i})\), \[\zeta_{1} =-\cos\phi\sin\theta\,\mathrm{d}\psi+\sin\phi\,\mathrm{d}\theta,\] (A.5) \[\zeta_{2} =+\sin\phi\sin\theta\,\mathrm{d}\psi+\cos\phi\,\mathrm{d}\theta,\] \[\zeta_{3} =\mathrm{d}\phi+\cos\theta\,\mathrm{d}\psi,\] \[Y_{1} =\sin\phi\frac{\partial}{\partial\theta}+\frac{\cos\phi}{\sin\theta}\left(\cos\theta\frac{\partial}{\partial\phi}-\frac{\partial}{\partial\psi}\right),\] \[Y_{2} =\cos\phi\frac{\partial}{\partial\theta}-\frac{\sin\phi}{\sin\theta}\left(\cos\theta\frac{\partial}{\partial\phi}-\frac{\partial}{\partial\psi}\right),\] (A.6) \[Y_{3} =\frac{\partial}{\partial\phi},\] satisfying \(\zeta_{i}(Y_{j})=\delta_{ij}\), \[[Y_{i},Y_{j}]=+\epsilon_{ijk}Y_{k},\quad\mathrm{d}\zeta_{i}=-\frac{1}{2}\epsilon_{ijk}\zeta_{j}\wedge\zeta_{k},\quad[X_{i},Y_{j}]=0.\] (A.7) The metric \[g_{S^{3}}=\frac{1}{4}(\eta_{1}^{2}+\eta_{2}^{2}+\eta_{3}^{2})=\frac{1}{4}(\zeta_{1}^{2}+\zeta_{2}^{2}+\zeta_{3}^{2})\] (A.8) is the round metric on the 3-sphere of unit radius, or equivalently the bi-invariant metric on \(SU(2)\). In terms of the latter description it is clear that the metric is invariant under both a left and right \(SU(2)\) action. The left action is generated by the _right_-invariant vector fields (A.6). Such vector fields satisfy \(L_{Y_{i}}\eta_{j}=0\). The right \(SU(2)\) action is generated by the _left_-invariant vector fields (A.3). We have \[L_{X_{i}}\zeta_{j} =L_{Y_{i}}\eta_{j}=0,\] (A.9) \[L_{X_{i}}\eta_{j} =-\epsilon_{ijk}\eta_{k},\quad L_{Y_{i}}\zeta_{j}=+\epsilon_{ijk}\zeta_{k}.\] ### The geometry of \(\mathbb{C}^{2}\) The orbit structure of \(\mathbb{C}^{2}\simeq\mathbb{R}^{4}\) with respect to the \(SU(2)\) action is obtained by writing \(\mathbb{R}^{4}=\{0\}\cup\left((0,\infty)\times S^{3}\right)\). 
Introducing a radial coordinate \(r\in[0,\infty)\) transverse to the \(SU(2)\) orbits we can write the Euclidean metric on \(\mathbb{C}^{2}\) as \[g_{\mathbb{C}^{2}}=\mathrm{d}r^{2}+\frac{r^{2}}{4}(\eta_{1}^{2}+\eta_{2}^{2}+\eta_{3}^{2}).\] (A.10) The quantities \((\eta_{i})\), \((\zeta_{i})\), \((X_{i})\), \((Y_{i})\) extend to well-defined 1-forms and vector fields on \(\mathbb{R}^{4}\setminus\{0\}\). As a real manifold \(\mathbb{R}^{4}\) has isometry group \(O(4)\). We can identify a point \((z_{1},z_{2})\in\mathbb{C}^{2}\setminus\{0\}\) with a pair \((r,x)\) where \(x\) is the \(SU(2)\) element given by \[\frac{1}{\sqrt{|z_{1}|^{2}+|z_{2}|^{2}}}\begin{pmatrix}z_{1}&-\bar{z}_{2}\\ z_{2}&\bar{z}_{1}\end{pmatrix}\] (A.11) and \(r=\sqrt{|z_{1}|^{2}+|z_{2}|^{2}}\). Left and right matrix multiplication of (A.11) by an \(SU(2)\) element give rise to isometries, with the left action generated by the right-invariant vector fields \((Y_{i})\) and the right action by the left-invariant vector fields \((X_{i})\). Explicitly, if \(h\in SU(2)\) is given by \[h=\begin{pmatrix}a&-\bar{b}\\ b&\bar{a}\end{pmatrix},\] (A.12) the left and right \(SU(2)\) actions are given by (A.13), (A.14) respectively, \[z_{1}\mapsto az_{1}-\bar{b}z_{2},\quad z_{2}\mapsto bz_{1}+\bar{a}z_{2},\] (A.13) \[z_{1}\mapsto az_{1}-b\bar{z}_{2},\quad z_{2}\mapsto az_{2}+b\bar{z}_{1}.\] (A.14) Since \((SU(2)\times SU(2))/\mathbb{Z}_{2}=SO(4)\), the left and right \(SU(2)\) actions give the (connected component of the) full isometry group of \(\mathbb{R}^{4}\). The \(\mathbb{Z}_{2}\) quotient corresponds to the fact that for \(a=\bar{a}\), \(b=0\), which in \(SU(2)\) implies \(a=\pm 1\), left and right action result in the same transformation. The map (A.14) is not complex linear. 
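The normalisation in (A.11) can be checked symbolically: the unnormalised matrix \(m\) has determinant \(r^{2}\) and satisfies \(m^{\dagger}m=r^{2}\,\mathbb{1}\), so dividing by \(r\) gives a special unitary matrix. A sympy sketch (verification only):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
m = sp.Matrix([[z1, -sp.conjugate(z2)], [z2, sp.conjugate(z1)]])
r2 = z1*sp.conjugate(z1) + z2*sp.conjugate(z2)  # |z1|^2 + |z2|^2 = r^2

# det m = r^2 and m^H m = r^2 * Id, hence m / r lies in SU(2)
assert sp.expand(m.det() - r2) == 0
assert sp.expand(m.H * m - r2*sp.eye(2)) == sp.zeros(2)
```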
In fact \(\mathbb{C}^{2}\) as a complex manifold has metric \[g_{\mathbb{C}^{2}}=|\mathrm{d}z_{1}|^{2}+|\mathrm{d}z_{2}|^{2},\] (A.15) complex structure \[J(\partial/\partial z_{i})=+i\partial/\partial z_{i},\quad J(\mathrm{d}z_{i}) =-i\mathrm{d}z_{i},\] (A.16) and the smaller isometry group \(U(2)\subset SO(4)\). There is a group isomorphism \(U(2)=(SU(2)\times U(1))/\mathbb{Z}_{2}\), \((h,u)\mapsto hu\). The (left) \(SU(2)\) action is (A.13) and the \(U(1)\) action is diagonal, \[(z_{1},z_{2})\cdot\mathrm{e}^{it}=(z_{1},z_{2})\mathrm{e}^{it}.\] (A.17) Looking at (A.14) we see that (A.17) is the \(U(1)\) action obtained restricting the right \(SU(2)\) action to the \(U(1)\) subgroup obtained by setting \(b=0\). We now define an orthonormal frame \((X_{1},X_{2},X_{3},X_{4})\) consisting of real vector fields adapted to the action (A.17) and the complex structure \(J\), \[\begin{split} X_{1}&=-\frac{i}{2}\left(\bar{z}_{2} \frac{\partial}{\partial z_{1}}-\bar{z}_{1}\frac{\partial}{\partial z_{2}}-z_ {2}\frac{\partial}{\partial\bar{z}_{1}}+z_{1}\frac{\partial}{\partial\bar{z} _{2}}\right),\\ X_{2}&=-\frac{1}{2}\left(\bar{z}_{1}\frac{\partial} {\partial z_{2}}-\bar{z}_{2}\frac{\partial}{\partial z_{1}}+z_{1}\frac{ \partial}{\partial\bar{z}_{2}}-z_{2}\frac{\partial}{\partial\bar{z}_{1}} \right),\\ X_{3}&=+\frac{i}{2}\left(z_{1}\frac{\partial}{ \partial z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}-\bar{z}_{1}\frac{ \partial}{\partial\bar{z}_{1}}-\bar{z}_{2}\frac{\partial}{\partial\bar{z}_{2} }\right),\\ X_{4}&=-\frac{1}{2}\left(z_{1}\frac{\partial}{ \partial z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}+\bar{z}_{1}\frac{ \partial}{\partial\bar{z}_{1}}+\bar{z}_{2}\frac{\partial}{\partial\bar{z}_{2} }\right).\end{split}\] (A.18) The frame \((X_{i})\) is adapted in the sense that \(X_{3}\) is the infinitesimal generator of the action (A.17), \(X_{4}=JX_{3}\), \(X_{2}=JX_{1}\) and \((X_{1},JX_{1})\) is the \(g_{\mathbb{C}^{2}}\)-orthogonal complement of \((X_{3},JX_{3})\). 
More precisely, \[X_{3}-iJX_{3}=i\left(z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}\right)\] (A.19) generates (A.17) and \(X_{3}\) is the corresponding real holomorphic vector field. We have \[|X_{1}|^{2}=|X_{2}|^{2}=|X_{3}|^{2}=|X_{4}|^{2}=\frac{1}{4}(|z_{1}|^{2}+|z_{2}|^{2}),\] (A.20) where \(|X_{i}|^{2}=g_{\mathbb{C}^{2}}(X_{i},X_{i})\). As we will show, the vector fields \((X_{1},X_{2},X_{3})\) in (A.18) are the left-invariant vector fields (A.3) expressed in terms of complex coordinates. Denoting by \(X^{\flat}\) the \(g_{\mathbb{C}^{2}}\)-metric dual of \(X\) we have \[\begin{split}\theta_{1}&=X^{\flat}_{1}=-\frac{i}{4}\left(z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}-\bar{z}_{1}\mathrm{d}\bar{z}_{2}+\bar{z}_{2}\mathrm{d}\bar{z}_{1}\right)&=+\frac{1}{2}\operatorname{Im}(z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}),\\ \theta_{2}&=X^{\flat}_{2}=-\frac{1}{4}\left(z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}+\bar{z}_{1}\mathrm{d}\bar{z}_{2}-\bar{z}_{2}\mathrm{d}\bar{z}_{1}\right)&=-\frac{1}{2}\operatorname{Re}(z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}),\\ \theta_{3}&=X^{\flat}_{3}=+\frac{i}{4}\left(z_{1}\mathrm{d}\bar{z}_{1}+z_{2}\mathrm{d}\bar{z}_{2}-\bar{z}_{1}\mathrm{d}z_{1}-\bar{z}_{2}\mathrm{d}z_{2}\right)&=+\frac{1}{2}\operatorname{Im}(\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}),\\ \theta_{4}&=X^{\flat}_{4}=-\frac{1}{4}\left(z_{1}\mathrm{d}\bar{z}_{1}+z_{2}\mathrm{d}\bar{z}_{2}+\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}\right)&=-\frac{1}{2}\operatorname{Re}(\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}).\end{split}\] (A.21) Thus \[g_{\mathbb{C}^{2}}=\sum_{i=1}^{4}\frac{\theta_{i}^{2}}{|X_{i}|^{2}}=\frac{|\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}|^{2}}{|z_{1}|^{2}+|z_{2}|^{2}}+\frac{|z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}|^{2}}{|z_{1}|^{2}+|z_{2}|^{2}},\] (A.22) where the first and second addend correspond to the metric restricted to \(\operatorname{Span}(X_{3},JX_{3})\), 
\(\operatorname{Span}(X_{1},JX_{1})\) respectively. The metrics (A.15), (A.10) are related by the diffeomorphism \[z_{1}=r\cos\left(\frac{\theta}{2}\right)\mathrm{e}^{\frac{i}{2}(\psi+\phi)},\quad z_{2}=r\sin\left(\frac{\theta}{2}\right)\mathrm{e}^{\frac{i}{2}(\psi-\phi)}.\] (A.23) It can be checked that, pulling back by (A.23), the vector fields \((X_{1},X_{2},X_{3})\) in (A.18) map to (A.3), and \[X_{4}=-\frac{r}{2}\frac{\partial}{\partial r},\qquad\theta_{4}=-\frac{r}{2}\mathrm{d}r.\] (A.24) Note how \((X_{3},X_{4})\) span the \((r,\psi)\) plane. The forms \(\eta_{i}\), \(\theta_{i}\) are the metric duals of \(X_{i}\) with respect to, respectively, \(g_{S^{3}}\) and \(g_{\mathbb{C}^{2}}\). For vector fields \(X,Y\) tangent to \(S^{3}\) we have \(g_{S^{3}}(X,Y)=\frac{4}{|z_{1}|^{2}+|z_{2}|^{2}}g_{\mathbb{C}^{2}}(X,Y)\), hence \[\eta_{i}=\left(\frac{4}{|z_{1}|^{2}+|z_{2}|^{2}}\right)\theta_{i}.\] (A.25) The space \(\mathbb{C}^{2}\setminus\{0\}\) can be viewed as a line bundle over \(\mathbb{C}P^{1}\). 
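The algebraic content of the splitting (A.22) is the two-component Lagrange identity \(|\bar{z}_{1}a_{1}+\bar{z}_{2}a_{2}|^{2}+|z_{1}a_{2}-z_{2}a_{1}|^{2}=(|z_{1}|^{2}+|z_{2}|^{2})(|a_{1}|^{2}+|a_{2}|^{2})\), obtained by substituting arbitrary complex numbers \(a_{i}\) for the differentials \(\mathrm{d}z_{i}\). A quick numerical check (ours, not from the text):

```python
import random

random.seed(1)
for _ in range(100):
    # random points z1, z2 and "differentials" a1, a2
    z1, z2, a1, a2 = (complex(random.uniform(-1, 1), random.uniform(-1, 1))
                      for _ in range(4))
    lhs = abs(z1.conjugate() * a1 + z2.conjugate() * a2)**2 \
        + abs(z1 * a2 - z2 * a1)**2
    rhs = (abs(z1)**2 + abs(z2)**2) * (abs(a1)**2 + abs(a2)**2)
    assert abs(lhs - rhs) < 1e-12
```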
We introduce complex coordinates \((w,\zeta)\in\mathbb{C}^{2}\) on the base and fibre via \[(z_{1},z_{2})=\frac{\zeta}{\sqrt{1+|w|^{2}}}(w,1),\qquad(w,\zeta)=\left(\frac{z_{1}}{z_{2}},\sqrt{|z_{1}|^{2}+|z_{2}|^{2}}\frac{z_{2}}{|z_{2}|}\right).\] (A.26) Since \[\begin{split}|z_{1}|^{2}+|z_{2}|^{2}&=|\zeta|^{2},\\ z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}&=-\frac{\zeta^{2}\mathrm{d}w}{1+|w|^{2}},\\ \bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}&=\bar{\zeta}\mathrm{d}\zeta+\frac{|\zeta|^{2}}{2}\left(\frac{\bar{w}\mathrm{d}w-w\mathrm{d}\bar{w}}{1+|w|^{2}}\right)=\bar{\zeta}(\mathrm{d}\zeta+ia\zeta),\end{split}\] (A.27) where \[a=\frac{1}{2i}\frac{\bar{w}\mathrm{d}w-w\mathrm{d}\bar{w}}{1+|w|^{2}}=\frac{\operatorname{Im}(\bar{w}\mathrm{d}w)}{1+|w|^{2}},\] (A.28) we get \[g_{\mathbb{C}^{2}}=|\mathrm{d}\zeta+ia\zeta|^{2}+\frac{|\zeta|^{2}|\mathrm{d}w|^{2}}{(1+|w|^{2})^{2}}.\] (A.29) Thinking of \(S^{3}\) as the Hopf fibration \(U(1)\hookrightarrow S^{3}\to\mathbb{C}P^{1}\) we recognise \(w\) as an inhomogeneous coordinate on the base \(\mathbb{C}P^{1}\) and, at fixed \(|\zeta|=R\), the phase of \(\zeta\) as the angle parametrising the \(U(1)\) fibres. If the value of \(|\zeta|\) is allowed to vary in \((0,\infty)\) we obtain a parametrisation of \((0,\infty)\times S^{3}\) viewed as the line bundle \(\mathbb{C}^{2}\setminus\{0\}\to\mathbb{C}P^{1}\). 
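The identities (A.27) can be sanity-checked numerically by evaluating both sides on a tangent vector, computing the differentials of \(z_{1},z_{2}\) along a path by central differences (our illustration, with made-up sample values):

```python
def coords(w, zeta):
    # (A.26): (z1, z2) = zeta * (w, 1) / sqrt(1 + |w|^2)
    n = (1 + abs(w)**2)**0.5
    return zeta * w / n, zeta / n

w0, zeta0 = 0.3 - 0.7j, 1.1 + 0.4j      # sample point
vw, vz = 0.8 + 0.2j, -0.5 + 0.9j        # tangent vector (dw, dzeta)
h = 1e-6

def d_ds(f):
    # central-difference derivative along s -> (w0 + s*vw, zeta0 + s*vz) at s = 0
    return (f(h) - f(-h)) / (2 * h)

z1, z2 = coords(w0, zeta0)
dz1 = d_ds(lambda s: coords(w0 + s * vw, zeta0 + s * vz)[0])
dz2 = d_ds(lambda s: coords(w0 + s * vw, zeta0 + s * vz)[1])

assert abs(abs(z1)**2 + abs(z2)**2 - abs(zeta0)**2) < 1e-12                # first line of (A.27)
assert abs(z1 * dz2 - z2 * dz1 + zeta0**2 * vw / (1 + abs(w0)**2)) < 1e-8  # second line
a_v = (w0.conjugate() * vw).imag / (1 + abs(w0)**2)                        # (A.28) on the tangent
assert abs(z1.conjugate() * dz1 + z2.conjugate() * dz2
           - zeta0.conjugate() * (vz + 1j * a_v * zeta0)) < 1e-8           # third line
```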
Since \[\zeta\frac{\partial}{\partial\zeta}=z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}\] (A.30) the vector field \(X_{3}\) translating along the fibres becomes \[X_{3}=\frac{i}{2}\left(\zeta\frac{\partial}{\partial\zeta}-\bar{\zeta}\frac{\partial}{\partial\bar{\zeta}}\right).\] (A.31) Further parametrising \[\zeta=r\mathrm{e}^{i\chi}\quad\Leftrightarrow\quad r=|\zeta|=\sqrt{|z_{1}|^{2}+|z_{2}|^{2}},\;\mathrm{e}^{i\chi}=\frac{z_{2}}{|z_{2}|},\] (A.32) we obtain \[g_{\mathbb{C}^{2}}=\mathrm{d}r^{2}+r^{2}\left[(\mathrm{d}\chi+a)^{2}+\frac{|\mathrm{d}w|^{2}}{(1+|w|^{2})^{2}}\right],\] (A.33) where the term in brackets is the round metric on \(S^{3}\). Introducing the usual coordinates \((\theta,\phi)\) and \(\psi\in[0,4\pi)\) parametrising the base and fibre of the Hopf fibration as in (A.23) we get \[w =\frac{z_{1}}{z_{2}}=\cot\frac{\theta}{2}\mathrm{e}^{i\phi},\] (A.34) \[\chi =\frac{\psi-\phi}{2},\] (A.35) so that \[\frac{|\mathrm{d}w|^{2}}{(1+|w|^{2})^{2}} =\frac{1}{4}(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2})=\frac{1}{4}(\eta_{1}^{2}+\eta_{2}^{2}),\] (A.36) \[\mathrm{d}\chi+a =\frac{\mathrm{d}\psi+\cos\theta\,\mathrm{d}\phi}{2}=\frac{\eta_{3}}{2},\quad a=\left(\frac{1+\cos\theta}{2}\right)\mathrm{d}\phi,\] recovering (A.10). The line bundle \(\mathbb{C}^{2}\setminus\{0\}\to\mathbb{C}P^{1}\) is homotopy equivalent to the Hopf fibration. The latter has first Chern number one. In fact \[\beta=[\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}]|_{S^{3}}=i\,\mathrm{Im}(\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2})|_{S^{3}}=i(\mathrm{d}\chi+a)=\frac{i}{2}\eta_{3}\] (A.37) evaluates to \(1\) on the generator (A.19) of the \(U(1)\) action and so defines a connection on the total space of the Hopf bundle. By Chern-Weil the first Chern number of the Hopf fibration is \[\frac{i}{2\pi}\int_{\mathbb{C}P^{1}}\mathrm{d}\beta=1.\] (A.38) The splitting (A.22) can be understood as a special case of a Kahler reduction. 
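The expression for \(a\) in (A.36) can be sanity-checked numerically: evaluating \(a=\operatorname{Im}(\bar{w}\mathrm{d}w)/(1+|w|^{2})\) on a tangent vector with \(w=\cot(\theta/2)\mathrm{e}^{i\phi}\) should give \(\frac{1+\cos\theta}{2}\,\mathrm{d}\phi\) (our illustration, with made-up sample values):

```python
import math

def w_of(theta, phi):
    # (A.34): w = cot(theta/2) e^{i phi}
    return (math.cos(theta / 2) / math.sin(theta / 2)) * complex(math.cos(phi), math.sin(phi))

theta0, phi0, vth, vph = 0.9, 2.1, 0.7, -1.3   # sample point and tangent (dtheta, dphi)
h = 1e-6
# central-difference value of dw on the tangent vector
dw = (w_of(theta0 + h * vth, phi0 + h * vph)
      - w_of(theta0 - h * vth, phi0 - h * vph)) / (2 * h)
w0 = w_of(theta0, phi0)

a_v = (w0.conjugate() * dw).imag / (1 + abs(w0)**2)   # a of (A.28) on the tangent
assert abs(a_v - (1 + math.cos(theta0)) / 2 * vph) < 1e-7
```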
Let \(M\) be a Kahler manifold with metric \(g\) and Kahler form \(\omega\). Assume that there is a free Hamiltonian and isometric action of \(U(1)\) on \(M\) generated by the vector field \(\xi\). Let \(\mu\) be the associated Hamiltonian, \(i_{\xi}\omega=-\mathrm{d}\mu\). Denote by \(\Sigma_{c}\) the level set \(\mu^{-1}(c)\). It is well known that if \(c\) is a regular value of \(\mu\) then the quotient \(\Sigma_{c}/U(1)\) equipped with the metric and Kahler form induced by \(g\) and \(\omega\) is a smooth Kahler manifold of complex dimension \(\dim_{\mathbb{C}}M-1\). The vector fields \(\xi\), \(J\xi\) are respectively tangent and orthogonal to \(\Sigma_{c}\). In particular \(J\xi\) spans the \(g\)-orthogonal complement of \(T\Sigma_{c}\). Therefore the quotient metric \(\overline{g}\) can be identified with the restriction of \(g\) to the \(g\)-orthogonal complement of \(\mathrm{Span}(\xi,J\xi)\) in \(TM\), \[g=\bar{g}+g^{\perp}\] (A.39) where \(g^{\perp}=g|_{\mathrm{Span}(\xi,J\xi)}\). Writing \(\theta=g(\xi,\cdot)\), \(\theta_{J}=g(J\xi,\cdot)\) we have \[g=\bar{g}+\frac{\theta^{2}+\theta_{J}^{2}}{g(\xi,\xi)}.\] (A.40) Take now \(M=\mathbb{C}^{2}\) and consider the \(U(1)\) action generated by the nowhere vanishing vector field \(\xi=X_{3}\), which preserves both \(g_{\mathbb{C}^{2}}\) and \(\omega_{\mathbb{C}^{2}}\). 
The action is Hamiltonian with \[\mu=\frac{1}{2}(|z_{1}|^{2}+|z_{2}|^{2}).\] (A.41) Taking a basis \((X_{1},X_{2}=JX_{1})\) for the orthogonal complement of \((X_{3},X_{4}=JX_{3})\) as in (A.18) gives back (A.22), \[g_{\mathbb{C}^{2}}=\bar{g}+g^{\perp}=\frac{\theta_{1}^{2}+\theta_{2}^{2}}{|X_{1}|^{2}}+\frac{\theta_{3}^{2}+\theta_{4}^{2}}{|X_{3}|^{2}}=\frac{|z_{1}\mathrm{d}z_{2}-z_{2}\mathrm{d}z_{1}|^{2}}{|z_{1}|^{2}+|z_{2}|^{2}}+\frac{|\bar{z}_{1}\mathrm{d}z_{1}+\bar{z}_{2}\mathrm{d}z_{2}|^{2}}{|z_{1}|^{2}+|z_{2}|^{2}}.\] (A.42) For \(R\neq 0\) the \(\mu\) level sets \(|z_{1}|^{2}+|z_{2}|^{2}=R^{2}\) are 3-spheres and the induced metric is \[g_{S_{R}^{3}}=\frac{\theta_{1}^{2}+\theta_{2}^{2}}{|X_{1}|^{2}}+\frac{\theta_{3}^{2}}{|X_{3}|^{2}}=\frac{R^{2}}{4}\left[\frac{4|\mathrm{d}w|^{2}}{(1+|w|^{2})^{2}}+4\left(\frac{\mathrm{Im}(\bar{\zeta}\mathrm{d}\zeta)}{R^{2}}+a\right)^{2}\right]=\frac{R^{2}}{4}(\eta_{1}^{2}+\eta_{2}^{2}+\eta_{3}^{2}),\] (A.43) having used (A.36). ## Appendix B Calabi's construction For reference, we summarise here Calabi's construction [2] of a Ricci-flat Kahler metric on the canonical bundle of a Kahler-Einstein manifold with scalar curvature \(s\neq 0\). We partially follow the presentation given in [17]. ### The Chern connection We recall, see e.g. [14], that a holomorphic vector bundle \(E\to M\) equipped with a Hermitian product \(\langle\cdot,\cdot\rangle\) carries a preferred connection \(\nabla\) known as the Chern connection. It is characterised by compatibility with the Hermitian product, \(X\langle\sigma_{1},\sigma_{2}\rangle=\langle\nabla_{X}\sigma_{1},\sigma_{2}\rangle+\langle\sigma_{1},\nabla_{X}\sigma_{2}\rangle\), and compatibility with the complex structure on the total space. 
If \(E\) is a line bundle and \(\sigma\) a section of \(E\), then for some connection form \(\alpha\) \[\nabla\sigma=\alpha\otimes\sigma.\] (B.1) Compatibility with the Hermitian product implies \[\mathrm{d}\log\|\sigma\|^{2}=\alpha+\bar{\alpha}.\] (B.2) For a holomorphic section, (B.2) implies \(\alpha=\partial\log\|\sigma\|^{2}\). If instead \(\sigma=e\) is unitary, that is \(\|e\|^{2}=1\), then the connection is purely imaginary, \(\alpha=-\bar{\alpha}\). We recall that if \(\alpha\) is the Chern connection on a line bundle \(E\) then \(-\alpha\) is the Chern connection on \(E^{*}\). Assume now that \(M\) is Kahler and \(E=TM\otimes\mathbb{C}\) is the complexified tangent bundle. Denote by \(K=\Lambda^{n,0}(M)\) the canonical bundle of \(M\), by \(K^{*}\) its dual and let \(\alpha\) be the Chern connection on \(K\), \(\mathrm{d}\alpha\) the corresponding curvature. Since \(M\) is Kahler, the Chern connection on \(E\) is just the Levi-Civita connection on \(TM\) extended by complex linearity. The dual \(K^{*}\) of the canonical bundle \(K=\Lambda^{n,0}(M)\) of \(M\) is the determinant bundle of \(E\), so the Chern connection on \(K^{*}\) is the trace of the Levi-Civita connection on \(TM\), and the Chern connection \(\alpha\) on \(K\) is its opposite. Therefore \[\mathrm{d}\alpha=i\rho_{M}\] (B.3) where \(\rho_{M}\) is the Ricci form on \(M\). If \(M\) is Kahler-Einstein of complex dimension \(n\) and scalar curvature \(s\) then \(\rho_{M}=\frac{s}{2n}\omega_{M}\), where \(\omega_{M}\) is the Kahler form on \(M\), hence \[-i\mathrm{d}\alpha=\rho_{M}=\frac{s}{2n}\omega_{M}.\] (B.4) ### Calabi's construction In this section \(M\) is a Kahler-Einstein manifold of complex dimension \(n\) with Riemannian metric \(g_{M}\), Kahler form \(\omega_{M}\), Riemannian volume element \[\mathrm{vol}_{M}=\frac{\omega_{M}^{n}}{n!}\] (B.5) and scalar curvature \(s\). 
The bundle \(K\) is the canonical bundle of \(M\), \(\alpha\) the connection form of the Chern connection \(\nabla\) on \(K\). Compatibility of \(\nabla\) with the complex structure is equivalent to \(\nabla^{0,1}=\overline{\partial}\). Let \(\sigma\) be a unitary section of \(K\). Since \(\sigma\) is a form of degree \((n,0)\), \(\partial\sigma=0=\nabla^{1,0}\sigma\) and so \[\mathrm{d}\sigma=\overline{\partial}\sigma=\nabla\sigma=\alpha\wedge\sigma.\] (B.6) The Riemannian metric on \(M\) induces a Hermitian product \(\langle\cdot,\cdot\rangle\) on \(K\) given by \[\sigma\wedge*\overline{\sigma}=\|\sigma\|^{2}\mathrm{vol}_{M}.\] (B.7) Since \(*:\Lambda^{p,q}(M)\to\Lambda^{n-q,n-p}(M)\) and \(*^{2}=(-1)^{(p+q)^{2}}\), \(*\) acts on \(\Lambda^{0,n}(M)\) by multiplication by \(i^{n^{2}}\). Hence (B.7) is equivalent to \[\sigma\wedge\overline{\sigma}=(-i)^{n^{2}}\|\sigma\|^{2}\frac{\omega_{M}^{n}}{n!}.\] (B.8) Let \(\zeta\) be a coordinate on the fibres of \(K\), set \[\theta=\mathrm{d}\zeta+\zeta\alpha\] (B.9) and define \[\omega =\lambda\left[u\,\omega_{M}+\frac{2in}{s}u^{\prime}\theta\wedge\bar{\theta}\right],\] (B.10) \[\Omega =\theta\wedge e,\] (B.11) where \(u\) is a function of \(|\zeta|^{2}\), \(\lambda\) is some non-zero constant and \(e\) denotes the unitary section \(\sigma\). We are now going to show that \((\omega,\Omega)\) satisfy \[\mathrm{d}\omega =0,\] (B.12) \[\mathrm{d}\Omega =0,\] (B.13) \[\Omega\wedge\omega =0,\] (B.14) \[\Omega\wedge\bar{\Omega} =C\omega^{n+1}.\] (B.15) Closure of \(\Omega\) means that the complex structure defined by declaring \(\theta\) to be a \((1,0)\) form on \(K\) is integrable, while (B.14) implies that \(\omega\) is a \((1,1)\) form in the complex structure defined by \(\Omega\). Equation (B.12) and non-degeneracy of \(\omega\) then imply that \(\omega\) is a Kahler form on \(K\). Finally the normalisation condition (B.15), where \(C\) is some constant, implies that the Kahler metric on \(K\) is Ricci-flat. 
Since (B.15) is equivalent to \[\|\Omega\|^{2}=i^{(n+1)^{2}}C,\] (B.16) the value of the constant \(C\) depends on the value \(\|\Omega\|\). Here we follow [10] by taking \[C=(-i)^{(n+1)^{2}}=\begin{cases}1&\text{if $n$ is odd},\\ -i&\text{if $n$ is even},\end{cases}\] (B.17) corresponding to \(\|\Omega\|^{2}=1\). We now show the properties (B.12)-(B.15) hold for \(\omega\), \(\Omega\) defined by (B.10), (B.11) and \(u\) given below by (B.26). One calculates \[\mathrm{d}|\zeta|^{2} =\zeta\bar{\theta}+\bar{\zeta}\theta,\] (B.18) \[\mathrm{d}\theta =\zeta\mathrm{d}\alpha+\mathrm{d}\zeta\wedge\alpha=i\frac{s}{2n} \zeta\omega_{M}+\mathrm{d}\zeta\wedge\alpha=i\frac{s}{2n}\zeta\omega_{M}+ \theta\wedge\alpha,\] from which \(\mathrm{d}\omega=0\) easily follows. Since, using (B.6), \[\Omega=(\mathrm{d}\zeta+\zeta\alpha)\wedge e=\mathrm{d}(\zeta e)\] (B.19) \(\Omega\) is closed. Since \(\Omega\) is of type \((n+1,0)\) one has \(\omega_{M}\wedge\Omega=0\) and \(\omega\wedge\Omega=0\) follows. Using (B.8) applied to the Kahler metric determined by \(\omega\) one has \[\Omega\wedge\bar{\Omega}=(-i)^{(n+1)^{2}}\|\Omega\|^{2}\frac{\omega^{n+1}}{(n+ 1)!},\] (B.20) and from (B.10) \[\omega^{n+1}=\frac{2in}{s}\lambda^{n+1}u^{n}u^{\prime}\omega_{M}^{n}\wedge \theta\wedge\bar{\theta}\] (B.21) giving \[\Omega\wedge\bar{\Omega}=(-1)^{n}(-i)^{n^{2}}\frac{2n}{s(n+1)}\lambda^{n+1}\| \Omega\|^{2}u^{n}u^{\prime}\mathrm{vol}_{M}\wedge\theta\wedge\bar{\theta}\] (B.22) We can also compute directly \[\Omega\wedge\bar{\Omega} =(-1)^{n}\theta\wedge\bar{\theta}\wedge e\wedge\bar{e}=(-1)^{n}(-i)^ {n^{2}}\theta\wedge\bar{\theta}\wedge e\wedge*\bar{e}\] (B.23) \[=(-1)^{n}(-i)^{n^{2}}\mathrm{vol}_{M}\wedge\theta\wedge\bar{\theta}.\] Thus \[\frac{2n}{s(n+1)}\lambda^{n+1}\|\Omega\|^{2}u^{n}u^{\prime}=1\] (B.24) and in order to satisfy the normalisation condition \(\|\Omega\|^{2}=1\) we need to impose \[\frac{2n}{s(n+1)}\lambda^{n+1}u^{n}u^{\prime}=1,\] (B.25) which has solution 
\[u=\left(c|\zeta|^{2}+\kappa\right)^{\frac{1}{n+1}},\] (B.26) where \(\kappa\) is an integration constant and \[c=\frac{s(n+1)^{2}}{2n\lambda^{n+1}}.\] (B.27) In conclusion the Kahler form \(\omega\) and metric \(g\) on \(K\) are \[\omega =\lambda u\,\omega_{M}+i(n+1)(\lambda u)^{-n}\theta\wedge\bar{\theta},\] (B.28) \[g =\lambda u\,g_{M}+2(n+1)(\lambda u)^{-n}|\theta|^{2}.\] (B.29) If \(s>0\), \(\kappa>0\) and \(g_{M}\) is complete then \(g\) is globally defined on the total space of \(K\) and complete.
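As a numerical sanity check (ours, not part of the text), the profile (B.26) with the constant (B.27) solves (B.25): writing \(\rho=|\zeta|^{2}\), one has \(u^{n}u^{\prime}=c/(n+1)\) identically, so the left-hand side of (B.25) is constant and equal to one:

```python
# check that u = (c*rho + kappa)**(1/(n+1)) with c = s*(n+1)**2 / (2*n*lam**(n+1))
# satisfies (2*n/(s*(n+1))) * lam**(n+1) * u**n * u' == 1, where ' = d/d rho  (B.25)
n, s, lam, kappa = 3, 2.5, 1.7, 0.8      # sample values with s != 0
c = s * (n + 1)**2 / (2 * n * lam**(n + 1))

def u(rho):
    return (c * rho + kappa)**(1 / (n + 1))

rho0, h = 0.6, 1e-6
du = (u(rho0 + h) - u(rho0 - h)) / (2 * h)   # central-difference u'(rho0)
assert abs((2 * n / (s * (n + 1))) * lam**(n + 1) * u(rho0)**n * du - 1) < 1e-7
```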
2309.15364
Non-Stationary Difference Equation and Affine Laumon Space II: Quantum Knizhnik-Zamolodchikov Equation
We show that Shakirov's non-stationary difference equation, when it is truncated, implies the quantum Knizhnik-Zamolodchikov ($q$-KZ) equation for $U_{\mathsf v}\bigl(A_1^{(1)}\bigr)$ with generic spins. Namely, we can tune mass parameters so that the Hamiltonian acts on the space of finite Laurent polynomials. Then the representation matrix of the Hamiltonian agrees with the $R$-matrix, or the quantum $6j$ symbols. On the other hand, we prove that the $K$ theoretic Nekrasov partition function from the affine Laumon space is identified with the well-studied Jackson integral solution to the $q$-KZ equation. Combining these results, we establish that the affine Laumon partition function gives a solution to Shakirov's equation, which was a conjecture in our previous paper. We also work out the base-fiber duality and four-dimensional limit in relation with the $q$-KZ equation.
Hidetoshi Awata, Koji Hasegawa, Hiroaki Kanno, Ryo Ohkawa, Shamil Shakirov, Jun'ichi Shiraishi, Yasuhiko Yamada
2023-09-27T02:15:02Z
http://arxiv.org/abs/2309.15364v4
# Non-stationary difference equation and affine Laumon space II: quantum Knizhnik-Zamolodchikov equation ###### Abstract. We show that Shakirov's non-stationary difference equation, when it is truncated, implies the quantum Knizhnik-Zamolodchikov (\(q\)-KZ) equation for \(U_{\mathsf{v}}(A_{1}^{(1)})\) with generic spins. Namely we can tune mass parameters so that the Hamiltonian acts on the space of finite Laurent polynomials. Then the representation matrix of the Hamiltonian agrees with the \(R\)-matrix, or the quantum \(6j\) symbols. On the other hand, we prove that the \(K\) theoretic Nekrasov partition function from the affine Laumon space is identified with the well-studied Jackson integral solution to the \(q\)-KZ equation. Combining these results, we establish that the affine Laumon partition function gives a solution to Shakirov's equation, which was a conjecture in our previous paper. We also work out the base-fiber duality and four dimensional limit in relation with the \(q\)-KZ equation. ## 1. Introduction The quantum (or \(q\)-deformed) Knizhnik-Zamolodchikov (\(q\)-KZ) equation typically arises as a linear difference equation satisfied by matrix elements of the products of intertwining operators of highest weight representations of quantum affine algebra [27], [23]. It roughly takes the following form; \[\Psi(u_{1},\cdots,pu_{j},\cdots,u_{n})=R_{V_{j}V_{j-1}}\left(\frac{pu_{j}}{u_{j-1}}\right)\cdots R_{V_{j}V_{1}}\left(\frac{pu_{j}}{u_{1}}\right)D_{j}\] \[R_{V_{n}V_{j}}^{-1}\left(\frac{u_{n}}{u_{j}}\right)\cdots R_{V_{j+1}V_{j}}^{-1}\left(\frac{u_{j+1}}{u_{j}}\right)\Psi(u_{1},\cdots,u_{n}), \tag{1.1}\] where \(\Psi(u_{1},\cdots,u_{n})\) is an appropriate matrix element of the intertwiners with the spectral parameters \(u_{i}\) and \(R_{V_{k}V_{\ell}}\) denotes the \(R\) matrix on the tensor product \(V_{k}\otimes V_{\ell}\) of highest weight representations. \(D_{j}\) is a diagonal operator acting on \(V_{j}\). 
In the case of \(U_{\mathsf{v}}(A_{1}^{(1)})\) with level \(k\), the shift parameter is fixed to be \(p=\mathsf{v}^{2(k+2)}=q^{k+2}\). The case \(n=2\) is of our interest in this paper. By the scaling of the spectral parameters, which fixes the origin and the infinity, we can consider the two point function \(\Phi(u):=\Psi(1,u)\), for which the \(q\)-KZ equation is \[\Phi(pu)=R_{V_{2}V_{1}}(pu)D_{2}\Phi(u). \tag{1.2}\] For the \(q\)-difference equations including (1.2), various important results using the Jackson integrals have been developed based on the fundamental theory by K. Aomoto. In particular, the Jackson integral of symmetric Selberg type relevant for the equation (1.2) was thoroughly studied by Aomoto, Kato, Matsuo, Mimachi and Ito in the early 1990s (see e.g. [8],[30] and references therein). Around the same time, the relation to Bethe vectors was found by Reshetikhin [48] and the method was further developed by Tarasov, Varchenko, etc. These works play a crucial role in our study. In [10] we show that the non-stationary difference equation proposed in [51] is transformed into a quantization of the discrete Painleve VI equation. The original form of the non-stationary difference equation is \[{\sf U}(t\Lambda,x)={\mathcal{A}}_{1}(\Lambda,x)\cdot{\mathcal{B}}\cdot{\mathcal{A}}_{2}(\Lambda,x)\cdot{\mathcal{B}}\cdot{\mathcal{A}}_{3}(\Lambda,x){\sf U}(\Lambda,\frac{x}{tqQ}), \tag{1.3}\] where \({\mathcal{B}}\) is the \(q\)-Borel transformation on a formal Laurent series in \(x\); \[{\mathcal{B}}(\sum_{n}c_{n}x^{n})=\sum_{n}q^{n(n+1)/2}c_{n}x^{n}, \tag{1.4}\] and \({\mathcal{A}}_{i}(\Lambda,x)\) are multiplications by infinite products \((x;q)_{\infty}\) and \((x;q,t)_{\infty}\) defined by (1.11) and (1.12), respectively. (See [51] and [10] for explicit forms of \({\mathcal{A}}_{i}(\Lambda,x)\).) 
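For concreteness, the \(q\)-Borel transformation (1.4) acts diagonally on monomials, \(x^{n}\mapsto q^{n(n+1)/2}x^{n}\) (note \(n(n+1)/2\in\mathbb{Z}\) also for negative \(n\)). A minimal sketch of ours, representing a truncated Laurent series as a dict of coefficients:

```python
def q_borel(series, q):
    # q-Borel transformation (1.4): c_n x^n -> q^{n(n+1)/2} c_n x^n,
    # a truncated Laurent series represented as {exponent: coefficient}
    return {n: q**((n * (n + 1)) // 2) * c for n, c in series.items()}

q = 0.3
out = q_borel({-1: 5.0, 0: 1.0, 1: 2.0, 2: 3.0}, q)
assert out[-1] == 5.0 and out[0] == 1.0        # n(n+1)/2 = 0 for n = -1, 0
assert abs(out[1] - 2.0 * q) < 1e-12
assert abs(out[2] - 3.0 * q**3) < 1e-12
```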
In appendix A, we explain a relation between the operator \({\mathcal{B}}\) and the refined Chern-Simons theory [2], which gave the original motivation for introducing \({\mathcal{B}}\) in [51]. The non-stationary difference equation (1.3) was found by looking at the instanton partition function of five dimensional \(SU(2)\) gauge theory with a surface defect. The parameters \((q,t)\) are equivariant parameters of the torus action \((z_{1},z_{2})\longrightarrow(qz_{1},t^{-1}z_{2})\) on \({\mathbb{C}}^{2}\) and the defect is at \(z_{2}=0\), which is the set of fixed points of the \(t^{-1}\)-action (we can regard \(q\) and \(t\) as the \(\Omega\)-background along the defect and transverse to the defect, respectively). The function \({\sf U}(\Lambda,x)\) is supposed to be a formal double power series in \(x\) and \(\Lambda/x\); \[{\sf U}(\Lambda,x)=\sum_{k,\ell\geq 0}c_{k,\ell}x^{k}(\Lambda/x)^{\ell}, \tag{1.5}\] where the coefficients \(c_{k,\ell}\) are functions of the \(SU(2)\) Coulomb parameter \(Q\), mass parameters \(T_{i}\) and the equivariant parameters \((q,t)\). Physically, \(\Lambda\) is the instanton expansion parameter and \(x\) is identified with the position of the degenerate field \(\phi_{2,1}(x)\) that corresponds to the defect according to the AGT correspondence [5],[4]. In the case of the Virasoro algebra, which corresponds to four dimensional \({\mathcal{N}}=2\) supersymmetric gauge theories, the degenerate field \(\phi_{2,1}(x)\) has a null state at level two, which implies a differential equation of the second order in \(x\), namely the BPZ equation. The non-stationary difference equation (1.3) may be regarded as a \(q\)-deformed version of the BPZ equation. However, if we try to derive it, we encounter a problem, since the natural coproduct is not known for the deformed Virasoro algebra. Hence we cannot define intertwiners and consequently cannot use the methods of representation theory. 
We can overcome such a difficulty by noticing the following fact. Namely, the instanton partition function with a surface defect allows another description in terms of the affine Laumon space [35],[36],[6]. In this case the instanton partition function gives a conformal block of the affine Kac-Moody algebra, and its \(q\)-deformation, the quantum affine algebra, is a Hopf algebra! In [10] (and implicitly in [51]), it was conjectured that the instanton partition function coming from the equivariant character of the affine Laumon space solves the gauge transformed version of (1.3); see (1.6) below. More concretely, in [10] we have shown that the non-stationary difference equation (1.3) can be factorized into a coupled system, which is in accord with the formula for the Hamiltonian of the discrete Painleve VI equation in terms of the extended affine Weyl group of \(D_{5}^{(1)}\). Then the conjecture is that a pair of the affine Laumon partition functions whose parameters are related by a simple transformation provides a solution to the coupled system; see Conjecture 6.4 in [10]. In this paper, we prove this conjecture by using a connection through the \(q\)-KZ equation. In order to see the relation to the \(q\)-KZ equation, we first make a gauge transformation of the "wave function" \({\sf U}(\Lambda,x)\to\Psi(\Lambda,x)\) to recast (1.3) into the following form [10]; \[T_{t,\Lambda}\cdot\Psi(\Lambda,x)={\mathcal{H}}_{\rm S}T_{qtQ,x}^{-1}\cdot\Psi(\Lambda,x). \tag{1.6}\] An explicit form of the Hamiltonian \({\mathcal{H}}_{\rm S}\) is given by (2.1). \({\mathcal{H}}_{\rm S}\) involves "renormalized" mass parameters \(d_{i},\ 1\leq i\leq 4\) which are monomials in the original mass parameters \(T_{i}\), \(Q\) and \((q,t)\). It is amusing that we can eliminate the double infinite product \((x;q,t)_{\infty}\) appearing in \({\mathcal{A}}_{i}(\Lambda,x)\) from \({\mathcal{H}}_{\rm S}\). 
It is also remarkable that the parameters \(Q\) and \(t\) appear explicitly only through the shift parameters for \(x\) and \(\Lambda\). By tuning two of the mass parameters, say \(d_{2}=q^{-m}\) and \(d_{3}=q^{-n}\), the Hamiltonian acts on the \(n+m+1\) dimensional space of Laurent polynomials with a basis \(x^{j}\) (\(-n\leq j\leq m\)). Then, the representation matrix of \({\mathcal{H}}_{\rm S}\) agrees with the \(R\)-matrix of \(U_{\sf v}(A_{1}^{(1)})\), which allows us to identify (1.6) as a \(q\)-KZ equation. Namely for the matrix \(r_{i,j}\) defined by \[{\mathcal{H}}_{\rm S}x^{i}=\sum_{j=-n}^{m}r_{i,j}(\Lambda)x^{j}\quad(-n\leq i\leq m), \tag{1.7}\] we have; **Theorem 1.1** (Theorem 2.3).: _The matrix \([r_{i,j}(\Lambda)]_{-n\leq i,j\leq m}\) coincides with the \((N+1)\times(N+1)\) block of the \(R\)-matrix for \(U_{\sf v}(A_{1}^{(1)})\) with \(N=m+n\) down spins, where \(\Lambda\), \(\log_{q}d_{1}\) and \(\log_{q}d_{4}\) are related to the spectral parameter and generic highest weights, respectively._ This is one of the fundamental results in this paper. As we mentioned above, the Jackson integral of symmetric Selberg type solves the \(q\)-KZ equation. Recall that the Jackson integral is a pairing of the integration cycle \(\xi\) and the cocycle function \(\varphi(z)\); \[\langle\varphi(z),\xi\rangle=(1-t)^{N}\sum_{\nu\in\mathbb{Z}^{N}}\Big{[}\varphi(z)\Phi(z)\Delta(1,z)\Big{]}_{z_{i}=\xi_{i}t^{\nu_{i}}}. \tag{1.8}\] In this paper we take the weight function \(\Phi(z)\) which depends on four parameters \((a_{1},a_{2},b_{1},b_{2})\); see Section 3 for details. On the other hand, the instanton partition function \(\mathcal{Z}_{\mathrm{AL}}\) from the equivariant character of the affine Laumon space is defined by (4.5). Hence, if we can show the relation of the partition function \(\mathcal{Z}_{\mathrm{AL}}\) to the Jackson integral (1.8), it gives a proof of the conjecture in our previous paper. 
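As a one-variable toy model of the Jackson sums entering (1.8) (our illustration; the pairing (1.8) itself is an \(N\)-variable bilateral sum), the unilateral Jackson \(q\)-integral \(\int_{0}^{\xi}f(t)\,\mathrm{d}_{q}t=(1-q)\xi\sum_{\nu\geq 0}q^{\nu}f(\xi q^{\nu})\) degenerates to the ordinary integral as \(q\to 1\):

```python
def jackson_integral(f, xi, q, nterms=2000):
    # unilateral Jackson q-integral of f over [0, xi]:
    # (1 - q) * xi * sum_{nu >= 0} q^nu f(xi q^nu), truncated at nterms
    return (1 - q) * xi * sum(q**nu * f(xi * q**nu) for nu in range(nterms))

# for f(t) = t^2 on [0, 1] the exact value is (1 - q) / (1 - q^3)
q = 0.9
val = jackson_integral(lambda t: t**2, 1.0, q)
assert abs(val - (1 - q) / (1 - q**3)) < 1e-12
# as q -> 1 it approaches the Riemann integral 1/3
assert abs(jackson_integral(lambda t: t**2, 1.0, 0.9999, nterms=200000) - 1 / 3) < 1e-3
```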
Under the same tuning conditions for mass parameters as above, the partition function \(\mathcal{Z}_{\mathrm{AL}}\) becomes a Laurent polynomial in \(x\) (see Fig.1 and Appendix E). By carefully examining the combinatorial structure of the orbifolded Nekrasov factor in (4.5) (see Appendix F) we can identify the coefficients of \(\mathcal{Z}_{\mathrm{AL}}\) in the expansion parameter \(x\) as the Jackson integral (1.8) with an appropriate choice of the cycle \(\xi\). In this way the affine Laumon partition function \(\mathcal{Z}_{\mathrm{AL}}\) agrees with the Jackson integral for the \(q\)-KZ equation term by term, if we choose a suitable basis of (co-)cycles. More concretely, we have **Theorem 1.2** (Theorem 4.1).: _Under the identification of the parameters_ \[d_{1}=\frac{1}{q^{m-1}a_{1}b_{1}},\quad d_{2}=q^{-m},\quad d_{3}=q^{-n},\quad d_{4}=\frac{1}{q^{n-1}a_{2}b_{2}},\quad Q=\frac{q^{m-n}a_{1}}{ta_{2}}, \tag{1.9}\] _the affine Laumon partition function \(\mathcal{Z}_{\mathrm{AL}}\) coincides with the Jackson integral (1.8) with an appropriate choice of \(\xi\) and a suitable basis of \(\varphi(z)\)._

Figure 1. After the mass truncation \(d_{2}=q^{-m},d_{3}=q^{-n}\), the double series \(U(\Lambda,x)\) becomes a Laurent polynomial in \(x\), while it is still a formal power series in \(\Lambda\). The circles represent the positions of allowed terms in the \((x,\Lambda)\)-lattice.

A definition of the basis and the choice of \(\xi\) are the subjects of section 3. In particular, we show the combinatorial structure of \(\mathcal{Z}_{\mathrm{AL}}\) as a sum over the pair of Young diagrams is in a nice correspondence with the cohomology basis called Matsuo basis in the literature. By combining the above two theorems under the specialization \(d_{2}=q^{-m},d_{3}=q^{-n}\), we can show that the affine Laumon partition function \(\mathcal{Z}_{\mathrm{AL}}\) is a solution to the gauge transformed version (1.6) of Shakirov's equation. 
Now we invoke the fact that equation (1.3) determines the expansion coefficients \(c_{k,\ell}\) in (1.5) uniquely up to an overall normalization, which we fix by \(c_{0,0}=1\). Moreover, it is easy to see that each \(c_{k,\ell}\) is a polynomial in the mass parameters \(d_{i}\) with coefficients in the field of rational functions of \(q,t\) and \(Q\). Hence, if \(\mathcal{Z}_{\mathrm{AL}}\) is a solution under the specialization \(d_{2}=q^{-m},d_{3}=q^{-n}\) for any \(m,n\in\mathbb{Z}_{\geq 0}\), it is valid for generic values of \(d_{2}\) and \(d_{3}\). **Theorem 1.3**.: _The affine Laumon partition function \(\mathcal{Z}_{\mathrm{AL}}\) is a solution to the non-stationary difference equation (1.6)._ In appendix H, we recast (1.6) into a coupled system, which is essentially the same as the coupled system derived in [10] based on the structure of the Hamiltonian of the \(qq\)-Painleve VI equation. Hence our main Theorem 1.3 also proves Conjecture 6.4 in [10]. Recall that the original conjecture in [51] was for the degenerate five point \(q\)-Virasoro conformal block, which is realized as the Nekrasov partition function of \(U(2)\times U(2)\) linear quiver gauge theory with the Higgsing condition on spectral parameters. One can express the partition function in question in terms of the topological vertex (intertwiners of the quantum toroidal \(\mathfrak{gl}_{1}\)-algebra) and confirm that it is similarly related to the Jackson integral of the same type as in the present paper. This is regarded as a \(q\)-deformed analogue of the correspondence between the current block of \(\mathfrak{sl}_{2}\) and the degenerate conformal block. Details will be reported elsewhere. It is known that the Jackson integral relevant for the \(q\)-KZ equation actually satisfies two kinds of difference equations (see e.g. [30]). For the case at hand, the second difference equation involves the shift of a "renormalized" Coulomb modulus \((qtQ)^{-1}\) of \(SU(2)\) gauge theory. 
The two difference equations are exactly the same under the exchange \[\Lambda\longleftrightarrow\frac{q^{2}}{d_{1}d_{2}}(qtQ)^{-1}. \tag{1.10}\] Recall that \((qtQ)^{-1}\) is the shift parameter of \(x\) in (1.3). In the geometric engineering of \(SU(2)\) gauge theory by local \(\mathbb{F}_{0}=\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(\Lambda\) and \(Q\) are identified with Kahler parameters of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). Hence this is a manifestation of the base-fiber duality in topological strings. We will also work out the four dimensional limit of the gauge transformed equation (1.6). Then we can check that it agrees with the KZ equation, which has been derived from the regularity of the fractional fundamental \(qq\)-character of type \(A_{1}\)[47]. Since the KZ equation can be regarded as a quantization of the Schlesinger system [49], the relation of the conformal block of the current algebra to the quantum monodromy preserving deformation has long been a folklore belief. The equation (1.6) is a quite interesting object as an explicit example realizing such a correspondence. In recent papers [54],[20], the quantum cohomology and (quantum) monodromy preserving deformation have also been discussed. It is very likely that the \(q\)-Borel transformation \(\mathcal{B}\) appears in some representation of double affine Hecke algebras (DAHA) on polynomials in \(x_{1},x_{2},\ldots\)[51]. From such a speculation, it is interesting that in connection with the quantum \(Q\) system [21],[22], an operator very close to \(\mathcal{B}\) is employed as Cherednik's Gaussian operator representing the Dehn twist on polynomial representations of spherical DAHA. The present paper is organized as follows; In section 2 we introduce a truncation of the non-stationary difference equation (1.3) by tuning two of the four mass parameters. We will call it mass truncation. After the mass truncation the Hamiltonian of (1.3) acts on the space of Laurent polynomials in \(x\). 
We find that the matrix elements \(r_{i,j}(\Lambda)\) satisfy the same relation as the quantum \(6j\)-symbol derived by Rosengren [50]. In section 3, following the work by Ito [30], we first review the \(q\)-KZ equations satisfied by the Jackson integral (1.8), which is a pairing of an integration cycle \(\xi\) and a cocycle function \(\varphi(z)\). We also study a special integration cycle \(\xi\) which reduces the bilateral sum over the lattice in the definition of the Jackson integral to a sum over a positive cone of the lattice. This truncation (which we call lattice truncation) has been considered in the literature (see e.g. [31], [32] and references therein). The relevant cycles are called the characteristic cycles or the \(\alpha\)-stable or \(\alpha\)-unstable cycles by Aomoto, and play crucial roles in the study of asymptotic behavior. The lattice truncation is crucial when we make an identification of the Jackson integral with the Nekrasov partition function, which is expressed as a sum over a pair of Young diagrams. In section 4, we consider the Nekrasov partition function with a surface defect, which can be obtained from the equivariant character at fixed points of the toric action on the affine Laumon space. The partition function is expanded in \(x\) and \(\Lambda/x\) and we show that the expansion in \(x\) is in accord with a basis, called the Matsuo basis, of the cocycle functions in the Jackson integral representation of solutions to the \(q\)-KZ equation. In section 5, we compute the four dimensional limit of the \(R\)-matrix that appears in the truncation of the non-stationary difference equation and show that the limit correctly reproduces the KZ equation for the conformal block of the \(\mathfrak{sl}_{2}\) current algebra. There are many appendices in this paper; see the table of contents below. Some helpful clarifications and technical details are presented in these appendices. 
We will use the following notations throughout the paper [28]: \[(x;q)_{\infty}=\prod_{n=0}^{\infty}(1-xq^{n})=\exp\left(-\sum_{n=1}^{\infty}\frac{1}{n}\frac{1}{1-q^{n}}x^{n}\right),\qquad|x|<1,\quad|q|<1, \tag{1.11}\] \[(x;q,t)_{\infty}=\prod_{n,m=0}^{\infty}(1-xq^{n}t^{m})=\exp\left(-\sum_{n=1}^{\infty}\frac{1}{n}\frac{1}{(1-q^{n})(1-t^{n})}x^{n}\right),\quad|x|<1,\quad|q|,|t|<1. \tag{1.12}\] The \(q\)-shifted factorial is defined by \[(a;q)_{n}=\frac{(a;q)_{\infty}}{(aq^{n};q)_{\infty}}=\prod_{i=0}^{n-1}(1-aq^{i}). \tag{1.13}\] The following formulas are useful: \[(a)_{n}=(-a)^{n}q^{n(n-1)/2}(\frac{1}{aq^{n-1}})_{n},\qquad(a)_{k+\ell}=(a)_{k}(aq^{k})_{\ell}. \tag{1.14}\] The \(q\)-exponential function is \[e_{q}(x):=\sum_{n=0}^{\infty}\frac{x^{n}}{(q;q)_{n}}=\frac{1}{(x;q)_{\infty}}. \tag{1.15}\] Finally the \(q\)-binomial coefficient is defined by \[\left[\begin{array}{c}n\\ k\end{array}\right]_{q}:=\frac{(q;q)_{n}}{(q;q)_{k}(q;q)_{n-k}}. \tag{1.16}\]

###### Contents

* 1 Introduction
* 2 Shakirov's equation as a \(q\)-KZ equation with generic spins
* 3 Jackson integral representation of solutions to the \(q\)-KZ equation
* 4 Nekrasov partition function as Jackson integral
* 5 Four dimensional limit and KZ equation
* A \(q\)-Borel transformation and refined Chern-Simons theory
* B Lemma on the \(q\)-Borel transformation
* C List of various \(R\) matrices and their characterization
* D Proof of \(RD_{2}A=ARD_{2}\)
* E Truncation by tuning mass parameters
* F Affine Laumon space and orbifolded Nekrasov factor
* G (Anti-)symmetrization in a factorized form
* H Shakirov's equation as a coupled system

## 2. 
Shakirov's equation as a \(q\)-KZ equation with generic spins In [10] we show that by an appropriate gauge transformation we can recast the original equation (1.3) to the form \(\mathcal{H}_{\mathrm{S}}T_{qtQ,x}^{-1}T_{t,\Lambda}^{-1}\cdot\Psi(\Lambda,x)=\Psi(\Lambda,x)\) with the Hamiltonian \[\mathcal{H}_{\mathrm{S}}= \frac{1}{\varphi(qx)\varphi(\Lambda/x)}\cdot\mathcal{B}\cdot \frac{\varphi(\Lambda)\varphi(q^{-1}d_{1}d_{2}d_{3}d_{4}\Lambda)}{\varphi(-d_{1}x)\varphi(-d_{2}x)\varphi(-d_{3}\Lambda/x)\varphi(-d_{4}\Lambda/x)}\] \[\cdot\mathcal{B}\cdot\frac{1}{\varphi(q^{-1}d_{1}d_{2}x)\varphi(d_{3}d_{4}\Lambda/x)}, \tag{2.1}\] where \(\varphi(z):=(z;q)_{\infty}\). We have made the following change of variables4; Footnote 4: In (2.1) the primes on \(x\) and \(\Lambda\) are dropped for simplicity. \[\Lambda^{\prime}=tT_{2}T_{3}\Lambda,\qquad x^{\prime}=T_{2}q^{-1/2}t^{1/2}x, \tag{2.2}\] and of the mass parameters5; Footnote 5: We made an exchange \(d_{1}\leftrightarrow d_{2}\) in eq.(5.7) of [10]. We also exchanged \(T_{1}\) and \(T_{2}\), which corresponds to the action of a Weyl reflection on the Painleve side, namely (2.1) is derived from eq.(6.1) of [10] by the change of variables (2.2) and (2.3). \[d_{1}=T_{1}q^{1/2}t^{-1/2},\quad d_{2}=T_{2}^{-1}q^{1/2}t^{-1/2}Q^{-1},\quad d_{3}=T_{3}^{-1}q^{1/2}t^{-1/2},\quad d_{4}=T_{4}q^{1/2}t^{1/2}Q. \tag{2.3}\] The Hamiltonian \(\mathcal{H}_{\mathrm{S}}\) is manifestly symmetric under the exchange \(d_{1}\leftrightarrow d_{2}\), \(d_{3}\leftrightarrow d_{4}\). **Lemma 2.1**.: _For \(m\in\mathbb{Z}_{\geq 0}\), we have_ \[\mathcal{H}_{\mathrm{S}}\cdot x^{m}\mathbb{C}[[\frac{1}{x}]]\subset x^{m}\mathbb{C}[[\frac{1}{x}]], \text{for }d_{1}=q^{-m}\text{ or }d_{2}=q^{-m},\] \[\mathcal{H}_{\mathrm{S}}\cdot x^{-m}\mathbb{C}[[x]]\subset x^{-m}\mathbb{C}[[x]], \text{for }d_{3}=q^{-m}\text{ or }d_{4}=q^{-m}. 
\tag{2.4}\] Proof.: Due to the relation (eq.(4.24) in [10]) \[\mathcal{H}_{\mathrm{S}}(d_{1},d_{2},d_{3},d_{4},\Lambda,x)\ x=qx^{2}\ \mathcal{H}_{\mathrm{S}}(qd_{1},qd_{2},d_{3}/q,d_{4}/q,\Lambda,x)\ p^{2},\] the statements inductively reduce to the case \(m=0\). Then, for \(m=0\), the relations (2.4) follow from the identities (\(k\geq 0\)) \[\frac{1}{\varphi(-aqx)}\mathcal{B}\frac{1}{\varphi(ax)}\cdot x^{-k}=x^{-k}\prod_{j=0}^{k-1}(q^{j}+ax),\quad\frac{1}{\varphi(-a/x)}\mathcal{B}\frac{1}{\varphi(a/x)}\cdot x^{k}=x^{k}\prod_{j=1}^{k}(q^{j}+\frac{a}{x}),\] which are a special case of the formulae (B.1) in Appendix B. Due to Lemma 2.1, we have the following action of \(\mathcal{H}_{\mathrm{S}}\) with finite support \[\mathcal{H}_{\mathrm{S}}x^{i}=\sum_{j=-n}^{m}r_{i,j}(\Lambda)x^{j}\quad(-n\leq i\leq m), \tag{2.5}\] when the parameters are specialized as \(d_{2}=q^{-m}\), \(d_{3}=q^{-n}\) (\(m,n\in\mathbb{Z}_{\geq 0}\)) 6. The coefficient \(r_{i,j}(\Lambda)\) is a rational function in the spectral parameter \(\Lambda\). It also depends on the remaining mass parameters \(d_{1}\) and \(d_{4}\), which are to be identified with the highest weights of the in-state and the out-state when we consider a matrix element (two point function) of the intertwiners. Footnote 6: Due to the symmetry \(d_{1}\leftrightarrow d_{2}\), \(d_{3}\leftrightarrow d_{4}\), the cases \(d_{1}=q^{-m}\), \(d_{4}=q^{-n}\) etc. admit a similar truncation. 
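The two identities above can be checked numerically. The sketch below is not from the paper; it assumes the termwise \(q\)-Borel action \(\mathcal{B}x^{e}=q^{e(e+1)/2}x^{e}\) (consistent with the relations (2.9) below) and uses arbitrary parameter values with \(|q|<1\).

```python
def qpoch_inf(z, q, nfac=200):
    """(z; q)_infinity as a truncated numerical product (requires |q| < 1)."""
    p = 1.0
    for i in range(nfac):
        p *= 1.0 - z * q ** i
    return p

def lhs_first(a, q, x, k, nmax=60):
    """(1/phi(-aqx)) B (1/phi(ax)) . x^{-k}, applying B termwise to the
    expansion 1/phi(ax) x^{-k} = sum_n a^n x^{n-k} / (q; q)_n."""
    s, qfac = 0.0, 1.0
    for n in range(nmax):
        if n > 0:
            qfac *= 1.0 - q ** n
        e = n - k
        s += a ** n * q ** (e * (e + 1) // 2) * x ** e / qfac
    return s / qpoch_inf(-a * q * x, q)

def rhs_first(a, q, x, k):
    """x^{-k} prod_{j=0}^{k-1} (q^j + a x)."""
    r = x ** (-k)
    for j in range(k):
        r *= q ** j + a * x
    return r

def lhs_second(a, q, x, k, nmax=60):
    """(1/phi(-a/x)) B (1/phi(a/x)) . x^{k}, same termwise B action."""
    s, qfac = 0.0, 1.0
    for n in range(nmax):
        if n > 0:
            qfac *= 1.0 - q ** n
        e = k - n
        s += a ** n * q ** (e * (e + 1) // 2) * x ** e / qfac
    return s / qpoch_inf(-a / x, q)

def rhs_second(a, q, x, k):
    """x^{k} prod_{j=1}^{k} (q^j + a/x)."""
    r = x ** k
    for j in range(1, k + 1):
        r *= q ** j + a / x
    return r

a, q, x = 0.5, 0.3, 0.4
for k in range(5):
    assert abs(lhs_first(a, q, x, k) - rhs_first(a, q, x, k)) < 1e-9
    assert abs(lhs_second(a, q, x, k) - rhs_second(a, q, x, k)) < 1e-9
```

For \(k=0\) both sides reduce to \(1\), which is the \(q\)-binomial theorem \(\sum_{n}q^{n(n-1)/2}z^{n}/(q;q)_{n}=(-z;q)_{\infty}\) in disguise.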
**Example 2.2**.: _For \((m,n)=(1,0)\), \((2,0)\), we have_ \[\left[\begin{array}{cc}r_{0,0}&r_{0,1}\\ r_{1,0}&r_{1,1}\end{array}\right]=\left[\begin{array}{ccc}\frac{(\frac{d_{1} \Lambda}{q})_{1}}{(\frac{\Lambda}{q})_{1}}&-\frac{(d_{1})_{1}}{(\frac{\Lambda} {q})_{1}}\\ -\frac{\Lambda q(\frac{d_{4}}{q})_{1}}{(\frac{\Lambda}{q})_{1}}&\frac{q^{2}( \frac{d_{4}\Lambda}{q^{2}})_{1}}{(\frac{\Lambda}{q})_{1}}\\ \end{array}\right], \tag{2.6}\] _and_ \[\left[\begin{array}{ccc}r_{0,0}&r_{0,1}&r_{0,2}\\ r_{1,0}&r_{1,1}&r_{1,2}\\ r_{2,0}&r_{2,1}&r_{2,2}\end{array}\right]=\left[\begin{array}{ccc}\frac{( \frac{d_{1}\Lambda}{q^{2}})_{1}(\frac{d_{1}\Lambda}{q})_{1}}{(\frac{\Lambda} {q^{2}})_{1}(\frac{\Lambda}{q})_{1}}&-\frac{(1+q)(d_{1})_{1}(\frac{d_{1} \Lambda}{q})_{1}}{q(\frac{\Lambda}{q^{2}})_{1}(\frac{\Lambda}{q})_{1}}&\frac{( d_{1})_{1}(d_{1}q)_{1}}{q(\frac{\Lambda}{q^{2}})_{1}(\frac{\Lambda}{q})_{1}}\\ -\frac{\Lambda q(\frac{d_{4}}{q})_{1}(\frac{d_{1}\Lambda}{q})_{1}}{(\frac{ \Lambda}{q^{2}})_{1}(\frac{\Lambda}{q})_{1}}&\frac{q^{2}(\frac{d_{1}\Lambda} {q})_{1}(\frac{d_{4}\Lambda}{q^{2}})_{1}+\Lambda q(d_{1}q)_{1}(\frac{d_{4}}{q^ {2}})_{1}}{(\frac{\Lambda}{q^{2}})_{1}(\frac{\Lambda}{q})_{1}}&-\frac{q^{2}(d _{1}q)_{1}(\frac{d_{4}\Lambda}{q^{3}})_{1}}{(\frac{\Lambda}{q^{2}})_{1}( \frac{\Lambda}{q})_{1}}\\ \frac{\Lambda^{2}q^{3}(\frac{d_{4}}{q^{2}})_{1}(\frac{d_{4}}{q})_{1}}{(\frac{ \Lambda}{q^{2}})_{1}(\frac{\Lambda}{q})_{1}}&-\frac{\Lambda q^{4}(1+q)(\frac {d_{4}}{q^{2}})_{1}(\frac{d_{4}\Lambda}{q^{3}})_{1}}{(\frac{\Lambda}{q^{2}})_ {1}(\frac{\Lambda}{q})_{1}}&\frac{q^{6}(\frac{d_{4}\Lambda}{q^{3}})_{1}(\frac {d_{4}\Lambda}{q^{3}})_{1}}{(\frac{\Lambda}{q^{2}})_{1}(\frac{\Lambda}{q})_{1 }}\end{array}\right], \tag{2.7}\] _where \(r_{i,j}=r_{i,j}(\Lambda)\), \((x)_{1}=(1-x)\)._ **Theorem 2.3**.: _The matrix \([r_{i,j}(\Lambda)]_{-n\leq i,j\leq m}\) coincides with the \((N+1)\times(N+1)\) block of the \(R\)-matrix for \(U_{\mathsf{v}}(A_{1}^{(1)})\) with \(N=m+n\) down spins, 
where \(\Lambda\), \(\log_{q}d_{1}\) and \(\log_{q}d_{4}\) are related to the spectral parameter and generic highest weights, respectively._ Proof.: The defining relation (2.5) of \(r_{i,j}(\Lambda)\) can be written as \[\begin{array}{l}\frac{\varphi(\Lambda)\varphi(q^{-1}d_{1}d_{2}d_{3}d_{4}\Lambda)}{\varphi(-d_{1}x)\varphi(-d_{2}x)\varphi(-d_{3}\Lambda/x)\varphi(-d_{4}\Lambda/x)}\mathcal{B}\frac{1}{\varphi(q^{-1}d_{1}d_{2}x)\varphi(d_{3}d_{4}\Lambda/x)}x^{i}\\ =\mathcal{B}^{-1}\varphi(qx)\varphi(\frac{\Lambda}{x})\sum_{j=-n}^{m}r_{i,j}(\Lambda)x^{j}.\end{array} \tag{2.8}\] Using the relations (see Appendix B) \[\begin{array}{l}\mathcal{B}\frac{1}{\varphi(q^{-1}d_{1}d_{2}x)\varphi(d_{3}d_{4}\frac{\Lambda}{x})}x^{i}=\frac{\varphi(-q^{i}d_{1}d_{2}x)\varphi(-q^{-i}d_{3}d_{4}\frac{\Lambda}{x})}{\varphi(q^{-1}d_{1}d_{2}d_{3}d_{4}\Lambda)}q^{i(i+1)/2}x^{i},\\ \mathcal{B}^{-1}\varphi(qx)\varphi(\frac{\Lambda}{x})x^{j}=q^{-j(j+1)/2}x^{j}\frac{\varphi(\Lambda)}{\varphi(-q^{-j}x)\varphi(-q^{j}\frac{\Lambda}{x})},\end{array} \tag{2.9}\] we can rewrite (2.8) as \[\frac{\varphi(-q^{i}d_{1}d_{2}x)\varphi(-q^{-i}d_{3}d_{4}\frac{\Lambda}{x})}{\varphi(-d_{3}\Lambda/x)\varphi(-d_{4}\Lambda/x)}q^{i(i+1)/2}x^{i}=\sum_{j=-n}^{m}r_{i,j}(\Lambda)q^{-j(j+1)/2}x^{j}\frac{\varphi(-d_{1}x)\varphi(-d_{2}x)}{\varphi(-q^{-j}x)\varphi(-q^{j}\frac{\Lambda}{x})}. \tag{2.10}\] Since \(d_{2}=q^{-m}\), \(d_{3}=q^{-n}\)\((m,n\in\mathbb{Z}_{\geq 0})\), this equation can be simplified to \[q^{\frac{1}{2}i(i+1)}x^{i}(-d_{1}q^{i-m}x)_{m-i}(-d_{4}q^{-i-n}\frac{\Lambda}{x})_{i+n}\] \[=\sum_{j=-n}^{m}r_{i,j}(\Lambda)q^{-\frac{1}{2}j(j+1)}x^{j}(-q^{-m}x)_{m-j}(-q^{-n}\frac{\Lambda}{x})_{j+n}, \tag{2.11}\] where \((a)_{n}=(a;q)_{n}=\prod_{i=0}^{n-1}(1-aq^{i})\). This relation is essentially the same as the equation for the \(q\)-\(6j\) symbols [50]. Hence the coefficients \(r_{i,j}(\Lambda)\) are given by the \(q\)-\(6j\) symbol (i.e. the \(R\)-matrix) for \(U_{\mathsf{v}}(A_{1}^{(1)})\). 
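For \((m,n)=(1,0)\) this can be checked directly: substituting the \(2\times 2\) matrix of Example 2.2 into (2.11) gives two polynomial identities in \(x\). A minimal numerical sketch (not from the paper; parameter values arbitrary):

```python
def qp(a, q, n):
    """Finite q-shifted factorial (a; q)_n = prod_{i=0}^{n-1} (1 - a q^i)."""
    p = 1.0
    for i in range(n):
        p *= 1.0 - a * q ** i
    return p

# (m, n) = (1, 0); arbitrary generic parameter values
q, d1, d4, L = 2.0, 3.0, 5.0, 7.0
m, n = 1, 0

# entries of the 2x2 matrix (2.6), as functions of Lambda = L
r = {(0, 0): (1 - d1 * L / q) / (1 - L / q),
     (0, 1): -(1 - d1) / (1 - L / q),
     (1, 0): -L * q * (1 - d4 / q) / (1 - L / q),
     (1, 1): q ** 2 * (1 - d4 * L / q ** 2) / (1 - L / q)}

# relation (2.11), checked at several points x
for x in (0.7, 1.3, 2.1):
    for i in range(-n, m + 1):
        lhs = (q ** (i * (i + 1) / 2) * x ** i
               * qp(-d1 * q ** (i - m) * x, q, m - i)
               * qp(-d4 * q ** (-i - n) * L / x, q, i + n))
        rhs = sum(r[(i, j)] * q ** (-j * (j + 1) / 2) * x ** j
                  * qp(-q ** (-m) * x, q, m - j)
                  * qp(-q ** (-n) * L / x, q, j + n)
                  for j in range(-n, m + 1))
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

Since both sides are polynomials of degree \(m+n=1\) in \(x\) (after clearing \(x^{-n}\)), agreement at two generic points already implies the identity.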
**Remark 2.4**.: _Some explicit formulae for \(U_{\mathsf{v}}(A_{1}^{(1)})\)\(R\)-matrix are known [40],[14]. One of such expressions is related to the hypergeometric series \({}_{4}\phi_{3}\);_ \[R_{i,j}^{\mathrm{HG}}=\beta^{-j}\frac{(q)_{N}(\frac{\alpha}{z})_{N-i}(\frac{1} {\beta})_{N-j}(\frac{\beta}{z})_{j}}{(q)_{j}(q)_{N-j}(\frac{1}{z})_{N}(\frac{ 1}{\beta})_{N-i}}\sum_{k=0}^{j}\frac{(q^{-j})_{k}(q^{i-N})_{k}(q^{1-N}z)_{k}( \frac{z}{\alpha\beta})_{k}}{(q)_{k}(q^{-N})_{k}(\frac{q^{1+i-N}z}{\alpha})_{k} (\frac{q^{1-j}z}{\beta})_{k}}q^{k}, \tag{2.12}\] _where \((a)_{n}=(a;q)_{n}\)7. For \(d_{2}=q^{-m},d_{3}=q^{-n}\) and \(i,j\in[-n,m]\), one can show (see SS2.1);_ Footnote 7: The parameter \(q\) in [40], [14] is denoted by \(\mathsf{v}=q^{\frac{1}{2}}\) in this paper. \[r_{i,j}(\Lambda)=d_{1}^{m-i}q^{(m+1)i}R_{i+n,j+n}^{\mathrm{HG}}\Big{|}_{\{N=m+ n,z=\frac{\Lambda}{q},\alpha=\frac{q^{n}}{d_{1}},\beta=\frac{q^{m}}{d_{4}}\}}, \tag{2.13}\] _hence, the matrices \(\{r_{i,j}(\Lambda)\}_{i,j\in[-n,m]}\) with fixed \(N=m+n\) give essentially the same \((N+1)\times(N+1)\)\(R\)-matrix._ The above result implies that the equation \[\Psi(\Lambda,x)=\mathcal{H}_{\mathrm{S}}\Psi(\frac{\Lambda}{t},\frac{x}{qtQ}), \tag{2.14}\] for \(d_{2}=q^{-m},d_{3}=q^{-n}\)\((N=m+n)\) is written as the \(t\)-difference equation with \((N+1)\times(N+1)\) matrix coefficient: \[\psi_{j}(\Lambda)=\sum_{i=-n}^{m}\psi_{i}(\frac{\Lambda}{t})r_{i,j}(\Lambda)( qtQ)^{-i}, \tag{2.15}\] where \(\Psi(\Lambda,x)=\sum_{i=-n}^{m}x^{i}\psi_{i}(\Lambda)\). This is exactly the \(q\)-KZ equation for the four point function of \(U_{\mathsf{v}}(A_{1}^{(1)})\) in \(N\)-down spin sector8. The cases \((m,n)\) with \(m+n=N\) correspond to an equivalent equation of size \(N+1\). In section 4, we will show that the specializations of the affine Laumon partition function corresponding to the condition \(d_{2}=q^{-m},d_{3}=q^{-n}\) give a set of \(N+1\) solutions which form the fundamental solutions. 
In this sense, the Laumon partition function can be viewed as the "universal fundamental solution" for the \(q\)-KZ equation. ### Computation of \(r_{i,j}\) We will explain how to compute \((r_{i,j})_{i,j=-n}^{m}\) from (2.11). For simplicity we will use the abbreviation \((a)_{n}=(a;q)_{n}\) in the following. By using the formula \((a)_{k}=(-a)^{k}q^{k(k-1)/2}(\frac{1}{aq^{k-1}})_{k}\), the relation (2.11) can be written as \[q^{-in}(\Lambda d_{4})^{i+n}U_{i}(x)=\sum_{j=-n}^{m}r_{i,j}q^{-j}\Lambda^{j+n}V_{j}(x), \tag{2.16}\] where \[U_{i}(x)=(-d_{1}q^{i-m}x)_{m-i}(-\frac{qx}{\Lambda d_{4}})_{i+n},\quad V_{j}(x)=(-q^{-m}x)_{m-j}(-\frac{q^{1-j}x}{\Lambda})_{j+n}. \tag{2.17}\] Since both \(\{U_{i}(x)\}_{i=-n}^{m}\) and \(\{V_{j}(x)\}_{j=-n}^{m}\) are bases of the space of polynomials of degree \(m+n\), our task is to compute the transition matrix between these two bases. For this purpose it is convenient to introduce an intermediate basis \(\{W_{k}(x)\}_{k=-n}^{m}\) defined by9 Footnote 9: Another choice \(\tilde{W}_{k}(x)=U_{k}(x)\Big{|}_{d_{4}\mapsto\frac{q^{m+1}}{\Lambda}}\) also works, which gives a different decomposition of the same matrix \((r_{i,j})\). \[W_{k}(x)=U_{k}(x)\Big{|}_{d_{1}\mapsto\frac{q^{n+1}}{\Lambda}}=(-\frac{q^{n+1+k-m}}{\Lambda}x)_{m-k}(-\frac{qx}{\Lambda d_{4}})_{k+n}. 
\tag{2.18}\] The polynomial \(W_{k}(x)\) is defined so that the zeros of \(U_{i}(x),W_{k}(x),V_{j}(x)\) overlap with each other as follows: \[\begin{split} U_{i}(x):&-x=\underbrace{\frac{q}{d_{1}},\ldots,\frac{q}{d_{1}}q^{m-i-1}}_{m-i},\ \underbrace{\frac{d_{4}\Lambda}{q^{n+i}},\ldots,\frac{d_{4}\Lambda}{q}}_{n+i},\\ W_{k}(x):&-x=\underbrace{\frac{\Lambda}{q^{n}},\ldots,\frac{\Lambda}{q^{n}}q^{m-k-1}}_{m-k},\ \underbrace{\frac{d_{4}\Lambda}{q^{n+k}},\ldots,\frac{d_{4}\Lambda}{q}}_{n+k},\\ V_{j}(x):&-x=\underbrace{\frac{\Lambda}{q^{n}},\ldots,\frac{\Lambda}{q^{n}}q^{n+j-1}}_{n+j},\ \underbrace{q^{j+1},\ldots,q^{m}}_{m-j}.\end{split} \tag{2.19}\] Let us compute the transition coefficients \(r_{i,k}^{UW}\) and \(r_{k,j}^{WV}\) defined by \[U_{i}(x)=\sum_{k=-n}^{m}r_{i,k}^{UW}W_{k}(x),\quad W_{k}(x)=\sum_{j=-n}^{m}r_{k,j}^{WV}V_{j}(x). \tag{2.20}\] Due to the cancellation of common factors arising from (2.19), the first relation of (2.20) can be written as \[(-d_{1}q^{i-m}x)_{m-i}=\sum_{k=i}^{m}r_{i,k}^{UW}(-\frac{q^{n+1+k-m}}{\Lambda}x)_{m-k}(-\frac{q^{i+n+1}x}{\Lambda d_{4}})_{k-i}. \tag{2.21}\] Note that we may drop the coefficients of those \(W_{k}(x)\) which do not have common zeros with \(U_{i}(x)\). Similarly from the second equation of (2.20), we have \[(-\frac{qx}{\Lambda d_{4}})_{k+n}=\sum_{j=m-k-n}^{m}r_{k,j}^{WV}(-q^{-m}x)_{m-j}(-\frac{q^{1-j}x}{\Lambda})_{j+n-m+k}. \tag{2.22}\] **Proposition 2.5**.: _The nonzero coefficients of \(r_{i,k}^{UW}\) (\(i\leq k\)) can be determined as_ \[r_{i,k}^{UW}=q^{\frac{1}{2}(i-k)(2m+1+i-k)}(-d_{4})^{k-i}\frac{(q)_{m-i}}{(q)_{k-i}(q)_{m-k}}\frac{(d_{1}\Lambda q^{i-k-n})_{k-i}(d_{1}d_{4}\Lambda q^{-m-n-1})_{m-k}}{(d_{4}q^{-m})_{m-i}}. \tag{2.23}\] _Hence \((r_{i,k}^{UW})\) is an upper triangular matrix with factorized elements. 
Similarly the non-vanishing coefficients of \(r_{k,j}^{WV}\) (\(N-(k+n)\leq j+n\)) are_ \[r_{k,j}^{WV}=q^{\frac{1}{2}(j-k-m-n-1)(j+k-m+n)}(-\Lambda)^{j+k-m+n}\frac{(q)_{k+n}}{(q)_{k+j-m+n}(q)_{m-j}}\frac{(\frac{q^{m+1}}{d_{4}\Lambda})_{k+j-m+n}(\frac{q^{j+1}}{d_{4}})_{m-j}}{(q^{-k-n}\Lambda)_{k+n}}. \tag{2.24}\] _The matrix \((r_{k,j}^{WV})\) is "lower triangular along anti-diagonal" with factorized elements._ Proof.: We follow the method in [50] (section 3). By the change of parameters \[N=m-i,\quad r=m-k,\qquad(a,b,c)=(-d_{1}q^{i-m},-\frac{q^{i+n+1}}{d_{4}\Lambda},-\frac{q^{n+1}}{\Lambda}), \tag{2.25}\] for (2.21) and \[N=k+n,\quad r=j+k-m+n,\qquad(a,b,c)=(-\frac{q}{d_{4}\Lambda},-q^{-m},-\frac{q^{k-m+n+1}}{\Lambda}), \tag{2.26}\] for (2.22), respectively, both equations are written as the binomial expansion \[(ax)_{N}=\sum_{r=0}^{N}C_{N,r}(bx)_{N-r}(cq^{-r}x)_{r}. \tag{2.27}\] This implies that the coefficients \(C_{N,r}\) are given by \[C_{N,r}=q^{r(r+1)/2}\big{(}-\frac{b}{c}\big{)}^{r}\frac{(q)_{N}}{(q)_{r}(q)_{N-r}}\frac{(\frac{a}{b})_{r}(q^{r+1}\frac{a}{c})_{N-r}}{(\frac{bq}{c})_{N}}. \tag{2.28}\] To see this, note that \[\begin{split} 1-q^{N}ax&=A_{r}(1-q^{N-r}bx)+B_{r}(1-q^{-r-1}cx),\\ A_{r}&=\frac{c-aq^{N+1+r}}{c-bq^{N+1}},\quad B_{r}=\frac{b-aq^{r}}{b-cq^{-N-1}},\end{split} \tag{2.29}\] hence, \[\frac{(ax)_{N+1}}{(ax)_{N}}=A_{r}\frac{(bx)_{N+1-r}}{(bx)_{N-r}}+B_{r}\frac{(cq^{-r-1}x)_{r+1}}{(cq^{-r}x)_{r}}. \tag{2.30}\] From this, the coefficients \(C_{N,r}\) are uniquely determined by "Pascal's triangle" \[C_{N+1,r}=A_{r}C_{N,r}+B_{r-1}C_{N,r-1}, \tag{2.31}\] with the boundary conditions \(C_{0,0}=1\), \(C_{N,-1}=C_{N,N+1}=0\). For \(C_{N,r}\) given by (2.28), the boundary conditions are obvious. 
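As an aside, the expansion (2.27) with the coefficients (2.28) is easy to confirm numerically for small \(N\); a minimal sketch (not from the paper; parameter values arbitrary):

```python
def qp(a, q, n):
    """Finite q-shifted factorial (a; q)_n = prod_{i=0}^{n-1} (1 - a q^i)."""
    p = 1.0
    for i in range(n):
        p *= 1.0 - a * q ** i
    return p

def C(N, r, a, b, c, q):
    """Coefficient (2.28) of (bx)_{N-r} (c q^{-r} x)_r in the expansion of (ax)_N."""
    return (q ** (r * (r + 1) / 2) * (-b / c) ** r
            * qp(q, q, N) / (qp(q, q, r) * qp(q, q, N - r))
            * qp(a / b, q, r) * qp(q ** (r + 1) * a / c, q, N - r)
            / qp(b * q / c, q, N))

a, b, c, q = 0.3, 0.7, 1.1, 0.5
for N in range(1, 5):
    for x in (0.4, 0.9):
        lhs = qp(a * x, q, N)
        rhs = sum(C(N, r, a, b, c, q)
                  * qp(b * x, q, N - r) * qp(c * q ** (-r) * x, q, r)
                  for r in range(N + 1))
        assert abs(lhs - rhs) < 1e-9
```

Setting \(x=0\) in (2.27) also gives the useful consistency check \(\sum_{r=0}^{N}C_{N,r}=1\).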
The relation (2.31) follows from \[\frac{C_{N+1,r}}{C_{N,r}}=\frac{(1-q^{N+1})(1-\frac{aq^{N+1}}{c})}{(1-q^{N+1- r})(1-\frac{bq^{N+1}}{c})},\quad\frac{C_{N,r-1}}{C_{N,r}}=-\frac{cq^{-r}(1-q^{r })(1-\frac{aq^{r}}{c})}{b(1-q^{N+1-r})(1-\frac{aq^{r-1}}{b})}, \tag{2.32}\] and the identity \[(1-q^{N+1})(1-\frac{a}{c}q^{N+1})-q^{N+1-r}(1-q^{r})(1-\frac{a}{c}q^{r})-(1-q^ {N+1-r})(1-\frac{a}{c}q^{N+1+r})=0. \tag{2.33}\] Hence (2.28) is proved. Back to the original parameters, we obtain the desired results. Combining (2.23) and (2.24), we have \[\begin{split}& r_{i,j}=q^{-in}(\Lambda d_{4})^{i+n}q^{j}\Lambda^{- j-n}r_{i,j}^{UV}=q^{-in}(\Lambda d_{4})^{i+n}q^{j}\Lambda^{-j-n}\sum_{k=-n}^{m}r_{i,k }^{UW}r_{k,j}^{WV}\\ &=\sum_{k=m-n-j}^{m}q^{-in}q^{j}q^{\frac{1}{2}(i-k)(2m+1+i-k)}q^ {\frac{1}{2}(j-k-m-n-1)(j+k-m+n)}(-1)^{-m+n-i+j}d_{4}^{k+n}\Lambda^{i+n+k-m}\\ &\qquad\frac{(q)_{m-i}}{(q)_{k-i}(q)_{m-k}}\frac{(d_{1}\Lambda q ^{i-k-n})_{k-i}(d_{1}d_{4}\Lambda q^{-m-n-1})_{m-k}}{(d_{4}q^{-m})_{m-i}}\\ &\qquad\frac{(q)_{k+n}}{(q)_{k+j-m+n}(q)_{m-j}}\frac{(\frac{q^{m +1}}{d_{4}\Lambda})_{k+j-m+n}(\frac{q^{j+1}}{d_{4}})_{m-j}}{(q^{-k-n}\Lambda)_{ k+n}}.\end{split} \tag{2.34}\] To show (2.13) we make the change of variables; \[z=\frac{\Lambda}{q},\qquad\alpha=\frac{q^{n}}{d_{1}},\qquad\beta=\frac{q^{m}}{ d_{4}}. 
\tag{2.35}\] Using \((a)_{k}=(-a)^{k}q^{k(k-1)/2}(\frac{1}{aq^{k-1}})_{k}\) and \((a)_{k+l}=(a)_{k}(aq^{k})_{l}\), we compute \[\begin{split}& r_{i,j}=d_{1}^{m-i}\beta^{-n-j}q^{(m+1)i}\frac{(\frac{\alpha}{z})_{m-i}(\frac{1}{\beta})_{m-j}(\frac{\beta}{z})_{j+n}}{(\frac{1}{z})_{m+n}(\frac{1}{\beta})_{m-i}}\sum_{k=m-n-j}^{m}q^{(m-k)(i+n)}\\ &(\frac{\beta}{z})^{m-k}\frac{(q)_{m-i}}{(q)_{k-i}(q)_{m-k}}\frac{(q)_{k+n}}{(q)_{k+j-m+n}(q)_{m-j}}\frac{(\frac{z}{\alpha\beta})_{m-k}}{(\frac{z}{\alpha}q^{1+i-m})_{m-k}}\frac{(zq^{1-N})_{m-k}}{(\frac{\beta}{z}q^{j+n+k-m})_{m-k}}.\end{split} \tag{2.36}\] One can check that these coefficients \(r_{i,j}\) coincide with the results in Remark 2.4. ## 3. Jackson integral representation of solutions to the \(q\)-KZ equation ### Jackson integral of symmetric Selberg type Solutions to the \(q\)-KZ equation admit a representation by the Jackson integral. In this subsection we review the results of [30] which are needed later. In the construction of solutions to the \(q\)-KZ equation for \(U_{\mathsf{v}}(A_{1}^{(1)})\) by M. Ito [30], the \(N\)-tuple Jackson integral with the integration variables \(z=(z_{1},\ldots,z_{N})\); \[\langle\varphi(z),\xi\rangle=(1-t)^{N}\sum_{\nu\in\mathbb{Z}^{N}}\left[\varphi(z)\Phi(z)\Delta(1,z)\right]_{z_{i}=\xi_{i}t^{\nu_{i}}} \tag{3.1}\] is considered10. Here, Footnote 10: In view of the application in the next section, we exchange \(q\) and \(t\) relative to the original paper [30]. \[\Delta(q,z):=\prod_{1\leq i<j\leq N}(z_{i}-q^{-1}z_{j}), \tag{3.2}\] and \[\Phi(z)=\prod_{i=1}^{N}z_{i}^{\alpha}\frac{(ta_{1}^{-1}z_{i},ta_{2}^{-1}z_{i};t)_{\infty}}{(b_{1}z_{i},b_{2}z_{i};t)_{\infty}}\cdot\prod_{1\leq i<j\leq N}z_{i}^{2(\log_{t}q)-1}\frac{(tq^{-1}z_{j}/z_{i};t)_{\infty}}{(qz_{j}/z_{i};t)_{\infty}} \tag{3.3}\] is a common weight factor with four parameters \(a_{1},a_{2},b_{1}\) and \(b_{2}\). Note that \(\Delta(1,z)\) is nothing but the Vandermonde determinant. 
In the pairing (3.1), \(\xi=(\xi_{1},\ldots,\xi_{N})\) has the meaning of parameters for an integration cycle. On the other hand, \(\varphi(z)\) defines a cocycle function11. In order to give a basis of the space of cohomology classes, in [30] the symmetric polynomial; Footnote 11: \(\varphi(z)\) should not be confused with the infinite product \((z;q)_{\infty}\). \[\tilde{E}_{k,i}(a,b;z)=\frac{1}{\Delta(1,z)}\mathcal{A}(E_{k,i}(a,b;z))=\mathcal{S}\frac{E_{k,i}(a,b;z)}{\Delta(1,z)},\quad(a,b\in\mathbb{C}^{\times}) \tag{3.4}\] was employed by introducing \[E_{k,i}(a,b;z)=z_{1}\cdots z_{k}\ \Delta(q,z)\prod_{j=1}^{N-i}(1-bz_{j})\prod_{j=N-i+1}^{N}(1-a^{-1}z_{j}). \tag{3.5}\] Here \(\mathcal{S}/\mathcal{A}\) denotes the symmetrization/anti-symmetrization of the variables \((z_{1},\ldots,z_{N})\), respectively. In particular, \(e_{i}(a,b;z)=\tilde{E}_{0,i}(a,b;z)\) (\(0\leq i\leq N\)) gives a basis which is called the Matsuo basis [38], [39]. (See also [41], [55].) The function \(c(x)=x^{\log_{t}(\frac{a}{b})}\frac{\vartheta(ax;t)}{\vartheta(bx;t)}\) with \(\vartheta(x;t)=(x;t)_{\infty}(t/x;t)_{\infty}\) satisfies \(c(tx)=c(x)\). Such a function is called a pseudo constant. The following lemma shows that the function \(\Phi(z)\Delta(1,z)\) in (3.1) can be written in a symmetric form, up to some pseudo constant factor. Due to this property, the integral is sometimes called a Jackson integral of symmetric Selberg type. **Lemma 3.1**.: _Let \(\tau=\log_{t}q\) and \(i,j\in[N]:=\{1,2,\ldots,N\}\). 
We have_ \[\prod_{i\neq j}\frac{(tq^{-1}z_{j}/z_{i};t)_{\infty}}{(tz_{j}/z_{i};t)_{ \infty}}=C(z)\Delta(1,z)\prod_{i=1}^{N}z_{i}^{-\tau(N-1)}\prod_{i<j}z_{i}^{2 \tau-1}\frac{(tq^{-1}z_{j}/z_{i};t)_{\infty}}{(qz_{j}/z_{i};t)_{\infty}}, \tag{3.6}\] _where \(C(z)=\prod_{i<j}(\frac{z_{j}}{z_{i}})^{\tau}\frac{\vartheta(tq^{-1}z_{i}/z_{ j};t)}{\vartheta(tz_{i}/z_{j};t)}\) is a pseudo constant for each variable \(z_{i}\)._ Proof.: The left hand side is computed as follows; \[\prod_{i\neq j}\frac{(tq^{-1}z_{j}/z_{i};t)_{\infty}}{(tz_{j}/z_{i };t)_{\infty}} = \prod_{i<j}\frac{(tq^{-1}z_{i}/z_{j};t)_{\infty}(tq^{-1}z_{j}/z_{ i};t)_{\infty}}{(tz_{i}/z_{j};t)_{\infty}(tz_{j}/z_{i};t)_{\infty}}\] \[= \prod_{i<j}\frac{\vartheta(tq^{-1}z_{i}/z_{j};t)(z_{j}/z_{i};t)_{ \infty}(tq^{-1}z_{j}/z_{i};t)_{\infty}}{\vartheta(tz_{i}/z_{j};t)(qz_{j}/z_{ i})_{\infty}(tz_{j}/z_{i};t)_{\infty}}\] \[= C(z)\prod_{i<j}(\frac{z_{j}}{z_{i}})^{-\tau}(1-z_{j}/z_{i})\frac {(tq^{-1}z_{j}/z_{i};t)_{\infty}}{(qz_{j}/z_{i};t)_{\infty}}\] \[= C(z)\Delta(1,z)\prod_{i=1}^{N}z_{i}^{-\tau(N-1)}\prod_{i<j}z_{i} ^{2\tau-1}\frac{(tq^{-1}z_{j}/z_{i};t)_{\infty}}{(qz_{j}/z_{i})_{\infty}}.\] The last line follows from \(\prod_{i<j}(\frac{z_{j}}{z_{i}})^{-\tau}z_{i}^{-1}\,=\,\prod_{i<j}(z_{j}z_{i} )^{-\tau}z_{i}^{2\tau-1}\,=\,\prod_{i=1}^{N}z_{i}^{-\tau(N-1)}\prod_{i<j}z_{i} ^{2\tau-1}\). For the cocycle function \(\varphi(z)\) in the pairing (3.1), we will take the Matsuo basis \(e_{0}(a_{2},b_{1}),\ldots,e_{N}(a_{2},b_{1})\)12. Namely, we consider the following \(\mathbb{C}^{N+1}\)-valued function Footnote 12: Since the common factor (3.3) is symmetric under the exchanges \(a_{1}\leftrightarrow a_{2}\) and \(b_{1}\leftrightarrow b_{2}\), there are four choices of the Matsuo basis. Here we follow the choice in [30]. \[\Psi=[\Psi_{-n},\cdots,\Psi_{m}]=\Big{[}\langle e_{N}(a_{2},b_{1}),\xi\rangle,\cdots,\langle e_{0}(a_{2},b_{1}),\xi\rangle\Big{]}. 
\tag{3.7}\] In order to identify the instanton partition function to be introduced in the next section with the Jackson integral, it is important to keep the factorized structure of the integrand as far as possible, hence we use the following expression \[\hat{e}_{k}(a,b;z)=e_{N-k}(a,b;z) \tag{3.8}\] \[=[k]_{q^{-1}}![N-k]_{q^{-1}}!\sum_{I\sqcup J=[N],|J|=k}\prod_{i\in I}(1-\frac{z_{i}}{a})\prod_{j\in J}(1-bz_{j})\prod_{i\in I,j\in J}\frac{z_{j}-q^{-1}z_{i}}{z_{j}-z_{i}},\] where the sum is taken over the \(\left(\begin{smallmatrix}N\\ k\end{smallmatrix}\right)\) terms corresponding to the disjoint union \(I\sqcup J=[N]=\{1,2,\ldots,N\}\) with \(|I|=N-k\), \(|J|=k\). The formula (3.8) follows from Proposition G.1 in Appendix G for the case \(s=2\) with \(f_{1}(x)=1-b_{1}x\) and \(f_{2}(x)=1-\frac{x}{a_{2}}\).13 Thus the summation in the Jackson integral is written as the sum over the cone (3.22) and additional \(2^{N}=\sum_{k=0}^{N}\left(\begin{smallmatrix}N\\ k\end{smallmatrix}\right)\) sums. As we will see in the next section, in the relevant instanton partition function, the first sum corresponds to the sum over the two Young diagrams with even lengths for all the columns and the second sum comes from their even/odd variants depending on the number of columns with odd length. Footnote 13: As noted in [48], these cocycle functions naturally arise as the Bethe vector. See [1] for the developments based on the geometric method. The expression (3.8) looks rational, but it is a polynomial in \(z=(z_{1},\ldots,z_{N})\). In fact, the basis element \(e_{k}(a,b;z)\) can be characterized as the linear combination of the elementary symmetric functions in \(z\) having the following specialization [30],[32]: \[e_{k}(a,b;(x,xq,\ldots,xq^{N-1}))=[N]_{q^{-1}}!\prod_{i=1}^{N-k}(1-q^{i}bx)\prod_{i=1}^{k-1}(1-q^{i}\frac{x}{a}). \tag{3.9}\] In [30] the following fact is proved. 
**Theorem 3.2**.: _[_30_]_ _With respect to the shifts \(T_{\alpha}:\alpha\to\alpha+1\) and \(T_{i}=T_{t,b_{i}}^{-1}T_{t,a_{i}}\) (\(i=1,2\)), the Jackson integral \(\Psi\) satisfies the following system of difference equations:_ \[T_{\alpha}\Psi=\Psi K_{0},\qquad T_{1}\Psi=\Psi K_{1},\qquad T_{2}\Psi=\Psi K _{2}, \tag{3.10}\] _where_ \[K_{0}=R^{-1}AR=D_{2}AD_{2}^{-1},\quad K_{1}=R^{-1}D_{1},\quad K_{2}=D_{2}(T_{2 }R),\] \[D_{1}=((t^{\alpha}q^{N-1})^{N-i}\delta_{ij})_{0\leq i,j\leq N},\quad D_{2}=((t ^{\alpha}q^{N-1})^{i}\delta_{ij})_{0\leq i,j\leq N}, \tag{3.11}\] _and \(\delta_{ij}\) is the Kronecker symbol. An explicit form of the matrix \(R\) is given by 14_ Footnote 14: The matrix \(R\) in this section is slightly different from the \(R\) matrix in the previous section. See Appendix C for a summary of various \(R\)-like matrices used in this paper. \[R=L_{R}D_{R}U_{R},\] \[L_{R}=(l_{ij}^{R})_{0\leq i,j\leq N},\quad D_{R}=(\delta_{ij}d_{j}^{R})_{0\leq i,j\leq N},\quad U_{R}=(u_{ij}^{R})_{0\leq i,j\leq N}, \tag{3.12}\] _where_ \[l_{ij}^{R}=\left[\begin{array}{c}N-j\\ N-i\end{array}\right]_{q^{-1}}\frac{(-1)^{i-j}q^{-(\frac{i-j}{2})}(a_{2}b_{2}q^{ j};q)_{i-j}}{(a_{1}^{-1}a_{2}q^{-(N-2j-1)};q)_{i-j}}\quad(N\geq i\geq j\geq 0),\] \[d_{j}^{R}=\frac{(a_{1}a_{2}^{-1}q^{-j};q)_{N-j}(a_{2}b_{1};q)_{j}} {(a_{1}b_{2};q)_{N-j}(a_{1}^{-1}a_{2}q^{-(N-j)};q)_{j}}\quad(0\leq j\leq N), \tag{3.13}\] \[u_{ij}^{R}=\left[\begin{array}{c}j\\ i\end{array}\right]_{q^{-1}}\frac{(a_{1}b_{1}q^{N-j};q)_{j-i}}{(a_{1}a_{2}^{-1 }q^{N-i-j};q)_{j-i}}\quad(0\leq i\leq j\leq N),\] _and other elements are zero. \(\left[\begin{array}{c}n\\ k\end{array}\right]_{q^{-1}}\) denotes \(q^{-1}\)-binomial coefficient. (See (1.16).) 
The matrix \(A\) is also given in a Gauss decomposed form as_ \[A=L_{A}D_{A}U_{A},\] \[L_{A}=(l_{ij}^{A})_{0\leq i,j\leq N},\quad D_{A}=(\delta_{ij}d_{j}^{A})_{0\leq i,j\leq N},\quad U_{A}=(u_{ij}^{A})_{0\leq i,j\leq N}, \tag{3.14}\] _where the nonzero elements are_ \[l_{ij}^{A}=(-1)^{i-j}q^{\binom{N-i}{2}-\binom{i}{2}}\Big{[}\begin{array}{c}N-j\\ N-i\end{array}\Big{]}_{q}\frac{(a_{2}b_{2}q^{j};q)_{i-j}}{(t^{\alpha}a_{2}b_{2}q^{2j};q)_{i-j}}\quad(N\geq i\geq j\geq 0),\] \[d_{j}^{A}=a_{1}^{N-j}a_{2}^{j}q^{\binom{j}{2}+\binom{N-j}{2}}\frac{(t^{\alpha};q)_{j}(t^{\alpha}a_{2}b_{2}q^{2j};q)_{N-j}}{(t^{\alpha}a_{2}b_{2}q^{j-1};q)_{j}(t^{\alpha}a_{1}a_{2}b_{1}b_{2}q^{n+j-1};q)_{N-j}}\quad(0\leq j\leq N),\] \[u_{ij}^{A}=(-t^{\alpha}a_{1}^{-1}a_{2})^{j-i}q^{\binom{j}{2}-\binom{i}{2}}\Big{[}\begin{array}{c}j\\ i\end{array}\Big{]}_{q}\frac{(a_{1}b_{1}q^{N-j};q)_{j-i}}{(t^{\alpha}a_{2}b_{2}q^{2i};q)_{j-i}}\quad(0\leq i\leq j\leq N).\] **Remark 3.3**.: _The difference equations for the shift \(T_{i}=T_{t,b_{i}}^{-1}T_{t,a_{i}}\) are nothing but the traditional (original) \(q\)-KZ equation. The components of the \(R\) matrix appearing in this type of \(q\)-KZ equation are nothing but the connection coefficients between two Matsuo bases \(\{e_{i}(a,b)\}\) with different parameters \(a_{i}\) and \(b_{i}\). Due to the specialization given in eq.(3.9), the computation can be reduced to the connection problem of single variable polynomials as in [50]. On the other hand, the matrix \(A\) for the shift of \(T_{\alpha}:\alpha\rightarrow\alpha+1\) is obtained in a different manner and the basis \(E_{k,i}\) was introduced for this purpose in [30]. It is the \(q\)-KZ equation for \(T_{\alpha}\) that is related to Shakirov's equation._ The following relation between the matrices \(R\) and \(A\) seems to have been noticed long ago, at least among specialists (e.g. [29]). We give a proof, since the relation plays an important role in §3.3. 
**Proposition 3.4**.: _We have_ \[A=sT(R)D, \tag{3.16}\] _where \(s\) is a scalar and \(D\) is a diagonal matrix given by_ \[s=q^{N(N-1)/2}(a_{1}a_{2}b_{2})^{N}\frac{(t^{\alpha};q)_{N}}{(t^{\alpha}q^{N-1 }a_{1}a_{2}b_{1}b_{2};q)_{N}},\quad D=((a_{1}b_{2})^{-i}\delta_{ij})_{0\leq i,j\leq N}. \tag{3.17}\] \(T\) is a shift operator acting only on the parameters \(a_{2},b_{2}\) as_ \[T=\{a_{2}\to a_{2}(t^{\alpha}q^{N-1}a_{1}b_{2}),\ b_{2}\to b_{2}(t^{\alpha}q^{N-1} a_{1}b_{2})^{-1}\}. \tag{3.18}\] Proof.: Using the relation \((x^{-1};q)_{k}=(-1)^{k}q^{k(k-1)/2}x^{-k}(xq^{1-k};q)_{k}\) and the explicit forms (3.12) and (3.14), one can check \[L_{A}=T(L_{R}),\quad D_{A}=sT(D_{R})D,\quad U_{A}=D^{-1}T(U_{R})D. \tag{3.19}\] Then we obtain \[A=L_{A}D_{A}U_{A}=T(L_{R})\ sT(D_{R})D\ D^{-1}T(U_{R})D=sT(L_{R}D_{R}U_{R})D=sT (R)D, \tag{3.20}\] as desired. The compatibility of the dual pair of difference equations in Theorem 3.2 implies \(RD_{2}A=ARD_{2}\). In Appendix D we give a direct check of the compatibility based on the matrix inversion formula of Andrews and Bressoud. It is amusing that the compatibility condition follows from Bailey's transformation formula for terminating very-well-poised balanced series \({}_{10}W_{9}\). ### Lattice truncation by a choice of the cycle The sum in the Jackson integral (3.1) considered in [30] is bilateral, namely it is taken over \(\{\nu_{i}\}\in\mathbb{Z}^{N}\). On the other hand, the Nekrasov partition function involves a sum over Young diagrams. For the function \(\Psi\), the discrepancy is remedied by an appropriate choice of the cycle \(\xi=(\xi_{1},\ldots,\xi_{N})\). The suitable cycle is already known (see [32] eq.(3.19) and [31] eq.(4.4),(4.5)). 
In fact, we have **Proposition 3.5**.: _For \(\xi\) given by_ \[\xi=\xi_{m,n}=(\underbrace{a_{2},a_{2}q,\ldots,a_{2}q^{n-1}}_{n},\underbrace{a_{1},a_{1}q,\ldots,a_{1}q^{m-1}}_{m}),\quad(m+n=N) \tag{3.21}\] _the lattice summation over \(\{z_{i}=\xi_{i}t^{\nu_{i}}\}\) is truncated to a cone_ \[0\leq\nu_{1}\leq\nu_{2}\leq\ldots\leq\nu_{n-1}\leq\nu_{n},\] \[0\leq\nu_{n+1}\leq\nu_{n+2}\leq\ldots\leq\nu_{N-1}\leq\nu_{N}. \tag{3.22}\] _and we can normalize the \(\mathbb{C}^{N+1}\)-valued function (3.7) as_ \[\Psi^{T}=\left[\begin{array}{c}\Psi_{m}\\ \vdots\\ \Psi_{0}\\ \vdots\\ \Psi_{-n}\end{array}\right]=\left[\begin{array}{ccccc}*&*&*&*&\cdots\\ \vdots&\vdots&\vdots&\vdots&\cdots\\ 1&*&*&*&\cdots\\ &\ddots&\ddots&\ddots&\cdots\\ O&&*&*&\cdots\end{array}\right]\left[\begin{array}{c}1\\ \Lambda\\ \Lambda^{2}\\ \Lambda^{3}\\ \vdots\end{array}\right], \tag{3.23}\] _where \(\Lambda=t^{\alpha}\)._ When we impose the mass truncation condition \(d_{2}=q^{-m},d_{3}=q^{-n}\), the partition function becomes a Laurent polynomial in \(x\). With the normalization (3.23) the component \(\Psi_{i}\) (\(-n\leq i\leq m\)) is to be identified with the coefficient of the \(x^{i}\)-term in the Laurent polynomial (see also Fig.1 for the structure of the partition function). Proof.: The condition (3.22) easily follows from the explicit form (3.3) of the function \(\Phi(z)\). We will show (3.23). From the expression (3.8) for \(e_{k}=e_{k}(a_{2},b_{1};z)\) with \[z=(\underbrace{a_{2}t^{\nu_{1}},a_{2}qt^{\nu_{2}},\ldots,a_{2}q^{n-1}t^{\nu_{n}}}_{n},\underbrace{a_{1}t^{\nu_{n+1}},a_{1}qt^{\nu_{n+2}},\ldots,a_{1}q^{m-1}t^{\nu_{n+m}}}_{m}),\] we see that the leading term in the \(\Lambda\)-expansion of each component of \(\Psi\) has a single contribution arising from a specific \(\{\nu_{i}\}\) and \(J\subset[N],|J|=k\). 
Explicitly, for \(0\leq k\leq m\), we have \[e_{k}=g_{k}+O(\Lambda^{1}),\quad g_{k}=\frac{(q^{-1};q^{-1})_{m}(b_{1}a_{2};q)_{n}(q^{-N+k};q)_{n}(q^{k}b_{1}a_{1};q)_{m-k}(q^{-n}\tfrac{a_{1}}{a_{2}};q)_{k}}{(1-q^{-1})^{N}},\] arising from \(J=\{n+1,n+2,\ldots,n+k\}\), \((\nu_{i})=(0^{N})\), and for \(0\leq l\leq n\), we have \[e_{m+l}=h_{l}\Lambda^{l}+O(\Lambda^{l+1}),\quad h_{l}=\frac{(q^{-1};q^{-1})_{n-l}(q^{-1};q^{-1})_{m+l}(t;q)_{l}(b_{1}a_{2};q)_{n-l}(q^{l-n}\tfrac{a_{1}}{a_{2}};q)_{m}}{(1-q^{-1})^{N}},\] arising from \(J=\{n-l+1,n-l+2,\ldots,N\}\), \((\nu_{i})=(0^{l},1^{n-l},0^{m})\). Dividing by the scalar factor \[g_{m}=h_{0}=\frac{(q^{-1};q^{-1})_{n}(q^{-1};q^{-1})_{m}(b_{1}a_{2};q)_{n}(q^{-n}\tfrac{a_{1}}{a_{2}};q)_{m}}{(1-q^{-1})^{N}},\] we have the expression (3.23). For the function \(\Psi\) normalized as above, the two types of \(q\)-KZ equations in Theorem 3.2 can be written as \[T_{t,a_{2}}^{-1}T_{t,b_{2}}\Psi\tilde{D}_{1}R=\Psi,\qquad T_{t,\Lambda}\Psi=\Psi\tilde{A}\tilde{D}_{2}, \tag{3.24}\] where the matrices \(\tilde{A}\) and \(R\) are given as the following connection matrices15 Footnote 15: Contrary to the matrix \(R\), the fact that the matrix \(\tilde{A}\) is obtained as the connection matrix of the Matsuo basis is not obvious from the definition. 
\[\left[e_{0}(a_{2},b_{1}),\ldots,e_{N}(a_{2},b_{1})\right]=\left[e_{N}(a_{1},b_{2}),\ldots,e_{0}(a_{1},b_{2})\right]R,\] \[\left[e_{N}(a,b),\ldots,e_{0}(a,b)\right]_{\begin{smallmatrix}a=a_{1}b_{1}\Lambda\\ b=q^{N-1}a_{2}b_{2}\end{smallmatrix}}=\left[e_{0}(c,d),\ldots,e_{N}(c,d)\right]_{\begin{smallmatrix}c=q^{1-N}\\ d=\Lambda^{-1}\end{smallmatrix}}\ \tilde{A},\] and \(\tilde{D}_{1},\tilde{D}_{2}\) are diagonal matrices given by \[\tilde{D}_{1}=(-1)^{N}q^{\frac{(N-1)(n-m)}{2}}(\frac{a_{2}}{a_{1}})^{N}\prod_{i=0}^{n-1}\frac{1-q^{m-n+1+i}\frac{a_{1}}{a_{2}}}{1-q^{i}a_{2}b_{1}}\prod_{i=0}^{m-1}\frac{1-q^{i}a_{1}b_{2}}{1-q^{n-m+1+i}\frac{a_{2}}{a_{1}}}\cdot\text{diag}(\{(\Lambda q^{N-1})^{m-i}\}_{i=0}^{N}),\] \[\tilde{D}_{2}=(\frac{q^{m}a_{1}}{a_{2}})^{n}\prod_{i=0}^{N-1}\frac{1-q^{i}\Lambda}{1-q^{N-1+i}a_{1}a_{2}b_{1}b_{2}\Lambda}\cdot\text{diag}(\{(a_{2}b_{1})^{N-i}\}_{i=0}^{N}).\] ### Base-fibre duality of the \(q\)-KZ equation Let us look at the two types of the \(q\)-KZ equation explicitly in the case at hand. It turns out that these \(q\)-KZ equations are related by a duality which exchanges the instanton expansion parameter \(\Lambda\) and the Coulomb parameter \(Q\). The \(q\)-KZ equation arising from Shakirov's equation was given by \[\psi_{j}(t\Lambda)=\sum_{i=-n}^{m}\psi_{i}(\Lambda)(Q^{\vee})^{i}\ r_{i,j}(\Lambda),\quad(j=-n,\dots,m) \tag{3.25}\] where \(Q^{\vee}:=(qtQ)^{-1}\) is the shift parameter for \(x\) and \([r_{i,j}(\Lambda)]_{i,j=-n}^{m}\) is the \(R\)-matrix obtained by the specialization \(d_{2}=q^{-m},d_{3}=q^{-n}\). 
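Equation (3.25) is a linear \(q\)-difference system in \(\Lambda\); when the leading coefficient matrix is upper triangular, fundamental solutions of Frobenius type can be built order by order, as is done next in (3.26). Below is a minimal numerical sketch of this construction, using made-up \(2\times 2\) data (an assumption for illustration only, not the actual \(R\)-matrix):

```python
import numpy as np

# Toy data (illustrative assumption, not the actual R-matrix):
# upper-triangular leading term R0 and an O(Lambda) correction R1.
t = 0.4
R0 = np.array([[1.0, 0.8],
               [0.0, 0.3]])
R1 = np.array([[0.5, -0.2],
               [0.7, 0.1]])
d = np.diag(R0)               # t**rho_i = r_{i,i}(0)

# Build Y(Lambda) = sum_n Y_n Lambda^n with Y_0 upper triangular, Y_0[i,i] = 1,
# from psi(t*Lambda) = psi(Lambda) R(Lambda), where psi_i = Lambda**rho_i * Y_i.
# Matching powers of Lambda gives  d_i t^n Y_n = Y_n R0 + Y_{n-1} R1  row by row.
N = 25
Y = [np.zeros((2, 2)) for _ in range(N)]
Y[0] = np.eye(2)
for n in range(N):
    rhs = Y[n - 1] @ R1 if n > 0 else np.zeros((2, 2))
    for i in range(2):
        for j in range(2):       # increasing j: uses already-solved columns
            if n == 0 and i == j:
                continue
            s = rhs[i, j] + sum(Y[n][i, k] * R0[k, j] for k in range(j))
            Y[n][i, j] = s / (d[i] * t**n - R0[j, j])

rho = np.log(d) / np.log(t)

def psi(lam):
    series = sum(Yn * lam**n for n, Yn in enumerate(Y))
    return np.diag(lam**rho) @ series

lam = 0.05
err = psi(t * lam) - psi(lam) @ (R0 + lam * R1)
assert np.max(np.abs(err)) < 1e-10
```

The exponents satisfy \(t^{\rho_{i}}=r_{i,i}(0)\), mirroring \(t^{\rho_{i}}=(Q^{\vee})^{i}q^{i(i+1)}\) below; generic data avoids the resonances \(d_{i}t^{n}=d_{j}\), which would require logarithmic corrections.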
Since the leading term \(r_{i,j}(0)\) of the expansion \(r_{i,j}(\Lambda)=r_{i,j}(0)+O(\Lambda)\) is upper triangular with \(r_{i,i}(0)=q^{i(i+1)}\), the equation (3.25) has a set of fundamental solutions of the form \[\psi_{j}^{(i)}(\Lambda)=\Lambda^{\rho_{i}}Y_{i,j}(\Lambda),\quad Y_{i,j}( \Lambda)=Y_{i,j}(0)+O(\Lambda),\quad(i,j=-n,\dots,m) \tag{3.26}\] where \(t^{\rho_{i}}=(Q^{\vee})^{i}r_{i,i}(0)=(Q^{\vee})^{i}q^{i(i+1)}\) and the leading coefficients \(Y_{i,j}(0)\) are upper triangular. One can normalize them as \(Y_{i,i}(0)=1\). Then the equation for \(Y_{i,j}(\Lambda)\) is written as \[Y_{k,j}(t\Lambda)=\sum_{i=-n}^{m}(Q^{\vee})^{-k}q^{-k(k+1)}Y_{k,i}(\Lambda)(Q^ {\vee})^{i}\ r_{i,j}(\Lambda).\quad(k,j=-n,\dots,m) \tag{3.27}\] Since \(r_{i,j}(\Lambda)\) is independent of \(Q^{\vee}\), this equation depends on the parameter \(Q^{\vee}\) only through simple power factors. The following Proposition shows that the fundamental solution \(Y(\Lambda)=Y(\Lambda,Q^{\vee})\) satisfies another \(q\)-KZ equation with respect to the parameter \(Q^{\vee}\). **Proposition 3.6**.: _The fundamental solution \(Y(\Lambda,Q^{\vee})\) satisfies the following equation_ \[\sum_{j=-n}^{m}Y_{i,j}(\Lambda,\frac{Q^{\vee}}{t})(\frac{\Lambda d_{1}}{q^{m+ 2}})^{j}\tilde{r}_{j,k}(Q^{\vee})=v_{i}(\Lambda,Q^{\vee})Y_{i,k}(\Lambda,Q^{ \vee}),\quad(-n\leq i,k\leq m), \tag{3.28}\] _where_ \[\tilde{r}_{i,j}(Q^{\vee})=r_{i,j}(\Lambda)\Big{|}_{\Lambda\mapsto\frac{q^{m+2 }}{d_{1}}Q^{\vee}}, \tag{3.29}\] _and_ \[v_{i}(\Lambda,Q^{\vee})=q^{i(i+1)}\left(\frac{\Lambda d_{1}}{q^{m+2}}\right)^ {i}\frac{(Q^{\vee}q^{2+2i};q)_{m-i}(d_{4}Q^{\vee}q^{1-n};q)_{n+i}}{(d_{1}^{-1 }Q^{\vee}q^{2+i};q)_{m-i}(Q^{\vee}q^{1-n+i};q)_{n+i}}. \tag{3.30}\] The equations (3.27) and (3.28) look very similar. 
To make this similarity more explicit we make the gauge transformation \(Y_{i,j}(\Lambda,Q^{\vee})=G_{i}(Q^{\vee})\tilde{Y}_{i,j}(\Lambda,Q^{\vee})\) with \[G_{i}(Q^{\vee})=\prod_{k=1}^{\infty}\frac{(Q^{\vee}q^{2+2i}t^{k};q)_{m-i}(d_{4 }Q^{\vee}q^{1-n}t^{k};q)_{n+i}}{(d_{1}^{-1}Q^{\vee}q^{2+i}t^{k};q)_{m-i}(Q^{ \vee}q^{1-n+i}t^{k};q)_{n+i}}. \tag{3.31}\] Since \(G_{i}^{-1}(\frac{Q^{\vee}}{t})G_{i}(Q^{\vee})v_{i}(\Lambda,Q^{\vee})=q^{i(i+1 )}(\frac{\Lambda d_{1}}{q^{m+2}})^{i}\), one can rewrite the equation (3.28) as \[\sum_{j=-n}^{m}\tilde{Y}_{i,j}(\Lambda,\frac{Q^{\vee}}{t})(\frac{\Lambda d_{1 }}{q^{m+2}})^{j}\tilde{r}_{j,k}(Q^{\vee})=q^{i(i+1)}(\frac{\Lambda d_{1}}{q^{ m+2}})^{i}\tilde{Y}_{i,k}(\Lambda,Q^{\vee}),\quad(-n\leq i,k\leq m). \tag{3.32}\] This equation is exactly the same form as (3.27) under the replacement \[\Lambda\mapsto\frac{q^{m+2}}{d_{1}}Q^{\vee},\qquad Q^{\vee}\mapsto\frac{d_{1 }}{q^{m+2}}\Lambda. \tag{3.33}\] **Remark 3.7**.: _For the five dimensional quantum Seiberg-Witten curve with coefficient matrix \(\mathcal{D}\) or \(\mathcal{D}_{3}\) [eq.(4.11) or eq.(4.16) in [10]], the positions of the external lines (tentacles of the corresponding amoeba) are given by_ \[(x,p)=(0,d_{3}),(0,d_{4}),(\infty,\frac{1}{d_{1}}),(\infty,\frac{1}{d_{2}}), (1,0),(d_{3}d_{4}\Lambda\mu,0),(\frac{\mu q}{d_{1}d_{2}},\infty),(\frac{ \Lambda}{q},\infty),\] _or_ \[(x,p)=(0,d_{3}),(0,\frac{1}{\mu d_{4}}),(\infty,\frac{1}{d_{1}}),(\infty, \frac{d_{2}}{q^{2}\mu}),(\frac{q}{d_{2}},0),(d_{3}\Lambda,0),(\frac{1}{d_{1} },\infty),(\frac{d_{4}\Lambda}{q},\infty),\] _where \(\mu=Q^{\vee}\). Then, the exchange \(\Lambda\leftrightarrow Q^{\vee}\) can be seen as the exchange of two external lines at \(p=\infty\) (for \(\mathcal{D}\)) or as the exchange of \(x\) and \(p\) (for \(\mathcal{D}_{3}\)) respectively._ Figure 2. Geometric engineering of \(U(2)\) gauge theory by local \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). 
\(\Lambda\) is the Kähler parameter of the base \(\mathbb{P}^{1}\) and \(Q^{\vee}\) is that of the fibre. We can introduce four matter hypermultiplets by blow-ups. **Example 3.8**.: _For \(d_{2}=q^{-m},d_{3}=q^{-n}\) with \((m,n)=(1,0)\), the solution \(Z=Z(\Lambda,x)\) of Shakirov's equation_ \[T_{t,\Lambda}Z(\Lambda,x)=\mathcal{H}_{\mathrm{S}}T_{qtQ,x}^{-1}Z(\Lambda,x), \tag{3.34}\] _is explicitly given by Heine's series \({}_{2}\phi_{1}\left[{}_{c}^{a,b};t,z\right]\) with base \(t\) as_ \[Z ={}_{2}\phi_{1}\left[{}_{\begin{array}{c}\frac{1}{d_{1}},\frac{Qt}{d_{4}}\\ \frac{Qt}{q}\end{array}};t,\frac{d_{1}d_{4}\Lambda}{q^{2}}\right]-\frac{1-d_{1}}{1-\frac{q}{Qt}}\ {}_{2}\phi_{1}\left[{}_{\begin{array}{c}\frac{t}{d_{1}},\frac{Qt}{d_{4}}\\ \frac{Qt^{2}}{q}\end{array}};t,\frac{d_{1}d_{4}\Lambda}{q^{2}}\right]x\] \[={}_{2}\phi_{1}\left[{}_{\begin{array}{c}a,z_{2}\\ bz_{2}\end{array}};t,z_{1}\right]+\frac{bz_{2}(1-\frac{1}{a})}{1-bz_{2}}\ {}_{2}\phi_{1}\left[{}_{\begin{array}{c}ta,z_{2}\\ tbz_{2}\end{array}};t,z_{1}\right]x, \tag{3.35}\] _where we put \(\Lambda=\frac{aqz_{1}}{b}\), \(Q=\frac{bqz_{2}}{t}\), \(d_{1}=\frac{1}{a}\), \(d_{4}=bq\) for simplicity. Then the coefficients \(y_{0},y_{1}\) of \(Z=y_{0}+y_{1}x\) satisfy_ \[(1-\frac{az_{1}}{b})T_{t,z_{1}}Y=YM(z_{1}), \tag{3.36}\] \[(1-\frac{t}{bz_{2}})T_{t,z_{2}}^{-1}Y=YM(\frac{t}{z_{2}}), \tag{3.37}\] _where_ \[Y=[y_{0},y_{1}],\quad M(u)=\left[\begin{array}{cc}1&0\\ 0&\frac{az_{1}}{bz_{2}}\end{array}\right]\left[\begin{array}{cc}1-\frac{u}{b}&1-\frac{1}{a}\\ 1-\frac{1}{b}&1-\frac{1}{au}\end{array}\right]\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right]. 
\tag{3.38}\] _The equation (3.36) is the truncated form of (3.34), while the equation (3.37) follows from_ \[b^{-1}(1-\frac{t}{z_{2}})T_{t,z_{2}}^{-1}\tilde{Y}=\tilde{Y}M(\frac{t}{z_{2}}), \tag{3.39}\] _where \(\tilde{Y}=[\tilde{y}_{0},\tilde{y}_{1}]\), \(\tilde{Z}=\tilde{y}_{0}+\tilde{y}_{1}x\), and_ \[\tilde{Z}:={}_{2}\phi_{1}\left[{}_{\begin{array}{c}b,z_{1}\\ az_{1}\end{array}};t,z_{2}\right]+\frac{bz_{2}(1-\frac{1}{a})}{1-az_{1}}\ {}_{2}\phi_{1}\left[{}_{\begin{array}{c}tb,z_{1}\\ taz_{1}\end{array}};t,z_{2}\right]x\] \[= \frac{(bz_{2},z_{1};t)_{\infty}}{(az_{1},z_{2};t)_{\infty}}Z. \tag{3.40}\] _For \(x=0\), the last relation in (3.40) is Heine's transformation. Note that the ratio \(\tilde{Z}/Z\) is independent of \(x\), which explains how a single function can satisfy the dual pair of equations (3.36) and (3.37)._ ## 4. Nekrasov partition function as Jackson integral Let us show that the \(K\)-theoretic Nekrasov partition function from the affine Laumon space agrees with the Matsuo bases. When \(n=2\), the general formula (F.28) derived in Appendix F for the orbifolded Nekrasov factor reduces to \[\mathsf{N}^{(0|2)}_{\lambda,\mu}(u|q,t) = \prod_{i,j=1}^{\infty}\frac{[uq^{j-i}t^{1+\lfloor\frac{\mu_{i}^{\vee}-\lambda_{j}^{\vee}}{2}\rfloor};t]_{\infty}}{[uq^{j-i-1}t^{1+\lfloor\frac{\mu_{i}^{\vee}-\lambda_{j}^{\vee}}{2}\rfloor};t]_{\infty}}\frac{[uq^{j-i-1}t;t]_{\infty}}{[uq^{j-i}t;t]_{\infty}}, \tag{4.1}\] \[\mathsf{N}^{(1|2)}_{\lambda,\mu}(u|q,t) = \prod_{i,j=1}^{\infty}\frac{[uq^{j-i}t^{\frac{1}{2}+\lfloor\frac{\mu_{i}^{\vee}-\lambda_{j}^{\vee}+1}{2}\rfloor};t]_{\infty}}{[uq^{j-i-1}t^{\frac{1}{2}+\lfloor\frac{\mu_{i}^{\vee}-\lambda_{j}^{\vee}+1}{2}\rfloor};t]_{\infty}}\frac{[uq^{j-i-1}t^{\frac{1}{2}};t]_{\infty}}{[uq^{j-i}t^{\frac{1}{2}};t]_{\infty}}. 
\tag{4.2}\] When one of the partitions is empty, the formula simplifies to \[\mathsf{N}^{(0|2)}_{\lambda,\emptyset}(u|q,\kappa) = \prod_{i\geq 1}[uq^{i-1};\kappa^{2}]_{\lfloor\frac{\lambda_{i}^{\vee}+1}{2}\rfloor},\quad\mathsf{N}^{(1|2)}_{\lambda,\emptyset}(u|q,\kappa)=\prod_{i\geq 1}[uq^{i-1}\kappa;\kappa^{2}]_{\lfloor\frac{\lambda_{i}^{\vee}}{2}\rfloor}, \tag{4.3}\] \[\mathsf{N}^{(0|2)}_{\emptyset,\mu}(u|q,\kappa) = \prod_{i\geq 1}[uq^{-i}\kappa^{-2\lfloor\frac{\mu_{i}^{\vee}}{2}\rfloor};\kappa^{2}]_{\lfloor\frac{\mu_{i}^{\vee}}{2}\rfloor},\quad\mathsf{N}^{(1|2)}_{\emptyset,\mu}(u|q,\kappa)=\prod_{i\geq 1}[uq^{-i}\kappa^{1-2\lfloor\frac{\mu_{i}^{\vee}+1}{2}\rfloor};\kappa^{2}]_{\lfloor\frac{\mu_{i}^{\vee}+1}{2}\rfloor}. \tag{4.4}\] Note that \(\lfloor\frac{m+1}{2}\rfloor+\lfloor\frac{m}{2}\rfloor=m\). When \(\lambda_{i}^{\vee},\mu_{i}^{\vee}\) are even, \(\mathsf{N}^{(0|2)}\) and \(\mathsf{N}^{(1|2)}\) have the same number of factors. But when they are odd, \(\lambda_{i}^{\vee}\) contributes more to \(\mathsf{N}^{(0|2)}\), while \(\mu_{i}^{\vee}\) does to \(\mathsf{N}^{(1|2)}\). This is due to a difference in the coloring of \(\lambda\) and \(\mu\) (See Figure 4). Recall that by the localization formula the Nekrasov partition function with a surface defect is given by \[\mathcal{Z}_{\rm AL}=\mathcal{Z}_{\rm AL}\left(\begin{array}{c}u_{1},u_{2}\\ v_{1},v_{2}\\ w_{1},w_{2}\end{array}\Bigg{|}x_{1},x_{2}\Bigg{|}q,t\right)\\ =\sum_{(\lambda^{(1)},\lambda^{(2)})}\prod_{i,j=1}^{2}\frac{\mathsf{N}^{(j-i|2)}_{\emptyset,\lambda^{(j)}}(u_{i}/v_{j}|q,t)\mathsf{N}^{(j-i|2)}_{\lambda^{(i)},\emptyset}(v_{i}/w_{j}|q,t)}{\mathsf{N}^{(j-i|2)}_{\lambda^{(i)},\lambda^{(j)}}(v_{i}/v_{j}|q,t)}\cdot x_{1}^{|\lambda^{(1)}|_{o}+|\lambda^{(2)}|_{e}}x_{2}^{|\lambda^{(1)}|_{e}+|\lambda^{(2)}|_{o}}, \tag{4.5}\] where \((\lambda^{(1)},\lambda^{(2)})\) is a fixed point of the toric action on the affine Laumon space. Figure 3. \(\mathbb{Z}_{2}\)-coloring of a pair of Young diagrams. 
The weight in the summation over the fixed points is \(x_{1}^{|\lambda^{(1)}|_{o}+|\lambda^{(2)}|_{e}}x_{2}^{|\lambda^{(1)}|_{e}+|\lambda^{(2)}|_{o}}\), where \(|\lambda|_{o}=\sum_{k\geq 1}\lambda_{2k-1}\) and \(|\lambda|_{e}=\sum_{k\geq 1}\lambda_{2k}\). The expansion parameters \((x_{1},x_{2})\) are related to the physical parameters \((\Lambda,x)\) by \[x_{1}=-\frac{\sqrt{Qd_{1}d_{2}}}{\kappa}x,\qquad x_{2}=-\sqrt{\frac{d_{3}d_{4}}{q^{2}Q}}\frac{\Lambda}{x}. \tag{4.6}\] By the relations (2.2) and (2.3), this specialization gives \(x_{1}=-t^{1/2}T_{1}^{1/2}T_{2}^{1/2}x\) and \(x_{2}=-t^{1/2}T_{3}^{1/2}T_{4}^{1/2}\frac{\Lambda}{x}\), which agree with Eq.(6.19) in [10]. We employ the following specialization of the spectral parameters with \(\kappa=t^{-\frac{1}{2}}\); \[u_{1}=\frac{qQ}{d_{3}},\quad u_{2}=\frac{\kappa q}{d_{1}};\qquad v_{1}=1,\quad v_{2}=\frac{Q}{\kappa};\qquad w_{1}=\frac{1}{d_{2}},\quad w_{2}=\frac{Q}{d_{4}\kappa}, \tag{4.7}\] which gives the same function as \({\cal F}^{(1)}\) defined in section 6 of [10]. The overall scaling of the parameters by \(Q^{1/2}\) is necessary for the matching of the parameters \(v_{i}\) of \({\cal Z}_{\rm AL}\) and \({\cal F}^{(1)}\). After the same scaling, the specialization for the function \({\cal F}^{(1)}\) is \[u_{1}=q^{1/2}\kappa^{-1}QT_{3},\quad u_{2}=q^{1/2}T_{1}^{-1},\quad w_{1}=q^{-1/2}\kappa^{-1}QT_{2},\quad w_{2}=q^{-1/2}T_{4}^{-1}. \tag{4.8}\] By substituting (2.3), we can see the agreement with (4.7). There are four possible specializations of the spectral parameters \(u_{i}\) and \(w_{i}\), due to the symmetry \(d_{1}\leftrightarrow d_{2}\), \(d_{3}\leftrightarrow d_{4}\) of the Hamiltonian \({\cal H}_{\rm S}\), but the symmetry is broken by choosing the specialization (4.7). The mass parameters are paired into two \(SU(2)\) doublets, \((d_{1},d_{3})\) and \((d_{2},d_{4})\), in the case of the affine Laumon space. 
Such a rearrangement of \(SU(2)\) doublets of mass parameters between the five point conformal block with a degenerate field insertion and the four point current block was already observed in the four dimensional theory (see eq.(5.33) in [9]). From the viewpoint of the orbifold coloring of four mass parameters, the decomposition into the pairs \((d_{1},d_{2})\) and \((d_{3},d_{4})\) is by the parity of \({\mathbb{Z}}_{2}\) coloring. On the other hand the decomposition into \((d_{1},d_{3})\) and \((d_{2},d_{4})\) corresponds to the fundamental and the anti-fundamental representations of \(SU(2)\) gauge symmetry. Let us introduce a notation for the length of the columns of the Young diagrams; \[(\lambda^{(1)})^{\vee}=(\ell_{1},\ell_{2},\cdots),\qquad(\lambda^{(2)})^{\vee }=(k_{1},k_{2},\cdots). \tag{4.9}\] Omitting the normalization factors for simplicity, we find the following contributions to the partition function; 1. Fundamental and anti-fundamental matter contribution \[\prod_{i=1}^{\infty}\frac{[d_{2}q^{i-1}t^{1-[\frac{\ell_{i}+1}{2}]};t]_{ \infty}[d_{3}q^{i-1}t^{-[\frac{k_{i}-1}{2}]};t]_{\infty}[Q^{-1}d_{3}q^{i-1}t^{ -[\frac{\ell_{i}}{2}]};t]_{\infty}[Qd_{2}q^{i-1}t^{1-[\frac{k_{i}}{2}]};t]_{ \infty}}{[d_{4}^{-1}Qq^{1-i}t^{1+[\frac{\ell_{i}}{2}]};t]_{\infty}[d_{1}^{-1} Q^{-1}q^{1-i}t^{1-[\frac{k_{i}}{2}]};t]_{\infty}[d_{1}^{-1}q^{1-i}t^{1+[ \frac{k_{i}-1}{2}]};t]_{\infty}}.\] (4.10) 2. 
Vector multiplet contribution \[\prod_{i,j=1}^{\infty}\frac{[q^{j-i-1}t^{1+[\frac{\ell_{i}-\ell_{j} }{2}]};t]_{\infty}}{[q^{j-i}t^{1+[\frac{\ell_{i}-\ell_{j}}{2}]};t]_{\infty}} \prod_{i,j=1}^{\infty}\frac{[q^{j-i-1}t^{1+[\frac{k_{i}-k_{j}}{2}]};t]_{\infty}} {[q^{j-i}t^{1+[\frac{k_{i}-k_{j}}{2}]};t]_{\infty}}\] \[\prod_{i,j=1}^{\infty}\frac{[Q^{-1}q^{j-i-1}t^{1+[\frac{k_{i}-\ell_ {j}-1}{2}]};t]_{\infty}}{[Q^{-1}q^{j-i}t^{1+[\frac{k_{i}-\ell_{j}-1}{2}]};t]_{ \infty}}\prod_{i,j=1}^{\infty}\frac{[Qq^{j-i-1}t^{1+[\frac{\ell_{i}-k_{j}+1}{2 }]};t]_{\infty}}{[Qq^{j-i}t^{1+[\frac{\ell_{i}-k_{j}+1}{2}]};t]_{\infty}}\] (4.11) By tuning some of mass parameters (See Appendix E), we can truncate the Young diagrams with a finite width. Namely we have \(\ell_{i}=0\) for \(m<i\) and \(k_{j}=0\) for \(n<j\). In this case the matter contribution becomes simple. In terms of the variables16 Footnote 16: Note that \(\lfloor\frac{\ell_{i}}{2}\rfloor\) and \(\lfloor\frac{k_{j}}{2}\rfloor\) count the vertical dominos of length \(2\) in the Young diagrams \(\lambda\) and \(\mu\), respectively. \[z_{i}=q^{1-i}t^{\lfloor\frac{\ell_{i}}{2}\rfloor},\qquad w_{j}=q^{1-j}t^{ \lfloor\frac{k_{j}}{2}\rfloor}, \tag{4.12}\] we obtain \[\prod_{i=1}^{m}\frac{[d_{2}z_{i}^{-1}t^{1-(\ell_{i})};t]_{\infty}[Q^{-1}d_{3} z_{i}^{-1};t]_{\infty}}{[d_{4}^{-1}Qz_{i}t;t]_{\infty}[d_{1}^{-1}z_{i}t^{( \ell_{i})};t]_{\infty}}\prod_{j=1}^{n}\frac{[d_{3}w_{j}^{-1}t^{1-(k_{j})};t]_{ \infty}[Qd_{2}w_{j}^{-1}t;t]_{\infty}}{[d_{1}^{-1}Q^{-1}w_{j};t]_{\infty}[d_{4 }^{-1}w_{j}t^{(k_{j})};t]_{\infty}}, \tag{4.13}\] where we have used \[\lfloor\frac{K-L}{2}\rfloor=\lfloor\frac{K}{2}\rfloor-\lfloor\frac{L}{2} \rfloor+\{(K)-1\}\cdot(L),\qquad K,L\in\mathbb{Z} \tag{4.14}\] with \(L=\pm 1\). Here \((K)\) denotes the parity of an integer \(K\); \((K)=1\) for an odd integer and \((K)=0\), otherwise. 
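The identity (4.14) with \(L=\pm 1\), together with \(\lfloor\frac{m+1}{2}\rfloor+\lfloor\frac{m}{2}\rfloor=m\) quoted earlier, can be checked by brute force; a quick sketch in Python (whose `//` is floor division, also for negative integers):

```python
def par(K):
    # (K): parity of an integer, 1 for odd and 0 for even
    return K % 2

for K in range(-30, 31):
    # floor((K+1)/2) + floor(K/2) = K
    assert (K + 1) // 2 + K // 2 == K
    for L in (1, -1):
        # identity (4.14): floor((K-L)/2) = floor(K/2) - floor(L/2) + {(K)-1}*(L)
        lhs = (K - L) // 2
        rhs = K // 2 - L // 2 + (par(K) - 1) * par(L)
        assert lhs == rhs
```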
However, for the vector multiplet contribution we have to make an appropriate decomposition of \((i,j)\in\mathbb{N}\times\mathbb{N}\) into four regions \(R_{\rm I},\cdots,R_{\rm IV}\) according to the truncation condition. For example, for a pair \((\ell_{i},k_{j})\), we define \[R_{\rm I}=\{1\leq i\leq m,1\leq j\leq n\},\quad R_{\rm II}=\{m+1\leq i<\infty,1\leq j\leq n\},\] \[R_{\rm III}=\{1\leq i\leq m,n+1\leq j<\infty\},\quad R_{\rm IV}=\{m+1\leq i<\infty,n+1\leq j<\infty\}, \tag{4.15}\] and for \((\ell_{i},\ell_{j})\) we take \(m=n\). The vector multiplet contribution becomes trivial only for \(R_{\rm IV}\). We find the vector multiplet contributions are 1. From the region \(R_{\rm I}\), \[\prod_{i\neq j=1}^{m}\frac{[(tz_{i}/qz_{j})\ t^{\{(\ell_{i})-1\}\cdot(\ell_{j})};t]_{\infty}}{[(tz_{i}/z_{j})\ t^{\{(\ell_{i})-1\}\cdot(\ell_{j})};t]_{\infty}}\prod_{i\neq j=1}^{n}\frac{[(tw_{i}/qw_{j})\ t^{\{(k_{i})-1\}\cdot(k_{j})};t]_{\infty}}{[(tw_{i}/w_{j})\ t^{\{(k_{i})-1\}\cdot(k_{j})};t]_{\infty}}\] \[\prod_{i=1}^{m}\prod_{j=1}^{n}\frac{[(Q^{-1}w_{j}/qz_{i})\ t^{(k_{j})\cdot\{1-(\ell_{i})\}};t]_{\infty}}{[(Q^{-1}w_{j}/z_{i})\ t^{(k_{j})\cdot\{1-(\ell_{i})\}};t]_{\infty}}\frac{[(Qtz_{i}/qw_{j})\ t^{(\ell_{i})\cdot\{1-(k_{j})\}};t]_{\infty}}{[(Qtz_{i}/w_{j})\ t^{(\ell_{i})\cdot\{1-(k_{j})\}};t]_{\infty}}.\] (4.16) Here and henceforth, we use the shorthand notation \(\prod_{i\neq j=1}^{m}\) for the product over \(1\leq i,j\leq m\) without the diagonal part \(i=j\). 2. 
For two semi-infinite regions \(R_{\rm II}\) and \(R_{\rm III}\) only the "boundary" contributions remain; \[\prod_{i=1}^{m}\frac{[q^{m-i}t^{1+\lfloor\frac{\ell_{i}}{2}\rfloor} ;t]_{\infty}}{[q^{i-m-1}t^{1+\lfloor-\frac{\ell_{i}}{2}\rfloor};t]_{\infty}} \frac{[Qq^{n-i}t^{1+\lfloor\frac{\ell_{i}+1}{2}\rfloor};t]_{\infty}}{[Q^{-1}q^ {i-n-1}t^{\lfloor\frac{1-\ell_{i}}{2}\rfloor};t]_{\infty}}\] (4.17) \[\prod_{j=1}^{n}\frac{[q^{n-j}t^{1+\lfloor\frac{k_{j}}{2}\rfloor};t ]_{\infty}}{[q^{j-n-1}t^{1+\lfloor\frac{-k_{j}}{2}\rfloor};t]_{\infty}}\frac{ [Q^{-1}q^{m-j}t^{1+\lfloor\frac{k_{j}-1}{2}\rfloor};t]_{\infty}}{[Qq^{j-m-1} t^{1+\lfloor\frac{1-k_{j}}{2}\rfloor};t]_{\infty}}\] \[= \prod_{i=1}^{m}\frac{[z_{i}q^{m-1}t;t]_{\infty}}{[z_{i}^{-1}q^{-m }t^{1-(\ell_{i})};t]_{\infty}}\frac{[Qz_{i}q^{n-1}t^{1+(\ell_{i})};t]_{\infty} }{[Q^{-1}z_{i}^{-1}q^{-n};t]_{\infty}}\] \[\prod_{j=1}^{n}\frac{[w_{j}q^{n-1}t;t]_{\infty}}{[w_{j}^{-1}q^{-n }t^{1-(k_{j})};t]_{\infty}}\frac{[Q^{-1}w_{j}q^{m-1}t^{(k_{j})};t]_{\infty}}{[ Qw_{j}^{-1}q^{-m};t]_{\infty}},\] where we have used (4.14). 
In summary, up to a \(z_{i},w_{j}\) independent normalization factor, we can rewrite the Nekrasov partition function as a sum over the positive cone of a lattice with the following weight function \[W_{m+n}(z) \tag{4.18}\] \[=\prod_{i=1}^{m}\frac{[d_{2}z_{i}^{-1}t^{1-(\ell_{i})};t]_{\infty}[Q^{-1}d_{3}z_{i}^{-1};t]_{\infty}}{[d_{4}^{-1}Qz_{i}t;t]_{\infty}[d_{1}^{-1}z_{i}t^{(\ell_{i})};t]_{\infty}}\frac{[z_{i}q^{m-1}t;t]_{\infty}}{[z_{i}^{-1}q^{-m}t^{1-(\ell_{i})};t]_{\infty}}\frac{[Qz_{i}q^{n-1}t^{1+(\ell_{i})};t]_{\infty}}{[Q^{-1}z_{i}^{-1}q^{-n};t]_{\infty}}\] \[\prod_{j=1}^{n}\frac{[Q^{-1}d_{3}z_{m+j}^{-1};t]_{\infty}[d_{2}z_{m+j}^{-1}t^{(k_{j})-1};t]_{\infty}}{[d_{1}^{-1}z_{m+j}t^{1-(k_{j})};t]_{\infty}[d_{4}^{-1}Qz_{m+j}t;t]_{\infty}}\frac{[Qz_{m+j}q^{n-1}t^{2-(k_{j})};t]_{\infty}}{[Q^{-1}z_{m+j}^{-1}q^{-n};t]_{\infty}}\frac{[z_{m+j}q^{m-1}t;t]_{\infty}}{[z_{m+j}^{-1}q^{-m}t^{(k_{j})-1};t]_{\infty}}\] \[\prod_{i\neq j=1}^{m}\frac{[(tz_{i}/qz_{j})\ t^{\{(\ell_{i})-1\}\cdot(\ell_{j})};t]_{\infty}}{[(tz_{i}/z_{j})\ t^{\{(\ell_{i})-1\}\cdot(\ell_{j})};t]_{\infty}}\prod_{i\neq j=1}^{n}\frac{[(tz_{m+i}/qz_{m+j})\ t^{(k_{i})\cdot\{(k_{j})-1\}};t]_{\infty}}{[(tz_{m+i}/z_{m+j})\ t^{(k_{i})\cdot\{(k_{j})-1\}};t]_{\infty}}\] \[\prod_{i=1}^{m}\prod_{j=1}^{n}\frac{[(tz_{m+j}/qz_{i})\ t^{-(k_{j})\cdot(\ell_{i})};t]_{\infty}}{[(tz_{m+j}/z_{i})\ t^{-(k_{j})\cdot(\ell_{i})};t]_{\infty}}\frac{[(tz_{i}/qz_{m+j})\ t^{\{(\ell_{i})-1\}\cdot\{1-(k_{j})\}};t]_{\infty}}{[(tz_{i}/z_{m+j})\ t^{\{(\ell_{i})-1\}\cdot\{1-(k_{j})\}};t]_{\infty}},\] where we have defined \(z_{m+j}=Q^{-1}w_{j}t^{(k_{j})-1}=Q^{-1}q^{1-j}t^{\lfloor\frac{k_{j}-1}{2}\rfloor}\). Substitution of the mass tuning condition \(d_{2}=q^{-m}\) and \(d_{3}=q^{-n}\) leads to some cancellation in the weight function (4.18). 
By further defining \((\ell_{m+j})=1-(k_{j})\), we can make \(W_{m+n}(z)\) completely symmetric in the two groups of variables; \[W_{m+n}(z)=\prod_{I=1}^{m+n}\frac{[z_{I}q^{m-1}t;t]_{\infty}[Qz_{I}q^{n-1}t^{1+(\ell_{I})};t]_{\infty}}{[d_{4}^{-1}Qz_{I}t;t]_{\infty}[d_{1}^{-1}z_{I}t^{(\ell_{I})};t]_{\infty}}\prod_{I\neq J=1}^{m+n}\frac{[(tz_{I}/qz_{J})\ t^{\{(\ell_{I})-1\}\cdot(\ell_{J})};t]_{\infty}}{[(tz_{I}/z_{J})\ t^{\{(\ell_{I})-1\}\cdot(\ell_{J})};t]_{\infty}}. \tag{4.19}\] In the expansion of the Nekrasov partition function, the contribution from a fixed point \((\lambda^{(1)},\lambda^{(2)})\) has the weight \(x_{1}^{|\lambda^{(1)}|_{o}+|\lambda^{(2)}|_{e}}x_{2}^{|\lambda^{(1)}|_{e}+|\lambda^{(2)}|_{o}}\), where \[|\lambda|_{o}=\sum_{k\geq 1}\lambda_{2k-1}=\sum_{k\geq 1}\lfloor\frac{\lambda_{k}^{\vee}+1}{2}\rfloor,\quad|\lambda|_{e}=\sum_{k\geq 1}\lambda_{2k}=\sum_{k\geq 1}\lfloor\frac{\lambda_{k}^{\vee}}{2}\rfloor. \tag{4.20}\] Recall that we have specialized \(x_{1}\) and \(x_{2}\) as (4.6), which implies that the dependence of the weight on \(\Lambda\) and \(x\) is \[\Lambda^{|\lambda^{(1)}|_{e}+|\lambda^{(2)}|_{o}}x^{|\lambda^{(1)}|_{o}-|\lambda^{(1)}|_{e}+|\lambda^{(2)}|_{e}-|\lambda^{(2)}|_{o}}. \tag{4.21}\] Since \(|\lambda|_{o}-|\lambda|_{e}=\sum_{k\geq 1}(\lambda_{k}^{\vee})\), the power of \(x\), which is identified with the \(SU(2)\) spin variable, is \[2s=\sum_{i=1}^{m}(\ell_{i})-\sum_{j=1}^{n}(k_{j})=\sum_{I=1}^{m+n}(\ell_{I})-n. \tag{4.22}\] Hence only columns of odd length can produce a non-vanishing power of \(x\). We see that the range of the power \(p\) of \(x\) is \(-n\leq p\leq m\) (See also Figure 1). Similarly, the power of \(\Lambda\), which we identify as the instanton number, is \[\sum_{i=1}^{m}\lfloor\frac{\ell_{i}}{2}\rfloor+\sum_{j=1}^{n}\lfloor\frac{k_{j}+1}{2}\rfloor=\sum_{i=1}^{m}\lfloor\frac{\ell_{i}}{2}\rfloor+\sum_{j=1}^{n}\lfloor\frac{k_{j}}{2}\rfloor+\sum_{j=1}^{n}(k_{j}). 
\tag{4.23}\] Now we are ready to prove the following theorem. **Theorem 4.1**.: _Under the identification of the parameters_ \[d_{1}=\frac{1}{q^{m-1}a_{1}b_{1}},\quad d_{2}=q^{-m},\quad d_{3}=q^{-n},\quad d _{4}=\frac{1}{q^{n-1}a_{2}b_{2}},\quad Q=\frac{q^{m-n}a_{1}}{ta_{2}}, \tag{4.24}\] _the affine Laumon partition function \(\mathcal{Z}_{\mathrm{AL}}\) (4.5) coincides with the \(\mathbb{C}^{N+1}\)-valued function (3.7) with the integration cycle \(\xi\) of (3.21). More precisely, the \(N+1\) components of \(\mathcal{Z}_{\mathrm{AL}}\) with respect to the expansion parameter \(x\) are identified as the Jackson integrals \(\langle e_{k}(a_{2},b_{1}),\xi\rangle\) with respect to the Matsuo basis \(e_{k}\)\((0\leq k\leq N=m+n)\)._ Proof.: Rewriting the dummy indices in (4.19), we consider \[W_{N}(z)=\prod_{i=1}^{N}\frac{[z_{i}q^{m-1}t;t]_{\infty}[Qz_{i}q^{n-1}t^{1+( \ell_{i})};t]_{\infty}}{[d_{4}^{-1}Qtz_{i};t]_{\infty}[d_{1}^{-1}z_{i}t^{(\ell_ {i})};t]_{\infty}}\prod_{i\neq j=1}^{N}\frac{[(tz_{i}/qz_{j})\ t^{\{(\ell_{i})-1 \}\cdot(\ell_{j})};t]_{\infty}}{[(tz_{i}/z_{j})\ t^{\{(\ell_{i})-1\}\cdot(\ell_ {j})};t]_{\infty}}, \tag{4.25}\] where \(N=m+n\). By the parameter relation (4.24), it can be written as \[W_{N}(z)=\prod_{i=1}^{N}\frac{[z_{i}q^{m-1}t;t]_{\infty}[q^{m-1}\frac{a_{1}}{a_{2}} z_{i}t^{(\ell_{i})};t]_{\infty}}{[q^{m-1}a_{1}b_{2}z_{i};t]_{\infty}[q^{m-1}a_{1}b_{ 1}z_{i}t^{(\ell_{i})};t]_{\infty}}\prod_{i\neq j=1}^{N}\frac{[(tz_{i}/qz_{j}) \ t^{\{(\ell_{i})-1\}\cdot(\ell_{j})};t]_{\infty}}{[(tz_{i}/z_{j})\ t^{\{(\ell_{ i})-1\}\cdot(\ell_{j})};t]_{\infty}}. 
\tag{4.26}\] Rescaling the integration variables as \(z_{i}\to z_{i}/(q^{m-1}a_{1})\), we have \[W_{N}(z)=\prod_{i=1}^{N}\frac{[\frac{z_{i}}{a_{1}}t;t]_{\infty}[\frac{z_{i}}{a_{2}}t^{(\ell_{i})};t]_{\infty}}{[b_{2}z_{i};t]_{\infty}[b_{1}z_{i}t^{(\ell_{i})};t]_{\infty}}\prod_{i\neq j=1}^{N}\frac{[(tz_{i}/qz_{j})\ t^{\{(\ell_{i})-1\}\cdot(\ell_{j})};t]_{\infty}}{[(tz_{i}/z_{j})\ t^{\{(\ell_{i})-1\}\cdot(\ell_{j})};t]_{\infty}}. \tag{4.27}\] We decompose the index set as \[I\cup J=\{1,2,\cdots,N\},\qquad I=\{i|(\ell_{i})=0\},\quad J=\{j|(\ell_{j})=1\}. \tag{4.28}\] From (4.22), we see that \(2s+n=|J|\). Then \[W_{N}(z)=W_{N}^{(0)}(z)P_{J}(z), \tag{4.29}\] where \[W_{N}^{(0)}(z)=\prod_{i=1}^{N}\frac{[\frac{z_{i}}{a_{1}}t;t]_{\infty}[\frac{z_{i}}{a_{2}}t;t]_{\infty}}{[b_{2}z_{i};t]_{\infty}[b_{1}z_{i};t]_{\infty}}\prod_{i\neq j=1}^{N}\frac{[tz_{i}/qz_{j};t]_{\infty}}{[tz_{i}/z_{j};t]_{\infty}}, \tag{4.30}\] \[P_{J}(z)=\prod_{j\in J}\frac{1-b_{1}z_{j}}{1-\frac{z_{j}}{a_{2}}}\times\prod_{i\in I}\prod_{j\in J}\frac{z_{j}-q^{-1}z_{i}}{z_{j}-z_{i}}. \tag{4.31}\] From (4.22), the term of order \((k-n)\) in \(x\) is a sum of \(\binom{N}{k}\) terms with \(|J|=k\). We will show that the term corresponding to the pair of partitions \((\lambda,\mu)\) in the Laumon partition function coincides with the term \(z_{i}=\xi_{i}t^{\nu_{i}}\)\((i=1,\ldots,N)\) in the Jackson integration, where \[\mu_{i}^{\vee} =\left\{\begin{array}{cc}2\nu_{i}&i\in J\\ 2\nu_{i}-1&i\notin J\end{array}\right.\quad(i=1,\ldots,n), \tag{4.32}\] \[\lambda_{i}^{\vee} =\left\{\begin{array}{cc}2\nu_{i+n}+1&i+n\in J\\ 2\nu_{i+n}&i+n\notin J\end{array}\right.\quad(i=1,\ldots,m). \tag{4.33}\] From (3.3), (3.6), we see that \(W_{N}^{(0)}(z)=\Delta(z,1)\Phi(z)\). 
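The bookkeeping with \(|\lambda|_{o}\), \(|\lambda|_{e}\) and column parities used in (4.20)-(4.22) and in the identification (4.32), (4.33) rests on elementary conjugate-partition identities; a quick check on sample partitions (standard combinatorics only):

```python
def conjugate(lam):
    """Column lengths of the Young diagram with row lengths lam."""
    if not lam:
        return []
    return [sum(1 for r in lam if r >= k) for k in range(1, lam[0] + 1)]

samples = [[], [1], [3, 1], [4, 4, 2, 1], [5, 3, 3, 1, 1], [2, 2, 2]]
for lam in samples:
    odd_rows = sum(lam[0::2])     # |lambda|_o = lambda_1 + lambda_3 + ...
    even_rows = sum(lam[1::2])    # |lambda|_e = lambda_2 + lambda_4 + ...
    cols = conjugate(lam)
    # (4.20): row sums in terms of the conjugate partition
    assert odd_rows == sum((c + 1) // 2 for c in cols)
    assert even_rows == sum(c // 2 for c in cols)
    # behind (4.22): only columns of odd length contribute to the power of x
    assert odd_rows - even_rows == sum(c % 2 for c in cols)
```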
The remaining task is to show the identification of \(P_{J}(z)\) with the cocycle factor (3.8), which is achieved as \[\sum_{J\in[N],|J|=k}P_{J}(z)=c_{k}\hat{e}_{k}(z)\prod_{i=1}^{N}\frac{1}{1-\frac{z_{i}}{a_{2}}}, \tag{4.34}\] where \(c_{k}=q^{(k-m)(k-n)/2}[\begin{smallmatrix}N\\ k\end{smallmatrix}]_{q^{-1}}/[\begin{smallmatrix}N\\ m\end{smallmatrix}]_{q^{-1}}\). ## 5. Four dimensional limit and KZ equation In [10] we computed the four dimensional limit of the Hamiltonian \(\mathcal{H}_{\mathrm{S}}\) and confirmed that Shakirov's non-stationary difference equation has the correct four dimensional limit. In section 2, we have shown that the representation matrix of \(\mathcal{H}_{\mathrm{S}}\) is nothing but the \(R\)-matrix of \(U_{\mathrm{v}}(A_{1}^{(1)})\) with generic spins. Let us work out the four dimensional limit of the \(R\)-matrix. **Proposition 5.1**.: _Under the limit \(q=e^{h},d_{i}=q^{m_{i}}\), \(h\to 0\), we have_ \[R=1+hR^{(1)}+O(h^{2}), \tag{5.1}\] _where \(R^{(1)}\) is a tridiagonal matrix, which is generically infinite dimensional. Furthermore, it can be truncated to a finite matrix of size \(m+n+1\), if \(m_{1}\)(or \(m_{2}\))\(=-m\) and \(m_{3}\)(or \(m_{4}\))\(=-n\) for \(m,n\in\mathbb{Z}_{\geq 0}\)._ For example, in the case of \(m_{1}=-2,m_{3}=-1\), we have \[R^{(1)}=\left[\begin{array}{cccc}\frac{3(\Lambda m_{2}-\Lambda)}{\Lambda-1}&-\frac{3(m_{2}-1)}{\Lambda-1}&0&0\\ -\frac{\Lambda m_{4}}{\Lambda-1}&\frac{2\Lambda m_{2}+\Lambda m_{4}}{\Lambda-1}&-\frac{2m_{2}}{\Lambda-1}&0\\ 0&-\frac{2\Lambda(m_{4}-1)}{\Lambda-1}&\frac{\Lambda+\Lambda m_{2}+2\Lambda m_{4}-2}{\Lambda-1}&-\frac{m_{2}+1}{\Lambda-1}\\ 0&0&-\frac{3(\Lambda m_{4}-2\Lambda)}{\Lambda-1}&\frac{3(\Lambda m_{4}-2)}{\Lambda-1}\end{array}\right].\] Proof.: Under the limit \(q=e^{h}\), \(h\to 0\), we have \[\frac{(-q^{\alpha}x;q)_{\infty}}{(-q^{\beta}x;q)_{\infty}}=(1+x)^{\beta-\alpha}\Big{\{}1-\frac{h}{2}(\alpha-\beta)(\alpha+\beta-1)\frac{x}{x+1}+O(h^{2})\Big{\}}. 
\tag{5.2}\] Then the defining relation of the matrix \(R=(r_{i,j})\) \[q^{i(i+1)/2}x^{i}\frac{(-q^{i}d_{1}d_{2}x;q)_{\infty}}{(-d_{2}x;q)_{\infty}}\frac{(-q^{-i}d_{3}d_{4}\frac{\Lambda}{x};q)_{\infty}}{(-d_{4}\frac{\Lambda}{x};q)_{\infty}}=\sum_{j}r_{i,j}q^{-j(j+1)/2}x^{j}\frac{(-d_{1}x;q)_{\infty}}{(-q^{-j}x;q)_{\infty}}\frac{(-d_{3}\frac{\Lambda}{x};q)_{\infty}}{(-q^{j}\frac{\Lambda}{x};q)_{\infty}}, \tag{5.3}\] can be written as \[A_{i}=\sum_{j}\left(\frac{x+\Lambda}{x+1}\right)^{j-i}B_{j}r_{i,j}, \tag{5.4}\] where up to \(O(h^{2})\), \[A_{i}= 1+\frac{1}{2}h\left\{-\frac{\Lambda(i-m_{3})(i-m_{3}-2m_{4}+1)}{x+\Lambda}-\frac{x(i+m_{1})(i+m_{1}+2m_{2}-1)}{x+1}+i(i+1)\right\},\] \[B_{j}= 1+\frac{1}{2}h\left\{\frac{\Lambda(j-m_{3})(j+m_{3}-1)}{x+\Lambda}+\frac{x(j-m_{1}+1)(j+m_{1})}{x+1}-j(j+1)\right\}. \tag{5.5}\] We put \(r_{i,j}=r_{i,j}^{(0)}+hr_{i,j}^{(1)}+O(h^{2})\). From the leading term, we have \[\sum_{j}\left(\frac{x+\Lambda}{x+1}\right)^{j-i}r_{i,j}^{(0)}=1, \tag{5.6}\] hence \[r_{i,j}^{(0)}=\delta_{i,j}. \tag{5.7}\] Then, from the \(O(h)\) terms, we obtain \[\sum_{j}\left(\frac{x+\Lambda}{x+1}\right)^{j-i}r_{i,j}^{(1)}=i(i+1)-\frac{x(i+m_{1})(i+m_{2})}{1+x}-\frac{\Lambda(i-m_{3})(i-m_{4})}{x+\Lambda}, \tag{5.8}\] and the solution is given by \[r_{i,i-1}^{(1)}=\frac{\Lambda(i-m_{3})(i-m_{4})}{\Lambda-1},\] \[r_{i,i}^{(1)}=-\frac{\Lambda\{(i+m_{1})(i+m_{2})+(i-m_{3})(i-m_{4})\}}{\Lambda-1}+i(i+1),\] \[r_{i,i+1}^{(1)}=\frac{(i+m_{1})(i+m_{2})}{\Lambda-1},\] \[r_{i,j}^{(1)}=0\quad\text{otherwise}. \tag{5.9}\] The truncation can also be seen from this expression. 
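Expanding (5.4) to order \(h\) gives \(\sum_{j}\left(\frac{x+\Lambda}{x+1}\right)^{j-i}r^{(1)}_{i,j}=A^{(1)}_{i}-B^{(1)}_{i}\), where \(A_{i}=1+hA^{(1)}_{i}\) and \(B_{j}=1+hB^{(1)}_{j}\). That the tridiagonal solution (5.9) satisfies this relation with \(A_{i},B_{j}\) of (5.5) can be verified exactly in rational arithmetic; the values of \(x,\Lambda,m_{i}\) below are arbitrary test data:

```python
from fractions import Fraction as F

x, L = F(2, 7), F(3, 5)                   # generic test values for x and Lambda
m1, m2, m3, m4 = F(-2), F(5, 3), F(-1), F(7, 4)

def A1(i):  # brace content of A_i in (5.5), i.e. 2 * A_i^{(1)}
    return (-L*(i - m3)*(i - m3 - 2*m4 + 1)/(x + L)
            - x*(i + m1)*(i + m1 + 2*m2 - 1)/(x + 1) + i*(i + 1))

def B1(j):  # brace content of B_j in (5.5), i.e. 2 * B_j^{(1)}
    return (L*(j - m3)*(j + m3 - 1)/(x + L)
            + x*(j - m1 + 1)*(j + m1)/(x + 1) - j*(j + 1))

def r1(i, j):  # tridiagonal solution (5.9)
    if j == i - 1:
        return L*(i - m3)*(i - m4)/(L - 1)
    if j == i:
        return -L*((i + m1)*(i + m2) + (i - m3)*(i - m4))/(L - 1) + i*(i + 1)
    if j == i + 1:
        return (i + m1)*(i + m2)/(L - 1)
    return F(0)

u = (x + L)/(x + 1)
for i in range(-4, 5):
    lhs = sum(u**(j - i) * r1(i, j) for j in (i - 1, i, i + 1))
    assert 2*lhs == A1(i) - B1(i)   # factor 2 absorbs the 1/2 in (5.5)
```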
### Identification with the KZ equation The result given above is consistent with the four dimensional limit of the operator \(\mathcal{H}_{\mathrm{S}}\) (see [10] eq.(5.32)): \[\mathcal{H}_{\mathrm{S}} =1+hH_{4d}+O(h^{2}), \tag{5.10}\] \[H_{4d} =\vartheta_{x}(\vartheta_{x}+1)+\frac{\Lambda-x}{1-\Lambda}(\vartheta_{x}+m_{1})(\vartheta_{x}+m_{2})+\frac{\Lambda}{x}\frac{x-1}{1-\Lambda}(\vartheta_{x}-m_{3})(\vartheta_{x}-m_{4}). \tag{5.11}\] Using (5.11) and defining \(\kappa\) and \(a\) by \(t=q^{\kappa},Q=q^{a}\)18, we can see that the four dimensional limit of Shakirov's equation Footnote 18: We have scaled the Coulomb parameter \(a\) by \(\epsilon_{1}\). \[\Psi(t\Lambda,x)=\mathcal{H}_{\mathrm{S}}\Psi(\Lambda,\frac{x}{qtQ}), \tag{5.12}\] takes a form resembling the Knizhnik-Zamolodchikov equation \[\kappa\frac{\partial}{\partial\Lambda}\Psi(\Lambda,x)=\frac{H_{4d}-(\kappa+1+a)\vartheta_{x}}{\Lambda}\Psi(\Lambda,x)=\left\{\frac{A_{0}}{\Lambda}+\frac{A_{1}}{\Lambda-1}\right\}\Psi(\Lambda,x), \tag{5.13}\] where \(A_{0},A_{1}\) are operators acting on the variable \(x\): \[A_{0}=\vartheta_{x}(\vartheta_{x}-\kappa-a)-x(\vartheta_{x}+m_{1})(\vartheta_{x}+m_{2}), \tag{5.14}\] \[A_{1}=(x-1)(\vartheta_{x}+m_{1})(\vartheta_{x}+m_{2})+(\frac{1}{x}-1)(\vartheta_{x}-m_{3})(\vartheta_{x}-m_{4}). \tag{5.15}\] Below, we will show the relation with the standard KZ equation more explicitly. Following [24], we write the \(\widehat{\mathfrak{sl}_{2}}\) KZ equation for generic complex spins \(j_{a}\) and level \(k\) (\(\kappa=k+2=\log t/\log q\)) in the form \[\kappa\frac{\partial}{\partial z_{a}}\psi=\sum_{b(\neq a)=1}^{n}\frac{\Omega_{a,b}}{z_{a}-z_{b}}\psi,\quad(a=1,\dots,n) \tag{5.16}\] \[\Omega_{a,b}=-(x_{a}-x_{b})^{2}\frac{\partial^{2}}{\partial x_{a}\partial x_{b}}+2(x_{a}-x_{b})\left(j_{a}\frac{\partial}{\partial x_{b}}-j_{b}\frac{\partial}{\partial x_{a}}\right)+2j_{a}j_{b}. 
\tag{5.17}\] The number of variables \(z_{a},x_{a}\) can be reduced to \(2(n-3)\), thanks to the \(\mathrm{SL}(2,\mathbb{C})\times\mathrm{SL}(2,\mathbb{C})\) symmetry for \(x_{a}\) and \(z_{a}\): \[\sum_{a=1}^{n}x_{a}^{i}\{x_{a}\frac{\partial}{\partial x_{a}}-j_{a}(i+1)\}\psi= 0,\quad\sum_{a=1}^{n}z_{a}^{i}\{z_{a}\frac{\partial}{\partial z_{a}}+h_{a}(i+1)\} \psi=0, \tag{5.18}\] where \(i=0,\pm 1\) and \(h_{a}=j_{a}(j_{a}+1)/\kappa\). We will consider the case \(n=4\), where the conditions (5.18) can be solved as \[\psi=uvf_{1}(z,x),\quad z=\frac{z_{12}z_{34}}{z_{13}z_{24}},\quad x =\frac{x_{12}x_{34}}{x_{13}x_{24}}, \tag{5.19}\] \[u=z_{12}^{-2h_{1}}z_{23}^{h_{1}-h_{2}-h_{3}+h_{4}}z_{24}^{-h_{1} -h_{2}+h_{3}-h_{4}}z_{34}^{h_{1}+h_{2}-h_{3}-h_{4}},\] \[v=x_{12}^{2j_{1}}x_{23}^{-j_{1}+j_{2}+j_{3}-j_{4}}x_{24}^{j_{1}+ j_{2}-j_{3}+j_{4}}x_{34}^{-j_{1}-j_{2}+j_{3}+j_{4}},\] where \(z_{ab}=z_{a}-z_{b},x_{ab}=x_{a}-x_{b}\). Then the equations (5.16) for \(a=1,\dots,4\) are all equivalent and we obtain a single equation for \(f_{1}(z,x)\). Through the gauge transformation \[f(z,x)=gf_{1}(z,x),\quad g=x^{j_{1}+j_{2}}z^{-h_{1}-h_{2}}(z-1)^{\frac{2j_{1}j _{4}}{\kappa}}, \tag{5.20}\] the KZ equation for \(f(z,x)\) can be written concisely as \[\Big{\{}\kappa\vartheta_{z}-\vartheta_{x}(\vartheta_{x}-1) +\frac{x-z}{1-z}(\vartheta_{x}-j_{1}+j_{2})(\vartheta_{x}+j_{3}-j _{4})\] \[+\frac{z(1-x)}{x(1-z)}(\vartheta_{x}+j_{1}+j_{2})(\vartheta_{x}+j _{3}+j_{4})\Big{\}}f(z,x)=0, \tag{5.21}\] where \(\vartheta_{x}=x\frac{\partial}{\partial x}\) and \(\vartheta_{z}=z\frac{\partial}{\partial z}\). In order to identify (5.21) with the four dimensional limit of Shakirov's equation, we change the parameters as19 Footnote 19: Here \(\tilde{a}=a+\kappa\). 
\[(j_{1},j_{2},j_{3},j_{4})=\left(\frac{-m_{1}-m_{3}}{2},\frac{\tilde{a}-1+m_{1}-m_{3}}{2},\frac{\tilde{a}-1+m_{2}-m_{4}}{2},\frac{-m_{2}-m_{4}}{2}\right), \tag{5.22}\] and make a gauge transformation \[\Psi=x^{-m_{3}}(z-1)^{(m_{1}+m_{3})(m_{2}+m_{4})\over 2\kappa}z^{(m_{3}-m_{1})\tilde{a}-(m_{1}-1)m_{1}-(m_{3}-1)m_{3}\over 2\kappa}f. \tag{5.23}\] Then the equation (5.21) exactly agrees with (5.13) \[\Big{\{}\kappa\vartheta_{z}-\vartheta_{x}(\vartheta_{x}-\tilde{a})+{x-z\over 1-z}(\vartheta_{x}+m_{1})(\vartheta_{x}+m_{2})+{z(1-x)\over x(1-z)}(\vartheta_{x}-m_{3})(\vartheta_{x}-m_{4})\Big{\}}\Psi=0, \tag{5.24}\] by the identification \(z=\Lambda\). There are \(2^{4}\) such identifications. This is because the equation (5.24) (= the quantum \(P_{\rm VI}\) equation) has \((\mathbb{Z}_{2})^{4}\subset W(D^{(1)}_{4})\) symmetry \[\{m_{1}\to m_{2},m_{2}\to m_{1}\}\times\{m_{3}\to m_{4},m_{4}\to m_{3}\}\times\{m_{1}\to 1-\tilde{a}-m_{2},m_{2}\to 1-\tilde{a}-m_{1}\}\]\[\times\{m_{3}\to 1+a-m_{4},m_{4}\to 1+a-m_{3}\}, \tag{5.25}\] up to a gauge transformation of the form \(g=x^{k_{1}}(x-1)^{k_{2}}z^{k_{3}}(z-1)^{k_{4}}(x-z)^{k_{5}}\). Each \(\mathbb{Z}_{2}\) transformation in (5.25) corresponds to the Weyl reflection with respect to one of the outer nodes of the \(D^{(1)}_{4}\) Dynkin diagram. **Remark 5.2**.: _The relation between the KZ equation and the quantum \(P_{\rm VI}\) was studied by [42]. The derivation of the KZ equation from the gauge theory is given by [47]. The equation (5.13) was previously obtained from Virasoro five-point functions._ **Acknowledgements.**_We would like to thank S.Arthamonov, M.Bershtein, P.Gavrylenko, M.Ito, M.Noumi, M.Schlosser and G.Shibukawa for useful discussions. Our work is supported in part by Grants-in-Aid for Scientific Research (Kakenhi); 18K03274 (H.K.), 4323K03087 (H.K.), 21K03180 (R.O.), 19K03512 (J.S.), 19K03530 (J.S.) and 22H01116 (Y.Y.). The work of R.O. 
was partly supported by Osaka Central Advanced Mathematical Institute: MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics JPMXP0619217849, and the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University._ ## Appendix A \(q\)-Borel transformation and refined Chern-Simons theory In this appendix, we explain part of the motivation for the original non-stationary difference equation of [51]. Namely, we elucidate why the operator \(\mathcal{B}\) was introduced in the particular form it takes. Part of the motivation to study non-stationary difference equations comes from the best-studied example, the non-stationary Ruijsenaars-Schneider equation suggested by [34]. This non-stationary equation is satisfied by the \(K\)-theoretic character of the affine Laumon space of type \(\widehat{\mathfrak{gl}}_{N}\). The character in question is a special function generalizing the Macdonald polynomial. For Macdonald polynomials, a relation to Chern-Simons theory and knot invariants is well-known [2]. The relation is obtained through the so-called refined Chern-Simons theory. This theory provides some of the most prominent equations satisfied by Macdonald polynomials, among which a key role is played by the integral identity: \[\oint_{|x_{1}|=1}\frac{dx_{1}}{x_{1}}\ldots\oint_{|x_{N}|=1}\frac{dx_{N}}{x_{N}}\ \exp\left(-\sum_{i=1}^{N}\frac{x_{i}^{2}}{2g}\right)\ P_{\lambda}(x_{1},\ldots,x_{N})\ \prod_{i\neq j=1}^{N}\prod_{m=0}^{\infty}\frac{(1-q^{m}x_{i}/x_{j})}{(1-tq^{m}x_{i}/x_{j})}\]\[=\text{const}\ T_{\lambda}\ P_{\lambda}(t^{\rho_{1}},\ldots,t^{\rho_{N}}).\] (A.1) Here \(\rho_{i}=(N+1)/2-i\) is the Weyl vector of \(\mathfrak{gl}_{N}\), the partition \(\lambda\) is arbitrary, and \(T_{\lambda}\) is a certain product of powers of \(q,t\), with exponents given by a quadratic form in \(\lambda\). 
In appendix B of [2], it is explained that, changing the variables via \(u_{i}=\log(x_{i})\), it is possible to trade the Gaussian exponents \(e^{-x_{i}^{2}/2g}\) under the integration sign for theta functions \(\sum_{n}q^{n^{2}/2}u_{i}^{n}\). This can be understood simply as the result of acting on the rest of the integrand with a product of operators \[\mathcal{B}^{\prime}u^{n}=q^{n^{2}/2}u^{n},\] (A.2) or equivalently (see (1.4)) \[\mathcal{B}u^{n}=q^{n^{2}/2+n/2}u^{n}\] (A.3) in each variable \(u_{i}\), and setting all \(u_{1}=\ldots=u_{N}=1\) afterwards. The difference between the operators \(\mathcal{B}^{\prime}\) and \(\mathcal{B}\), linear in the exponent, is inessential and amounts to a shift of variable. In our view, equations for more complicated functions generalizing the Macdonald polynomials (such as the \(K\)-theoretic character of the affine Laumon space) are most likely far-reaching generalizations of the elementary equations for Macdonald polynomials. For this reason, when considering the non-stationary difference equation for degenerate five-point conformal blocks of the \(q\)-Virasoro algebra in [51], we paid attention to the following interesting fact. In [34], the main part of the non-stationary difference equation is the operator \(q^{\Delta/2}\), where \(\Delta\) is a certain second-order differential operator. For the case of rank \(1\), i.e. \(\mathfrak{gl}_{2}\), the action of \(q^{\Delta/2}\) on a monomial of the form \((x_{1}/x_{2})^{n}\) is proportional (up to factors of the form \(q,t\) to a power linear in \(n\)) to \(q^{n^{2}}\). From the perspective described above, it is natural to expect that this operator should be understood as a product of two operators, each of which contributes \(q^{n^{2}/2}\). 
This is natural both from the perspective of Macdonald theory, where this operator (\(\mathcal{B}^{\prime}\) or \(\mathcal{B}\), equivalently) arises in the integral identities, and from the perspective of the Ruijsenaars-Schneider system, where \(\Delta\) is a sum of \(N\) contributions, i.e. \(2\) contributions in the \(\mathfrak{gl}_{2}\) case. This was the principal reason why in [51] we chose to look for the Hamiltonian operator on the right hand side of the non-stationary equation in the form (see (1.3)) \[\mathcal{A}_{1}\mathcal{B}\mathcal{A}_{2}\mathcal{B}\mathcal{A}_{3},\] where the two operators \(\mathcal{B}\) are not joined together but separate. It eventually turned out to be the correct ansatz. ## Appendix B Lemma on the \(q\)-Borel transformation A proof of the following lemma is given in Appendix B to [10]. Here we provide another, more direct proof. In the following we use a shorthand notation for the infinite product \(\varphi(z):=(z;q)_{\infty}\). **Lemma B.1** ([51]).: _For \(n\in\mathbb{Z}\), we have_ \[\mathcal{B}\cdot\frac{1}{\varphi(\alpha x)\varphi(\beta\Lambda/x)}x^{n}=\frac{\varphi(-q^{1+n}\alpha x)\varphi(-q^{-n}\beta\Lambda/x)}{\varphi(\alpha\beta\Lambda)}q^{n(n+1)/2}x^{n},\] (B.1) \[\mathcal{B}^{-1}\cdot\varphi(\alpha x)\varphi(\beta\Lambda/x)x^{n}=\frac{\varphi(q^{-1}\alpha\beta\Lambda)}{\varphi(-q^{-1-n}\alpha x)\varphi(-q^{n}\beta\Lambda/x)}q^{-n(n+1)/2}x^{n}.\] (B.2) Proof.: We will show the first relation (the second relation follows from the first one). 
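The \(n=0\) case of (B.1), which (as shown below) implies the general case, lends itself to a quick numerical sanity check: expand \(1/(\varphi(\alpha x)\varphi(\beta\Lambda/x))\) by Euler's formula \(1/\varphi(z)=\sum_{k\geq 0}z^{k}/(q;q)_{k}\) and apply \(\mathcal{B}x^{m}=q^{m(m+1)/2}x^{m}\) term by term. The following is a minimal floating-point sketch (our own code, not from [10] or [51]; the truncation orders and parameter values are arbitrary test choices):

```python
# Numerical check of (B.1) at n = 0, using the q-Borel action B x^m = q^{m(m+1)/2} x^m
q, alpha, beta, x, Lam = 0.3, 0.7, 0.4, 1.1, 0.2

def phi(z, terms=200):
    """Truncated infinite product phi(z) = (z; q)_inf."""
    p = 1.0
    for m in range(terms):
        p *= 1.0 - z * q**m
    return p

def qfac(k):
    """(q; q)_k."""
    p = 1.0
    for i in range(1, k + 1):
        p *= 1.0 - q**i
    return p

# LHS: B applied term by term to the double Euler expansion of
# 1/(phi(alpha x) phi(beta Lam / x)); the Laurent monomial x^{k-l}
# picks up the factor q^{(k-l)(k-l+1)/2}
N = 120
lhs = sum((alpha * x)**k / qfac(k) * (beta * Lam / x)**l / qfac(l)
          * q**(((k - l) * (k - l + 1)) // 2)
          for k in range(N) for l in range(N))

# RHS of (B.1) at n = 0
rhs = phi(-q * alpha * x) * phi(-beta * Lam / x) / phi(alpha * beta * Lam)

assert abs(lhs - rhs) < 1e-9
```

The agreement up to truncation error is exactly what Lemma B.1 predicts; the same expansion, read in reverse, is the double series appearing in (B.12) below.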
By a rescaling of \(\Lambda,x\), it is enough to show \[f(\Lambda,x):=\frac{1}{\varphi(-qx)\varphi(-\frac{\Lambda}{x})}\mathcal{B} \frac{1}{\varphi(x)\varphi(\frac{\Lambda}{x})}=\frac{1}{\varphi(\Lambda)}.\] (B.3) First we note that, for any function \(F(x)\), we have \[\mathcal{B}\cdot x^{n}F(x) = (px)^{n}F(px)=q^{n(n+1)/2}x^{n}p^{n}F(px)\] (B.4) \[= q^{n(n+1)/2}x^{n}F(pq^{n}x)=q^{n(n+1)/2}x^{n}\mathcal{B}\cdot F (q^{n}x).\] Using this identity, we obtain the following difference equation \[f(q\Lambda,x) = \frac{1}{\varphi(-qx)\varphi(-\frac{q\Lambda}{x})}\mathcal{B}\frac{ 1-\frac{\Lambda}{x}}{\varphi(x)\varphi(\frac{\Lambda}{x})}\] (B.5) \[= \frac{1}{\varphi(-qx)\varphi(-\frac{q\Lambda}{x})}\mathcal{B}\frac {1}{\varphi(x)\varphi(\frac{\Lambda}{x})}-\frac{1}{\varphi(-qx)\varphi(-\frac {q\Lambda}{x})}\mathcal{B}\frac{\frac{\Lambda}{x}}{\varphi(x)\varphi(\frac{ \Lambda}{x})}\] \[= (1+\frac{\Lambda}{x})f(\Lambda,x)-\frac{1+x}{\varphi(-x)\varphi(- \frac{q\Lambda}{x})}\frac{\Lambda}{x}\mathcal{B}\frac{1}{\varphi(q^{-1}x) \varphi(\frac{q\Lambda}{x})}\] \[= (1+\frac{\Lambda}{x})f(\Lambda,x)-(1+x)\frac{\Lambda}{x}f( \Lambda,q^{-1}x).\] Similarly we have \[f(q\Lambda,qx) = \frac{1}{\varphi(-q^{2}x)\varphi(-\frac{\Lambda}{x})}\mathcal{B} \frac{1-x}{\varphi(x)\varphi(\frac{\Lambda}{x})}\] (B.6) \[= \frac{1}{\varphi(-q^{2}x)\varphi(-\frac{\Lambda}{x})}\mathcal{B} \frac{1}{\varphi(x)\varphi(\frac{\Lambda}{x})}-\frac{1}{\varphi(-q^{2}x) \varphi(-\frac{\Lambda}{x})}\mathcal{B}\frac{x}{\varphi(x)\varphi(\frac{ \Lambda}{x})}\] \[= (1+qx)f(\Lambda,x)-\frac{1+\frac{\Lambda}{qx}}{\varphi(-q^{2}x) \varphi(-\frac{\Lambda}{qx})}qx\mathcal{B}\frac{1}{\varphi(qx)\varphi(\frac{ \Lambda}{qx})}\] \[= (1+qx)f(\Lambda,x)-(qx+\Lambda)f(\Lambda,qx).\] Comparing the \(f(q\Lambda,qx)\) given by (B.5) \(|_{x\to qx}\) and (B.6), we have \[f(\Lambda,qx)=f(\Lambda,x).\] (B.7) Hence from (B.5) we have \[f(q\Lambda,x)=(1-\Lambda)f(\Lambda,x).\] (B.8) Since \(f(0,x)=1\), we obtain 
\[f(\Lambda,x)=\varphi(\Lambda)^{-1},\] (B.9) as desired. ### Relation to the pentagon identity There is yet another proof of Lemma B.1. Here we rewrite the first relation (B.1) as \[\mathcal{B}\frac{1}{\varphi(\alpha x)\varphi(\beta\frac{\Lambda}{x})}x^{n}=\frac{\varphi(-q^{n+1}\alpha x)\varphi(-q^{-n}\beta\frac{\Lambda}{x})}{\varphi(\alpha\beta\Lambda)}q^{n(n+1)/2}x^{n},\] (B.10) and show its relation to the pentagon (quantum dilogarithm) identity. (i) First we note that (B.10) follows from its special case \(n=0\): \[\mathcal{B}\frac{1}{\varphi(\alpha x)\varphi(\beta\frac{\Lambda}{x})}=\frac{\varphi(-q\alpha x)\varphi(-\beta\frac{\Lambda}{x})}{\varphi(\alpha\beta\Lambda)}.\] (B.11) In fact, we have \[\mathcal{B}f(x)x^{n}=(px)^{n}\mathcal{B}f(x)=q^{n(n+1)/2}x^{n}p^{n}\mathcal{B}f(x)=q^{n(n+1)/2}x^{n}\mathcal{B}f(q^{n}x).\] Applying this to \(f(x)=\frac{1}{\varphi(\alpha x)\varphi(\beta\frac{\Lambda}{x})}\) and using (B.11), we obtain (B.10): \[\mathcal{B}\frac{1}{\varphi(\alpha x)\varphi(\beta\frac{\Lambda}{x})}x^{n}=q^{n(n+1)/2}x^{n}\mathcal{B}\frac{1}{\varphi(\alpha q^{n}x)\varphi(\beta\frac{q^{-n}\Lambda}{x})}=q^{n(n+1)/2}x^{n}\frac{\varphi(-q^{n+1}\alpha x)\varphi(-q^{-n}\beta\frac{\Lambda}{x})}{\varphi(\alpha\beta\Lambda)}.\] (ii) The equation (B.11) can be written as \[\frac{\varphi(\alpha\beta\Lambda)}{\varphi(\alpha x)\varphi(\beta\frac{\Lambda}{x})}=\mathcal{B}^{-1}\varphi(-q\alpha x)\varphi(-\beta\frac{\Lambda}{x})\]\[=\mathcal{B}^{-1}\sum_{k,l=0}^{\infty}\frac{q^{k(k+1)/2}(\alpha x)^{k}}{(q)_{k}}\frac{q^{l(l-1)/2}(\beta\frac{\Lambda}{x})^{l}}{(q)_{l}}=\sum_{k,l=0}^{\infty}\frac{(\alpha x)^{k}}{(q)_{k}}\frac{(\beta\frac{\Lambda}{x})^{l}}{(q)_{l}}q^{kl},\] (B.12) hence it is enough to show \[\frac{\varphi(\alpha\beta\Lambda)}{\varphi(\alpha x)\varphi(\beta\frac{\Lambda}{x})}=\sum_{k,l=0}^{\infty}\frac{(\alpha x)^{k}}{(q)_{k}}\frac{(\beta\frac{\Lambda}{x})^{l}}{(q)_{l}}q^{kl}.\] (B.13) In terms of \(q\)-commutative variables \(ba=qab\) and \(q\)-exponential function 
\(e_{q}(x)=\varphi(x)^{-1}\), the eq.(B.13) is equivalent to the quantum dilogarithm (pentagon) identity: \[e_{q}(b)e_{q}(a)=e_{q}(a)e_{q}(-ab)e_{q}(b).\] (B.14) The identity (B.14) can be easily derived as follows: \[e_{q}(a)e_{q}(b)=e_{q}(a+b),\] (B.15) \[e_{q}(b)e_{q}(a)e_{q}(b)^{-1}=e_{q}(e_{q}(b)ae_{q}(b)^{-1})=e_{q}(ae_{q}(qb)e_{q}(b)^{-1})\]\[=e_{q}(a(1-b))=e_{q}(a-ab)=e_{q}(a)e_{q}(-ab).\] (B.16) ## Appendix C List of various \(R\) matrices and their characterization Since various \(R\)-matrices are used in the main text, we summarize them here along with their characterizations. (1) The matrix \(R^{\rm Sh}=(r_{i,j}(\Lambda))_{i,j=-n}^{m}\) arising from Shakirov's operator in (2.5): \[q^{\frac{1}{2}i(i+1)}x^{i}(-d_{1}q^{i-m}x)_{m-i}(-d_{4}q^{-i-n}\frac{\Lambda}{x})_{i+n}=\sum_{j=-n}^{m}q^{-\frac{1}{2}j(j+1)}x^{j}(-q^{-m}x)_{m-j}(-q^{-n}\frac{\Lambda}{x})_{j+n}\ r_{i,j}(\Lambda).\] (C.1) (2) The matrix \(R^{\rm HG}=(R^{\rm HG}_{i,j})_{i,j=0}^{\ell}\) defined by the \({}_{4}\phi_{3}\) series in (2.12): \[(q^{\ell-1}\frac{x}{\beta};q^{-1})_{i}(\frac{\alpha x}{z};q)_{\ell-i}=\sum_{j=0}^{\ell}(q^{\ell-1}\frac{x}{z};q^{-1})_{\ell-j}(x;q)_{j}\ R^{\rm HG}_{i,j}.\] (C.2) (3) The matrix \(R^{\rm Ito}=(R^{\rm Ito}_{i,j})_{i,j=0}^{n}\) in Theorem 3.2: \[(q^{n-1}b_{1}x;q^{-1})_{j}(\frac{x}{a_{2}};q)_{n-j}=\sum_{i=0}^{n}(q^{n-1}b_{2}x;q^{-1})_{n-i}(\frac{x}{a_{1}};q)_{i}\;R^{\rm Ito}_{i,j}.\] (C.3) Actually these \(R\) matrices are not unrelated; when \(\ell=n+m\), the relation of \(R^{\rm Sh}\) and \(R^{\rm HG}\) is already proved in §2.1, namely we have (2.13). One may confirm it by changing the "dummy" variable \(x\) in (C.2) to \(-q^{-n}\Lambda/x\), which implies (C.1) up to a scaling constant. On the other hand, with the identification \[\ell=n;\qquad a_{1}=1,\quad a_{2}=\frac{z}{\alpha},\quad b_{1}=\frac{1}{\beta},\quad b_{2}=\frac{1}{z},\] (C.4) we have \(R^{\rm HG}_{i,j}=R^{\rm Ito}_{j,i}\). 
(4) The matrix \(A=A^{\rm Ito}=(A^{\rm Ito}_{i,j})_{i,j=0}^{n}\) in Theorem 3.2: \[\left[\begin{matrix}n\\ i\end{matrix}\right]_{q}q^{\frac{i(i-1)}{2}}(-\frac{q^{1-n}}{z})^{i}(\frac{x}{z};q)_{n-i}(q^{n-i}x;q)_{i}=\sum_{j=0}^{n}\left[\begin{matrix}n\\ j\end{matrix}\right]_{q}q^{\frac{(n-j)(n-j-1)}{2}}b_{1}^{n}(-\frac{q^{1-n}}{a_{2}b_{1}z})^{j}(q^{n-1}a_{2}b_{2}x;q)_{j}(\frac{q^{j-n+1}x}{a_{1}b_{1}z};q)_{n-j}A^{\rm Ito}_{i,j}.\] (C.5) (5) The matrix \(\tilde{A}=(\tilde{A}_{i,j})_{i,j=0}^{N}\) for the normalized solution in §3.2: \[(q^{N-j}a_{2}b_{2}x;q)_{j}(\frac{q^{1-N}}{a_{1}b_{1}}\frac{x}{\Lambda};q)_{N-j}=\sum_{i=0}^{N}(x;q)_{i}(q^{i+1-N}\frac{x}{\Lambda};q)_{N-i}\tilde{A}_{i,j}.\] (C.6) (6) The matrix \({\bf R}_{\mu,\nu}=(\delta_{i+j,k+l}\,{{\bf R}_{\mu,\nu}}{}_{i,j}^{k,l})_{i,j,k,l\geq 0}\) for \(U_{\rm v}(A_{1}^{(1)})\): \[(\frac{b_{\nu}x}{z_{\nu}};q)_{i}(\frac{q^{i}x}{z_{\mu}};q)_{j}=\sum_{k=0}^{n}(\frac{b_{\mu}x}{z_{\mu}};q)_{k}(\frac{q^{k}x}{z_{\nu}};q)_{n-k}\,{{\bf R}_{\mu,\nu}}{}_{i,j}^{k,n-k},\qquad n=i+j.\] (C.7) This matrix \({\bf R}_{\mu,\nu}\) is related to the matrix \(R^{\rm HG}\) by \[{{\bf R}_{\mu,\nu}}{}_{i,n-i}^{j,n-j}=\left[R^{\rm HG}\right]_{\ell\to n,\,z\to b_{\mu}\frac{z_{\nu}}{z_{\mu}},\,\alpha\to b_{\nu},\,\beta\to b_{\mu}}J,\quad J=(\delta_{i+j,n})_{i,j=0}^{n}.\] A significant property of the matrix \({\bf R}\) is that it satisfies the Yang-Baxter relation. The following direct proof using (C.7) may be instructive. 
**Proposition C.1**.: _The matrix \({\bf R}_{\mu,\nu}\) satisfies the Yang-Baxter equation, namely for any \((i,j,k)\), \((i^{\prime\prime},j^{\prime\prime},k^{\prime\prime})\in\mathbb{Z}^{3}\) such that \(i+j+k=i^{\prime\prime}+j^{\prime\prime}+k^{\prime\prime}\) we have_ \[\sum_{i^{\prime},j^{\prime},k^{\prime}}{{\bf R}_{1,2}}{}_{i^{\prime},j^{\prime}}^{i^{\prime\prime},j^{\prime\prime}}\,{{\bf R}_{1,3}}{}_{i,k^{\prime}}^{i^{\prime},k^{\prime\prime}}\,{{\bf R}_{2,3}}{}_{j,k}^{j^{\prime},k^{\prime}}=\sum_{i^{\prime},j^{\prime},k^{\prime}}{{\bf R}_{2,3}}{}_{j^{\prime},k^{\prime}}^{j^{\prime\prime},k^{\prime\prime}}\,{{\bf R}_{1,3}}{}_{i^{\prime},k}^{i^{\prime\prime},k^{\prime}}\,{{\bf R}_{1,2}}{}_{i,j}^{i^{\prime},j^{\prime}}.\] (C.8) Proof.: For \(k_{1},k_{2},\ldots,k_{s}\in\mathbb{Z}_{\geq 0}\) with fixed \(n=k_{1}+k_{2}+\cdots+k_{s}\), define a rational function in \(t_{1},\ldots,t_{n}\) as \[B_{k_{1},k_{2},\ldots,k_{s}}=\mathcal{S}\Big{(}\prod_{1\leq i<j\leq n}\frac{qt_{i}-t_{j}}{t_{i}-t_{j}}\cdot\prod_{a=1}^{s}\prod_{i_{a}=k_{1}+\cdots+k_{a-1}+1}^{k_{1}+\cdots+k_{a}}f_{a}(t_{i_{a}})\Big{)},\] (C.9) where \(f_{a}(t)=\prod_{i=1}^{a-1}\frac{1-\frac{\beta_{i}}{z_{i}}t}{1-\frac{t}{z_{i}}}\ \frac{1}{1-\frac{t}{z_{a}}}\), and \(\mathcal{S}\) is the symmetrization on variables \(t_{1},\ldots,t_{n}\). The denominator of \(B_{k_{1},k_{2},\ldots,k_{s}}\) is \(\prod_{i=1}^{n}\prod_{\mu=1}^{s}(1-\frac{t_{i}}{z_{\mu}})\) and the numerator is a symmetric polynomial in \(t_{1},\ldots,t_{n}\) whose degree in each \(t_{i}\) is \(s-1\). The functions \(B_{k_{1},\ldots,k_{s}}\) form a basis of the linear space of such rational functions of dimension \(\binom{n+s-1}{s-1}\). 
Define the action of a permutation \(\sigma\) of \(\{1,\ldots,s\}\) on \(\{z_{a},\beta_{a}\}\) as \[s_{\sigma}=\{z_{a}\to z_{\sigma(a)},\beta_{a}\to\beta_{\sigma(a)}\},\] then we have the connection relation \[s_{\sigma}(B_{k_{1},\ldots,k_{s}})=\sum_{\{l\}}B_{l_{1},\ldots,l_{s}}C_{\sigma\{k\}}^{\{l\}}.\] Note that the matrix \(C_{(a,a+1)}\) is local in the index \(a\), i.e. it depends only on \(k_{a},k_{a+1},l_{a},l_{a+1}\) and \(z_{a},z_{a+1},\beta_{a},\beta_{a+1}\), since \(s_{(a,a+1)}(f_{b}(t))=f_{b}(t)\) for \(b\neq a,a+1\). Then, due to the relation \(s_{(2,3)}s_{(1,2)}s_{(2,3)}B_{\{k\}}=s_{(1,2)}s_{(2,3)}s_{(1,2)}B_{\{k\}}\), the matrices \(C_{\sigma}\) satisfy the Yang-Baxter relation \[C_{(2,3)}(s_{(2,3)}C_{(1,2)})(s_{(2,3)}s_{(1,2)}C_{(2,3)})=C_{(1,2)}(s_{(1,2)}C_{(2,3)})(s_{(1,2)}s_{(2,3)}C_{(1,2)}).\] (C.10) Furthermore, by the locality, the matrix \(C_{(a,a+1)}\) reduces to \(C_{(1,2)}\), and we have \[C_{(a,a+1)}=C_{(1,2)}|_{z_{1}\to z_{a},z_{2}\to z_{a+1},\beta_{1}\to\beta_{a},\beta_{2}\to\beta_{a+1}}.\] (C.11) Thanks to the specialization \[B_{k_{1},k_{2}}\Big{|}_{t_{i}\to xq^{i-1}}=\frac{(q;q)_{n}}{(1-q)^{n}}\frac{(\frac{b_{1}x}{z_{1}};q)_{k_{2}}}{(\frac{x}{z_{1}};q)_{k_{1}+k_{2}}(\frac{x}{z_{2}};q)_{k_{2}}},\] the coefficient \({C_{(1,2)}}{}_{k_{1},k_{2}}^{l_{1},l_{2}}\) \((k_{1}+k_{2}=l_{1}+l_{2})\) is obtained from \[\frac{(\frac{b_{2}x}{z_{2}};q)_{k_{2}}}{(\frac{x}{z_{2}};q)_{k_{1}+k_{2}}(\frac{x}{z_{1}};q)_{k_{2}}}=\sum_{l_{1},l_{2}}\frac{(\frac{b_{1}x}{z_{1}};q)_{l_{2}}}{(\frac{x}{z_{1}};q)_{l_{1}+l_{2}}(\frac{x}{z_{2}};q)_{l_{2}}}(C_{(1,2)})_{k_{1},k_{2}}^{l_{1},l_{2}},\] i.e. \[(\frac{b_{2}x}{z_{2}};q)_{k_{2}}(q^{k_{2}}\frac{x}{z_{1}};q)_{k_{1}}=\sum_{l_{1},l_{2}}(\frac{b_{1}x}{z_{1}};q)_{l_{2}}(q^{l_{2}}\frac{x}{z_{2}};q)_{l_{1}}(C_{(1,2)})_{k_{1},k_{2}}^{l_{1},l_{2}}.\] (C.12) Equations (C.10), (C.11) and (C.12) give the desired Yang-Baxter relation and the defining equation for the matrix \(C\) (\(={\bf R}\)). 
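The specialization of \(B_{k_{1},k_{2}}\) at \(t_{i}=xq^{i-1}\) used above can be checked independently in exact arithmetic. The following Python sketch (our own code; the parameter values are arbitrary test choices, and we write \(b_{1}\) for \(\beta_{1}\)) compares the symmetrized sum (C.9) for \(s=2\), \(n=3\), \((k_{1},k_{2})=(2,1)\) with the closed form \(\frac{(q;q)_{n}}{(1-q)^{n}}\,(b_{1}x/z_{1};q)_{k_{2}}\big/\big((x/z_{1};q)_{k_{1}+k_{2}}(x/z_{2};q)_{k_{2}}\big)\):

```python
from fractions import Fraction as F
from itertools import permutations

q, x = F(1, 2), F(1, 3)
z1, z2, b1 = F(2), F(3), F(5)           # b1 plays the role of beta_1

def poch(a, k):
    """q-shifted factorial (a; q)_k."""
    p = F(1)
    for m in range(k):
        p *= 1 - a * q**m
    return p

k1, k2 = 2, 1
n = k1 + k2
ts = [x * q**i for i in range(n)]       # specialization t_i = x q^{i-1}

f1 = lambda t: 1 / (1 - t / z1)
f2 = lambda t: (1 - b1 * t / z1) / ((1 - t / z1) * (1 - t / z2))

# symmetrization S(...) of (C.9): plain sum over all permutations of t_1,...,t_n
lhs = F(0)
for sigma in permutations(range(n)):
    t = [ts[s] for s in sigma]
    term = F(1)
    for i in range(n):
        for j in range(i + 1, n):
            term *= (q * t[i] - t[j]) / (t[i] - t[j])
    lhs += term * f1(t[0]) * f1(t[1]) * f2(t[2])   # first k1 slots carry f_1

rhs = (poch(q, n) / (1 - q)**n
       * poch(b1 * x / z1, k2) / (poch(x / z1, k1 + k2) * poch(x / z2, k2)))

assert lhs == rhs
```

At the specialized point most permutations drop out, because the kernel factor \(qt_{i}-t_{j}\) vanishes whenever \(t_{j}=qt_{i}\) appears in a wrong order; the surviving terms sum to the closed form exactly.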
## Appendix D Proof of \(RD_{2}A=ARD_{2}\) ### Recollection of Ito's \(R\) and \(A\) matrices Let \(n\in\mathbb{Z}_{\geq 0}\)20, and \(a_{1},a_{2},b_{1},b_{2},q,\Lambda\in\mathbb{C}\) be generic parameters. Ito has introduced the matrices \(R\) and \(A\) (in [30, **Theorem 1.4**] and [30, **Theorem 1.7**]) given as follows. Note that, to keep the notation consistent with the main body of our paper, we change the notation as \(t_{\rm Ito}\to q\), \((q^{\alpha})_{\rm Ito}\to\Lambda\). Footnote 20: The non-negative integer \(n\) is denoted by \(N\) in the main text. **Definition D.1**.: _Let \(R\) be the \((n+1)\)-dimensional square matrix given in the Gauss decomposed forms:_ \[R=L_{R}D_{R}U_{R}=U^{\prime}_{R}D^{\prime}_{R}L^{\prime}_{R},\] \[L_{R}=(l^{R}_{ij})_{0\leq i,j\leq n},\qquad D_{R}=(d^{R}_{ij})_{0\leq i,j\leq n},\qquad U_{R}=(u^{R}_{ij})_{0\leq i,j\leq n},\] \[U^{\prime}_{R}=({u^{\prime}}^{R}_{ij})_{0\leq i,j\leq n},\qquad D^{\prime}_{R}=({d^{\prime}}^{R}_{ij})_{0\leq i,j\leq n},\qquad L^{\prime}_{R}=({l^{\prime}}^{R}_{ij})_{0\leq i,j\leq n},\] _where_ \[l^{R}_{ij}=\left[\begin{matrix}n-j\\ n-i\end{matrix}\right]_{q^{-1}}\frac{(-1)^{i-j}q^{-\binom{i-j}{2}}(a_{2}b_{2}q^{j};q)_{i-j}}{(a_{1}^{-1}a_{2}q^{-(n-2j-1)};q)_{i-j}},\] \[d^{R}_{j}=\frac{(a_{1}a_{2}^{-1}q^{-j};q)_{n-j}(a_{2}b_{1};q)_{j}}{(a_{1}b_{2};q)_{n-j}(a_{1}^{-1}a_{2}q^{-(n-j)};q)_{j}},\] \[u^{R}_{ij}=\left[\begin{matrix}j\\ i\end{matrix}\right]_{q^{-1}}\frac{(a_{1}b_{1}q^{n-j};q)_{j-i}}{(a_{1}a_{2}^{-1}q^{n-i-j};q)_{j-i}},\] (D.1) \[{u^{\prime}}^{R}_{ij}=\left[\begin{matrix}j\\ i\end{matrix}\right]_{q}\frac{(-1)^{j-i}q^{\binom{j-i}{2}}(a_{1}^{-1}b_{1}^{-1}q^{-(n-i-1)};q)_{j-i}}{(b_{1}^{-1}b_{2}q^{i+j-n};q)_{j-i}},\] \[{d^{\prime}}^{R}_{j}=\frac{(b_{1}b_{2}^{-1}q^{n-2j+1};q)_{j}(a_{2}^{-1}b_{1}^{-1}q^{-(n-j-1)};q)_{n-j}}{(a_{1}^{-1}b_{2}^{-1}q^{-(j-1)};q)_{j}(b_{1}^{-1}b_{2}q^{-(n-2j-1)};q)_{n-j}},\] 
\[{l^{\prime}}^{R}_{ij}=\left[\begin{matrix}n-j\\ n-i\end{matrix}\right]_{q}\frac{(a_{2}^{-1}b_{2}^{-1}q^{-(i-1)};q)_{i-j}}{(b_{1}b_{2}^{-1}q^{n-2i+1};q)_{i-j}}.\] **Definition D.2**.: _Let \(A\) be the \((n+1)\)-dimensional square matrix given in the Gauss decomposed forms:_ \[A=L_{A}D_{A}U_{A}=U^{\prime}_{A}D^{\prime}_{A}L^{\prime}_{A},\] \[L_{A}=(l^{A}_{ij})_{0\leq i,j\leq n},\qquad D_{A}=(d^{A}_{ij})_{0\leq i,j\leq n},\qquad U_{A}=(u^{A}_{ij})_{0\leq i,j\leq n},\] \[U^{\prime}_{A}=({u^{\prime}}^{A}_{ij})_{0\leq i,j\leq n},\qquad D^{\prime}_{A}=({d^{\prime}}^{A}_{ij})_{0\leq i,j\leq n},\qquad L^{\prime}_{A}=({l^{\prime}}^{A}_{ij})_{0\leq i,j\leq n},\] _where_ \[l^{A}_{ij}=(-1)^{i-j}q^{\binom{n-i}{2}-\binom{n-j}{2}}\left[\begin{matrix}n-j\\ n-i\end{matrix}\right]_{q}\frac{(a_{2}b_{2}q^{j};q)_{i-j}}{(\Lambda a_{2}b_{2}q^{2j};q)_{i-j}},\] \[d_{j}^{A}=a_{1}^{n-j}a_{2}^{j}q^{\binom{j}{2}+\binom{n-j}{2}}\frac{(\Lambda;q)_{j}(\Lambda a_{2}b_{2}q^{2j};q)_{n-j}}{(\Lambda a_{2}b_{2}q^{j-1};q)_{j}(\Lambda a_{1}a_{2}b_{1}b_{2}q^{n+j-1};q)_{n-j}},\] \[u_{ij}^{A}=(-\Lambda a_{1}^{-1}a_{2})^{j-i}q^{\binom{j}{2}-\binom{i}{2}}\begin{bmatrix}j\\ i\end{bmatrix}_{q}\frac{(a_{1}b_{1}q^{n-j};q)_{j-i}}{(\Lambda a_{2}b_{2}q^{2i};q)_{j-i}},\] (D.2) \[{u^{\prime}}_{ij}^{A}=(-\Lambda)^{j-i}q^{\binom{n-i}{2}-\binom{n-j}{2}}\begin{bmatrix}j\\ i\end{bmatrix}_{q}\frac{(a_{1}b_{1}q^{n-j};q)_{j-i}}{(\Lambda a_{1}b_{1}q^{2(n-j)};q)_{j-i}},\] \[{d^{\prime}}_{j}^{A}=a_{1}^{n-j}a_{2}^{j}q^{\binom{j}{2}+\binom{n-j}{2}}\frac{(\Lambda a_{1}b_{1}q^{2(n-j)};q)_{j}(\Lambda;q)_{n-j}}{(\Lambda a_{1}a_{2}b_{1}b_{2}q^{2n-j-1};q)_{j}(\Lambda a_{1}b_{1}q^{n-j-1};q)_{n-j}},\] \[{l^{\prime}}_{ij}^{A}=(-a_{1}a_{2}^{-1})^{i-j}q^{\binom{j}{2}-\binom{i}{2}}\begin{bmatrix}n-j\\ n-i\end{bmatrix}_{q}\frac{(a_{2}b_{2}q^{j};q)_{i-j}}{(\Lambda a_{1}b_{1}q^{2(n-i)};q)_{i-j}}.\] It was shown that a certain system of Jackson 
integrals satisfies two different types of \(q\)-KZ equations described by the two matrices \(R\) and \(A\) (see [30, **Proposition 1.2** (Matsuo)] and [30, **Theorem 1.7**]), which one may regard as a consequence of the so-called base-fiber duality of the gauge theory. Hence the compatibility of the two dual systems guarantees the commutativity of the pertinent matrices. **Theorem D.3**.: _Let \(R\) and \(A\) be Ito's matrices given as above. Set \(D_{2}=((\Lambda q^{n-1})^{i}\delta_{ij})_{i,j=0}^{n}\). Then we have \(RD_{2}A=ARD_{2}\)._ In this appendix, we give an alternative direct proof of this commutativity. ### Notations We use the standard notations for the basic hypergeometric \({}_{r+1}\phi_{r}\) series \[{}_{r+1}\phi_{r}\left[\begin{matrix}a_{1},a_{2},\ldots,a_{r+1}\\ b_{1},\ldots,b_{r}\end{matrix};q,z\right]=\sum_{n=0}^{\infty}\frac{(a_{1};q)_{n}(a_{2};q)_{n}\cdots(a_{r+1};q)_{n}}{(q;q)_{n}(b_{1};q)_{n}\cdots(b_{r};q)_{n}}z^{n}.\] A \({}_{r+1}\phi_{r}\) series is called balanced (or Saalschützian) if \(b_{1}b_{2}\cdots b_{r}=qa_{1}a_{2}\cdots a_{r+1}\) and \(z=q\). We use the shorthand notation \({}_{r+1}W_{r}\) for the very-well-poised \({}_{r+1}\phi_{r}\) series \[{}_{r+1}W_{r}(a_{1};a_{4},a_{5},\ldots,a_{r+1};q,z)={}_{r+1}\phi_{r}\left[\begin{matrix}a_{1},qa_{1}^{1/2},-qa_{1}^{1/2},a_{4},\ldots,a_{r+1}\\ a_{1}^{1/2},-a_{1}^{1/2},qa_{1}/a_{4},\ldots,qa_{1}/a_{r+1}\end{matrix};q,z\right]\]\[=\sum_{n=0}^{\infty}\frac{(a_{1};q)_{n}(a_{4};q)_{n}\cdots(a_{r+1};q)_{n}}{(q;q)_{n}(qa_{1}/a_{4};q)_{n}\cdots(qa_{1}/a_{r+1};q)_{n}}\frac{1-a_{1}q^{2n}}{1-a_{1}}z^{n}.\] For details, see [28]. 
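The terminating \({}_{6}W_{5}\) summation formula [28, (2.4.2)], \[{}_{6}W_{5}\left(a;b,c,q^{-n};q,\frac{aq^{n+1}}{bc}\right)=\frac{(aq;q)_{n}(aq/bc;q)_{n}}{(aq/b;q)_{n}(aq/c;q)_{n}},\] enters the proof of Proposition D.5 below. It can be spot-checked in exact rational arithmetic; the following is our own Python sketch, with arbitrary test parameters:

```python
from fractions import Fraction as F

q = F(1, 2)
a, b, c = F(1, 5), F(1, 3), F(1, 7)
n = 4

def poch(u, k):
    """q-shifted factorial (u; q)_k."""
    p = F(1)
    for m in range(k):
        p *= 1 - u * q**m
    return p

z = a * q**(n + 1) / (b * c)

# terminating 6W5 series: the factor (q^{-n}; q)_k cuts the sum off at k = n
lhs = sum(poch(a, k) * poch(b, k) * poch(c, k) * poch(q**-n, k)
          / (poch(q, k) * poch(a * q / b, k) * poch(a * q / c, k)
             * poch(a * q**(n + 1), k))
          * (1 - a * q**(2 * k)) / (1 - a) * z**k
          for k in range(n + 1))

# closed product side of the summation formula
rhs = (poch(a * q, n) * poch(a * q / (b * c), n)
       / (poch(a * q / b, n) * poch(a * q / c, n)))

assert lhs == rhs
```

Using `Fraction` rather than floats makes the check an exact identity of rational numbers for the chosen parameters.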
### Andrews' and Bressoud's matrix inversion formulas Let \(\mathcal{A}(a)=(\mathcal{A}_{ij}(a))_{i,j=0}^{\infty}\) be Andrews' infinite-dimensional lower triangular matrix [7], \[\mathcal{A}_{ij}(a)=\frac{1}{(q;q)_{i-j}(aq;q)_{i+j}}.\] (D.3) Then the inverse matrix \(\mathcal{A}^{-1}(a)=(\mathcal{A}^{-1}_{ij}(a))^{\infty}_{i,j=0}\) is given by \[\mathcal{A}^{-1}_{ij}(a)=\frac{(1-aq^{2i})(a;q)_{i+j}(-1)^{i-j}q^{\binom{i-j}{2}}}{(1-a)(q;q)_{i-j}}.\] (D.4) Let \(\mathcal{D}(a,b)=(\mathcal{D}_{ij}(a,b))^{\infty}_{i,j=0}\) be Bressoud's infinite-dimensional lower triangular matrix [18] in the original form, \[\mathcal{D}_{ij}(a,b)=\frac{(1-aq^{2j})(b;q)_{i+j}(ba^{-1};q)_{i-j}(ba^{-1})^{j}}{(1-a)(aq;q)_{i+j}(q;q)_{i-j}}.\] (D.5) Then the inverse matrix is given by \(\mathcal{D}^{-1}(a,b)=(\mathcal{D}_{ij}(b,a))^{\infty}_{i,j=0}\). Closely following the lines of [3], one can state the interrelations between Andrews' and Bressoud's matrices in the following manner, aiming at (D.8) and (D.9) in Corollary D.6 below. Consider the gauge transformation \(\mathcal{B}(a,b)=(\mathcal{B}_{ij}(a,b))^{\infty}_{i,j=0}\) of Bressoud's matrix \[\mathcal{B}_{ij}(a,b)=\frac{(aq;q)_{2i}}{(a;q)_{2i}}b^{i}\cdot\mathcal{D}_{ij}(b,a)\cdot\frac{(b;q)_{2j}}{(bq;q)_{2j}}a^{-j}=\frac{(1-aq^{2i})(a;q)_{i+j}(a/b;q)_{i-j}}{(1-a)(bq;q)_{i+j}(q;q)_{i-j}}b^{i-j}.\] (D.6) **Remark D.4**.: _Our choice of the gauge transformation is different from the one in [3]._ **Proposition D.5** (Agarwal-Andrews-Bressoud).: _We have the initial condition \(\mathcal{B}(a,a)=(\delta_{ij})^{\infty}_{i,j=0}\) and the transition property_ \[\mathcal{B}(a,b)\mathcal{B}(b,c)=\mathcal{B}(a,c),\] (D.7) _which in the particular case \(a=c\) means Bressoud's matrix inversion \(\mathcal{B}^{-1}(a,b)=\mathcal{B}(b,a)\), or \(\mathcal{D}^{-1}(a,b)=\mathcal{D}(b,a)\)._ For the readers' convenience, we reproduce the proof from [3]. Proof.: Let \(i\geq j\). 
We need to calculate \(\sum_{k=j}^{i}\mathcal{B}_{ik}(a,b)\mathcal{B}_{kj}(b,c)\). In the following summation, by using the definition (D.6), one finds the summable very-well-poised \({}_{6}W_{5}\) series [28, (2.4.2)], as \[\sum_{l=0}^{i-j}\frac{\mathcal{B}_{i,j+l}(a,b)\mathcal{B}_{j+l,j}(b,c)}{\mathcal{B}_{ij}(a,b)\mathcal{B}_{jj}(b,c)}={}_{6}W_{5}\left(bq^{2j};aq^{i+j},b/c,q^{-i+j};q,cq/a\right)=\frac{(a/c,bq^{2j+1};q)_{i-j}}{(a/b,cq^{2j+1};q)_{i-j}}(c/b)^{i-j}.\] Then simplification of the factors shows that \[\mathcal{B}_{ij}(a,b)\mathcal{B}_{jj}(b,c)\frac{(a/c,bq^{2j+1};q)_{i-j}}{(a/b,cq^{2j+1};q)_{i-j}}(c/b)^{i-j}=\mathcal{B}_{ij}(a,c).\] Hence we have (D.7). **Corollary D.6**.: _By taking the limits \(b\to 0\) or \(\infty\) in (D.7), we have_ \[\sum_{k}a^{i}\,\mathcal{A}_{ik}^{-1}(a)\,a^{-k}c^{k}\,\mathcal{A}_{kj}(c)\,c^{-j}=\mathcal{B}_{ij}(a,c),\] (D.8) \[\sum_{k}q^{-i^{2}}\mathcal{A}_{ik}^{-1}(a)\mathcal{A}_{kj}(c)\,q^{j^{2}}=\mathcal{B}_{ij}(a,c),\] (D.9) _which reproduce Andrews' matrix inversion by setting \(a=c\)._ Proof.: It follows from \[\lim_{b\to 0}\mathcal{B}_{ik}(a,b)=a^{i-k}\mathcal{A}_{ik}^{-1}(a),\qquad\lim_{b\to 0}\mathcal{B}_{kj}(b,c)=c^{k-j}\mathcal{A}_{kj}(c),\]\[\lim_{b\to\infty}b^{2k}\mathcal{B}_{ik}(a,b)=q^{-i^{2}-k(k+1)}\mathcal{A}_{ik}^{-1}(a),\qquad\lim_{b\to\infty}b^{-2k}\mathcal{B}_{kj}(b,c)=q^{k(k+1)+j^{2}}\mathcal{A}_{kj}(c).\] ### Expressions for \(L_{X},U_{X},U^{\prime}_{X},L^{\prime}_{X}\) (\(X=R,A\)) in terms of Andrews' triangular matrices \(\mathcal{A}^{\pm 1}(a)\) **Definition D.7**.: _Set_ \[\alpha=1/a_{2}b_{2},\qquad\beta=1/a_{1}b_{1},\qquad z=1/a_{1}b_{2},\qquad w=\Lambda q^{n-1}.\] (D.10) **Definition D.8**.: _Define the gauge factors \(g_{i}^{L},g_{i}^{U},g_{i}^{\prime L},g_{i}^{\prime U}\) and \(h_{i}^{R},h_{i}^{A},h_{i}^{\prime R},h_{i}^{\prime A}\) by_ \[g_{i}^{L}=q^{i}(1/\alpha;q)_{i}(q^{-n};q)_{i},\qquad g_{i}^{U}=(z/\alpha\beta)^{i}(q^{-n+1}\beta;q)_{i}(q;q)_{i},\] \[g_{i}^{\prime L}=q^{i}(z/\alpha\beta)^{-i}(1/\alpha;q)_{i}(q^{-n};q)_{i},\qquad g_{i}^{\prime U}=(q^{-n+1}\beta;q)_{i}(q;q)_{i},\] (D.11) \[h_{i}^{R}=(q^{-n+1}z/\alpha;q)_{2i},\qquad h_{i}^{A}=(q^{-n+1}w/\alpha;q)_{2i},\] \[h_{i}^{\prime R}=(q^{-n+1}\beta/z;q)_{2i},\qquad h_{i}^{\prime A}=(q^{-n+1}\beta/w;q)_{2i}.\] **Proposition D.9**.: _We have_ \[l_{ij}^{R}=g_{i}^{L}\,\mathcal{A}_{ij}(q^{-n}z/\alpha)\,\frac{h_{j}^{R}}{g_{j}^{L}},\qquad l_{ij}^{A}=g_{i}^{L}\,\mathcal{A}_{ij}(q^{-n}w/\alpha)\,\frac{h_{j}^{A}}{g_{j}^{L}},\] (D.12) \[u_{ij}^{R}=\frac{h_{i}^{R}}{g_{i}^{U}}\,\mathcal{A}_{ji}(q^{-n}z/\alpha)\,g_{j}^{U},\qquad u_{ij}^{A}=\frac{h_{i}^{A}}{w^{i}g_{i}^{U}}\,\mathcal{A}_{ji}(q^{-n}w/\alpha)\,w^{j}g_{j}^{U},\] (D.13) \[u^{\prime R}_{\,\,ij}=\frac{1}{g_{i}^{\prime U}}\,\mathcal{A}_{ji}^{-1}(q^{-n}\beta/z)\,\frac{g_{j}^{\prime U}}{h_{j}^{\prime R}},\qquad u^{\prime A}_{\,\,ij}=\frac{1}{g_{i}^{\prime U}}\,\mathcal{A}_{ji}^{-1}(q^{-n}\beta/w)\,\frac{g_{j}^{\prime U}}{h_{j}^{\prime A}},\] (D.14) \[l^{\prime R}_{\,\,ij}=\frac{g_{i}^{\prime L}}{h_{i}^{\prime R}}\,\mathcal{A}_{ij}^{-1}(q^{-n}\beta/z)\,\frac{1}{g_{j}^{\prime L}},\qquad l^{\prime A}_{\,\,ij}=\frac{g_{i}^{\prime L}}{w^{i}h_{i}^{\prime A}}\,\mathcal{A}_{ij}^{-1}(q^{-n}\beta/w)\,\frac{w^{j}}{g_{j}^{\prime L}}.\] (D.15) Proof.: Straightforward calculation by using the definitions in (D.1), (D.2), the parametrization (D.10), the gauge factors (D.11), and Andrews' matrices (D.3), (D.4). ### Matrices \(R\) and \(A\) and Watson's transformation formula for terminating very-well-poised \({}_{8}\phi_{7}\) series 
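Watson's transformation for a terminating very-well-poised \({}_{8}\phi_{7}\) [28, (2.5.1)] states \[{}_{8}W_{7}\left(a;b,c,d,e,q^{-n};q,\frac{a^{2}q^{n+2}}{bcde}\right)=\frac{(aq;q)_{n}(aq/de;q)_{n}}{(aq/d;q)_{n}(aq/e;q)_{n}}\,{}_{4}\phi_{3}\left[\begin{matrix}q^{-n},d,e,aq/bc\\ aq/b,aq/c,deq^{-n}/a\end{matrix};q,q\right].\] Since it is used repeatedly below, here is an exact-arithmetic spot check (our own Python sketch, with arbitrary test parameters):

```python
from fractions import Fraction as F

q = F(1, 2)
a, b, c, d, e = F(1, 5), F(1, 3), F(1, 7), F(1, 11), F(1, 13)
n = 3

def poch(u, k):
    """q-shifted factorial (u; q)_k."""
    p = F(1)
    for m in range(k):
        p *= 1 - u * q**m
    return p

# terminating very-well-poised 8W7 with argument a^2 q^{n+2}/(bcde)
z = a**2 * q**(n + 2) / (b * c * d * e)
w87 = sum(poch(a, k) * poch(b, k) * poch(c, k) * poch(d, k) * poch(e, k)
          * poch(q**-n, k)
          / (poch(q, k) * poch(a*q/b, k) * poch(a*q/c, k) * poch(a*q/d, k)
             * poch(a*q/e, k) * poch(a * q**(n + 1), k))
          * (1 - a * q**(2*k)) / (1 - a) * z**k
          for k in range(n + 1))

# balanced 4phi3 side of Watson's transformation
phi43 = sum(poch(q**-n, k) * poch(d, k) * poch(e, k) * poch(a*q/(b*c), k)
            / (poch(q, k) * poch(a*q/b, k) * poch(a*q/c, k)
               * poch(d*e*q**-n/a, k))
            * q**k
            for k in range(n + 1))

rhs = (poch(a*q, n) * poch(a*q/(d*e), n)
       / (poch(a*q/d, n) * poch(a*q/e, n))) * phi43

assert w87 == rhs
```

As a consistency check of the statement itself, setting \(bc=aq\) makes the \({}_{4}\phi_{3}\) collapse to \(1\) and reduces the transformation to the \({}_{6}W_{5}\) summation quoted in the Notations subsection.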
**Proposition D.10**.: _We have_ \[d_{j}^{R} =\frac{(\alpha/z;q)_{n}}{(1/z;q)_{n}}\cdot q^{j^{2}}(q^{n}\alpha)^ {-j}\frac{(z/\alpha\beta;q)_{j}(qz/\alpha;q)_{j}(q^{-n}z/\alpha;q)_{j}(q^{-n+1} z;q)_{j}}{(q^{-n}z/\alpha;q)_{2j}(q^{-n+1}z/\alpha;q)_{2j}},\] (D.16) \[d_{j}^{\prime R} =\alpha^{n}\frac{(z/\alpha\beta;q)_{n}}{(z/\beta;q)_{n}}\cdot q^ {-j^{2}}(q^{n}/\beta)^{j}\frac{(q^{-n}\beta/z;q)_{2j}(q^{-n+1}\beta/z;q)_{2j}}{ (1/z;q)_{j}(q\beta/z;q)_{j}(q^{-n}\beta/z;q)_{j}(q^{-n+1}\alpha\beta/z;q)_{j}},\] (D.17) \[d_{j}^{A} =(-a_{1}w/\alpha)^{n}\frac{(1/w;q)_{n}}{(w/\alpha\beta;q)_{n}}\] (D.18) \[\quad\cdot\frac{(\alpha/w;q)_{n}}{(1/w;q)_{n}}\cdot z^{j}q^{j^{2 }}(q^{n}\alpha)^{-j}\frac{(w/\alpha\beta;q)_{j}(qw/\alpha;q)_{j}(q^{-n}w/ \alpha;q)_{j}(q^{-n+1}w;q)_{j}}{(q^{-n}w/\alpha;q)_{2j}(q^{-n+1}w/\alpha;q)_{2 j}},\] \[d_{j}^{\prime A} =(-a_{1}w/\alpha)^{n}\frac{(1/w;q)_{n}}{(w/\alpha\beta;q)_{n}}\] (D.19) \[\quad\cdot\alpha^{n}\frac{(w/\alpha\beta;q)_{n}}{(w/\beta;q)_{n} }\cdot z^{j}q^{-j^{2}}(q^{n}/\beta)^{j}\frac{(q^{-n}\beta/w;q)_{2j}(q^{-n+1} \beta/w;q)_{2j}}{(1/w;q)_{j}(q\beta/w;q)_{j}(q^{-n}\beta/w;q)_{j}(q^{-n+1} \alpha\beta/w;q)_{j}}.\] Proof.: Straightforward calculation by using the definitions in (D.1), (D.2), and the parametrization (D.10). **Definition D.11**.: _Set_ \[\mathsf{R}_{ij}^{n}(\alpha,\beta|z) =\beta^{-i}\frac{(q;q)_{n}(\alpha/z;q)_{n-j}(1/\beta;q)_{n-i}( \beta/z;q)_{i}}{(q;q)_{i}(q;q)_{n-i}(1/z;q)_{n}(1/\beta;q)_{n-j}}\] (D.20) \[\quad\cdot{}_{4}\phi_{3}\left[\begin{array}{c}q^{-i},q^{-n+j},z /\alpha\beta,q^{-n+1}z\\ q^{-n},q^{-i+1}z/\beta,q^{-n+j+1}z/\alpha\end{array};q,q\right].\] **Proposition D.12**.: _We have_ \[R_{ij} =\mathsf{R}_{ij}^{n}(\alpha,\beta|z),\] (D.21) \[A_{ij} =(-a_{1}w/\alpha)^{n}\frac{(1/w;q)_{n}}{(w/\alpha\beta;q)_{n}} \cdot\mathsf{R}_{ij}^{n}(\alpha,\beta|w)\cdot z^{j}.\] (D.22) Recalling that \(D_{2}=(w^{i}\delta_{ij})_{i,j=0}^{n}\), the commutativity \(RD_{2}A=ARD_{2}\) (in Theorem D.3) is recast as follows. 
**Theorem D.13**.: _We have_ \[\sum_{k=0}^{n}\mathsf{R}_{ik}^{n}(\alpha,\beta|z)w^{k}\mathsf{R}_{kj}^{n}( \alpha,\beta|w)z^{j}=\sum_{k=0}^{n}\mathsf{R}_{ik}^{n}(\alpha,\beta|w)z^{k} \mathsf{R}_{kj}^{n}(\alpha,\beta|z)w^{j}.\] (D.23) Proof of Proposition D.12.: Starting from the Gauss decomposition \(R=L_{R}D_{R}U_{R}\), one can proceed as follows. We have \[R_{ij}=\sum_{k}l_{ik}^{R}d_{k}^{R}u_{kj}^{R}=l_{i0}^{R}d_{0}^{R}u_{0j}^{R}\sum_{k =0}^{\min(i,j)}\frac{l_{ik}^{R}}{l_{i0}^{R}}\frac{d_{k}^{R}}{d_{0}^{R}}\frac{u_{ kj}^{R}}{u_{0j}^{R}}=l_{i0}^{R}d_{0}^{R}u_{0j}^{R}\cdot X,\] where the summation \(X\) can be calculated by using (D.12), (D.13), and (D.25), (D.28), (D.31) below as \[X =\sum_{k=0}^{\min(i,j)}\frac{\mathcal{A}_{ik}(q^{-n}z/\alpha)}{ \mathcal{A}_{i0}(q^{-n}z/\alpha)}\frac{h_{k}^{R}}{g_{k}^{L}}\frac{d_{k}^{R}}{d _{0}^{R}}\frac{h_{k}^{R}}{g_{k}^{U}}\frac{\mathcal{A}_{jk}(q^{-n}z/\alpha)}{ \mathcal{A}_{j0}(q^{-n}z/\alpha)}\] \[={}_{8}W_{7}\left(q^{-n}z/\alpha;q^{-i},q^{-j},z/\alpha\beta,qz/ \alpha,q^{-n+1}z;q,q^{-n+i+j}\beta/z\right)\] \[=\frac{(q^{-n+1}z/\alpha;q)_{i}(\beta/z;q)_{i}}{(q^{-n+1}\beta;q) _{i}(1/\alpha;q)_{i}}{}_{4}\phi_{3}\left[\begin{array}{c}q^{-i},q^{-n+j},z/ \alpha\beta,q^{-n+1}z\\ q^{-n},q^{-i+1}z/\beta,q^{-n+j+1}z/\alpha\end{array};q,q\right].\] Here we have used Watson's transformation formula for terminating very-well-poised \({}_{8}\phi_{7}\) series [28, (2.5.1)]. 
From (D.11), (D.12), (D.13), and (D.24), (D.30) below we have \[l_{i0}^{R}d_{0}^{R}u_{0j}^{R}=g_{i}^{L}\mathcal{A}_{i0}(q^{-n}z/\alpha)d_{0}^{R}\mathcal{A}_{j0}(q^{-n}z/\alpha)g_{j}^{U}\] \[=q^{i}(z/\alpha\beta)^{j}\frac{(1/\alpha;q)_{i}(q^{-n};q)_{i}}{(q^{-n+1}z/\alpha;q)_{i}(q;q)_{i}}\frac{(q^{-n+1}\beta;q)_{j}}{(q^{-n+1}z/\alpha;q)_{j}}\frac{(\alpha/z;q)_{n}}{(1/z;q)_{n}}.\] Hence it holds that \[R_{ij} =q^{i}(z/\alpha\beta)^{j}\frac{(\beta/z;q)_{i}(q^{-n};q)_{i}}{(q^{-n+1}\beta;q)_{i}(q;q)_{i}}\frac{(q^{-n+1}\beta;q)_{j}}{(q^{-n+1}z/\alpha;q)_{j}}\frac{(\alpha/z;q)_{n}}{(1/z;q)_{n}}\] \[\quad\cdot{}_{4}\phi_{3}\left[\begin{array}{c}q^{-i},q^{-n+j},z/\alpha\beta,q^{-n+1}z\\ q^{-n},q^{-i+1}z/\beta,q^{-n+j+1}z/\alpha\end{array};q,q\right].\] Simplifying the prefactor, we have (D.21). The formula (D.22) for the matrix \(A\) immediately follows from the one (D.21) for \(R\), by making a comparison of the matrix elements given in (D.12)-(D.15), and (D.16)-(D.19). **Remark D.14**.: _If we use the opposite Gauss decomposition \(R=U_{R}^{\prime}D_{R}^{\prime}L_{R}^{\prime}\), we have:_ \[R_{ij}=\sum_{k}u_{ik}^{\prime R}d_{k}^{\prime R}l_{kj}^{\prime R}=u_{in}^{\prime R}d_{n}^{\prime R}l_{nj}^{\prime R}\sum_{k=\max(i,j)}^{n}\frac{u_{ik}^{\prime R}}{u_{in}^{\prime R}}\frac{d_{k}^{\prime R}}{d_{n}^{\prime R}}\frac{l_{kj}^{\prime R}}{l_{nj}^{\prime R}}=u_{in}^{\prime R}d_{n}^{\prime R}l_{nj}^{\prime R}\cdot X^{\prime},\] _where \(X^{\prime}\) can be calculated by using (D.14), (D.15), and (D.27), (D.29) and (D.33) below_ \[X^{\prime}=\sum_{l=0}^{n-\max(i,j)}\frac{\mathcal{A}_{n-l,i}^{-1}(q^{-n}\beta/z)}{\mathcal{A}_{n,i}^{-1}(q^{-n}\beta/z)}\frac{g_{n-l}^{\prime U}h_{n}^{\prime R}}{g_{n}^{\prime U}h_{n-l}^{\prime R}}\frac{d_{n-l}^{\prime R}}{d_{n}^{\prime R}}\frac{g_{n-l}^{\prime L}h_{n}^{\prime R}}{g_{n}^{\prime L}h_{n-l}^{\prime R}}\frac{\mathcal{A}_{n-l,j}^{-1}(q^{-n}\beta/z)}{\mathcal{A}_{n,j}^{-1}(q^{-n}\beta/z)}\]
\[={}_{8}W_{7}\left(q^{-n}z/\beta;q^{-n+i},q^{-n+j},z/\alpha\beta,qz/ \beta,q^{-n+1}z;q,q^{-n+i+j}\alpha/z\right)\] \[=\frac{(q^{-n+1}z/\beta;q)_{n-j}(\alpha/z;q)_{n-j}}{(q^{-n+1}\alpha ;q)_{n-j}(1/\beta;q)_{n-j}}{}_{4}\phi_{3}\left[\begin{matrix}q^{-i},q^{-n+j},z/ \alpha\beta,q^{-n+1}z\\ q^{-n},q^{-i+1}z/\beta,q^{-n+j+1}z/\alpha\end{matrix};q,q\right].\] _Note that from (D.11), (D.14), (D.15), and (D.26), (D.32) below_ \[u_{in}^{\prime R}d_{n}^{\prime R}l_{nj}^{\prime R}=\frac{1}{g_{i} ^{\prime\prime U}}\mathcal{A}_{n,i}^{-1}(q^{-n}\beta/z)\frac{g_{n}^{\prime U }}{h_{n}^{\prime R}}d_{n}^{\prime R}\frac{g_{n}^{\prime L}}{h_{n}^{\prime R}} \mathcal{A}_{n,j}^{-1}(q^{-n}\beta/z)\frac{1}{g_{j}^{\prime L}}\] \[=q^{i}(z/\alpha\beta)^{j-n}\frac{(\beta/z;q)_{i}(q^{-n};q)_{i}}{( q^{-n+1}\beta;q)_{i}(q;q)_{i}}\frac{(\beta/z;q)_{j}}{(1/\alpha;q)_{j}}\frac{(1/ \beta;q)_{n}}{(1/z;q)_{n}}.\] _Simplifying the prefactor, we also have (D.21)._ **Lemma D.15**.: _We have_ \[\mathcal{A}_{i0}(a)=\frac{1}{(aq;q)_{i}(q;q)_{i}},\] (D.24) \[\frac{\mathcal{A}_{ik}(a)}{\mathcal{A}_{i0}(a)}=q^{-\binom{k}{2}} (-q^{i})^{k}\frac{(q^{-i};q)_{k}}{(aq^{i+1};q)_{k}},\] (D.25) \[\mathcal{A}_{n,i}^{-1}(a)=(-1)^{n}q^{\binom{n}{2}}\frac{(aq;q)_{2 n}}{(a;q)_{2n}}\frac{(a;q)_{n}}{(q;q)_{n}}\cdot q^{i}(aq^{n};q)_{i}(q^{-n};q)_{i},\] (D.26) \[\frac{\mathcal{A}_{n-l,i}^{-1}(a)}{\mathcal{A}_{n,i}^{-1}(a)}=q^ {\binom{l}{2}}(-q^{i+n}a)^{-l}\frac{1-a^{-1}q^{-2n+2l}}{1-a^{-1}q^{-2n}}\frac{ (q^{-n+i};q)_{l}}{(a^{-1}q^{1-n-i};q)_{l}}.\] (D.27) **Lemma D.16**.: _We have_ \[\frac{h_{k}^{R}}{g_{k}^{L}}\frac{h_{k}^{R}}{g_{k}^{U}}=(qz/\alpha \beta)^{-k}\frac{(q^{-n+1}z/\alpha;q)_{2k}(q^{-n+1}z/\alpha;q)_{2k}}{(1/\alpha ;q)_{k}(q^{-n};q)_{k}(q^{-n+1}\beta;q)_{k}(q;q)_{k}},\] (D.28) \[\frac{g_{n-l}^{\prime U}h_{n}^{\prime R}}{g_{n}^{\prime U}h_{n-l} ^{\prime R}}\frac{g_{n-l}^{\prime L}h_{n}^{\prime R}}{g_{n}^{\prime L}h_{n-l} ^{\prime R}}=q^{-2l^{2}}(q^{2n+1}\beta^{2}/z^{3})^{l}\frac{(q^{-n}z/\beta;q)_{2 
l}(q^{-n}z/\beta;q)_{2l}}{(1/\beta;q)_{l}(q^{-n};q)_{l}(q^{-n+1}\alpha;q)_{l}(q;q)_{l}}.\] (D.29) **Lemma D.17**.: _We have_ \[d_{0}^{R}=\frac{(\alpha/z;q)_{n}}{(1/z;q)_{n}},\] (D.30) \[\frac{d_{k}^{R}}{d_{0}^{R}}=q^{k^{2}}(q^{n}\alpha)^{-k}\frac{(qz/\alpha;q)_{k}(q^{-n+1}z;q)_{k}(z/\alpha\beta;q)_{k}(q^{-n}z/\alpha;q)_{k}}{(q^{-n+1}z/\alpha;q)_{2k}(q^{-n}z/\alpha;q)_{2k}},\] (D.31) \[d_{n}^{\prime R}=\beta^{-n}\frac{(\beta/z;q)_{n}}{(1/z;q)_{n}},\] (D.32) \[\frac{d_{n-l}^{\prime R}}{d_{n}^{\prime R}}=q^{l^{2}}(q^{-n}\alpha)^{l}\frac{(qz/\beta;q)_{l}(q^{-n+1}z;q)_{l}(z/\alpha\beta;q)_{l}(q^{-n}z/\beta;q)_{l}}{(q^{-n+1}z/\beta;q)_{2l}(q^{-n}z/\beta;q)_{2l}}.\] (D.33)

### Products of triangular matrices \(L_{A}^{-1}L_{R},U_{R}D_{2}U_{A}^{\prime},U_{A}U_{R}^{\prime}\), and \(L_{R}^{\prime}D_{2}L_{A}^{\prime\,-1}\) in terms of Bressoud's triangular matrix \(\mathcal{B}(a,b)\)

For the convenience of our study on the commutativity \(RD_{2}A=ARD_{2}\), we write the matrices \(R\) and \(A\) in Gauss decomposed forms and investigate \[(L_{R}D_{R}U_{R})D_{2}(U_{A}^{\prime}D_{A}^{\prime}L_{A}^{\prime})=(L_{A}D_{A}U_{A})(U_{R}^{\prime}D_{R}^{\prime}L_{R}^{\prime})D_{2},\] (D.34) which is equivalent to \[\mathsf{L}^{1}\cdot D_{R}\cdot\mathsf{U}^{1}=D_{A}\cdot\mathsf{U}^{2}\cdot D_{R}^{\prime}\cdot\mathsf{L}^{2}\cdot{D_{A}^{\prime\,-1}},\] (D.35) where \[\mathsf{L}^{1}=L_{A}^{-1}L_{R},\qquad\mathsf{U}^{1}=U_{R}D_{2}U_{A}^{\prime},\qquad\mathsf{U}^{2}=U_{A}U_{R}^{\prime},\qquad\mathsf{L}^{2}=L_{R}^{\prime}D_{2}{L_{A}^{\prime\,-1}}.\] (D.36) Note from (D.12) and (D.15) we have that \[(L_{A}^{-1})_{ij}=\frac{g_{i}^{L}}{h_{i}^{A}}\,\mathcal{A}_{ij}^{-1}(q^{-n}w/\alpha)\,\frac{1}{g_{j}^{L}},\] (D.37) \[({L_{A}^{\prime\,-1}})_{ij}=\frac{g_{i}^{\prime L}}{w^{i}}\,\mathcal{A}_{ij}(q^{-n}\beta/w)\,\frac{w^{j}h_{j}^{\prime A}}{g_{j}^{\prime L}}.\] (D.38) One finds that all the products \(\mathsf{L}^{1},\mathsf{U}^{1},\mathsf{U}^{2}\) and \(\mathsf{L}^{2}\) in (D.36) can be treated
by the summation formulas (D.8) and (D.9) in Corollary D.6, which are recast as \[\sum_{k}\mathcal{A}_{ik}^{-1}(a)\,(b/a)^{k}\,\mathcal{A}_{kj}(b)=a^{-i}\, \mathcal{B}_{ij}(a,b)\,b^{j},\qquad\sum_{k}\mathcal{A}_{ik}^{-1}(a)\mathcal{A }_{kj}(b)=q^{i^{2}}\mathcal{B}_{ij}(a,b)\,q^{-j^{2}}.\] (D.39) **Proposition D.18**.: _We have_ \[\mathsf{L}^{1}_{ij} =\frac{q^{i^{2}}g_{i}^{L}}{h_{i}^{A}}\mathcal{B}_{ij}(q^{-n}w/ \alpha,q^{-n}z/\alpha)\frac{h_{j}^{R}}{q^{j^{2}}g_{j}^{L}},\] (D.40) \[\mathsf{U}^{1}_{ij} =\frac{(q^{-n}z/\alpha)^{i}h_{i}^{R}}{g_{i}^{U}}\mathcal{B}_{ji}( q^{-n}\beta/w,q^{-n}z/\alpha)\frac{g_{j}^{\prime U}}{(q^{-n}\beta/w)^{j}h_{j}^{ \prime A}},\] (D.41) \[\mathsf{U}^{2}_{ij} =\frac{(q^{-n}w/\alpha)^{i}h_{i}^{A}}{w^{i}g_{i}^{U}}\mathcal{B}_ {ji}(q^{-n}\beta/z,q^{-n}w/\alpha)\frac{g_{j}^{\prime U}}{(q^{-n}\beta/z)^{j}h _{j}^{\prime R}},\] (D.42) \[\mathsf{L}^{2}_{ij} =\frac{q^{i^{2}}g_{i}^{\prime L}}{h_{i}^{\prime R}}\mathcal{B}_ {ij}(q^{-n}\beta/z,q^{-n}\beta/w)\frac{w^{j}h_{j}^{\prime A}}{q^{j^{2}}g_{j}^{ \prime L}}.\] (D.43) Proof.: Straightforward calculation by using \(D_{2}=(w^{i}\delta_{ij})_{i,j=0}^{n}\), (D.11), (D.12)-(D.15), (D.37), (D.38), and (D.39). 
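Since all ingredients of Theorem D.13 are explicit, the commutation relation (D.23) can also be probed numerically directly from Definition D.11. A minimal Python sketch, where the helper names (`qpoch`, `phi43`, `Rmat`) and the parameter values are our own illustrative choices:

```python
def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n."""
    r = 1.0
    for k in range(n):
        r *= 1.0 - a * q**k
    return r

def phi43(num, den, q, z, nmax):
    """Terminating 4phi3 series."""
    s = 0.0
    for k in range(nmax + 1):
        t = z**k / qpoch(q, q, k)
        for a in num:
            t *= qpoch(a, q, k)
        for b in den:
            t /= qpoch(b, q, k)
        s += t
    return s

def Rmat(i, j, n, alpha, beta, z, q):
    """Matrix element R^n_{ij}(alpha, beta | z) of Definition D.11, (D.20)."""
    pre = (beta**(-i) * qpoch(q, q, n) * qpoch(alpha / z, q, n - j)
           * qpoch(1 / beta, q, n - i) * qpoch(beta / z, q, i)
           / (qpoch(q, q, i) * qpoch(q, q, n - i)
              * qpoch(1 / z, q, n) * qpoch(1 / beta, q, n - j)))
    return pre * phi43(
        [q**(-i), q**(-n + j), z / (alpha * beta), q**(-n + 1) * z],
        [q**(-n), q**(-i + 1) * z / beta, q**(-n + j + 1) * z / alpha],
        q, q, i)

# check the commutation identity (D.23) entrywise for generic parameters
n, q, alpha, beta, z, w = 3, 0.5, 1.3, 0.7, 2.0, 0.6
for i in range(n + 1):
    for j in range(n + 1):
        lhs = sum(Rmat(i, k, n, alpha, beta, z, q) * w**k
                  * Rmat(k, j, n, alpha, beta, w, q) * z**j for k in range(n + 1))
        rhs = sum(Rmat(i, k, n, alpha, beta, w, q) * z**k
                  * Rmat(k, j, n, alpha, beta, z, q) * w**j for k in range(n + 1))
        assert abs(lhs - rhs) < 1e-8 * (1.0 + abs(lhs))
```

Such a check does not replace the proof below, but it confirms the identity for generic values of \(q,\alpha,\beta,z,w\) at small \(n\).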
### Products \(\mathsf{L}^{1}\cdot D_{R}\cdot\mathsf{U}^{1}\) and \(\mathsf{U}^{2}\cdot D^{\prime}_{R}\cdot\mathsf{L}^{2}\) in terms of terminating very-well-poised balanced \({}_{10}\phi_{9}\) series

**Proposition D.19**.: _We have_ \[\frac{1}{\mathsf{L}^{1}_{i0}\cdot d^{R}_{0}\cdot\mathsf{U}^{1}_{0j}}(\mathsf{L}^{1}\cdot D_{R}\cdot\mathsf{U}^{1})_{ij}\] (D.44) \[= {}_{10}W_{9}\left(q^{-n}z/\alpha;q^{-i},q^{-j},q^{-n+i}w/\alpha,q^{-n+j}\beta/w,z/\alpha\beta,qz/\alpha,q^{-n+1}z;q,q\right),\] \[\frac{1}{\mathsf{U}^{2}_{in}\cdot d^{\prime R}_{n}\cdot\mathsf{L}^{2}_{nj}}(\mathsf{U}^{2}\cdot D^{\prime}_{R}\cdot\mathsf{L}^{2})_{ij}\] (D.45) \[= {}_{10}W_{9}\left(q^{-n}z/\beta;q^{-n+i},q^{-n+j},q^{-i}\alpha/w,q^{-j}w/\beta,z/\alpha\beta,qz/\beta,q^{-n+1}z;q,q\right),\] _and_ \[\frac{\mathsf{U}^{2}_{in}\cdot d^{\prime R}_{n}\cdot\mathsf{L}^{2}_{nj}}{\mathsf{L}^{1}_{i0}\cdot d^{R}_{0}\cdot\mathsf{U}^{1}_{0j}}\cdot\frac{d^{A}_{i}}{d^{\prime A}_{j}} =\frac{(1/\alpha,1/\beta,w/z,\alpha\beta/zw;q)_{n}}{(\alpha/z,\beta/z,1/w,w/\alpha\beta;q)_{n}}\frac{(\beta/z,w/\alpha\beta,q^{-n+1}z/\alpha,q^{-n+1}w;q)_{i}}{(1/\alpha,w/z,q^{-n+1}\beta,q^{-n+1}zw/\alpha\beta;q)_{i}}\] \[\cdot\frac{(\beta/z,1/w,q^{-n+1}z/\alpha,q^{-n+1}\alpha\beta/w;q)_{j}}{(1/\alpha,\alpha\beta/zw,q^{-n+1}\beta,q^{-n+1}z/w;q)_{j}}.\] (D.46) Proof.: For (D.44), we have from (D.31), (D.40), (D.41), and (D.47), (D.51) below that \[\frac{1}{\mathsf{L}^{1}_{i0}\cdot d^{R}_{0}\cdot\mathsf{U}^{1}_{0j}}(\mathsf{L}^{1}\cdot D_{R}\cdot\mathsf{U}^{1})_{ij}=\sum_{k=0}^{\min(i,j)}\frac{\mathsf{L}^{1}_{ik}\cdot d^{R}_{k}\cdot\mathsf{U}^{1}_{kj}}{\mathsf{L}^{1}_{i0}\cdot d^{R}_{0}\cdot\mathsf{U}^{1}_{0j}}\] \[= \sum_{k=0}^{\min(i,j)}\frac{\mathcal{B}_{ik}(q^{-n}w/\alpha,q^{-n}z/\alpha)}{\mathcal{B}_{i0}(q^{-n}w/\alpha,q^{-n}z/\alpha)}\frac{h^{R}_{k}}{q^{k^{2}}g^{L}_{k}}\frac{d^{R}_{k}}{d^{R}_{0}}\frac{(q^{-n}z/\alpha)^{k}h^{R}_{k}}{g^{U}_{k}}\frac{\mathcal{B}_{jk}(q^{-n}\beta/w,q^{-n}z/\alpha)}{\mathcal{B}_{j0}(q^{-n}
\beta/w,q^{-n}z/\alpha)}\] \[= {}_{10}W_{9}\left(q^{-n}z/\alpha;q^{-i},q^{-j},q^{-n+i}w/\alpha,q^{-n+j}\beta/w,z/\alpha\beta,qz/\alpha,q^{-n+1}z;q,q\right).\] For (D.45), in a similar manner, we have from (D.33), (D.42), (D.43), and (D.48), (D.52) below that \[\frac{1}{\mathsf{U}^{2}_{in}\cdot d^{\prime R}_{n}\cdot\mathsf{L}^{2}_{nj}}(\mathsf{U}^{2}\cdot D^{\prime}_{R}\cdot\mathsf{L}^{2})_{ij}=\sum_{k=\max(i,j)}^{n}\frac{\mathsf{U}^{2}_{ik}\cdot d^{\prime R}_{k}\cdot\mathsf{L}^{2}_{kj}}{\mathsf{U}^{2}_{in}\cdot d^{\prime R}_{n}\cdot\mathsf{L}^{2}_{nj}}=\sum_{l=0}^{n-\max(i,j)}\frac{\mathsf{U}^{2}_{i,n-l}\cdot d^{\prime R}_{n-l}\cdot\mathsf{L}^{2}_{n-l,j}}{\mathsf{U}^{2}_{in}\cdot d^{\prime R}_{n}\cdot\mathsf{L}^{2}_{nj}}\] \[= \sum_{l=0}^{n-\max(i,j)}\frac{\mathcal{B}_{n-l,i}(q^{-n}\beta/z,q^{-n}w/\alpha)}{\mathcal{B}_{ni}(q^{-n}\beta/z,q^{-n}w/\alpha)}\frac{g^{\prime U}_{n-l}}{g^{\prime U}_{n}}\frac{(q^{-n}\beta/z)^{n}h^{\prime R}_{n}}{(q^{-n}\beta/z)^{n-l}h^{\prime R}_{n-l}}\frac{d^{\prime R}_{n-l}}{d^{\prime R}_{n}}\] \[\qquad\cdot\frac{q^{(n-l)^{2}}g^{\prime L}_{n-l}}{q^{n^{2}}g^{\prime L}_{n}}\frac{h^{\prime R}_{n}}{h^{\prime R}_{n-l}}\frac{\mathcal{B}_{n-l,j}(q^{-n}\beta/z,q^{-n}\beta/w)}{\mathcal{B}_{nj}(q^{-n}\beta/z,q^{-n}\beta/w)}\] \[= {}_{10}W_{9}\left(q^{-n}z/\beta;q^{-n+i},q^{-n+j},q^{-i}\alpha/w,q^{-j}w/\beta,z/\alpha\beta,qz/\beta,q^{-n+1}z;q,q\right).\] Concerning (D.46), one can proceed as \[\frac{\mathsf{U}_{in}^{2}\cdot d_{n}^{\prime R}\cdot\mathsf{L}_{nj}^{2}}{\mathsf{L}_{i0}^{1}\cdot d_{0}^{R}\cdot\mathsf{U}_{0j}^{1}}\cdot\frac{d_{i}^{A}}{d_{j}^{\prime A}}\] \[= \mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}w/\alpha)\mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}\beta/w)\frac{d_{0}^{A}}{d_{0}^{\prime A}}\frac{d_{n}^{\prime R}}{d_{0}^{R}}\frac{g_{n}^{\prime U}}{(q^{-n}\beta/z)^{n}h_{n}^{\prime R}}\frac{q^{n^{2}}g_{n}^{\prime L}}{h_{n}^{\prime R}}\] \[\cdot\frac{\mathcal{B}_{ni}(q^{-n}\beta/z,q^{-n}w/\alpha)}{
\mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}w/\alpha)}\frac{1}{\mathcal{B}_{i0}(q^{- n}w/\alpha,q^{-n}z/\alpha)}\frac{(q^{-n}w/\alpha)^{i}h_{i}^{A}h_{i}^{A}}{q^{i ^{2}}g_{i}^{L}w^{i}g_{i}^{U}}\frac{d_{i}^{A}}{d_{0}^{A}}\] \[\cdot\frac{\mathcal{B}_{nj}(q^{-n}\beta/z,q^{-n}\beta/w)}{ \mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}\beta/w)}\frac{1}{\mathcal{B}_{j0}(q^{-n} \beta/w,q^{-n}z/\alpha)}\frac{(q^{-n}\beta/w)^{j}h_{j}^{\prime A}w^{j}h_{j}^{ \prime A}}{g_{j}^{\prime U}q^{j^{2}}g_{j}^{\prime L}}\frac{d_{0}^{\prime A}}{ d_{j}^{\prime A}}.\] Then from (D.53), (D.54), and (D.55) below, we obtain (D.46). **Lemma D.20**.: _We have_ \[\frac{\mathcal{B}_{ik}(a,b)}{\mathcal{B}_{i0}(a,b)}=(a^{-1}q)^{k} \frac{(aq^{i};q)_{k}(q^{-i};q)_{k}}{(bq^{i+1};q)_{k}(a^{-1}bq^{1-i};q)_{k}},\] (D.47) \[\frac{\mathcal{B}_{n-l,i}(a,b)}{\mathcal{B}_{n,i}(a,b)}=(a^{-2}b) ^{l}\frac{(a^{-1}q^{1-2n};q)_{2l}}{(a^{-1}q^{-2n};q)_{2l}}\frac{(b^{-1}q^{-n-i };q)_{l}(q^{-n+i};q)_{l}}{(a^{-1}q^{1-n-i};q)_{l}(a^{-1}bq^{1-n+i};q)_{l}},\] (D.48) \[\mathcal{B}_{i0}(a,b)=b^{i}\frac{(aq;q)_{2i}}{(a;q)_{2i}}\frac{(a ;q)_{i}(a/b;q)_{i}}{(bq;q)_{i}(q;q)_{i}},\] (D.49) \[\frac{\mathcal{B}_{n,i}(a,b)}{\mathcal{B}_{n,0}(a,b)}=(a^{-1}q)^{ i}\frac{(aq^{n};q)_{i}(q^{-n};q)_{i}}{(bq^{1+n};q)_{i}(a^{-1}bq^{1-n};q)_{i}}.\] (D.50) **Lemma D.21**.: _We have_ \[\frac{h_{k}^{R}}{q^{k^{2}}g_{k}^{L}}\frac{(q^{-n}z/\alpha)^{k}h_ {k}^{R}}{g_{k}^{U}}=q^{-k^{2}}(q^{-n-1}\beta)^{k}\frac{(q^{-n+1}z/\alpha;q)_{2 k}(q^{-n+1}z/\alpha;q)_{2k}}{(1/\alpha;q)_{k}(q^{-n};q)_{k}(q^{-n+1}\beta;q)_{k}(q ;q)_{k}},\] (D.51) \[\frac{g_{n-l}^{\prime U}}{g_{n}^{\prime U}}\frac{(q^{-n}\beta/z)^ {n}h_{n}^{\prime R}}{(q^{-n}\beta/z)^{n-l}h_{n-l}^{\prime R}}\frac{q^{(n-l)^{2 }}g_{n-l}^{\prime L}}{q^{n^{2}}g_{n}^{\prime L}}\frac{h_{n}^{\prime R}}{h_{n- l}^{\prime R}}\] \[\qquad=q^{-l^{2}}(q^{-n+1}\beta^{3}/z^{4})^{l}\frac{(q^{-n}z/ \beta;q)_{2l}(q^{-n}z/\beta;q)_{2l}}{(1/\beta;q)_{l}(q^{-n};q)_{l}(q^{-n+1} \alpha;q)_{l}(q;q)_{l}}.\] (D.52) **Lemma D.22**.: _We have_ 
\[\mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}w/\alpha)\mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}\beta/w)\frac{d_{0}^{A}}{d_{0}^{\prime A}}\frac{d_{n}^{\prime R}}{d_{0}^{R}}\frac{g_{n}^{\prime U}}{(q^{-n}\beta/z)^{n}h_{n}^{\prime R}}\frac{q^{n^{2}}g_{n}^{\prime L}}{h_{n}^{\prime R}}\] \[= \frac{(1/\alpha,1/\beta,w/z,\alpha\beta/zw;q)_{n}}{(\alpha/z,\beta/z,1/w,w/\alpha\beta;q)_{n}},\] (D.53) \[\frac{\mathcal{B}_{ni}(q^{-n}\beta/z,q^{-n}w/\alpha)}{\mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}w/\alpha)}\frac{1}{\mathcal{B}_{i0}(q^{-n}w/\alpha,q^{-n}z/\alpha)}\frac{(q^{-n}w/\alpha)^{i}h_{i}^{A}h_{i}^{A}}{q^{i^{2}}g_{i}^{L}w^{i}g_{i}^{U}}\frac{d_{i}^{A}}{d_{0}^{A}}\] \[=\frac{(\beta/z,w/\alpha\beta,q^{-n+1}z/\alpha,q^{-n+1}w;q)_{i}}{(1/\alpha,w/z,q^{-n+1}\beta,q^{-n+1}zw/\alpha\beta;q)_{i}},\] (D.54) \[\frac{\mathcal{B}_{nj}(q^{-n}\beta/z,q^{-n}\beta/w)}{\mathcal{B}_{n0}(q^{-n}\beta/z,q^{-n}\beta/w)}\frac{1}{\mathcal{B}_{j0}(q^{-n}\beta/w,q^{-n}z/\alpha)}\frac{(q^{-n}\beta/w)^{j}h^{\prime A}_{j}w^{j}h^{\prime A}_{j}}{g^{\prime U}_{j}q^{j^{2}}g^{\prime L}_{j}}\frac{d^{\prime A}_{0}}{d^{\prime A}_{j}}\] \[=\frac{(\beta/z,1/w,q^{-n+1}z/\alpha,q^{-n+1}\alpha\beta/w;q)_{j}}{(1/\alpha,\alpha\beta/zw,q^{-n+1}\beta,q^{-n+1}z/w;q)_{j}}.\] (D.55)

### Final step of proof with Bailey's transformation formula for terminating very-well-poised balanced \({}_{10}\phi_{9}\) series

Recall that Bailey's transformation formula for terminating very-well-poised balanced \({}_{10}\phi_{9}\) series [28, (2.9.4), (2.9.5)] reads: \[{}_{10}W_{9}(a;b,c,d,e,f,g,h;q,q)\] \[=\frac{(aq,aq/ef,aq/eg,aq/eh,aq/fg,aq/fh,aq/gh,aq/efgh;q)_{\infty}}{(aq/e,aq/f,aq/g,aq/h,aq/efg,aq/efh,aq/egh,aq/fgh;q)_{\infty}}\] (D.56) \[\cdot{}_{10}W_{9}(qa^{2}/bcd;aq/bc,aq/bd,aq/cd,e,f,g,h;q,q),\] where at least one of the parameters \(e,f,g,h\) is of the form \(q^{-n},n=0,1,2,\ldots\), and the balancing condition is satisfied, namely \[q^{2}a^{3}=bcdefgh.\] (D.57) In particular, by setting \(h=q^{-n}\), we have
\[{}_{10}W_{9}(a;b,c,d,e,f,g,q^{-n};q,q)\] (D.58) \[=\frac{(aq,aq/ef,aq/eg,aq/fg;q)_{n}}{(aq/e,aq/f,aq/g,aq/efg;q)_{ n}}{}_{10}W_{9}(qa^{2}/bcd;aq/bc,aq/bd,aq/cd,e,f,g,q^{-n};q,q).\] **Proposition D.23**.: _Let \(n,i,j\in\mathbb{Z}_{\geq 0}\) and \(i,j\leq n\). Applying Bailey's transformation twice, we have_ \[{}_{10}W_{9}\left(q^{-n}z/\alpha;q^{-i},q^{-j},q^{-n+i}w/\alpha,q ^{-n+j}\beta/w,z/\alpha\beta,qz/\alpha,q^{-n+1}z;q,q\right)\] \[=\frac{(1/\alpha,1/\beta,w/z,\alpha\beta/zw;q)_{n}}{(\alpha/z, \beta/z,1/w,w/\alpha\beta;q)_{n}}\frac{(\beta/z,w/\alpha\beta,q^{-n+1}z/ \alpha,q^{-n+1}w;q)_{i}}{(1/\alpha,w/z,q^{-n+1}\beta,q^{-n+1}zw/\alpha\beta;q)_ {i}}\] \[\cdot\frac{(\beta/z,1/w,q^{-n+1}z/\alpha,q^{-n+1}\alpha\beta/w;q) _{j}}{(1/\alpha,\alpha\beta/zw,q^{-n+1}\beta,q^{-n+1}z/w;q)_{j}}\] (D.59) \[\cdot{}_{10}W_{9}\left(q^{-n}z/\beta;q^{-n+i},q^{-n+j},q^{-i} \alpha/w,q^{-j}w/\beta,z/\alpha\beta,qz/\beta,q^{-n+1}z;q,q\right).\] Proof.: Observe that the balancing condition as in (D.57) is satisfied: \[q^{2}\left(\frac{q^{-n}z}{\alpha}\right)^{3}=q^{-i}q^{-j}\frac{q^{-n+i}w}{ \alpha}\frac{q^{-n+j}\beta}{w}\frac{z}{\alpha\beta}\frac{qz}{\alpha}q^{-n+1}z.\] Hence Bailey's transformation formula (D.58) applies and we have \[{}_{10}W_{9}\left(q^{-n}z/\alpha;q^{-i},q^{-j},q^{-n+i}w/\alpha,q^{-n+j}\beta/ w,z/\alpha\beta,qz/\alpha,q^{-n+1}z;q,q\right)\] \[={}_{10}W_{9}\left(q^{-n}z/\alpha;q^{-j},q^{-n+j}\beta/w,qz/\alpha,q^{-n+i}w/ \alpha,z/\alpha\beta,q^{-n+1}z,q^{-i};q,q\right)\] \[=\frac{(q^{-n+1}z/\alpha,q^{-i+1}\alpha\beta/w,q^{-i+n}/w,\beta/z; q)_{i}}{(q^{-i+1}z/w,q^{-n+1}\beta,1/\alpha,q^{-i+n}\alpha\beta/zw;q)_{i}}\] \[\qquad\cdot{}_{10}W_{9}\left(q^{-n}zw/\alpha\beta;qzw/\alpha\beta,q^{-n+j},q^{-j}w/\beta,q^{-n+i}w/\alpha,z/\alpha\beta,q^{-n+1}z,q^{-i};q,q \right).\] Applying Bailey's transformation once again, we have \[{}_{10}W_{9}\left(q^{-n}zw/\alpha\beta;qzw/\alpha\beta,q^{-n+j},q^ {-j}w/\beta,q^{-n+i}w/\alpha,z/\alpha\beta,q^{-n+1}z,q^{-i};q,q\right)\] 
\[={}_{10}W_{9}\left(q^{-n}zw/\alpha\beta;q^{-i},q^{-n+i}w/\alpha,qzw/\alpha\beta,q^{-j}w/\beta,z/\alpha\beta,q^{-n+1}z,q^{-n+j};q,q\right)\] \[=\frac{(q^{-n+1}zw/\alpha\beta,q^{-n+j+1}\beta,q^{j}/\alpha,w/z;q)_{n-j}}{(q^{-n+j+1}z/\alpha,q^{-n+1}w,w/\alpha\beta,q^{j}\beta/z;q)_{n-j}}\] \[\qquad\cdot{}_{10}W_{9}\left(q^{-n}z/\beta;qz/\beta,q^{-n+i},q^{-i}\alpha/w,q^{-j}w/\beta,z/\alpha\beta,q^{-n+1}z,q^{-n+j};q,q\right).\] Simplifying the prefactor, we have (D.59). Now we are ready to state our proof of Theorem D.3, and thereby of its reformulation, Theorem D.13. Proof of Theorem D.3.: In view of (D.44), (D.45), (D.46) in Proposition D.19, and (D.59) in Proposition D.23, we have the equality of matrices \[\frac{1}{\mathsf{L}_{i0}^{1}\cdot d_{0}^{R}\cdot\mathsf{U}_{0j}^{1}}\mathsf{L}^{1}\cdot D_{R}\cdot\mathsf{U}^{1}=\frac{1}{\mathsf{L}_{i0}^{1}\cdot d_{0}^{R}\cdot\mathsf{U}_{0j}^{1}}D_{A}\cdot\mathsf{U}^{2}\cdot D_{R}^{\prime}\cdot\mathsf{L}^{2}\cdot D_{A}^{\prime\,-1},\] which is equivalent to the commutativity \(RD_{2}A=ARD_{2}\).

## Appendix E Truncation by tuning mass parameters

In [10] we employed the following orbifolded Nekrasov factor for the affine Laumon space; \[\mathsf{N}_{\lambda,\mu}^{(k|N)}(u|q,\kappa)=\mathsf{N}_{\lambda,\mu}^{(k)}(u|q,\kappa)\] \[=\prod_{\begin{subarray}{c}j\geq i\geq 1\\ j-i\equiv k\ (\mathrm{mod}\,N)\end{subarray}}[uq^{-\mu_{i}+\lambda_{j+1}}\kappa^{-i+j};q]_{\lambda_{j}-\lambda_{j+1}}\cdot\prod_{\begin{subarray}{c}\beta\geq\alpha\geq 1\\ \beta-\alpha\equiv-k-1\ (\mathrm{mod}\,N)\end{subarray}}[uq^{\lambda_{\alpha}-\mu_{\beta}}\kappa^{\alpha-\beta-1};q]_{\mu_{\beta}-\mu_{\beta+1}},\] (E.1) with \([u;q]_{n}:=u^{-n/2}q^{-n(n-1)/4}(u;q)_{n}\). To work out the tuning condition of mass parameters required for the truncation of the Young diagrams, it is convenient to recast (E.1) to the "Nakajima" form, which is expressed as a product over the boxes of the Young diagrams \((\lambda,\mu)\).
In the following we temporarily replace \([u;q]_{n}\) by \((u;q)_{n}\), since we can neglect the monomial factors for our purpose. By taking the product over \(\mathsf{N}^{(0)}\) to \(\mathsf{N}^{(N-1)}\), we can remove the selection rule; \[\mathsf{N}_{\lambda,\mu}(u|q,\kappa):=\prod_{k=0}^{N-1}\mathsf{N}_{\lambda,\mu}^{(k)}(u|q,\kappa)\] \[=\prod_{j\geq i\geq 1}(uq^{-\mu_{i}+\lambda_{j+1}}\kappa^{-i+j};q)_{\lambda_{j}-\lambda_{j+1}}\cdot\prod_{\beta\geq\alpha\geq 1}(uq^{\lambda_{\alpha}-\mu_{\beta}}\kappa^{\alpha-\beta-1};q)_{\mu_{\beta}-\mu_{\beta+1}}.\] (E.2) Namely we can obtain \(\mathsf{N}^{(k|N)}_{\lambda,\mu}(u|q,\kappa)\) by factorizing the total Nekrasov factor \(\mathsf{N}_{\lambda,\mu}(u|q,\kappa)\) according to the power of \(\kappa\). Now we can transform the total Nekrasov factor as follows; \[\mathsf{N}_{\lambda,\mu}(u|q,\kappa) =\prod_{j\geq i\geq 1}\frac{(uq^{-\mu_{i}+\lambda_{j+1}}\kappa^{-i+j};q)_{\infty}}{(uq^{-\mu_{i}+\lambda_{j}}\kappa^{-i+j};q)_{\infty}}\cdot\prod_{i\geq j\geq 1}\frac{(uq^{\lambda_{j}-\mu_{i}}\kappa^{j-i-1};q)_{\infty}}{(uq^{\lambda_{j}-\mu_{i+1}}\kappa^{j-i-1};q)_{\infty}}\] \[=\prod_{i,j=1}^{\infty}\frac{(uq^{\lambda_{j}-\mu_{i}}\kappa^{j-i-1};q)_{\infty}}{(uq^{\lambda_{j}-\mu_{i}}\kappa^{j-i};q)_{\infty}}\cdot\frac{(u\kappa^{j-i};q)_{\infty}}{(u\kappa^{j-i-1};q)_{\infty}}\] \[=\prod_{(i,j)\in\lambda}(1-uq^{\lambda_{i}-j}\kappa^{-\mu_{j}^{\vee}+i-1})\cdot\prod_{(i,j)\in\mu}(1-uq^{-\mu_{i}+j-1}\kappa^{\lambda_{j}^{\vee}-i}).\] (E.3) Here we have used the combinatorial identity (cf. Proposition in [11], (2.12)=(2.9) with \(\kappa=t^{-1}\)) for the last equality. When one of the partitions is empty as in the case of fundamental matter contributions, the Nekrasov factor is simplified considerably.
\[\mathsf{N}_{\lambda,\emptyset}(u|q,\kappa)=\prod_{(i,j)\in\lambda}(1-uq^{\lambda_{i}-j}\kappa^{i-1}),\qquad\mathsf{N}_{\emptyset,\mu}(u|q,\kappa)=\prod_{(i,j)\in\mu}(1-uq^{-\mu_{i}+j-1}\kappa^{-i}).\] (E.4) When \(N=2\), the factors from the first rows of \(\lambda\) and \(\mu\) are contained in \(\mathsf{N}^{(0)}_{\lambda,\emptyset}(u|q,\kappa)\) and \(\mathsf{N}^{(1)}_{\emptyset,\mu}(u|q,\kappa)\), respectively, which are explicitly \[\prod_{k=0}^{\lambda_{1}-1}(1-uq^{k}),\qquad\prod_{k=1}^{\mu_{1}}(1-uq^{-k}\kappa^{-1}).\] (E.5) Hence, we can see that if \(u=q^{-m}\), \(\mathsf{N}^{(0)}_{\lambda,\emptyset}(u|q,\kappa)\) vanishes for \(m\leq\lambda_{1}-1\). Similarly, if \(u\kappa^{-1}=q^{n+1}\), \(\mathsf{N}^{(1)}_{\emptyset,\mu}(u|q,\kappa)\) vanishes for \(n+1\leq\mu_{1}\). In the localization formula of the partition function, the numerator of the contribution from a fixed point \((\lambda^{(1)},\lambda^{(2)})\), which is identified with the matter contribution, is explicitly given by \[\prod_{i,j=1}^{2}\mathsf{N}^{(j-i|2)}_{\emptyset,\lambda^{(j)}}(u_{i}/v_{j}|q,\kappa)\mathsf{N}^{(j-i|2)}_{\lambda^{(i)},\emptyset}(v_{i}/w_{j}|q,\kappa)\] \[=\mathsf{N}^{(0)}_{\emptyset,\lambda^{(1)}}(u_{1}/v_{1}|q,\kappa)\mathsf{N}^{(1)}_{\emptyset,\lambda^{(2)}}(u_{1}/v_{2}|q,\kappa)\mathsf{N}^{(1)}_{\emptyset,\lambda^{(1)}}(u_{2}/v_{1}|q,\kappa)\mathsf{N}^{(0)}_{\emptyset,\lambda^{(2)}}(u_{2}/v_{2}|q,\kappa)\] \[\cdot\mathsf{N}^{(0)}_{\lambda^{(1)},\emptyset}(v_{1}/w_{1}|q,\kappa)\mathsf{N}^{(1)}_{\lambda^{(1)},\emptyset}(v_{1}/w_{2}|q,\kappa)\mathsf{N}^{(1)}_{\lambda^{(2)},\emptyset}(v_{2}/w_{1}|q,\kappa)\mathsf{N}^{(0)}_{\lambda^{(2)},\emptyset}(v_{2}/w_{2}|q,\kappa).\] (E.6) Let us look at the mass parameter tuning conditions such that the numbers of columns of the Young diagrams \(\lambda^{(1)}\) and \(\lambda^{(2)}\) are at most \(m\) and \(n\), respectively.
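The vanishing pattern read off from (E.5) can be illustrated with a short numerical check; the helper functions and parameter values below are our own illustrative choices:

```python
def row_factor_lambda(u, q, lam1):
    """First product in (E.5): prod_{k=0}^{lam1 - 1} (1 - u q^k)."""
    r = 1.0
    for k in range(lam1):
        r *= 1.0 - u * q**k
    return r

def row_factor_mu(u, q, kappa, mu1):
    """Second product in (E.5): prod_{k=1}^{mu1} (1 - u q^{-k} / kappa)."""
    r = 1.0
    for k in range(1, mu1 + 1):
        r *= 1.0 - u * q**(-k) / kappa
    return r

q, kappa = 0.3, 0.8

# tuning u = q^{-m}: the lambda-row factor vanishes exactly when lam1 >= m + 1
m = 2
assert abs(row_factor_lambda(q**(-m), q, m)) > 1e-6        # lam1 = m: nonzero
assert abs(row_factor_lambda(q**(-m), q, m + 1)) < 1e-12   # lam1 = m + 1: zero

# tuning u kappa^{-1} = q^{n+1}: the mu-row factor vanishes when mu1 >= n + 1
n = 1
u = kappa * q**(n + 1)
assert abs(row_factor_mu(u, q, kappa, n)) > 1e-6
assert abs(row_factor_mu(u, q, kappa, n + 1)) < 1e-12
```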
We need a zero coming from \({\sf N}^{(0)}_{\lambda^{(1)},\emptyset}(v_{1}/w_{1}|q,\kappa)\) or \({\sf N}^{(1)}_{\emptyset,\lambda^{(1)}}(u_{2}/v_{1}|q,\kappa)\), namely \[(v_{1}/w_{1})=q^{-m},\qquad(u_{2}/v_{1})\kappa^{-1}=q^{m+1}.\] (E.7) Similarly we need a zero coming from \({\sf N}^{(0)}_{\lambda^{(2)},\emptyset}(v_{2}/w_{2}|q,\kappa)\) or \({\sf N}^{(1)}_{\emptyset,\lambda^{(2)}}(u_{1}/v_{2}|q,\kappa)\), namely \[(v_{2}/w_{2})=q^{-n},\qquad(u_{1}/v_{2})\kappa^{-1}=q^{n+1}.\] (E.8) Recall that we use the following specialization (see (4.7)) in this paper; \[u_{1}=\frac{qQ}{d_{3}},\quad u_{2}=\frac{\kappa q}{d_{1}};\qquad v_{1}=1,\quad v_{2}=\frac{Q}{\kappa};\qquad w_{1}=\frac{1}{d_{2}},\quad w_{2}=\frac{Q}{d_{4}\kappa}.\] (E.9) Hence, the above conditions (E.7) and (E.8) are translated to \[d_{2}=q^{-m},\qquad d_{1}=q^{-m},\] (E.10) and \[d_{4}=q^{-n},\qquad d_{3}=q^{-n}.\] (E.11) The conditions for \(d_{1}\) and \(d_{2}\) are the same and those for \(d_{3}\) and \(d_{4}\) are also the same. Thus, there are four equivalent choices for the truncation of the Young diagrams. In the main text of the paper, we choose \(d_{2}=q^{-m}\) and \(d_{3}=q^{-n}\).

## Appendix F Affine Laumon space and orbifolded Nekrasov factor

The affine Laumon space is the moduli space of parabolic torsion free sheaves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). Originally in [35],[36], a partial compactification of the space of quasi-maps from \(\mathbb{P}^{1}\) to the flag variety of \(GL_{N}\) was defined. The geometric representation theory of the compactification is controlled by \(\mathfrak{gl}_{N}\). There is an action of the \(\mathfrak{gl}_{N}\)-Yangian on the equivariant cohomology of the Laumon space [25]. The affine Laumon space is an affine analogue, where the base manifold is replaced by \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and we have an action of the affine Yangian of \(\mathfrak{gl}_{N}\).
In [44] it was shown that \(U_{\sf v}(\widehat{\mathfrak{gl}}_{n})\) acts on the equivariant \(K\)-theory of the affine Laumon space, which cannot be irrelevant to the present work. In algebraic geometry such "affine" quasi-maps are described by torsion free sheaves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) with a canonical framing at \(\{\infty\}\times\{\infty\}\) and a parabolic structure on \(\mathbb{P}^{1}\times\{0\}\). In the context of four dimensional \(\mathcal{N}=2\) supersymmetric gauge theories the affine Laumon space was first featured in [15],[16], where the conjecture on the relation of the Nekrasov partition function and the Seiberg-Witten prepotential was proved. Then as a generalization of the AGT correspondence for Virasoro and \(\mathcal{W}_{N}\) conformal blocks, Alday-Tachikawa [6] pointed out that we could employ the affine Laumon space to describe the theory with a surface defect whose classification by the embedding of an \(\mathfrak{sl}_{2}\) subalgebra into \(\mathfrak{sl}_{N}\) is exactly the same as that of the parabolic structure on \(\mathbb{P}^{1}\times\{0\}\). In [6] it is conjectured that, when the defect is of the full type, the instanton partition function gives a conformal block for the \(\mathfrak{sl}_{N}\) current algebra21. Footnote 21: For the codimension two surface defect of general type the conjecture is that the corresponding chiral algebra is the \(\mathcal{W}\) algebra obtained from Drinfeld-Sokolov reduction of \(\widehat{\mathfrak{sl}}_{N}\) current algebra, see for example [33]. It is known that the parabolic sheaves are equally described by the sheaves with the \(\mathbb{Z}_{M}\) orbifold action \(\mathbb{P}^{1}\times\mathbb{P}^{1}\ni(z,w)\to(z,\omega w)\) with \(\omega^{M}=1\). Moreover a natural toric action on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) induces that on the affine Laumon space.
The fixed points of the toric action are labelled by \(N\)-tuples of Young diagrams, which allows us to use the technique of the equivariant localization. In practice the relevant quiver is called the chain-saw quiver [26]. In particular, the equivariant character of the Yangian action on the equivariant cohomology of the affine Laumon space was computed in [25]. It is straightforward to up-lift the formula in [25] to the \(K\)-theory version, which we use in the present paper. On the method of orbifolding for the computation of the instanton partition function with a surface defect, see for example [33],[19],[45],[46] and [47]. In [17] the relation between the asymptotically free Macdonald polynomials and the geometric representation theory of the Laumon space was clarified. In an attempt at generalizing this result to an affine version, a formula of the non-stationary Ruijsenaars function is obtained in [52] based on the affine screening operators. The non-stationary Ruijsenaars function proposed in [52] agrees with the instanton partition function obtained from the equivariant Euler character of the affine Laumon space22. For the relation of the formula in [52], which we employ in the present paper, and the formula of the equivariant character in [25], see, for example, Appendix B of [12]. See also [43] on the relations of the geometry of affine Laumon space, intertwiners of the quantum toroidal algebra and associated integrable systems. Footnote 22: Physically this corresponds to the gauge theory with adjoint hypermultiplet, sometimes called \(\mathcal{N}=2^{*}\) theory.
As we have seen in Appendix E, the total Nekrasov factor \[\mathsf{N}_{\lambda,\mu}(u|q,\kappa) =\prod_{(i,j)\in\lambda}(1-uq^{\lambda_{i}-j}\kappa^{-\mu_{j}^{\vee}+i-1})\cdot\prod_{(i,j)\in\mu}(1-uq^{-\mu_{i}+j-1}\kappa^{\lambda_{j}^{\vee}-i}),\] \[=\prod_{i,j=1}^{\infty}\frac{(uq^{\lambda_{j}-\mu_{i}}\kappa^{j-i-1};q)_{\infty}}{(uq^{\lambda_{j}-\mu_{i}}\kappa^{j-i};q)_{\infty}}\cdot\frac{(u\kappa^{j-i};q)_{\infty}}{(u\kappa^{j-i-1};q)_{\infty}}\] (F.1) can be factorized into \(\mathsf{N}_{\lambda,\mu}^{(k|n)}(u|q,\kappa)\) (\(0\leq k\leq n-1\)) according to the power of \(\kappa\). The combinatorial identities in [11] (see (2.8) and (2.13)) also show the following "transposed" formula for the total Nekrasov factor; \[\mathsf{N}_{\lambda,\mu}(u|q,t=\kappa^{-1})=\prod_{(i,j)\in\mu}(1-uq^{\lambda_{i}-j}t^{\mu_{j}^{\vee}-i+1})\cdot\prod_{(i,j)\in\lambda}(1-uq^{-\mu_{i}+j-1}t^{-\lambda_{j}^{\vee}+i})\] \[=\prod_{i,j=1}^{\infty}\frac{(ut^{\mu_{i}^{\vee}-\lambda_{j}^{\vee}+1}q^{j-i};t)_{\infty}}{(ut^{\mu_{i}^{\vee}-\lambda_{j}^{\vee}+1}q^{j-i-1};t)_{\infty}}\cdot\frac{(utq^{j-i-1};t)_{\infty}}{(utq^{j-i};t)_{\infty}}.\] (F.2) Compared with (F.1), the partitions \(\lambda\) and \(\mu\) in the formula (F.2) are exchanged in the product over the boxes of the Young diagrams, while the form of each factor is kept intact. Factorization of (F.2) according to the power of \(\kappa\) gives the following formula of the orbifolded Nekrasov factor; \[\mathsf{N}_{\lambda\mu}^{(k|n)}=\prod_{i\leq j}\prod_{\ell=0\atop\lambda_{j+1}^{\vee}-\mu_{i}^{\vee}+\ell=k}^{\lambda_{j}^{\vee}-\lambda_{j+1}^{\vee}-1}(1-q^{j-i}\kappa^{\lambda_{j+1}^{\vee}-\mu_{i}^{\vee}+\ell})\cdot\prod_{i\leq j}\prod_{\ell=0\atop\lambda_{i}^{\vee}-\mu_{j}^{\vee}+\ell=k}^{\mu_{j}^{\vee}-\mu_{j+1}^{\vee}-1}(1-q^{i-j-1}\kappa^{\lambda_{i}^{\vee}-\mu_{j}^{\vee}+\ell}),\] (F.3) where we have substituted \(u=1\) for simplicity.
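The equality between the box form in the first line of (F.1) and the row form (E.2) can be spot-checked numerically for concrete partitions. A self-contained sketch, with our own helper names and arbitrary sample partitions:

```python
def qpoch(a, q, n):
    """(a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    r = 1.0
    for k in range(n):
        r *= 1.0 - a * q**k
    return r

def part(p, i):
    """i-th part of a partition (1-indexed), zero beyond its length."""
    return p[i - 1] if 1 <= i <= len(p) else 0

def conj(p):
    """Conjugate (transposed) partition."""
    return [sum(1 for x in p if x > j) for j in range(p[0])] if p else []

def N_box(lam, mu, u, q, kappa):
    """Box-product form of the total Nekrasov factor, first line of (F.1)."""
    lc, mc = conj(lam), conj(mu)
    r = 1.0
    for i in range(1, len(lam) + 1):          # boxes (i, j) of lambda
        for j in range(1, lam[i - 1] + 1):
            r *= 1.0 - u * q**(lam[i - 1] - j) * kappa**(-part(mc, j) + i - 1)
    for i in range(1, len(mu) + 1):           # boxes (i, j) of mu
        for j in range(1, mu[i - 1] + 1):
            r *= 1.0 - u * q**(-mu[i - 1] + j - 1) * kappa**(part(lc, j) - i)
    return r

def N_rows(lam, mu, u, q, kappa):
    """Row-product form (E.2) of the same factor."""
    r = 1.0
    for j in range(1, len(lam) + 1):
        for i in range(1, j + 1):
            r *= qpoch(u * q**(-part(mu, i) + part(lam, j + 1)) * kappa**(j - i),
                       q, part(lam, j) - part(lam, j + 1))
    for b in range(1, len(mu) + 1):
        for a in range(1, b + 1):
            r *= qpoch(u * q**(part(lam, a) - part(mu, b)) * kappa**(a - b - 1),
                       q, part(mu, b) - part(mu, b + 1))
    return r

u, q, kappa = 0.7, 0.4, 1.3
lhs = N_box([3, 1], [2, 2, 1], u, q, kappa)
rhs = N_rows([3, 1], [2, 2, 1], u, q, kappa)
assert abs(lhs - rhs) < 1e-9 * (1.0 + abs(lhs))
```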
Usually the Nekrasov factor \(\mathsf{N}_{\lambda,\mu}(u|q,\kappa)\) is expressed in terms of \((u;q)_{n}\). But we find in the case of the affine Laumon space it is more appropriate to use \[[u;q]_{n}=u^{-n/2}q^{-n(n-1)/4}(u;q)_{n}\] \[=(u^{-1/2}-u^{1/2})(q^{-1/2}u^{-1/2}-q^{1/2}u^{1/2})\cdots(q^{-(n- 1)/2}u^{-1/2}-q^{(n-1)/2}u^{1/2}).\] (F.4) When \(n\to\infty\) we have to regularize the monomial factor which connects \([u;q]_{n}\) and \((u;q)_{n}\). One of the ways of the regularization is to define \([u;q]_{\infty}\) by \[[u;q]_{\infty}:=\frac{(u;q)_{\infty}}{\vartheta_{q^{1/2}}(-u^{1/2})}=\frac{( u^{1/2};q^{1/2})_{\infty}}{(-q^{1/2}u^{-1/2};q^{1/2})_{\infty}},\] (F.5) where \(\vartheta_{p}(z):=(z;p)_{\infty}(pz^{-1};p)_{\infty}\). One can check \[\frac{[u;q]_{\infty}}{[q^{n}u;q]_{\infty}} =\frac{(u^{1/2};q^{1/2})_{\infty}}{(-q^{1/2}u^{-1/2};q^{1/2})_{ \infty}}\frac{(-q^{(1-n)/2}u^{-1/2};q^{1/2})_{\infty}}{(q^{n/2}u^{1/2};q^{1/2} )_{\infty}}\] \[=(u^{1/2};q^{1/2})_{n}(-q^{(1-n)/2}u^{-1/2};q^{1/2})_{n}=[u;q]_{n}.\] (F.6) In particular we have \([u;q]_{\infty}=(u^{-1/2}-u^{1/2})[qu;q]_{\infty}\). We also find \[\frac{[u;q]_{\infty}}{[q/u;q]_{\infty}}=\frac{(u^{1/2};q^{1/2})_{\infty}}{(-q ^{1/2}u^{-1/2};q^{1/2})_{\infty}}\frac{(-u^{1/2};q^{1/2})_{\infty}}{(q^{1/2}u^ {-1/2};q^{1/2})_{\infty}}=\frac{(u;q)_{\infty}}{(q/u;q)_{\infty}}.\] (F.7) It is possible to express the result of the selection rule in the orbifolded Nekrasov factor (F.3) in terms of the floor function \(\lfloor\bullet\rfloor\). In the following manipulations we use the formulas for the floor function; \[\lfloor\frac{\ell}{n}\rfloor+1=-\lfloor\frac{-\ell-1}{n}\rfloor,\qquad\lfloor \frac{\ell+m}{n}\rfloor=\lfloor\frac{\ell}{n}\rfloor+\lfloor\frac{(\ell)+m}{n }\rfloor,\qquad\ell,m\in\mathbb{Z},\] (F.8) where \((\ell)\) denotes the residue of an integer \(\ell\) modulo \(n\). 
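Both the floor-function identities (F.8) and the finite-product evaluation (F.6) of \([u;q]_{n}\) are elementary to verify numerically; a brief sketch with our own helper names and parameter choices:

```python
# Floor-function identities (F.8); the residue (l) is l % n, lying in [0, n)
for n in range(1, 6):
    for l in range(-25, 25):
        assert l // n + 1 == -((-l - 1) // n)
        for m in range(-25, 25):
            assert (l + m) // n == l // n + (l % n + m) // n

def qpoch(a, q, n):
    """(a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    r = 1.0
    for k in range(n):
        r *= 1.0 - a * q**k
    return r

# (F.6): [u; q]_infty / [q^n u; q]_infty = [u; q]_n, with the left-hand side
# evaluated through the finite product in the middle line of (F.6)
u, q, n = 1.7, 0.4, 5
lhs = qpoch(u**0.5, q**0.5, n) * qpoch(-q**((1 - n) / 2) * u**(-0.5), q**0.5, n)
rhs = u**(-n / 2) * q**(-n * (n - 1) / 4) * qpoch(u, q, n)   # [u; q]_n by (F.4)
assert abs(lhs - rhs) < 1e-10 * abs(rhs)
```

Note that Python's floor division `//` agrees with the mathematical floor also for negative integers, which is what (F.8) requires.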
**Proposition F.1**.: _The orbifolded Nekrasov factor (F.3) is given by the following formula;_ \[\mathsf{N}^{(k|n)}_{\lambda\mu} =\prod_{i\leq j}[q^{j-i}\kappa^{k+n\lfloor\frac{\lambda_{j+1}^{ \vee}+n-1-k-(\mu_{i}^{\vee})}{n}\rfloor-n\lfloor\frac{\mu_{i}^{\vee}}{n} \rfloor};\kappa^{n}]_{\lfloor\frac{\lambda_{j}^{\vee}+n-1-k-(\mu_{i}^{\vee})} {n}\rfloor-\lfloor\frac{\lambda_{j+1}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n}\rfloor}\] \[\times\prod_{i\leq j}[q^{i-j-1}\kappa^{k+n\lfloor\frac{\lambda_{i} ^{\vee}+n-1-k-(\mu_{j}^{\vee})}{n}\rfloor-n\lfloor\frac{\mu_{j}^{\vee}}{n} \rfloor};\kappa^{n}]_{\lfloor\frac{\mu_{j}^{\vee}+k+(-\lambda_{i}^{\vee})}{n} \rfloor-\lfloor\frac{\mu_{j+1}^{\vee}+k+(-\lambda_{i}^{\vee})}{n}\rfloor}\] \[=\prod_{i\leq j}[q^{j-i}\kappa^{k+n\lfloor\frac{\lambda_{j+1}^{ \vee}+n-1-k-(\mu_{i}^{\vee})}{n}\rfloor-n\lfloor\frac{\mu_{i}^{\vee}}{n} \rfloor};\kappa^{n}]_{\lfloor\frac{\lambda_{j}^{\vee}-1-k-(\mu_{i}^{\vee})}{ n}\rfloor-\lfloor\frac{\lambda_{j+1}^{\vee}-1-k-(\mu_{i}^{\vee})}{n}\rfloor}\] \[\times\prod_{i\leq j}[q^{i-j-1}\kappa^{k+n\lfloor\frac{\lambda_{i} ^{\vee}+n-1-k-(\mu_{j}^{\vee})}{n}\rfloor-n\lfloor\frac{\mu_{j}^{\vee}}{n} \rfloor};\kappa^{n}]_{\lfloor\frac{\mu_{j}^{\vee}+k+(-\lambda_{i}^{\vee})}{n} \rfloor-\lfloor\frac{\mu_{j+1}^{\vee}+k+(-\lambda_{i}^{\vee})}{n}\rfloor}.\] (F.9) Proof.: Since the selection rule on \(\ell\) is imposed by \(\equiv\) (the equality mod \(n\)), \(\mathsf{N}^{(k|n)}_{\lambda\mu}\) can be rewritten in terms of \(\kappa^{n}\)-shifted factorial \((z_{0};\kappa^{n})_{m}\). Let us check that the initial value \(z_{0}\) and the number \(m\) of factors for each shifted factorial agree with those in the formula (F.9). 
To compute them for the first shifted factorial, let us assume \[\mu_{i}^{\vee}=q_{1}n+r_{1},\qquad\lambda_{j+1}^{\vee}=q_{2}n+r_{2},\qquad \lambda_{j}^{\vee}=q_{3}n+r_{3},\qquad(0\leq r_{i}\leq n-1).\] (F.10) Namely we may write, for example, \(q_{1}=\lfloor\frac{\mu_{i}^{\vee}}{n}\rfloor\) and \(r_{1}=(\mu_{i}^{\vee})\). But in the following we will use \(q_{i}\) and \(r_{i}\) for simplicity of notation. The selection rule in (F.3) tells us that \[\ell\equiv k+r_{1}-r_{2}.\] (F.11) Since \(0\leq k,r_{1},r_{2}\leq n-1\), we have \(1-n\leq k+r_{1}-r_{2}\leq 2n-2\). Accordingly we can write the initial value of \(\ell\) uniformly as \[\ell_{0}=k+r_{1}-r_{2}-n\lfloor\frac{k+r_{1}-r_{2}}{n}\rfloor.\] (F.12) Hence, we see \[z_{0}=q^{j-i}\kappa^{\lambda_{j+1}^{\vee}-\mu_{i}^{\vee}+\ell_{0}}=q^{j-i} \kappa^{k+n\lfloor\frac{\lambda_{j+1}^{\vee}-\mu_{i}^{\vee}}{n}\rfloor-n \lfloor\frac{k+r_{1}-r_{2}}{n}\rfloor}.\] (F.13) Now let us count the number of factors in the first shifted factorial. Since the initial value of \(\ell\) is given by (F.12) and the upper bound is \(\lambda_{j}^{\vee}-\lambda_{j+1}^{\vee}-1\), we see \[m = \lfloor\frac{\lambda_{j}^{\vee}-\lambda_{j+1}^{\vee}-1-\ell_{0}}{ n}\rfloor+1\] (F.14) \[= (q_{3}-q_{2})+1+\lfloor\frac{r_{3}-r_{1}-k-1}{n}\rfloor+\lfloor \frac{k+r_{1}-r_{2}}{n}\rfloor\] \[= \lfloor\frac{\lambda_{j}^{\vee}-r_{1}-k-1}{n}\rfloor+\lfloor\frac {k+r_{1}-\lambda_{j+1}^{\vee}}{n}\rfloor+1.\] Since \[\lfloor\frac{\mu_{i}^{\vee}}{n}\rfloor=q_{1},\] (F.15) and \[\lfloor\frac{\lambda_{j+1}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n}\rfloor=\begin{cases} q_{2}+1,\qquad 0\leq r_{2}-r_{1}-1-k\\ q_{2},\qquad-n\leq r_{2}-r_{1}-1-k<0\\ q_{2}-1,\qquad r_{2}-r_{1}-1-k<-n\end{cases}\] (F.16) the power of \(\kappa\) is \[\begin{cases}\lambda_{j+1}^{\vee}-\mu_{i}^{\vee}+k+r_{1}-r_{2}+n,\qquad 1 \leq r_{2}-r_{1}-k\\ \lambda_{j+1}^{\vee}-\mu_{i}^{\vee}+k+r_{1}-r_{2},\qquad 1-n\leq r_{2}-r_{1}-k<1\\ \lambda_{j+1}^{\vee}-\mu_{i}^{\vee}+k+r_{1}-r_{2}-n,\qquad r_{2}-r_{1}-k<1-n.
\end{cases}\] (F.17) Comparing this with (F.12), we can confirm the agreement. For the number of factors, what we have to check is \[\lfloor\frac{k+r_{1}-\lambda_{j+1}^{\vee}}{n}\rfloor+1=-\lfloor\frac{\lambda _{j+1}^{\vee}-1-k-(\mu_{i}^{\vee})}{n}\rfloor.\] (F.18) But this is a special case of the first formula of (F.8). We have checked a complete agreement for the first factorial. Let us proceed to the second factorial. Concerning the initial condition, we can see that the formula for the second is obtained from the first simply by the replacement \[(\mu_{i}^{\vee},\lambda_{j+1}^{\vee})\longrightarrow(\mu_{j}^{\vee},\lambda_ {i}^{\vee})\] (F.19) and thus the agreement is proved by appropriate changes of variables. However, counting the number of factors is more involved. Let us assume \[\lambda_{i}^{\vee}=q_{4}n+r_{4},\qquad\mu_{j}^{\vee}=q_{5}n+r_{5},\qquad\mu_{ j+1}^{\vee}=q_{6}n+r_{6},\qquad(0\leq r_{i}\leq n-1).\] (F.20) Then a computation similar to the previous one gives the number of factors in the formula (F.3) \[m^{\prime}=\lfloor\frac{\mu_{j}^{\vee}-\mu_{j+1}^{\vee}-1-\ell_{0}^{\prime}} {n}\rfloor+1,\] (F.21) where \[\ell_{0}^{\prime}=k+r_{5}-r_{4}-n\lfloor\frac{k+r_{5}-r_{4}}{n}\rfloor.\] (F.22) Hence \[m^{\prime} = (q_{5}-q_{6})+1+\lfloor\frac{r_{4}-r_{6}-k-1}{n}\rfloor+\lfloor \frac{k+r_{5}-r_{4}}{n}\rfloor\] (F.23) \[= \lfloor\frac{r_{4}-\mu_{j+1}^{\vee}-k-1}{n}\rfloor+\lfloor\frac {\mu_{j}^{\vee}+k-r_{4}}{n}\rfloor+1.\] We note that \((-\lambda_{i}^{\vee})=n-r_{4}\) for \(r_{4}\neq 0\), but \((-\lambda_{i}^{\vee})=-r_{4}=0\) for \(r_{4}=0\). Such a difference does not matter, however, since we only take differences of the floor function. Hence we can safely use \((-\lambda_{i}^{\vee})=-r_{4}\), and what we have to check is \[\lfloor\frac{r_{4}-\mu_{j+1}^{\vee}-k-1}{n}\rfloor+1=-\lfloor\frac{\mu_{j+1}^{ \vee}+k+(-\lambda_{i}^{\vee})}{n}\rfloor,\] (F.24) which follows from the first formula of (F.8).
\(\Box\) We can also write the formula of Proposition F.1 in terms of the ratio of the regularized infinite products \([u;q]_{\infty}\) defined by (F.5). Using the formulas (F.6) and (F.8), we find \[\mathsf{N}_{\lambda,\mu}^{(k|n)}(u|q,\kappa) = \frac{\prod_{j>i\geq 1}[uq^{j-i-1}\kappa^{k-n\lfloor\frac{\mu_{i}^{ \vee}}{n}\rfloor+n\lfloor\frac{\lambda_{j}^{\vee}}{n}\rfloor-n\lfloor\frac{-( \lambda_{j}^{\vee})+k+(\mu_{i}^{\vee})}{n}\rfloor};\kappa^{n}]_{\infty}}{ \prod_{j\geq i\geq 1}[uq^{j-i}\kappa^{k-n\lfloor\frac{\mu_{i}^{ \vee}}{n}\rfloor+n\lfloor\frac{\lambda_{j}^{\vee}}{n}\rfloor-n\lfloor\frac{-( \lambda_{j}^{\vee})+k+(\mu_{i}^{\vee})}{n}\rfloor};\kappa^{n}]_{\infty}}\] \[\times\frac{\prod_{i\geq j\geq 1}[uq^{j-i-1}\kappa^{k+n\lfloor\frac{ \lambda_{j}^{\vee}}{n}\rfloor-n\lfloor\frac{\mu_{i}^{\vee}}{n}\rfloor-n \lfloor\frac{(\mu_{i}^{\vee})+k-(\lambda_{j}^{\vee})}{n}\rfloor};\kappa^{n}] _{\infty}}{\prod_{i>j\geq 1}[uq^{j-i}\kappa^{k+n\lfloor\frac{\lambda_{j}^{ \vee}}{n}\rfloor-n\lfloor\frac{\mu_{i}^{\vee}}{n}\rfloor-n\lfloor\frac{(\mu_ {i}^{\vee})+k-(\lambda_{j}^{\vee})}{n}\rfloor};\kappa^{n}]_{\infty}}\] \[= \prod_{i,j=1}^{\infty}\frac{[uq^{j-i-1}\kappa^{k-n\lfloor\frac{\mu _{i}^{\vee}+k-\lambda_{j}^{\vee}}{n}\rfloor};\kappa^{n}]_{\infty}}{[uq^{j-i} \kappa^{k-n\lfloor\frac{\mu_{i}^{\vee}+k-\lambda_{j}^{\vee}}{n}\rfloor}; \kappa^{n}]_{\infty}}\frac{[uq^{j-i}\kappa^{k};\kappa^{n}]_{\infty}}{[uq^{j-i- 1}\kappa^{k};\kappa^{n}]_{\infty}}.\] (F.25) For later convenience, let us express the orbifolded Nekrasov factor in terms of \(t=\kappa^{-n}\)-shifted factorials. 
We have \[\mathsf{N}_{\lambda,\mu}^{(k|n)}(u|q,t) = \prod_{j\geq i\geq 1}[uq^{j-i}t^{-\frac{k}{n}+\lfloor\frac{\mu_{i}^{ \vee}}{n}\rfloor-\lfloor\frac{\lambda_{j+1}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n} \rfloor};t^{-1}]_{\lfloor\frac{\lambda_{j}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n} \rfloor-\lfloor\frac{\lambda_{j+1}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n}\rfloor}\] \[\times\prod_{j\geq i\geq 1}[uq^{i-j-1}t^{-\frac{k}{n}-\lfloor \frac{\lambda_{i}^{\vee}}{n}\rfloor+\lfloor\frac{\mu_{j}^{\vee}+k-(\lambda_{i}^ {\vee})}{n}\rfloor};t^{-1}]_{\lfloor\frac{\mu_{j}^{\vee}+k-(\lambda_{i}^{\vee} )}{n}\rfloor-\lfloor\frac{\mu_{j+1}^{\vee}+k-(\lambda_{i}^{\vee})}{n}\rfloor}\] \[= \prod_{j\geq i\geq 1}[uq^{j-i}t^{1-\frac{k}{n}+\lfloor\frac{\mu_{i}^{ \vee}}{n}\rfloor-\lfloor\frac{\lambda_{j}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n} \rfloor};t]_{\lfloor\frac{\lambda_{j}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n}\rfloor -\lfloor\frac{\lambda_{j+1}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n}\rfloor}\] \[\times\prod_{j\geq i\geq 1}[uq^{i-j-1}t^{1-\frac{k}{n}-\lfloor \frac{\lambda_{i}^{\vee}}{n}\rfloor+\lfloor\frac{\mu_{j+1}^{\vee}+k-(\lambda_{i }^{\vee})}{n}\rfloor};t]_{\lfloor\frac{\mu_{j}^{\vee}+k-(\lambda_{i}^{\vee})}{n }\rfloor-\lfloor\frac{\mu_{j+1}^{\vee}+k-(\lambda_{i}^{\vee})}{n}\rfloor}\] \[= \prod_{j\geq i\geq 1}\frac{[uq^{j-i}t^{1-\frac{k}{n}+\lfloor\frac{ \mu_{i}^{\vee}}{n}\rfloor-\lfloor\frac{\lambda_{j}^{\vee}+n-1-k-(\mu_{i}^{\vee} )}{n}\rfloor};t]_{\infty}}{[uq^{j-i}t^{1-\frac{k}{n}+\lfloor\frac{\mu_{i}^{ \vee}}{n}\rfloor-\lfloor\frac{\lambda_{j+1}^{\vee}+n-1-k-(\mu_{i}^{\vee})}{n} \rfloor};t]_{\infty}}\] \[\times\prod_{j\geq i\geq 1}\frac{[uq^{i-j-1}t^{1-\frac{k}{n}-\lfloor\frac{ \lambda_{i}^{\vee}}{n}\rfloor+\lfloor\frac{\mu_{j+1}^{\vee}+k-(\lambda_{i}^{\vee })}{n}\rfloor};t]_{\infty}}{[uq^{i-j-1}t^{1-\frac{k}{n}-\lfloor\frac{\lambda_{i} ^{\vee}}{n}\rfloor+\lfloor\frac{\mu_{j}^{\vee}+k-(\lambda_{i}^{\vee})}{n} \rfloor};t]_{\infty}},\] (F.26) where we have used \[[u;t^{-1}]_{n}=[ut^{-n+1};t]_{n},\qquad n>0.\] (F.27) Finally the same manipulation as in the last equality of (F.25) gives \[\mathsf{N}_{\lambda,\mu}^{(k|n)}(u|q,t) =
\prod_{i,j=1}^{\infty}\frac{[uq^{j-i}t^{1-\frac{k}{n}+\lfloor\frac {\mu_{i}^{\vee}+k-\lambda_{j}^{\vee}}{n}\rfloor};t]_{\infty}}{[uq^{j-i-1}t^{1 -\frac{k}{n}+\lfloor\frac{\mu_{i}^{\vee}+k-\lambda_{j}^{\vee}}{n}\rfloor};t] _{\infty}}\frac{[uq^{j-i-1}t^{1-\frac{k}{n}};t]_{\infty}}{[uq^{j-i}t^{1-\frac {k}{n}};t]_{\infty}}.\] (F.28) When one of the partitions is empty, \(\mathsf{N}_{\lambda,\mu}^{(k|n)}\) simplifies as follows; \[\mathsf{N}_{\lambda,\emptyset}^{(k|n)}(u|q,\kappa) = \frac{\prod_{j\geq i\geq 1}[uq^{j-i}\kappa^{k};\kappa^{n}]_{ \lfloor\frac{\lambda_{j}^{\vee}+n-1-k}{n}\rfloor}}{\prod_{j \geq i\geq 1}[uq^{(j+1)-(i+1)}\kappa^{k};\kappa^{n}]_{\lfloor\frac{ \lambda_{j+1}^{\vee}+n-1-k}{n}\rfloor}}=\prod_{i\geq 1}[uq^{i-1}\kappa^{k}; \kappa^{n}]_{\lfloor\frac{\lambda_{i}^{\vee}+n-1-k}{n}\rfloor}\] (F.29) \[\mathsf{N}_{\emptyset,\mu}^{(k|n)}(u|q,\kappa) = \frac{\prod_{j\geq i\geq 1}[uq^{i-j-1}\kappa^{k-n\lfloor \frac{\mu_{j}^{\vee}+k}{n}\rfloor};\kappa^{n}]_{\lfloor\frac{\mu_{j}^{\vee}+k }{n}\rfloor}}{\prod_{j\geq i\geq 1}[uq^{i-(j+1)}\kappa^{k-n\lfloor\frac{\mu_{j+1}^{ \vee}+k}{n}\rfloor};\kappa^{n}]_{\lfloor\frac{\mu_{j+1}^{\vee}+k}{n}\rfloor}}= \prod_{i\geq 1}[uq^{-i}\kappa^{k-n\lfloor\frac{\mu_{i}^{\vee}+k}{n}\rfloor}; \kappa^{n}]_{\lfloor\frac{\mu_{i}^{\vee}+k}{n}\rfloor}.\] ## Appendix G (Anti-)symmetrization in a factorized form A basis of the space of cohomology classes for the \(N\)-tuple Jackson integral is defined by using the anti-symmetrization of the variables \(z=(z_{1},\ldots,z_{N})\) with the weight function \(\Delta(q,z)\) (see (3.2)). In general the result of (anti-)symmetrization is expanded in terms of symmetric polynomials (divided by the Vandermonde determinant for anti-symmetrization). However, from the viewpoint of the correspondence between the Jackson integral and the Bethe vector for a quantum integrable system [48], what we need is an (anti-)symmetrization that keeps the factorization.
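Proposition G.1 below can be checked numerically for small \(N\). The sketch assumes the convention \(\Delta(q,z)=\prod_{i<j}(z_i-q^{-1}z_j)\) for the deformed Vandermonde of (3.2), which matches the cross factors and the \(q^{-1}\)-factorials appearing in (G.1); the sample points and test functions are arbitrary choices:

```python
import itertools

q = 1.7                                            # generic parameter (assumption)
z = [1.3 + 0.2j, 2.1 - 0.5j, 0.7 + 1.1j]           # generic sample points, N = 3
f = [lambda x: 1.0 / (1.0 - 0.3 * x),              # arbitrary f_1 (block size k_1 = 1)
     lambda x: x * x + 2.0]                        # arbitrary f_2 (block size k_2 = 2)

def delta(p, zz):
    # assumed convention: Delta(p, z) = prod_{i<j} (z_i - z_j / p)
    out = 1.0
    for i in range(len(zz)):
        for j in range(i + 1, len(zz)):
            out *= zz[i] - zz[j] / p
    return out

def sign(perm):
    # permutation sign via inversion count
    inv = sum(perm[i] > perm[j] for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

# LHS of (G.1): antisymmetrize f1(z1) f2(z2) f2(z3) Delta(q, z), then divide by Delta(1, z)
lhs = sum(sign(s) * f[0](z[s[0]]) * f[1](z[s[1]]) * f[1](z[s[2]]) * delta(q, [z[i] for i in s])
          for s in itertools.permutations(range(3))) / delta(1.0, z)

# RHS of (G.1): [1]_{q^{-1}}! [2]_{q^{-1}}! times the sum over partitions I1, I2
rhs = 0.0
for i1 in range(3):
    I2 = [j for j in range(3) if j != i1]
    term = f[0](z[i1]) * f[1](z[I2[0]]) * f[1](z[I2[1]])
    for j in I2:
        term *= (z[i1] - z[j] / q) / (z[i1] - z[j])   # cross factor (z_i - q^{-1} z_j)/(z_i - z_j)
    rhs += term
rhs *= 1.0 * (1.0 + 1.0 / q)                          # [1]! = 1, [2]_{q^{-1}}! = 1 + q^{-1}
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```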
One can rephrase it as a summation over partitions of the index set \([N]:=\{1,2,\ldots,N\}\). We give a proof of the following well-known formula for completeness. **Proposition G.1**.: _Let \(I_{1}\sqcup\cdots\sqcup I_{s}\) be a partition of \(\{1,2,\ldots,N\}\) with \(|I_{a}|=k_{a}\geq 1\) and \(\mathcal{A}\) be the anti-symmetrization operator of \(z=(z_{1},\ldots,z_{N})\). For any functions \(f_{a}(x)\) (\(1\leq a\leq s\)), we have_ \[\begin{split}&\frac{1}{\Delta(1,z)}\mathcal{A}\Big{(}\prod_{i_{1}=1} ^{k_{1}}f_{1}(z_{i_{1}})\prod_{i_{2}=1}^{k_{2}}f_{2}(z_{k_{1}+i_{2}})\cdots\prod _{i_{s}=1}^{k_{s}}f_{s}(z_{k_{1}+\cdots+k_{s-1}+i_{s}})\Delta(q,z)\Big{)}\\ &=\prod_{a=1}^{s}[k_{a}]_{q^{-1}}!\sum_{I_{1}\sqcup\cdots\sqcup I_ {s}}\Big{\{}\prod_{a=1}^{s}\prod_{i_{a}\in I_{a}}f_{a}(z_{i_{a}})\prod_{1\leq a <b\leq s}\prod_{i\in I_{a}}\prod_{j\in I_{b}}\frac{z_{i}-q^{-1}z_{j}}{z_{i}-z_{ j}}\Big{\}},\end{split}\] (G.1) _where the sum \(\sum_{I_{1}\sqcup\cdots\sqcup I_{s}}\) on the right hand side is taken over the partitions \(I_{1}\sqcup\cdots\sqcup I_{s}\)._ Proof.: For a permutation \(\sigma\in S_{N}\), we define \(I_{a}=\{\sigma(k_{1}+\cdots+k_{a-1}+i)|\ 1\leq i\leq k_{a}\}\). Since \[\frac{1}{\Delta(1,z)}\mathcal{A}\Big{(}F(z)\Delta(q,z)\Big{)}=\sum_{\sigma \in S_{N}}\sigma\Big{(}F(z)\frac{\Delta(q,z)}{\Delta(1,z)}\Big{)},\] for any \(F(z)=F(z_{1},\ldots,z_{N})\), we can recast the left hand side of (G.1) as \[\sum_{\sigma\in S_{N}}\Big{(}\prod_{a=1}^{s}\prod_{i\in I_{a}}f_{a}(z_{i}) \frac{\Delta(q,z_{I_{a}})}{\Delta(1,z_{I_{a}})}\prod_{1\leq a<b\leq s}\prod_{i \in I_{a}}\prod_{j\in I_{b}}\frac{z_{i}-q^{-1}z_{j}}{z_{i}-z_{j}}\Big{)}.\] Let \(S_{I}\) be the symmetric group of a set \(I\).
Then according to the coset decomposition \(S_{N}=\underset{I_{1}\sqcup\cdots\sqcup I_{s}}{\sqcup}S_{I_{1}}\times\cdots \times S_{I_{s}}\), we take the sum in the following manner; \[\sum_{\sigma\in S_{N}}=\sum_{I_{1}\sqcup\cdots\sqcup I_{s}}\sum_{w_{1}\in S_{ I_{1}}}\cdots\sum_{w_{s}\in S_{I_{s}}}.\] By applying the formula ([37]: Chap.III eq.(1.4)) \[\sum_{w_{a}\in S_{I_{a}}}w_{a}\Big{(}\frac{\Delta(q,z_{I_{a}})}{\Delta(1,z_{I_ {a}})}\Big{)}=[k_{a}]_{q^{-1}}!,\] for each sum over \(S_{I_{a}}\), we obtain the desired result (G.1). ## Appendix H Shakirov's equation as a coupled system Consider Shakirov's equation in the form \[\psi= \mathcal{H}_{\mathrm{S}}T_{t,\Lambda}^{-1}T_{qtQ,x}^{-1}\psi,\] (H.1) \[\mathcal{H}_{\mathrm{S}}= \frac{1}{\varphi(qx)\varphi(\frac{\Lambda}{x})}\mathcal{B}\frac{ \varphi(\Lambda)\varphi(q^{-1}d_{1}d_{2}d_{3}d_{4}\Lambda)}{\varphi(-d_{1}x) \varphi(-d_{2}x)\varphi(-d_{3}\frac{\Lambda}{x})\varphi(-d_{4}\frac{\Lambda}{x })}\mathcal{B}\frac{1}{\varphi(\frac{d_{1}d_{2}}{q}x)\varphi(d_{3}d_{4}\frac{ \Lambda}{x})}.\] It has a unique formal series solution normalized as \(\psi=1+\mathcal{O}(x,\frac{\Lambda}{x})\). In terms of the affine Laumon partition function \(\mathcal{Z}_{\mathrm{AL}}\), the solution is given by \[\psi(\Lambda,x)=\mathcal{Z}_{\mathrm{AL}}\left(\begin{array}{c}u_{1},u_{2}\\ v_{1},v_{2}\\ w_{1},w_{2}\end{array}\Bigg{|}x_{1},\frac{\Lambda_{1}}{x_{1}}\Bigg{|}q,t\right)\] \[=\mathcal{Z}_{\mathrm{AL}}\left(\begin{array}{c}\frac{qQ}{d_{3}}, \frac{q\kappa}{d_{1}}\\ 1,\frac{Q}{\kappa d_{4}}\end{array}\right|-\frac{\sqrt{Qd_{1}d_{2}}}{\kappa}x,- \sqrt{\frac{d_{3}d_{4}}{q^{2}Q}}\frac{\Lambda}{x}\Bigg{|}\,q,\kappa^{-2} \right).\] (H.2) We will give a coupled form of Shakirov's equation and its solution. To do this, we define operators \(T\) and \(K\) by23 Footnote 23: One may use \(d_{1}\) (or \(d_{3}\)) instead of \(d_{2}\) (or \(d_{4}\)). 
\[T= \left\{d_{2}\mapsto\frac{q}{tQd_{2}},\quad d_{4}\mapsto\frac{qQ} {d_{4}},\quad\Lambda\mapsto\frac{d_{2}d_{4}\Lambda}{q},\quad x\mapsto-\frac{d_ {2}x}{q}\right\},\] \[K= \frac{1}{\varphi(qx)\varphi(\frac{\Lambda}{x})}\mathcal{B}\frac{ 1}{\varphi(-d_{1}x)\varphi(-d_{3}\frac{\Lambda}{x})}.\] (H.3) On the parameters \(u_{i},v_{i},w_{i},x_{1},\Lambda_{1}\), the operator \(T\) acts as \[T=\left\{w_{i}\mapsto\frac{tQ}{qw_{i}},\quad x_{1}\mapsto-\frac{x_{1}}{\sqrt{ qtQ}},\quad\Lambda_{1}\mapsto\frac{\Lambda_{1}}{\sqrt{t}}\right\}.\] (H.4) In our previous paper [10], we used the renormalized \(q\)-Borel transformation \(\widetilde{\mathcal{B}}:=T^{-1}_{(qt^{1/2}Q)^{1/2},x}\mathcal{B}\) in writing down the coupled system, in order to make the correspondence with the \(qq\)-Painlevé VI equation clear. If this is taken into account, the action of \(T\) on the expansion parameters becomes \(x_{1}=x\mapsto-t^{-1/4}x_{1}\) and hence \(x_{2}=\Lambda_{1}/x_{1}\mapsto-t^{-1/4}x_{2}\). The \(T\) action is the same for \(x_{1}\) and \(x_{2}\) and coincides with the action of the shift operator \(\widetilde{T}_{\mathsf{p},b}\) with \(\mathsf{p}=t^{1/4}\), which is regarded as a square root of the time evolution of the parameters \(b_{i}\) in the \(qq\)-Painlevé VI equation (see eq. (6.5) in [10]).
**Proposition H.1**.: _The normalized solution \(\psi\) of (H.1) and its parameter transform \(\chi:=T\psi\) satisfy the following coupled system of equations;_ \[\begin{cases}\psi=gK\chi,\\ \chi=TgKT\psi,\end{cases}\] (H.5) _where_ \[g=\frac{(\frac{td_{2}d_{4}}{q}\Lambda,d_{1}d_{3}\Lambda;q,t)_{\infty}}{(t \Lambda,\frac{td_{1}d_{2}d_{3}d_{4}}{q}\Lambda;q,t)_{\infty}}.\] (H.6) _Conversely the equation (H.1) follows from the coupled system (H.5)._ Proof.: From (H.1) and (H.3), one can verify the following relations: \[T^{2}=T^{-1}_{t,\Lambda}T^{-1}_{qQt,x},\] (H.7) \[TK=\frac{1}{\varphi(-d_{2}x)\varphi(-d_{4}\frac{\Lambda}{x})} \mathcal{B}\frac{1}{\varphi(\frac{d_{1}d_{2}}{q}x)\varphi(d_{3}d_{4}\frac{ \Lambda}{x})}T,\] (H.8) \[KTKT=K\frac{1}{\varphi(-d_{2}x)\varphi(-d_{4}\frac{\Lambda}{x})}\mathcal{B}\frac{ 1}{\varphi(\frac{d_{1}d_{2}}{q}x)\varphi(d_{3}d_{4}\frac{\Lambda}{x})}T^{2}= \frac{1}{\varphi(\Lambda)\varphi(\frac{d_{1}d_{2}d_{3}d_{4}}{q}\Lambda)} \mathcal{H}_{\mathrm{S}}T^{2},\] (H.9) \[KT\mathcal{H}_{\mathrm{S}}T^{2}=\varphi(\frac{d_{2}d_{4}}{q}\Lambda)\varphi( \frac{d_{1}d_{3}}{t}\Lambda)(KT)^{3}=\frac{\varphi(\frac{d_{2}d_{4}}{q}\Lambda) \varphi(\frac{d_{1}d_{3}}{t}\Lambda)}{\varphi(\Lambda)\varphi(\frac{d_{1}d_{2} d_{3}d_{4}}{q}\Lambda)}\mathcal{H}_{\mathrm{S}}T^{2}KT.\] (H.10) From (H.7), (H.10) and (H.1), we have \[KT\psi=KT\mathcal{H}_{\mathrm{S}}T^{2}\psi=\frac{\varphi(\frac{d_{2}d_{4}}{q} \Lambda)\varphi(\frac{d_{1}d_{3}}{t}\Lambda)}{\varphi(\Lambda)\varphi(\frac{d_ {1}d_{2}d_{3}d_{4}}{q}\Lambda)}\mathcal{H}_{\mathrm{S}}T^{2}KT\psi.\] (H.11) Noting that \(T^{-1}_{t,\Lambda}(g)=\frac{\varphi(\frac{d_{2}d_{4}}{q}\Lambda)\varphi( \frac{d_{1}d_{3}}{t}\Lambda)}{\varphi(\Lambda)\varphi(\frac{d_{1}d_{2}d_{3}d_{4 }}{q}\Lambda)}g\), we see the function \(gKT\psi\) satisfies the same equation as (H.1) with the same initial condition as \(\psi\). By the uniqueness of the solution, we obtain the first relation \(\psi=gKT\psi=gK\chi\). 
The second relation is the \(T\)-transform of the first one. To check the converse we note that \[g\cdot T\cdot g=\frac{(\Lambda,\frac{d_{1}d_{2}d_{3}d_{4}}{q}\Lambda;q,t)_{ \infty}}{(t\Lambda,\frac{td_{1}d_{2}d_{3}d_{4}}{q}\Lambda;q,t)_{\infty}}\cdot T =\varphi(\Lambda)\varphi(\frac{d_{1}d_{2}d_{3}d_{4}}{q}\Lambda)\cdot T.\] (H.12) Hence \[\psi=g\cdot KT\cdot g\cdot KT\psi=\varphi(\Lambda)\varphi(\frac{d_{1}d_{2}d_ {3}d_{4}}{q}\Lambda)\cdot KTKT\psi=\mathcal{H}_{\mathrm{S}}T^{2}\psi.\] (H.13)
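The involution property (H.7) is a purely algebraic statement about the substitution (H.3) and can be verified mechanically; a quick sketch with generic rational parameter values (the concrete values are arbitrary assumptions of the check):

```python
from fractions import Fraction

# generic nonzero parameter values (arbitrary assumptions)
q, t, Q = Fraction(2), Fraction(3), Fraction(5)
d2, d4, Lam, x = Fraction(7), Fraction(11), Fraction(13), Fraction(17)

def T(d2, d4, Lam, x):
    # the substitution (H.3): d2 -> q/(t Q d2), d4 -> q Q/d4,
    # Lambda -> d2 d4 Lambda/q, x -> -d2 x/q
    return q / (t * Q * d2), q * Q / d4, d2 * d4 * Lam / q, -d2 * x / q

d2_, d4_, Lam_, x_ = T(*T(d2, d4, Lam, x))
# (H.7): T^2 = T_{t,Lambda}^{-1} T_{qtQ,x}^{-1}, i.e. Lambda -> Lambda/t and
# x -> x/(q t Q), with d2 and d4 left fixed
assert (d2_, d4_) == (d2, d4)
assert Lam_ == Lam / t
assert x_ == x / (q * t * Q)
```

Using exact rationals (`fractions.Fraction`) makes the check an identity rather than a floating-point approximation.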
2309.03506
Towards Robust Natural-Looking Mammography Lesion Synthesis on Ipsilateral Dual-Views Breast Cancer Analysis
In recent years, many mammographic image analysis methods have been introduced for improving cancer classification tasks. Two major issues of mammogram classification tasks are leveraging multi-view mammographic information and class-imbalance handling. For the first problem, many multi-view methods have been released that concatenate features of two or more views for the training and inference stages. Having said that, most existing multi-view methods are not explainable in terms of feature fusion and treat all views equally for diagnosis. Our work aims to propose a simple but novel method for enhancing the examined view (main view) by leveraging low-level feature information from the auxiliary view (ipsilateral view) before learning the high-level features that contain the cancerous information. For the second issue, we also propose a simple but novel malignant mammogram synthesis framework for upsampling minor-class samples. Our easy-to-implement, no-training framework eliminates the current limitations of the CutMix algorithm: unreliable synthesized images with randomly pasted patches, hard-contour problems, and domain shift problems. Our results on the VinDr-Mammo and CMMD datasets show the effectiveness of our two new frameworks for both multi-view training and synthesizing mammographic images, outperforming the previous conventional methods in our experimental settings.
Thanh-Huy Nguyen, Quang Hien Kha, Thai Ngoc Toan Truong, Ba Thinh Lam, Ba Hung Ngo, Quang Vinh Dinh, Nguyen Quoc Khanh Le
2023-09-07T06:33:30Z
http://arxiv.org/abs/2309.03506v1
# Towards Robust Natural-Looking Mammography Lesion Synthesis on Ipsilateral Dual-Views Breast Cancer Analysis ###### Abstract In recent years, many mammographic image analysis methods have been introduced for improving cancer classification tasks. Two major issues of mammogram classification tasks are leveraging multi-view mammographic information and class-imbalance handling. For the first problem, many multi-view methods have been released that concatenate features of two or more views for the training and inference stages. Having said that, most existing multi-view methods are not explainable in terms of feature fusion and treat all views equally for diagnosis. Our work aims to propose a simple but novel method for enhancing the examined view (main view) by leveraging low-level feature information from the auxiliary view (ipsilateral view) before learning the high-level features that contain the cancerous information. For the second issue, we also propose a simple but novel malignant mammogram synthesis framework for upsampling minor-class samples. Our easy-to-implement, no-training framework eliminates the current limitations of the CutMix algorithm: unreliable synthesized images with randomly pasted patches, hard-contour problems, and domain shift problems. Our results on the VinDr-Mammo and CMMD datasets show the effectiveness of our two new frameworks for both multi-view training and synthesizing mammographic images, outperforming the previous conventional methods in our experimental settings. + Footnote †: Thanh-Huy Nguyen and Quang Hien Kha contributed equally. ## 1 Introduction Breast cancer has one of the highest rates of mortality and incidence among women worldwide, making it one of the most common causes of cancer death. Cancer detection, particularly at an early stage, is crucial in screening mammogram exams.
Both the craniocaudal (CC) view and the mediolateral oblique (MLO) view, which are top-down and side views of the breast, respectively, can be used to classify each patient's breasts. Radiologists frequently examine both views of the same breast (ipsilateral views) and the same view of both breasts (bilateral views) to make a sound, intuitive decision. Figure 1: Our proposed pipeline for training and synthesizing mammographic images. The two stages are supervised training on ipsilateral-view mammograms and a synthesis framework that takes the saliency map and malignant region annotations. Based on that, prior works can be classified into various groups: ipsilateral-only based, bilateral-only based, and ipsilateral-bilateral combination. Among recent bilateral-only methods, Liu et al. [16] enhanced mammogram mass detection using a contrasted bilateral network (CBN). Furthermore, Zhao et al. [32] applied a well-known attention module across adaptive spatial and channel dimensions to produce the classification. However, those strategies still struggle with the conflict between the two breast sides, which causes noise in the model, because a patient might have the disease in one breast but not the other. Another group, the ipsilateral-bilateral combination approach, uses three or four views as inputs, which creates a full overview of the breasts. Liu et al. [14, 15] achieved this by proposing a remarkable Anatomy-aware Graph Convolutional Network (AGN) that relies on the mass shape and region to construct the graphical correspondence among different mammographic views. Although the performance of these models is noteworthy, they require massive computation and thus can hardly be embedded in hospital facilities. Similarly, Nguyen et al. [22] proposed a four-view input where each view is learned independently to extract features that are then fed into a Light-GBM [10] classifier for prediction.
Afterward, the final result is obtained by a max operation over the ipsilateral view sides, which can be inaccurate and lead to a poor learning process. Furthermore, mammogram synthesis and augmentation techniques are among the most promising approaches for handling class imbalance. MixUp [31] blends two images, reducing the proportion of informative pixels, to produce a new image, and shows impressive results in many medical image applications. Similar to MixUp, the CutMix [30] algorithm is a simple augmentation method that swaps patches between two images. However, both of these methods might cause label conflicts, because randomly choosing a patch in a mammographic image can put two different classes in the same image. MixUp itself performs image-level mixing between two images without preserving semantic labels for mammographic cancer classification. Besides, the region generated by CutMix is random and might or might not contain cancerous information when copy-pasting. CutMix might also create untrustworthy samples due to the solid rectangular boundary of pasted patches and differences in style space. To take full advantage, we propose a Dual Ipsilateral Views Fusion Network (DIVF-Net) for mammographic image classification. This network can be separated into three parts: Low-Level Feature Blocks, Features Fusion Blocks, and High-Level Feature Blocks. Our network can leverage low-level information such as the shape, contour, and density of the breasts. DIVF-Net combines the two low-level features to extract the relevant information before using it to enhance the main-view feature. The high-level part of DIVF-Net aims to focus on the lesions, which carry highly semantic information for cancer classification. Additionally, a Malignant Lesions Synthesis Framework is also proposed in this paper, which overcomes the current limitations of the CutMix and MixUp algorithms.
It includes three stages: Region Selection, Domain Adaptation, and Soft Contour Transformation. The framework carefully picks the radiologist-annotated region to replace a region containing benign information. The rest of the framework aims to close the gap between source and target patches before blending them with a gradient-contour MixUp algorithm. In summary, the main contributions of our work are as follows: * A novel multi-view network DIVF-Net with two types of fusion operations that leverages information from both CC and MLO views for accurate cancer classification. * A new robust mammogram synthesis framework that replaces a benign region with an informative malignant region. The created patches are also smoothed and Fourier-adapted before replacing the indicated regions. * Experimental results and ablation studies based on a combination of these two show the robustness and generalizability across multiple fusion settings and datasets. ## 2 Related Work **Multi-view Network:** Compared with 2D views, 3D objects carry much more knowledge to guide the model, as described in visual understanding [24, 6] and stereo vision [3, 4, 23]. In visual understanding, several cameras are set around a target object to model region-to-region correspondences among views from various angles. Each view is embedded by a shared-weight Convolutional Neural Network (CNN). In stereo vision, two cameras are placed close together. This approach is mainly used in self-driving cars, which perform depth estimation via disparity map fusion. Depth estimation helps the system know how close the objects ahead are, so it can immediately avoid collisions and keep a safe distance. Multi-view-based approaches [22, 26] collect features from various 2D views to represent the 3D object. First, each view is fed into a feature extractor to learn an appropriate embedding. Then, the embeddings are fused into a 3D representation.
Inspired by that, mammographic screening also has a multi-view imaging process that can efficiently represent the 3D object. _Wu et al._ [28] proposed a four-view mammogram network to predict malignant versus non-malignant cases. They aggregate the bilateral views at the first stage and then apply a softmax layer. Finally, they presented four strategies combining several layers. _Khan et al._ [19] enhanced mammogram image preprocessing and decreased the computational complexity of the backbone. They directly extract a mass via augmented ROIs and modify a small VGGNet-like architecture for the feature extraction stage. In general, ipsilateral views consist of the CC and MLO views of the same breast side, which is advantageous for extracting rich information for 3D medical image analysis. Thus, fusing the ipsilateral views adds global features from the fusion operation to the local features from the individual views. **Medical Image Synthesis/Augmentation:** Augmentation is one of the most fundamental procedures for synthesizing training data for better generalizability. Existing works on data augmentation [13, 30, 31] synthesize two images into soft images. Thus, the generated training images direct the model to concentrate more on shape than texture, which improves classification and object identification performance. CutOut [5] simulates object occlusion, a common issue in many computer vision tasks, by randomly choosing a patch of a defined size to remove. CutMix [30] instead fills the binary-masked region with content from another image and mixes the labels via the combination ratio. MixUp [31] produces virtual feature-target vectors by sampling from the mixup vicinal distribution. In recent years, Generative Adversarial Networks (GANs) [7] have become a well-known deep-learning-based medical image synthesis framework.
For the synthesis of mammograms, Dimitrios Korkinof et al. [12] employ a progressive GAN (PGGAN), achieving high resolutions and positive outcomes when comparing the low-level pixel distributions of real and artificial images. Rui Man et al.'s research [18] also focuses on creating synthetic samples, but in this instance they create patches of a histopathological image. This AnoGAN (Anomaly Detection GAN) has many benefits for training classification systems. Xiangyuan Ma et al.'s [17] research focuses on creating samples of mammogram lesion segmentation masks. This helps overcome image labeling, one of the most difficult tasks involved in dataset construction. Having said that, the biggest concern with GAN-based approaches is the realism and trustworthiness of the synthesized samples. Using synthesized mammograms for training and testing may not be practical in real-world applications. ## 3 Methodology ### Dual Ipsilateral Views Fusion Network This work aims to exploit the dual-view mammograms of the same breast using a newly proposed network, DIVF-Net. Our network takes two ipsilateral views (CC and MLO) of a single breast to assess malignancy. For each patient, the model takes one view of the breast as the examined view, and the other as an auxiliary view for support. As shown in Fig. 2, both the examined view and the auxiliary view are fed into the Low-Level Features Blocks (the first half of a popular backbone like ResNet [8]). Figure 2: Dual Ipsilateral Views Fusion Network (DIVF-Net) for Mammographic Cancer Diagnosis. This framework consists of three stages: Low-Level Features Blocks (left-top), Features Fusion Blocks (middle-top), and High-Level Features Blocks (right-top). In the Ipsilateral Views Fusion (IVF) Block, two operations are used to fuse the examined and auxiliary features: average and concatenate. After going through the IVF Block, the fused features are combined with the examined features to improve the performance.
Then, the output features of these views are combined by the IVF Block. The IVF Block includes 4 components: an aggregation mechanism, a 2D convolutional layer, a batch normalization layer [9], and a ReLU activation function [1]. In the aggregation part, there are two ways to combine the two feature maps: Average and Concatenation, shown in 2.1 and 2.2 of Fig. 2. For average aggregation, the two feature maps are combined by an element-wise average before being fed to the other three components of the IVF Block. For concatenate aggregation, the two input feature maps are stacked along the depth dimension, and the convolutional layer then maps the doubled-depth feature map back to a single-depth feature map. The batch normalization layer and ReLU activation function remain the same as in average aggregation, normalizing the features. To enhance the examined view with informative features, the output feature map of the IVF Block and the examined-view feature map are combined by element-wise addition. The High-Level Features Blocks (the last half of the backbone) take the enhanced feature maps to learn high-level information such as abnormalities. Subsequently, we feed them into fully connected layers, followed by a softmax layer, to obtain the final binary classification. This framework's concept is based on how radiologists examine mammograms for diagnosis. Instead of treating the two ipsilateral views equally for cancer diagnosis, the model distinguishes one as the primary view and the other as a support view. As shown in Part 2 of Fig. 2, the examined-view feature and the fused feature play important roles in classifying breast cancer. The examined view is the radiologist's main focus and is kept unchanged. The auxiliary view, on the other hand, is compared with the examined view to provide additional perspectives.
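As a rough illustration (not the authors' implementation), the IVF Block's two aggregation modes can be sketched in numpy; a real implementation would use `Conv2d`, `BatchNorm2d`, and `ReLU` layers, and the weights and sizes below are placeholders:

```python
import numpy as np

def ivf_block(f_exam, f_aux, w, mode="concat", eps=1e-5):
    # f_exam, f_aux: (C, H, W) low-level features of the examined and auxiliary views
    # w: 1x1-conv weight, shape (C, C) for "average" or (C, 2C) for "concat"
    if mode == "average":
        fused = 0.5 * (f_exam + f_aux)                  # element-wise average (Fig. 2, 2.1)
    else:
        fused = np.concatenate([f_exam, f_aux], axis=0) # depth stack (Fig. 2, 2.2)
    x = np.einsum("oc,chw->ohw", w, fused)              # 1x1 convolution back to C channels
    mu = x.mean(axis=(1, 2), keepdims=True)
    sd = x.std(axis=(1, 2), keepdims=True)
    x = np.maximum((x - mu) / (sd + eps), 0.0)          # batch-norm-like normalization + ReLU
    return f_exam + x                                   # element-wise addition enhances the examined view

C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
fe, fa = rng.normal(size=(C, H, W)), rng.normal(size=(C, H, W))
out_avg = ivf_block(fe, fa, rng.normal(size=(C, C)), mode="average")
out_cat = ivf_block(fe, fa, rng.normal(size=(C, 2 * C)), mode="concat")
```

The `concat` path reduces the doubled depth back to `C` channels with a 1x1 convolution, mirroring the description of the concatenate aggregation above.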
### Malignant Lesions Synthesis Framework Building on the previous successful use of domain adaptation in mammogram classification [27] and mammogram detection tasks [21], we propose a novel framework for synthesizing natural-looking malignant findings. The framework includes three stages: 1. We first propose a way to select the important region from the benign breast by obtaining a saliency map from warm-up pre-trained supervised learning. Then, the region with a high intensity score is replaced by a malignant region annotated by radiologists. 2. Secondly, to solve the domain shift issue brought on by breast density or device differences, we conduct style transfer from the source region to the target region based on Fourier Domain Adaptation [29]. 3. Finally, to make the malignant lesions blend naturally with the destination region, we propose a soft contour mask and its inverse to combine the source and target regions before pasting into a region of the benign sample. Figure 3: The proposed Soft-Adapted Malignancy Synthesis Framework consists of three phases: 1) Region Selection, extracting the important region (left); 2) Domain Adaptation, adapting the target style to the source image (middle); 3) Soft Contour Transformation, smoothing the target region with an inverse soft mask and the transformed source region with a soft mask (right). In the region selection part, supervised warm-up training is conducted before obtaining a saliency map. Grad-CAM [25] uses the gradient information flowing into the last convolutional layer of the CNN to assign importance values to each neuron for a particular decision of interest.
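Once the activations \(A^{k}\) and gradients \(\partial y^{c}/\partial A^{k}\) have been extracted from the network, the Grad-CAM map of Eqs. (1)-(2) below reduces to a few lines; a numpy sketch (the constant test inputs are illustrative assumptions):

```python
import numpy as np

def grad_cam(activations, gradients):
    # activations: A^k, shape (K, H, W); gradients: dy^c/dA^k, same shape
    alpha = gradients.mean(axis=(1, 2))             # Eq. (1): global average pooling over i, j
    cam = np.tensordot(alpha, activations, axes=1)  # weighted sum over channels k
    return np.maximum(cam, 0.0)                     # Eq. (2): ReLU keeps positive evidence only

K, H, W = 3, 5, 5
A = np.ones((K, H, W))
G = np.stack([np.full((H, W), g) for g in (1.0, -2.0, 0.5)])
L_gc = grad_cam(A, G)  # alpha = (1, -2, 0.5); weighted sum = -0.5, clipped to 0 by the ReLU
```

With these constant inputs the channel weights sum to \(1-2+0.5=-0.5\) per pixel, so the ReLU clips the whole map to zero, illustrating that only positively contributing channels survive.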
Mathematically, given a class \(c\), the Grad-CAM saliency map \(L_{GC}^{c}\in R^{H\times W}\) of height \(H\) and width \(W\) is obtained by computing the gradient of the class score \(y^{c}\) with respect to the feature-map activations \(A^{k}\) of the convolutional layer (denoted by \(\frac{\partial y^{c}}{\partial A^{k}}\)). To obtain the neuron importance weights \(a_{k}^{c}\), these gradients are global-average-pooled over the width and height dimensions (indexed by i and j, respectively): \[a_{k}^{c}=\frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^{c}}{\partial A_{ij}^{k}}. \tag{1}\] To obtain \(L_{GC}^{c}\), we perform a weighted combination of forward activation maps followed by a ReLU: \[L_{GC}^{c}=ReLU\left(\sum_{k}a_{k}^{c}A^{k}\right). \tag{2}\] Based on radiologists' annotations of malignant abnormalities, with width \(W_{r}\) and height \(H_{r}\), coordinates \((i,j)\) denote the top-left corner of a candidate region. We accumulate the saliency of the region starting at coordinates \(i,j\) with shape \(H_{r},W_{r}\). The region value of the class-discriminative localization map from Grad-CAM, called \(I_{region}\left(L_{GC},i,j,H_{r},W_{r}\right)\), is defined as: \[I_{region}\left(L_{GC},i,j,H_{r},W_{r}\right)=\sum_{m=i}^{H_{r}+i-1}\sum_{n=j} ^{W_{r}+j-1}L_{GC}\left(m,n\right). \tag{3}\] To select the desired region for class c as a pasting destination, we compute the values of all candidate regions and find the most class-discriminative patch: \[I_{region}^{*}=I_{region}\left(L_{GC},i^{*},j^{*},H_{r},W_{r}\right), \tag{4}\] where \(i^{*},j^{*}\) are computed by: \[i^{*},j^{*}=\operatorname*{arg\,max}_{i,j}I_{region}\left(L_{GC},i,j,H_{r},W_{ r}\right). \tag{5}\] Using \(i^{*},j^{*}\) with \(H_{r},W_{r}\), we obtain the patch containing the benign information for mixing. The detailed pseudo-code is described in Algorithm 1 below. ``` Input: \(H,W,H_{r},W_{r}\). 
for \(i=1\) to \(H-H_{r}+1\) do
  for \(j=1\) to \(W-W_{r}+1\) do
    Calculate \(I_{region}\) using Eq. 3 {Compute the accumulated intensity of the region saliency map.}
    if \(I_{region}^{*}<I_{region}\) then
      \(I_{region}^{*}\gets I_{region}\) {Update the largest region intensity.}
      \((i^{*},j^{*})\leftarrow(i,j)\) {Update the coordinates of the largest region intensity.}
    end if
  end for
end for
Return: \((i^{*},j^{*})\) and \(I_{region}^{*}\).
```
**Algorithm 1** High class-discriminative Region Selection

Next, in the domain adaptation stage, the domain-shift problem between the two patches, which introduces different brightness fields and device characteristics, could add noise to model training.

Figure 4: The synthesis mammogram image with various algorithms. a) Soft Mask: following the Gaussian distribution (bottom) to generate the blending masks (top), b) Reference image containing a malignant mass, c) Target image to which the malignant mass will be added, d) hard-region synthesis with the CutMix algorithm, e) smoother region synthesis with the CutMix and Domain Adaptation algorithms, f) our proposed Soft-Adapted Malignancy Synthesis image.

Inspired by FDA [29], the proposed framework conducts spectral transfer, mapping a benign sample to a malignant sample without changing semantic content. Given that \(F^{A}\), \(F^{P}:R^{H\times W\times 1}\to R^{H\times W\times 1}\) are the amplitude and phase components of the Fourier transform \(F\) of a mammogram patch, we have: \[F(x)(m,n)=\sum_{h,w}x(h,w)e^{-k2\pi\left(\frac{h}{H}m+\frac{w}{W}n\right)}, \tag{6}\] where \(k^{2}=-1\). The mask \(M_{\beta}\), with \(\beta\in(0,1)\), is zero everywhere except for a center region: \[M_{\beta}(h,w)=\mathbb{S}_{(h,w)\in[-\beta H:\beta H,-\beta W:\beta W]}, \tag{7}\] where \(\mathbb{S}\) indicates an all-ones matrix. As shown in Fig. 
3, the benign patch and malignant patch are \(x^{s}\sim D^{s}\) and \(x^{t}\sim D^{t}\), respectively, and the FDA algorithm is: \[x^{s\to t}=F^{-1}\left(M_{\beta}\circ F^{A}(x^{t})+(1-M_{\beta})\circ F^{A} (x^{s}),F^{P}(x^{s})\right), \tag{8}\] where \(F^{-1}\) is the inverse Fourier transform mapping spectral information back to 2D-image space. The center (low-frequency) part of the amplitude of the source image \(F^{A}(x^{s})\) is replaced by the corresponding part in the style of the target \(x^{t}\). This operation modifies only the amplitude component, without altering the phase component \(F^{P}\). Both components of the Fourier transform are then inverse-transformed into a new image \(x^{s\to t}\), which retains the content of the source image \(x^{s}\) but takes on the style of the target image \(x^{t}\). Finally, the original malignant patch and the domain-adapted benign patch are blended before being pasted back onto the benign sample. We propose a novel soft mask and its inverse for mixing the two patches. For any image of height \(H\) and width \(W\), a soft mask \(S\) is defined as \(S\in[0,1]^{H\times W}\), and its inverse soft mask is \((1-S)\in[0,1]^{H\times W}\). The output image mixing the two images \(x^{s},x^{t}\) is formulated as: \[\overline{x}=\left(S\otimes x^{t}\right)\oplus\left((1-S)\otimes x^{s}\right), \tag{9}\] where \(x^{s},x^{t}\) are the source image (benign patch) and target image (malignant patch), respectively. The label of image \(\overline{x}\) is the label of the target image. The blending masks are generated following the Gaussian distribution. The gradient radial soft mask is the outer product of two one-dimensional Gaussian distributions: \[S_{W}=e^{-\frac{(x-\mu_{W})^{2}}{2\sigma^{2}}},\ S_{H}=e^{-\frac{(x-\mu_{H})^ {2}}{2\sigma^{2}}}, \tag{10}\] where \(\mu_{W}\) and \(\mu_{H}\) are uniformly sampled from the input image's width and height ranges, respectively, and \(\sigma\) controls the spread of the mask in the image space. 
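A minimal NumPy sketch of the spectral transfer (Eq. 8) and the soft-mask blending (Eqs. 9–10) described above. The centered-window indexing and the handling of \(\beta\) are one plausible reading of the mask definition, and all function names are illustrative, not the authors' code.

```python
import numpy as np

def soft_mask(H, W, mu_h, mu_w, sigma):
    """Eq. 10: outer product of two 1-D Gaussians; values lie in [0, 1]."""
    sh = np.exp(-((np.arange(H) - mu_h) ** 2) / (2.0 * sigma ** 2))
    sw = np.exp(-((np.arange(W) - mu_w) ** 2) / (2.0 * sigma ** 2))
    return np.outer(sh, sw)

def blend(src, tgt, S):
    """Eq. 9: soft-masked mix of the target (malignant) patch onto the
    source (benign) patch."""
    return S * tgt + (1.0 - S) * src

def fda_transfer(src, tgt, beta=0.05):
    """Eq. 8: keep the source phase and swap in the target's low-frequency
    amplitude inside a centered window of half-width (beta*H, beta*W)."""
    Fs, Ft = np.fft.fft2(src), np.fft.fft2(tgt)
    amp = np.fft.fftshift(np.abs(Fs))    # shift so low frequencies are centered
    amp_t = np.fft.fftshift(np.abs(Ft))
    pha = np.angle(Fs)                   # source phase is kept unchanged
    H, W = src.shape
    h, w = int(beta * H), int(beta * W)
    ch, cw = H // 2, W // 2
    amp[ch - h:ch + h + 1, cw - w:cw + w + 1] = \
        amp_t[ch - h:ch + h + 1, cw - w:cw + w + 1]
    amp = np.fft.ifftshift(amp)
    return np.real(np.fft.ifft2(amp * np.exp(1j * pha)))
```

The blended patch `blend(fda_transfer(benign, malignant), malignant, S)` then carries the target label, matching the labeling rule stated above.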
A sample of the mask can be seen in Fig. 3 and 4a.

## 4 Experimental Settings

### Datasets

**CMMD.** The Chinese Mammography Database (CMMD) [2] includes 5,202 screening mammogram images from 1,775 studies. We trained on 1,172 non-malignant and 2,728 malignant screening mammograms, split 85%:15% into training and test sets. Furthermore, we employ stratified sampling, resulting in 498 benign and 1,157 malignant ipsilateral-view samples in the training set and 88 benign and 205 malignant ipsilateral-view samples in the test set. **VinDr-Mammo.** A large-scale full-field digital mammography dataset [20], which contains 20,000 scans from 5,000 studies of Vietnamese patients. Because of the unreliability of BI-RADS 3, the inconsistency between BI-RADS 4 and 5, and the heavy imbalance of BI-RADS 1, we arrange the image-level labels into two classes: Suspicious Benign (BI-RADS 2) and Suspicious Malignancy (BI-RADS 4 and 5). As with the preprocessing of CMMD, 2,831 ipsilateral-view samples (CC and MLO views of the same breast) were split into a training set (1,870 benign and 395 malignant cases) and a test set (467 benign and 99 malignant cases). In addition, for the malignant lesions synthesis framework, we use all region-level annotations to create new malignant samples.

### Implementation Details

ResNet-family architectures, namely ResNet-18 and ResNet-34, are used for the Feature Extractor part of the framework. During data loading, the images are loaded with a batch size of 32 (two ipsilateral views for each breast, for a total of 16 breasts). The model was trained for 200 epochs using the SGD optimizer [11] with an initial learning rate of \(1\times 10^{-3}\), decayed by a factor of \(0.1\) after \(20,40,60\), and \(80\) epochs. We resized images to 800 × 800 for both the training and testing phases. Our work was built on PyTorch 1.9.1 and trained on an NVIDIA RTX 3090 Ti GPU (24 GB). 
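For clarity, the step-decay schedule described above behaves as follows. `lr_at_epoch` is a hypothetical helper reproducing the stated milestones, not the authors' training code.

```python
def lr_at_epoch(epoch, base_lr=1e-3, milestones=(20, 40, 60, 80), gamma=0.1):
    """Step-decay schedule matching the training setup above: the learning
    rate is multiplied by `gamma` once per milestone epoch already passed.
    (Illustrative helper, not the authors' code.)"""
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** passed
```

So the learning rate is \(10^{-3}\) for epochs 0–19, \(10^{-4}\) for epochs 20–39, and so on down to \(10^{-7}\) from epoch 80 onward; in PyTorch this corresponds to a multi-step scheduler with the same milestones and decay factor.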
We use the Macro F1-Score, computed as the arithmetic (unweighted) mean of the per-class F1 scores, to mitigate the effect of class imbalance on evaluation. In addition, the Area Under the ROC Curve (AUC-ROC) is used to measure model performance under the slightly imbalanced training data.

## 5 Results and Ablation Studies

### Dual Ipsilateral Views Fusion Network

In this section, we test three main approaches. 1) No Fusion: a single view is fed into the backbone, with no combination of the CC and MLO views. 2) Average Fusion and Concatenate Fusion: there is no skip connection between the examined features and the fused features in the Features Fusion Blocks phase. 3) DIVF: contains all components described in Section 3.1. Table 1 shows the test results of our proposed methods on the VinDr-Mammo and CMMD datasets. Our DIVF framework shows a significant improvement over the conventional techniques, with a mean gain of around 5% on VinDr-Mammo and 7% on CMMD. For each method of combining features, DIVF demonstrates the effectiveness of the feature-fusion mechanism for classifying benign and malignant cases. On VinDr-Mammo, the DIVF framework with the concatenate method achieves the highest Macro F1-Score and AUC-ROC on both backbones, 75.98% and 74.86% respectively. In contrast to VinDr-Mammo, DIVF with average aggregation is more robust on CMMD. This strategy outperforms the normal-fusion and no-fusion approaches, achieving 81.45% on ResNet-18 and 82.44% on ResNet-34 in Macro F1-Score. Fig. 5 highlights the trade-off between the true positive rate (sensitivity) and the false positive rate (1-specificity) by plotting the ROC curve for malignant/benign categorization. DIVF (Average) achieved the best performance, with a sensitivity of 87.8% and a specificity of 70.45%, resulting in an AUC of 0.8416. 
The second-best performance is DIVF (Concatenate), which obtained 80.42%, 3.74% lower than the best. In contrast, Average Fusion and Concatenate Fusion, which achieved 79.13% and 77.10% respectively, do not outperform their DIVF counterparts. This improvement can be explained by the way we support the model by adding the examined features in the Features Fusion Blocks phase of the framework. After passing through the IVF block, fused features may lose detailed information, because the fusion operation tends to generalize the features of both views, diluting the examined features. Adding the examined features back therefore serves two purposes: it mitigates this information loss in the IVF block and diversifies the available information.

### Soft-Adapted Malignancy Synthesis Framework

Table 2 shows the ablation studies of our proposed synthesis framework on VinDr-Mammo malignant-sample synthesis. As shown in the table, we can see the effect of each element contributing to the final F1 score of our method. The whole framework, combining three mechanisms for creating new samples, achieves 77.32% on the F1-Score metric. The limitation of the original CutMix seems to be eliminated with Fourier Adaptation and the Soft Mask. 
\begin{table} \begin{tabular}{l l c c c c} \hline \hline \multicolumn{2}{l}{**Backbone**} & \multicolumn{2}{c}{**ResNet-18**} & \multicolumn{2}{c}{**ResNet-34**} \\ \hline Dataset & Method & F1-Score & AUC-ROC & F1-Score & AUC-ROC \\ \hline \multirow{6}{*}{VinDr-Mammo} & No Fusion & 70.12 & 68.79 & 71.48 & 70.22 \\ & Average Fusion & 72.54 & 74.20 & 73.25 & 72.88 \\ & Concatenate Fusion & 73.22 & 70.66 & 74.63 & 72.18 \\ & DIVF(Average) & 74.00 & 72.15 & 74.17 & 71.67 \\ & DIVF(Concatenate) & **75.34** & **74.24** & **75.98** & **74.86** \\ \hline \multirow{6}{*}{CMMD} & No Fusion & 73.26 & 76.70 & 75.52 & 77.18 \\ & Average Fusion & 79.22 & 79.13 & 79.97 & 81.80 \\ & Concatenate Fusion & 75.86 & 77.10 & 78.12 & 77.67 \\ & DIVF(Average) & **81.45** & **84.14** & **82.44** & 80.92 \\ & DIVF(Concatenate) & 77.77 & 80.42 & 79.51 & **81.97** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative results (%) among our proposed DIVF frameworks, normal fusion frameworks, and the no-fusion approach.

Figure 5: AUC-ROC for benign/malignant classification on the CMMD dataset. Testing performance of the average fusion, concatenate fusion, EA average fusion, and EA concatenate fusion.

The new samples no longer contain unrealistic-looking malignant tumors with mismatched styles and hard contours from copy-and-paste patches. The detailed outputs of each part of our framework are visualized in Fig. 4b-f. Fig. 4d shows the synthesis image using the CutMix algorithm. In the replaced region, the cancer mass appears stylistically incompatible with the source image. This can cause unwanted edge responses in the CNN's sliding filters, leading to outlier features and poor representations. This strategy achieved a slight performance improvement (+0.56%) compared with the baseline model, ResNet-34 DIVF Concatenate in Table 1. Furthermore, the results increase slightly again (+0.42%) when the Fourier Domain Adaptation method is applied. Fig. 
4e shows the improvement, with a smooth style in the replaced region. However, the abrupt change in pixel values at the top-left corner of the transformed region keeps the synthesized image from looking fully natural. Our proposed Soft-Adapted Malignancy Synthesis Framework alleviates these problems by smoothly adapting the target style to the source image. Fig. 4f and Fig. 6c show natural-looking, yet trustworthy, mammography screenings; this configuration achieved 77.32% on Macro F1-Score. The upsampled data shown in Fig. 6c, created from Fig. 6a,b, provide reliable training samples that help handle the imbalance common to mammogram datasets. The framework has shown its robustness on many different types of lesions, including mass, calcification, and asymmetry.

## 6 Conclusion

In this work, we proposed the DIVF framework to leverage ipsilateral multi-view information for classifying cancerous mammograms. The model learns low-level features separately from the two ipsilateral views and performs feature aggregation for fusion learning on the high-level features; the IVF block enhances the examined view, resulting in improved classification. Additionally, our natural-looking malignant-lesion synthesis framework generates reliable samples, leading to state-of-the-art performance and generalizability across two datasets. Our research shows promise for enhancing breast cancer diagnosis and treatment. Future work aims to extend our research to lesion detection and density classification tasks and to conduct further statistical analyses for deeper insights. 
\begin{table} \begin{tabular}{l|c c c c|c} \hline \hline & DIVF & Region Selection \& CutMix & Fourier Adaptation & Soft Mask & Macro F1-Score \\ \hline \hline Baseline & ✓ & & & & 75.98 \\ \hline \multirow{4}{*}{Proposed Methods} & ✓ & ✓ & & & 76.54 \\ \cline{2-6} & ✓ & ✓ & ✓ & & 76.96 \\ \cline{2-6} & ✓ & ✓ & & ✓ & 76.78 \\ \cline{2-6} & ✓ & ✓ & ✓ & ✓ & **77.32** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation studies of our proposed Soft-Adapted Malignancy Synthesis Framework on DIVF Concatenate with ResNet-34 on the VinDr-Mammo dataset.

Figure 6: Our framework results on CC (top) and MLO (bottom) views. a) Reference image, b) Target image, c) Synthesis image.

## 7 Acknowledgement

This paper is partially supported by AI VIETNAM. We thank the Integrated MechanoBioSystems Lab (IMBSL) of the Biomedical Engineering Department of National Cheng Kung University for providing the GPU to support the numerical calculations in this paper.
2309.03260
Imposters among us: globular cluster kinematics and the halo mass of ultra-diffuse galaxies in clusters
The velocity dispersion of globular clusters (GCs) around ultra-diffuse galaxies (UDGs) in the Virgo cluster spans a wide range, including cases where GC kinematics suggest halos as massive as (or even more massive than) that of the Milky Way around these faint dwarfs. We analyze the catalogs of GCs derived in post-processing from the TNG50 cosmological simulation to study the GC system kinematics and abundance of simulated UDGs in galaxy groups and clusters. UDGs in this simulation reside exclusively in dwarf-mass halos with $M_{200} \sim 10^{11}$ M$_{\odot}$. When considering only GCs gravitationally bound to simulated UDGs, we find GC properties that overlap well with several observational measurements for UDGs. In particular, no bias towards overly massive halos is inferred from the study of bound GCs, confirming that GCs are good tracers of UDG halo mass. However, we find that contamination by intra-cluster GCs may, in some cases, substantially increase velocity dispersion estimates when performing projected mock observations of our sample. We caution that targets with fewer than $10$ GC tracers are particularly prone to severe uncertainties. Measuring the stellar kinematics of the host galaxy should help confirm the unusually massive halos suggested by GC kinematics around some UDGs.
Jessica E. Doppel, Laura V. Sales, José A. Benavides, Elisa Toloba, Eric W. Peng, Dylan Nelson, Julio F. Navarro
2023-09-06T18:00:00Z
http://arxiv.org/abs/2309.03260v1
Imposters among us: globular cluster kinematics and the halo mass of ultra-diffuse galaxies in clusters

###### Abstract

The velocity dispersion of globular clusters (GCs) around ultra-diffuse galaxies (UDGs) in the Virgo cluster spans a wide range, including cases where GC kinematics suggest halos as massive as (or even more massive than) that of the Milky Way around these faint dwarfs. We analyze the catalogs of GCs derived in post-processing from the TNG50 cosmological simulation to study the GC system kinematics and abundance of simulated UDGs in galaxy groups and clusters. UDGs in this simulation reside exclusively in dwarf-mass halos with \(M_{200}\sim 10^{11}\) M\({}_{\odot}\). When considering only GCs gravitationally bound to simulated UDGs, we find GC properties that overlap well with several observational measurements for UDGs. In particular, no bias towards overly massive halos is inferred from the study of bound GCs, confirming that GCs are good tracers of UDG halo mass. However, we find that contamination by intra-cluster GCs may, in some cases, substantially increase velocity dispersion estimates when performing projected mock observations of our sample. We caution that targets with fewer than 10 GC tracers are particularly prone to severe uncertainties. Measuring the stellar kinematics of the host galaxy should help confirm the unusually massive halos suggested by GC kinematics around some UDGs.

keywords: galaxies: dwarf - galaxies: halos - galaxies: clusters: intracluster medium - galaxies: star clusters

## 1 Introduction

Ultra-diffuse galaxies (UDGs), galaxies of extremely low surface brightness for their stellar mass, are enigmatic systems whose origin remains unclear. While the presence of such objects has been known for several decades (see e.g. 
Reaves, 1983; Binggeli et al., 1985; Impey et al., 1988; Bothun et al., 1991; Dalcanton et al., 1997), they have only recently entered the realm of systematic study with the observation of many UDGs in the Coma cluster (see Abraham & van Dokkum, 2014; van Dokkum et al., 2015, 2016). UDGs were thought to reside primarily in the environments of galaxy clusters (see van Dokkum et al., 2015, 2016; Koda et al., 2015; Mihos et al., 2015; Peng & Lim, 2016; Yagi et al., 2016; Gannon et al., 2022), but they have since been observed in a much wider range of environments (van der Burg et al., 2017; Lee et al., 2017, 2020; Marleau et al., 2021; La Marca et al., 2022; Venhola et al., 2022), including in the field (Martinez-Delgado et al., 2016; Roman & Trujillo, 2017; Leisman et al., 2017; Martin-Navarro et al., 2019; Rong et al., 2020). While many are observed to be devoid of gas (Martinez-Delgado et al., 2016; Papastergis et al., 2017; Roman et al., 2019; Junais et al., 2021), more recent observations find gas-rich UDGs (e.g., Leisman et al., 2017; Mancera Pina et al., 2020; Jones et al., 2023). In addition to spanning a wide range of gas fraction and environments, UDGs also broadly span nucleation fraction (Lim et al., 2020). Given the apparent diversity of UDGs, it has proven particularly difficult to pinpoint a unique formation path that may explain their origin. Several theoretical and numerical studies have pointed to differences between the dark matter halos that host UDGs and normal dwarfs, suggesting the possibility that UDGs may reside in dark matter halos with higher-than-average spin (Amorisco & Loeb, 2016; Rong et al., 2017; Mancera Pina et al., 2020; Kong et al., 2022; Benavides et al., 2023). Other studies present more baryon-focused formation scenarios. 
Star formation and feedback processes associated with starburst-driven outflows have the potential to leave the stellar component of galaxies rather extended (e.g., Di Cintio et al., 2017; Chan et al., 2018), although even galaxies passively forming stars have been shown to form UDGs (Tremmel et al., 2020). To add an additional complication in the search for UDG formation, environmental effects, such as tidal heating (Carleton et al., 2019) and tidal stripping (Maccio et al., 2021; Doppel et al., 2021; Moreno et al., 2022), have also been argued to give rise to UDG-like galaxies. Moreover, combinations of the aforementioned scenarios are also possible (Jiang et al., 2019; Sales et al., 2020), thus an obvious UDG-formation route has yet to emerge. Constraining the dark matter content of UDGs provides an additional dimension to understanding the origin of UDGs. For example, UDGs with little to no dark matter could suggest tidal stripping (or other processes that preferentially remove dark matter) as a main driver (see e.g., van Dokkum et al., 2018, 2019, 2022; Trujillo-Gomez et al., 2022). At the other extreme, UDGs that inhabit overly-massive halos for their stellar mass could indicate that UDGs may originate as systems originally destined to become large, massive galaxies but where star formation was truncated early on (see e.g., Forbes et al., 2020; van Dokkum et al., 2017, 2015; Toloba et al., 2023). Between these two extremes, UDGs that reside in dark matter halos on par with those of other galaxies of similar stellar mass could suggest that UDGs are simply the tail of the surface brightness distribution of normal galaxies, and thus lack a distinct origin (e.g. Toloba et al., 2018; Lee et al., 2017, 2020; Saifollahi et al., 2021; Toloba et al., 2023). 
Illuminating the dark matter content of UDGs is, therefore, a necessary component for pinpointing the potential spectrum of formation scenarios through which UDGs may arise, and helps to solidify their place in our understanding of dwarf galaxies. Unfortunately, the dark matter content reported thus far for UDGs is as varied as their potential formation scenarios. Observations of luminous, kinematical tracers such as stars (e.g., DF44 (van Dokkum et al., 2017) and DF4 (Danieli et al., 2019) among others), globular clusters (GCs) (see e.g. van Dokkum et al., 2018; Toloba et al., 2018; van Dokkum et al., 2019), and gas (Mancera Pina et al., 2020) suggest that the dark matter halos of UDGs span the entire range from lacking dark matter (such as DF2 and DF4 in NGC1052) to residing in halos with masses far exceeding those expected for their stellar masses (Beasley et al., 2016; Janssens et al., 2022; Gannon et al., 2023; Toloba et al., 2023), with others between these extremes (see e.g. Lee et al., 2017; Toloba et al., 2018; Lee et al., 2020; Saifollahi et al., 2021; Toloba et al., 2023). For UDGs for which kinematical tracers, such as stars and gas, are unavailable, globular clusters (GCs) offer an alternative measure of their halo masses due to their relative ease of observation over large distances and their rather extended spatial distributions. The numerous GCs often associated to UDGs have been interpreted to indicate that they reside in over-massive dark matter halos (van Dokkum et al., 2015; Peng & Lim, 2016; van Dokkum et al., 2017; Lim et al., 2018, 2020; Danieli et al., 2022; Janssens et al., 2022) if the power-law relation between GC mass and halo mass (see e.g. Peng et al., 2008; Harris et al., 2015) holds for UDGs. 
However, recent observations from the Coma cluster suggest that, by GC counts, there appear to be two types of UDGs: those that reside in apparently over-massive halos for their stellar mass, and those that appear to reside in halos more typical in mass for dwarf galaxies (Lim et al., 2018; Muller et al., 2021; Forbes et al., 2020; Jones et al., 2023). A further characterization, as well as the theoretical context, of the observations of the GC systems of UDGs will help to disentangle the dark matter component of UDGs. With the high resolution of the TNG50 simulation of the IllustrisTNG suite, it is possible to morphologically define a set of simulated UDGs with structural parameters similar to observed UDGs (Benavides et al., 2023). Coupled with the recently added catalog of GCs in the simulation (Doppel et al., 2023), we can investigate UDGs in conjunction with their GC systems across a variety of environments, ranging from those comparable with massive elliptical systems to those comparable with the mass of the Fornax and Virgo clusters. We can thus make a realistic comparison with the observations of the GC systems of UDGs in these types of environments to provide possible interpretations for these observations and their implications for the dark matter content of UDGs. In Section 2, we briefly discuss the details of TNG50 as well as the tagging model used to produce its GC catalog. In Section 3, we discuss how the modeled GC abundances and kinematics compare to observations, as well as what, if any, effect environment has on UDGs and their GC systems. In Section 4, we compare mock observations of the GCs and UDGs in TNG50 to observed UDGs, and we use those mock observations to understand the inferred dark matter content of UDGs, both in the presence of contamination in their assigned GC systems and under other complicating factors. Finally, in Section 5, we provide a short discussion and summary of our results. 
## 2 Methods

### Simulation

For this study, we use the highest resolution run of the cosmological hydrodynamical simulation TNG50 (Pillepich et al., 2019; Nelson et al., 2019), which is part of the larger IllustrisTNG project (Naiman et al., 2018; Pillepich et al., 2018; Nelson et al., 2018; Springel et al., 2018; Marinacci et al., 2018; Nelson et al., 2019). TNG50 features a box size of \(51.7\) Mpc on each side with \(2160^{3}\) gas cells and dark matter particles evolved assuming a flat, \(\Lambda\)CDM cosmology consistent with parameters from Planck Collaboration et al. (2016). This configuration results in a mass resolution of, on average, \(8.4\times 10^{4}\)\(M_{\odot}\) for the baryonic component and \(5.4\times 10^{5}\)\(M_{\odot}\) for dark matter particles. The gravitational softening length is \(288\) pc at \(z=0\) for collisionless components. The baryonic treatment in TNG50 is introduced in detail in Weinberger et al. (2017) and Pillepich et al. (2018). Briefly, it includes star formation in the dense interstellar medium (ISM); stellar evolution, with chemical enrichment from stars and supernovae; primordial and metal-line cooling, as well as heating of the gas by background radiation; the seeding and growth of supermassive black holes, with AGN feedback in low- and high-accretion states; galactic winds; and magnetic fields (Weinberger et al., 2017; Pillepich et al., 2018).

#### 2.1.1 Sample Selection

Halos and subhalos within the TNG50 simulation are identified using the Friends-of-Friends (FOF, Davis et al., 1985) and SubFind (Springel et al., 2001; Dolag et al., 2009) algorithms, respectively. Using these catalogs, we select 39 halos with virial masses between \(M_{200}=[5\times 10^{12},2\times 10^{14}]\) M\({}_{\odot}\) (where "virial" in this study refers to quantities defined within a sphere whose mean enclosed density is 200 times the critical density of the universe). 
The mass resolution of TNG50 allows us to resolve galaxies with a stellar component of \(M_{*}\sim 5\times 10^{6}\) M\({}_{\odot}\), which therefore contain at least 60 stellar particles. A stricter resolution threshold is adopted for this study: we consider only UDGs in the stellar mass range \(M_{*}=[10^{7.5},10^{9}]\) M\({}_{\odot}\), which are resolved with a minimum of \(\sim 375\) stellar particles, that also reside in galaxy groups and clusters. The evolution of these objects is followed using the SubLink merger trees (Rodriguez-Gomez et al., 2015).

### GC Catalog

We use the GC catalog presented in Doppel et al. (2023), which has been added in post-processing to the 39 most massive galaxy groups and clusters in TNG50, spanning a virial mass range \(M_{200}=[5\times 10^{12},2\times 10^{14}]\) M\({}_{\odot}\). GCs are tagged to all galaxies in the selected groups and clusters provided their maximum stellar mass throughout their history is at least \(5\times 10^{6}\) M\({}_{\odot}\) and they contain a minimum of 100 dark matter particles (the latter condition is required to avoid spurious baryonic clumps). All galaxies are tagged at their infall time, here defined as the last time the galaxy is its own central. On average, this corresponds to the time at which a galaxy crosses the virial radius of its present-day host halo, but it might be an earlier time if the galaxy joins a smaller halo or group before joining the final host system. GC candidate particles are selected from the dark matter particles associated to the host galaxy at infall time. Following Lokas & Mamon (2001), we fit an NFW profile (Navarro et al., 1996): \[\rho_{\rm NFW}(r)=\frac{\rho_{\rm NFW}^{0}}{(r/r_{\rm NFW})(1+r/r_{\rm NFW}) ^{2}} \tag{1}\] to the dark matter component of the galaxy. The scale radius is \(r_{NFW}=r_{max}/\alpha\), where \(r_{max}\) is the radius of maximum circular velocity and \(\alpha=2.1623\) (Navarro et al., 1997). 
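For reference, Eq. 1 and the quoted scale-radius relation can be written directly in code. This is a minimal NumPy sketch with illustrative names; the Hernquist profile assumed below for the tagged GCs is included for comparison.

```python
import numpy as np

ALPHA = 2.1623  # r_max / r_s for an NFW halo (Navarro et al. 1997)

def rho_nfw(r, rho0, r_s):
    """Eq. 1: NFW density profile."""
    x = np.asarray(r, dtype=float) / r_s
    return rho0 / (x * (1.0 + x) ** 2)

def rho_hernquist(r, rho0, r_s):
    """Hernquist profile, as assumed below for the tagged GC populations."""
    x = np.asarray(r, dtype=float) / r_s
    return rho0 / (x * (1.0 + x) ** 3)

def nfw_scale_radius(r_max):
    """Scale radius from the radius of maximum circular velocity."""
    return r_max / ALPHA
```

Note that the Hernquist profile falls off one power of radius faster at large \(r\), which is what makes it a convenient, truncatable choice for the GC spatial distribution.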
The GCs are assumed to follow a Hernquist (1990) profile: \[\rho_{\rm HQ}(r)=\frac{\rho_{\rm HQ}^{0}}{(r/r_{\rm HQ})(1+r/r_{\rm HQ})^{3}} \tag{2}\] which allows us to control the normalization and radial extension of the tagged GCs. We assign two populations of GCs: a red, metal-rich component of GCs that formed in-situ, and blue GCs, representative of older, more metal-poor GCs that were accreted into the galaxies. The red GCs are chosen to be more spatially concentrated than the blue GCs, with scale radii \(r_{HQ}=0.5r_{NFW}\) and \(3.0r_{NFW}\) for red and blue GCs respectively; \(\rho_{\rm HQ}^{0}\) is chosen to maximize the number of GC candidates. The GC candidates are then selected in relative energy using the distribution function (Binney & Tremaine, 2008): \[f_{i}(\epsilon)=\frac{1}{\sqrt{8}\pi^{2}}\left[\int_{0}^{\epsilon}\frac{{\rm d}^{2}\rho_{i }}{{\rm d}\psi^{2}}\frac{{\rm d}\psi}{\sqrt{\epsilon-\psi}}+\frac{1}{\sqrt{\epsilon }}\left.\left(\frac{{\rm d}\rho_{i}}{{\rm d}\psi}\right)\right|_{\psi=0}\right]\!, \tag{3}\] where \(\rho_{i}\) is the density profile of component i = (dark matter, red GCs, blue GCs), \(\psi\) is the relative gravitational potential, and \(\epsilon\) is the relative energy. In equally spaced bins of relative energy, a fraction \(f_{{\rm HQ},i}/f_{\rm NFW}\), where \(i\) = red or blue GCs, of dark matter particles is selected. Inspired by constraints inferred for the Milky Way (Yahagi & Bekki, 2005), a cutoff radius of \(r_{h}/3\), where \(r_{h}\) is the total half-mass radius of the halo in question, is imposed on the GC candidate particles. The selected GC candidate particles are assigned masses at infall such that by \(z=0\) those that still remain gravitationally associated to their host follow the \(M_{GC}-M_{halo}\) relation from Harris et al. (2015). To make this calibration, we assume that a power-law relation similar to the \(M_{GC}-M_{halo}\) relation exists at infall such that: \[M_{\rm GC,inf}=\frac{1}{f_{\rm bound}}M_{\rm GC,z=0}=a_{\rm inf}M_{\rm halo, inf}^{b_{\rm inf}}. 
\tag{4}\] where \(f_{\rm bound}\) is the fraction of GCs that are still gravitationally bound to their host galaxy at \(z=0\). We find, for red and blue GCs respectively, \(a_{\rm inf}=2.6\times 10^{-7}\) and \(7.3\times 10^{-5}\), and \(b_{\rm inf}=1.14\) and \(0.98\). Since the GC candidates are a much larger set of particles than the observed number of GCs, we subsample a realistic number of GCs from the candidates. This realistic population of GCs follows a Gaussian luminosity function using constraints from Jordan et al. (2007). Individual GC masses are obtained assuming a mass-to-light ratio of 1. GCs are randomly selected from the luminosity function until the total mass of GCs is within \(7\times 10^{3}\) M\({}_{\odot}\) (the assumed minimum mass of one GC) of the total calibrated infall mass. The realistic subsample of GCs is followed to \(z=0\) and constitutes the GCs we consider in this work. Doppel et al. (2023) show that this method reproduces the available observational constraints in number, specific frequency, and GC occupation fraction over a wide range of masses, including dwarfs. In this paper we focus on the specific predictions of this GC catalog for the particular case of UDGs in galaxy groups and clusters. By design, our GC tagging method is able to capture the range in GC numbers and kinematics that is expected due solely to variations in the dark matter halos of UDGs at infall, making it an excellent tool to guide the interpretation of current observations. ### Sample of UDGs in groups and clusters The UDGs considered for this work are satellites of our selected galaxy groups and clusters and were first introduced in Benavides et al. (2023). Simulated UDGs are selected to be in the stellar mass range \(M_{\rm*}=[10^{7.5},10^{9}]\) M\({}_{\odot}\), to ensure that there are sufficient stellar particles to resolve the structure of the galaxy. Inspired by the UDG classification process presented by Lim et al. (2020), wherein UDGs are selected to be \(2.5\sigma\) outliers in scaling relations between luminosity and surface brightness, mean effective surface brightness, and effective radius, UDGs are identified as the 5% most extended outliers in the \(M_{\rm*}\)-size relation. These UDGs are shown in Fig. 1, which plots the stellar half-mass radius, \(r_{h,*}\), against stellar mass, \(M_{\rm*}\). These criteria result in UDGs that are roughly consistent with the sizes of UDGs in the Virgo (purple triangles, Toloba et al., 2023), Coma (purple squares, Amorisco et al., 2018), and Perseus (purple diamonds, Gannon et al., 2022) clusters, with UDGs in low-density environments (Romina et al., 2019; Martin-Navarro et al., 2019; Rong et al., 2020), as well as with other commonly assumed cut-offs used to identify UDGs in observations (\(R_{\rm e}\geq 1.5\) kpc and \(\mu\gtrsim 24.5\) mag/arcsec\({}^{2}\) measured within the effective radius of stars; e.g. van Dokkum et al., 2015).

Figure 1: Stellar size (\(r_{h,*}\)) as a function of stellar mass (\(M_{*}\)) in TNG50 for all dwarf galaxies (gray dots), and for the UDG sample (unfilled orange circles). We show the same for UDGs in the Coma cluster (purple squares, Amorisco et al., 2018), the Virgo cluster (purple triangles, Toloba et al., 2023), and the Perseus cluster (purple diamonds, Gannon et al., 2022). The size of observed UDGs has been multiplied by \(4/3\) (e.g. Hernquist, 1990; Wolf et al., 2010; Somerville et al., 2018) to transform it into a 3D measurement (Sec. 4). Highlighted in pink is the size and mass of the example TNG50 UDG shown in projection in the inset panel, colored by stellar number density and overplotted with its 2D effective radius, \(R_{e}\) (dotted pink circle), and GC system (lime green dots). We can see that, where data are available, there is good agreement between the sizes of the observed satellite UDGs in galaxy clusters and the sample of satellite UDGs in TNG50.
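The "5% most extended outliers at fixed stellar mass" selection can be sketched as below. This is a simplified stand-in for the procedure described in the text: the number of mass bins, the minimum bin occupancy, and the mock catalog are all assumptions for illustration, not the paper's actual choices.

```python
import numpy as np


def select_udgs(m_star, r_half, n_bins=20, pct=95.0):
    """Flag roughly the 5% most extended galaxies at fixed stellar mass.

    In bins of log10(M*), galaxies above the `pct` percentile of the
    stellar half-mass radius are marked as UDG candidates.
    """
    log_m = np.log10(m_star)
    bins = np.linspace(log_m.min(), log_m.max(), n_bins + 1)
    idx = np.clip(np.digitize(log_m, bins) - 1, 0, n_bins - 1)
    is_udg = np.zeros(len(m_star), dtype=bool)
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.sum() < 20:  # skip sparsely populated bins
            continue
        cut = np.percentile(r_half[in_bin], pct)
        is_udg[in_bin] = r_half[in_bin] > cut
    return is_udg


# Illustrative mock catalog (not TNG50 data)
rng = np.random.default_rng(0)
m_star = 10 ** rng.uniform(7.5, 9.0, 5000)
r_half = rng.lognormal(0.5, 0.3, 5000)
udg_mask = select_udgs(m_star, r_half)
```

Binning in mass before taking the percentile is what makes the cut a statement about outliers *at fixed stellar mass* rather than about absolute size.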
As discussed in detail in Benavides et al. (2023), the formation mechanism of UDGs in TNG50 suggests that they inhabit mainly high-spin dark matter halos, although a sub-dominant fraction (\(\sim 10\%\)) of satellite UDGs owe their extended sizes to tidal effects within their groups or clusters. Most importantly, all simulated UDGs in TNG50 formed within dark matter halos in the range \(M_{200}\sim[10^{9.3}-10^{11.2}]\) M\({}_{\odot}\) that are in agreement with expectations from their stellar content. In addition, satellite UDGs are found to be red and quiescent while field UDGs are gas-rich and star-forming, in good agreement with observational results (e.g. van der Burg et al., 2016; Lee et al., 2020; Ferre-Mateu et al., 2018; Leisman et al., 2017; Mancera Pina et al., 2020; Jones et al., 2023). Note that our simulations also predict a fraction of quiescent UDGs in the field as a result of backsplash orbits (Benavides et al., 2021); these are not included in our sample because, by definition, they do not reside today within group or cluster halos. Satellite UDGs have typically undergone substantial tidal stripping of their dark matter halos (median mass loss 80%) but only moderate tidal stripping of their stellar component (10% mass loss from their peak stellar mass). A total of 195 UDGs are found associated to our simulated groups in TNG50 and are the core sample of the analysis in this paper. In addition, these groups and clusters host 2195 non-UDG dwarfs in the same mass range as our UDGs, which are included where helpful for comparison. _This set of UDGs allows us the first opportunity to study the GC systems of UDGs that reside in realistic group and cluster environments._ ## 3 GC abundance and kinematics in UDGs We show in Fig. 2 the predicted GC number (\(N_{GC}\), left panel) and GC specific frequency (\(S_{N}\), right panel) for satellite dwarf galaxies in TNG50 compared to observational constraints.
Specific frequency is defined as the number of GCs per unit luminosity, normalized to a galaxy with V-band magnitude \(M_{V}=-15\), as follows (Harris & van den Bergh, 1981): \[S_{N}=N_{GC}10^{0.4(M_{V}+15)} \tag{5}\]

Figure 2: _Left_: Number of GCs (N\({}_{GC}\)) as a function of host galaxy stellar mass. All simulated TNG50 satellite dwarf galaxies are shown in translucent gray points, with UDGs highlighted by unfilled orange circles. Observations of GC numbers for normal dwarf galaxies are shown in purple, translucent shapes, and those for UDGs in purple, filled shapes. We can see that while there is a large amount of scatter in the predicted GC numbers for the UDGs of TNG50, the scatter is not as large as what is seen in observed UDGs, particularly those of the Coma Cluster (filled squares). Despite the wide scatter, simulated and observational data follow (on average) similar trends. _Right_: the specific frequency of GCs (\(S_{N}\)) as a function of host galaxy V-band absolute magnitude (\(M_{V}\)). Following Doppel et al. (2023), we have applied a correction to the V-band magnitude to account for discrepancies between TNG50 and observations for high-mass galaxies. As in the left panel, all TNG50 dwarfs are shown as gray points, UDGs are highlighted by orange circles, observations of \(S_{N}\) for normal dwarf galaxies are shown as translucent purple shapes, and observations of \(S_{N}\) for UDGs are shown as filled, purple shapes. While the simulated UDGs seem to follow well the \(S_{N}\) of observed normal dwarf galaxies and the bulk of observed UDGs, they are unable to reproduce the extreme \(S_{N}\) of many UDGs in the Coma cluster (filled purple diamonds). For both measures of GC abundance, there is significant overlap between what is predicted by TNG50 and what is observed for the bulk of UDGs; however, we do not predict the most extreme GC systems.

Overall, we find a good agreement between _all_ simulated dwarfs in groups and clusters in TNG50 (gray dots) and a compilation of observational data (purple symbols), including normal dwarfs (translucent purple shapes; Forbes et al., 2018; Peng et al., 2008; Prole et al., 2019; Lim et al., 2018) and UDGs (filled purple shapes; Gannon et al., 2022; Amorisco et al., 2018; van Dokkum et al., 2017; Saifollahi et al., 2021; Lim et al., 2018, 2020; Somalwar et al., 2020). We highlight simulated UDGs in TNG50 with orange empty circles, which we compare to observed UDGs shown in solid purple. Fig. 2 indicates that simulated UDGs display GC numbers that overlap well with the majority of available observations of UDGs (left panel), including systems in low-mass groups (Somalwar et al., 2020) but also high-density environments like Coma (Amorisco & Loeb, 2016; Gannon et al., 2022). We note, however, that extreme UDGs with \(N_{\rm GC}>30\) are not present in our simulated catalog but seem to be present in observations. This result is not entirely unexpected: all UDGs in TNG50 populate dwarf halos in the mass range \(M_{\rm vir}=[2\times 10^{9},2\times 10^{11}]\) M\({}_{\odot}\) at infall (using the last time a halo is a central as the definition of infall time, Doppel et al., 2023), and their GC content is a reflection of this prediction. The specific frequency of GCs for these galaxies is shown on the right panel of Fig. 2 and confirms a similar trend: while there is good overlap for many of the simulated UDGs in TNG50, very extreme values with \(S_{N}\gtrsim 50\) are not produced in our simulated sample but exist in systems like the Virgo or Coma cluster (Lim et al., 2018, 2020). Identifying GCs that are associated to a given galaxy in observations is not without challenge, a subject we return to in Sec. 4. The iconic UDG DF44 is a good example (van Dokkum et al., 2016).
Originally thought to host nearly 100 GCs (van Dokkum et al., 2016), it has now been estimated to have only \(\sim 20\) GCs (Saifollahi et al., 2021). If we take the latest measurements as correct, our simulated UDGs are a good representation of galaxies like DF44. On the other hand, if earlier estimates are found to hold, then we do not find DF44 analogs in our sample. The example set by DF44 perhaps warrants a closer look at observed galaxies with very extreme GC content. Despite the lack of direct analogs to the most extreme observed UDGs in terms of GC number, simulated GC systems encouragingly span a relatively wide range of GC contents, in good agreement with observational claims (e.g., Lim et al., 2018, 2020; Toloba et al., 2023). Of particular interest are those with the largest numbers of GCs (or specific frequency) at any given mass (or luminosity). A closer look at the set of TNG50 UDGs in the top 15% of GC number and specific frequency at fixed stellar mass (and \(M_{V}\)) reveals that these UDGs tend to reside in higher-mass--albeit still dwarf-mass--halos at infall (Fig. 3, where high-\(S_{N}\) UDGs are highlighted in red). Interestingly, this bias towards higher-mass halos for more extreme UDGs is linked to earlier infall times than their less extreme counterparts. This is illustrated clearly by the color coding of symbols in Fig. 3. This finding is similar to our previous results exploring the GC content of normal dwarfs in the Illustris simulations (Ramos-Almendares et al., 2020). More specifically, at fixed \(z=0\) stellar mass, galaxies with early infall times are biased towards higher halo mass due to the time evolution of the \(M_{*}-M_{halo}\) relation with redshift. Larger halo masses imply a larger number of GCs assigned at infall.
In addition, galaxies that fell in early stopped forming stars longer ago, meaning that their stellar populations have passively evolved, becoming fainter in V-band magnitude and consequently increasing their specific frequency. In TNG50, we find a median infall time \(t_{\rm inf}\sim 6.1\) Gyr for our large-GC-content UDGs compared to \(t_{\rm inf}\sim 8.1\) Gyr for the rest of the UDG sample. As with GC content, the velocity dispersion of observed UDGs has been shown to span a wide range: from the popular DF2 and DF4 galaxies associated to NGC1052, whose velocity dispersions (\(\sigma<10\) km/s) are so low that they are consistent with no dark matter at all (e.g., van Dokkum et al., 2018; Danieli et al., 2019), to UDGs nearing \(\sigma\sim 100\) km/s, compatible with halos so massive that they could in theory host MW-like galaxies. Of particular interest is the recent study by Toloba et al. (2023), which represents the first _systematic_ study of the GC kinematics of UDGs in the Virgo cluster. Half of their sample (5 out of 10) shows velocity dispersions \(\sigma\geq 50\) km/s measured within 1.5-2 kpc projected radii, making them consistent with inferred halo masses \(M_{halo}\geq 10^{12}\) M\({}_{\odot}\)--on par with that of the MW (see Fig. 9 of Toloba et al. 2023). The authors also report at least one UDG that is consistent with having no dark matter, which seems to be tied to the ongoing tidal disruption of that particular UDG, partially explaining some of the diverse \(\sigma\) values in the sample. We show the measurements presented in Toloba et al. (2023), along with a compilation of other available velocity dispersions for observed UDGs, in Fig. 4 (purple shapes). The GC velocity dispersions of simulated UDGs in TNG50 are shown with unfilled orange circles. Following Doppel et al.
(2021), we have estimated the GC velocity dispersion for these systems using a Markov-Chain Monte Carlo (MCMC) method with a Jeffreys prior on the dispersion itself, as this method was found to be the most adequate to estimate \(\sigma\) with a small number of tracers. The error bars on the orange circles show the 25%-75% spread in the velocity dispersion from the PDF stochastically generated via the MCMC method. This is analogous to the way that velocity dispersions were calculated for the GC systems of Virgo-cluster UDGs (Toloba et al., 2023, among others). We include the dispersions of other UDGs in the literature derived from GC kinematics (NGC1052-DF2, van Dokkum et al., 2018), stellar kinematics (DF44, van Dokkum et al., 2019), and stellar spectra (DFX1 (van Dokkum et al., 2017), DGSAT-1 (Martinez-Delgado et al., 2016; Martin-Navarro et al., 2019), UDG7 (Chilingarian et al., 2019), UDG1137+16 (Gannon et al., 2021), and UDGs from the Perseus cluster (Gannon et al., 2022)). This set of observed UDGs is selected here to be consistent with the UDG definition presented by Lim et al. (2020), in that they are outliers of more than \(2.5\sigma\) in one of the scaling relations between luminosity and surface brightness, mean effective surface brightness, and effective radius.

Figure 3: Stellar mass at \(z=0\) (\(M_{*,z=0}\)) vs. virial mass at infall (\(M_{200,infall}\)) for satellite galaxies in the mass range explored. Symbols are color-coded by their infall time in Gyr, such that yellow points correspond to a recent infall and bluer points correspond to an earlier infall. UDGs are highlighted by orange circles. We highlight with red hexagons the UDGs with the highest \(S_{N}\) in our sample (top 15% of \(S_{N}\) at fixed \(M_{V}\)). These more extreme UDGs tend to have earlier infall times and more massive halos than their less extreme counterparts.
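A dispersion estimate of this kind (Gaussian likelihood for the line-of-sight velocities, Jeffreys prior \(p(\sigma)\propto 1/\sigma\)) can be sketched with a minimal Metropolis sampler. This is an illustration under those assumptions, not the authors' actual pipeline; the mock velocities, step size, and chain length are arbitrary choices.

```python
import numpy as np


def log_posterior(sigma, v, v_sys=0.0):
    """Gaussian likelihood for LOS velocities plus a Jeffreys prior ~ 1/sigma."""
    if sigma <= 0:
        return -np.inf
    n = len(v)
    loglike = -n * np.log(sigma) - np.sum((v - v_sys) ** 2) / (2.0 * sigma**2)
    return loglike - np.log(sigma)  # Jeffreys prior term


def sample_sigma(v, n_steps=20000, step=5.0, seed=0):
    """Metropolis sampling of the velocity dispersion; returns post-burn-in chain."""
    rng = np.random.default_rng(seed)
    sigma = np.std(v) + 1.0  # crude positive starting point
    lp = log_posterior(sigma, v)
    chain = []
    for _ in range(n_steps):
        prop = sigma + step * rng.standard_normal()
        lp_prop = log_posterior(prop, v)
        if np.log(rng.random()) < lp_prop - lp:
            sigma, lp = prop, lp_prop
        chain.append(sigma)
    return np.array(chain[n_steps // 2:])  # drop the first half as burn-in


# Illustrative: 5 mock GC velocities (km/s) drawn around sigma ~ 30 km/s
rng = np.random.default_rng(42)
v_gc = 30.0 * rng.standard_normal(5)
chain = sample_sigma(v_gc)
lo, med, hi = np.percentile(chain, [25, 50, 75])
```

The 25-75 percentiles of the chain play the role of the quoted error bars; with only a handful of tracers the posterior is broad, which is exactly the small-number effect discussed in the text.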
Encouragingly, the range of GC velocity dispersions predicted by the tagged GCs in TNG50 agrees well with the bulk of observed values for UDGs, in particular for objects with normal-dwarf velocity dispersions such as DFX1, UDG7, UDG1137+16, several Virgo UDGs, and DF44. About half of the UDGs with available velocity measurements are consistent with the dark matter content of a dwarf-mass halo--in agreement with predictions from our UDG sample in TNG50. Moreover, the GC velocity dispersion of simulated UDGs also overlaps well with that of non-UDG dwarf satellites in TNG50 (gray dots). This is indeed expected from the formation scenario of UDGs in this simulation, which places them in dwarf dark matter halos consistent with the non-UDG sample (although with a small bias towards higher mass, e.g., Benavides et al., 2023). Interestingly, we also see in Fig. 4 several UDGs and dwarfs from TNG50 that show \(\sigma_{\rm MCMC}<10\) km/s, reminiscent of dark-matter-free UDGs such as NGC1052-DF2. A closer inspection of these simulated analogs to NGC1052-DF2 shows that several have undergone a rather significant amount of dark matter stripping (as was found in Doppel et al., 2021). However, much of the scatter in the lower-\(\sigma\) UDGs arises from having only 3-5 GCs with which to recover the potential of their host halo. As Doppel et al. (2021) showed, using a Jeffreys prior with a low number of tracers performs well in recovering the dynamical mass of the _median_ of the sample, but with a large galaxy-to-galaxy scatter. This is a large contributor to the kinematic analogs to NGC1052-DF2 in TNG50 and highlights the importance of having a sufficient number of tracers to make accurate _individual_ dark matter mass estimates. On the other hand, UDGs with high GC velocity dispersion, \(\sigma_{\rm MCMC}>50\) km/s, are less common in our simulated sample compared to available observational constraints.
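The small-number scatter invoked above can be illustrated with a quick Monte Carlo experiment: repeatedly draw only a few tracer velocities from a known dispersion and look at the spread of the resulting estimates. The choices \(\sigma_{\rm true}=25\) km/s and 4 tracers are illustrative, not values from the paper.

```python
import numpy as np


def sigma_scatter(sigma_true=25.0, n_tracers=4, n_trials=10000, seed=1):
    """Distribution of sample std-dev estimates from only a few tracers."""
    rng = np.random.default_rng(seed)
    v = sigma_true * rng.standard_normal((n_trials, n_tracers))
    return v.std(axis=1, ddof=1)  # one dispersion estimate per trial


est = sigma_scatter()
lo, med, hi = np.percentile(est, [5, 50, 95])
# With 4 tracers, individual estimates range from far below to far above
# sigma_true, while the median stays close to the input 25 km/s.
```

This is the same statistical effect that produces both the spuriously low (NGC1052-DF2-like) and some of the spuriously high dispersion values in galaxies with 3-5 GC tracers.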
A closer inspection of our high-velocity cases shows a similar situation as described above: they tend to have 3-5 dynamical tracers and scatter upwards of their true velocity dispersion (as measured from their mass content within \(R_{e}\)). High-dispersion objects are interesting because they do not conform to the expectations of dark matter content given their luminosity. Several candidates have been hinted at in observations including, for example, objects like DGSAT-1 (Martinez-Delgado et al., 2016) and NGVSUDG-09, NGVSUDG-05, NGVSUDG-11, NGVSUDG-19, NGVSUDG-20 and NGVSUDG-A04 from the Toloba et al. (2023) study of UDGs in Virgo. These are often interpreted as "failed" massive halos that were destined to form a galaxy more comparable to the Milky Way, but stopped forming stars much earlier than expected, resulting in an overly massive halo given their stellar mass (van Dokkum et al., 2015; Peng and Lim, 2016; van Dokkum et al., 2017; Lim et al., 2018; Lahen et al., 2020; Danieli et al., 2022; Janssens et al., 2022). Calculations presented in Toloba et al. (2023) show that halos more massive than \(M_{200}\sim 10^{12}\) M\({}_{\odot}\) are necessary to explain the kinematics of the large-\(\sigma_{\rm MCMC}\) UDGs. Such "failed" galaxies are not present in the simulated UDG sample in TNG50. This finding may have different explanations. The most straightforward one is that there may be a legitimate disagreement between theory and observation, implying that the physical mechanisms needed to form such massive failed galaxies are missing from cosmological simulations (as no other simulation has reported successfully forming such dark matter dominated objects to date) and from our understanding of galaxy formation.
Alternatively, the origin of the large velocity dispersions in observed UDGs may be attributed to the presence of observational errors (which are not considered in Fig. 4), interlopers, and/or observational biases which are not currently included when comparing with theoretical predictions. UDGs and their GC systems in TNG50 thus appear to be kinematically indistinguishable from normal dwarf galaxies, with large \(\sigma\) values seemingly underrepresented in our sample compared to measurements in the Virgo cluster (Toloba et al., 2023). We use our simulated GC catalog to more closely address whether contamination alone may explain the observed UDGs with large inferred dark matter halo masses.

Figure 4: Kinematics of the GC systems of dwarf galaxies in TNG50 calculated via a Markov-Chain Monte Carlo (MCMC) method with a Jeffreys prior, plotted against host galaxy V-band magnitude, \(M_{V}\). UDGs in TNG50 are highlighted with orange circles, with error bars representing the 25-75 percentiles from the PDF generated stochastically by the MCMC method. All dwarf satellites from TNG50 are shown as gray points. We show observations of GC kinematics of UDGs coming from various studies as large, solid, purple shapes. We find a wide range of UDGs represented in the literature, with some having dispersions that put them in the range of "normal" dwarf galaxies and some with dispersions that put them in the dark-matter-deficient category. Other observed UDGs sit above what is predicted by TNG50, suggesting that they reside in rather overmassive halos. We note that much of the scatter in \(\sigma_{\rm MCMC}\) for the UDGs in TNG50 is due to the presence of few GC tracers, making many of the lower scattering points the product of small-number statistics.

## 4 Effects of Interlopers on the GC Velocity Dispersion of UDGs The analysis of the simulated UDGs and their GCs in Sec. 3 assumes that only the gravitationally associated GCs are taken into account when estimating GC numbers and kinematics. For the case of the TNG50 simulations, we use information from Subfind to determine whether or not a GC is gravitationally bound to a given UDG. However, this is not possible in observations, where assigning membership to GCs near a galaxy of interest becomes an additional challenge. In the specific sample from the Virgo cluster, where most of the available kinematical constraints on UDGs exist (Toloba et al., 2018, 2023), GC membership is based on a combined criterion of projected distance to the host galaxy, \(R<7R_{e}\), with \(R_{e}\) the effective radius of the host UDG, and an additional restriction on the relative line-of-sight velocity between the candidate GC and the UDG, set to be less than 200 km/s. We can use our simulated catalogs to evaluate the degree to which the selection effects and specific choices applied in observed samples may lead to the possible inclusion of interloper GCs, biasing the velocity or mass estimates for some UDGs. We construct mock observations of our simulated samples by projecting all groups and clusters in a given direction and applying a similar selection criterion as described in Toloba et al. (2023). By doing so, we are considering the two main possible contamination sources: \(i)\) GCs associated to other galaxies that are near the UDG in projection and \(ii)\) GCs in the diffuse intra-cluster GC component (ICGCs). Assuming that the luminous mass of the UDGs is distributed roughly spherically, we make the conversion between the 3D stellar half-mass radius (\(r_{h,*}\)) and the projected effective radius (\(R_{e}\)) using \(R_{e}=3/4\,r_{h,*}\) (e.g. Hernquist, 1990; Wolf et al., 2010; Somerville et al., 2018). For illustration, Fig. 5 shows 8 representative examples of simulated UDGs and their GCs in our sample.
The stellar number density of the UDGs and their surroundings is shown by the background grayscale, and the GCs that fall in projection within the frames are represented by different symbols (see legend). We label them satellite-1 through -8, or S1-S8 for short, with a label in the upper right-hand corner of each panel. We find UDGs in relatively isolated surroundings (such as S1, S2, S5, and S8) as well as UDGs in crowded fields, with obvious interlopers from several companion galaxies in projection (S3, S4, S6, and S7). These examples are chosen to showcase different levels of contamination by interlopers and are not a random selection of UDGs in our sample. Next, we apply the selection criterion in GC radial velocity, \(v_{proj,GC}\). Fig. 6 shows this for the 8 examples discussed above. For convenience, we center the GC velocities on that of the host UDG. Following Toloba et al. (2023), we consider GCs within \(7R_{e}\) of their host galaxy and within \(\pm 200\) km/s of the velocity of their host galaxy as bound to the host galaxy (purple box). GCs that would be selected as members by this method are lime green dots highlighted by large purple circles, while those outside of the selection box are shown in lime green. We use our simulation to obtain additional information for each GC. Those known to be gravitationally bound to the UDGs (based on SubFind information) are outlined by dark blue squares. GCs that belonged to the UDG but have now been tidally stripped are outlined by magenta stars, and those outlined by sky blue hexagons are GCs associated to other subhalos. Lime green dots without any outlining shape belong to the intra-cluster GC component. In all panels we quote, in the upper right corner, the actual 1D velocity dispersion calculated with all bound GCs (\(\sigma_{\rm{true}}\)) along with the corresponding velocity dispersion computed using the objects within the selection box (\(\sigma_{\rm{obs}}\)).
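The projected-membership cut described above (\(R<7R_{e}\), \(|\Delta v|<200\) km/s, with \(R_{e}=3/4\,r_{h,*}\)) can be sketched as a simple boolean mask. The coordinates and velocities below are hypothetical, purely to illustrate the cut.

```python
import numpy as np


def projected_members(x, y, v_los, x0, y0, v0, r_half_3d):
    """Select GC candidates in projection: R < 7 R_e and |v - v_host| < 200 km/s.

    R_e is obtained from the 3D stellar half-mass radius via R_e = 3/4 r_h,
    the spherical deprojection used in the text.
    """
    r_e = 0.75 * r_half_3d
    r_proj = np.hypot(x - x0, y - y0)
    return (r_proj < 7.0 * r_e) & (np.abs(v_los - v0) < 200.0)


# Illustrative: host at the origin with r_h = 2 kpc; three candidate GCs,
# one truly nearby, one too distant in projection, one too fast.
x = np.array([1.0, 20.0, 3.0])
y = np.array([0.5, 0.0, 4.0])
v = np.array([50.0, 10.0, 400.0])
mask = projected_members(x, y, v, 0.0, 0.0, 0.0, 2.0)  # -> [True, False, False]
```

Nothing in this cut knows about gravitational binding, which is exactly why intra-cluster GCs that happen to satisfy both conditions enter the sample as interlopers.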
We emphasize that, similar to observational samples, the velocity dispersion determination is computed using an MCMC method assuming a Jeffreys prior. In general, we find that this simple selection criterion works rather well in most cases considered, with a few exceptions. We can see that for all eight featured UDGs, most of the GCs gravitationally bound to the galaxy are recovered by this selection method, with the exception of S2 and S7, which are missing 5 and 1 associated GCs, respectively, when the selection criteria are applied. Note that in neither case does this matter for the velocity dispersion measured, which remains very close to the true value even when missing a few GCs (upper right corner of each panel). As expected, the inclusion of velocity information is critical to remove GC interlopers. For example, S3 and S6 in Fig. 5 show obvious contamination due to the overlap in projection with other satellites in the group.

Figure 5: Mock X-Y projections of stars (background grayscale, colored by the number density of stars in each bin) and GCs (lime green points) within \(16R_{e}\) of 8 UDGs within TNG50. We name the satellites S1-S8 as annotated in the upper right corner of each panel. The UDGs shown are selected to have at least 8 GCs within \(16R_{e}\) of the host UDG and to display a range of scenarios, from quite easy to surprisingly difficult, for selecting bound GCs (see Sec. 4). GCs that would be considered associated in observations are highlighted with an underlying large purple circle, those that belong to other subhalos by sky blue hexagons, those that are tidally stripped by pink stars, and actual GCs bound to the subhalo by dark blue squares. For reference, we show the \(R_{e}\) of each UDG as dashed, orange circles. Several UDGs, namely S1, S2, S5, and S8, are quite isolated, with the rest having one or more other galaxies in the field of view. From spatial information alone, determining GC boundness is not straightforward.
We can see in Fig. 6 that the addition of velocity information removes the interlopers associated with S3. However, this is not the case for S6, where GCs bound to the companion galaxy fulfill the membership criteria due to a chance alignment in velocities. This results, for the specific case of S6, in a factor-of-2 overestimation of the inferred velocity dispersion: using the GCs within the selection box results in \(\sigma_{\rm obs}\sim 50\) km/s, whereas the truly associated GCs are moving with \(\sigma_{\rm true}\sim 24\) km/s. While the case of S6 demonstrates that care must be exercised when dealing with projected data, it presents a type of contamination that observational studies will avoid unless absolutely necessary. In fact, none of the UDGs considered in the samples of Toloba et al. (2018) or Toloba et al. (2023) contains other galaxies in projection along the line of sight that are brighter than \(M_{V}\sim-13\); such galaxies are therefore not luminous enough to have GCs that pose the risk of significantly contaminating the GC sample (see Sec. 5.1 of Toloba et al., 2023). In what follows, we choose to ignore contamination from GCs associated to other subhalos, as observational studies would purposely remove such complicated systems from their samples. However, a more subtle case is that of S8 in our sample. S8 is seemingly isolated, but several intra-cluster GCs fall within the selection box, artificially enhancing the measured velocity dispersion by a factor of \(\sim 3\). This galaxy would be inferred to inhabit a massive dark matter halo with \(\sigma_{GC}\sim 100\) km/s, while in reality it inhabits a dwarf-mass one with \(\sigma_{\rm true}\sim 35\) km/s. This presents a concrete example where an otherwise relatively normal UDG could be kinematically mistaken as bearing an overly massive halo. Are cases like S8 common in our sample? For that, we need to evaluate how often contamination from the intra-cluster component sneaks into the selection box.
We quantify this in Fig. 7. We show, as a function of the number of GCs within the selection box in our UDGs, \(N_{\rm GC,Selected}\), the ratio of the measured velocity dispersion (including intra-cluster interlopers) to the true value (computed with only bound GCs according to SubFind). For the vast majority of simulated UDGs the velocity dispersion estimate remains within 20% of its true value, suggesting that interlopers are unlikely to play a dominant role in the majority of UDG measurements. However, for systems with fewer than 10 GCs, the inclusion of intra-cluster contamination may cause an overestimation of the velocity dispersion by factors of 2-10. The median and percentiles show, however, that it is statistically much more likely to remain within 15% of the true value. Can intra-cluster GCs then explain the high incidence of large-velocity-dispersion UDGs found in Virgo? A close inspection of Fig. 6 shows that interlopers tend to have the largest distances and largest velocity differences with respect to the central galaxy (while still remaining within the selection box). We have re-analyzed the velocity dispersions of the most extreme "failed galaxy" examples in Virgo from Toloba et al. (2023), removing the furthest GC or the GC with the largest velocity difference, and found no significant change in the estimates of their velocity dispersions or dynamical masses. These include NGVSUDG-05, NGVSUDG-09, NGVSUDG-11, NGVSUDG-19, NGVSUDG-20, and NGVSUDG-A04, using the nomenclature of the original paper. The most extreme variation is for NGVSUDG-09, which changes from \(\sigma=83^{+33}_{-22}\) km/s to \(\sigma=60^{+25}_{-15}\) km/s. While these values are still statistically consistent, the median velocity dispersion is brought more in line with TNG50 UDGs.

Figure 6: A mock observation of the radial velocities of the GCs associated to the eight UDGs in Fig. 5. GCs are considered members of the galaxy if they fall within \(7R_{e}\) of a given galaxy and their radial velocities are within \(\pm 200\) km/s of that of their suspected host galaxy. The wide range in acceptable velocities allows for the possibility of detecting UDGs that form in massive dark matter halos. All GCs in the \(16R_{e}\) field of view are represented by lime green dots, with those selected to be associated by the radius and velocity cuts highlighted by large purple circles within the purple box. GCs that have been tidally stripped from their host UDG are outlined by magenta stars, GCs known to belong to the UDGs are highlighted by unfilled dark blue squares, and those that belong to other subhalos in the field of view are outlined by sky blue hexagons. We can see that even in the presence of additional galaxies in the field of view, such as for S3, the actual GCs can be recovered by the selection method described in Section 4. For some UDGs (namely S3, S4, and S7 in the set presented here), we find the observed radial cut of \(7R_{e}\) to be somewhat conservative. Overall, assigning GCs based on kinematics is a powerful tool, with a nearly correct set of GCs often being picked out of crowded fields (e.g., S3). Interestingly, GCs that are considered part of the ICGCs because they have been tidally stripped from their hosts are often difficult to distinguish kinematically or radially from the set of actual GCs, although this does not seem to affect the estimate of the velocity dispersion (see S5). Overlap of several galaxies in the field of view can complicate GC identification (see S6), but cases like this would likely not be included in observational samples. S8 represents an interesting case in which interloper GCs from the intra-cluster component are flagged as members and substantially increase the estimated velocity dispersion.
Worth noticing, NGVSUDG-19 has only 3 GC members identified, so it is necessary to proceed with caution regarding this particular target. In order to evaluate the possibility of contamination in the Toloba et al. (2023) sample more closely, we now restrict our simulated sample to UDGs outside of \(0.1R_{vir}\) from their host cluster and with \(N_{\rm GC,Selected}\geq 5\) (criteria that exclude only 1 target from Toloba et al. 2023). A total of 242 UDGs satisfy these criteria when using 3 different projections, along the \(x\)-, \(y\)-, and \(z\)-axes, of our 39 groups and clusters in TNG50. We derive from these mock projections the corresponding 1D MCMC velocity dispersion, the half-number radius of the GCs, and the dynamical mass at the half-number radius following Jeans modeling as in Wolf et al. (2010). The dynamical mass may be trivially transformed into a mass density by dividing by the spherical volume enclosed within the half-number radius. Fig. 8 shows the inferred mass density for this subsample of simulated UDGs as a function of the half-number radius of their selected GCs. Simulated galaxies with 5-9 selected GCs and at least one interloper are shown as gray unfilled circles, while those with 10 or more (including interlopers) are indicated by black unfilled circles. For reference, the gray lines represent NFW profiles with virial masses \(M_{200}=10^{10},10^{11}\), and \(10^{12}\) M\({}_{\odot}\) and a concentration \(c=10\). We do find that in most cases, in particular those with 10 or more GCs, galaxies cluster around the \(10^{11}\) M\({}_{\odot}\) line, which is in agreement with the prediction in TNG50 that UDGs occupy dwarf-mass halos with \(M_{200}\sim 10^{9}-10^{11}\) M\({}_{\odot}\) (Benavides et al., 2023).

Figure 7: The ratio of the GC velocity dispersion measured via mock observations, \(\sigma_{\rm mock,ICGC}\), to the actual GC velocity dispersion, \(\sigma_{\rm true}\), as a function of the total number of GCs selected, \(N_{\rm GC,Selected}\), via the method described in Sec. 4, with points colored by \(\log_{10}(\sigma_{\rm mock,ICGC})\). We can see that for small GC numbers \(\sigma_{\rm mock,ICGC}\) can be greatly inflated from its true value, especially by intra-cluster GC contaminants. For most galaxies, the mock observations do not pick up a significant number of interloping intra-cluster GCs, leading to an overall median \(\sigma_{\rm mock,ICGC}/\sigma_{\rm true}\sim 1\).

Figure 8: Effects of GC contamination on dark matter mass inferences. We show the mass density, \(\rho_{0}\), as a function of radius as reconstructed from the kinematics of GCs for UDGs with more than 5 GCs in projected mock observations (see Sec. 4 for details). To guide the eye, NFW density profiles with concentration \(c=10\) and halo masses \(M_{200}=10^{10},10^{11},10^{12}\) M\({}_{\odot}\) are shown as solid, dashed, and dotted gray lines, respectively. The instantaneous density at the half-number radius of the GCs is represented by brown dots for UDGs with no GC interlopers. The thick brown line and shaded region indicate the median and 25-75 percentiles of this distribution, which recovers well the \(M_{200}=10^{11}\) M\({}_{\odot}\) density profile: GCs are good tracers of mass for these systems. However, contamination by intra-cluster GCs seemingly associated in projection causes a larger scatter, including several systems lying close to the \(M_{200}=10^{12}\) M\({}_{\odot}\) line (gray symbols for more than 5 GCs associated in projection, black for more than 10, each system considered in the XY, XZ, and YZ projections). Error bars on simulated UDGs correspond to the 25%-75% scatter from their MCMC pdfs. Values compiled from the literature for real UDGs are shown as purple symbols, as labeled. Specific cases of contamination from Figs. 5 and 6 are highlighted as colored diamonds, connecting their density calculated using their actual GCs with that obtained from the observationally selected GCs. The presence of intra-cluster GC interlopers helps explain the inference of overly massive halos in some of our simulated UDGs, for example S8.

For comparison, in Fig. 8 we include the inferred densities of several observed UDGs, derived from their reported velocity dispersions (whether from GCs, stars, or stellar spectra) and half-number radii (GCs) or effective radii (stars) via dynamical mass estimation (see Wolf et al., 2010). However, we find several instances where the mocked galaxies wander close to or above the \(M_{200}\sim 10^{12}\) M\({}_{\odot}\) line, despite their true halo mass being substantially smaller. This happens mostly due to contamination from interloping intra-cluster GCs. This can be appreciated relative to the UDGs that possess no interloping GCs in their mocked GC sample, shown by the brown points, line (median), and shaded region (\(25\%-75\%\) spread) in Fig. 8. There is a much larger incidence of gray circles near the \(M_{200}\sim 10^{12}\) M\({}_{\odot}\) line, suggesting that low numbers of kinematical tracers may play a role in the appearance of overmassive halos. Such is the case of S8, introduced before in Figs. 5 and 6 and highlighted in pink, which moves from a true mass density consistent with the dashed line (\(M_{200}\sim 10^{11}\) M\({}_{\odot}\)) to an inferred density more consistent with a MW-mass halo with \(M_{200}>10^{12}\) M\({}_{\odot}\). Worth discussing is also the case of S7, highlighted in Fig. 8 as the purple diamond. As shown in Fig. 6, S7 does not include contamination by GC interlopers in its mocked GC sample. Yet its inner density is high and consistent with MW-like halos, both when applying the mock selection in projection and when considering all bound GCs according to SubFind.
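The mass inference behind Fig. 8 (velocity dispersion plus GC half-number radius into a Wolf et al. 2010 dynamical mass, then division by the enclosed spherical volume) can be written compactly. The estimator below is the standard published form; the unit conventions and the illustrative numbers are our own assumptions, not measurements from the paper:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def wolf_mass(sigma_los, r_half_proj):
    """Wolf et al. (2010) estimator: mass within the 3D half-number radius,
    M_1/2 ~= 4 * sigma_los^2 * R_e / G, with R_e the projected half-number
    radius of the GC system (km/s and kpc give Msun)."""
    return 4.0 * sigma_los**2 * r_half_proj / G

def mean_density(sigma_los, r_half_proj):
    """Mean density: dynamical mass divided by the spherical volume within
    the deprojected half-number radius, r_1/2 ~= (4/3) R_e."""
    r_half_3d = 4.0 / 3.0 * r_half_proj
    return wolf_mass(sigma_los, r_half_proj) / (4.0 / 3.0 * np.pi * r_half_3d**3)

# Illustrative (made-up) UDG: sigma = 20 km/s, projected half-number radius 1.5 kpc.
m_dyn = wolf_mass(20.0, 1.5)   # ~6e8 Msun, a dwarf-mass value
rho = mean_density(20.0, 1.5)  # Msun / kpc^3
```

Because the mass scales as \(\sigma^{2}\), doubling the dispersion quadruples the inferred mass, which is how a single surviving interloper can push a dwarf-mass halo toward the \(10^{12}\) M\({}_{\odot}\) NFW line.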
We have checked that this high density is not the result of an overly massive halo but instead corresponds to a dwarf-mass halo with a larger-than-typical concentration. The virial mass before infall for S7 is \(M_{200}\sim 9\times 10^{10}\) M\({}_{\odot}\). This galaxy is a good reminder that variations in concentration may also drive some of the scatter in the inferred dark matter content of UDGs, a possibility briefly discussed in Gannon et al. (2022). Given these results, can intra-cluster GCs explain the high incidence of large velocity dispersion UDGs found in Virgo? Within the range of galaxy groups and clusters that we can explore with TNG50, we find that contamination from intra-cluster GCs is unlikely to explain the high incidence of high-mass UDGs in Virgo reported recently by Toloba et al. (2023). Only a handful of simulated UDGs are driven close to the \(M_{200}\sim 10^{12}\) M\({}_{\odot}\) line due to contamination effects, with only 9.5% of UDGs with 5 GCs or more showing velocity dispersion overestimation by a factor of 2 or more in the mock observations. However, a factor to keep in mind is that even the most massive simulated galaxy cluster in TNG50 (Group 0, \(M_{200}=1.87\times 10^{14}\) M\({}_{\odot}\)) is on the low end of mass estimates for the Virgo cluster (\(M_{200}\sim 2-9\times 10^{14}\) M\({}_{\odot}\), Weinmann et al., 2011; Karachentsev and Nasonova, 2010), with the remainder of our groups in the simulated sample being lower mass. For our most massive cluster, we predict a total of 34,231 GCs with \(M\geq 7\times 10^{5}\) M\({}_{\odot}\), which is on par with what is expected for the GC number density of the M87 subgroup in Virgo (e.g., Durrell et al., 2014; Lee et al., 2010), but is about a factor of two lower than the _combined_ estimate when considering also the M49 subgroup, \(N_{GC,Virgo}\sim 67,300\pm 14,400\) (Durrell et al., 2014).
All of the remaining groups in our simulated sample are less massive and will therefore have fewer GCs than Group 0. It is therefore possible that chance alignment of ICGCs has a larger impact in the specific case of observations in Virgo than found on average in our study.

## 5 Discussion and Summary

We use a catalog of GCs added to the TNG50 cosmological simulation, introduced in Doppel et al. (2023), to study the population of GCs associated to UDGs with stellar mass \(M_{*}=[10^{7.5},10^{9}]\) M\({}_{\odot}\) in 39 groups and clusters with \(M_{200}=[5\times 10^{12}-2\times 10^{14}]\) M\({}_{\odot}\). UDGs are selected as outliers in the mass-size relation as presented in Benavides et al. (2023). UDGs in TNG50 are found to form in dwarf-mass halos with biased-high spins and virial masses between \(M_{200}=[2\times 10^{9},2\times 10^{11}]\) M\({}_{\odot}\). As a result, simulated UDGs have similar GC numbers to those associated with non-UDG dwarfs of similar stellar mass. We find between 1-30 GCs bound to the simulated UDGs, with only \(12\) UDGs having no GCs at all. This seems in agreement with observed UDGs, which show a large spread in GC content (Amorisco et al., 2018; Lim et al., 2018, 2020; Somalwar et al., 2020; Gannon et al., 2022; La Marca et al., 2022; Toloba et al., 2023). However, our sample lacks extreme outliers with \(N_{GC}>30\) and \(S_{N}>50\), as some observations suggest (e.g. Peng and Lim, 2016; Lim et al., 2018, 2020; Muller et al., 2021). The lack of simulated UDGs with high specific GC frequency is ultimately linked to the fact that UDGs in TNG50 all inhabit dwarf-mass halos, which have low GC numbers according to the scaling assumed in the model. We caution, however, that uncertainties are still important in observations.
For example, our predictions fall well below the initial number of \(\sim 100\) GCs reported for the iconic DF44 (van Dokkum et al., 2017) but agree very well with its revised value of \(\sim 20\) in the more recent work by Saifollahi et al. (2021). As with the GC numbers, we find in general good agreement between the predicted GC velocity dispersions in simulated UDGs and the values reported in the literature for observational samples. Our predictions agree well with \(\sigma\) measurements for a number of UDGs, particularly DF44, DGSAT-1, DFX1, UDG7, and several UDGs in the Virgo cluster. However, large velocity-dispersion outliers with \(\sigma>50\) km/s, such as those found for half of the UDGs studied in the Virgo cluster in Toloba et al. (2023), are not common in our sample. We can use our simulated GC catalogs to make projected mock observations of our systems and assess whether interloper GCs could affect the observational results. We find that outliers from the intra-cluster GC component associated to the host galaxy group or galaxy cluster may in some cases impact the velocity dispersion measurement, inflating \(\sigma\) by factors of \(>2\). These cases are, however, rare, in particular when focusing on UDGs with a sufficient number of tracers (GCs). In agreement with our previous results (Doppel et al., 2021), we find 10 or more GCs are needed for robust kinematical measurements. For instance, only 9.5% of cases with more than 10 GCs have velocity dispersions that are overestimated by more than a factor of 2 because of the presence of interlopers. Such cases would suggest dark matter halos with \(M_{200}\sim 10^{12}\) M\({}_{\odot}\) when in reality they occupy normal dwarf-mass halos.
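The sensitivity to tracer number can be illustrated with a deliberately simple toy (ours, not the paper's MCMC machinery, and with made-up numbers): one interloper that survives the \(\pm 200\) km/s cut inflates the naive dispersion of a 4-member sample far more than that of a 30-member sample.

```python
import numpy as np

def dispersion_ratio(n_members, sigma_true=25.0, v_interloper=180.0, seed=0):
    """Sample n_members GC velocities from N(0, sigma_true), append one
    intra-cluster interloper that would survive a +/-200 km/s cut, and
    return the naive sample dispersion divided by sigma_true."""
    rng = np.random.default_rng(seed)
    v = np.append(rng.normal(0.0, sigma_true, n_members), v_interloper)
    return v.std(ddof=1) / sigma_true

r4, r30 = dispersion_ratio(4), dispersion_ratio(30)
# The 5-tracer system is inflated much more strongly than the 31-tracer one.
```

This mirrors the trend in Fig. 7: contaminated systems with few selected GCs scatter far above \(\sigma_{\rm mock,ICGC}/\sigma_{\rm true}=1\), while rich systems stay near unity.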
We compare our results with the high incidence of observed UDGs with large velocity dispersions reported in kinematical studies of UDGs in the Virgo cluster and conclude that the frequency of contamination in our systems does not explain the large number of UDGs with \(\sigma>50\) km/s in Virgo. A caveat of our study is that the groups and clusters included in TNG50 are on average less massive than Virgo, and the incidence of interloper contamination could be higher in more massive systems. We identify some high inferred halo-mass cases in Toloba et al. (2023), such as NGVSUDG-19, NGVSUDG-05, and NGVSUDG-20, that have 5 GC tracers or fewer, making them interesting candidates to follow up spectroscopically for confirmation. Ultimately, for UDGs with a low number of identified GC members, measuring their stellar velocity dispersion might be the only avenue to better constrain their true dark matter content and, with it, their possible formation path.

## Acknowledgements

JED and LVS are grateful for financial support from the NSF-CAREER-1945310 and NASA ATP-80NSSC20K0566 grants. ET is thankful for the support from NSF-AST-2206498 and HST GO-15417 grants. DN acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1).

## Data Availability

The realistic GC catalogs used in this study are available to the public. The catalogs can be downloaded here: www.tng-project.org/doppel22 or as part of the TNG50 public data release (Nelson et al., 2019).
arXiv: 2309.05611
Title: On actions of tori and quaternionic tori on products of spheres
Abstract: In this paper we study the actions of tori (standard compact tori, as well as their quaternionic analogues) on products of spheres. It is proved that the orbit space of a specific action of a torus on a product of spheres is homeomorphic to a sphere. A similar statement for a real torus $\mathbb{Z}_2^n$ was proved by the second author in 2019. We also provide a statement about arbitrary compact topological groups, generalizing the mentioned results, as well as the results of the first author about the actions of a compact torus of complexity one.
Authors: Anton Ayzenberg, Dmitry Gugnin
Published: 2023-09-11T16:52:42Z
Link: http://arxiv.org/abs/2309.05611v1
# On actions of tori and quaternionic tori on products of spheres

###### Abstract.

In this paper we study the actions of tori (standard compact tori, as well as their quaternionic analogues) on products of spheres. It is proved that the orbit space of a specific action of a torus on a product of spheres is homeomorphic to a sphere. A similar statement for a real torus \(\mathbb{Z}_{2}^{n}\) was proved by the second author in 2019. We also provide a statement about arbitrary compact topological groups, generalizing the mentioned results, as well as the results of the first author about the actions of a compact torus of complexity one.

Key words and phrases: torus action, quaternions, orbit spaces. 2020 Mathematics Subject Classification: Primary: 57S12, 57S15, 57S25; Secondary: 57R10. This work was supported by the Russian Science Foundation under grant no. 23-11-00143, [https://rscf.ru/en/project/23-11-00143/](https://rscf.ru/en/project/23-11-00143/).

The proof of this theorem is completely analogous to the proof of its quaternionic version, Theorem 3 below. Let \(\mathbb{H}\) denote the algebra of quaternions and \(\operatorname{Sp}(1)\) the Lie group of unit quaternions (the quaternions of unit length). Consider \(k\geqslant 2\) spheres of dimensions at least \(4\):

\[S^{m_{1}}=\{(\mathbf{x}_{1},q_{1})\mid\mathbf{x}_{1}=(x_{1,1},x_{2,1},\ldots,x_{m_{1}-3,1})\in\mathbb{R}^{m_{1}-3},q_{1}\in\mathbb{H},|\mathbf{x}_{1}|^{2}+|q_{1}|^{2}=1\},\]
\[\vdots\]
\[S^{m_{k}}=\{(\mathbf{x}_{k},q_{k})\mid\mathbf{x}_{k}=(x_{1,k},x_{2,k},\ldots,x_{m_{k}-3,k})\in\mathbb{R}^{m_{k}-3},q_{k}\in\mathbb{H},|\mathbf{x}_{k}|^{2}+|q_{k}|^{2}=1\}.\]

Consider the direct product \(S^{m_{1}}\times S^{m_{2}}\times\ldots\times S^{m_{k}}\).
It carries a (left) smooth action of the quaternionic torus of rank \(k-1\) \[\operatorname{Sp}(1)^{k-1}=\underbrace{\operatorname{Sp}(1)\times \operatorname{Sp}(1)\times\ldots\times\operatorname{Sp}(1)}_{k-1\text{ times}}.\] Namely, the element \((r_{1},r_{2},\ldots,r_{k-1})\in\operatorname{Sp}(1)^{k-1}\) translates a point \[((\mathbf{x}_{1},q_{1}),(\mathbf{x}_{2},q_{2}),\ldots,(\mathbf{x}_{k},q_{k})) \in S^{m_{1}}\times S^{m_{2}}\times\ldots\times S^{m_{k}}\] to the point \[((\mathbf{x}_{1},q_{1}r_{1}^{-1}),(\mathbf{x}_{2},r_{1}q_{2}r_{2}^{-1}),( \mathbf{x}_{3},r_{2}q_{3}r_{3}^{-1}),\ldots,(\mathbf{x}_{k},r_{k-1}q_{k})).\] **Theorem 3**.: _The quotient space \(S^{m_{1}}\times S^{m_{2}}\times\ldots\times S^{m_{k}}/\operatorname{Sp}(1)^{k-1}\) is homeomorphic to the sphere \(S^{m},m=m_{1}+\ldots+m_{k}-3(k-1)\). The canonical projection to the orbit space is given by the formula:_ \[((\mathbf{x}_{1},q_{1}),(\mathbf{x}_{2},q_{2}),\ldots,(\mathbf{x}_{k},q_{k})) \mapsto\frac{(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{k},q_{1}q_{2} \ldots q_{k})}{\sqrt{|\mathbf{x}_{1}|^{2}+|\mathbf{x}_{2}|^{2}+\ldots+| \mathbf{x}_{k}|^{2}+|q_{1}q_{2}\ldots q_{k}|^{2}}}. \tag{2.2}\] Proof.: First, let us prove that the canonical projection onto the space of orbits is realized by a simpler formula: \[((\mathbf{x}_{1},q_{1}),(\mathbf{x}_{2},q_{2}),\ldots,(\mathbf{x}_{k},q_{k})) \mapsto(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{k},q_{1}q_{2}\ldots q _{k}). \tag{2.3}\] From the definition of the action of the quaternionic torus on the product of spheres, it is clear that any orbit gets mapped to a single point under the map (2.3). Let us prove that two different orbits cannot map to the same point under (2.3). Indeed, assume \[(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{k},q_{1}q_{2}\ldots q_{k}) =(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{k},p_{1}p_{2}\ldots p_{k}). \tag{2.4}\] It follows that \(\mathbf{x}_{i}=\mathbf{y}_{i},1\leqslant i\leqslant k\). 
Therefore, \(|q_{i}|=|p_{i}|\), \(1\leqslant i\leqslant k\). It is easily seen that, remaining within a single orbit, we can assume that \(0\leqslant q_{i}=p_{i}\leqslant 1\), \(1\leqslant i\leqslant k-1\). At first, let us assume that \(q_{1}q_{2}\ldots q_{k}=p_{1}p_{2}\ldots p_{k}\neq 0\). Then, obviously, \(q_{k}=p_{k}\), which was to be proved. Now consider the case \(q_{1}q_{2}\ldots q_{k}=p_{1}p_{2}\ldots p_{k}=0\). If \(q_{k}=p_{k}=0\), then everything is proved. Let \(|q_{k}|=|p_{k}|>0\). Denote the unit quaternion \(p_{k}q_{k}^{-1}\) by \(a\). Let \(q_{k-1}=p_{k-1}=0\). Then one can translate the tuple \((q_{1},q_{2},\ldots,0,q_{k})\) to the tuple \((p_{1},p_{2},\ldots,0,p_{k})\) using the element \((1,1,\ldots,1,r_{k-1}=a)\in\operatorname{Sp}(1)^{k-1}\). If \(q_{k-1}=p_{k-1}>0\), \(q_{k-2}=p_{k-2}=0\), then \((1,1,\ldots,1,a,a)\) is the required element of the group \(\operatorname{Sp}(1)^{k-1}\). If \(q_{k-1}=p_{k-1}>0\), \(q_{k-2}=p_{k-2}>0\), \(q_{k-3}=p_{k-3}=0\), then the required element is \((1,1,\ldots,1,a,a,a)\). Similar arguments work for other cases. In the most extreme case, we have \(q_{k-1}=p_{k-1}>0\), \(q_{k-2}=p_{k-2}>0\), \(\ldots\), \(q_{2}=p_{2}>0\), \(q_{1}=p_{1}=0\). In this case, the required element is \((a,a,\ldots,a)\). Thus, we have proved that formula (2.3) determines a well-defined canonical projection onto the orbit space. It can be seen that the vector on the right hand side of formula (2.3) is nonzero. Since the formula (2.2) on the left hand side involves a compact Hausdorff space (the quotient of a Hausdorff space by a continuous action of a compact Lie group is Hausdorff [6]), and the sphere appears on the right hand side, it suffices to check _bijectivity_ of the map (2.2). **Injectivity**. 
We need to check that the equality \[(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{k},q_{1}q_{2}\ldots q_{k})= \mu(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{k},p_{1}p_{2}\ldots p_{k }),\quad\mu>1 \tag{2.5}\] never occurs for distinct points. We have \(q_{1}q_{2}\ldots q_{k}=\mu p_{1}p_{2}\ldots p_{k}\). Two cases are possible: (A) \(q_{1}q_{2}\ldots q_{k}=p_{1}p_{2}\ldots p_{k}=0\), and (B) \(q_{i}\neq 0,p_{i}\neq 0,1\leqslant i\leqslant k\). _Case (A)._ We have \(p_{i_{0}}=0\) for some \(1\leqslant i_{0}\leqslant k\). Then \(|\mathbf{y}_{i_{0}}|=1\) and \(|\mathbf{x}_{i_{0}}|=\mu|\mathbf{y}_{i_{0}}|=\mu>1\), which is impossible. _Case (B)._ We have \(|\mathbf{x}_{i}|=\mu|\mathbf{y}_{i}|\geqslant|\mathbf{y}_{i}|\), \(1\leqslant i\leqslant k\). This implies that \(0<|q_{i}|\leqslant|p_{i}|\), \(1\leqslant i\leqslant k\). Hence \(0<|q_{1}q_{2}\ldots q_{k}|\leqslant|p_{1}p_{2}\ldots p_{k}|\). On the other hand, we have \(|q_{1}q_{2}\ldots q_{k}|=\mu|p_{1}p_{2}\ldots p_{k}|>|p_{1}p_{2}\ldots p_{k}|\), -- a contradiction. **Surjectivity**. Since the image of a compact space under a continuous mapping is always a compact space, then either the surjectivity is proved or the image of the map (2.2) is a proper subcompact in the sphere \(S^{m}\). Again, we argue from the contrary. Assume there exists a vector \((\mathbf{t}_{1},\mathbf{t}_{2},\ldots,\mathbf{t}_{k},t)\) such that \(\mathbf{t}_{i}\neq 0\), \(1\leqslant i\leqslant k\), \(t\neq 0\), and for any \(\mu>0\) the vector \(\mu(\mathbf{t}_{1},\mathbf{t}_{2},\ldots,\mathbf{t}_{k},t)\) does not have the form \((\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{k},q_{1}q_{2}\ldots q_{k})\). Denote \(\min_{1\leqslant i\leqslant k}\{1/|\mathbf{t}_{i}|\}\) by \(\mu_{0}\). 
As the parameter \(\mu\) runs over the interval \((0,\mu_{0})\), the lengths of the vectors \(\mu\mathbf{t}_{i}=\mathbf{x}_{i}\), \(1\leqslant i\leqslant k\) increase strictly and continuously and run over the intervals \((0,\mu_{0}|\mathbf{t}_{i}|)\subset(0,1)\), \(1\leqslant i\leqslant k\). Moreover, there exists \(1\leqslant i_{0}\leqslant k\) such that \((0,\mu_{0}|\mathbf{t}_{i_{0}}|)=(0,1)\). It follows from the length expressions \(|q_{i}|=\sqrt{1-|\mathbf{x}_{i}|^{2}}\), that the length of \(|q_{1}(\mu)q_{2}(\mu)\ldots q_{k}(\mu)|\) decreases strictly and continuously from \(1\) to \(0\) (not taking extreme values). In this case, it is possible to achieve collinearity of the nonzero quaternions \(t\) and \(q_{1}(\mu)q_{2}(\mu)\ldots q_{k}(\mu)\), \(0<\mu<\mu_{0}\). Since the length of \(|\mu t|\) increases strictly and continuously from \(0\) to \(|\mu_{0}||t|>0\), the Cauchy intermediate value theorem asserts that there exists a parameter \(\mu_{1}\in(0,\mu_{0})\), for which the equality \(\mu_{1}(\mathbf{t}_{1},\mathbf{t}_{2},\ldots,\mathbf{t}_{k},t)=(\mathbf{x}_{ 1},\mathbf{x}_{2},\ldots,\mathbf{x}_{k},q_{1}(\mu_{1})q_{2}(\mu_{1})\ldots q_{k }(\mu_{1}))\) holds. The theorem is completely proven. In the work [9] of the second author, the proof of Theorem 1 was omitted due to its simplicity. However, we still give here the proof of the most nontrivial part, namely, the non-degeneracy (local diffeomorphism) of the corresponding branched covering at the points of the local homeomorphism (away from the branching locus). Proof. 
It is easily shown (similar to the above reasoning) that the canonical projection onto the orbit space is given by a simpler formula: \[(x_{1,1},\ldots,x_{m_{1},1},q_{1},x_{1,2},\ldots,x_{m_{2},2},q_{2 },\ldots,x_{1,k},\ldots,x_{m_{k},k},q_{k})\mapsto\\ \mapsto(x_{1,1},\ldots,x_{m_{1},1},x_{1,2},\ldots,x_{m_{2},2}, \ldots,x_{1,k},\ldots,x_{m_{k},k},q_{1}q_{2}\cdots q_{k}).\] Here, we denote \(x_{m_{i}+1,i}=q_{i}\in\mathbb{R}\), \(1\leqslant i\leqslant k\) for the sake of simplicity. Moreover, for the initial map onto the unit sphere, the points of the local homeomorphism are either \((A)\) points with \(q_{1}q_{2}\cdots q_{k}\neq 0\), or \((B)\) there is a unique \(1\leqslant j\leqslant k\) with \(q_{j}=0\). It is understood that the points of a local diffeomorphism for the original map of smooth \(m\)-dimensional manifolds \(S^{m_{1}}\times\ldots\times S^{m_{k}}\to S^{m}\) correspond (in both directions) to the points of a local diffeomorphism for the following map of smooth \((m+1)\)-dimensional manifolds \(F\colon S^{m_{1}}\times\ldots\times S^{m_{k}}\times(0,+\infty)\to\mathbb{R}^{ m}\setminus\{0\}\): \[F(x_{1,1},\ldots,x_{m_{1},1},q_{1},x_{1,2},\ldots,x_{m_{2},2},q_{2},\ldots,x_ {1,k},\ldots,x_{m_{k},k},q_{k};\mu)=\] \[\mu(x_{1,1},\ldots,x_{m_{1},1},x_{1,2},\ldots,x_{m_{2},2},\ldots,x_{1,k}, \ldots,x_{m_{k},k},q_{1}q_{2}\cdots q_{k}),\] where the parameter \(\mu\) can be taken arbitrarily. Let us verify that in both cases \((A)\) and \((B)\) the Jacobian of the map \(F\) is nonzero. **Case \((A)\).** In this case, the string \((x_{1,1},\ldots,x_{m_{1},1},x_{1,2},\ldots,x_{m_{2},2},\ldots,x_{1,k},\ldots, x_{m_{k},k};\mu)\) can be taken as the local coordinates in the preimage. We need to calculate the determinant of order \(m+1\). 
The last column of the desired determinant \((\partial F(\ldots)/\partial\mu)\) is equal to (the transposed string) \[(x_{1,1},\ldots,x_{m_{1},1},\ldots,x_{1,k},\ldots,x_{m_{k},k},q_{1}q_{2}\cdots q _{k})^{\intercal}.\] Further, the column \(\partial F(\ldots)/\partial x_{1,1}\) divided by \(\mu\) equals \[(1,0,\ldots,0,(\partial q_{1}/\partial x_{1,1})q_{2}q_{3}\cdots q_{k})^{ \intercal}=\left(1,0,\ldots,0,-\frac{x_{1,1}}{q_{1}}q_{2}q_{3}\cdots q_{k} \right)^{\intercal}.\] Similarly, the column \(\partial F(\ldots)/\partial x_{2,1}\) divided by \(\mu\) equals \[(0,1,0,\ldots,0,(\partial q_{1}/\partial x_{2,1})q_{2}q_{3}\cdots q_{k})^{ \intercal}=\left(0,1,0,\ldots,0,-\frac{x_{2,1}}{q_{1}}q_{2}q_{3}\cdots q_{k} \right)^{\intercal}.\] Making further calculations, we get that the penultimate column \(\partial F(\ldots)/\partial x_{m_{k},k}\) divided by \(\mu\) equals \[(0,0,\ldots,0,1,(\partial q_{k}/\partial x_{m_{k},k})q_{1}q_{2}\cdots q_{k-1} )^{\intercal}=\left(0,0,\ldots,0,1,-\frac{x_{m_{k},k}}{q_{k}}q_{1}q_{2}\cdots q _{k-1}\right)^{\intercal}.\] Subtracting from the last column \((x_{1,1},\ldots,x_{m_{1},1},\ldots,x_{1,k},\ldots,x_{m_{k},k},q_{1}q_{2}\cdots q _{k})\) the first column multiplied by \(x_{1,1}\), the second column multiplied by \(x_{2,1}\), etc, we obtain, as a result, the lower triangular matrix with the diagonal \[\left(1,1,\ldots,1,q+\frac{x_{1,1}^{2}+x_{2,1}^{2}+\ldots+x_{m_{1},1}^{2}}{q_ {1}^{2}}q+\ldots+\frac{x_{1,k}^{2}+x_{2,k}^{2}+\ldots+x_{m_{k},k}^{2}}{q_{k}^ {2}}q\right),\] where \(q=q_{1}q_{2}\cdots q_{k}\). Since \(q\neq 0\), the determinant of this matrix is nonzero. The required local diffeomorphism in the case \((A)\) is proved. **Case \((B)\).** Due to certain symmetry of the function \(F\) in its arguments, we can assume without loss of generality that \(q_{1}=0\), \(q_{2}q_{3}\cdots q_{k}\neq 0\), \(x_{1,1}\neq 0\). 
In this situation, one can choose the string \[q_{1},x_{2,1},x_{3,1},\ldots,x_{m_{1},1},x_{1,2},\ldots,x_{m_{2},2},\ldots,x_{1,k},\ldots,x_{m_{k},k},\mu\] as local coordinates in the preimage. Recall the definition of \(F\): \[F(\ldots)=\mu(x_{i,j};q).\] Let us write down the calculations of all first partial derivatives of the function \(F\): \[\frac{1}{\mu}\frac{\partial F}{\partial q_{1}}=(0,0,\ldots,0;q_{2}q_{3}\cdots q_{k}),\] \[\frac{\partial F}{\partial\mu}=(x_{i,j};0),\] \[\frac{1}{\mu}\frac{\partial F}{\partial x_{2,1}}=\left(-\frac{x_{2,1}}{x_{1,1}},1,0,\ldots,0;0\right),\ \frac{1}{\mu}\frac{\partial F}{\partial x_{3,1}}=\left(-\frac{x_{3,1}}{x_{1,1}},0,1,0,\ldots,0;0\right),\ldots,\] \[\frac{1}{\mu}\frac{\partial F}{\partial x_{m_{1},1}}=\left(-\frac{x_{m_{1},1}}{x_{1,1}},0,0,\ldots,0,1,0,\ldots,0;0\right),\] \[\frac{1}{\mu}\frac{\partial F}{\partial x_{1,2}}=(0,\ldots,0,1,0,\ldots,0;0),\ \ldots,\ \frac{1}{\mu}\frac{\partial F}{\partial x_{m_{k},k}}=(0,\ldots,0,1;0).\] By carefully computing the Jacobian of the map \(F\) at the given point, we can see that it is nonzero if and only if the following determinant of order \(m_{1}\) is nonzero: \[\left|\begin{array}{cccccc}x_{1,1}&-x_{2,1}&-x_{3,1}&\ldots&-x_{m_{1}-1,1}&-x_{m_{1},1}\\ x_{2,1}&x_{1,1}&0&\ldots&0&0\\ x_{3,1}&0&x_{1,1}&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ x_{m_{1}-1,1}&0&0&\ldots&x_{1,1}&0\\ x_{m_{1},1}&0&0&\ldots&0&x_{1,1}\end{array}\right|\] Denote by \(A\) the matrix of this determinant. If this determinant is zero, then the skew-Hermitian matrix \(A-x_{1,1}E\) has a nonzero real eigenvalue \(\lambda=-x_{1,1}\). However, it is well known that all eigenvalues of a skew-Hermitian matrix are purely imaginary complex numbers, and it was assumed earlier that \(x_{1,1}\neq 0\). This contradiction shows that the Jacobian under consideration is nonzero. The desired local diffeomorphism in the case of \((B)\) is completely proved.
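Both computational claims above lend themselves to quick numerical spot-checks: the product \(q_{1}q_{2}\cdots q_{k}\) in formula (2.3) is unchanged by the \(\operatorname{Sp}(1)^{k-1}\) action (the inserted factors \(r_{i}^{-1}r_{i}\) cancel by associativity), and the determinant in case \((B)\) is nonzero whenever \(x_{1,1}\neq 0\), since it equals \(x_{1,1}^{m_{1}-2}\,(x_{1,1}^{2}+\cdots+x_{m_{1},1}^{2})\), a closed form one can verify by expansion. The sketch below, with quaternions as \((w,x,y,z)\) arrays, is our own illustration, not part of the paper:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qinv(q):
    """Quaternion inverse: conjugate divided by squared norm."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z]) / np.dot(q, q)

rng = np.random.default_rng(0)
q1, q2, q3 = (rng.normal(size=4) for _ in range(3))
r1, r2 = (v / np.linalg.norm(v) for v in (rng.normal(size=4), rng.normal(size=4)))

# Sp(1)^{k-1} action for k = 3: (q1 r1^{-1}, r1 q2 r2^{-1}, r2 q3).
before = qmul(qmul(q1, q2), q3)
after = qmul(qmul(qmul(q1, qinv(r1)), qmul(qmul(r1, q2), qinv(r2))), qmul(r2, q3))

# Matrix from case (B): first column x, first row (x1, -x2, ..., -xm),
# x1 on the remaining diagonal; claimed det = x1^(m-2) * |x|^2 != 0.
x = rng.normal(size=6)
x[0] = 1.3  # the case (B) assumption: x_{1,1} != 0
m = len(x)
A = np.zeros((m, m))
A[:, 0] = x
A[0, 1:] = -x[1:]
A[np.arange(1, m), np.arange(1, m)] = x[0]
det = np.linalg.det(A)
```

The product invariance is exact up to floating-point roundoff, and the determinant stays bounded away from zero exactly as the eigenvalue argument in the proof predicts.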
**Remark 2.1**.: Similarly to the proof above, one can show that the maps (2.1) and (2.2) to the orbit space from Theorem 2 (complex tori) and Theorem 3 (quaternionic tori) are smooth and are submersions outside the degeneration locus. This means that in the open set of free orbits, the differentials of these maps have the maximal possible rank, equal to the dimension of the orbit space. In the work [10] of the second author, it was shown that the number \(k-1\) of commuting involutions on the product of spheres \(S^{m_{1}}\times\ldots\times S^{m_{k}}\) is the minimal possible if one wants to obtain a rational homology sphere as the orbit space. We pose the following problem related to this fact.

**Problem 1**.: _Is it true that, for an arbitrary smooth action of the complex torus \(T^{k-2}\) on the product \(S^{m_{1}}\times\ldots\times S^{m_{k}}\) of spheres of dimensions \(\geqslant 2\), the corresponding orbit space is not a (rational homology) sphere? Is it true that for an arbitrary smooth action of the quaternionic torus \(\operatorname{Sp}(1)^{k-2}\) on the product \(S^{m_{1}}\times\cdots\times S^{m_{k}}\) of spheres of dimensions \(\geqslant 4\) the corresponding orbit space is not a (rational homology) sphere?_

This problem seems hard. It is nontrivial even in the simplest case \(k=3\).

## 3. General groups

Consider nonempty compact Hausdorff spaces \(X_{1},X_{2},\ldots,X_{k}\), \(k\geqslant 2\), and an arbitrary compact Hausdorff group \(G\). Consider the following (left) continuous action of the group \(G^{k-1}\) on the product of joins \(\prod_{i=1}^{k}(X_{i}*G)\). Namely, the element \((r_{1},r_{2},\ldots,r_{k-1})\in G^{k-1}\) sends a point \[(x_{1},1-t_{1},q_{1},t_{1};x_{2},1-t_{2},q_{2},t_{2};\ldots;x_{k},1-t_{k},q_{k},t_{k})\] to the point \[(x_{1},1-t_{1},q_{1}r_{1}^{-1},t_{1};x_{2},1-t_{2},r_{1}q_{2}r_{2}^{-1},t_{2};\ldots;x_{k},1-t_{k},r_{k-1}q_{k},t_{k}).
\tag{3.1}\] **Theorem 4**.: _The orbit space \(\prod_{i=1}^{k}(X_{i}*G)/G^{k-1}\) is homeomorphic to the join \(X_{1}*X_{2}*\ldots*X_{k}*G\). The canonical projection onto the space of orbits is given by the formula:_ \[(x_{1},1-t_{1},q_{1},t_{1};x_{2},1-t_{2},q_{2},t_{2};\ldots;x_{k},1-t_{k},q_{k},t_{k})\mapsto\\ \mapsto(x_{1},1-t_{1};x_{2},1-t_{2};\ldots;x_{k},1-t_{k};q_{1}q_{2}\cdots q_{k},t_{1}t_{2}\cdots t_{k})/A, \tag{3.2}\] _where \(A=t_{1}\cdots t_{k}+\sum_{i=1}^{k}(1-t_{i})\) is the normalizing factor._ Proof.: Recall that the join of spaces \(Y_{1},\ldots,Y_{m}\) is the identification space \[Y_{1}\times\cdots\times Y_{m}\times\Delta^{m-1}/\!\!\sim,\] where \(\Delta^{m-1}\) is the standard simplex with barycentric coordinates \((s_{1},\ldots,s_{m})\), \(s_{i}\geqslant 0\), \(\sum s_{i}=1\), and the equivalence relation \(\sim\) is generated by the conditions \[(y_{1},\ldots,y_{l},\ldots,y_{m},(s_{1},\ldots,s_{m}))\sim(y_{1},\ldots,y_{l}^{\prime},\ldots,y_{m},(s_{1},\ldots,s_{m})),\ \text{if}\ s_{l}=0\] for any \(l\in[m]\). Notice that \(t_{i}\in[0,1]\) in (3.2), therefore \(A>0\). Therefore, all coefficients \(\frac{1-t_{i}}{A}\), \(i\in[k]\), and \(\frac{t_{1}\cdots t_{k}}{A}\) are nonnegative and sum to \(1\) due to the choice of the normalizing factor \(A\). Hence formula (3.2) provides a well-defined continuous map of the form \[h\colon\prod\nolimits_{i=1}^{k}(X_{i}\times\Delta^{1}\times G)\to X_{1}\times\cdots\times X_{k}\times G\times\Delta^{k}.\] It is easily seen that the map \(h\) is surjective. Let us check that \(h\) descends to a well-defined map of the quotient spaces, the joins from the statement of the theorem. If \(t_{i}=1\) for some \(i\in[k]\), then the corresponding factor \(X_{i}\) collapses to one point both in the space \(X_{i}*G\) and in the space \(X_{1}*\cdots*X_{k}*G\), so equivalent points of this kind are mapped to equivalent points.
If \(t_{i}=0\), then \(t_{1}\cdots t_{k}=0\), and, similarly, equivalent points are mapped to equivalent ones. Therefore, the formula (3.2) descends to a well-defined continuous map \[\tilde{h}\colon(X_{1}*G)\times(X_{2}*G)\times\ldots\times(X_{k}*G)/G^{k-1} \to X_{1}*X_{2}*\ldots*X_{k}*G.\] Since all the spaces appearing in this formula are Hausdorff compact, it suffices to show that the map \(\tilde{h}\) is bijective. The surjectivity of \(\tilde{h}\) follows from the surjectivity of \(h\). Let us prove that \(\tilde{h}\) is injective. Generally, the proof is similar to the proof of Theorem 3. Assume that \(\tilde{h}(x_{1},t_{1},q_{1},\ldots,x_{k},t_{k},q_{k})=\tilde{h}(x_{1}^{\prime}, t_{1}^{\prime},q_{1}^{\prime},\ldots,x_{k}^{\prime},t_{k}^{\prime},q_{k}^{\prime})\). Looking at the real parameters \(t_{1},\ldots,t_{k}\) we see that formula (3.2) defines a homeomorphism of the \(k\)-dimensional cube onto the \(k\)-dimensional simplex. Therefore the equalities \(t_{i}=t_{i}^{\prime}\) hold for all \(i\in[k]\). If \(0<t_{i}<1\) for all \(i\), then the assertion of injectivity reduces to the homeomorphism \(G^{k}/G^{k-1}\cong G\) given by the formula \([(q_{1},\ldots,q_{k})]\mapsto q_{1}\cdots q_{k}\). If \(t_{i}=1\) for some \(i\), then the fiber \(X_{i}\) collapses on both sides of the map. If \(t_{1}\cdots t_{k}=0\), then \(t_{i}=0\) for some \(i\in[k]\). This means that the \(i\)-th component \(G_{i}\) of the product \(G^{k}\) collapses in the target of \(\tilde{h}\). Taking the quotient of \(G^{k}\) simultaneously by \(G_{i}\) and the action of the subgroup \(G^{k-1}\) described by the formula (3.1), we see that the entire group \(G^{k}\) collapses into a point. Therefore, the identifications on the face of the simplex \(\{t_{1}\cdots t_{k}=0\}\) are the same in the space \(X_{1}*X_{2}*\ldots*X_{k}*G\) and in the space \((X_{1}*G)\times(X_{2}*G)\times\ldots\times(X_{k}*G)/G^{k-1}\). Injectivity is proved. 
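The normalization argument at the start of the proof can be spot-checked numerically: for any \((t_{1},\ldots,t_{k})\in[0,1]^{k}\), the factor \(A=t_{1}\cdots t_{k}+\sum_{i}(1-t_{i})\) is positive, and the resulting \(k+1\) coefficients are nonnegative and sum to \(1\), so they are valid barycentric coordinates. A small sketch of ours, with arbitrary sample values:

```python
import numpy as np

def join_coefficients(t):
    """Barycentric coefficients from formula (3.2): (1 - t_i)/A for each
    factor X_i and (t_1 ... t_k)/A for the group coordinate, where
    A = t_1 ... t_k + sum_i (1 - t_i) is the normalizing factor."""
    t = np.asarray(t, dtype=float)
    prod = float(np.prod(t))
    a = prod + float(np.sum(1.0 - t))
    return np.append((1.0 - t) / a, prod / a), a

coeffs, a = join_coefficients([0.3, 0.9, 0.5])
# Edge case: all t_i = 1 gives A = 1, with all weight on the group coordinate.
coeffs_edge, a_edge = join_coefficients([1.0, 1.0, 1.0])
```

Note that \(A\) can vanish only if every \(1-t_{i}=0\) and \(t_{1}\cdots t_{k}=0\) simultaneously, which is impossible; this is exactly why the map in (3.2) is well defined on all of \([0,1]^{k}\).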
Note that even the extreme, trivial cases of Theorem 4 turn out to be informative. **Example 3.1**.: Let us apply Theorem 4 to the trivial group \(G=\{1\}\). We get the standard topological fact: \[\operatorname{Cone}X_{1}\times\operatorname{Cone}X_{2}\times\cdots\times \operatorname{Cone}X_{k}\cong\operatorname{Cone}(X_{1}*X_{2}*\ldots*X_{k}).\] **Example 3.2**.: Let us apply Theorem 4 to the one-point topological spaces \(X_{i}=*\), \(i\in[k]\). We get a homeomorphism: \[(\operatorname{Cone}G)^{k}/G^{k-1}\cong\underbrace{\operatorname{Cone}\cdots \operatorname{Cone}}_{k}G\cong\Delta^{k-1}*G.\] **Remark 3.3**.: The topological part of Theorems 1, 2 and 3 is a special case of Theorem 4 if we let \(G\) be a torus: real \(\mathbb{Z}_{2}=O(1)\), complex \(T^{1}\cong U(1)\), or quaternionic \(\operatorname{Sp}(1)\), respectively; and \(X_{i}\) -- spheres of arbitrary dimensions. Note, however, that the above theorems contain stronger assertions. Since both the spaces \(X_{i}\) and the group \(G\) are spheres, their joins are also spheres, and therefore have a natural smooth structure. The question of whether the natural projection onto the orbit space is smooth seems important. For this reason we spent some time proving smoothness in Section 2. Example 3.2 has a useful interpretation in the real, complex, and quaternionic cases. Let \(\mathbb{K}\) denote \(\mathbb{R}\), \(\mathbb{C}\), or \(\mathbb{H}\), the number \(d(\mathbb{K})\) be equal to \(1\), \(2\), or \(4\), respectively, and \(S(\mathbb{K})\) denote the compact Lie group of numbers having norm \(1\) in the corresponding division algebra. Thus \(S(\mathbb{K})\) is \(\mathbb{Z}_{2}\), \(U(1)\), or \(\operatorname{Sp}(1)\). Topologically, \(S(\mathbb{K})\) is a sphere of dimension \(d(\mathbb{K})-1\). 
The group \(S(\mathbb{K})^{k-1}\) acts linearly on \(\mathbb{K}^{k}\cong\mathbb{R}^{d(\mathbb{K})k}\) by the formula \[(g_{1},\ldots,g_{k-1})(q_{1},\ldots,q_{k})=(q_{1}g_{1}^{-1},g_{1}q_{2}g_{2}^{-1},\ldots,g_{k-2}q_{k-1}g_{k-1}^{-1},g_{k-1}q_{k}). \tag{3.3}\] **Proposition 3.4**.: _The orbit space \(\mathbb{K}^{k}/S(\mathbb{K})^{k-1}\) is homeomorphic to the space \(\mathbb{R}^{d(\mathbb{K})+k-1}\)._ Proof.: **First version of the proof.** The space \(\mathbb{K}\) is equivariantly diffeomorphic to the open unit ball in \(\mathbb{K}\), which is the interior of the cone \(\operatorname{Cone}S(\mathbb{K})\). Apply the homeomorphism from Example 3.2 and pass to the interiors of the spaces. **Second version of the proof.** Consider the coordinate-wise action of \(S(\mathbb{K})^{k}\) on \(\mathbb{K}^{k}\). We have \(\mathbb{K}^{k}/S(\mathbb{K})^{k}\cong\mathbb{R}^{k}_{\geqslant 0}\). The projection to the quotient space has a natural section, so the space \(\mathbb{K}^{k}\) can be represented as the identification space \[\mathbb{K}^{k}\cong\mathbb{R}^{k}_{\geqslant 0}\times S(\mathbb{K})^{k}/\!\!\sim, \tag{3.4}\] similarly to how quasitoric manifolds are defined in toric topology [7]. Modding out the second factor in the construction (3.4) by the torus action (3.3), we obtain \[\mathbb{K}^{k}/S(\mathbb{K})^{k-1}\cong\mathbb{R}^{k}_{\geqslant 0}\times S(\mathbb{K})/\!\!\sim. \tag{3.5}\] Note that \(\mathbb{R}^{k}_{\geqslant 0}\) is homeomorphic to the half-space \(\mathbb{R}^{k-1}\times\mathbb{R}_{\geqslant 0}\). A careful analysis of the stabilizers shows that the relation \(\sim\) in the formula (3.5) collapses the component \(S(\mathbb{K})\) into a point if and only if the corresponding point from \(\mathbb{R}^{k-1}\times\mathbb{R}_{\geqslant 0}\) belongs to the boundary \(\mathbb{R}^{k-1}\times\{0\}\), see [3] for details. 
Hence we get \[\mathbb{K}^{k}/S(\mathbb{K})^{k-1}\cong\mathbb{R}^{k-1}\times(\mathbb{R}_{ \geqslant 0}\times S(\mathbb{K})/\!\!\sim)\cong\mathbb{R}^{k-1}\times \mathbb{K}\] which completes the proof. Remark 3.5.: The second version of the proof in the general case is completely analogous to the complex case considered in [3, Lem.2.11], see also [12, Thm.3.6]. The real case was studied in detail by Mikhailova [11, Thm.2.2]. Proposition 3.4 can be understood as a local result. The global consequence follows. Corollary 3.6.: _Consider a smooth action of a torus \(G\) (real, complex, or quaternionic) on a closed smooth manifold \(X\). Assume that the linearized action on the normal space to each orbit is equivalent to a representation of the form (3.3) multiplied by a trivial representation. Then the orbit space \(X/G\) is a topological manifold._ In the case of a complex torus, many examples of such actions with isolated fixed points were studied in the works of the first author [3, 4, 5]. Actions of real tori whose orbit spaces are manifolds were studied by Gorchakov [8]. There is also a series of results worth mentioning in this context: \[\mathbb{C}P^{2}/\operatorname{conj}\cong S^{4},\qquad\mathbb{H}P^{2}/U(1) \cong S^{7},\qquad\mathbb{O}P^{2}/\operatorname{Sp}(1)\cong S^{13}. \tag{3.6}\] Here the first homeomorphism is the classical Kuiper-Massey theorem (Arnold [1] attributes this result to Pontryagin), the second homeomorphism \(\mathbb{H}P^{2}/U(1)\cong S^{7}\) is the result of Arnold himself [1, Ex.4], and the third one is due to Atiyah-Berndt [2]. In these examples, the set of fixed points is not discrete, but the linearization of the group action on a normal space to the fixed points' submanifold is equivalent to the linear representations of \(\mathbb{Z}_{2}\) on \(\mathbb{R}^{2}\), \(U(1)\) on \(\mathbb{C}^{2}=\mathbb{R}^{4}\), and \(\operatorname{Sp}(1)\) on \(\mathbb{H}^{2}=\mathbb{R}^{8}\) respectively. 
Corollary 3.6 explains why the orbit spaces in all cases (3.6) are manifolds, although it by no means explains why the orbit spaces are homeomorphic to spheres. In view of Theorems 1, 2, 3 and the homeomorphisms (3.6), a natural question arises. **Problem 2**.: _Describe a class of actions of the groups \(\mathbb{Z}_{2}^{k}\), \(T^{k}\), and \(\operatorname{Sp}(1)^{k}\) on smooth manifolds with orbit spaces homeomorphic to spheres which is general enough to include both the products of spheres and the manifolds \(\mathbb{C}P^{2}\), \(\mathbb{H}P^{2}\), and \(\mathbb{O}P^{2}\)._
2309.08335
A Multi-Companion Method to Periodically Integrated Autoregressive Models
There has been an enormous interest in analysing and modelling periodic time series. The research on periodically integrated autoregressive (PIAR) models which capture the periodic structure and the presence of unit roots is widely applied in environmental, financial and energy areas. In this paper, we propose a multi-companion method which uses the eigen information of the multi-companion matrix in the multi-companion representation of PIAR models. The method enables the estimation and forecasting of PIAR models with a single, two and multiple unit roots. We show that the parameters of PIAR models can be represented in terms of the eigen information of the multi-companion matrix. Consequently, the estimation can be conducted using the eigen information, rather than directly estimating the parameters of PIAR models. A Monte Carlo experiment and an application are provided to illustrate the robustness and effectiveness of the multi-companion method.
Yueyun Zhu, Georgi N. Boshnakov
2023-09-15T11:45:24Z
http://arxiv.org/abs/2309.08335v1
# A Multi-Companion Method to Periodically Integrated Autoregressive Models ###### Abstract There has been an enormous interest in analysing and modelling periodic time series. The research on periodically integrated autoregressive (PIAR) models which capture the periodic structure and the presence of unit roots is widely applied in environmental, financial and energy areas. In this paper, we propose a multi-companion method which uses the eigen information of the multi-companion matrix in the multi-companion representation of PIAR models. The method enables the estimation and forecasting of PIAR models with a single, two and multiple unit roots. We show that the parameters of PIAR models can be represented in terms of the eigen information of the multi-companion matrix. Consequently, the estimation can be conducted using the eigen information, rather than directly estimating the parameters of PIAR models. A Monte Carlo experiment and an application are provided to illustrate the robustness and effectiveness of the multi-companion method. Periodic integration PIAR model Multi-companion matrices Unit roots ## 1 Introduction The presence of strong periodicity and seasonal variations in financial, environmental and energy time series has inspired a critical area of research in periodic time series analysis. The analysis of periodic time series can be traced back as early as Hannan (1955) and Gladyshev (1961), where the definition of periodic correlation and basic properties of periodic correlated series are given. Two books, see Franses et al. (1996) and Franses and Paap (2004), give a comprehensive introduction to quarterly periodic time series models, including model representations, model selection, parameter estimation and forecasting. Periodic autoregressive (PAR) models have gained significant attention in recent years due to their ability to capture the periodic structure of the time series. 
The PAR models extend the conventional Autoregressive (AR) models by allowing the autoregressive parameters to change with seasons, and they are applied to analyse periodically stationary series. An early reference, Pagano et al. (1978), introduced a periodic Yule-Walker method to estimate periodic autoregressive parameters, and subsequent work by Trottman (1979) derived several key properties of the autocovariances of PAR models and their asymptotic behaviour. Other estimation methods, such as maximum likelihood estimation (see Vecchia, 1985) and weighted least squares (see Basawa and Lund, 2001), have also been introduced to estimate PAR models. Later on, the topic of parsimonious PAR models became popular because such models use a minimal number of parameters to effectively capture the periodic behaviour of time series. The idea of parsimonious PAR models was initially proposed by Jones and Brelsford (1967), and later references can be found in Lund et al. (2006), Anderson et al. (2007), Tesfaye et al. (2011) and Battaglia et al. (2020). Despite the popularity of PAR models, their estimation and prediction can be challenging when the time series is periodically non-stationary. Therefore, periodically integrated autoregressive (PIAR) models were developed to deal with periodic non-stationarity, incorporating the concept of periodic integration and the presence of unit roots in the series. Before delving into periodic integration, we first illustrate (ordinary) integration for the non-periodic case. For non-periodic series, the concept of integration was introduced to handle the presence of unit roots in non-stationary series. A time series is said to be integrated of order \(b\), denoted as I(\(b\)), if it can be transformed into a stationary series by taking the \(b\)-th difference, while the first \((b-1)\) differences are non-stationary. By convention, the notation I(\(0\)) describes a stationary series. 
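As a quick numerical illustration of ordinary integration (a sketch, not from the paper): cumulating stationary noise once produces an I(1) series, cumulating twice an I(2) series, and the corresponding number of differences restores the stationary noise.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(200)     # stationary noise, i.e. an I(0) series

x = np.cumsum(eps)                 # random walk: I(1), a single unit root
assert np.allclose(np.diff(x), eps[1:])       # one difference restores I(0)

y = np.cumsum(x)                   # cumulated twice: an I(2) series
assert np.allclose(np.diff(y, n=2), eps[2:])  # two differences restore I(0)
```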
Based on the concept of integration, autoregressive integrated moving average (ARIMA) models were then developed to link autoregression, moving average and integration together in order to analyse non-periodic time series that exhibit non-stationarity. Similarly to the non-periodic case, the concept of periodic integration was introduced to handle the presence of unit roots in periodically non-stationary series. One of the earliest references is Osborn et al. (1988), who discussed the case of a single unit root in a periodic series and gave the definition of periodic integration of order one. This work was followed by Boswijk and Franses (1995), who proposed three tests for checking quarterly PIAR models with a single unit root. Boswijk and Franses (1996) proposed a class of likelihood ratio tests for a single unit root in PIAR models and derived their asymptotic distributions under the null hypothesis that there is a single unit root. Boswijk et al. (1997) extended the previous studies by proposing a new test which can be employed to check quarterly PIAR models with multiple unit roots. The model selection and forecasting issues of PIAR models can be found in Franses and Paap (1996). In this paper, we propose an innovative method, the multi-companion method, which is based on the eigen information (eigenvectors and eigenvalues) of the multi-companion matrix in the multi-companion representation of PIAR models. The multi-companion matrix can be viewed as a generalization of the companion matrix and was first introduced by Boshnakov (2002). Boshnakov (2002) derived several key properties of multi-companion matrices, which can be further applied to time series analysis. Due to these special properties, multi-companion matrices have been a topic of interest in recent years. For instance, Boshnakov and Iqelan (2009) used the eigen information of multi-companion matrices to generate periodic autoregressive series. 
Also, Boshnakov and Iqelan (2009) listed the algorithms for generating multi-companion matrices by using their eigen information, and these algorithms prove to be highly beneficial in simulation studies. An R (R Core Team, 2023) implementation of multi-companion matrices, including spectral parametrisation, is provided by Boshnakov (2020). Further functionality, specific for periodic models, is included in package pcts (Boshnakov, 2021). It is worthwhile to mention that the previous studies, such as Boswijk and Franses (1996) and Boswijk et al. (1997), mainly focused on quarterly PIAR models, and their methods may become inefficient when extending the quarterly period to general cases. In contrast, our multi-companion method proposed in this paper provides a more flexible and efficient way to analyse PIAR models with general periods. The paper is organized as follows. Section 2 reviews three different model representations for both PAR and PIAR models. Section 3 proposes the multi-companion method, which decomposes the multi-companion matrix of the multi-companion representation into its Jordan canonical form, and the roles of similarity and Jordan matrices are illustrated respectively. Section 4 applies the multi-companion method to analyse PIAR models with a single, two and multiple unit roots, and proposes an estimation method for PIAR models. In Section 5, Monte Carlo simulations are provided to verify the estimation method introduced in Section 4. Section 6 gives an application of PIAR models to forecast future values of U.S. monthly electricity end use. It is useful to introduce some notation before going into detail. We use \(d\) for denoting the period of the time series, \(s\) for denoting the seasons such that \(s\in\{1,\dots,d\}\). 
Any time \(t\) can be represented equivalently by the year \(T\) and the season \(s\), such that \([T,\,s]\equiv t\equiv(T-1)d+s\), and therefore we use the notation \([T,s]\) to refer to time \(t\) at year \(T\) and season \(s\) (see Boshnakov and Iqelan, 2009). The notation PAR(\(p\)) is used to describe a periodic autoregression of order \(p\). The notation \(\text{PI}_{b}\text{AR}(p)\) is used to describe a periodically integrated autoregressive model with periodic integration order \(b\) and periodic autoregression order \(p\). We may omit the periodic integration order and write PIAR(\(p\)) when the periodic integration order is unknown.

## 2 Models

Let \(\{X_{t},t=1,2,\dots\}\) be a periodic time series with period \(d\). Gladyshev (1961) defined a process \(\{X_{t}\}\) to be periodically correlated (periodically stationary) with period \(d\) if \(\mathbb{E}[X_{t}]=\mathbb{E}[X_{t+d}]\) and \(\operatorname{Cov}(X_{\tau+d},X_{t+d})=\operatorname{Cov}(X_{\tau},X_{t})\) for all integers \(\tau\) and \(t\). Jones and Brelsford (1967) is probably the first study of periodic autoregressive (PAR) models. Pagano et al. (1978) obtained asymptotic properties of periodic Yule-Walker estimators for PAR models. Osborn et al. (1988) proposed a concept of periodic integration for the case when the series exhibits stochastic trends and is therefore no longer periodically stationary. Periodically integrated autoregressive models for such periodically integrated series have been studied by Boswijk and Franses (1996), Franses et al. (1996), and Franses and Paap (2004). In this section, we review various representations of both PAR and PIAR models. Importantly, a multi-companion representation is given at the end of this section, which is essential for further analysis. 
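Before turning to the model representations, the time-index convention \(t\equiv[T,s]\equiv(T-1)d+s\) introduced above can be sketched as a pair of helper functions (the names are illustrative, not from the paper):

```python
def to_year_season(t, d):
    """Map 1-based time t = (T-1)*d + s to the pair (T, s), with s in 1..d."""
    T, s = divmod(t - 1, d)
    return T + 1, s + 1

def to_time(T, s, d):
    """Inverse map: year T and season s back to time t."""
    return (T - 1) * d + s

d = 4                                   # e.g. quarterly data
assert to_year_season(1, d) == (1, 1)   # first observation: year 1, season 1
assert to_year_season(4, d) == (1, 4)   # t = 4 is season d of year 1
assert to_year_season(5, d) == (2, 1)   # t = 5 starts year 2
assert all(to_time(*to_year_season(t, d), d) == t for t in range(1, 101))
```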
### Univariate representation

We consider univariate periodic time series \(\{X_{t}\}\) with a period of \(d\) that can be transformed to white noise using a periodic filter: \[X_{t}-\sum_{i=1}^{p}\phi_{i,s}X_{t-i}=\varepsilon_{t},\qquad t=1,2,\ldots, \tag{1}\] where \(p\) is the periodic autoregressive order; \(s=s(t)\in\{1,2,\ldots,d\}\) is a function of time \(t\) which returns the corresponding season index at time \(t\), such that \(s=d\) when \(t\bmod d=0\), and otherwise \(s=t\bmod d\); \(\{\phi_{i,s},i=1,\ldots,p\}\) are seasonally varying parameters, periodic in \(s\) with period \(d\), such that \(\phi_{i,s}=\phi_{i,s+d}\) for \(s=1,\ldots,d\); \(\varepsilon_{t}\) is a periodic white noise process with \(\mathbb{E}[\varepsilon_{t}]=0\), \(\mathbb{E}[\varepsilon_{t}\varepsilon_{\tau}]=0\) when \(t\neq\tau\), and \(\mathrm{Var}(\varepsilon_{t})=\sigma_{t}^{2}=\sigma_{t+d}^{2}\). The last property indicates that the variance of the periodic white noise is \(d\)-periodic, so it is sufficient to consider \(\sigma_{s}^{2}\) for \(s=1,\ldots,d\) only. The notation \(\varepsilon_{t}\sim\text{PWN}(0,\sigma_{s}^{2})\) is used to describe the periodic white noise with variance \(\sigma_{s}^{2}\), see Boshnakov (1996) for details. The left-hand side of Eq (1) represents a filter operation which is fully described by the coefficients \(\left\{\phi_{i,s},i=1,\ldots,p\right\}_{s=1}^{d}\). Let \(\phi_{p,s}(z)=1-\phi_{1,s}z-\cdots-\phi_{p,s}z^{p}\) be the polynomial associated with the coefficients for the \(s\)th season, \(s=1,\ldots,d\). The set of polynomials \(\left\{\phi_{p,s}(z)\right\}_{s=1}^{d}\) can be used as an alternative way to specify the filter. The periodic filter of Eq (1) extends the conventional filter \(\phi_{p}(L)=1-\phi_{1}L-\cdots-\phi_{p}L^{p}\) by allowing the parameters to change with seasons. Unlike non-periodic filters, commutativity does not hold, in general, for periodic filters. 
To demonstrate this, consider two periodic filters \(\alpha_{1,s}(L)=1-\alpha_{1,s}L\) and \(\beta_{1,s}(L)=1-\beta_{1,s}L\). Let \(\left\{\alpha_{1,s}(L)\beta_{1,s}(L)\right\}_{s=1}^{d}\) be the filter corresponding to first applying \(\left\{\beta_{1,s}(L)\right\}_{s=1}^{d}\) to the series, then \(\left\{\alpha_{1,s}(L)\right\}_{s=1}^{d}\) to the filtered series, and similarly \(\left\{\beta_{1,s}(L)\alpha_{1,s}(L)\right\}_{s=1}^{d}\) for the commuted order. Consider also the filter \(\left\{\gamma_{2,s}(L)\right\}_{s=1}^{d}\), where \(\gamma_{2,s}(L)=\alpha_{1,s}(L)\beta_{1,s}(L)=1-(\alpha_{1,s}+\beta_{1,s})L+\alpha_{1,s}\beta_{1,s}L^{2}\) is the algebraic product of the polynomials for each season. Table 1 shows the result of applying the filters \(\alpha_{1,s}(L)\beta_{1,s}(L)\), \(\beta_{1,s}(L)\alpha_{1,s}(L)\) and \(\gamma_{2,s}(L)\) to \(\left\{X_{t}\right\}\). It can be seen from the table that the coefficients of \(X_{t-2}\) for the three cases are, in general, different. In particular, the sequential application of periodic filters is non-commutative and the result is not obtained by simple multiplication of the corresponding polynomials. Note that the univariate representation in Eq (1) can be used to describe both PAR and PIAR models. A useful way to determine whether the model is periodic autoregressive or periodically integrated is to study the set of polynomials \(\left\{\phi_{p,s}(z)\right\}_{s=1}^{d}\). Note that, unlike the non-periodic case, this does not amount to the study of the roots of the individual polynomials. Our study is based on the vector of seasons representation introduced in the following section.

### Vector of seasons representation

An alternative way to study univariate periodic time series is to convert them into multivariate ones by stacking the observations in each year in a vector. 
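Before developing the vector form, the non-commutativity of periodic filters described above can be checked numerically. A minimal sketch with arbitrary coefficients (the values and helper names are illustrative, not from the paper):

```python
import numpy as np

d = 2
alpha = np.array([0.3, 0.7])    # alpha_{1,s} for seasons s = 1, 2 (arbitrary)
beta = np.array([0.5, 0.2])     # beta_{1,s}

def season(t):
    """Season of 1-based time t."""
    return d if t % d == 0 else t % d

def apply_filter(x, a):
    """y_t = x_t - a_{s(t)} x_{t-1}; y_1 is NaN (no predecessor)."""
    y = np.full(len(x), np.nan)
    for t in range(2, len(x) + 1):
        y[t - 1] = x[t - 1] - a[season(t) - 1] * x[t - 2]
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(12)

ab = apply_filter(apply_filter(x, beta), alpha)   # first beta, then alpha
ba = apply_filter(apply_filter(x, alpha), beta)   # first alpha, then beta
assert not np.allclose(ab[2:], ba[2:])            # the two orders disagree

# Match the first row of Table 1 at t = 4 (season s = 2, previous season 1):
t, s, sm1 = 4, season(4), season(3)
expected = (x[t - 1] - (alpha[s - 1] + beta[s - 1]) * x[t - 2]
            + alpha[s - 1] * beta[sm1 - 1] * x[t - 3])
assert np.isclose(ab[t - 1], expected)
```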
\begin{table} \begin{tabular}{l l} \hline \hline Filtering order & Result \\ \hline first \(\beta_{1,s}(L)\), then \(\alpha_{1,s}(L)\) & \(X_{t}-(\alpha_{1,s}+\beta_{1,s})X_{t-1}+\alpha_{1,s}\beta_{1,s-1}X_{t-2}\) \\ first \(\alpha_{1,s}(L)\), then \(\beta_{1,s}(L)\) & \(X_{t}-(\alpha_{1,s}+\beta_{1,s})X_{t-1}+\beta_{1,s}\alpha_{1,s-1}X_{t-2}\) \\ \(\gamma_{2,s}(L)\) & \(X_{t}-(\alpha_{1,s}+\beta_{1,s})X_{t-1}+\beta_{1,s}\alpha_{1,s}X_{t-2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: An example of multiplication of periodic filters

To this end, let \(X_{[T,s]}\) be the observation for season \(s\) of year \(T\) and \(\mathbf{X}_{T}=\left(X_{[T,d]},\ldots,X_{[T,1]}\right)^{{}^{\prime}}\). A multivariate representation of \(\{X_{t}\}\) is \(\{\mathbf{X}_{T},\ T=1,2,\ldots\}\). This idea was proposed originally by Gladyshev (1961) for the case of periodically stationary time series. Franses (1994) used this representation extensively for the study of (mostly) quarterly time series. He introduced the convenient term _vector of quarters_ (VQ) representation of periodic time series for \(d=4\). Boshnakov and Iqelan (2009) proposed the term _vector of seasons_ (VS) for general values of \(d\). Note that for each fixed \(s\in[1,d]\) the subseries \(\left\{X_{[T,s]},T=1,2,\dots\right\}\) is the seasonal component (corresponding to season \(s\)) of the univariate periodic time series. The VS representation of the model given by Eq (1) is \[\Phi_{0}\mathbf{X}_{T}=\sum_{i=1}^{P}\Phi_{i}\mathbf{X}_{T-i}+\varepsilon_{T},\;\;\;T=1,2,\dots, \tag{2}\] where \(\varepsilon_{T}=\left(\varepsilon_{[T,d]},\dots,\varepsilon_{[T,1]}\right)^{{}^{\prime}}\) is the vector of seasons form of \(\varepsilon_{t}\); \(P=1+[(p-1)/d]\), where \([\cdot]\) denotes the integer part; \(\Phi_{0}\) and \(\Phi_{i}\) are \[(\Phi_{0})_{jk} =\begin{cases}1&j=k\\ 0&j>k\\ -\phi_{k-j,d-j+1}&j<k\end{cases},\] \[(\Phi_{i})_{jk} =\phi_{k+di-j,d-j+1},\;\;\;\;i=1,\dots,P,\] for \(j,k=1,\dots,d\). 
Notice that the notation \((M)_{jk}\) means the \((j,k)\)-th element of matrix \(M\). In order to distinguish between a PAR and a PIAR model, we consider the characteristic equation of Eq (2): \[|\Phi(z)|=|\Phi_{0}-\Phi_{1}z-\dots-\Phi_{P}z^{P}|=0.\] When the roots of the characteristic equation are outside the unit circle, the VS process \(\left\{\mathbf{X}_{T}\right\}\) in Eq (2) is stationary, the corresponding univariate process \(\left\{X_{t}\right\}\) is periodically stationary, and Eq (1) is a periodic autoregression of order \(p\), namely PAR\((p)\). In contrast, when there is at least one root on the unit circle (called a unit root), \(\left\{\mathbf{X}_{T}\right\}\) in Eq (2) is integrated, the corresponding univariate process \(\left\{X_{t}\right\}\) is periodically integrated, and Eq (1) is a periodically integrated autoregression of order \(p\), namely PIAR\((p)\). Previous studies, see Franses (1996), Franses and Paap (2004) and Franses and Van Dijk (2005), provided both theoretical and empirical analyses of forecasts for quarterly periodic models using the VQ representation. Their methods can be extended to general cases, and here we derive explicit expressions for \(H\)-year ahead forecasts (with \(H\geq 1\)) and forecast error variances for both PAR\((p)\) and PIAR\((p)\) models with \(p\leq d\). Let \(\left\{X_{t},t=1,2,\dots,n\right\}\) be a periodic time series with period \(d\) which is represented in Eq (1), and let \(\left\{\mathbf{X}_{T},T=1,2,\dots,N\right\}\) be the corresponding VS process, where \(N=n/d\) is the final year within the observations. The \(H\)-year ahead forecasts are generated from year \(N\) onwards, denoted by \(\hat{\mathbf{X}}_{N+H}\). 
Based on the VS representation in Eq (2), we derive the \(H\)-year ahead forecast, forecast error and forecast error variance: \[\hat{\mathbf{X}}_{N+H} =(\Phi_{0}^{-1}\Phi_{1})^{H}\mathbf{X}_{N},\] \[\mathbf{X}_{N+H}-\hat{\mathbf{X}}_{N+H}=\sum_{h=0}^{H-1}\left[(\Phi_{0}^{-1}\Phi_{1})^{h}\Phi_{0}^{-1}\right]\varepsilon_{N+H-h},\] \[\mathbb{E}[(\mathbf{X}_{N+H}-\hat{\mathbf{X}}_{N+H})(\mathbf{X}_{N+H}-\hat{\mathbf{X}}_{N+H})^{{}^{\prime}}]=\sum_{h=0}^{H-1}(\Phi_{0}^{-1}\Phi_{1})^{h}\Phi_{0}^{-1}\Sigma_{\varepsilon}\left[(\Phi_{0}^{-1}\Phi_{1})^{h}\Phi_{0}^{-1}\right]^{{}^{\prime}},\] where \(\Sigma_{\varepsilon}=\text{diag}(\sigma_{d}^{2},\dots,\sigma_{1}^{2})\), and \((\Phi_{0}^{-1}\Phi_{1})^{0}\) is defined to be an identity matrix. Franses (1996) proved that for a quarterly PIAR\((1)\) model, the matrix \(\Phi_{0}^{-1}\Phi_{1}\) is idempotent, i.e. \((\Phi_{0}^{-1}\Phi_{1})^{h}=\Phi_{0}^{-1}\Phi_{1}\) for \(h=1,2,\dots\). In fact, this property holds for PIAR\((1)\) models with any period. Therefore, the \(H\)-year ahead forecast of PIAR\((1)\) models remains the same, i.e. \(\hat{\mathbf{X}}_{N+H}=(\Phi_{0}^{-1}\Phi_{1})\mathbf{X}_{N}\) for \(H=1,2,\dots\), and the corresponding forecast error variance reduces to \((H-1)(\Phi_{0}^{-1}\Phi_{1})\Phi_{0}^{-1}\Sigma_{\varepsilon}\left[(\Phi_{0}^{-1}\Phi_{1})\Phi_{0}^{-1}\right]^{{}^{\prime}}+\Phi_{0}^{-1}\Sigma_{\varepsilon}(\Phi_{0}^{-1})^{{}^{\prime}}\).

### Multi-companion representation

Lastly, we introduce a multi-companion representation of the model Eq (1). This representation is developed using the multi-companion matrix (see Boshnakov, 2002), and is particularly useful for our later analysis of periodic integration. 
Boshnakov and Iqelan (2009) proposed a Markov form of univariate periodic time series models in Eq (1), which is: \[\mathbf{X}_{t}=A_{t}\mathbf{X}_{t-1}+\mathbf{E}_{t},\ \ \ t=1,2,\ldots, \tag{3}\] where \(\mathbf{X}_{t}=\left(X_{t},X_{t-1},\ldots,X_{t-m+1}\right)^{{}^{\prime}}\) and \(\mathbf{E}_{t}=\left(\varepsilon_{t},0,\ldots,0\right)^{{}^{\prime}}\) are \(m\)-dimensional vectors with \(m=\max(p,d)\), and \(A_{t}\) is an \(m\times m\) companion matrix such that \[A_{t}=\begin{pmatrix}\phi_{1,t}&\phi_{2,t}&\ldots&\phi_{m-1,t}&\phi_{m,t}\\ 1&0&\ldots&0&0\\ 0&1&\ldots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\ldots&1&0\end{pmatrix},\] with \(\phi_{i,t}=0\) for \(i>p\). Eq (3) is also called the companion representation of Eq (1). Note that the companion matrix \(A_{t}\) in Eq (3) is \(d\)-periodic in time \(t\) such that \(A_{t}=A_{t+d}\); hence it is sufficient to consider \(A_{1},\ldots,A_{d}\) only. Given the \([T,s]\) notation introduced in Section 1, we replace time \(t\) with \([T,s]\), and the companion representation Eq (3) becomes \[\mathbf{X}_{[T,s]}=A_{s}\mathbf{X}_{[T,s-1]}+\mathbf{E}_{[T,s]},\ \ \ [T,s]=1,2,\ldots. \tag{4}\] Finally, by iterating Eq (4), we have the multi-companion representation (see Boshnakov and Iqelan, 2009) \[\mathbf{X}_{T}=\mathbf{F}_{d}\mathbf{X}_{T-1}+\mathbf{u}_{T},\ \ \ T=1,2,\ldots, \tag{5}\] where \(\mathbf{X}_{T}=\left(X_{[T,d]},\ldots,X_{[T,d]-m+1}\right)^{{}^{\prime}}\), \(\mathbf{F}_{d}=A_{d}A_{d-1}\cdots A_{1}\), and \(\mathbf{u}_{T}=\mathbf{E}_{[T,d]}+\sum_{i=1}^{d-1}\prod_{j=1}^{i}A_{d-j+1}\mathbf{E}_{[T,d]-i}\). Note that \(\mathbf{F}_{d}\) in Eq (5) is a product of \(d\) companion matrices, and is a multi-companion matrix with companion order \(d\) (see Boshnakov, 2002, Corollary 3.2). In addition, \(\mathbf{F}_{d}\) is a constant matrix independent of time \(t\). 
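The construction \(\mathbf{F}_{d}=A_{d}A_{d-1}\cdots A_{1}\) can be sketched numerically; the sketch below (arbitrary coefficients, assumed purely for illustration) also checks the multi-companion structure of the product: its last \(m-d\) rows are shifted identity rows.

```python
import numpy as np

def companion(phi, m):
    """m x m companion matrix: first row phi, shifted identity below."""
    A = np.zeros((m, m))
    A[0, :len(phi)] = phi
    A[1:, :-1] = np.eye(m - 1)
    return A

d, p = 2, 3                            # period d = 2, PAR order p = 3, so m = 3
m = max(p, d)
A1 = companion([0.4, 0.1, -0.2], m)    # season-1 coefficients (arbitrary)
A2 = companion([0.6, -0.3, 0.05], m)   # season-2 coefficients (arbitrary)

F = A2 @ A1                            # F_d = A_d ... A_1, here d = 2

# F_d is a d-companion matrix: its last m - d rows are shifted identity rows.
assert np.allclose(F[d:, :], np.eye(m)[:m - d, :])

# Its eigenvalues decide the model type: all moduli < 1 gives a PAR model,
# while an eigenvalue on the unit circle gives a periodically integrated model.
eigvals = np.linalg.eigvals(F)
```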
Importantly, the derivation of Eq (5) from Eq (1) shows that the multi-companion matrix \(\mathbf{F}_{d}\) in the multi-companion representation Eq (5) completely determines the properties of the corresponding periodic filter in the univariate representation Eq (1). Furthermore, the disturbance term \(\mathbf{u}_{T}\) in Eq (5) can be expressed as a linear combination of the periodic white noise term \(\varepsilon_{t}\) in Eq (1), such that \(\mathbf{u}_{T}=\Omega\varepsilon_{T}\) where \(\varepsilon_{T}=\left(\varepsilon_{[T,d]},\varepsilon_{[T,d]-1},\ldots,\varepsilon_{[T,d]-m+1}\right)^{{}^{\prime}}\) and \(\Omega\) is a matrix defined by \[\Omega=[e_{1},(A_{d})_{\bullet 1},(A_{d}A_{d-1})_{\bullet 1},\ldots,(A_{d}A_{d-1}\cdots A_{2})_{\bullet 1},0_{m\times(m-d)}], \tag{6}\] where \(e_{1}\) is a unit vector with its first component equal to 1 and all other components equal to 0, and the notation \(()_{\bullet 1}\) stands for the first column of a matrix. In particular, when \(m=d\), \(\Omega\) is an upper triangular matrix with main diagonal elements equal to one. It is interesting to investigate the characteristic equation of the multi-companion representation in Eq (5), which is \(|I-\mathbf{F}_{d}z|=0\), where \(I\) is an \(m\times m\) identity matrix. In fact, the roots of this characteristic equation are the reciprocals of the eigenvalues of the multi-companion matrix \(\mathbf{F}_{d}\). When the roots of the characteristic equation are all outside the unit circle, or equivalently, when all the eigenvalues of \(\mathbf{F}_{d}\) have moduli strictly less than one, Eq (5) is a periodic autoregression of order \(p\). In contrast, when there is at least one root of the characteristic equation on the unit circle, or equivalently, when \(\mathbf{F}_{d}\) has at least one unit eigenvalue, Eq (5) is a periodically integrated autoregression. 
This indicates that the study of the eigen information of the multi-companion matrix can be helpful for examining the properties of periodic models. To continue with the \(H\)-year ahead forecast, we again let \(\{X_{t},t=1,2,\ldots,n\}\) be the observations and \(N=n/d\) the final year within the observations. Based on the multi-companion representation in Eq (5), we derive the \(H\)-year ahead forecast, forecast error and forecast error variance as: \[\hat{\mathbf{X}}_{N+H}=\mathbf{F}_{d}^{H}\mathbf{X}_{N},\] \[\mathbf{X}_{N+H}-\hat{\mathbf{X}}_{N+H}=\sum_{h=0}^{H-1}\mathbf{F}_{d}^{h}\mathbf{u}_{N+H-h},\] \[\mathbb{E}[(\mathbf{X}_{N+H}-\hat{\mathbf{X}}_{N+H})(\mathbf{X}_{N+H}-\hat{\mathbf{X}}_{N+H})^{{}^{\prime}}]=\sum_{h=0}^{H-1}\mathbf{F}_{d}^{h}\Sigma_{u}(\mathbf{F}_{d}^{h})^{{}^{\prime}},\] where \(\Sigma_{u}=\Omega\Sigma_{\varepsilon}\Omega^{{}^{\prime}}\) and \(\mathbf{F}_{d}^{0}\) is defined to be an identity matrix. We will show in the next section that \(\mathbf{F}_{d}\) can be expressed in its Jordan canonical form, i.e. \(\mathbf{F}_{d}=XJX^{-1}\), and hence \(\mathbf{F}_{d}^{h}=XJ^{h}X^{-1}\) for \(h=1,2,\dots\). Notably, if \(\mathbf{F}_{d}\) is diagonalizable and all the eigenvalues of \(\mathbf{F}_{d}\) are either 1 or 0, then \(\mathbf{F}_{d}\) is idempotent, in which case the \(H\)-year ahead forecast of the model remains the same, i.e. \(\hat{\mathbf{X}}_{N+H}=\mathbf{F}_{d}\mathbf{X}_{N}\) for \(H=1,2,\dots\), and the corresponding forecast error variance reduces to \(\Sigma_{u}+(H-1)\mathbf{F}_{d}\Sigma_{u}(\mathbf{F}_{d})^{{}^{\prime}}\). This situation can occur when the corresponding model is a periodically integrated autoregression, and we will elaborate on this further in later sections.

## 3 Multi-companion method

In this section, we introduce the multi-companion method, which is used to investigate periodic models, particularly periodically integrated autoregressive models. 
The multi-companion method is based on the multi-companion representation Eq (5) and the eigen information of the multi-companion matrix. It is useful to review some important properties of multi-companion matrices before introducing the method. A given \(d\)-companion matrix \(\mathbf{F}_{d}\) of dimension \(m\) can be decomposed into its Jordan canonical form \(\mathbf{F}_{d}=XJX^{-1}\), where \(X\) is the similarity matrix consisting of the eigenvectors of \(\mathbf{F}_{d}\), and \(J\) is the Jordan matrix whose diagonal elements are the eigenvalues of \(\mathbf{F}_{d}\). In addition, let \(\lambda_{i}\) be the eigenvalue and \(x_{i}\) the corresponding eigenvector of \(\mathbf{F}_{d}\) for \(i=1,\dots,m\). The first important property is that each eigenvector \(x_{i}\) of \(\mathbf{F}_{d}\) is determined uniquely by its first (or any consecutive) \(d\) elements and the corresponding eigenvalue \(\lambda_{i}\), see Boshnakov and Iqelan (2009). Accordingly, we define the first \(d\) elements of the eigenvector \(x_{i}\) to be the _seed-parameters_, denoted by \(c_{i}^{(j)}\) for \(j=1,\dots,d\). The vector \(c_{i}=(c_{i}^{(1)},\dots,c_{i}^{(d)})^{{}^{\prime}}\) consisting of the \(d\) seed-parameters is defined to be a seed-vector. The second property is that the eigenvectors corresponding to zero eigenvalues of \(\mathbf{F}_{d}\) are some appropriate standard basis vectors, see Boshnakov and Iqelan (2009, Lemma 1). This property is a particular example of specifying eigenvectors corresponding to zero eigenvalues and will be employed in Section 4. Considering the aforementioned properties, it seems appropriate to explore the potential use of the eigen information of the multi-companion matrix for periodic time series models. 
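Both properties can be illustrated numerically for a small multi-companion matrix (arbitrary coefficients assumed; diagonalizability holds here because the eigenvalues are distinct for these generic values):

```python
import numpy as np

def companion(phi, m):
    """m x m companion matrix: first row phi, shifted identity below."""
    A = np.zeros((m, m))
    A[0, :len(phi)] = phi
    A[1:, :-1] = np.eye(m - 1)
    return A

d, m = 2, 3                                       # a 2-companion matrix, dim 3
F = companion([0.6, -0.3, 0.05], m) @ companion([0.4, 0.1, -0.2], m)

lam, X = np.linalg.eig(F)
# Spectral (Jordan) reconstruction F = X J X^{-1} in the diagonalizable case:
assert np.allclose(X @ np.diag(lam) @ np.linalg.inv(X), F)

# Seed-parameter property: rows d+1..m of F are shifted identity rows, so
# F x = lambda x forces x_i = x_{i-d} / lambda for i > d.  Hence an eigenvector
# is fixed by its first d entries (the seed-parameters) and its eigenvalue.
for j in range(m):
    if abs(lam[j]) > 1e-10:                       # nonzero eigenvalues only
        for i in range(d, m):
            assert np.isclose(X[i, j], X[i - d, j] / lam[j])
```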
Back to the multi-companion representation Eq (5) of a periodic model, substituting \(\mathbf{F}_{d}\) by its Jordan canonical form and then left-multiplying both sides by \(X^{-1}\) gives \[X^{-1}\mathbf{X}_{T}=JX^{-1}\mathbf{X}_{T-1}+X^{-1}\mathbf{u}_{T}.\] By defining two processes \(\mathbf{Z}_{T}=X^{-1}\mathbf{X}_{T}\) and \(\mathbf{W}_{T}=X^{-1}\mathbf{u}_{T}\), we can rewrite the above equation as: \[\mathbf{X}_{T}=X\mathbf{Z}_{T}, \tag{7}\] \[\mathbf{Z}_{T}=J\mathbf{Z}_{T-1}+\mathbf{W}_{T}, \tag{8}\] where \(\mathbf{Z}_{T}=(Z_{T}^{(1)},\dots,Z_{T}^{(m)})^{{}^{\prime}}\) is an \(m\)-dimensional process and \(\mathbf{W}_{T}=(W_{T}^{(1)},\dots,W_{T}^{(m)})^{{}^{\prime}}\) is an \(m\)-dimensional white noise. It is then straightforward to see that Eq (7) uses the similarity matrix of \(\mathbf{F}_{d}\) as a coefficient matrix linking the process \(\mathbf{Z}_{T}\) with \(\mathbf{X}_{T}\), and Eq (8) is in the vector autoregression form where the Jordan matrix of \(\mathbf{F}_{d}\) is regarded as an autoregressive coefficient matrix. Therefore, the eigen information of \(\mathbf{F}_{d}\) plays an important role in analysing periodic time series models. The subsequent two subsections will provide an in-depth analysis of the roles of the similarity and Jordan matrices of \(\mathbf{F}_{d}\) respectively. ### The role of similarity matrix Considering the role of the similarity matrix, we concentrate on Eq (7), which shows that \(\mathbf{X}_{T}\) is represented in terms of the similarity matrix and the vector process \(\mathbf{Z}_{T}\).
Expanding Eq (7) and only considering the first \(d\) elements gives \[\begin{pmatrix}X_{[T,d]}\\ X_{[T,d-1]}\\ \vdots\\ X_{[T,1]}\end{pmatrix}=\begin{pmatrix}c_{1}^{(1)}&c_{2}^{(1)}&\dots&c_{m}^{(1)}\\ c_{1}^{(2)}&c_{2}^{(2)}&\dots&c_{m}^{(2)}\\ \vdots&\vdots&\dots&\vdots\\ c_{1}^{(d)}&c_{2}^{(d)}&\dots&c_{m}^{(d)}\end{pmatrix}\begin{pmatrix}Z_{T}^{(1)}\\ Z_{T}^{(2)}\\ \vdots\\ Z_{T}^{(d)}\\ \vdots\\ Z_{T}^{(m)}\end{pmatrix},\] which implies \[X_{[T,d]} =c_{1}^{(1)}Z_{T}^{(1)}+c_{2}^{(1)}Z_{T}^{(2)}+\dots+c_{m}^{(1)}Z_{T}^{(m)},\] \[X_{[T,d-1]} =c_{1}^{(2)}Z_{T}^{(1)}+c_{2}^{(2)}Z_{T}^{(2)}+\dots+c_{m}^{(2)}Z_{T}^{(m)},\] \[\vdots\] \[X_{[T,1]} =c_{1}^{(d)}Z_{T}^{(1)}+c_{2}^{(d)}Z_{T}^{(2)}+\dots+c_{m}^{(d)}Z_{T}^{(m)}.\] The above system can be further summarized as: \[X_{[T,s]}=\sum_{i=1}^{m}c_{i}^{(d-s+1)}Z_{T}^{(i)},\ \ \ T=1,2,\dots, \tag{9}\] for season \(s\in[1,d]\). Eq (9) shows that the seasonal component \(\big{\{}X_{[T,s]},T=1,2,\dots\big{\}}\) at season \(s\) can be viewed as a linear combination of the elements of \(\big{\{}\mathbf{Z}_{T},T=1,2,\dots\big{\}}\), and the coefficient \(c_{i}^{(d-s+1)}\) (the seed-parameter) is interpreted as the "strength" of the influence of \(Z_{T}^{(i)}\) on the \(s\)-th season. It indicates that if one of the \(Z_{T}^{(i)}\) processes is a random walk and its corresponding coefficients for each season are non-zero, then this random walk will be a common stochastic trend driving the entire process. In addition, when some \(Z_{T}^{(i)}\) processes need to be eliminated from one seasonal component at season \(s\), we can directly set their corresponding coefficients to zero. We will show later how Eq (9) contributes to analysing PIAR models with a single, two and multiple unit roots. ### The role of Jordan matrix We concentrate on Eq (8) when taking into account the significance of the Jordan matrix.
It is evident that if all the diagonal elements of the Jordan matrix \(J\) have modulus strictly less than one, the process \(\mathbf{Z}_{T}\) is stationary and Eq (8) is a vector autoregression of order one (VAR(1)). In our paper, we pay more attention to the cases when the Jordan matrix \(J\) has at least one diagonal element equal to one. Let \(\lambda_{\text{unit}}\) be the unit eigenvalue of \(\mathbf{F}_{d}\), such that \(\lambda_{\text{unit}}=1\). We use the notations \(\text{Am}(\lambda_{\text{unit}})\) and \(\text{Gm}(\lambda_{\text{unit}})\) to describe the algebraic and geometric multiplicities of \(\lambda_{\text{unit}}\) respectively. In addition, let \(J_{\text{unit}}\) be the unit Jordan matrix which consists of the unit eigenvalues of \(\mathbf{F}_{d}\), such that \[J_{\text{unit}}=\text{diag}(J_{\text{unit}}^{(1)},J_{\text{unit}}^{(2)},\cdots,J_{\text{unit}}^{(g)}), \tag{10}\] where \(J_{\text{unit}}^{(k)}\) is the \(k\)-th unit Jordan block of dimension \(r_{k}\) for \(k=1,\dots,g\); \(g\) is the number of unit Jordan blocks, which is determined by the geometric multiplicity of \(\lambda_{\text{unit}}\), namely \(g=\text{Gm}(\lambda_{\text{unit}})\); and the sum of the dimensions of the unit Jordan blocks equals the algebraic multiplicity, namely \(\sum_{k=1}^{g}r_{k}=\text{Am}(\lambda_{\text{unit}})\). Moreover, the general form of each unit Jordan block is \[J_{\text{unit}}^{(k)}=\begin{pmatrix}1&1&&\\ &1&\ddots&\\ &&\ddots&1\\ &&&1\end{pmatrix}\in\mathbb{R}^{r_{k}\times r_{k}},\ \ \ k=1,\dots,g.\] Throughout our paper, we suppose that the diagonal elements of the Jordan matrix are arranged in descending order, so that if the unit Jordan matrix exists, it is arranged in the top-left corner of \(J\). In particular, we assume there are \(m_{1}\) unit eigenvalues of \(\mathbf{F}_{d}\) where \(m_{1}\in[1,m]\), and the remaining \((m-m_{1})\) eigenvalues of \(\mathbf{F}_{d}\) have modulus strictly less than one.
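As a small illustration of Eq (10), the unit Jordan matrix can be assembled from the block dimensions \(r_{1},\dots,r_{g}\). The following sketch (function names are our own, assuming NumPy) builds \(J_{\text{unit}}\) for \(g=2\) blocks of sizes \(r_{1}=2\) and \(r_{2}=1\), i.e. \(\text{Am}(\lambda_{\text{unit}})=3\) and \(\text{Gm}(\lambda_{\text{unit}})=2\):

```python
import numpy as np

def unit_jordan_block(r):
    """r x r Jordan block with eigenvalue 1 (ones on the diagonal and superdiagonal)."""
    return np.eye(r) + np.diag(np.ones(r - 1), k=1)

def unit_jordan_matrix(block_dims):
    """J_unit = diag(J^(1), ..., J^(g)) for block dimensions r_1, ..., r_g (Eq 10)."""
    m1 = sum(block_dims)                     # Am(lambda_unit)
    J = np.zeros((m1, m1))
    pos = 0
    for r in block_dims:
        J[pos:pos + r, pos:pos + r] = unit_jordan_block(r)
        pos += r
    return J

# g = 2 blocks of sizes r_1 = 2 and r_2 = 1: Am(1) = 3, Gm(1) = 2
J = unit_jordan_matrix([2, 1])
assert np.allclose(J, np.array([[1, 1, 0],
                                [0, 1, 0],
                                [0, 0, 1]]))
```

With the descending-order convention of the text, this matrix would occupy the top-left corner of the full Jordan matrix \(J\).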
Under this assumption, the Jordan matrix \(J\) of \(\mathbf{F}_{d}\) can be expressed as \(J=\text{diag}(J_{\text{unit}},\Lambda_{m-m_{1}})\) where \(J_{\text{unit}}\) is defined by Eq (10) with the sum of the dimension of \(g\) unit Jordan blocks equal to \(m_{1}\), namely \(\sum_{k=1}^{g}r_{k}=m_{1}\), and \(\Lambda_{m-m_{1}}\) corresponds to the stationary part whose diagonal elements have moduli strictly less than one. As indicated from Eq (8), each unit Jordan block \(J_{\text{unit}}^{(k)}\) corresponds to \(r_{k}\) elements of \(\mathbf{Z}_{T}\) process, and the highest integration order of the corresponding elements of \(\mathbf{Z}_{T}\) process is determined by the dimension of the unit Jordan block, namely \(r_{k}\). The first unit Jordan block \(J_{\text{unit}}^{(1)}\), for example, corresponds to the first \(r_{1}\) elements of \(\mathbf{Z}_{T}\), where \(Z_{T}^{(1)}\sim\text{I}(r_{1})\), \(Z_{T}^{(2)}\sim\text{I}(r_{1}-1)\),..., \(Z_{T}^{(r_{1})}\sim\text{I}(1)\). Obviously, \(Z_{T}^{(1)}\) has the highest integration order among the first \(r_{1}\) elements of \(\mathbf{Z}_{T}\), which is exactly equal to the dimension of \(J_{\text{unit}}^{(1)}\). Similarly, the \(k\)-th unit Jordan block \(J_{\text{unit}}^{(k)}\) for each \(k=2,\ldots,g\) corresponds to \[Z_{T}^{(\sum_{i=1}^{k-1}r_{i}+1)} \sim\text{I}(r_{k}),\] \[Z_{T}^{(\sum_{i=1}^{k-1}r_{i}+2)} \sim\text{I}(r_{k}-1),\] \[\vdots\] \[Z_{T}^{(\sum_{i=1}^{k-1}r_{i}+r_{k})} \sim\text{I}(1),\] where \(Z_{T}^{(\sum_{i=1}^{k-1}r_{i}+1)}\) has the highest integration order which is exactly equal to the dimension of the \(k\)-th unit Jordan block, \(r_{k}\). To illustrate how the \(Z_{T}^{(i)}\) processes drive the series \(\{X_{t}\}\) periodically non-stationary, we give two examples below. The first example applies when \(\text{Am}(\lambda_{\text{unit}})=\text{Gm}(\lambda_{\text{unit}})=m_{1}\). 
Under this condition, the unit Jordan matrix is exactly an \(m_{1}\times m_{1}\) identity matrix and Eq (8) is expanded as \[\begin{pmatrix}Z_{T}^{(1)}\\ Z_{T}^{(2)}\\ \vdots\\ Z_{T}^{(m_{1})}\\ \tilde{Z}_{T}\end{pmatrix}=\begin{pmatrix}1&&&&\\ &1&&&\\ &&\ddots&&\\ &&&1&\\ &&&&\Lambda_{m-m_{1}}\end{pmatrix}\begin{pmatrix}Z_{T-1}^{(1)}\\ Z_{T-1}^{(2)}\\ \vdots\\ Z_{T-1}^{(m_{1})}\\ \tilde{Z}_{T-1}\end{pmatrix}+\begin{pmatrix}W_{T}^{(1)}\\ W_{T}^{(2)}\\ \vdots\\ W_{T}^{(m_{1})}\\ \tilde{W}_{T}\end{pmatrix},\] where \(\tilde{Z}_{T}=(Z_{T}^{(m_{1}+1)},\ldots,Z_{T}^{(m)})^{{}^{\prime}}\) and \(\tilde{W}_{T}=(W_{T}^{(m_{1}+1)},\ldots,W_{T}^{(m)})^{{}^{\prime}}\) are two vector processes of dimension \(m-m_{1}\). The above matrix form implies \[Z_{T}^{(i)}=Z_{T-1}^{(i)}+W_{T}^{(i)},\quad i=1,\ldots,m_{1};\] \[\tilde{Z}_{T}=\Lambda_{m-m_{1}}\tilde{Z}_{T-1}+\tilde{W}_{T},\] where the first equation is a random walk such that \(Z_{T}^{(i)}\sim\text{I}(1)\) for each \(i\in[1,m_{1}]\), and the second equation is a vector autoregression such that \(Z_{T}^{(i)}\sim\text{I}(0)\) for each \(i\in(m_{1},m]\). As indicated from Eq (9), these \(Z_{T}^{(i)}\) processes have an impact on the seasonal components of \(\{\mathbf{X}_{T}\}\), such that \[X_{[T,s]}=\sum_{i=1}^{m_{1}}c_{i}^{(d-s+1)}Z_{T}^{(i)}+\sum_{i=m_{1}+1}^{m}c_{i}^{(d-s+1)}Z_{T}^{(i)},\quad T=1,2,\ldots,\] for each \(s\in[1,d]\). Therefore, it is concluded that \(Z_{T}^{(1)},\ldots,Z_{T}^{(m_{1})}\) are the common stochastic trends (random walks) driving each seasonal component \(\left\{X_{[T,s]},T=1,2,\ldots\right\}\) non-stationary. Moreover, for each season \(s\), if there is at least one seed-parameter \(c_{i}^{(d-s+1)}\) non-zero for any \(i\in[1,m_{1}]\), then the seasonal component will be integrated of order one, denoted by \(\left\{X_{[T,s]},T=1,2,\ldots\right\}\sim\text{I}(1)\).
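The first example can be mimicked numerically: with \(J_{\text{unit}}=I_{m_{1}}\), the first \(m_{1}\) coordinates of \(\mathbf{Z}_{T}\) are random walks, and any seasonal component with a non-zero seed-parameter on one of them inherits the I(1) behaviour. A short simulation sketch (illustrative values, assuming NumPy):

```python
import numpy as np

# Example 1 sketch: Am = Gm = m1, so J_unit = I_{m1} and each of the first
# m1 coordinates of Z_T is a random walk Z_T = Z_{T-1} + W_T.
rng = np.random.default_rng(1)
n_years, m1 = 200, 2
W = rng.normal(size=(n_years, m1))
Z_unit = np.cumsum(W, axis=0)            # random walks: I(1)

# First differencing each random walk recovers its innovations exactly
assert np.allclose(np.diff(Z_unit, axis=0), W[1:])

# Seasonal component at season s (Eq 9): a linear combination of the Z's.
# With a non-zero weight on at least one random walk, X_{[T,s]} is I(1) too.
c = np.array([0.8, -0.5])                # assumed seed-parameters c_i^{(d-s+1)}
X_season = Z_unit @ c
assert np.allclose(np.diff(X_season), W[1:] @ c)
```

The last assertion shows that one difference of the seasonal component is a stationary combination of the innovations, which is exactly the I(1) behaviour described in the text.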
We will explain later that this example corresponds to the case where the series \(\{X_{t}\}\) is periodically integrated of order one. The second example is considered when \(\text{Am}(\lambda_{\text{unit}})=m_{1}>\text{Gm}(\lambda_{\text{unit}})=1\). Under this condition, the unit Jordan matrix is exactly a unit Jordan block of dimension \(m_{1}\times m_{1}\), and Eq (8) is expanded as \[\left(\begin{array}{c}Z_{T}^{(1)}\\ Z_{T}^{(2)}\\ \vdots\\ Z_{T}^{(m_{1})}\\ \tilde{Z}_{T}\end{array}\right)=\left(\begin{array}{ccccc}1&1&&&\\ &1&\ddots&&\\ &&\ddots&1&\\ &&&1&\\ &&&&\Lambda_{m-m_{1}}\end{array}\right)\left(\begin{array}{c}Z_{T-1}^{(1)}\\ Z_{T-1}^{(2)}\\ \vdots\\ Z_{T-1}^{(m_{1})}\\ \tilde{Z}_{T-1}\end{array}\right)+\left(\begin{array}{c}W_{T}^{(1)}\\ W_{T}^{(2)}\\ \vdots\\ W_{T}^{(m_{1})}\\ \tilde{W}_{T}\end{array}\right),\] where the top-left corner is the unit Jordan block. The above equation implies \[Z_{T}^{(i)}=Z_{T-1}^{(i)}+Z_{T-1}^{(i+1)}+W_{T}^{(i)},\;\;\;i=1,\ldots,m_{1}-1;\] \[Z_{T}^{(m_{1})}=Z_{T-1}^{(m_{1})}+W_{T}^{(m_{1})};\] \[\tilde{Z}_{T}=\Lambda_{m-m_{1}}\tilde{Z}_{T-1}+\tilde{W}_{T},\] where the first two equations lead to \(Z_{T}^{(i)}\sim\mbox{I}(m_{1}-i+1)\) for \(i=1,\ldots,m_{1}\), and the last equation indicates \(Z_{T}^{(i)}\sim\mbox{I}(0)\) for \(i=m_{1}+1,\ldots,m\). In this example, \(Z_{T}^{(1)}\) has the largest integration order, such that \(Z_{T}^{(1)}\sim\mbox{I}(m_{1})\), which ensures the seasonal components are integrated of order \(m_{1}\) if \(c_{1}^{(d-s+1)}\) is non-zero for any \(s\), denoted by \(\left\{X_{[T,s]},T=1,2,\ldots\right\}\sim\mbox{I}(m_{1})\). We will explain later that the second example corresponds to the case where the series \(\left\{X_{t}\right\}\) is periodically integrated of order \(m_{1}\). Note that this case, with periodic integration order larger than one, has not been discussed by Boswijk and Franses (1996) or elsewhere in the current literature.
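The second example can be checked in the same spirit: with one chained unit Jordan block of size \(m_{1}=2\), the recursion for \(Z_{T}^{(1)}\) accumulates a random walk, so one difference is not enough to remove the trend while two differences are. A simulation sketch (illustrative values, assuming NumPy):

```python
import numpy as np

# Example 2 sketch: one chained unit Jordan block of size m1 = 2, so
# Z^(1) ~ I(2) and Z^(2) ~ I(1).
rng = np.random.default_rng(2)
n = 300
W1, W2 = rng.normal(size=n), rng.normal(size=n)

Z1, Z2 = np.zeros(n), np.zeros(n)
for T in range(1, n):
    Z2[T] = Z2[T - 1] + W2[T]               # random walk: I(1)
    Z1[T] = Z1[T - 1] + Z2[T - 1] + W1[T]   # chained recursion: I(2)

# Differencing once leaves Z1 non-stationary (it still contains Z2);
# differencing twice reduces it to a stationary combination of innovations.
d1 = np.diff(Z1)                 # = Z2[T-1] + W1[T]
assert np.allclose(d1, Z2[:-1] + W1[1:])
d2 = np.diff(d1)                 # = W2[T-1] + W1[T] - W1[T-1]
assert np.allclose(d2, W2[1:-1] + np.diff(W1[1:]))
```

The two assertions make the integration orders explicit: the first difference of \(Z^{(1)}\) still carries the random walk \(Z^{(2)}\), and only the second difference is free of stochastic trends.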
In conclusion, we have noticed that the largest dimension of unit Jordan blocks will affect the integration order of seasonal components \(\left\{X_{[T,s]},T=1,2,\ldots\right\}\), and in turn, will influence the periodic integration order of the entire process \(\left\{X_{t},t=1,2,\ldots\right\}\). Therefore, it is reasonable to consider using the property of Jordan matrix to have the following definition for periodic integration. **Definition 3.1**: _(Periodic integration). Let \(\left\{X_{t}\right\}\) be a series defined by Eq (1) with multi-companion representation Eq (5). Suppose \(\mathbf{F}_{d}\) in Eq (5) has at least one unit eigenvalue and its corresponding unit Jordan matrix \(J_{unit}\) is represented in Eq (10). Then, \(\left\{X_{t}\right\}\) is said to be periodically integrated of order \(r\), denoted by \(X_{t}\sim\mbox{PI}(r)\), if the largest dimension of the unit Jordan blocks is \(r=\max(r_{1},\ldots,r_{g})\), where \(r_{k}\) for \(k=1,\ldots,g\) is the dimension of \(k\)-th unit Jordan block._ The previous study Boswijk and Franses (1996, Def. 1) has introduced a definition for (quarterly) periodic integration of order one. They stated that if the VQ representation of the model (see Eq (2) by setting \(d=4\)) has a single unit root and if all the seasonal components of \(\left\{X_{t}\right\}\) are integrated of order one, namely \(\left\{X_{[T,s]},T=1,2,\ldots\right\}\sim\mbox{I}(1)\) for any \(s\), then \(\left\{X_{t}\right\}\) is said to be periodically integrated of order one, denoted by \(X_{t}\sim\mbox{PI}(1)\). It is easy to show that Boswijk and Franses (1996, Def. 1) is a special case of our Definition 3.1. Recall the first example aforementioned, let \(m_{1}=1\) which ensures there is a single unit root of \(\left\{X_{t}\right\}\), and moreover, we have shown that in this case each seasonal component of \(\left\{X_{t}\right\}\) is integrated of order one. Therefore, two conditions in Boswijk and Franses (1996, Def. 
1) are satisfied and we have the conclusion that \(X_{t}\sim\mbox{PI}(1)\). On the other hand, it is immediate that \(X_{t}\sim\mbox{PI}(1)\) according to our Definition 3.1 when setting \(m_{1}=1\) in the first example above, as in this case the largest dimension of the unit Jordan block is one. In general, Definition 3.1, developed by using our multi-companion method, extends the previous study of Boswijk and Franses (1996, Def. 1). Furthermore, it follows that the two aforementioned examples correspond to the cases where \(X_{t}\sim\mbox{PI}(1)\) and \(X_{t}\sim\mbox{PI}(m_{1})\) respectively, according to Definition 3.1. For future reference, we use the notation \(\mbox{PI}(0)\) to describe a periodically stationary process. ## 4 Multi-companion method applied to PIAR models Section 3 introduces the multi-companion method which relies on the eigen information of the multi-companion matrix. In this section, we will demonstrate how the multi-companion method is applied to analyse PIAR models with a single, two and multiple unit roots. For each case, we find a periodically integrated filter which transforms the periodically integrated series into a periodically stationary one. In addition, we derive the representation of the parameters of the periodically integrated filter in terms of the eigen information of the multi-companion matrix. Based on the parametrization process, we propose an innovative estimation method to estimate the parameters of the PIAR models. ### A single unit root Let \(\{X_{t}\}\) be the series generated by Eq (1) with multi-companion representation Eq (5). Suppose \(\mathbf{F}_{d}\) in Eq (5) has a single unit eigenvalue. Under this assumption, the corresponding series \(\{X_{t}\}\) is periodically integrated of order one, denoted by \(X_{t}\sim\text{PI}(1)\), according to Definition 3.1. Subsequently, Eq (1) is a PI\({}_{1}\text{AR}(p)\) model with a single unit root.
For simplicity, we first consider the case where \(\mathbf{F}_{d}\) in Eq (5) has a single unit eigenvalue and all the other eigenvalues are zero. In this case, the Jordan canonical form of \(\mathbf{F}_{d}\) is \[\begin{split}\mathbf{F}_{d}&=XJX^{-1}\\ &=\left(\begin{array}{ccccc}c_{1}^{(1)}&0&\dots&0&0\\ c_{1}^{(2)}&1&\dots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ c_{1}^{(d-1)}&0&\dots&1&0\\ c_{1}^{(d)}&0&\dots&0&1\end{array}\right)\left(\begin{array}{ccccc}1&&&&\\ &0&&&\\ &&\ddots&&\\ &&&0&\\ &&&&0\end{array}\right)\left(\begin{array}{ccccc}c_{1}^{(1)}&0&\dots&0&0\\ c_{1}^{(2)}&1&\dots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ c_{1}^{(d-1)}&0&\dots&1&0\\ c_{1}^{(d)}&0&\dots&0&1\end{array}\right)^{-1}\\ &=\left(\begin{array}{ccccc}1&0&\dots&0&0\\ \frac{c_{1}^{(2)}}{c_{1}^{(1)}}&0&\dots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \frac{c_{1}^{(d-1)}}{c_{1}^{(1)}}&0&\dots&0&0\\ \frac{c_{1}^{(d)}}{c_{1}^{(1)}}&0&\dots&0&0\end{array}\right),\end{split} \tag{11}\] where the eigenvectors corresponding to the zero eigenvalues are some appropriate standard basis vectors (see Boshnakov and Iqelan, 2009, Lemma 1). Considering the role of the Jordan matrix, see Eq (8), this implies that \(Z_{T}^{(1)}\) is a random walk, which is the only non-stationary part among all the elements of the \(\mathbf{Z}_{T}\) process, and \(Z_{T}^{(i)},i=2,\dots,d\) are white noise. On the other hand, the role of the similarity matrix, see Eq (7), shows that each seasonal component of \(\{\mathbf{X}_{T}\}\) at year \(T=1,2,\dots\) can be expressed as \[X_{[T,d]} =c_{1}^{(1)}Z_{T}^{(1)},\] \[X_{[T,d-1]} =c_{1}^{(2)}Z_{T}^{(1)}+Z_{T}^{(2)},\] \[\vdots\] \[X_{[T,1]} =c_{1}^{(d)}Z_{T}^{(1)}+Z_{T}^{(d)},\] where \(Z_{T}^{(1)}\) is the common stochastic trend that makes each seasonal component integrated of order one.
In such a situation, a periodic filter \((1-\alpha_{s}L)\) is introduced to remove the non-stationary part \(Z_{T}^{(1)}\) from each seasonal component \(\left\{X_{[T,s]},T=1,2,\dots\right\}\), where the \(\alpha_{s}\) are determined by \[\alpha_{s}=\frac{c_{1}^{(d-s+1)}}{c_{1}^{(d-s+2)}},\quad s=1,\dots,d, \tag{12}\] with \(c_{1}^{(d+1)}=c_{1}^{(1)}\). It is noticeable that \(\alpha_{s}\) in Eq (12) automatically satisfies the restriction \(\prod_{s=1}^{d}\alpha_{s}=1\). The previous study Osborn et al. (1988) took this restriction as the defining property of a unit root periodic filter by restricting \(d=4\) for quarterly cases. Obviously, our multi-companion method extends the quarterly cases to general situations. For future reference, we use the term periodically integrated filter (_PI-filter_) to describe the periodic filters which are used to remove the unit roots in the process. In particular, when the PI-filter is of order one, namely \((1-\alpha_{s}L)\) where \(\alpha_{s}\) satisfies the restriction \(\prod_{s=1}^{d}\alpha_{s}=1\), we call it a unit PI-filter. In general, when \(\mathbf{F}_{d}\) in Eq (5) has a single unit eigenvalue and all other eigenvalues have moduli strictly less than one, the model in Eq (1) can be rewritten as \[\psi_{p-1,s}(L)(1-\alpha_{s}L)X_{t}=\varepsilon_{t},\quad t=1,2,\ldots,\] where \(\psi_{p-1,s}(L)\) is a periodic autoregressive filter of order \(p-1\), and \((1-\alpha_{s}L)\) is a unit PI-filter whose \(\alpha_{s}\) are determined by Eq (12), automatically satisfying the non-linear restriction \(\prod_{s=1}^{d}\alpha_{s}=1\). ### Two unit roots Before investigating the two unit roots cases, we first illustrate some key terminology. We use the term _simple unit eigenvalues_ to denote unit eigenvalues of the multi-companion matrix that are in different unit Jordan blocks.
In other words, the algebraic multiplicity of the unit eigenvalue is equal to its geometric multiplicity, namely \(\text{Am}(\lambda_{\text{unit}})=\text{Gm}(\lambda_{\text{unit}})\). Consequently, the resulting unit roots in the model Eq (5) are referred to as _simple unit roots_. Conversely, we use the term _chained unit eigenvalues_ to describe unit eigenvalues of the multi-companion matrix that are contained within the same unit Jordan block. In other words, the chained unit eigenvalues are in the same Jordan chain and \(\text{Am}(\lambda_{\text{unit}})>\text{Gm}(\lambda_{\text{unit}})\) holds. Correspondingly, the unit roots generated in the model Eq (5) are termed _chained unit roots_. In this subsection, we assume that \(\mathbf{F}_{d}\) in Eq (5) has two unit eigenvalues. Under this assumption, the series generated by this \(\mathbf{F}_{d}\) can have either two simple or two chained unit roots. Consequently, the model in Eq (5) can be either \(\text{PI}_{1}\text{AR}(p)\) or \(\text{PI}_{2}\text{AR}(p)\), depending on whether the two unit eigenvalues are in the same Jordan block. We will show that the PI-filters employed to remove two simple unit roots differ from those used to eliminate two chained unit roots. Specifically, we will present parametrization results for the PI-filters in both cases. #### 4.2.1 Two simple unit roots The first case occurs when \(\mathbf{F}_{d}\) in Eq (5) has two simple unit eigenvalues. For simplicity, we assume all other eigenvalues of \(\mathbf{F}_{d}\) are zero.
Under this assumption, the Jordan canonical form of \(\mathbf{F}_{d}\) can be represented as: \[\begin{split}\mathbf{F}_{d}&=XJX^{-1}\\ &=\left(\begin{array}{cccccc}c_{1}^{(1)}&c_{2}^{(1)}&0&\ldots&0&0\\ c_{1}^{(2)}&c_{2}^{(2)}&0&\ldots&0&0\\ c_{1}^{(3)}&c_{2}^{(3)}&1&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ c_{1}^{(d-1)}&c_{2}^{(d-1)}&0&\ldots&1&0\\ c_{1}^{(d)}&c_{2}^{(d)}&0&\ldots&0&1\end{array}\right)\left(\begin{array}{cccccc}1&&&&&\\ &1&&&&\\ &&0&&&\\ &&&\ddots&&\\ &&&&&0\end{array}\right)\left(\begin{array}{cccccc}c_{1}^{(1)}&c_{2}^{(1)}&0&\ldots&0&0\\ c_{1}^{(2)}&c_{2}^{(2)}&0&\ldots&0&0\\ c_{1}^{(3)}&c_{2}^{(3)}&1&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ c_{1}^{(d-1)}&c_{2}^{(d-1)}&0&\ldots&1&0\\ c_{1}^{(d)}&c_{2}^{(d)}&0&\ldots&0&1\end{array}\right)^{-1}\\ &=\left(\begin{array}{cccccc}1&0&0&\ldots&0&0\\ 0&1&0&\ldots&0&0\\ -\frac{\Delta_{23}}{\Delta_{12}}&\frac{\Delta_{13}}{\Delta_{12}}&0&\ldots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ -\frac{\Delta_{2\,d-1}}{\Delta_{12}}&\frac{\Delta_{1\,d-1}}{\Delta_{12}}&0&\ldots&0&0\\ -\frac{\Delta_{2d}}{\Delta_{12}}&\frac{\Delta_{1d}}{\Delta_{12}}&0&\ldots&0&0\end{array}\right),\end{split} \tag{13}\] where \(\Delta_{ij}=c_{1}^{(i)}c_{2}^{(j)}-c_{1}^{(j)}c_{2}^{(i)}\). Obviously, the largest dimension of a unit Jordan block in Eq (13) is one, and therefore the process \(\{X_{t}\}\) generated by this \(\mathbf{F}_{d}\) is periodically integrated of order one, according to Definition 3.1. In addition, it turns out that the diagonalisable multi-companion matrix with two eigenvalues equal to one has a special form, in which the top-left block is a \(2\times 2\) diagonal matrix of ones. In conclusion, when \(\mathbf{F}_{d}\) is as shown in Eq (13), the generated series \(\{X_{t}\}\) is periodically integrated of order one. Consequently, the model in Eq (5) is a \(\text{PI}_{1}\text{AR}(2)\) model.
Following this conclusion, we are interested in finding a PI-filter which removes the two simple unit roots in the model and transforms \(X_{t}\) from PI(1) to periodically stationary. To specify the parameters of the PI-filter, we first consider the role of Jordan matrix. From Eq (8), when Jordan matrix has the form as shown in Eq (13), it indicates: \[Z_{T}^{(1)}=Z_{T-1}^{(1)}+W_{T}^{(1)},\quad Z_{T}^{(2)}=Z_{T-1}^{(2)}+W_{T}^{( 2)}, \tag{14}\] where \(Z_{T}^{(1)}\) and \(Z_{T}^{(2)}\) are two random walks. Notice that the remaining processes of \(\mathbf{Z}_{T}\), namely \(Z_{T}^{(i)}\) for \(i=3,\dots,d\), are all stationary. Subsequently, we consider the role of similarity matrix, particularly focusing on the role of seed-parameters, see Eq (9). Expanding Eq (9) gives \(X_{[T,s]}=c_{1}^{(d-s+1)}Z_{T}^{(1)}+c_{2}^{(d-s+1)}Z_{T}^{(2)}+\sum_{i=3}^{m}c _{i}^{(d-s+1)}Z_{T}^{(i)}\), which indicates the two random walks together drive each seasonal component non-stationary, such that \(\left\{X_{[T,s]},T=1,2,\dots\right\}\sim\text{I}(1)\). In order to remove these two random walks from \(\{X_{t}\}\) and transform \(\{X_{t}\}\) to be periodically stationary, a PI-filter \((1-\theta_{1,s}L-\theta_{2,s}L^{2})\) is introduced, where \(\theta_{1,s}\) and \(\theta_{2,s}\) for \(s=1,\dots,d\) are determined by: \[\begin{split}\theta_{1,s}&=\frac{\Delta_{d-s+1\ d-s +3}}{\Delta_{d-s+2\ d-s+3}}=\frac{c_{1}^{(d-s+1)}c_{2}^{(d-s+3)}-c_{1}^{(d-s+3 )}c_{2}^{(d-s+1)}}{c_{1}^{(d-s+2)}c_{2}^{(d-s+3)}-c_{1}^{(d-s+3)}c_{2}^{(d-s+2) }},\\ \theta_{2,s}&=\frac{\Delta_{d-s+2\ d-s+1}}{\Delta_{d- s+2\ d-s+3}}=\frac{c_{1}^{(d-s+2)}c_{2}^{(d-s+1)}-c_{1}^{(d-s+1)}c_{2}^{(d-s+2)}}{c _{1}^{(d-s+2)}c_{2}^{(d-s+3)}-c_{1}^{(d-s+3)}c_{2}^{(d-s+2)}},\end{split} \tag{15}\] with \[c_{1}^{(d+k)}=c_{1}^{(k)},\ \ \ c_{2}^{(d+k)}=c_{2}^{(k)},\ \ \ k=1,2. 
\tag{16}\] Additionally, we find that this second order PI-filter is equivalent to a cascaded filter \((1-\beta_{s}L)(1-\alpha_{s}L)\) where \[\alpha_{s}=\frac{c_{i_{1}}^{(d-s+1)}}{c_{i_{1}}^{(d-s+2)}},\ \ \ \beta_{s}=\frac{c_{i_{2}}^{(d-s+1)}-\alpha_{s}c_{i_{2}}^{(d-s+2)}}{c_{i_{2}}^{(d-s+2)}-\alpha_{s-1}c_{i_{2}}^{(d-s+3)}},\ \ \ s=1,\dots,d, \tag{17}\] with \((i_{1},i_{2})=(1,2)\) or \((i_{1},i_{2})=(2,1)\). Note that \(\alpha_{s}\) and \(\beta_{s}\) defined in Eq (17) satisfy the restriction \(\prod_{s=1}^{d}\alpha_{s}=\prod_{s=1}^{d}\beta_{s}=1\), and therefore \((1-\beta_{s}L)\) and \((1-\alpha_{s}L)\) are two unit PI-filters. In addition, Eq (17) shows that there are two solutions for the \(\alpha_{s}\) and \(\beta_{s}\) parameters; this is because the two random walks in Eq (14) have the same integration order of one, and either of them can be eliminated first when applying \((1-\alpha_{s}L)\) to \(\{X_{t}\}\). For instance, the solution of \(\alpha_{s}\) and \(\beta_{s}\) obtained by setting \((i_{1},i_{2})=(1,2)\) means that \(Z_{T}^{(1)}\) is eliminated first when applying \((1-\alpha_{s}L)\) to \(\{X_{t}\}\), leaving \(Z_{T}^{(2)}\) as the only non-stationary part, which is then eliminated by applying \((1-\beta_{s}L)\) to \((1-\alpha_{s}L)X_{t}\). Furthermore, it can be proved that the two solutions of \(\alpha_{s}\) and \(\beta_{s}\) lead to the same result for the PI-parameters \(\theta_{i,s}\) shown in Eq (15), such that \(\theta_{1,s}=\alpha_{s}+\beta_{s}\) and \(\theta_{2,s}=-\beta_{s}\alpha_{s-1}\). #### 4.2.2 Two chained unit roots The second case arises when \(\mathbf{F}_{d}\) in Eq (5) has two chained unit eigenvalues. For simplicity, we assume all other eigenvalues of \(\mathbf{F}_{d}\) are zero.
Under this assumption, the Jordan canonical form of \(\mathbf{F}_{d}\) is represented as: \[\begin{split}\mathbf{F}_{d}&=XJX^{-1}\\ &=\left(\begin{array}{cccccc}c_{1}^{(1)}&c_{2}^{(1)}&0&\dots&0&0\\ c_{1}^{(2)}&c_{2}^{(2)}&0&\dots&0&0\\ c_{1}^{(3)}&c_{2}^{(3)}&1&\dots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ c_{1}^{(d-1)}&c_{2}^{(d-1)}&0&\dots&1&0\\ c_{1}^{(d)}&c_{2}^{(d)}&0&\dots&0&1\end{array}\right)\left(\begin{array}{cccccc}1&1&&&&\\ &1&&&&\\ &&0&&&\\ &&&\ddots&&\\ &&&&&0\end{array}\right)\left(\begin{array}{cccccc}c_{1}^{(1)}&c_{2}^{(1)}&0&\dots&0&0\\ c_{1}^{(2)}&c_{2}^{(2)}&0&\dots&0&0\\ c_{1}^{(3)}&c_{2}^{(3)}&1&\dots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ c_{1}^{(d-1)}&c_{2}^{(d-1)}&0&\dots&1&0\\ c_{1}^{(d)}&c_{2}^{(d)}&0&\dots&0&1\end{array}\right)^{-1},\end{split} \tag{18}\] Obviously, the largest dimension of a unit Jordan block in Eq (18) is two, and therefore the process \(\{X_{t}\}\) generated by this \(\mathbf{F}_{d}\) is periodically integrated of order two, according to Definition 3.1. Correspondingly, the model in Eq (5) is a PI\({}_{2}\)AR\((2)\) model. Moreover, compared with the representation in Eq (13) where \(\mathbf{F}_{d}\) has two simple unit eigenvalues, the \((1,2)\)-th and \((2,1)\)-th elements of \(\mathbf{F}_{d}\) in Eq (18) cannot both be equal to zero at the same time, since that would make the similarity matrix \(X\) singular. Thus, the \(2\times 2\) upper-left corner of \(\mathbf{F}_{d}\) is sufficient to distinguish the two cases where \(\mathbf{F}_{d}\) has two simple or two chained unit eigenvalues. Next, we derive a PI-filter which is utilized to transform \(X_{t}\sim\text{PI}(2)\) into \(X_{t}\sim\text{PI}(0)\). From Eq (8), the Jordan matrix with two chained unit eigenvalues indicates \[Z_{T}^{(1)}=Z_{T-1}^{(1)}+Z_{T-1}^{(2)}+W_{T}^{(1)},\quad Z_{T}^{(2)}=Z_{T-1}^{(2)}+W_{T}^{(2)}, \tag{19}\] where \(Z_{T}^{(1)}\sim\text{I}(2)\) and \(Z_{T}^{(2)}\sim\text{I}(1)\).
It is worthwhile to mention that, unlike the previous situation with two simple unit eigenvalues, the \(Z_{T}^{(1)}\) process in this case has a higher integration order due to the chained unit eigenvalues. In order to remove these two non-stationary parts, a PI-filter \((1-\theta_{1,s}L-\theta_{2,s}L^{2})\) is introduced, where \(\theta_{1,s}\) and \(\theta_{2,s}\) have the same general representation as shown in Eq (15) but with \[c_{1}^{(d+k)}=c_{1}^{(k)},\quad c_{2}^{(d+k)}=c_{2}^{(k)}-c_{1}^{(k)},\quad k=1,2. \tag{20}\] In this situation, the second order PI-filter \((1-\theta_{1,s}L-\theta_{2,s}L^{2})\) is equivalent to a cascaded filter \((1-\beta_{s}L)(1-\alpha_{s}L)\) where \(\alpha_{s}\) and \(\beta_{s}\) are uniquely determined by \[\alpha_{s}=\frac{c_{1}^{(d-s+1)}}{c_{1}^{(d-s+2)}},\quad\beta_{s}=\frac{c_{2}^{(d-s+1)}-\alpha_{s}c_{2}^{(d-s+2)}}{c_{2}^{(d-s+2)}-\alpha_{s-1}c_{2}^{(d-s+3)}},\quad s=1,\dots,d. \tag{21}\] It is noted that the parameters \(\alpha_{s}\) and \(\beta_{s}\) in Eq (21) also satisfy the restriction \(\prod_{s=1}^{d}\alpha_{s}=\prod_{s=1}^{d}\beta_{s}=1\). Compared with the solutions of \(\alpha_{s}\) and \(\beta_{s}\) under the two simple unit roots case, see Eq (17), the solutions under the two chained unit roots case are uniquely determined by the eigen information of the multi-companion matrix. This is because when there are two chained unit roots, the unit periodic filter \((1-\alpha_{s}L)\) is applied first to break the Jordan chain and eliminate the \(Z_{T}^{(1)}\) process, which has the highest integration order of two, from \(\{X_{t}\}\). After that, the only non-stationary part remaining in the process is \(Z_{T}^{(2)}\) with integration order one, which makes the transformed series \((1-\alpha_{s}L)X_{t}\) periodically integrated of order one.
Hereby, the unit periodic filter \((1-\beta_{s}L)\) is then applied to eliminate \(Z_{T}^{(2)}\) from the transformed series, which ensures that \((1-\beta_{s}L)(1-\alpha_{s}L)X_{t}\) is periodically stationary. In conclusion, when \(\mathbf{F}_{d}\) in Eq (5) has two unit eigenvalues and all other eigenvalues have moduli strictly less than one, the model Eq (1) can be rewritten as \[\psi_{p-2,s}(L)(1-\theta_{1,s}L-\theta_{2,s}L^{2})X_{t}=\psi_{p-2,s}(L)(1-\beta_{s}L)(1-\alpha_{s}L)X_{t}=\varepsilon_{t},\quad t=1,2,\dots,\] where \(\psi_{p-2,s}(L)\) is a periodic autoregressive filter of order \(p-2\). The second order PI-filter \((1-\theta_{1,s}L-\theta_{2,s}L^{2})\) is determined by Eq (15), where Eq (16) holds when there are two simple unit roots and Eq (20) holds when there are two chained unit roots. Equivalently, a cascade of two unit PI-filters \((1-\beta_{s}L)(1-\alpha_{s}L)\) can also be applied to transform \(X_{t}\) into a periodically stationary series, which is determined by Eq (17) and (21) for the two simple and two chained unit roots cases respectively. ### Multiple unit roots Based on the previous two subsections, we extend the above conclusions to general cases. Consider a PIAR\((p)\) model in Eq (1) which has \(m_{1}\) unit roots with \(m_{1}\in[1,p]\). Subsequently, Eq (1) can be rewritten as \[\psi_{p-m_{1},s}(L)(1-\theta_{1,s}L-\dots-\theta_{m_{1},s}L^{m_{1}})X_{t}=\varepsilon_{t},\quad t=1,2,\dots, \tag{22}\] where \(\psi_{p-m_{1},s}(L)\) is a periodic autoregressive filter of order \(p-m_{1}\), with \(\psi_{0,s}(L)=1\) in particular, and \((1-\theta_{1,s}L-\dots-\theta_{m_{1},s}L^{m_{1}})\) is a PI-filter of order \(m_{1}\). This PI-filter is used to eliminate all the \(m_{1}\) unit roots from \(\{X_{t}\}\) and transform \(\{X_{t}\}\) into a periodically stationary series, such that \((1-\theta_{1,s}L-\dots-\theta_{m_{1},s}L^{m_{1}})X_{t}\sim\text{PI}(0)\).
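The equivalence between the second-order PI-filter of Eq (15) and the cascaded filter of Eq (17) in the two simple unit roots case can be verified numerically. The sketch below (helper names are our own, assuming NumPy, with arbitrary illustrative seed-vectors) checks \(\theta_{1,s}=\alpha_{s}+\beta_{s}\), \(\theta_{2,s}=-\beta_{s}\alpha_{s-1}\) and the product restrictions:

```python
import numpy as np

def theta_from_seeds(c1, c2):
    """Second-order PI-parameters from Eq (15), using the periodic
    extension c^{(d+k)} = c^{(k)} of Eq (16) (two simple unit roots)."""
    d = len(c1)
    e1 = np.concatenate([c1, c1[:2]])   # seed-parameters, indices 1..d+2
    e2 = np.concatenate([c2, c2[:2]])
    delta = lambda i, j: e1[i - 1] * e2[j - 1] - e1[j - 1] * e2[i - 1]
    th1 = np.array([delta(d - s + 1, d - s + 3) / delta(d - s + 2, d - s + 3)
                    for s in range(1, d + 1)])
    th2 = np.array([delta(d - s + 2, d - s + 1) / delta(d - s + 2, d - s + 3)
                    for s in range(1, d + 1)])
    return th1, th2

def cascade_from_seeds(c1, c2):
    """alpha_s and beta_s of the cascaded filter, Eq (17) with (i1, i2) = (1, 2)."""
    d = len(c1)
    e1 = np.concatenate([c1, c1[:2]])
    e2 = np.concatenate([c2, c2[:2]])
    alpha = np.array([e1[d - s] / e1[d - s + 1] for s in range(1, d + 1)])
    beta = np.array([(e2[d - s] - alpha[s - 1] * e2[d - s + 1])
                     / (e2[d - s + 1] - alpha[s - 2] * e2[d - s + 2])
                     for s in range(1, d + 1)])   # alpha[s-2] is alpha_{s-1}
    return alpha, beta

# Hypothetical seed-vectors for d = 4 (illustrative values only)
c1 = np.array([1.0, 0.7, 1.2, 0.9])
c2 = np.array([0.3, 1.1, 0.5, 1.4])
th1, th2 = theta_from_seeds(c1, c2)
alpha, beta = cascade_from_seeds(c1, c2)

# Both unit PI-filters satisfy the product restriction...
assert np.isclose(np.prod(alpha), 1.0) and np.isclose(np.prod(beta), 1.0)
# ...and the cascade reproduces the second-order PI-filter:
assert np.allclose(th1, alpha + beta)             # theta_{1,s} = alpha_s + beta_s
assert np.allclose(th2, -beta * np.roll(alpha, 1))  # theta_{2,s} = -beta_s alpha_{s-1}
```

Swapping to \((i_{1},i_{2})=(2,1)\) gives a different \((\alpha_{s},\beta_{s})\) pair but, as stated in the text, the same \(\theta_{i,s}\).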
Moreover, similarly to previous two subsections, the PI-parameters in Eq (22) can also be uniquely determined by the eigen information of the corresponding multi-companion matrix \(\mathbf{F}_{d}\) in Eq (5). Here, we provide a general parametrization result for PI-parameters. Let \(\theta(s)=(\theta_{1,s},\ldots,\theta_{m_{1},s})^{{}^{\prime}}\) be the PI-parameters at season \(s\). We construct a \(d\times m_{1}\) matrix \(X^{(1)}\), such that \[X^{(1)}=\begin{pmatrix}c_{1}^{(1)}&c_{2}^{(1)}&\ldots&c_{m_{1}}^{(1)}\\ c_{1}^{(2)}&c_{2}^{(2)}&\ldots&c_{m_{1}}^{(2)}\\ \vdots&\vdots&\ddots&\vdots\\ c_{1}^{(d)}&c_{2}^{(d)}&\ldots&c_{m_{1}}^{(d)}\end{pmatrix}, \tag{23}\] which is the top-left part of the similarity matrix \(X\) of \(\mathbf{F}_{d}\). Note that \(X^{(1)}\) collects all the seed-vectors corresponding to the \(m_{1}\) unit eigenvalues of \(\mathbf{F}_{d}\). Given the special property of the multi-companion matrix, see Boshnakov (2002, after Eq 5.4), \(X^{(1)}\) in Eq (23) is sufficient to determine the entire information of the eigenvectors associated with the \(m_{1}\) unit eigenvalues of \(\mathbf{F}_{d}\). This property is useful when estimating the eigenvectors associated with the \(m_{1}\) unit eigenvalues of \(\mathbf{F}_{d}\). In particular, when \(m>d\), this property helps to reduce the number of unknowns from \(mm_{1}\) to \(dm_{1}\). After that, an \(m_{1}\times 2d\) matrix \(X_{\text{bind}}\) is created as: \[X_{\text{bind}}=\begin{pmatrix}X^{(1)}J_{\text{unit}}\\ X^{(1)}\end{pmatrix}^{{}^{\prime}}, \tag{24}\] where \(J_{\text{unit}}\) is an \(m_{1}\times m_{1}\) unit Jordan matrix defined by Eq (10). We find that the PI-parameters \(\theta(s)\) in Eq (22) are uniquely determined by solving \[(X_{\text{bind}})_{\bullet(d-s+2):(d-s+m_{1}+1)}\theta(s)=(X_{\text{bind}})_ {\bullet d-s+1},\ \ \ s=1,\ldots,d, \tag{25}\] where \((X_{\text{bind}})_{\bullet j}\) stands for the \(j\)-th column of \(X_{\text{bind}}\). 
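Eq (25) amounts to a small linear solve per season. The sketch below (helper names are ours, assuming NumPy) builds \(X_{\text{bind}}\) of Eq (24) from \(X^{(1)}\) and \(J_{\text{unit}}\) and, for the single unit root case \(m_{1}=1\), confirms that the solution reduces to \(\alpha_{s}\) of Eq (12):

```python
import numpy as np

def solve_pi_parameters(X1, J_unit):
    """Solve Eq (25) for theta(s), s = 1..d, given the d x m1 seed matrix X^(1)
    and the m1 x m1 unit Jordan matrix."""
    d, m1 = X1.shape
    X_bind = np.vstack([X1 @ J_unit, X1]).T          # m1 x 2d, Eq (24)
    thetas = []
    for s in range(1, d + 1):
        # columns are 1-based in the paper: (d-s+2)..(d-s+m1+1) and d-s+1
        cols = [d - s + 1 + j for j in range(m1)]    # 0-based column indices
        A = X_bind[:, cols]
        b = X_bind[:, d - s]
        thetas.append(np.linalg.solve(A, b))
    return np.array(thetas)                          # d x m1

# Single unit root (m1 = 1): theta_{1,s} must reduce to alpha_s of Eq (12)
c1 = np.array([1.0, 0.8, 1.3, 0.9])                  # illustrative seed-vector
theta = solve_pi_parameters(c1[:, None], np.array([[1.0]]))
c_ext = np.concatenate([c1, c1[:1]])                 # c^{(d+1)} = c^{(1)}
alpha = np.array([c_ext[4 - s] / c_ext[4 - s + 1] for s in range(1, 5)])
assert np.allclose(theta[:, 0], alpha)
assert np.isclose(np.prod(theta[:, 0]), 1.0)
```

The uniqueness condition discussed next corresponds to the seasonal \(m_{1}\times m_{1}\) systems above being non-singular.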
The uniqueness of \(\theta(s)\) is guaranteed by the linear independence of the columns of \(X^{(1)}\). Based on the above results, we propose a new estimation method for PIAR\((p)\) models which uses the eigen information of the multi-companion matrix in their multi-companion representations. A special case arises when a PIAR\((p)\) model has exactly \(p\) unit roots. In this case, Eq (22) reduces to \((1-\theta_{1,s}L-\cdots-\theta_{p,s}L^{p})X_{t}=\varepsilon_{t}\), and the \(\mathbf{F}_{d}\) matrix in its multi-companion representation has \(p\) unit eigenvalues while all other eigenvalues are zero. Due to the special properties of the multi-companion matrix, see Boshnakov and Iqelan (2009, Lemma 1) and Boshnakov (2002, after Eq 5.4), the \(dp\) seed-parameters collected in \(X^{(1)}\) are sufficient to determine the entire eigen information of \(\mathbf{F}_{d}\). In turn, the \(dp\) seed-parameters are also sufficient to determine the PI-parameters \(\theta(s)\) for all seasons via Eq (25). Therefore, we regard Eq (25) as a bridge that transfers the eigen information of \(\mathbf{F}_{d}\) into information about the parameters of the PI-filter. Moreover, instead of estimating the PI-parameters directly, we estimate the eigen information of \(\mathbf{F}_{d}\), or more precisely, the seed-parameters of \(\mathbf{F}_{d}\). Finally, an optimization routine is applied to find the estimates of the seed-parameters which minimize the residual sum of squares of the PIAR\((p)\) model, and the estimated PI-parameters are then obtained by solving Eq (25). A more general case arises when a PIAR\((p)\) model has \(m_{1}\) unit roots with \(p>m_{1}\).
In this case, Eq (22) can be viewed as a two-step process such that \[\begin{cases}(1-\theta_{1,s}L-\cdots-\theta_{m_{1},s}L^{m_{1}})X_{t}=y_{t},\\ (1-\psi_{1,s}L-\cdots-\psi_{p-m_{1},s}L^{p-m_{1}})y_{t}=\varepsilon_{t},\end{cases} \tag{26}\] where the first and second equations describe PIAR\((m_{1})\) and PAR\((p-m_{1})\) processes, respectively. It is worth noting that the parameters of these two steps in Eq (26) can be estimated separately. In the first step, given that the PIAR\((m_{1})\) process has exactly \(m_{1}\) unit roots, we can construct \(X^{(1)}\) and \(X_{\text{bind}}\), and the estimates of the PI-parameters \(\theta(s)\) are obtained by solving Eq (25). After that, applying the PI-filter transforms \(X_{t}\) into a periodically stationary series; the second step is therefore a PAR\((p-m_{1})\) process which can be estimated either by periodic Yule-Walker (see Pagano et al., 1978) or by weighted least squares (see Basawa and Lund, 2001). A key point to emphasize is that, rather than estimating the PI-parameters of a PIAR\((p)\) model directly, our estimation method takes the seed-parameters of the multi-companion matrix as the unknowns. Eq (25) is then utilized as a bridge to transfer the estimation information from seed-parameters to PI-parameters. As a result, our method offers a significant advantage over the existing method of Boswijk et al. (1997), which requires dealing with non-linear restrictions between PI-parameters, and it extends the current literature, which mainly deals with quarterly PIAR models, to general cases. In particular, we find that by setting \(d=4\), the quarterly PI-parameters derived by solving Eq (25) automatically satisfy the non-linear restrictions given by Boswijk et al. (1997) for the cases of one, two and three unit roots. Hence, the approach used in Boswijk et al. (1997) is a special case of our multi-companion method.
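The decoupling of the two steps in Eq (26) can be checked on a noiseless toy example (our own illustration with arbitrary coefficients; in practice the PI-filter must itself be estimated first): applying the known unit PI-filter to \(X_{t}\) recovers the PAR part exactly, after which per-season least squares recovers the \(\psi\) coefficients.

```python
import numpy as np

# Noiseless toy version of Eq (26) with one unit root: step 1 removes the
# unit PI-filter, step 2 fits the remaining PAR(1) part season by season.
d = 4
theta = np.array([2.0, 0.5, 0.8, 1.25])   # unit PI-filter: prod(theta) = 1
psi = np.array([0.5, -0.4, 0.3, 0.6])     # periodically stationary PAR(1) part

n = 40
y = np.zeros(n); y[0] = 1.0
x = np.zeros(n); x[0] = y[0]
for t in range(1, n):
    s = t % d
    y[t] = psi[s] * y[t - 1]              # (1 - psi_s L) y_t = 0 (no noise)
    x[t] = theta[s] * x[t - 1] + y[t]     # (1 - theta_s L) x_t = y_t

# Step 1: apply the known unit PI-filter to x, recovering y.
s_idx = np.arange(1, n) % d
y_hat = x[1:] - theta[s_idx] * x[:-1]     # y_hat[i] corresponds to y[i + 1]

# Step 2: per-season least squares for the PAR(1) coefficients psi_s.
psi_hat = np.zeros(d)
for s in range(d):
    ts = np.array([t for t in range(2, n) if t % d == s])
    num = np.sum(y_hat[ts - 1] * y_hat[ts - 2])
    den = np.sum(y_hat[ts - 2] ** 2)
    psi_hat[s] = num / den
```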
## 5 Monte Carlo Analysis

This section presents Monte Carlo experiments that verify the estimation method for periodically integrated autoregressive models developed in Section 4. To generate the periodically integrated series, we use the method introduced by Boshnakov and Iqelan (2009), which is based on the multi-companion representation in Eq (5) and the eigen information of the multi-companion matrix. Table 2 provides the eigen information of the multi-companion matrices used to generate periodically integrated series with quarterly period. The notation \(c_{i}\) represents the \(i\)-th eigenvector (or seed-vector) associated with the \(i\)-th unit eigenvalue of the multi-companion matrix, and \(c_{i}^{(j)}\) denotes the \(j\)-th element of the \(i\)-th eigenvector. Note that all the unit eigenvalues given in Table 2 are simple, so the generated series have periodic integration order one. The remaining eigenvalues of the multi-companion matrices of Models I to III are zero. Table 2 does not include the eigenvectors corresponding to the zero eigenvalues, since they are just standard basis vectors in an appropriate arrangement (see Boshnakov and Iqelan, 2009, Lemma 1). The models in Table 2 have one, two and three simple unit roots, respectively. The corresponding periodic filter representations of the models are:

* Model I: \(X_{t}=\theta_{1,s}X_{t-1}+\varepsilon_{t}\) where \(\varepsilon_{t}\sim N(0,\sigma_{s}^{2})\);
* Model II: \(X_{t}=\theta_{1,s}X_{t-1}+\theta_{2,s}X_{t-2}+\varepsilon_{t}\) where \(\varepsilon_{t}\sim N(0,\sigma_{s}^{2})\);
* Model III: \(X_{t}=\theta_{1,s}X_{t-1}+\theta_{2,s}X_{t-2}+\theta_{3,s}X_{t-3}+\varepsilon_{t}\) where \(\varepsilon_{t}\sim N(0,\sigma_{s}^{2})\).
The numerical values of the PI-parameters \(\theta_{i,s}\) and the variances of the periodic white noise \(\sigma_{s}^{2}\) are listed in Table 3 in the rows designated as 'true' values. We set the sample size of each generated series to 240 and run the simulation 2000 times for each model. Table 3 shows the mean, standard deviation (sd) and root mean squared error (RMSE) of the estimated parameters. Across all three models, the mean values of the estimated parameters derived from the 2000 simulations closely align with the true values. Furthermore, the standard deviations and RMSE values are relatively low, indicating the robustness of our estimation method. \begin{table} \begin{tabular}{l l l l l} \hline \hline & & \multicolumn{2}{l}{Model I: PI\({}_{1}\)AR\((1)\)} \\ \(\lambda_{1}=1;c_{1}^{(j)}\) & -0.64 & 0.46 & 0.65 & 0.68 \\ \hline \hline & & \multicolumn{2}{l}{Model II: PI\({}_{1}\)AR\((2)\)} \\ \(\lambda_{1}=1;c_{1}^{(j)}\) & 0.08 & -0.41 & 0.52 & 0.40 \\ \(\lambda_{2}=1;c_{2}^{(j)}\) & 0.22 & 0.29 & -0.58 & -0.49 \\ \hline \hline & & \multicolumn{2}{l}{Model III: PI\({}_{1}\)AR\((3)\)} \\ \(\lambda_{1}=1;c_{1}^{(j)}\) & -0.64 & -0.46 & 0.65 & 0.68 \\ \(\lambda_{2}=1;c_{2}^{(j)}\) & -0.23 & 0.95 & -0.83 & -0.89 \\ \(\lambda_{3}=1;c_{3}^{(j)}\) & -0.30 & 0.91 & 0.47 & -0.15 \\ \hline \hline \end{tabular} \end{table} Table 2: Eigen information of the multi-companion matrix used to generate quarterly periodically integrated series It is worthwhile to point out that our method avoids imposing the non-linear restrictions between PI-parameters for Models I to III during the estimation process. Nevertheless, it can be checked that in each simulation from Models I to III, the PI-parameters derived by our estimation method automatically satisfy the non-linear restrictions given in Boswijk et al. (1997). This outcome further affirms the effectiveness and validity of our estimation approach.
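As a quick consistency check between Tables 2 and 3 (our own verification; for a single unit root, Eq (25) reduces to a ratio of consecutive seed-vector elements, superscripts taken modulo \(d\)), the 'true' PI-parameters of Model I can be reproduced from its seed-vector, and their product equals one as required of a unit PI-filter:

```python
import numpy as np

# Seed-vector of Model I (Table 2) and its 'true' PI-parameters (Table 3),
# both rounded to two decimals as printed in the paper.
c = np.array([-0.64, 0.46, 0.65, 0.68])
theta_true = np.array([-1.07, 0.95, 0.70, -1.41])

d = len(c)
# For m1 = 1, Eq (25) reduces to theta_{1,s} = c^{(d-s+1)} / c^{(d-s+2)},
# with superscripts wrapped modulo d (0-based indexing below).
theta = np.array([c[(d - s) % d] / c[(d - s + 1) % d] for s in range(1, d + 1)])
```

The product of the four ratios telescopes to exactly one, while the individual values match Table 3 up to the two-decimal rounding of the seed-vector.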
## 6 Application

In this section, we apply periodically integrated autoregressive models to forecast future values of U.S. monthly electricity end use, and compare the forecasting performance of the PIAR model with a non-periodic model (ARIMA) and a PAR model. The data are downloaded from the Monthly Energy Review of the U.S. Energy Information Administration 1. The series contains 50 years of data from January 1973 to November 2022, measured in billion kilowatt-hours (BKWh). We partition the series into two sets: observations from January 1973 to December 2019 (47 years) used for model estimation, and out-of-sample data from January 2020 to November 2022 used for forecast comparison. As the series is recorded monthly, we take its period to be \(d=12\), so the sample size used for model estimation is \(n=N\times d=47\times 12=564\). Footnote 1: [https://www.eia.gov/totalenergy/data/monthly/](https://www.eia.gov/totalenergy/data/monthly/) As shown in the top plot of Figure 1, the \(N=47\) years of data exhibit significant monthly variation and an upward trend. The monthly variation is also seen in the middle plot of Figure 1, where the electricity use remains relatively high both in summer (July and August) and in winter (January and December). A log-transformation can sometimes remove the seasonal variation (by turning it into a seasonal mean, or 'level'), but not here. Indeed, the bottom graph in Figure 1 shows the log-transformed series, centred by subtracting the overall mean. A PAR(5) model is first considered to fit the series, for which the AIC and BIC reach their minimum values of -2697 and -2433, respectively. We find that the estimated multi-companion matrix of the PAR(5) model exhibits a pair of eigenvalues whose moduli are close to unity.
This indicates the existence of two unit roots in the process, and hence PIAR models with two simple and with two chained unit roots should both be considered. As the number of unit roots in a periodic time series does not affect the autoregressive order selection (see Boswijk et al., 1997), we can fix the order at \(p=5\) when fitting the PIAR models. Therefore, a PI\({}_{1}\)AR\((5)\) model with two simple unit roots and a PI\({}_{2}\)AR\((5)\) model with two chained unit roots are constructed. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Model I: & \(\theta_{1,1}\) & \(\theta_{1,2}\) & \(\theta_{1,3}\) & \(\theta_{1,4}\) & \(\sigma_{1}^{2}\) & \(\sigma_{2}^{2}\) & \(\sigma_{3}^{2}\) & \(\sigma_{4}^{2}\) & & \\ \hline true & -1.07 & 0.95 & 0.70 & -1.41 & 0.15 & 0.46 & 0.24 & 0.08 & & & \\ mean & -1.07 & 0.95 & 0.70 & -1.41 & 0.15 & 0.45 & 0.23 & 0.07 & & & \\ sd & 0.01 & 0.02 & 0.01 & 0.01 & 0.02 & 0.07 & 0.04 & 0.01 & & & \\ RMSE & 0.01 & 0.02 & 0.01 & 0.01 & 0.02 & 0.07 & 0.04 & 0.01 & & & \\ \hline \hline Model II & \(\theta_{1,1}\) & \(\theta_{1,2}\) & \(\theta_{1,3}\) & \(\theta_{1,4}\) & \(\theta_{2,1}\) & \(\theta_{2,2}\) & \(\theta_{2,3}\) & \(\theta_{2,4}\) & \(\sigma_{1}^{2}\) & \(\sigma_{2}^{2}\) & \(\sigma_{3}^{2}\) & \(\sigma_{4}^{2}\) \\ \hline true & -0.73 & 1.26 & -4.00 & -1.85 & -1.12 & 0.16 & 4.17 & -1.31 & 0.29 & 0.37 & 0.44 & 0.02 \\ mean & -0.72 & 1.27 & -4.00 & -1.86 & -1.10 & 0.16 & 4.15 & -1.33 & 0.28 & 0.37 & 0.43 & 0.02 \\ sd & 0.02 & 0.02 & 0.05 & 0.01 & 0.02 & \(<0.01\) & 0.08 & 0.03 & 0.05 & 0.07 & 0.08 & \(<0.01\) \\ RMSE & 0.02 & 0.02 & 0.05 & 0.01 & 0.03 & \(<0.01\) & 0.08 & 0.04 & 0.05 & 0.07 & 0.08 & \(<0.01\) \\ \hline Model III & \(\theta_{1,1}\) & \(\theta_{1,2}\) & \(\theta_{1,3}\) & \(\theta_{1,4}\) & \(\theta_{2,1}\) & \(\theta_{2,2}\) & \(\theta_{2,3}\) & \(\theta_{2,4}\) & \(\theta_{3,1}\) & \(\theta_{3,2}\) & \(\theta_{3,3}\) & \(\theta_{3,4}\) \\ \hline true & -0.16 & 1.83 & 1.10 & -3.21 &
-0.5 & 0.28 & -2.01 & 3.53 & 0.55 & 0.91 & -0.31 & -6.45 \\ mean & -0.15 & 1.83 & 1.10 & -3.23 & -0.5 & 0.28 & -2.02 & 3.56 & 0.55 & 0.91 & -0.31 & -6.52 \\ sd & \(<0.01\) & 0.01 & 0.01 & 0.03 & \(<0.01\) & \(<0.01\) & 0.02 & 0.04 & \(<0.01\) & 0.01 & \(<0.01\) & 0.08 \\ RMSE & \(<0.01\) & 0.01 & 0.01 & 0.03 & \(<0.01\) & \(<0.01\) & 0.02 & 0.05 & \(<0.01\) & 0.01 & \(<0.01\) & 0.10 \\ \hline & \(\sigma_{1}^{2}\) & \(\sigma_{2}^{2}\) & \(\sigma_{3}^{2}\) & \(\sigma_{4}^{2}\) & & & & & & & \\ \hline true & 0.22 & 0.35 & 0.25 & 0.05 & & & & & & & \\ mean & 0.22 & 0.35 & 0.25 & 0.05 & & & & & & & \\ sd & 0.04 & 0.05 & 0.04 & 0.01 & & & & & & & \\ RMSE & 0.04 & 0.05 & 0.04 & 0.01 & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 3: Simulation results: mean value, standard deviation (sd) and root mean squared error (RMSE) of parameter estimates from simulation for Model I to Model III To determine whether there are two simple or two chained unit roots in the process, a likelihood ratio test is performed. Two separate null hypotheses are considered: the process has two simple unit roots (i.e. PI\({}_{1}\)AR\((5)\)), and the process has two chained unit roots (i.e. PI\({}_{2}\)AR\((5)\)). The alternative hypothesis is that the process does not include any unit roots (i.e. PAR(5)). The likelihood ratio test statistic is \(Q_{LR}=N\log(|S^{-1}S_{0}|)\), where \(S_{0}\) and \(S\) are the residual sum of squares matrices under the null and the alternative hypothesis, respectively. Under the null, the test statistic follows an asymptotic distribution (see Zhu, 2023, Thm. 5.4.2) whose quantile values can be found in Johansen et al. (1995, Table 15.1). The result is that we do not reject the null that there are two chained unit roots in the process (\(Q_{LR}=4.08<12.21\)). In addition, the PI\({}_{2}\)AR\((5)\) model has the smallest AIC and BIC values compared with PI\({}_{1}\)AR\((5)\) and PAR(5).
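The test statistic quoted above is straightforward to compute from the two residual sum of squares matrices; a minimal sketch (our own helper, not code from the paper):

```python
import numpy as np

def lr_statistic(S0, S, N):
    """Q_LR = N * log(|S^{-1} S0|), with S0 and S the residual sum of squares
    matrices under the null and the alternative, respectively."""
    # slogdet is numerically safer than log(det(...)) for near-singular matrices
    sign, logdet = np.linalg.slogdet(np.linalg.solve(S, S0))
    return N * logdet
```

When the restricted and unrestricted fits coincide (\(S_{0}=S\)) the statistic is zero; larger values indicate that imposing the unit roots worsens the fit.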
Therefore, we choose a PI\({}_{2}\)AR\((5)\) as the final model. Let \(\{X_{t},t=1,\ldots,n\}\) be the log-transformed series, and write the representation of the PI\({}_{2}\)AR\((5)\) as \[(1-\psi_{1,s}L-\psi_{2,s}L^{2}-\psi_{3,s}L^{3})(1-\beta_{s}L)(1-\alpha_{s}L)X_{t}=\varepsilon_{t},\quad\varepsilon_{t}\sim N(0,\sigma_{s}^{2}),\] where \(\psi_{3,s}(L)=1-\psi_{1,s}L-\psi_{2,s}L^{2}-\psi_{3,s}L^{3}\) is the periodic autoregressive filter of order 3, and \((1-\alpha_{s}L)\) and \((1-\beta_{s}L)\) are two unit PI-filters with \(\prod_{s=1}^{d}\alpha_{s}=\prod_{s=1}^{d}\beta_{s}=1\). The two-step method introduced in Eq (26) is applied to estimate the above PI\({}_{2}\)AR\((5)\) model; the estimation result is given in Table 4. The estimated parameters \(\hat{\alpha}_{s}\) and \(\hat{\beta}_{s}\) satisfy the restrictions \(\prod_{s=1}^{d}\alpha_{s}=\prod_{s=1}^{d}\beta_{s}=1\). Moreover, it can be shown that the roots of the set of polynomials \(\left\{\hat{\psi}_{3,s}(L)\right\}_{s=1}^{d}\) lie outside the unit circle, and hence \(\hat{\psi}_{3,s}(L)\) is a periodic autoregressive filter. Indeed, the PI\({}_{2}\)AR\((5)\) model is found adequate to capture the periodically integrated structure of the transformed U.S. monthly electricity end-use data. The adequacy is visually validated by Figure 2, where the periodic autocorrelations of the residuals at each season lie approximately within the dashed blue lines (namely \(\pm 1.96/\sqrt{N}\)). This suggests that the periodic autocorrelations of the residuals of the PI\({}_{2}\)AR\((5)\) model are insignificant at each season. The adequacy of the PI\({}_{2}\)AR\((5)\) model is also numerically validated by Table 5, where the modified McLeod portmanteau test statistic (see McLeod, 1994, Eq 4.5) is calculated with the maximum lag set to 12.
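The visual check in Figure 2 amounts to comparing sample periodic autocorrelations, season by season, against the band \(\pm 1.96/\sqrt{N}\). A minimal sketch of such a statistic (our own illustrative definition, not the McLeod test itself):

```python
import numpy as np

def periodic_acf(res, d, lag):
    """Sample periodic autocorrelation r_s(lag) of a residual series, one value
    per season s = 1..d (the season of time t is taken as t mod d, 0-based)."""
    n = len(res)
    r = np.zeros(d)
    for s in range(d):
        start = s
        while start < lag:          # first index in season s with t - lag >= 0
            start += d
        t = np.arange(start, n, d)
        num = np.mean(res[t] * res[t - lag])
        den = np.sqrt(np.mean(res[t] ** 2) * np.mean(res[t - lag] ** 2))
        r[s] = num / den
    return r

# Significance band used in the adequacy check: +/- 1.96 / sqrt(N)
band = 1.96 / np.sqrt(47)
```

By the Cauchy-Schwarz inequality each \(r_{s}\) lies in \([-1,1]\); residuals of an adequate model should fall within the much tighter \(\pm 1.96/\sqrt{N}\) band at every season and lag.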
Table 5 shows that, except for two seasons (\(s=7\) and \(s=11\)), the periodic autocorrelations of the residuals at lags \(1,2,\ldots,12\) are approximately zero. Combined with the information delivered by Figure 2, we conclude that the residuals of the PI\({}_{2}\)AR\((5)\) model are periodically uncorrelated with each other. Moreover, we check the normality of the standardized residuals, obtained by dividing the original residuals of the PI\({}_{2}\)AR\((5)\) model by their seasonal standard deviations \(\hat{\sigma}_{s}\) given in Table 4. Figure 3 shows the density and the Q-Q plot of the standardized residuals, which indicate that they are approximately normally distributed. In conclusion, the residuals of the PI\({}_{2}\)AR\((5)\) model are periodic white noise, normally distributed with mean 0 and variance \(\hat{\sigma}_{s}^{2}\). The PI\({}_{2}\)AR\((5)\) model is therefore adequate to capture the periodically integrated structure of the series. Next, we provide an explanation of the second order PI-filter \((1-\beta_{s}L)(1-\alpha_{s}L)\) in this PI\({}_{2}\)AR\((5)\) model.
Finally, the forecasting performance of the PI\({}_{2}\)AR\((5)\) model is investigated for the out-of-sample observations from January 2020 to November 2022, using the estimated parameters given in Table 4. The forecasts have been transformed back to the original scale, and the result is provided in Figure 5. The bottom-left panel of Figure 5 shows the forecast values (red line) and the out-of-sample data (black line) from January 2020 to November 2022, along with the confidence intervals of the forecasts (blue ribbon), indicating that the forecasts of the PI\({}_{2}\)AR\((5)\) model are reliable. In addition, the bottom-right panel of Figure 5 gives the ACF plot of the forecast errors, where we observe that the forecast errors are uncorrelated with each other; the PI\({}_{2}\)AR\((5)\) model therefore effectively captures the randomness of the series. Overall, we are satisfied with the out-of-sample forecasting performance of the PI\({}_{2}\)AR\((5)\) model. For comparison, we also use ARIMA(2,1,3) and PAR(5) models to produce out-of-sample forecasts for the monthly electricity end-use data. The order of the ARIMA model is determined automatically using the 'forecast' package in R (see Hyndman and Khandakar, 2008). Figure 6 compares the forecast performance of the PI\({}_{2}\)AR\((5)\), PAR(5) and ARIMA(2,1,3) models in terms of MAPE and RMSE values. Both the MAPE and RMSE values of the ARIMA(2,1,3) model are significantly higher than those of PAR(5) and PI\({}_{2}\)AR\((5)\), which suggests a noticeable increase in forecast accuracy from using periodic models. Moreover, the periodically integrated autoregressive model PI\({}_{2}\)AR\((5)\) appears more accurate than the periodic autoregressive model PAR(5) over longer forecast horizons.
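The two accuracy measures used in the forecast comparison are standard; for completeness, a minimal sketch (our own helper functions):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def rmse(actual, forecast):
    """Root mean squared error, in the units of the series."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))
```

MAPE is scale-free, which suits comparisons across months of very different consumption levels, while RMSE penalizes large misses in absolute terms.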
Therefore, we choose PI\({}_{2}\)AR\((5)\) as the final model to produce forecast values for U.S. monthly electricity use.

Figure 4: The effect of PI-filters on univariate series and its seasonal components. The black and brown dashed lines in 4b represent the integrated series \(Z_{T}^{(1)}\) and \(Z_{T}^{(2)}\) respectively.

Figure 3: Normality check for standardized residuals of the PI\({}_{2}\)AR\((5)\) model

## 7 Conclusion

In this paper, we have introduced and applied the multi-companion method for the analysis of PIAR models. This approach relies on the eigen information of the multi-companion matrix when expressing PIAR models in their multi-companion representations. By representing the multi-companion matrix in its Jordan canonical form, both the similarity matrix and the Jordan matrix play important roles. The properties of the Jordan matrix are employed to propose a general definition of periodic integration, which extends the existing body of literature (see, for example, Osborn et al., 1988 and Boswijk and Franses, 1996) beyond its previous exclusive focus on quarterly periodic integration of order one. Moreover, given that the PI-parameters can be parametrized in terms of the seed-parameters of the multi-companion matrix, we propose a new estimation approach which departs from the conventional method of directly estimating PI-parameters. This approach first determines the seed-parameters, which then serve as a bridge to the estimates of the PI-parameters via the parametrization results.
Figure 5: Forecast result from the PI\({}_{2}\)AR\((5)\) model, where the top panel shows the actual data (black line) over the whole observation period with the forecasts (red line) and their confidence interval (blue ribbon) covering the prediction period, the bottom-left panel displays a zoomed-in version of the forecasting period, and the bottom-right panel shows the ACF plot of the forecast errors.

Figure 6: Forecasting performance of PI\({}_{2}\)AR\((5)\) (red line), PAR\((5)\) (green line) and ARIMA(2,1,3) (blue line) based on MAPE (left) and RMSE (right) values.

As a result, this approach offers a significant advantage over the existing methods (see Boswijk et al., 1997; Lopez-de Lacalle, 2005), which require dealing with the non-linear restrictions between PI-parameters. Additionally, our method expands the scope of analysis from the estimation of quarterly PIAR models to more general cases. To validate and demonstrate the robustness and effectiveness of our multi-companion method for the estimation and forecasting of PIAR models, we have conducted both a simulation study and a practical application. The results of this paper offer valuable insights for the analysis of periodically integrated series. These insights can be employed to explore various aspects, including the identification of common stochastic trends and the investigation of cointegration and periodic cointegration in macroeconomic series. Moreover, given the prominence of unit root tests in non-periodic time series analysis, it is worthwhile to advance research on unit root tests for periodically integrated series. Our multi-companion method, as demonstrated in this paper, holds promise for further exploration and application in this context.
2301.13651
Radio Study of the Pulsar Wind Nebula Powered by PSR B1706-44
PSR B1706$-$44 is an energetic gamma-ray pulsar located inside supernova remnant (SNR) G343.1$-$2.3 and it powers a compact pulsar wind nebula (PWN) that shows torus and jet structure in X-rays. We present a radio study of the PWN using Australia Telescope Compact Array (ATCA) observations at 3, 6, 13, and 21\,cm. We found an overall arc-like morphology at 3 and 6\,cm, and the ``arc" shows two distinct peaks at 6\,cm. The radio emission is faint inside the X-ray PWN and only brightens beyond that. We develop a thick torus model with Doppler boosting effect to explain the radio PWN structure. The model suggests a bulk flow speed of $\sim 0.2c$, which could indicate significant deceleration of the flow from the X-ray emitting region. Our polarization result reveals a highly ordered toroidal $B$-field in the PWN. Its origin is unclear given that the supernova reverse shock should have interacted with the PWN. At a larger scale, the 13 and 21\,cm radio images detected a semi-circular rim and an east-west ridge of G343.1$-$2.3. We argue that the latter could possibly be a pulsar tail rather than a filament of the SNR, as supported by the flat radio spectrum and the alignment between the magnetic field and its elongation.
Y. H. Liu, C. -Y. Ng, R. Dodson
2023-01-31T14:09:04Z
http://arxiv.org/abs/2301.13651v1
# Radio Study of the Pulsar Wind Nebula Powered by PSR B1706\(-\)44

###### Abstract

PSR B1706\(-\)44 is an energetic gamma-ray pulsar located inside supernova remnant (SNR) G343.1\(-\)2.3, and it powers a compact pulsar wind nebula (PWN) that shows torus and jet structure in X-rays. We present a radio study of the PWN using Australia Telescope Compact Array (ATCA) observations at 3, 6, 13, and 21 cm. We found an overall arc-like morphology at 3 and 6 cm, and the "arc" shows two distinct peaks at 6 cm. The radio emission is faint inside the X-ray PWN and only brightens beyond it. We develop a thick torus model with Doppler boosting to explain the radio PWN structure. The model suggests a bulk flow speed of \(\sim 0.2c\), which could indicate significant deceleration of the flow from the X-ray emitting region. Our polarization result reveals a highly ordered toroidal \(B\)-field in the PWN. Its origin is unclear given that the supernova reverse shock should have interacted with the PWN. At a larger scale, the 13 and 21 cm radio images detected a semi-circular rim and an east-west ridge of G343.1\(-\)2.3. We argue that the latter could possibly be a pulsar tail rather than a filament of the SNR, as supported by its flat radio spectrum and the alignment between the magnetic field and its elongation.

Pulsar wind nebulae (2215) -- Supernova remnants (1667) -- Polarimetry (1278)

## 1 Introduction

A pulsar is a compact star born in a supernova explosion. It emits periodic signals and has a strong surface magnetic field. Particles around a pulsar are accelerated to form a relativistic wind as the pulsar spins down and constantly loses energy. This relativistic wind interacts with the ambient medium and forms a synchrotron nebula, called a pulsar wind nebula (PWN). A PWN is able to accelerate particles to very high energies, emitting synchrotron radiation from the radio to hard X-ray bands.
In X-rays, torus-jet features are commonly detected in young PWN systems (Kargaltsev and Pavlov, 2008; Ng and Romani, 2004, 2008). Theories suggest that the torus structure is due to the shocked pulsar wind flowing into the equatorial region, while the jets are wind in the polar regions confined by magnetic hoop stress (see Porth et al., 2017). In the radio band, however, these features are rarely seen. For instance, the Crab, 3C 58, and the PWN inside G292.0\(+\)1.8 are instead filled with wisps and filamentary structures (Dubner et al., 2017; Bietenholz, 2006; Gaensler and Wallace, 2003); some show arc-like structure, such as CTB 87 (Kothes et al., 2020), or double-lobed morphology, such as G21.5\(-\)0.9, G76.9\(+\)1.0, and DA 495 (Bietenholz and Bartel, 2008; Arzoumanian et al., 2011; Kothes et al., 2008). Previous radio polarization observations also revealed different magnetic field configurations in young PWNe. Theoretical work suggests that the radial component of the magnetic field decays faster than the toroidal component (\(\sim\)\(r^{-2}\) vs. \(\sim\)\(r^{-1}\)) (Porth et al., 2017). Therefore, the \(B\)-field beyond the termination shock should be mostly toroidal. This, however, is not supported by observations. Only a few cases, including Vela and Boomerang, show a toroidal \(B\)-field (Dodson et al., 2003; Kothes et al., 2006), but many others, e.g., the Crab Nebula, 3C 58, G21.5\(-\)0.9, and Dragonfly, have complex or radial field structure (Reich, 2002; Lai et al., 2022; Jin et al., 2022). The physical cause of such diverse morphology and magnetic field structure among radio PWNe is not fully understood, and a larger sample is needed for further study. In this work, we present a new radio study of the PWN powered by the Vela-like pulsar B1706\(-\)44. It is one of the few \(\gamma\)-ray pulsars detected in the early days with EGRET (McAdam et al., 1993).
It has a characteristic age \(\tau_{c}=\)17.1 kyr and a spin-down power \(\dot{E}\approx 4\times 10^{36}\) erg s\({}^{-1}\). A recent study with _Chandra_ found that the pulsar is moving eastward with a projected velocity of around 130 km s\({}^{-1}\) (de Vries et al., 2021). The association between the pulsar and the nearby supernova remnant (SNR) G343.1\(-\)2.3 is controversial. The remnant has a circular shell and an east-west ridge in the southern part (Dodson and Golap, 2002). The pulsar is located at the tip of the ridge, near the center of the shell. The SNR distance of \(\sim 3.5\) kpc estimated from the \(\Sigma\)-\(D\) relationship is compatible with the pulsar dispersion measure distance \(d\approx 2.3\) kpc (Cordes and Lazio, 2002; Yao et al., 2017; McAdam et al., 1993). The High Energy Stereoscopic System (H.E.S.S.) detected extended TeV emission west of the pulsar, which may also be connected with the SNR (H. E. S. S. Collaboration et al., 2011). In addition, the pulsar powers an X-ray PWN with compact torus and jet structure (Romani et al., 2005). A recent study found diffuse emission around the torus and a long, curved outer jet (de Vries et al., 2021). In this study, we aim to perform high resolution radio observations of B1706 PWN to directly compare with the compact X-ray structures in the _Chandra_ images and to better understand the magnetic properties of the PWN. Previous observations have detected a radio PWN surrounding the pulsar (Frail et al., 1994; Giacani et al., 2001; Dodson and Golap, 2002; Romani et al., 2005). However, the pulsar emission was not clearly distinguished from the PWN. Some ATCA observations excluded the pulsar emission, but few had both high resolution and high sensitivity. Moreover, there has been no previous study of the magnetic field in the B1706 PWN.
In this paper, we analyze new and archival radio observations of the PWN powered by PSR B1706\(-\)44 (hereafter B1706 PWN) and SNR G343.1\(-\)2.3 taken with the Australia Telescope Compact Array (ATCA) at 3, 6, 13, and 21 cm. The new high resolution observations aim at a better study of the morphology and polarization of this PWN. We describe the observations and data reduction process in Section 2. Section 3 shows the results, which are discussed in Section 4. We summarize our findings in Section 5.

## 2 Observations and Data Reduction

We carried out new radio observations of B1706 PWN in the 3 and 6 cm bands with ATCA in 6 km array configurations on 2017 Nov 3 and 2018 Jan 11. We also analyzed archival ATCA observations taken in the 3, 6, 13, and 21 cm bands with various array configurations, which have previously been analyzed by Dodson and Golap (2002) and Romani et al. (2005). All the 3 and 6 cm band data were taken with the pulsar binning mode, providing a high time resolution. We then selected only off-pulse data to "gate out" the pulsar emission and search for faint PWN structure in the surroundings. Table 1 lists the detailed observation parameters of all the data. The 3 and 6 cm observations were performed simultaneously, centering at 8640 MHz and 4800 MHz, as well as at 8997.5 MHz and 5497.5 MHz. We also selected observations from 2003 and 2005 at 8640 MHz and 8384 MHz. Our new observations in 2017 and 2018 were taken after the Compact Array Broadband Backend (CABB) upgrade (Wilson et al., 2011), which increased the bandwidth from 128 MHz to 2048 MHz. At 3 cm, the pre-CABB and post-CABB integration times are 64.0 hr and 21.2 hr, respectively, with a total _u-v_ coverage from 0.8 k\(\lambda\) to 197.4 k\(\lambda\). At 6 cm, we have 26.7 hr and 21.2 hr of pre- and post-CABB integration time, respectively, covering the _u-v_ space from 0.85 k\(\lambda\) to 127 k\(\lambda\).
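The "gating" step amounts to keeping only those pulse-phase bins in which the profile sits at its off-pulse baseline. A minimal sketch of such a selection (the 16-bin toy profile and the 10% threshold are illustrative assumptions, not the actual ATCA binning setup):

```python
import numpy as np

# Minimal sketch of "gating out" the pulsar: data recorded in pulse-phase
# bins are kept only where the pulse profile sits at its off-pulse baseline.
# The 16-bin toy profile and 10% threshold below are illustrative, not the
# actual ATCA binning setup.

def off_pulse_mask(profile, threshold_frac=0.1):
    """True for phase bins at most threshold_frac of the pulse above baseline."""
    baseline = np.median(profile)
    peak = profile.max()
    return profile <= baseline + threshold_frac * (peak - baseline)

phase = np.arange(16)
profile = 0.05 + np.exp(-0.5 * ((phase - 8) / 0.8) ** 2)  # toy pulse at bin 8
mask = off_pulse_mask(profile)
print(mask.sum(), "of", mask.size, "phase bins retained")
```

With this toy profile, only the three bins around the pulse peak are rejected; the remaining bins would be combined into the off-pulse images.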
The 13 and 21 cm datasets with good quality have total integration times of 19.9 hr and 29.5 hr, respectively. The _u-v_ coverage of the 13 cm observations is 0.2-5.5 k\(\lambda\) and 18-37 k\(\lambda\), and that of the 21 cm band is 0.1-30 k\(\lambda\). We processed the data using the MIRIAD package (Sault et al., 1995). We first flagged the edge channels and data affected by severe radio frequency interference, then followed the standard procedures to calibrate the flux scale, bandpass, and gains. After calibration, we formed Stokes I, Q, and U images using multi-frequency synthesis, weighting the data in inverse proportion to the noise. Since the pre- and post-CABB data were taken over 15 yr apart, we formed separate images at 6 cm to check for any morphological changes. The result shows no significant variability; we therefore combined all off-pulse data at each frequency in a joint analysis to boost the signal.

Table 1: ATCA observations of B1706 PWN used in this study

| Obs. Date | Array Config. | Center Freq. (MHz) | Usable Bandwidth (MHz) | No. of Channels | Integration Time (hr) | Pulsar Binning Mode |
| --- | --- | --- | --- | --- | --- | --- |
| **3 cm** | | | | | | |
| 2002 Jan 06 | 750A | 8640 | 104 | 13 | 7.7 | Y |
| 2002 Feb 16 | 1.5A | 8640 | 104 | 13 | 10.2 | Y |
| 2002 Apr 11 | 6A | 8640 | 104 | 13 | 8.8 | Y |
| 2003 May 18 | 1.5C | 8384, 8640 | 104 | 13 | 8.7 | Y |
| 2003 Jun 23 | 750C | 8384, 8640 | 104 | 13 | 10.2 | Y |
| 2003 Aug 02 | 6D | 8384, 8640 | 104 | 13 | 9.8 | Y |
| 2005 Nov 20 | 1.5C | 8384, 8640 | 104 | 13 | 4.3 | Y |
| 2005 Dec 27 | 6A | 8384, 8640 | 104 | 13 | 4.3 | Y |
| 2017 Nov 03 | 6A | 8997.5 | 1728 | 433 | 10.9 | Y |
| 2018 Jan 11 | 6C | 8997.5 | 1728 | 433 | 10.3 | Y |
| **6 cm** | | | | | | |
| 2002 Jan 06 | 750A | 4800 | 104 | 13 | 7.7 | Y |
| 2002 Feb 16 | 1.5A | 4800 | 104 | 13 | 10.2 | Y |
| 2002 Apr 11 | 6A | 4800 | 104 | 13 | 8.8 | Y |
| 2017 Nov 03 | 6A | 5497.5 | 1728 | 433 | 10.9 | Y |
| 2018 Jan 11 | 6C | 5497.5 | 1728 | 433 | 10.3 | Y |
| **13 cm** | | | | | | |
| 1998 May 29 | 750E | 2496 | 104 | 13 | 0.8 | N |
| 1999 Nov 03 | 210 | 2496 | 104 | 13 | 19.1 | N |
| **21 cm** | | | | | | |
| 1998 May 29 | 750E | 1384 | 104 | 13 | 0.8 | N |
| 1998 Sep 15 | 6A | 1384 | 104 | 13 | 3.3 | N |
| 1999 Nov 03 | 210 | 1384 | 104 | 13 | 19.1 | N |
| 2005 Nov 19 | 1.5C | 1344, 1472 | 104 | 13 | 4.6 | N |
| 2005 Dec 27 | 6A | 1344, 1472 | 104 | 13 | 1.7 | N |

We first focused on the region close to the pulsar and generated the 3 cm image with the best resolution, with a full width at half maximum (FWHM) of 6.3''\(\times\)3.5''. The resulting map has a root mean square (rms) noise of around 0.06 mJy beam\({}^{-1}\), but we did not detect any significant structure near the pulsar. We then generated images using _u-v_ tapering with a larger FWHM to boost the signal-to-noise ratio (S/N).
Tapering sizes are 20'' for the 3 and 6 cm images and 70'' for the 13 and 21 cm images. The 3, 6, and 13 cm images were produced with a Briggs robust parameter of 0.5 to suppress sidelobes. For the 21 cm image, we used natural weighting to maximize the sensitivity. For image deconvolution, we first used the task mossdi to clean strong point sources in the Stokes I, Q, and U images. The residual maps were then cleaned simultaneously using pmosmem, and the models were restored with beam sizes of 20'' for the 3 and 6 cm maps and 70'' for the 13 and 21 cm maps. The rms noise is around 0.06, 0.06, 0.7, and 0.8 mJy beam\({}^{-1}\) in the Stokes I images and around 0.05, 0.04, 0.5, and 0.5 mJy beam\({}^{-1}\) in the Stokes Q and U images at 3, 6, 13, and 21 cm, respectively. Finally, we generated polarization maps with the task impol. We also applied the same procedure to produce full Stokes images of the pulsar using the on-pulse data.

## 3 Results

### Morphology

Figure 1 shows the total intensity maps of B1706 PWN during the off-pulse phase in the 3 and 6 cm bands. The radio PWN is clearly detected. It is elongated in the east-west direction with a size of \(\sim 4\arcmin\times 2\arcmin\) and wraps around PSR B1706\(-\)44 in the north. The eastern part of the PWN is generally brighter, and the flux density peaks at 0.7' and 0.9' east of the pulsar, reaching 0.60 and 0.85 mJy beam\({}^{-1}\) at 3 and 6 cm, respectively. At 3 cm, the nebula has a more uniform brightness distribution than at 6 cm, and it shows an overall arc-like structure. We also found a few protrusions in the PWN: one extends 2' north of the pulsar, and two others extend northwest and southwest from the pulsar, reaching about 2' west of it. None of these protrusions is thicker than 0.5'. The protrusion features should correspond to data with _u-v_ coverages from \(\sim 4.5\) to \(\sim 15\) k\(\lambda\), which are included in the 3 cm data.
These features are therefore less likely to result from the missing flux problem, but more observations are needed to confirm them. At 6 cm, the PWN shows two distinct peaks, resembling two lobes bracketing the pulsar. The eastern part has an elliptical shape of 2' in size. It is brighter than the western part and contains 67% of the flux density of the PWN. The western lobe is fainter and more elongated. It has a size of \(2\arcmin\times 1.5\arcmin\) and is oriented along the northeast-southwest direction. Its surface brightness peaks at \(\sim 0.8\arcmin\) west of the pulsar and is only about 2/3 of that of the peak in the eastern lobe. In both radio images, there is a "bay" feature south of the pulsar with no detectable radio emission. The 3\(\sigma\) flux density limit is around 0.18 mJy beam\({}^{-1}\) in both the 3 and 6 cm bands. The pulsar emission is clearly detected in the on-pulse data in both the 3 and 6 cm bands, with flux densities of 1.0\(\pm\)0.1 mJy beam\({}^{-1}\) and 2.9\(\pm\)0.1 mJy beam\({}^{-1}\), respectively. In Figure 2, we compare the radio images with a 0.5-7 keV X-ray image obtained with the _Chandra_ X-ray Observatory. We compared the X-ray torus/jet feature close to the pulsar with the 3 cm radio image with a beam of 6.3''\(\times\)3.5'', and found no counterpart of the X-ray PWN. This 3 cm radio image has an rms noise of 0.02 mJy beam\({}^{-1}\). We also smoothed the X-ray image to 20'', the same resolution as the radio image. Similarly, there is X-ray emission but no radio emission in the inner PWN, and radio emission only appears in the outer PWN beyond 10'' from the pulsar. Meanwhile, the X-ray emission fades away in the outer PWN \(\sim\)25'' from the pulsar. The 3 and 6 cm images also show a linear, jet-like structure extending west from the end of the northern X-ray jet, with a flux density of 8.2\(\pm\)0.6 mJy at 6 cm. It has a length of \(\sim\)3\({}^{\prime}\), and its width is not clearly resolved by the 6 cm observation (see Figure 2a).
We also find similar emission beyond the southern X-ray jet after smoothing the 6 cm intensity map to 50\({}^{\prime\prime}\) (see Figure 2b), but it is fainter and more diffuse than the emission in the north. More data are needed to confirm these features.

Figure 1: Total intensity images of the B1706 PWN at the 3 and 6 cm bands in the off-pulse phase, with the pulsar emission excluded. The gray scale bar on the right is in units of Jy beam\({}^{-1}\). The crosses at the center of the images show the position of PSR B1706\(-\)44. The circular beams at the bottom left indicate the beam size of FWHM 20\({}^{\prime\prime}\) for both images. The rms noise in both bands is around 60 \(\mu\)Jy beam\({}^{-1}\). The contours correspond to total intensity levels of 0.18, 0.3, 0.45, and 0.6 mJy beam\({}^{-1}\).

Figure 2: (a): Comparison between radio and X-ray emission of B1706 PWN. The 6 cm radio emission is shown in blue with the 20\({}^{\prime\prime}\) restored beam. The _Chandra_ 0.5–7 keV X-ray image is shown in red, also smoothed to 20\({}^{\prime\prime}\) resolution. The inset image is the comparison of the X-ray torus/jet feature and the 3 cm high resolution radio image, with the X-ray torus highlighted by the white region.

Figure 3 shows the total intensity images of the overall SNR in the 13 and 21 cm bands. This is the first time the 13 cm image has been presented, and the SNR shows a similar morphology to that at 21 cm: there is a \(\sim\)40\({}^{\prime}\) semicircular rim in the west and a bright, \(\sim\)30\({}^{\prime}\)-long east-west ridge in the south connecting the pulsar to the western rim. PSR B1706\(-\)44 is located at the tip of the ridge rather than at the center of the SNR. The pulsar emission is visible in these images, since no pulsar binning mode was used in the observations. There is also significant emission detected at the locations of the radio outer jets seen at 6 cm.

### Radio spectrum

We measured the flux densities of the overall B1706 PWN and of each component in the different bands. Background subtraction was performed using measurements from nearby source-free regions. The regions selected to measure the whole PWN and the eastern and western lobes in both bands are shown in Figure 9. The estimated flux densities of the entire PWN are 18.5\(\pm\)1.0, 21.2\(\pm\)0.7, 39.2\(\pm\)1.4, and 47.4\(\pm\)4.0 mJy at 3, 6, 13, and 21 cm, respectively. All these are plotted in Figure 4 and listed in Table 2. The values at 13 and 21 cm have had the pulsar flux density subtracted, and those at 3 and 6 cm are measured from the off-pulse phase images. Due to the lack of short _u-v_ spacings below 0.8 k\(\lambda\) in the 3 and 6 cm observations, the maps have low sensitivity to structures larger than 4\({}^{\prime}\). We therefore derived the spectrum separately from the two higher and the two lower frequency bands. Both give a similar spectral index \(\alpha\approx-0.3\) (\(S_{\nu}\propto\nu^{\alpha}\); see Figure 4). We note that our flux density measurement of the overall PWN at 6 cm (\(\sim\)21 mJy) is comparable to, although slightly lower than, the one measured with the VLA (\(\sim\)28 mJy; Giacani et al., 2001). Since the VLA data have similar _u-v_ coverage to our ATCA observations, the discrepancy could be due to different choices of source and background regions. The flux densities of the eastern lobe are 14.1\(\pm\)0.8 mJy and 8.9\(\pm\)1.0 mJy at 6 and 3 cm, respectively, and those of the western lobe are 7.3\(\pm\)0.4 mJy and 10.5\(\pm\)0.8 mJy. These give \(\alpha=-0.97\) for the eastern lobe and \(\alpha=+0.78\) for the western lobe. We also generated the 3 and 6 cm images after filtering the data to the same _u-v_ coverage, so that both bands suffer the same missing-flux problem, in order to correct the spectral index. We then obtained a spectral index \(\alpha\sim 0\) for the whole PWN, and spectral indices of \(-0.05\) and \(+0.96\) for the eastern and western lobes, respectively.
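The two-point spectral indices quoted here follow from \(S_{\nu}\propto\nu^{\alpha}\), i.e. \(\alpha=\log(S_{1}/S_{2})/\log(\nu_{1}/\nu_{2})\). A quick check with the whole-PWN flux densities, assuming the CABB and pre-CABB band centers (8997.5/5497.5 MHz and 2496/1384 MHz) were the effective frequencies (an assumption of this sketch), recovers \(\alpha\approx-0.3\) for both frequency pairs:

```python
import math

# Two-point spectral index: with S_nu ∝ nu^alpha,
#   alpha = log(S1/S2) / log(nu1/nu2).
# Flux densities are the whole-PWN values from Table 2; the band-center
# frequencies are assumptions about the effective frequencies used.

def spectral_index(s1, nu1, s2, nu2):
    return math.log(s1 / s2) / math.log(nu1 / nu2)

alpha_high = spectral_index(18.5, 8997.5, 21.2, 5497.5)  # 3 cm vs 6 cm
alpha_low = spectral_index(39.2, 2496.0, 47.4, 1384.0)   # 13 cm vs 21 cm
print(round(alpha_high, 2), round(alpha_low, 2))
```

Both values fall close to \(-0.3\), consistent with the spectrum quoted in the text.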
Such a positive spectral index for the western lobe is rather unusual, and more observations are needed to confirm it.

Table 2: Flux density of the pulsar, the entire PWN, and the different components

| Freq. Band | Total PWN (mJy) | Eastern Lobe (mJy) | Western Lobe (mJy) | Pulsar (mJy) |
| --- | --- | --- | --- | --- |
| 3 cm | 18.5\(\pm\)1.0 | 8.9\(\pm\)1.0 | 10.5\(\pm\)0.8 | 1.0\(\pm\)0.1 |
| 6 cm | 21.2\(\pm\)0.7 | 14.1\(\pm\)0.8 | 7.3\(\pm\)0.4 | 2.9\(\pm\)0.1 |
| 13 cm | 39.2\(\pm\)1.4 | – | – | 8.0\(\pm\)0.1 |
| 21 cm | 47.4\(\pm\)4.0 | – | – | 10.2\(\pm\)0.4 |

Figure 3: Total intensity maps of SNR G343.1\(-\)2.3 in the 13 cm and 21 cm bands. The contours are at levels of 4, 8, 12, and 16 mJy beam\({}^{-1}\). The gray scale bars on the right have units of Jy beam\({}^{-1}\). The boxes indicate the field of view of Figure 1. Both images have a beam size of FWHM 70\({}^{\prime\prime}\), which is shown at the bottom left. The rms noise is around 0.7 mJy beam\({}^{-1}\) at 13 cm and around 0.8 mJy beam\({}^{-1}\) at 21 cm.

We also estimated the pulsar flux densities with an extraction region equal to the beam size. For the measurements at 3 and 6 cm, we generated images from only the on-pulse bins to show the pulsar emission. The pulsar has flux densities of 1.0\(\pm\)0.1, 2.9\(\pm\)0.1, 8.0\(\pm\)0.1, and 10.2\(\pm\)0.4 mJy at 3, 6, 13, and 21 cm, respectively. The results are plotted in Figure 4. Due to the low resolution, the measurement at 21 cm could be contaminated by the PWN emission, but we note that the result is consistent with the pulsed flux density obtained with the single-dish Parkes Radio Telescope, and the flux densities in all bands are in line with the extrapolation of the pulsar spectrum (Lyne et al., 1998; Jankowski et al., 2017). We performed a multiwavelength comparison with the _Chandra_ X-ray data.
We reprocessed all archival data of B1706 PWN using the CIAO software package (Fruscione et al., 2006), then extracted the X-ray PWN spectrum using specextract and fitted the background-subtracted spectrum with an absorbed power law. We obtained a photon index \(\Gamma\sim 1.53\pm 0.07\) and a total unabsorbed flux \(f_{pwn}\)=(12.2\(\pm\)0.1)\(\times\) 10\({}^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the 0.5-7 keV range (excluding the pulsar emission). This gives \(\alpha_{X}=1-\Gamma=-0.53\) in the X-rays. We plot the spectral energy distribution (SED) of the PWN from the radio to X-ray bands in Figure 4. A comparison with \(\alpha_{radio}\sim-0.3\) in the radio band suggests a spectral break of \(\Delta\alpha\sim 0.2\). However, we note that the extrapolations of the radio and X-ray spectra do not intersect. This could be due to the X-ray observations being insensitive to faint emission in the outer PWN region. In this case, emission from the inner PWN region would dominate, and the obtained photon index would be smaller than that of the overall PWN. We also made such a comparison at high resolution for the X-ray torus region. For the X-ray torus, recent _Chandra_ results show a flux \(f_{torus}\)=1.26\(\pm\)0.03 erg cm\({}^{-2}\) s\({}^{-1}\) from 0.5 to 7 keV with a photon index \(\Gamma\)=1.46\(\pm\)0.05 (de Vries et al., 2021). The radio flux density is estimated in the region shown in Figure 2, giving a sensitivity of 0.06 mJy for the torus region. We compared this radio flux density with the extrapolated X-ray spectrum; the resulting SED is shown in Figure 4.

Figure 4: Top: radio spectra of the overall B1706 PWN and its components. The green dashed line shows the extrapolated spectrum of the pulsar emission from the 0.4 and 1.4 GHz data. The flux densities of PSR B1706\(-\)44 at 0.4 and 1.4 GHz are from the ATNF pulsar Catalog (Manchester et al., 2005) and are shown as triangles with error bars. Middle: SED of the PWN from the radio to X-ray bands. The black dots with error bars represent the measurements obtained with ATCA, and the lines in the top right show the best-fit unabsorbed X-ray spectrum obtained from _Chandra_. Bottom: multiwavelength SED of the X-ray torus. The X-ray spectrum is extrapolated to 3 cm wavelength (the band in gray), and the upper limit in red shows the 3\(\sigma\) rms noise of the 3 cm observations.

### Polarization

Figure 5 shows the polarized emission of the PWN and the SNR. We clipped the 3 and 13 cm maps where the polarized intensity has a signal-to-noise ratio (S/N) \(<\)3, the total intensity S/N is \(<\)5, or the uncertainty of the position angle (PA) is \(>\)10\({}^{\circ}\). For the 6 and 21 cm maps, we applied the same clipping criteria for the polarized intensity S/N and PA, but clipped where the total intensity S/N is \(<\)3. The PWN is highly linearly polarized in all the bands, and the polarized emission generally follows the total intensity. The 3 cm polarized emission is elongated east-west with a size of \(\sim 4^{\prime}\times 1^{\prime}\). However, it peaks \(\sim 0.8^{\prime}\) west of the pulsar, different from the total intensity map. In contrast, the 6 cm polarization image shows a two-lobed structure resembling the total intensity emission. The eastern lobe is brighter and has a peak flux density of 0.37 mJy beam\({}^{-1}\). Polarized emission was detected in both the northern and southern radio features beyond the X-ray jets, but the point source in the northern one is unpolarized. The linear polarization fraction in both the 3 and 6 cm bands is around 45% for the entire PWN and around 85% for the pulsar. The circular polarization fraction of the pulsar is around 15% in both bands. In the 13 and 21 cm images, we found polarized emission on large scales, including the PWN, the ridge, and the SNR shell. The PWN is detected at the end of the ridge as a blob-like structure aligning with the total intensity contours.
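The polarized intensities and fractions above derive from the Stokes maps through the standard relations \(P=\sqrt{Q^{2}+U^{2}}\), \(p=P/I\), and \(\mathrm{PA}=\frac{1}{2}\arctan(U/Q)\). A single-pixel sketch (the pixel values are purely illustrative, and Ricean debiasing of \(P\), which matters near the clipping thresholds, is omitted):

```python
import numpy as np

# Linear polarization from Stokes I, Q, U (standard relations):
#   P  = sqrt(Q^2 + U^2)      polarized intensity
#   p  = P / I                linear polarization fraction
#   PA = 0.5 * arctan2(U, Q)  electric-vector position angle
# The single-pixel values below are purely illustrative; Ricean debiasing
# of P, which matters near the clipping thresholds, is omitted.

def linear_polarization(stokes_i, stokes_q, stokes_u):
    P = np.hypot(stokes_q, stokes_u)
    frac = P / stokes_i
    pa = 0.5 * np.arctan2(stokes_u, stokes_q)  # radians
    return P, frac, pa

P, frac, pa = linear_polarization(1.0, 0.3, 0.4)  # hypothetical mJy/beam pixel
print(P, frac, np.degrees(pa))
```

Applied to whole Q, U, and I arrays, the same relations give the polarization maps; tasks such as MIRIAD's impol implement exactly this (plus the debiasing).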
The PWN polarized emission is fainter than that of the ridge and the SNR shell. The polarization fraction of the PWN is around 30% at both 13 and 21 cm. The shell structure of the SNR shows significant polarized emission, with polarization fractions of the entire SNR around 50% and 40% at 13 cm and 21 cm, respectively.

### Rotation measure and intrinsic magnetic field orientation

The observed PAs of the polarization vectors are rotated by the Faraday effect in the interstellar medium. The amount of rotation is proportional to the rotation measure (RM) times the square of the wavelength (\(\lambda^{2}\)). We attempted to derive a high resolution RM map using the 3 and 6 cm data, but it has too large an uncertainty to be useful. Therefore, we simply used the RM of the pulsar (0.7\(\pm\)0.07 rad m\({}^{-2}\); Johnston et al., 2005) to derotate the polarization vectors in these two bands. To determine the RM of the SNR, we selected edge channels with 32 MHz bandwidth from the 13 and 21 cm data to generate Stokes Q and U maps. We used a _u-v_ taper of 80\({}^{\prime\prime}\) FWHM to boost the S/N. The images were then deconvolved using the same procedure as mentioned above and restored with a circular beam of FWHM 80\({}^{\prime\prime}\), the resolution of the lowest frequency band. We formed four PA maps and applied a linear fit to determine the RM value at each pixel. The result is plotted in Figure 6; the typical uncertainty of the map is \(\sim\)1 rad m\({}^{-2}\). We found that the RM of the SNR varies from \(-\)90 rad m\({}^{-2}\) to \(+\)92 rad m\({}^{-2}\), and it is \(\sim 0\) rad m\({}^{-2}\) near the pulsar position. The latter is in line with the RM of PSR B1706\(-\)44 (Johnston et al., 2005). The RM of the PWN and the ridge varies smoothly compared with that in the SNR rim. Figure 5 shows the intrinsic magnetic field direction of B1706 PWN at 3 and 6 cm and of the SNR at 13 and 21 cm after correcting for the Faraday effect.
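The per-pixel RM fit described above exploits \(\mathrm{PA}(\lambda)=\mathrm{PA}_{0}+\mathrm{RM}\,\lambda^{2}\): a straight-line fit of PA against \(\lambda^{2}\) over the four sub-band maps yields RM as the slope and the intrinsic angle as the intercept. A noiseless single-pixel sketch (the channel frequencies, RM, and \(\mathrm{PA}_{0}\) are made-up values, and the \(n\pi\) ambiguity of the position angle is ignored):

```python
import numpy as np

# Per-pixel RM fit: the observed position angle rotates as
#   PA(lambda) = PA_0 + RM * lambda**2,
# so fitting PA against lambda^2 across the four sub-band maps gives RM
# (slope) and the intrinsic angle PA_0 (intercept). Channel frequencies,
# RM, and PA_0 below are made-up single-pixel values, and the n*pi
# ambiguity of the position angle is ignored for simplicity.

c = 2.998e8                                            # m/s
freqs = np.array([1.384e9, 1.472e9, 2.48e9, 2.512e9])  # Hz, hypothetical
lam2 = (c / freqs) ** 2                                # wavelength squared, m^2

rm_true, pa0_true = -50.0, 0.3        # rad m^-2, rad
pa_obs = pa0_true + rm_true * lam2    # noiseless toy position angles

rm_fit, pa0_fit = np.polyfit(lam2, pa_obs, 1)  # slope = RM, intercept = PA_0
print(rm_fit, pa0_fit)
```

With real data, the fit is repeated at every unclipped pixel, and the scatter of the residuals provides the \(\sim\)1 rad m\({}^{-2}\) uncertainty quoted above.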
The magnetic field of the PWN is highly ordered. It is oriented along the PWN elongation and wraps around the pulsar in the north, indicating a toroidal configuration. For the outer jets, only faint polarized emission is detected; we are therefore only able to determine the polarization angle in the brightest regions. On a larger scale, the magnetic field of the ridge aligns well with its elongation, then gradually switches to tangential along the rim of the SNR shell.

## 4 Discussion

Our radio intensity maps of the B1706 PWN reveal an overall arc-like morphology. The emission is bright in the outer region of the PWN but faint in the inner part, in contrast to the X-ray emission. A similar X-ray-radio anti-correlation is also found in a few other PWNe, including the Vela PWN, DA 495, G76.9+1.0, G319.9\(-\)0.7, and G327.1\(-\)1.1 (Dodson et al., 2003; Kothes et al., 2008; Arzoumanian et al., 2011; Kargaltsev et al., 2008; Ng et al., 2010; Ma et al., 2016). The cause is not clearly understood. It has been suggested that the radio emission in the inner PWN could simply be too faint to detect: as the outflow decelerates, the particle number density increases outward, resulting in brighter radio emission. On the other hand, synchrotron cooling makes the X-ray emission invisible in the outer PWN (Kargaltsev et al., 2008). We applied this idea to B1706: we extrapolated the X-ray spectrum of the torus reported by de Vries et al. (2021) down to the radio band with a simple unbroken power law. This suggests a flux density from \(\sim\)0.01 to \(\sim\)0.2 mJy at 3 cm. From our highest resolution 3 cm intensity map, the 3\(\sigma\) limit in this region is \(\sim\)0.06 mJy beam\({}^{-1}\), giving a detection limit of \(\sim\)0.06 mJy based on the torus area (see Figure 4). In this case, we still cannot firmly rule out the scenario that the X-ray torus follows a simple unbroken power-law distribution from radio to X-rays; a further non-detection with about 3 times better sensitivity would be required to reject it. Alternatively, we note that the injected spectrum could have an intrinsic spectral break if the particle acceleration is due to magnetic reconnection in the termination shock or the Weibel instability (Lyubarsky & Kirk, 2001; Weibel, 1959). In this case, the radio emission could be a few orders of magnitude fainter, making it very difficult to detect. Deeper radio observations are needed to discriminate between these scenarios.

Figure 5: Linear polarized intensity maps of B1706 PWN and SNR G343.1\(-\)2.3, overlaid with the total intensity contours and polarization vectors that show the intrinsic \(B\)-field orientation. The contours are at levels of 0.18, 0.3, 0.45, and 0.6 mJy beam\({}^{-1}\) for 3 and 6 cm, and 4, 8, 12, and 16 mJy beam\({}^{-1}\) for 13 and 21 cm. The gray scale bars correspond to the polarized intensity level in units of Jy beam\({}^{-1}\). The vector length is proportional to the polarized intensity, with the bars at lower left indicating a polarized intensity of 0.5 mJy beam\({}^{-1}\) in the 3 and 6 cm maps and 10 mJy beam\({}^{-1}\) in the 13 and 21 cm maps. The black cross represents the position of PSR B1706\(-\)44. As shown at the bottom left, the restoring beam sizes are 20\({}^{\prime\prime}\) FWHM for the 3 and 6 cm images and 70\({}^{\prime\prime}\) FWHM for the 13 and 21 cm images.

Figure 6: RM map of SNR G343.1\(-\)2.3. The RM values vary from \(-\)90 rad m\({}^{-2}\) to \(+\)92 rad m\({}^{-2}\), with solid squares representing positive values and hollow boxes negative ones. The contours are total intensity levels at 4, 8, 12, and 16 mJy beam\({}^{-1}\) at 13 cm. The cross indicates the position of PSR B1706\(-\)44, and the box at the bottom left corresponds to an RM value of \(-\)50 rad m\({}^{-2}\).
Our high resolution radio maps reveal an overall arc-like structure for B1706 PWN, similar to Vela, Boomerang (G106.6+2.9), and G76.9+1.0 (Dodson et al., 2003; Kothes et al., 2006; Arzoumanian et al., 2011). It has been suggested that this could be caused by the passage of the supernova reverse shock, and a thick toroidal model has been developed (Chevalier & Reynolds, 2011). We tried to apply the same model to our 3 cm image, but found that it fails to explain the lack of emission in the bay south of the pulsar. Indeed, the model is always symmetric, and so it also cannot explain the tongue-like morphology of the Boomerang. It is worth mentioning that many X-ray PWNe also show an asymmetric torus feature close to the pulsar due to the Doppler boosting effect, such that the emission is brighter if the particles are moving toward the observer and vice versa. We therefore added the Doppler boosting effect to the model, following a procedure similar to Ng & Romani (2004, 2008). We built a torus in 3D with a circular cross-section and an outer radius of \(2^{\prime}\), assuming uniform and isotropic emission inside. We set the viewing angle between the torus axis and the line of sight to 53.3\({}^{\circ}\) (Romani et al., 2005) and the outer boundary radius to 6 times the inner boundary radius. We considered the Doppler boosting effect following Pelling et al. (1987). The apparent intensity \(I\) is \[I\propto(1-n\cdot\beta)^{-(1-\Gamma)}I_{0}, \tag{1}\] where \(n\) is the unit vector from the observer, \(\beta=v/c\) is the assumed velocity of the radial post-shock bulk flow, \(\Gamma\) is the photon index in the rest frame, and \(I_{0}\) is the intrinsic intensity of the synchrotron emission, taken to be constant. We projected the model onto the plane of the sky to generate a 2D brightness map for comparison with the data.

Figure 7 shows models of a radio B1706 PWN having a thick equatorial torus with the Doppler boosting effect for different \(\beta\) values in the bulk flow, as well as the 6 and 3 cm radio images of the B1706 PWN. In the scenario of \(\beta=0\) (i.e., negligible Doppler boosting), the model shows two equatorial lobes, each with a brightness peak inside, and a fainter region between the lobes. We note that this model is not only horizontally symmetric, but also vertically symmetric. Including Doppler boosting enhances the brightness of an approaching bulk flow and reduces that of a receding one, such that the upper part is brighter than the lower part in our model. The brightness of the upper part becomes comparable to, or even overwhelms, that of the lobes for \(\beta\geq 0.2\). In the latter case, the model shows a kidney-shaped feature wrapping the pulsar, with a single peak north of the pulsar. We suggest that the model captures the characteristic features of the observed radio PWN, including the overall arc-like nebula wrapping the pulsar in the north and the faint bay in the south. Comparisons between these models and the observations show that a constant value \(\beta\sim 0.2\) over the entire torus gives the best result for the 3 cm PWN, while the 6 cm PWN is better described by \(\beta\sim 0.1\). Our 6 cm image is slightly different from the 3 cm image (e.g., in the gap between the eastern and western parts), so the best-fit \(\beta\) values are slightly different. To resolve this, we consider a spectral gradient across the PWN, as motivated by the different spectra of the eastern and western parts (see Figure 4).

Figure 7: Models of a thick torus with the Doppler boosting effect for different \(\beta\) (from 0.0 to 0.3) in the case of B1706 PWN. The total intensity images at 6 and 3 cm are shown in the bottom panels for comparison.
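The projection described above can be sketched numerically: fill the torus with emitters, boost each one by Eq. (1) for a radial bulk flow, and compare the two halves of the projected image. The sketch below only checks the resulting brightness asymmetry; \(\Gamma=1.5\) and the Monte-Carlo details are illustrative assumptions, not the fitting code actually used:

```python
import numpy as np

# Monte-Carlo sketch of the thick-torus model with Doppler boosting (Eq. 1):
# emitters uniformly fill a torus of circular cross-section, move radially
# outward at beta*c, and each contributes I ∝ (1 - n·beta)^-(1-Gamma).
# Geometry (outer radius 2', outer/inner boundary ratio of 6, viewing angle
# 53.3 deg) follows the text; Gamma = 1.5 and the sampling details are
# illustrative assumptions.

rng = np.random.default_rng(42)
zeta = np.radians(53.3)              # angle between torus axis and sight line
beta, gamma_ph = 0.2, 1.5            # bulk speed (units of c), photon index
r_out, r_in = 2.0, 2.0 / 6.0         # outer/inner boundary radii (arcmin)
R_maj = 0.5 * (r_out + r_in)         # major radius of the torus
r_tube = 0.5 * (r_out - r_in)        # radius of the circular cross-section

n_samp = 200_000
phi = rng.uniform(0.0, 2.0 * np.pi, n_samp)
u = rng.uniform(-r_tube, r_tube, n_samp)   # in-plane offset from R_maj
w = rng.uniform(-r_tube, r_tube, n_samp)   # height above the equatorial plane
keep = u**2 + w**2 <= r_tube**2            # circular cross-section
# extra rejection step so sampling is uniform in volume, not in (phi, u, w)
keep &= rng.uniform(0.0, 1.0, n_samp) <= (R_maj + u) / (R_maj + r_tube)
phi, u, w = phi[keep], u[keep], w[keep]

pts = np.stack([(R_maj + u) * np.cos(phi), (R_maj + u) * np.sin(phi), w], axis=1)
n_hat = np.array([np.sin(zeta), 0.0, np.cos(zeta)])  # unit vector from observer
beta_vec = beta * pts / np.linalg.norm(pts, axis=1, keepdims=True)  # radial flow
weight = (1.0 - beta_vec @ n_hat) ** (-(1.0 - gamma_ph))            # Eq. (1)

e_sky = np.array([np.cos(zeta), 0.0, -np.sin(zeta)])  # sky axis, ⊥ sight line
y_sky = pts @ e_sky
bright = weight[y_sky < 0].sum()  # image half containing the approaching flow
faint = weight[y_sky > 0].sum()
print(bright / faint)
```

For \(\beta=0.2\) this yields a roughly 10% brightness excess on the side of the approaching flow; binning the weighted points into a 2D sky histogram reproduces the single-peaked, arc-like maps of Figure 7.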
We fix \(\beta=0.2\) and consider that \(I_{0}\) depends on frequency as \[I_{0,\nu}=I_{0,\nu_{0}}\cdot(\frac{\nu}{\nu_{0}})^{\alpha}, \tag{2}\] where \(I_{0,\nu}\) is the intrinsic intensity at frequency \(\nu\) and \(I_{0,\nu_{0}}\) is the intensity at a reference frequency \(\nu_{0}\). We also assume that \(\nu_{0}=10\) GHz and \[\alpha=0.9\frac{x_{EW}}{R_{pwn}}, \tag{3}\] where \(x_{EW}\) is the position of a point in the east-west direction scaled by the PWN radius \(R_{pwn}\), so that \(\alpha=-0.9\) at the eastern end of the PWN and \(\alpha=+0.9\) at the western end. We then obtain the model at 3 and 6 cm and also extrapolate it to around 30 cm. We compare all these models with the 3 and 6 cm ATCA images and the 30 cm ASKAP image (Norris et al., 2019) in Figure 8. Our simulation shows a PWN with a much brighter eastern part at 30 cm, a larger and brighter eastern lobe with a smaller western lobe at 6 cm, and lobes connected north of the pulsar at 3 cm. These models reproduce the main morphological features of the observed PWN in these bands. We also tried different values of \(\beta\), but found that 0.2 gives the best-fit result. A direct comparison with \(\beta_{X}=0.7\) in the X-ray emitting region (Romani et al., 2005) suggests deceleration along the flow. This also implies particle accumulation, which could give rise to the observed anti-correlation between the radio and X-ray emission. Our modeling shows that B1706 PWN could have a toroidal structure in 3D that appears arc-like due to the Doppler effect. This model could also explain the arc-like morphology found in other radio PWNe, e.g., Vela and Boomerang. Finally, we note that a toroidal structure can result from a toroidal \(B\)-field, as simulations suggest (Porth et al., 2017). This is supported by our polarization result, which reveals a toroidal \(B\)-field configuration.
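Eqs. (2)-(3) make the frequency scaling explicit: at 30 cm (1 GHz) the eastern edge (\(\alpha=-0.9\)) is boosted by a factor \(10^{0.9}\approx 8\) relative to \(\nu_{0}\), while the western edge is suppressed by the same factor, which reproduces the east-bright appearance of the 30 cm model. A small sketch (the sign convention that \(x_{EW}\) is negative toward the east is an assumption of this snippet):

```python
# Frequency scaling of Eqs. (2)-(3): I0(nu) = I0(nu0) * (nu/nu0)**alpha
# with alpha = 0.9 * x_EW / R_pwn and nu0 = 10 GHz. The convention that
# x_EW is negative toward the east is an assumption of this snippet.

def intensity(i0_ref, nu, x_ew, r_pwn, nu0=10.0e9):
    alpha = 0.9 * x_ew / r_pwn
    return i0_ref * (nu / nu0) ** alpha

r_pwn = 2.0  # arcmin
# at 30 cm (1 GHz) the eastern edge (alpha = -0.9) brightens by 10**0.9 ~ 8
east_30cm = intensity(1.0, 1.0e9, -r_pwn, r_pwn)
west_30cm = intensity(1.0, 1.0e9, +r_pwn, r_pwn)
print(east_30cm, west_30cm)
```

At 3 cm the sign of the scaling reverses, consistent with the brighter western lobe measured there.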
### Equipartition magnetic field

We estimated the equipartition \(B\)-field strength of the PWN as \[B_{eq}=[6\pi(1+k)c_{12}L_{syn}\Phi^{-1}V_{pwn}^{-1}]^{2/7}, \tag{4}\] where \(V_{pwn}\) is the emission volume, \(L_{syn}\) is the synchrotron luminosity, \(\Phi\) is a filling factor for the emission (usually taken as 1 even though not 100% of the volume emits), \(k\) is the ratio between the electron energy and the energy of heavy particles, and \(c_{12}\) is a constant related to the synchrotron radiation process that depends weakly on the frequency range (Pacholczyk, 1970). We selected the PWN flux density at 6 cm as a reference and assumed a simple power-law spectrum with a spectral index \(\alpha\sim-0.3\) from \(10^{7}\) to \(10^{11}\) Hz to obtain \(L_{syn}=7.9\times 10^{30}d_{2.3}^{2}\) erg s\({}^{-1}\), where \(d_{2.3}\) is the source distance in units of 2.3 kpc. To estimate the volume of the PWN, we assumed an oblate spheroid for the emission volume in 3D. The oblate spheroid is \(2.6^{\prime}\times 4.2^{\prime}\times 4.2^{\prime}\) in size, which gives a volume of \(V_{pwn}\)=\(2.2\times 10^{56}\) cm\({}^{3}\). These give \[B_{eq}=10.5(1+k)^{2/7}\Phi^{-2/7}d_{2.3}^{-2/7}\,\mu\mathrm{G}.\] Taking \(k\)=0 and \(\Phi\)=1, the \(B\)-field is of the order of 10 \(\mu\)G, slightly lower than the 27 \(\mu\)G and 15 \(\mu\)G estimated for the inner and outer X-ray PWN, respectively (de Vries et al., 2021). However, the decay in \(B\)-field strength is much slower than the \(B\propto 1/r\) predicted by theory (Porth et al., 2017), given that the radio PWN is \(\sim\)5 times larger than that in the X-rays. We also roughly estimated the equipartition \(B\)-field of the linear radio feature beyond the X-ray jet northwest of the pulsar, assuming it is associated with the PWN.
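The two inputs to Eq. (4) can be checked numerically: integrating \(S_{\nu}\propto\nu^{-0.3}\), anchored to the 6 cm flux density, over \(10^{7}\)-\(10^{11}\) Hz at \(d=2.3\) kpc gives \(L_{syn}\approx 7.9\times 10^{30}\) erg s\({}^{-1}\), and the \(2.6^{\prime}\times 4.2^{\prime}\times 4.2^{\prime}\) oblate spheroid gives \(V_{pwn}\approx 2.2\times 10^{56}\) cm\({}^{3}\). A sketch (the 5 GHz reference frequency is an assumption here; \(c_{12}\) itself is tabulated in Pacholczyk 1970 and is not reproduced):

```python
import math

# Numerical check of the equipartition inputs: L_syn from the power law
# S_nu = S0 * (nu/nu0)**alpha integrated over 1e7-1e11 Hz, and V_pwn for
# the assumed 2.6' x 4.2' x 4.2' oblate spheroid at d = 2.3 kpc. The
# reference frequency nu0 = 5 GHz (6 cm) is an assumption of this sketch.

KPC = 3.086e21                     # cm
JY = 1.0e-23                       # erg s^-1 cm^-2 Hz^-1
ARCMIN = math.radians(1.0 / 60.0)  # rad

d = 2.3 * KPC
s0, nu0, alpha = 21.2e-3 * JY, 5.0e9, -0.3
nu1, nu2 = 1.0e7, 1.0e11

# analytic integral of (nu/nu0)**alpha over [nu1, nu2]
flux_int = (nu2**(alpha + 1) - nu1**(alpha + 1)) / ((alpha + 1) * nu0**alpha)
L_syn = 4.0 * math.pi * d**2 * s0 * flux_int       # erg/s, ~7.9e30

# semi-axes of the oblate spheroid converted to cm
a = 0.5 * 2.6 * ARCMIN * d
b = 0.5 * 4.2 * ARCMIN * d
V_pwn = 4.0 / 3.0 * math.pi * a * b * b            # cm^3, ~2.2e56
print(f"L_syn = {L_syn:.2e} erg/s, V_pwn = {V_pwn:.2e} cm^3")
```

Inserting these values into Eq. (4) with the appropriate \(c_{12}\) then yields the \(\sim\)10 \(\mu\)G field quoted above.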
Following the same procedure as above and taking the emission volume as a cylinder, the flux density measurements at 3 and 6 cm give \[B_{ls}=70(1+k)^{2/7}\Phi^{-2/7}d_{2.3}^{-2/7}\,\mu\mathrm{G}.\] The result is similar if we exclude the point source, \[B_{ls^{\prime}}=59(1+k)^{2/7}\Phi^{-2/7}d_{2.3}^{-2/7}\,\mu\mathrm{G}.\] This is significantly higher than that of the main radio PWN, but is comparable to that of the X-ray bright inner PWN. Identifying the association between the PWN and this feature requires further observations and detailed modeling of the multiwavelength spectrum, which is beyond the scope of this work (e.g., Zhang et al., 2008).

Figure 8: Models of the B1706 PWN at different wavelengths with a gradient in the east-west direction (_the top row_), as well as the observation results of the B1706 PWN at 30, 6, and 3 cm (_the bottom row_) taken with ASKAP (Norris et al., 2019) and ATCA.

Figure 9: Total intensity image of SNR G343.1\(-\)2.3 at 13 cm. _Inset:_ Zoom-in of the PWN at 6 cm. Extraction regions (e.g., shell, ridge, the whole PWN, and the eastern and western lobes) for spectral measurements are indicated. The circular beams of 20\({}^{\prime\prime}\) (inset) and 70\({}^{\prime\prime}\) (SNR) are shown at bottom left of the images as scale references. The dashed vector shows the pulsar motion direction (de Vries et al., 2021).

### Nature of the ridge

It is clear from our 13 and 21 cm maps that the ridge protruding to the east aligns well with the pulsar motion direction (de Vries et al., 2021). We therefore suggest that the ridge could be a pulsar tail rather than an SNR structure. In addition, our polarization maps show a good alignment between the \(B\)-field and the orientation of the ridge, which is a common feature of pulsar tails (e.g., G319.9\(-\)0.7, G327.1\(-\)1.1, and the Mouse; Ng et al., 2010; Ma et al., 2016; Yusef-Zadeh and Gaensler, 2005). To confirm the nature of the
ridge, we compared its radio spectrum with that of the SNR rim, using the extraction regions shown in Figure 9. We found a significantly flatter spectral index of \(-0.3\) in the ridge than the \(-1.1\) in the shell, implying that the ridge is likely composed of pulsar wind. Moreover, the RM map shows an RM comparable to that of the pulsar and no significant variation across the entire ridge, thereby supporting this idea. Although the ridge extends beyond the pulsar birth site suggested by de Vries et al. (2021), it could be formed by a fast outflow or by pulsar wind swept up by the SNR reverse shock. The latter scenario could also explain the TeV emission (H. E. S. S. Collaboration et al., 2011). Deeper X-ray observations along the ridge could reveal any spectral evolution. Any enhanced synchrotron cooling would support the interaction with the supernova reverse shock. If confirmed, the B1706 PWN could be in the same evolutionary state as Vela and the Boomerang, both of which are suggested to have undergone the passage of the reverse shock. We note that it remains unclear how the toroidal \(B\)-field in all these sources could be retained. It could be magnetic pressure acting against compression by the shock, or the radio PWNe could have formed recently after the shock interaction. Further numerical simulations are needed to distinguish between these cases. As mentioned above, the polarized emission of the ridge is brighter than that in the PWN. We suspect that the ridge region is thicker than the B1706 PWN, which is located at its tip. It is also likely that the particles in the ridge have a larger density due to mixing with external materials (e.g., SNR ejecta) or particle slowing down. Therefore, there are more particles emitting along the line of sight through the ridge than through the PWN.

## 5 Conclusion

We present a radio study of the B1706 PWN using new and archival ATCA observations in the 3, 6, 13, and 21 cm bands.
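The ridge-versus-shell comparison rests on the two-point spectral index \(\alpha=\log(S_{1}/S_{2})/\log(\nu_{1}/\nu_{2})\) for \(S\propto\nu^{\alpha}\). A small sketch with illustrative (not measured) flux densities at the approximate 13 and 21 cm band centres:

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Power-law spectral index alpha from S ~ nu**alpha at two frequencies."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)

# Illustrative flux densities (arbitrary units, NOT the measured values),
# constructed to be consistent with the indices quoted in the text.
nu_13, nu_21 = 2.4, 1.4  # approximate band centres in GHz for 13 and 21 cm
ridge_alpha = spectral_index((nu_13 / nu_21) ** -0.3, 1.0, nu_13, nu_21)
shell_alpha = spectral_index((nu_13 / nu_21) ** -1.1, 1.0, nu_13, nu_21)
print(f"ridge alpha = {ridge_alpha:.1f}, shell alpha = {shell_alpha:.1f}")
```

A flatter (less negative) index for the ridge than for the shell is what distinguishes pulsar-wind emission from the steep shell spectrum in this test.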
Our main results are summarized below: * The 3 and 6 cm total intensity images show an arc-like structure with a scale of \(\sim 4^{\prime}\times 2^{\prime}\) wrapping around PSR B1706\(-\)44 in the north. No radio emission is detected at the X-ray torus and jet location, and the radio PWN only brightens beyond \(10^{\prime\prime}\) from the pulsar. We show that the radio PWN morphology can be fit by a thick torus model with Doppler boosting effect. The result suggests a bulk flow velocity of \(0.2c\), lower than that in the X-ray torus. * Our polarization results reveal a toroidal \(B\)-field for the PWN and we estimate a field strength of \(\sim 10\mu\)G assuming equipartition between particle and magnetic field energies. This value suggests a slight decay compared with that of the X-ray bright region. * The ridge of the SNR has elongation and magnetic field well aligned with the pulsar proper motion direction. It also has a radio spectrum flatter than the rest of the shell. All these suggest that it could be a pulsar tail instead of a filamentary structure of the SNR. ## Acknowledgments We thank the anonymous referee for providing useful suggestions. C.-Y. Ng is supported by a GRF grant of the Hong Kong Government under HKU 17301618. The Australia Telescope Compact Array is part of the Australia Telescope National Facility which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. This study also makes use of software in the application packages Miriad, CIAO and Sherpa.
2306.17490
Reflected entropy and Markov gap in non-inertial frames
We explore the reflected entropy and the Markov gap between two modes of a free fermionic field as observed by accelerating observers. This is done both for a bipartite system described by the Bell state and for tripartite systems represented by the Werner and Greenberger-Horne-Zeilinger states. The reflected entropy degrades monotonically as a result of the Unruh effect, eventually reaching a non-zero minimum value in the limit of infinite acceleration. Furthermore, we show that the Markov gap exhibits monotonic behavior with regard to acceleration in all three cases. In addition, we suggest a function of the reflected entropy which decreases monotonically with decreasing Unruh temperature for all states. Finally, we confirm that the reflected entropy for our system does reduce under the partial tracing of the degrees of freedom for our states.
Jaydeep Kumar Basak, Dimitrios Giataganas, Sayid Mondal, Wen-Yu Wen
2023-06-30T09:06:09Z
http://arxiv.org/abs/2306.17490v1
# Reflected entropy and Markov gap in non-inertial frames

###### Abstract

We explore the reflected entropy and the Markov gap between two modes of a free fermionic field as observed by accelerating observers. This is done both for a bipartite system described by the Bell state and for tripartite systems represented by the Werner and Greenberger-Horne-Zeilinger states. The reflected entropy degrades monotonically as a result of the Unruh effect, eventually reaching a non-zero minimum value in the limit of infinite acceleration. Furthermore, we show that the Markov gap exhibits monotonic behavior with regard to acceleration in all three cases. In addition, we suggest a function of the reflected entropy which decreases monotonically with decreasing Unruh temperature for all states. Finally, we confirm that the reflected entropy for our system does reduce under the partial tracing of the degrees of freedom for our states.

###### Contents

* 1 Introduction * 2 The States and the Non-inertial Observers * 2.1 Bell state * 2.2 Werner state * 2.3 Greenberger-Horne-Zeilinger state * 3 Reflected entropy * 3.1 Bounds of reflected entropy * 4 Markov gap * 5 A monotonic function for reflected entropy * 6 Summary and discussion * 7 Acknowledgment * A The Density Matrices of Bell state * B Polygamy inequality * C Monotonicity of Reflected Entropy

## 1 Introduction

Entanglement has emerged as a central issue in diverse areas of theoretical and experimental physics, from condensed matter physics to the quantum theory of gravity. It has served as a resource for several non-local observables for quantum information tasks and quantum communications. A large part of entanglement studies consists of non-relativistic systems. More recently, the understanding of entanglement has been extended to relativistic settings and has been explored in different directions. This is believed to be important both from a fundamental point of view and for applications.
Various experiments in quantum information theory involve observers with relativistic velocities, which demands a rich theoretical understanding of the characteristics of entanglement in non-inertial frames. It is known that entanglement between observers in inertial frames remains constant. On the other hand, when relativistic non-inertial motion is involved, the quantum information becomes observer-dependent. A simple system in which to study this phenomenon is the entanglement of a non-interacting massless field from the point of view of an observer who is uniformly accelerated [1]. One may assume in the inertial frame a maximally entangled pure state, whose modes are obtained from a massless scalar field as the solution of the Klein-Gordon equation in Minkowski coordinates. To describe the state from the point of view of non-inertial observers, the massless scalar field should now be considered in Rindler spacetime. A Bogoliubov transformation on the former solution in Minkowski spacetime leads to the latter one in Rindler spacetime [2]. An immediate consequence is that a pure state described by inertial observers becomes mixed for the uniformly accelerated observers. Following this approach, it has been found that non-inertial observers see a degradation of the entanglement compared to the inertial ones. These studies have been extended to fermionic systems [3], following a similar methodology with the solutions of the Dirac equation and their transformations in different spacetimes, eventually obtaining the same qualitative results. The ground state of a given mode for inertial observers becomes a two-mode state for accelerated observers, each mode corresponding to the field observed in one of the two causally disconnected Rindler regions. This is because the state is now thermal, and an information loss appears for an observer in one of the regions, who needs to trace over the other region [4, 5].
So far we have reported the results of various entanglement measures for the system under study. Nevertheless, a more appropriate and richer measure can be used to investigate the correlations of these types of mixed states. In the context of quantum information theory, bipartite entanglement has been studied widely to understand the entanglement structure of any system. Several attempts have been made to explore multipartite entanglement. This type of correlation has a wide range of applications in various quantum phenomena, ranging from quantum gravity to quantum computation. Despite its importance, multipartite entanglement measures remain a challenging field of research in quantum information theory (see [6, 7, 8, 9] and references therein for recent progress). More recently, the so-called reflected entropy has been proposed as a crucial tool to investigate the correlation of a mixed state [10]. This measure involves a canonical purification of a mixed state, which is easier to obtain than the purification considered in the computation of the entanglement of purification. The computation of reflected entropy was introduced for a conformal field theory (CFT) based on a specific replica technique. The entanglement wedge cross section has been suggested as the holographic dual of reflected entropy in the framework of the AdS/CFT correspondence. Note that the entanglement wedge cross section has also been proposed to be dual to the entanglement of purification in [11, 12]. Furthermore, it was argued that tripartite entanglement is necessary for holographic CFT states in order to respect the conjectures about the reflected entropy or entanglement of purification involving the entanglement wedge cross section [13]. These results indicate that the reflected entropy inherits some information about the multipartite entanglement by studying a two-party state1.
Following these developments, two non-negative measures of tripartite entanglement, named \(g\) and \(h\), have been proposed in [8]. The measure \(g\) is defined as the difference between the reflected entropy and the mutual information, whereas \(h\) is the difference between twice the entanglement of purification and the mutual information. Furthermore, the quantity \(g\) has been explored in [16] from an information-theoretic point of view, where it was related to a specific Markov recovery problem and thus the name Markov gap was coined. A non-vanishing value of the Markov gap precludes a perfect Markov recovery map. It has also been demonstrated that the lower bound of the Markov gap in a holographic CFT state is related to the number of boundaries of the entanglement wedge cross section. Despite the success of the reflected entropy in the context of the AdS/CFT duality, the monotonicity of this measure under partial tracing, which is a requirement of a good measure of correlation, has been questioned very recently in [17]. For a qutrit-qutrit-qubit system, the density matrix of a fine-tuned quantum state can violate the monotonicity of the reflected entropy for the Renyi index \(\xi\in(0,2)\). Footnote 1: There are other entanglement measures in the literature, e.g., the three-tangle [14] and the \(\pi\)-tangle [15], which are frequently used to quantify tripartite entanglement. These developments have generated intense interest in understanding the reflected entropy from the viewpoint of quantum information theory. In this article we extend these studies to fermionic systems in non-inertial frames. The two leading protagonists are the reflected entropy and the Markov gap, considered in three different scenarios. In the first case, we have two observers, one stationary (Alice) and the other accelerating (Bob), who shared a bipartite entangled fermionic mode described by the Bell state in an inertial frame.
In the second and third scenarios, there are three observers, with Alice and Charlie being stationary and Bob accelerating uniformly, who initially shared tripartite entangled fermionic modes described by the Werner (W) state and the Greenberger-Horne-Zeilinger (GHZ) state, respectively. We study in detail the reflected entropy for these states. To begin with, we show that the reflected entropy is monotonic under partial trace for our states, which indicates that it is a good measure of correlation, at least for the states in question and the accelerations of the observers we consider. In light of the recent developments, this is a necessary check. As a relevant side exercise, we show that there exist new states (with no acceleration involved) in higher-dimensional Hilbert spaces that violate the monotonicity of reflected entropy, confirming and extending the work of [17]. Returning to our system, we study the properties of the reflected entropy for all our states. We find a degradation of the correlation between Alice and Bob due to the Unruh effect in all three scenarios. In the limit of infinite acceleration, the reflected entropy reaches a non-zero minimum value. Meanwhile, the Markov gap between Alice and Bob exhibits a monotonic behavior with respect to acceleration: we notice that it increases for the Bell and GHZ states, whereas it decreases with acceleration for the W-state. Furthermore, we have defined a specific dimensionless function of the reflected entropy, which we call a \(\sigma\)-function, which in all scenarios exhibits monotonic behavior with the Unruh temperature and shows interesting properties. This paper is organized as follows: in section 2 we explain the setup, defining the states and the effect of acceleration on them. These are the states we study later in this article. In section 3 we present the results for the reflected entropy and also study its bounds in the non-inertial frames.
In section 4, we analyze the Markov gap which indicates a specific evolution of three party correlation. Next in section 5, we discuss a monotonic \(\sigma\)-function based on reflected entropy. Finally, in section 6, we summarize our results and present some of the future directions of our work. Our results of the main text are supported by three appendices. ## 2 The States and the Non-inertial Observers We consider a free Dirac field in \((1+1)\)-dimensional Minkowski space with coordinates \(x^{\mu}=(t,z)\) \[i\gamma^{\mu}\partial_{\mu}\psi-m\psi=0\, \tag{2.1}\] where \(m\) is the particle mass, \(\psi\) is the spinor wave function and \(\gamma^{\mu}\) are the Dirac gamma matrices. This field may be expanded in terms of positive (fermions) \(\psi_{k}^{+}\) and negative (anti-fermions) \(\psi_{k}^{-}\) energy solutions as \[\psi=\int dk\left(a_{k}\psi_{k}^{+}+b_{k}^{\dagger}\psi_{k}^{-}\right)\, \tag{2.2}\] where \(k\) is the momentum. The Minkowski creation and annihilation operators \((a_{k}^{\dagger},b_{k}^{\dagger})\) and \((a_{k},b_{k})\) for fermions and anti-fermions satisfy the anticommutation relations \[\left\{a_{i},a_{j}^{\dagger}\right\}=\left\{b_{i},b_{j}^{\dagger}\right\}= \delta_{ij}\, \tag{2.3}\] with all other anticommutators vanishing. The Minkowski vacuum state is given as \[\left|0\right\rangle=\prod_{kk^{\prime}}\left|0_{k}\right\rangle^{+}\left|0_ {k^{\prime}}\right\rangle^{-}, \tag{2.4}\] where the \(\{+,-\}\) superscript on the kets indicates the fermion and anti-fermion vacua. Note that as \((a_{k}^{\dagger})^{2}=(b_{k}^{\dagger})^{2}=0\), there are only two allowed states for each mode, \(\left|0_{k}\right\rangle^{+}\)and \(\left|1_{k}\right\rangle^{+}=a_{k}^{\dagger}\left|0_{k}\right\rangle^{+}\) for fermions, and \(\left|0_{k}\right\rangle^{-}\)and \(\left|1_{k}\right\rangle^{-}=b_{k}^{\dagger}\left|0_{k}\right\rangle^{-}\) for anti-fermions. In our work, we consider three distinct scenarios. 
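Since \((a_{k}^{\dagger})^{2}=0\) restricts each mode to the two states \(\left|0_{k}\right\rangle\) and \(\left|1_{k}\right\rangle\), a single fermionic mode can be represented by \(2\times 2\) matrices. A quick numerical check of the relations above (the basis ordering is a choice of this sketch):

```python
import numpy as np

# Single fermionic mode as a two-level system: a|1> = |0>, a|0> = 0.
# Basis ordering (an assumption of this sketch): index 0 -> |0_k>, 1 -> |1_k>.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # annihilation operator
adag = a.conj().T                   # creation operator

assert np.allclose(a @ adag + adag @ a, np.eye(2))  # {a, a^dagger} = 1 (eq. 2.3)
assert np.allclose(adag @ adag, 0)                  # (a^dagger)^2 = 0: exclusion

# Only |0_k> and |1_k> = a^dagger|0_k> are allowed for each mode:
vac = np.array([1.0, 0.0])
print(adag @ vac)  # -> [0. 1.]
```

The same two-level representation underlies the qubit notation used for the Bell, W and GHZ states below.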
In the first case, we consider two non-inertial observers sharing an initially entangled bipartite fermionic field modes described by the Bell state which is given as2 Footnote 2: From now on, we will only consider the fermionic field modes and we will also omit the superscript \(\{+\}\) and subscript \(k\) on the kets. \[\left|B\right\rangle_{AB}=\alpha\left|0\right\rangle_{A}\left|0\right\rangle_{ B}+\sqrt{1-\alpha^{2}}\left|1\right\rangle_{A}\left|1\right\rangle_{B}, \quad\alpha\in(0,1)\, \tag{2.5}\] where the subscripts \(A\) and \(B\) indicate the modes associated with the observers Alice and Bob respectively. In the second and third case we consider two tripartite entangled fermionic field modes represented by the Werner and GHZ states which are given as \[\left|W\right\rangle_{ABC}=\alpha\left|1\right\rangle_{A}\left|0\right\rangle_{ B}\left|0\right\rangle_{C}+\alpha\left|0\right\rangle_{A}\left|0\right\rangle_{ B}\left|1\right\rangle_{C}+\sqrt{1-2\alpha^{2}}\left|0\right\rangle_{A}\left|1 \right\rangle_{B}\left|0\right\rangle_{C},\quad\alpha\in(0,\frac{1}{\sqrt{2}})\, \tag{2.6}\] and \[\left|GHZ\right\rangle_{ABC}=\alpha\left|0\right\rangle_{A}\left|0\right\rangle _{B}\left|0\right\rangle_{C}+\sqrt{1-\alpha^{2}}\left|1\right\rangle_{A}\left| 1\right\rangle_{B}\left|1\right\rangle_{C},\quad\alpha\in(0,1)\, \tag{2.7}\] where the subscripts \(A\), \(B\) and \(C\) indicate the modes associated with the observers Alice, Bob and Charlie. At this stage we need to choose which of the observers is stationary and which is accelerating. For the case of bipartite state eq. (2.5), we choose the observer Alice to be stationary carrying a detector sensitive only to mode \(\ket{n}_{A}\) and Bob moves with uniform acceleration possessing a detector that only detects mode \(\ket{n}_{B}\). As for the tripartite states eqs. 
(2.6) and (2.7), we choose Alice and Charlie, who detect modes \(\ket{n}_{A}\) and \(\ket{n}_{C}\) respectively, to be stationary, and Bob, who detects mode \(\ket{n}_{B}\), to be uniformly accelerating. Rindler coordinates \((\tau,\xi)\) are appropriate to describe an observer moving with uniform acceleration in an inertial plane described by Minkowski coordinates \((t,z)\). To describe the entire Minkowski space, two different sets of Rindler coordinates are required, which differ from each other by an overall change in sign. These sets of coordinates define two causally disconnected Rindler regions \(I\) and \(II\), given by \[t=a^{-1}e^{a\xi}\sinh a\tau,\ \ \ \ z=a^{-1}e^{a\xi}\cosh a\tau\,\ \ \ \text{ region I }, \tag{2.8}\] \[t=-a^{-1}e^{a\xi}\sinh a\tau,\ \ \ z=-a^{-1}e^{a\xi}\cosh a\tau\,\ \ \ \text{ region II },\] where \(a\) denotes the proper acceleration of the observer Bob. Since the Rindler regions \(I\) and \(II\) are causally disconnected, the accelerating observer in either region has no access to the other, which leads to the detection of a thermal mixed state. Henceforth, we will refer to the observer in region \(I\) as Bob (B) and the observer in region \(II\) as anti-Bob (\(\bar{B}\)). The Minkowski and Rindler creation and annihilation operators are related to each other through the Bogoliubov transformation as [3, 18, 19, 20, 21, 22] \[\left[\begin{array}{c}a_{k}\\ b_{-k}^{\dagger}\end{array}\right]=\left[\begin{array}{cc}\cos r&-e^{-i\phi} \sin r\\ e^{i\phi}\sin r&\cos r\end{array}\right]\left[\begin{array}{c}c_{k}^{I}\\ d_{-k}^{I\dagger}\end{array}\right], \tag{2.9}\] where \(\left(c_{k}^{I},d_{k}^{I}\right)\) and \(\left(c_{k}^{I\dagger},d_{k}^{I\dagger}\right)\) are the annihilation and creation operators for fermions and anti-fermions, respectively, in Rindler region \(I\). In eq.
(2.9), \(r=\tan^{-1}\left[\exp\left(-\frac{\pi\omega}{a}\right)\right]\) is the acceleration parameter, ranging over \(0\leqslant r<\pi/4\) corresponding to \(0\leqslant a<\infty\), and \(\omega\) indicates the Rindler mode frequency as measured by the observer Bob with proper acceleration \(a\). The phase \(\phi\) in eq. (2.9) is unimportant and can be absorbed into the definition of the operators. The corresponding annihilation and creation operators in region \(II\) are \((c_{k}^{II},c_{k}^{II\dagger})\) and \((d_{k}^{II},d_{k}^{II\dagger})\) respectively. Similarly, the Bogoliubov transformation that mixes the anti-fermion modes in region \(I\) with the fermion modes in region \(II\) is given as follows \[\left[\begin{array}{c}b_{k}\\ a_{-k}^{\dagger}\end{array}\right]=\left[\begin{array}{cc}\cos r&e^{-i\phi} \sin r\\ -e^{i\phi}\sin r&\cos r\end{array}\right]\left[\begin{array}{c}d_{k}^{I}\\ c_{-k}^{II\dagger}\end{array}\right]. \tag{2.10}\] By quantizing the fermionic field in the Minkowski and Rindler frames, respectively, one can express the Minkowski particle vacuum for Bob's modes in terms of Rindler Fock states through the Bogoliubov transformations as [2, 3]3, Footnote 3: Note that we have employed the single-mode approximation as described in [3]. \[\ket{0}_{B}=\cos r\ket{0}_{B}\ket{0}_{\bar{B}}+\sin r\ket{1}_{B}\ket{1}_{\bar{B}}, \tag{2.11}\] and the excited state \(\ket{1}_{B}\) is given as \[\ket{1}_{B}=\ket{1}_{B}\ket{0}_{\bar{B}}. \tag{2.12}\] Note that as Bob accelerates through the Minkowski vacuum \(\ket{0}\), his detector registers a particle number given by \[\langle 0|c_{k}^{I\dagger}c_{k}^{I}|0\rangle_{B}=\frac{1}{1+e^{\hbar\omega/(k_{B}T)}}\, \tag{2.13}\] where the Unruh temperature \(T\) is related to the proper acceleration \(a\) as \[T=\frac{a}{2\pi}. \tag{2.14}\]

### Bell state

The bipartite fermionic field modes described by the Bell state (2.5) may be expressed by employing eqs.
(2.11) and (2.12) as \[\ket{B}_{AB}=\alpha\cos r|000\rangle_{AB\bar{B}}+\alpha\sin r|011\rangle_{AB\bar{B}}+\sqrt{1-\alpha^{2}}|110\rangle_{AB\bar{B}}, \tag{2.15}\] where we have denoted \(\ket{l}_{A}\ket{m}_{B}\ket{n}_{\bar{B}}=\ket{lmn}_{AB\bar{B}}\) and, for simplicity, henceforth we will denote \(\ket{lmn}_{AB\bar{B}}\) as \(\ket{lmn}\). The mixed density matrices for Alice-Bob (\(AB\)), Alice-anti-Bob (\(A\bar{B}\)) and Bob-anti-Bob (\(B\bar{B}\)) are given as follows \[\rho_{AB}^{(B)}=\alpha^{2}\cos^{2}r|00\rangle\langle 00|+\alpha\sqrt{1-\alpha^{2}}\cos r(|00\rangle\langle 11|+|11\rangle\langle 00|)+\alpha^{2}\sin^{2}r|01\rangle\langle 01|+(1-\alpha^{2})|11\rangle\langle 11|\, \tag{2.16}\] \[\rho_{A\bar{B}}^{(B)}=\alpha^{2}\cos^{2}r|00\rangle\langle 00|+\alpha^{2}\sin^{2}r|01\rangle\langle 01|+\alpha\sqrt{1-\alpha^{2}}\sin r(|01\rangle\langle 10|+|10\rangle\langle 01|)+(1-\alpha^{2})|10\rangle\langle 10|\,\] \[\rho_{B\bar{B}}^{(B)}=\alpha^{2}\cos^{2}r|00\rangle\langle 00|+\alpha^{2}\cos r\sin r(|00\rangle\langle 11|+|11\rangle\langle 00|)+(1-\alpha^{2})|10\rangle\langle 10|+\alpha^{2}\sin^{2}r|11\rangle\langle 11|\,\] where the superscript refers to the state, in this case the Bell state. Similarly, the density matrices for Alice, Bob and anti-Bob are \(\rho_{A}^{(B)}\), \(\rho_{B}^{(B)}\) and \(\rho_{\bar{B}}^{(B)}\), respectively. They can be found as \[\begin{split}\rho_{A}^{(B)}=&\alpha^{2}|0\rangle\langle 0|+\left(1-\alpha^{2}\right)|1\rangle\langle 1|\,\\ \rho_{B}^{(B)}=&\alpha^{2}\cos^{2}r|0\rangle\langle 0|+\left(1-\alpha^{2}\cos^{2}r\right)|1\rangle\langle 1|\,\\ \rho_{\bar{B}}^{(B)}=&\big{(}1-\alpha^{2}\sin^{2}r\big{)}|0\rangle\langle 0|+\alpha^{2}\sin^{2}r|1\rangle\langle 1|\.\end{split} \tag{2.17}\]

### Werner state

The tripartite entangled fermionic mode described by the W-state in eq. (2.6) may be expressed as follows by employing eqs.
(2.11) and (2.12) \[\begin{split}|W\rangle_{ABC}=&\alpha\cos r|1000\rangle_{AB\bar{B}C}+\alpha\sin r|1110\rangle_{AB\bar{B}C}+\alpha\cos r|0001\rangle_{AB\bar{B}C}+\alpha\sin r|0111\rangle_{AB\bar{B}C}+\sqrt{1-2\alpha^{2}}|0100\rangle_{AB\bar{B}C}.\end{split} \tag{2.18}\] The density matrices of \(AB\), \(A\bar{B}\), and \(B\bar{B}\) are given as \[\begin{split}\rho_{AB}^{(W)}=&\alpha^{2}\cos^{2}r|00\rangle\langle 00|+\left((1-2\alpha^{2})+\alpha^{2}\sin^{2}r\right)|01\rangle\langle 01|+\alpha\sqrt{1-2\alpha^{2}}\cos r(|10\rangle\langle 01|+|01\rangle\langle 10|)\\ &+\alpha^{2}\cos^{2}r|10\rangle\langle 10|+\alpha^{2}\sin^{2}r|11\rangle\langle 11|\,\\ \rho_{A\bar{B}}^{(W)}=&((1-2\alpha^{2})+\alpha^{2}\cos^{2}r)|00\rangle\langle 00|+\alpha^{2}\sin^{2}r|01\rangle\langle 01|+\alpha\sqrt{1-2\alpha^{2}}\sin r(|00\rangle\langle 11|+|11\rangle\langle 00|)\\ &+\alpha^{2}\cos^{2}r|10\rangle\langle 10|+\alpha^{2}\sin^{2}r|11\rangle\langle 11|\,\\ \rho_{B\bar{B}}^{(W)}=& 2\alpha^{2}\cos^{2}r|00\rangle\langle 00|+2\alpha^{2}\cos r\sin r(|00\rangle\langle 11|+|11\rangle\langle 00|)+2\alpha^{2}\sin^{2}r|11\rangle\langle 11|+(1-2\alpha^{2})|10\rangle\langle 10|\.\end{split} \tag{2.19}\] The density matrices of \(A\), \(B\) and \(\bar{B}\) are \[\begin{split}\rho_{A}^{(W)}=&(1-\alpha^{2})|0\rangle\langle 0|+\alpha^{2}|1\rangle\langle 1|\,\\ \rho_{B}^{(W)}=& 2\alpha^{2}\cos^{2}r|0\rangle\langle 0|+(1-2\alpha^{2}\cos^{2}r)|1\rangle\langle 1|\,\\ \rho_{\bar{B}}^{(W)}=&(1-2\alpha^{2}\sin^{2}r)|0\rangle\langle 0|+2\alpha^{2}\sin^{2}r|1\rangle\langle 1|\.\end{split} \tag{2.20}\]

### Greenberger-Horne-Zeilinger state

By employing eqs. (2.11) and (2.12), the GHZ state eq. (2.7) may further be expressed as \[|GHZ\rangle_{ABC}=\alpha\cos r|0000\rangle_{AB\bar{B}C}+\alpha\sin r|0110\rangle_{AB\bar{B}C}+\sqrt{1-\alpha^{2}}|1101\rangle_{AB\bar{B}C}.
\tag{2.21}\] The density matrices of \(AB\), \(A\bar{B}\), and \(B\bar{B}\) are as follows \[\begin{split}\rho_{AB}^{(GHZ)}=&\alpha^{2}\cos^{2}r|00\rangle\langle 00|+\alpha^{2}\sin^{2}r|01\rangle\langle 01|+(1-\alpha^{2})|11\rangle\langle 11|\,\\ \rho_{A\bar{B}}^{(GHZ)}=&\alpha^{2}\cos^{2}r|00\rangle\langle 00|+\alpha^{2}\sin^{2}r|01\rangle\langle 01|+(1-\alpha^{2})|10\rangle\langle 10|\,\\ \rho_{B\bar{B}}^{(GHZ)}=&\alpha^{2}\cos^{2}r|00\rangle\langle 00|+\alpha^{2}\cos r\sin r(|00\rangle\langle 11|+|11\rangle\langle 00|)+(1-\alpha^{2})|10\rangle\langle 10|+\alpha^{2}\sin^{2}r|11\rangle\langle 11|\,\end{split} \tag{2.22}\] while the density matrices of \(A\), \(B\) and \(\bar{B}\) read \[\begin{split}\rho_{A}^{(GHZ)}=&\alpha^{2}|0\rangle\langle 0|+(1-\alpha^{2})|1\rangle\langle 1|\,\\ \rho_{B}^{(GHZ)}=&\alpha^{2}\cos^{2}r|0\rangle\langle 0|+(1-\alpha^{2}\cos^{2}r)|1\rangle\langle 1|\,\\ \rho_{\bar{B}}^{(GHZ)}=&(1-\alpha^{2}\sin^{2}r)|0\rangle\langle 0|+\alpha^{2}\sin^{2}r|1\rangle\langle 1|\.\end{split} \tag{2.23}\]

## 3 Reflected entropy

In this section we study the reflected entropy between the observers Alice-Bob (\(AB\)), Alice-anti-Bob (\(A\bar{B}\)) and Bob-anti-Bob (\(B\bar{B}\)) for the Bell state given by eq. (2.15), the W-state given by eq. (2.18) and the GHZ state given by eq. (2.21), respectively. Before delving into the details of the computation, we briefly review the reflected entropy in quantum information theory. To begin with, we consider a bipartite density matrix \(\rho_{AB}\) in a Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), where \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) are the Hilbert spaces associated with the subsystems \(A\) and \(B\) respectively. The entanglement entropy of the subsystem \(A\) is defined as the von Neumann entropy of the reduced density matrix \(\rho_{A}=\operatorname{Tr}_{B}\rho_{AB}\) as \[S(A)=-\operatorname{Tr}\left(\rho_{A}\log\rho_{A}\right).
\tag{3.1}\] The mutual information, which measures the total correlation between the subsystems \(A\) and \(B\), is defined as \[I(A:B)=S(A)+S(B)-S(AB)\, \tag{3.2}\] which is symmetric in \(A\) and \(B\). As mentioned in the introduction, for mixed states the entanglement entropy is not the most appropriate entanglement measure, and other mixed-state entanglement measures are to be used. Note that any mixed state \(\rho_{AB}\) in quantum information theory may be expressed as a sum of pure states \[\rho_{AB}=\sum_{a}p_{a}\rho_{AB}^{(a)}\,\quad\rho_{AB}^{(a)}=\left|\phi_{a}\right\rangle\left\langle\phi_{a}\right|, \tag{3.3}\] where \(\left|\phi_{a}\right\rangle\) is an orthonormal basis of \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), and the eigenvalues \(p_{a}\) are non-negative, \(0\leq p_{a}\leq 1\). We construct the Schmidt decomposition of \(\left|\phi_{a}\right\rangle\) by choosing appropriate bases \(\left|i_{a}\right\rangle_{A}\in\mathcal{H}_{A}\) and \(\left|i_{a}\right\rangle_{B}\in\mathcal{H}_{B}\) as \[\left|\phi_{a}\right\rangle=\sum_{i}\sqrt{l_{a}^{i}}\left|i_{a}\right\rangle_{A}\left|i_{a}\right\rangle_{B}\, \tag{3.4}\] where \(l_{a}^{i}\) is a non-negative quantity with the normalization \(\sum_{i}l_{a}^{i}=1\). By using eq. (3.4), the density matrix eq. (3.3) may be expressed as \[\rho_{AB}=\sum_{a,i,j}p_{a}\sqrt{l_{a}^{i}l_{a}^{j}}|i_{a}\rangle_{A}|i_{a}\rangle_{B}\langle j_{a}|_{A}\langle j_{a}|_{B}. \tag{3.5}\] We now interpret \(\langle j_{a}|_{A}\) and \(\langle j_{a}|_{B}\) as states \(|j_{a}\rangle_{A^{*}}\) and \(|j_{a}\rangle_{B^{*}}\) on the Hilbert spaces \(\mathcal{H}_{A}^{*}\) and \(\mathcal{H}_{B}^{*}\) respectively, and define a pure state \(\left|\sqrt{\rho_{AB}}\right\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{A}^{*}\otimes\mathcal{H}_{B}^{*}\) as \[\left|\sqrt{\rho_{AB}}\right\rangle=\sum_{a,i,j}\sqrt{p_{a}l_{a}^{i}l_{a}^{j}}|i_{a}\rangle_{A}|i_{a}\rangle_{B}|j_{a}\rangle_{A^{*}}|j_{a}\rangle_{B^{*}}.
\tag{3.6}\] This state \(\left|\sqrt{\rho_{AB}}\right\rangle\) is known as the canonical purification of the state \(\rho_{AB}\). The reflected entropy between \(A\) and \(B\) for \(\rho_{AB}\) is defined as the von Neumann entropy of \(\rho_{AA^{*}}=\mathrm{Tr}_{BB^{*}}\left|\sqrt{\rho_{AB}}\right\rangle\left\langle\sqrt{\rho_{AB}}\right|\), which is given as [23, 24, 25, 26, 10] \[S_{R}(A:B)=-\mathrm{Tr}_{AA^{*}}\left[\rho_{AA^{*}}\log\rho_{AA^{*}}\right]. \tag{3.7}\] It is interesting to note that the reflected entropy is upper bounded by \(\min\{2S_{A},2S_{B}\}\) and lower bounded by the mutual information \(I(A:B)\) as \[\min\{2S_{A},2S_{B}\}\geq S_{R}(A:B)\geq I(A:B). \tag{3.8}\] For any tripartite pure state, the reflected entropy satisfies the polygamy inequality, which is given as \[S_{R}(A:B)+S_{R}(A:C)\geq S_{R}(A:BC). \tag{3.9}\] Apart from these, the reflected entropy can also distinguish isospectral density matrices [27]. An example of such a pair of density matrices is \[\rho_{1}=\frac{1}{3}\left(\begin{array}{cccc}1&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&0\end{array}\right)\,\qquad\rho_{2}=\frac{1}{3}\left(\begin{array}{cccc}1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&2\end{array}\right)\, \tag{3.10}\] where \(\rho_{1}\) and \(\rho_{2}\) are written in the basis \(\{\left|00\right\rangle,\left|01\right\rangle,\left|10\right\rangle,\left|11\right\rangle\}\) and obtained by tracing out one party from the W-state \(\left|W\right\rangle_{ABC}=\frac{1}{\sqrt{3}}(\left|100\right\rangle+\left|010\right\rangle+\left|001\right\rangle)\) and the GHZ state \(\left|GHZ\right\rangle_{ABC}=\frac{1}{\sqrt{3}}(\left|000\right\rangle+\sqrt{2}\left|111\right\rangle)\), respectively. For example, in this case one can compute \(S_{R}(\rho_{1})=1.49\) and \(S_{R}(\rho_{2})=0.92\), which clearly distinguishes these two isospectral density matrices. We now turn to the computation of the reflected entropy for the bipartite and tripartite fermionic field modes described in eqs. (2.5) to (2.7).
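The quoted values \(S_{R}(\rho_{1})=1.49\) and \(S_{R}(\rho_{2})=0.92\) (in bits) can be reproduced numerically by building the canonical purification of eq. (3.6) directly from the matrix square root of \(\rho_{AB}\). A sketch, assuming base-2 logarithms:

```python
import numpy as np

def vn_entropy(rho, base=2):
    """Von Neumann entropy; base-2 matches the quoted values."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum() / np.log(base))

def reflected_entropy(rho_ab, dA, dB):
    """S_R(A:B) via the canonical purification |sqrt(rho_AB)> of eq. (3.6)."""
    # Hermitian square root through the eigendecomposition.
    w, v = np.linalg.eigh(rho_ab)
    x = ((v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T).reshape(dA, dB, dA, dB)
    # rho_{AA*}[(a,a*),(c,c*)] = sum_{b,b*} x[a,b,a*,b*] * conj(x[c,b,c*,b*])
    rho_aa = np.einsum('abcd,ebfd->acef', x, x.conj()).reshape(dA * dA, dA * dA)
    return vn_entropy(rho_aa)

# The isospectral pair of eq. (3.10), in the basis {|00>, |01>, |10>, |11>}.
rho1 = np.array([[1, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]]) / 3
rho2 = np.diag([1.0, 0.0, 0.0, 2.0]) / 3

print(round(reflected_entropy(rho1, 2, 2), 2))  # -> 1.49
print(round(reflected_entropy(rho2, 2, 2), 2))  # -> 0.92
```

Both matrices have spectrum \(\{2/3,1/3,0,0\}\), so an entropy of the eigenvalues alone cannot tell them apart; only the purification step distinguishes them.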
To compute the reflected entropies \(S_{R}(A:B)\), \(S_{R}(A:\bar{B})\), and \(S_{R}(B:\bar{B})\) between \(AB\), \(A\bar{B}\), and \(B\bar{B}\), we first construct the canonically purified states \(\left|\sqrt{\rho_{AB}}\right\rangle\), \(\left|\sqrt{\rho_{A\bar{B}}}\right\rangle\), and \(\left|\sqrt{\rho_{B\bar{B}}}\right\rangle\) by doubling the Hilbert space as in eq. (3.6). The reflected entropies \(S_{R}(A:B)\), \(S_{R}(A:\bar{B})\), and \(S_{R}(B:\bar{B})\) are then obtained by using eq. (3.7) for the Bell, W and GHZ states (see section A for details). Note that in the inertial frame \(r=0\); \(\alpha=\frac{1}{\sqrt{2}}\) corresponds to the maximally entangled Bell and GHZ states, and \(\alpha=\frac{1}{\sqrt{3}}\) to the maximally entangled W-state. In figs. 1a to 1c, we plot \(S_{R}(A:B)\), \(S_{R}(A:\bar{B})\), and \(S_{R}(B:\bar{B})\) as functions of the acceleration \(r\) at fixed \(\alpha\) for the Bell, W and GHZ states respectively. We notice that \(S_{R}(A:B)\) decreases whereas \(S_{R}(A:\bar{B})\) increases due to the Unruh effect in all three cases. Furthermore, in the infinite acceleration limit they both reach the same non-vanishing final value, which indicates that the observers \(B\) and \(\bar{B}\) become indistinguishable in this limit. As the correlation between \(A\) and \(B\) decreases, the correlation between \(A\) and \(\bar{B}\) grows, reflecting the redistribution of correlations. Indeed, this phenomenon has also been observed for other entanglement measures, e.g. the entanglement negativity [3]. On the other hand, \(S_{R}(B:\bar{B})\) increases monotonically, starting from zero at \(r=0\) and culminating in a non-zero final value in the infinite acceleration limit \(r=\frac{\pi}{4}\). Let us now briefly discuss some of the recent developments which raised a concern on the generic validity and applicability of the reflected entropy as a correlation measure in quantum information theory. 
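Before turning to that discussion, the behavior just described can be reproduced with a short numerical sketch. Since eqs. (2.5)-(2.7) are not reproduced in this excerpt, the construction below assumes the standard single-mode fermionic Unruh expansion, \(|0\rangle_{M}=\cos r\,|0\rangle_{I}|0\rangle_{II}+\sin r\,|1\rangle_{I}|1\rangle_{II}\) and \(|1\rangle_{M}=|1\rangle_{I}|0\rangle_{II}\), applied to Bob's mode of the maximally entangled Bell state; this is an illustrative assumption, not a verbatim reproduction of the paper's setup:

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def reflected_entropy(rho, dA=2, dB=2):
    """S_R via the canonical purification |sqrt(rho)>, eqs. (3.6)-(3.7)."""
    w, V = np.linalg.eigh(rho)
    sqrt_rho = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    psi = sqrt_rho.reshape(dA, dB, dA, dB)
    rho_AAs = np.einsum('abcd,ebfd->acef', psi, psi.conj()).reshape(dA * dA, dA * dA)
    return entropy(rho_AAs)

def bell_state_rindler(r, alpha=1/np.sqrt(2)):
    """|Phi> = alpha|0_A 0_B> + sqrt(1-alpha^2)|1_A 1_B>, Bob's Minkowski mode
    expanded into Rindler wedges I (Bob) and II (anti-Bob); assumed convention."""
    psi = np.zeros((2, 2, 2))  # tensor indices: A, B (wedge I), anti-B (wedge II)
    psi[0, 0, 0] = alpha * np.cos(r)
    psi[0, 1, 1] = alpha * np.sin(r)
    psi[1, 1, 0] = np.sqrt(1 - alpha**2)
    return psi

def marginal(psi, keep):
    """Two-party density matrix for the parties listed in `keep` (subset of {0,1,2})."""
    traced = [k for k in range(3) if k not in keep]
    p = np.moveaxis(psi, keep + traced, range(3)).reshape(4, 2)
    return p @ p.conj().T

for r in (0.0, np.pi / 8, np.pi / 4):
    psi = bell_state_rindler(r)
    s_ab = reflected_entropy(marginal(psi, [0, 1]))
    s_abbar = reflected_entropy(marginal(psi, [0, 2]))
    print(f"r={r:.3f}  S_R(A:B)={s_ab:.3f}  S_R(A:Bbar)={s_abbar:.3f}")
```

Under this assumption, \(S_{R}(A:B)\) starts at 2 (pure maximally entangled state) and degrades with \(r\), \(S_{R}(A:\bar{B})\) starts at 0 and grows, and the two coincide exactly at \(r=\pi/4\), in line with the indistinguishability of \(B\) and \(\bar{B}\) in the infinite acceleration limit.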
It has been recently noticed [17] that in a qutrit-qutrit-qubit system there exist quantum states which violate the monotonicity of reflected entropy under the operation of partial trace. Nevertheless, it remains an important quantity in the context of holography, where the entanglement wedge cross section is considered as a bulk dual of the reflected entropy [10]. Therefore, utilizing the nesting property of the entanglement wedge, it can be argued that reflected entropy in holographic CFT does not suffer from non-monotonicity [10, 28]. However, for the states in this work it is essential to confirm that the reflected entropy does reduce under partial tracing of the degrees of freedom. As a side development to this task we confirm and extend the work of [17], by showing that there exists another three-party state in the Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}=\mathbb{C}^{4} \otimes\mathbb{C}^{3}\otimes\mathbb{C}^{2}\) which violates the monotonicity of the \(\xi\)-th Renyi reflected entropy in the domain \(\xi\in(0,2)\), \[\rho_{ABC}=\frac{1}{6a+2b}\Big{[}a\Big{(}|000\rangle\langle 000|+|110 \rangle\langle 110|+|200\rangle\langle 200|+|210\rangle\langle 210|+ |300\rangle\langle 300|+|310\rangle\langle 310|\Big{)} \tag{3.11}\] \[+b\Big{(}|020\rangle\langle 020|+|121\rangle\langle 121|\Big{)} \Big{]}.\] In the above expression, \(a\) and \(b\) are two parameters which can be treated as classical probabilities. Using the state in eq. (3.11), one can compute the \(\xi\)-th Renyi reflected entropy and check the monotonicity under partial trace. It is observed that for some range of the parameters \(a\) and \(b\), the quantity \(S_{R}^{\xi}(A:BC)-S_{R}^{\xi}(A:B)\) becomes negative (fig. 2(a)). Checking the conditions numerically shows that \(a\) should be larger than \(b\). Similar to [17], an increasing value of \(\frac{a}{b}\) pushes the region of violation towards \(\xi=2\). 
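The violation can be checked directly with a short numerical sketch (not from the paper). The state of eq. (3.11) is diagonal in the computational basis (as its normalization \(1/(6a+2b)\) and the generalization in appendix C indicate); the code below builds it, forms the canonical purification for both the \(A{:}BC\) and \(A{:}B\) cuts, and compares the \(\xi\)-Renyi entropies of \(\rho_{AA^{*}}\) for the representative choice \(a/b=10\):

```python
import numpy as np

def renyi_entropy(rho, xi):
    """(1/(1-xi)) log Tr rho^xi in natural log; xi=1 gives the von Neumann entropy."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    if abs(xi - 1.0) < 1e-9:
        return float(-np.sum(w * np.log(w)))
    return float(np.log(np.sum(w**xi)) / (1.0 - xi))

def renyi_reflected_entropy(rho, dA, dB, xi):
    """xi-Renyi entropy of rho_{AA*} built from the purification |sqrt(rho)>."""
    w, V = np.linalg.eigh(rho)
    sqrt_rho = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    psi = sqrt_rho.reshape(dA, dB, dA, dB)
    rho_AAs = np.einsum('abcd,ebfd->acef', psi, psi.conj()).reshape(dA * dA, dA * dA)
    return renyi_entropy(rho_AAs, xi)

def state_311(a, b):
    """Diagonal state of eq. (3.11) on C^4 x C^3 x C^2; basis index = 6A + 2B + C."""
    diag = np.zeros(24)
    for A, B, C, p in [(0, 0, 0, a), (1, 1, 0, a), (2, 0, 0, a), (2, 1, 0, a),
                       (3, 0, 0, a), (3, 1, 0, a), (0, 2, 0, b), (1, 2, 1, b)]:
        diag[6 * A + 2 * B + C] = p
    return np.diag(diag / (6 * a + 2 * b))

rho_ABC = state_311(a=10.0, b=1.0)
# partial trace over C to obtain rho_AB
rho_AB = np.einsum('abcdec->abde', rho_ABC.reshape(4, 3, 2, 4, 3, 2)).reshape(12, 12)
diff = (renyi_reflected_entropy(rho_ABC, 4, 6, 0.5)
        - renyi_reflected_entropy(rho_AB, 4, 3, 0.5))
print(round(diff, 3))  # ≈ -0.04: monotonicity under partial trace is violated
```

Since the state is diagonal, the purification reduces to a Gram matrix of the classical probability vectors, which is why the computation is numerically benign.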
Furthermore, for a fixed value of \(\xi\), it can be observed in fig. 2(b) that the violation of monotonicity occurs at different values of the ratio \(p=\frac{a}{b}\). The state in eq. (3.11) can be generalized to Hilbert spaces of arbitrary dimension, i.e. \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}=\mathbb{C}^{n+1} \otimes\mathbb{C}^{m+1}\otimes\mathbb{C}^{2}\), where violation of monotonicity is observed for the Renyi reflected entropy, as we show in section C. For the states considered in this study, we are able to confirm that the reflected entropy does reduce under partial tracing of the degrees of freedom. We include some representative results in section C. Consequently, we argue that reflected entropy is a good correlation measure for our states and the non-inertial observers in our setup. Figure 1: Reflected entropy for the Bell, W and GHZ states plotted as a function of acceleration. ### Bounds of reflected entropy In figs. 3 to 5, we provide an illustrative representation of the upper and lower bounds obeyed by \(S_{R}(A:B)\), as given in eq. (3.8), for the Bell, W and GHZ states respectively. For the Bell state, the density matrix \(\rho_{AB}\) at \(r=0\) is pure and entangled, hence the reflected entropy \(S_{R}(A:B)\) saturates both the upper and lower bounds. Interestingly, increasing \(r\) induces tripartite entanglement into the system, which leads to the non-saturation of the bounds depicted in fig. 3(a). In fig. 3(b) we observe that for \(\alpha=0\) and \(\alpha=1\) both bounds are saturated as expected; near \(\alpha=0\), \(S_{R}(A:B)\) (blue solid curve) follows \(I(A:B)\) (red dot-dashed curve) more closely, whereas close to \(\alpha=1\) it follows \(\min\{2S_{A},2S_{B}\}\) (orange dashed curve). 
We notice a clear shift of dominance in \(\min\{2S_{A},2S_{B}\}\) (orange dashed curve) from \(2S_{A}\) to \(2S_{B}\) close to \(\alpha\simeq 0.8\); the exact value of \(\alpha\) at which this happens depends on the parameters chosen for the state. On the other hand, for the W-state \(\rho_{AB}\) at \(r=0\) is mixed and entangled; as a result none of the bounds is saturated, indicating the existence of tripartite entanglement, which increases with \(r\), fig. 4(a). In fig. 4(b) we see that for \(\alpha=0\) both bounds are saturated and at \(\alpha=1\) only the lower bound is saturated. We also observe that, unlike for the Bell state, for the W-state \(S_{R}(A:B)\) (blue solid curve) near \(\alpha=0\) follows \(\min\{2S_{A},2S_{B}\}\) (orange dashed curve) closely, whereas close to \(\alpha=1/\sqrt{2}\) it comes closer to \(I(A:B)\) (red dot-dashed curve). Furthermore, we observe a change of dominance in \(\min\{2S_{A},2S_{B}\}\) from \(2S_{A}\) to \(2S_{B}\) near \(\alpha\simeq 0.6\), as in the previous case. As for the GHZ state, at \(r=0\) the density matrix \(\rho_{AB}\) is mixed and separable, hence only the lower bound is saturated. With increasing \(r\) the reflected entropy \(S_{R}(A:B)\) (blue solid curve) decreases and none of the bounds is saturated at large \(r\), as can be seen in fig. 5(a). This indicates the existence of tripartite entanglement at finite \(r\). When \(S_{R}(A:B)\) is plotted as a function of \(\alpha\) at fixed \(r\), we observe that both bounds are saturated at \(\alpha=0\) and \(\alpha=1\), as presented in fig. 5(b). Notice the clear change of dominance of \(\min\{2S_{A},2S_{B}\}\) (orange dashed) near \(\alpha\simeq 0.8\) from \(2S_{A}\) to \(2S_{B}\). Figure 3: (\(a\)) Reflected entropy \(S_{R}(A:B)\) for the maximally entangled Bell state as a function of \(r\), compared with its upper and lower bounds. 
(\(b\)) Reflected entropy \(S_{R}(A:B)\) as a function of \(\alpha\) for \(r=\frac{\pi}{4}\), compared with its upper and lower bounds. Figure 2: Monotonicity of Renyi reflected entropy under partial tracing for the state in eq. (3.11). ## 4 Markov gap In this section we study the Markov gap \(h\), which has been proposed as a measure of tripartite entanglement [8]. For a bipartite system \(A\cup B\), it is defined as the difference between the reflected entropy and the mutual information [6, 16, 29] \[h(A:B)=S_{R}(A:B)-I(A:B). \tag{4.1}\] This quantity is identified with the conditional mutual information [10] \[h(A:B)=I\left(A:B^{\star}\mid B\right)=I\left(B:A^{\star}\mid A\right)\, \tag{4.2}\] where the conditional mutual information is defined in terms of a linear combination of entanglement entropies as follows \[I(A:C\mid B)=S(AB)+S(BC)-S(ABC)-S(B)=I(A:BC)-I(A:B). \tag{4.3}\] The fidelity of a Markov recovery process is related to the conditional mutual information as [30] \[\max_{\mathcal{R}_{B\to BC}}F\left(\rho_{ABC},\mathcal{R}_{B\to BC} \left(\rho_{AB}\right)\right)\geq e^{-I(A:C\mid B)}. \tag{4.4}\] Here the Markov recovery process is understood as a technique to obtain the state \(\rho_{ABC}\) from any of its bipartite reduced states using the Markov recovery map \(\mathcal{R}_{B\to BC}\)4. The quantity \(F\) in eq. (4.4) is known as the quantum fidelity, which for two density matrices \(\rho\) and \(\sigma\) is defined as Footnote 4: The Markov recovery map is essentially a quantum channel which produces a bipartite system from a single-party system. \[F(\rho,\sigma)=\left[\mathrm{Tr}\,\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right]^{ 2}\,. \tag{4.5}\] Figure 4: (\(a\)) The reflected entropy \(S_{R}(A:B)\) for the maximally entangled W-state as a function of \(r\), compared with its upper and lower bounds. (\(b\)) Reflected entropy \(S_{R}(A:B)\) as a function of \(\alpha\) for \(r=\frac{\pi}{8}\), compared with its upper and lower bounds. 
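As an illustration (not from the paper), the Markov gap of eq. (4.1) can be evaluated for the two inertial-frame marginals of eq. (3.10): the W-state reduction carries tripartite entanglement and has \(h>0\), while the GHZ-type reduction is classically correlated and saturates the lower bound \(S_{R}=I\), giving \(h=0\):

```python
import numpy as np

def entropy(rho, base=2.0):
    """Von Neumann entropy from the eigenvalues of a density matrix."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)) / np.log(base))

def reflected_entropy(rho, dA=2, dB=2):
    """S_R(A:B), eq. (3.7), via the canonical purification |sqrt(rho)>."""
    w, V = np.linalg.eigh(rho)
    sqrt_rho = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    psi = sqrt_rho.reshape(dA, dB, dA, dB)
    rho_AAs = np.einsum('abcd,ebfd->acef', psi, psi.conj()).reshape(dA * dA, dA * dA)
    return entropy(rho_AAs)

def markov_gap(rho):
    """h(A:B) = S_R(A:B) - I(A:B), eq. (4.1), for a two-qubit state."""
    t = rho.reshape(2, 2, 2, 2)
    rho_A = np.einsum('abcb->ac', t)
    rho_B = np.einsum('abad->bd', t)
    mutual_info = entropy(rho_A) + entropy(rho_B) - entropy(rho)
    return reflected_entropy(rho) - mutual_info

rho_W = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]) / 3.0
rho_GHZ = np.diag([1.0, 0.0, 0.0, 2.0]) / 3.0
print(round(markov_gap(rho_W), 2))   # 0.57 > 0: tripartite entanglement present
print(round(abs(markov_gap(rho_GHZ)), 2))  # ≈ 0: lower bound S_R = I saturated
```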
Figure 5: (\(a\)) The reflected entropy \(S_{R}(A:B)\) for the maximally entangled GHZ state as a function of \(r\), compared with its upper and lower bounds. (\(b\)) Reflected entropy \(S_{R}(A:B)\) as a function of \(\alpha\) for \(r=\frac{\pi}{4}\), compared with its upper and lower bounds. Note that it is symmetric in its arguments and lies in the range \(0\leq F(\rho,\sigma)\leq 1\). Utilizing the canonically purified state \(\rho_{ABA^{*}B^{*}}\), an inequality can be proposed as [16] \[h(A:B)\geq-\max_{\mathcal{R}_{B\to BB^{*}}}\log F\left(\rho_{ABB^{*}}, \mathcal{R}_{B\to BB^{*}}\left(\rho_{AB}\right)\right), \tag{4.6}\] where eqs. (4.2) and (4.4) are used to obtain the above equation. The Markov gap can be studied in the present setup, where we investigate three-party (Alice-Bob-anti-Bob) entanglement for the Bell, W and GHZ states in a non-inertial frame. The characteristic behavior of the Markov gaps \(h(A:B)\), \(h(A:\bar{B})\) and \(h(B:\bar{B})\) as functions of the acceleration \(r\) for constant \(\alpha\) is depicted in fig. 6. Interestingly, we observe that the Markov gap in all three cases increases monotonically for the Bell state, fig. 6(a), and the GHZ state, fig. 6(c), whereas for the W-state \(h(A:B)\) decreases but \(h(A:\bar{B})\) and \(h(B:\bar{B})\) increase monotonically, fig. 6(b). These figures indicate a few characteristics of multipartite entanglement in these three states. For the Bell state, the entanglement is purely bipartite at \(r=0\) and consequently the Markov gap vanishes. Anti-Bob (\(\bar{B}\)) evolves with increasing acceleration, which creates tripartite correlation in the system. As a result, the Markov gaps \(h(A:B)\), \(h(A:\bar{B})\) and \(h(B:\bar{B})\) increase with the acceleration. Interestingly, the authors in [3] studied the evolution of three-party correlations by exploring a measure named the residual tangle [14]. Their system under consideration was the same as the first case in this article, i.e. 
the Bell state with accelerating Bob. It was found that the residual tangle is zero for any value of the acceleration. This result was interpreted as the absence of tripartite correlation, with all the entanglement present in the system being bipartite in nature. As the Markov gap is sensitive to tripartite entanglement, our results can be interpreted as indicating the presence of three-party entanglement in the Bell state under acceleration even if the residual tangle vanishes. This behavior of the Markov gap suggests that it might serve as a fine probe of multipartite entanglement. On the other hand, the W-state has tripartite entanglement between Alice, Bob and anti-Bob already in the inertial frame (\(r=0\)), which is indicated by the non-zero initial value of \(h(A:B)\). Furthermore, anti-Bob does not exist in the inertial frame, where the Markov gaps involving him vanish. The Markov gap \(h(A:B)\) shows a monotonically decreasing behavior because of the entanglement sharing between Alice, Bob and anti-Bob with increasing acceleration. Note that, at \(r=\frac{\pi}{4}\), the Markov gap \(h(A:B)\) coincides with \(h(A:\bar{B})\), similar to other findings in this article and in [3]. Furthermore, for the GHZ state, \(h(A:B)\), \(h(A:\bar{B})\) and \(h(B:\bar{B})\) increase monotonically as functions of \(r\), starting from Figure 6: Markov gap for the Bell, W and GHZ states plotted as a function of acceleration. zero at \(r=0\). The nature of the tripartite entanglement computed by the Markov gap for the GHZ state is similar to that of the Bell state with accelerating Bob, as depicted in fig. 6(c). ## 5 A monotonic function for reflected entropy In this section we study a few properties of the reflected entropy in our setup by defining a specific function of temperature and frequency. 
Here we use the relation between the acceleration and the Unruh temperature, \[r=\tan^{-1}(e^{-\frac{\omega}{2T}})\, \tag{5.1}\] to obtain the characteristics of the reflected entropy with respect to the Unruh temperature \(T\). We find that for fixed \(\omega\) and increasing \(T\), all the maximally entangled states exhibit a monotonically decreasing behavior of \(S_{R}(A:B)\) and \(I(A:B)\) with \(T\). We also notice that the dimensionless, single-parameter function \(\sigma(T)\), which we define as \[\sigma(T)=\frac{1}{\omega}\frac{\partial S_{R}}{\partial\left(\frac{1}{T} \right)}\, \tag{5.2}\] where \(\omega\) can be considered as the fixed scale, has monotonic properties with respect to the increase of temperature. In figs. 7(a) to 7(c), we observe that \(\sigma(T)\) increases monotonically with increasing Unruh temperature, meaning that the entanglement measure \(S_{R}\) we are interested in decreases for increasing acceleration. The \(\sigma\)-function tends to zero for \(T\to 0\) and for \(T\to\infty\) saturates to fixed values which are different for each state and independent of \(\omega\). Notice that the \(\sigma\)-function does not suffer from divergences and that, for our states, for two-party and three-party systems having bipartite, W and GHZ type entanglement, the generic behavior remains the same. We point out that the definition of this function is partly motivated by the well known \(c\)-function in terms of the entanglement entropy proposed and further studied in [31, 32, 33, 34, 35]. Having established the monotonicity of the function, one may ask whether there exists a clear, physically relevant interpretation of the function in relation to the degrees of freedom shared between the two parties. An initial interpretation is that observers with higher accelerations are further away from the origin, covering only a subspace of the observers with lower accelerations, and should therefore be associated with fewer degrees of freedom. 
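The positivity of \(\sigma(T)\) can be probed with a minimal numerical sketch. This is an illustration, not taken from the paper: it assumes the standard single-mode fermionic Unruh expansion for Bob's mode of the maximally entangled Bell state (the paper's eqs. (2.5)-(2.7) are not reproduced in this excerpt), maps the Unruh temperature to the acceleration via eq. (5.1), and evaluates \(\sigma(T)\) of eq. (5.2) by a central finite difference in \(1/T\):

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def reflected_entropy(rho, dA=2, dB=2):
    w, V = np.linalg.eigh(rho)
    sqrt_rho = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    psi = sqrt_rho.reshape(dA, dB, dA, dB)
    rho_AAs = np.einsum('abcd,ebfd->acef', psi, psi.conj()).reshape(dA * dA, dA * dA)
    return entropy(rho_AAs)

def rho_AB_bell(r, alpha=1/np.sqrt(2)):
    """rho_AB of the Bell state with Bob's mode in Rindler wedge I (assumed convention)."""
    c, s, q = np.cos(r), np.sin(r), np.sqrt(1 - alpha**2)
    rho = np.zeros((4, 4))
    rho[0, 0] = alpha**2 * c**2          # |00><00|
    rho[1, 1] = alpha**2 * s**2          # |01><01|
    rho[3, 3] = 1 - alpha**2             # |11><11|
    rho[0, 3] = rho[3, 0] = alpha * c * q  # coherence |00><11| + h.c.
    return rho

def s_r_of_beta(beta, omega=1.0):
    r = np.arctan(np.exp(-omega * beta / 2))  # eq. (5.1) with beta = 1/T
    return reflected_entropy(rho_AB_bell(r))

def sigma(T, omega=1.0, d=1e-4):
    """sigma(T), eq. (5.2), via a central finite difference in 1/T."""
    beta = 1.0 / T
    return (s_r_of_beta(beta + d, omega) - s_r_of_beta(beta - d, omega)) / (2 * d * omega)

for T in (0.5, 1.0, 2.0):
    print(f"T={T}:  S_R={s_r_of_beta(1/T):.4f}  sigma={sigma(T):.4f}")
```

In this sketch \(S_{R}(A:B)\) decreases with \(T\) (equivalently, increases with \(\beta=1/T\)), so \(\sigma(T)>0\), consistent with the behavior described above.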
This is worth studying in even more complex setups, in order to obtain a more solid interpretation. ## 6 Summary and discussion In this paper we have investigated the behavior of the reflected entropy between two modes of free fermionic fields in a non-inertial frame from the perspective of two relatively accelerated observers: Alice and Bob for the bipartite system described by the Bell state, with Charlie added for the tripartite systems represented by the W and GHZ states. We confirm that for our 3-qubit and 4-qubit states, the Renyi reflected entropy is monotonic under partial trace, allowing us to use the reflected entropy as a legitimate measure of correlation. This is an essential check, since recent developments raised concerns about the generic validity and applicability of the reflected entropy as a correlation measure in quantum information theory [17], by pointing out the existence of a fine-tuned state that violates the desirable monotonicity. In fact, we extend these developments by showing that such fine-tuned states can exist in higher-dimensional Hilbert spaces, and we explicitly present a class of such states. Nevertheless, returning to our setup and the states used in this work, we confirm that the reflected entropy does reduce under partial tracing of the degrees of freedom. We show that the reflected entropy between Alice and Bob degrades with acceleration due to the Unruh effect, culminating in a non-vanishing minimum value. We also computed the reflected entropy between Alice and anti-Bob (who is causally separated from the observer Bob in region I), and between Bob and anti-Bob. We find that the reflected entropy increases monotonically with acceleration in these two cases. Furthermore, we explored the Markov gap, which is a measure of tripartite entanglement, between all three pairs Alice-Bob, Alice-anti-Bob, and Bob-anti-Bob. 
We find that the Markov gap increases monotonically with acceleration in all three scenarios for the Bell and GHZ states, whereas for the W-state it declines for Alice-Bob but grows for Alice-anti-Bob and Bob-anti-Bob. For the Bell and GHZ states the Markov gap vanishes at zero acceleration. We have argued that acceleration causes tripartite entanglement in the system for all three states in consideration, as evidenced by the non-zero value of the Markov gap at finite and even infinite acceleration in figs. 6(a) to 6(c). This observation suggests that the Markov gap could be used to characterize the three-body correlations encoded in tripartite states, complementing other measures in the literature. We have suggested a dimensionless \(\sigma\)-function of the reflected entropy for a fixed mode frequency which preserves monotonicity with increasing temperature. Due to the character of the reflected entropy, this specific function is free from any divergences. The function always converges to definite values as \(T\to 0\) and \(T\rightarrow\infty\). We suggest the possibility that this function contains information about the effective degrees of freedom or the shared correlation between two parties. As a future direction, it would be interesting to ask what happens if Alice and Bob both accelerate simultaneously with different rates of acceleration. Intuitively, one could expect the reflected entropy between Alice and Bob to decrease further, eventually reaching a non-zero value in the infinite acceleration limit. Another interesting path for future research along this line is to address the same question for black hole spacetimes. Besides, it would be interesting to check the generalized properties of the \(\sigma\)-function independent of the choice of states. ## 7 Acknowledgment The authors are grateful to V. Malvimat for useful discussions. D.G. 
would like to thank the Department of Theoretical Physics of CERN for hospitality during the final stages of this work. The research work of JKB is supported by the National Science and Technology Council of Taiwan with the grant 112-2636-M-110-006. The research work of D.G. is supported by the National Science and Technology Council (NSTC) of Taiwan with the Young Scholar Columbus Fellowship grant 112-2636-M-110-006. The research work of SM and WW is supported in part by Taiwan's Ministry of Science and Technology (109-2112-M-033-005-MY3) and the National Center for Theoretical Sciences (NCTS). ## Appendix A The Density Matrices of the Bell state The density matrices \(\rho_{AB}^{(B)}\), \(\rho_{A\bar{B}}^{(B)}\), and \(\rho_{B\bar{B}}^{(B)}\) for the Bell state have been given in section 2.1. Using the basis \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\), the matrices \(\rho_{AA^{*}}=\mathrm{Tr}_{BB^{*}}(|\sqrt{\rho_{AB}}\rangle\langle \sqrt{\rho_{AB}}|)\), \(\bar{\rho}_{AA^{*}}=\mathrm{Tr}_{\bar{B}\bar{B}^{*}}(|\sqrt{\rho_{A\bar{B}}}\rangle \langle\sqrt{\rho_{A\bar{B}}}|)\) and \(\rho_{BB^{*}}=\mathrm{Tr}_{\bar{B}\bar{B}^{*}}(|\sqrt{\rho_{B\bar{B}}}\rangle\langle \sqrt{\rho_{B\bar{B}}}|)\) are given as follows \[\left(\begin{array}{cccc}\frac{\alpha^{2}\left(\left(2\alpha^{2}-1\right)\cos 2r+1 \right)}{-\alpha^{2}+\alpha^{2}\cos 2r+2}&0&0&-\frac{\sqrt{2}\,\alpha^{2}\left( \alpha^{2}-1\right)\sin^{2}r}{\sqrt{\alpha^{2}\sin^{2}r}\left(-\alpha^{2}+ \alpha^{2}\cos 2r+2\right)}\\ 0&-\frac{2\alpha^{2}\left(\alpha^{2}-1\right)\cos^{2}r}{-\alpha^{2}+\alpha^{ 2}\cos 2r+2}&0&0\\ 0&0&-\frac{2\alpha^{2}\left(\alpha^{2}-1\right)\cos^{2}r}{-\alpha^{2}+\alpha ^{2}\cos 2r+2}&0\\ -\frac{\sqrt{2}\,\alpha^{2}\left(\alpha^{2}-1\right)\sin^{2}r}{\sqrt{\alpha^{2 }\sin^{2}r}\left(-\alpha^{2}+\alpha^{2}\cos 2r+2\right)}&0&0&\frac{2\left( \alpha^{2}-1\right)^{2}}{-\alpha^{2}+\alpha^{2}\cos 2r+2}\end{array}\right),\] (A.1) 
\[\left(\begin{array}{cccc}\frac{\alpha^{2}\left(\left(2\alpha^{2}-1\right)\cos(2r)-1 \right)}{\alpha^{2}+\alpha^{2}\cos(2r)-2}&0&0&-\frac{\alpha\left(\alpha^{2}-1 \right)\cos(r)}{\sqrt{1-\alpha^{2}\cos^{2}(r)}}\\ 0&\frac{2\alpha^{2}\left(\alpha^{2}-1\right)\sin^{2}(r)}{\alpha^{2}+\alpha^{2 }\cos(2r)-2}&0&0\\ 0&0&\frac{2\alpha^{2}\left(\alpha^{2}-1\right)\sin^{2}(r)}{\alpha^{2}+\alpha^ {2}\cos(2r)-2}&0\\ -\frac{\alpha\left(\alpha^{2}-1\right)\cos(r)}{\sqrt{1-\alpha^{2}\cos^{2}(r)} }&0&0&-\frac{2\left(\alpha^{2}-1\right)^{2}}{\alpha^{2}+\alpha^{2}\cos(2r)-2} \end{array}\right),\] (A.2) \[\left(\begin{array}{cccc}\alpha^{2}\cos^{4}(r)&0&0&\alpha\sqrt{1-\alpha^{2}} \cos^{2}(r)\\ 0&\alpha^{2}\sin^{2}(r)\cos^{2}(r)&0&0\\ 0&0&\alpha^{2}\sin^{2}(r)\cos^{2}(r)&0\\ \alpha\sqrt{1-\alpha^{2}}\cos^{2}(r)&0&0&-\alpha^{2}+\alpha^{2}\sin^{4}(r)+1\end{array}\right).\] (A.3) The reflected entropies \(S_{R}(A:B)\), \(S_{R}(A:\bar{B})\), and \(S_{R}(B:\bar{B})\) may be obtained by employing eq. (3.7) together with the information above. The expressions of the density matrices \(\rho_{AA^{\ast}}\), \(\bar{\rho}_{AA^{\ast}}\) and \(\rho_{BB^{\ast}}\) for the W and GHZ states are lengthy and are not included here for presentation reasons. ## Appendix B Polygamy inequality To check the polygamy inequality eq. (3.9), we construct \(S_{R}(A:B)+S_{R}(A:\bar{B})-S_{R}(A:B\bar{B})\) for the Bell state, and \(S_{R}(A:B)+S_{R}(A:\bar{B}C)-S_{R}(A:B\bar{B}C)\) for the W and GHZ states, at fixed \(\alpha\), and plot these in figs. 8(a) to 8(c). We notice that for the Bell and GHZ states the combination increases monotonically with growing \(r\) and remains positive for all values of \(r\), thus satisfying the polygamy inequality. Unlike the Bell and GHZ states, for the W-state it decreases monotonically with \(r\) from a maximum value at \(r=0\), although it still satisfies the polygamy inequality as it remains positive for all \(r\). 
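The Bell-state case of the polygamy inequality can be spot-checked numerically. The sketch below is illustrative and assumes the standard single-mode fermionic Unruh expansion for Bob's mode (the paper's eqs. (2.5)-(2.7) are not reproduced in this excerpt); for the pure tripartite state \(AB\bar{B}\), the right-hand side satisfies \(S_{R}(A:B\bar{B})=2S_{A}\):

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def reflected_entropy(rho, dA, dB):
    w, V = np.linalg.eigh(rho)
    sqrt_rho = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    psi = sqrt_rho.reshape(dA, dB, dA, dB)
    rho_AAs = np.einsum('abcd,ebfd->acef', psi, psi.conj()).reshape(dA * dA, dA * dA)
    return entropy(rho_AAs)

def bell_unruh(r, alpha=1/np.sqrt(2)):
    """Pure state on A (Alice), B (wedge I) and anti-B (wedge II); assumed convention."""
    psi = np.zeros((2, 2, 2))
    psi[0, 0, 0] = alpha * np.cos(r)
    psi[0, 1, 1] = alpha * np.sin(r)
    psi[1, 1, 0] = np.sqrt(1 - alpha**2)
    return psi

def marginal(psi, keep):
    """Density matrix of the parties in `keep` (ordered subset of {0,1,2})."""
    traced = [k for k in range(3) if k not in keep]
    p = np.moveaxis(psi, keep + traced, range(3)).reshape(2 ** len(keep), -1)
    return p @ p.conj().T

for r in (0.0, np.pi / 8, np.pi / 4):
    psi = bell_unruh(r)
    lhs = (reflected_entropy(marginal(psi, [0, 1]), 2, 2)
           + reflected_entropy(marginal(psi, [0, 2]), 2, 2))
    rhs = reflected_entropy(marginal(psi, [0, 1, 2]), 2, 4)  # pure state: = 2 S_A
    print(f"r={r:.3f}: S_R(A:B)+S_R(A:Bbar) - S_R(A:BBbar) = {lhs - rhs:.4f}")
```

At \(r=0\) the inequality is saturated (\(2+0\geq 2\)), and it holds with strict inequality at finite acceleration.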
## Appendix C Monotonicity of Reflected Entropy In this section we show some representative plots of the monotonicity of the reflected entropy by depicting \(S_{R}^{(\xi)}(A:B\bar{B})-S_{R}^{(\xi)}(A:B)\) as a function of the Renyi index \(\xi\) for the Bell, W and GHZ states. We show that \(S_{R}^{(\xi)}(A:B\bar{B})-S_{R}^{(\xi)}(A:B)\) is always positive for any value of \(\xi\), which indicates that the reflected entropy (Renyi index \(\xi=1\)) is a valid correlation measure for the systems under question. We have considered all the possible configurations of the parties to check the monotonicity, of which only three representatives are presented in fig. 9. Figure 8: Polygamy inequality as a function of acceleration. Nevertheless, elaborating on the discussion in the main text regarding the existence of other quantum states violating monotonicity of the reflected entropy under partial trace, we generalize them to higher-dimensional Hilbert spaces. The violation depends on the ratio \(p=\frac{a}{b}\), which changes with the dimension of the Hilbert space. Such a generic state in \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}=\mathbb{C}^{n+1} \otimes\mathbb{C}^{m+1}\otimes\mathbb{C}^{2}\) can be suggested to be \[\rho_{ABC}=\frac{1}{2na+2(m-1)b}\Big{[}a|000\rangle\langle 000|+a|110\rangle \langle 110|+\sum_{k=2}^{n}a\Big{(}|k00\rangle\langle k00|+|k10\rangle\langle k10| \Big{)}+\sum_{k=2}^{m}b\Big{(}|0k0\rangle\langle 0k0|+|1k1\rangle\langle 1k1| \Big{)}\Big{]}\,\] (C.1) where \(n,m\geq 2\). Considering \(n=m=2\), we get the states given in [17]. The state presented in eq. (3.11) can be reproduced by taking \(n=3\) and \(m=2\) in eq. (C.1). We expect that for arbitrary values of \(m\) and \(n\), the plots of \(S_{R}^{\xi}(A:BC)-S_{R}^{\xi}(A:B)\) with respect to the Renyi index and \(p\) are similar to those presented in figs. 2(a) and 2(b). The generic state in eq. 
(C.1) represents the class of states showing the non-monotonicity of the reflected entropy. It would be interesting to study the characteristics of these states in detail compared to the states that respect the monotonicity under partial tracing.
2309.04527
Deformed Fredkin model for the $ν{=}5/2$ Moore-Read state on thin cylinders
We propose a frustration-free model for the Moore-Read quantum Hall state on sufficiently thin cylinders with circumferences $\lesssim 7$ magnetic lengths. While the Moore-Read Hamiltonian involves complicated long-range interactions between triplets of electrons in a Landau level, our effective model is a simpler one-dimensional chain of qubits with deformed Fredkin gates. We show that the ground state of the Fredkin model has high overlap with the Moore-Read wave function and accurately reproduces the latter's entanglement properties. Moreover, we demonstrate that the model captures the dynamical response of the Moore-Read state to a geometric quench, induced by suddenly changing the anisotropy of the system. We elucidate the underlying mechanism of the quench dynamics and show that it coincides with the linearized bimetric field theory. The minimal model introduced here can be directly implemented as a first step towards quantum simulation of the Moore-Read state, as we demonstrate by deriving an efficient circuit approximation to the ground state and implementing it on IBM quantum processor.
Cristian Voinea, Songyang Pu, Ammar Kirmani, Pouyan Ghaemi, Armin Rahmani, Zlatko Papić
2023-09-08T18:00:03Z
http://arxiv.org/abs/2309.04527v1
# Deformed Fredkin model for the \(\nu\)=\(5/2\) Moore-Read state on thin cylinders ###### Abstract We propose a frustration-free model for the Moore-Read quantum Hall state on sufficiently thin cylinders with circumferences \(\lesssim 7\) magnetic lengths. While the Moore-Read Hamiltonian involves complicated long-range interactions between triplets of electrons in a Landau level, our effective model is a simpler one-dimensional chain of qubits with deformed Fredkin gates. We show that the ground state of the Fredkin model has high overlap with the Moore-Read wave function and accurately reproduces the latter's entanglement properties. Moreover, we demonstrate that the model captures the dynamical response of the Moore-Read state to a geometric quench, induced by suddenly changing the anisotropy of the system. We elucidate the underlying mechanism of the quench dynamics and show that it coincides with the linearized bimetric field theory. The minimal model introduced here can be directly implemented as a first step towards quantum simulation of the Moore-Read state, as we demonstrate by deriving an efficient circuit approximation to the ground state and implementing it on IBM quantum processor. ## I Introduction The enigmatic fractional quantum Hall (FQH) state observed at filling fraction \(\nu\)=5/2 [1] stands out as a rare example of an even-denominator state among the majority of odd-denominator states described by the Laughlin wave functions [2] and composite fermion theory [3]. One of the leading theoretical explanations of the \(\nu\)=5/2 state is based on the Moore-Read (MR) variational wave function [4]. Two unique properties of the MR state are worth highlighting: (i) it represents a \(p\)-wave superconductor of composite fermions [5]; (ii) its elementary charge excitations behave like Ising anyons, i.e., they carry charge \(e/4\) and exhibit non-Abelian braiding statistics [4; 6]. 
The latter has motivated the use of MR state as a potential resource for topological quantum computation [7], whereby quantum information is encoded in the collective states of MR anyons and quantum gates are executed by braiding the anyons. Such operations would be protected by the topological FQH gap, avoiding the costly quantum error correction. On the fundamental side, the understanding of particle-hole symmetry and collective excitations in the \(\nu\)=5/2 state has recently generated a flurry of interest. While the numerics [8; 9] provided initial support of the MR wave function capturing the physical ground state at \(\nu\)=5/2, it has been realized that preserving (or breaking) particle-hole (PH) symmetry can lead to distinct phases of matter. For example, by PH-conjugating the MR wave function one obtains a distinct state known as the "anti-Pfaffian" state [10; 11], while enforcing the PH symmetry leads to another, PH-symmetric Pfaffian state ("PH-Pf") [12]. Understanding the relation of these states with the MR state in light of physical PH symmetry breaking effects, such as Landau level mixing [13; 14; 15] remains an important task for reconciling numerics [16; 17; 18; 19] with experiment [20; 21]. On the other hand, collective excitations of the \(\nu\)=5/2 state have also attracted much attention. The pairing in the MR state mentioned above gives rise to an additional collective mode - the unpaired "neutral fermion" mode - which has been "seen" in the numerical simulations [22; 23; 24], but so far not detected in experiment. The gap of the neutral fermion mode is of direct importance for topological quantum computation, as the former can be excited in the process of fusion of two elementary anyons. Recently, Ref. [25] proposed a description of the neutral fermion mode based on an emergent "supersymmetry" with the more conventional, bosonic density-wave excitation [26; 27]. The numerics in Ref. 
[28] suggests that supersymmetry can indeed emerge in a realistic microscopic model of \(\nu\)=5/2. Figure 1: (a)-(b): Two types of 3-electron scattering processes present in the Moore-Read Hamiltonian. The cylinder circumference, \(L_{2}\), controls the spacing \(2\pi\ell_{B}^{2}/L_{2}\) between Landau level orbitals (dashed lines). (c)-(d): Sending \(L_{2}/\ell_{B}\)\(\rightarrow\)0 suppresses the longer-range hopping (d) compared to the one in (c). It will be shown that (c), where one electron is fixed while the other two electrons hop between the nearest-neighbor orbitals, maps to a controlled-SWAP (Fredkin) gate. In this paper we develop a framework for studying the MR state in a quasi-one-dimensional limit, obtained by placing the FQH fluid on a stretched cylinder or torus whose lengths in the two directions obey \(L_{2}{\ll}L_{1}\). This so-called "thin-torus" limit has been fruitful in gaining understanding of the structure of many FQH ground states and their excitations [29; 30; 31; 32; 33; 34; 35; 36]. The thin-torus limit provides a natural classical "cartoon" of the complicated physics in the two-dimensional limit: the off-diagonal matrix elements of the Hamiltonian describing a FQH state become strongly suppressed \({\sim}\exp[-(2\pi\ell_{B}/L_{2})^{2}]\) in the limit \(L_{2}/\ell_{B}{\to}0\), allowing for considerable simplifications of the problem - see Fig. 1 for an illustration. However, there have been comparatively few studies of the MR state near the thin-torus limit. Most previous works [37; 38; 39; 40] focused on the "extreme" thin-torus limit, also known as the Tao-Thouless limit [29], where the Hamiltonian is reduced to purely classical electrostatic repulsion. It is therefore important to develop an analytically-tractable model for the MR state _beyond_ the strict Tao-Thouless limit, where some correlated hopping terms are present in the model. For the Laughlin and Jain states, such models were previously formulated in Refs. 
[41; 42; 43; 44] and one of the goals of this paper is to work out an analogous model for the MR state. The intrinsic one-dimensional (1D) structure of such models makes them suitable for implementation on digital quantum computers, as recently shown for the \(\nu{=}1/3\) Laughlin state [45; 46]. The versatility of such devices allows one to probe questions, such as the real-time dynamics following a quench, that are challenging for traditional solid-state experiments [47; 48; 49; 50]. In particular, an implementation on an IBM quantum processor allowed simulation of the "graviton" dynamics induced by deforming the geometry of the Laughlin state [51; 52; 53; 54; 55]. The remainder of this paper is organized as follows. We start by reviewing the parent Hamiltonian of the MR state in Sec. II. We make use of the second-quantization formalism to derive a simplified frustration-free model near the thin-cylinder limit, and we show that its ground state has high overlap with the MR state, with similar entanglement properties. In Sec. III we show that the frustration-free model can be expressed as a deformed Fredkin model for spin-1/2 degrees of freedom. Working in the spin representation, we present an intuitive picture of the ground state of this deformed Fredkin model and derive its matrix-product state (MPS) representation. We also demonstrate that the ground state can be efficiently approximated by a quantum circuit, which we implement on the IBM quantum processor. In Sec. IV we show that the Fredkin model also captures the dynamics of the MR state induced by quenching the anisotropy of the system, and we elucidate the mechanism of this dynamics. Our conclusions are presented in Sec. V, while the Appendices contain technical details of the derivations, further characterizations of the ground state, and a generalization of the Laughlin case in Ref. [41] to a closely-related Motzkin spin chain. 
## II Moore-Read Hamiltonian on a thin cylinder

In this section we formulate a frustration-free model that provides a good approximation of the MR ground state near the thin-cylinder limit. In the infinite 2D plane, the parent Hamiltonian of the MR state is a peculiar interaction potential that penalizes configurations of any three electrons forming a state with relative angular momentum equal to 3 - the smallest possible momentum allowed by the Pauli exclusion principle [56; 57]. At the same time, pairs of electrons do not experience any interaction. The combination of these two effects gives rise to an exotic many-electron state with \(p\)-wave pairing correlations [5]. Concretely, the MR interaction potential can be written in real space using derivatives of delta functions [9]: \[H_{\rm MR}=-\sum_{i<j<k}S_{i,j,k}\left[\nabla_{i}^{4}\nabla_{j}^{2}\delta^{2} \left({\bf r}_{i}-{\bf r}_{j}\right)\delta^{2}\left({\bf r}_{j}-{\bf r}_{k} \right)\right], \tag{1}\] where \(S_{i,j,k}\) is a symmetrizer over the electron indices \(i\), \(j\), \(k\). At filling \(\nu{=}1/2\), the ground state of this Hamiltonian has energy \(E{=}0\) and it is unique (on a disk, sphere or cylinder geometry) or six-fold degenerate on a torus, corresponding exactly to the wave function written down by Moore and Read [4]. The same state was shown to have high overlap with the exact ground state of the Coulomb interaction in the first-excited Landau level [9; 58]. Moreover, the Hamiltonian above also captures the collective excitations of the MR state [22; 23; 24; 59; 60]. Below we first convert the Hamiltonian (1) into a second-quantized form on the cylinder and torus geometries. This form allows us to derive a simplified model for the MR state on a thin cylinder.

### Moore-Read Hamiltonian in second quantization

The singularities in Eq. (1) are naturally regularized by projection to the lowest Landau level (LLL). 
Assuming the Landau gauge \({\bf A}=(0,Bx,0)\), the single-electron wave functions are given by [61] \[\phi_{j}({\bf r})=\frac{1}{\sqrt{L_{2}\sqrt{\pi}\ell_{B}}}e^{i2\pi jy/L_{2}}e^{-(x-2\pi j\ell_{B }^{2}/L_{2})^{2}/2\ell_{B}^{2}}, \tag{2}\] where \(L_{2}\) is the cylinder circumference in the \(y\)-direction and \(\ell_{B}=\sqrt{\hbar c/eB}\) is the magnetic length. The \(j\)th magnetic orbital is therefore exponentially localized (in the \(x\)-direction) around \(2\pi j\ell_{B}^{2}/L_{2}\). For simplicity, unless specified otherwise, below we will work in units \(\ell_{B}{=}1\). The second-quantized representation of the MR Hamiltonian is: \[H_{\rm MR}=\sum_{j_{1},\ldots,j_{6}=0}^{N_{\phi}-1}V_{j_{1}j_{2}j_{3}j_{4}j_{5} j_{6}}\,c_{j_{1}}^{\dagger}c_{j_{2}}^{\dagger}c_{j_{3}}^{\dagger}c_{j_{4}}c_{j_{5}}c_{ j_{6}}, \tag{3}\] where the operators \(c_{j}^{\dagger}\), \(c_{j}\) create or destroy an electron in the orbital \(\phi_{j}(\mathbf{r})\). The matrix elements are derived by integrating Eq. (1) between the single-electron eigenfunctions (2), which yields \[V_{j_{1}j_{2}j_{3}j_{4}j_{5}j_{6}} =\delta_{j_{1}+j_{2}+j_{3},\,j_{4}+j_{5}+j_{6}}\] \[\times(j_{1}-j_{2})(j_{1}-j_{3})(j_{2}-j_{3})\] \[\times(j_{6}-j_{5})(j_{6}-j_{4})(j_{5}-j_{4})\] \[\times\exp\bigg{[}-\frac{\kappa^{2}}{2g_{11}}\bigg{(}\sum j_{i}^{ 2}-\frac{1}{6}\big{(}\sum j_{i}\big{)}^{2}\] \[+ig_{12}\big{(}j_{1}^{2}+j_{2}^{2}+j_{3}^{2}-j_{4}^{2}-j_{5}^{2}- j_{6}^{2}\big{)}\bigg{)}\bigg{]}, \tag{4}\] where we have dropped the overall normalization constant for simplicity. The magnitude of the matrix element is controlled by the cylinder circumference \(L_{2}\) in units of the magnetic length \(\ell_{B}\), which defines the parameter \(\kappa\)=\(2\pi\ell_{B}/L_{2}\). We have derived the matrix elements by assuming a general anisotropic band-mass tensor \(g_{ab}\), which will be relevant for the discussion of the geometric quench in Sec. IV. 
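As an aside, the matrix element of Eq. (4) is simple to evaluate numerically. The following sketch (not from the paper; plain Python/NumPy, with the same arbitrary overall normalization as in the text) also makes the antisymmetry under electron exchange explicit:

```python
import numpy as np

def V_mr(j, kappa, g11=1.0, g12=0.0):
    """Matrix element V_{j1...j6} of Eq. (4), up to an overall normalization.
    j = (j1, ..., j6); returns 0 unless momentum is conserved."""
    j1, j2, j3, j4, j5, j6 = j
    if j1 + j2 + j3 != j4 + j5 + j6:
        return 0.0
    # antisymmetric prefactors for the two electron triplets
    pref = (j1 - j2) * (j1 - j3) * (j2 - j3) * (j6 - j5) * (j6 - j4) * (j5 - j4)
    s2 = sum(x * x for x in j)
    s1 = sum(j)
    phase = 1j * g12 * (j1**2 + j2**2 + j3**2 - j4**2 - j5**2 - j6**2)
    return pref * np.exp(-kappa**2 / (2 * g11) * ((s2 - s1**2 / 6) + phase))

# V_{012210} = 2^2 exp(-2 kappa^2): the "111" penalty of Eq. (6)
print(V_mr((0, 1, 2, 2, 1, 0), 1.0))
# exchanging two electrons (j1 <-> j2) flips the sign
print(V_mr((1, 0, 2, 2, 1, 0), 1.0))
```

Note that `kappa` here plays the role of \(\kappa=2\pi\ell_B/L_2\); the value 1.0 in the usage lines is arbitrary.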
Note that the matrix element \(V_{j_{1}\cdots j_{6}}\) is properly antisymmetric, resulting in a minus sign when any two electrons are exchanged, hence the limits in the sum in Eq. (3) can be restricted to \(j_{1}\)\(>\)\(j_{2}\)\(>\)\(j_{3}\), \(j_{6}\)\(>\)\(j_{5}\)\(>\)\(j_{4}\) without loss of generality. The delta function in Eq. (4) encodes momentum conservation during a scattering process, hence one of the indices, e.g., \(j_{6}\), can be eliminated in terms of \(j_{1},\ldots,j_{5}\). A few comments are in order. We have denoted by the integer \(N_{\phi}\) the number of magnetic orbitals. For a cylinder, \(N_{\phi}=2N_{e}\)\(-\)\(2\), where \(N_{e}\) is the number of electrons. The offset \(-2\) is a geometric feature of the MR state called the Wen-Zee shift (Wen and Zee, 1965). The total area of the fluid of dimensions \(L_{1}\times L_{2}\) must be quantized in any FQH state (Wen and Zee, 1965), thus we take the thin-cylinder limit according to \[L_{2}/\ell_{B}\to 0,\ \ \text{such that}\ \ \ L_{1}L_{2}=2\pi\ell_{B}^{2}N_{\phi}, \tag{5}\] which ensures that the number of orbitals, and hence the filling factor, remains constant. Although we will focus on the cylinder geometry in this paper, we note that the same Hamiltonian, Eqs. (3)-(4), can also be used on a torus, with a few modifications. On a torus, the shift vanishes and \(N_{\phi}=2N_{e}\). However, because of the periodicity in both the \(x\) and \(y\) directions, the momentum is only defined modulo \(N_{\phi}\). This means that the momentum conservation takes the form \(j_{1}+j_{2}+j_{3}=j_{4}+j_{5}+j_{6}\) (mod \(N_{\phi}\)). Moreover, the matrix element (4) must be explicitly periodized to make it compatible with the torus boundary condition, which can be done by replacing \(j_{i}\)\(\rightarrow\)\(j_{i}\)+\(k_{i}N_{\phi}\) and summing over \(k_{i}\). 
The derivation of the effective Hamiltonian in the thin-cylinder limit proceeds by noting that, in the limit of \(\kappa\)\(\gg\)\(1\) (equivalently, \(L_{2}\)\(\ll\)\(\ell_{B}\)), there is a natural hierarchy of matrix elements (4), which are separated by different powers of \(\exp(-\kappa^{2})\) (Kane and Meissner, 1965; Wen and Zee, 1965). Below we list the first few relevant terms in decreasing order: \[2^{2}e^{-2\kappa^{2}}\,n_{p+1}n_{p}n_{p-1};\ \ \ 111 \tag{6}\] \[2^{2}3^{2}e^{-14\kappa^{2}/3}\,n_{p+2}n_{p+1}n_{p-1};\ \ \ 1011 \tag{7}\] \[2^{5}e^{-5\kappa^{2}}\,c_{p-1}^{\dagger}c_{p}^{\dagger}c_{p+1}^{\dagger}c_{p+2}c_{p}c_{p-2};\ \ \ 10101\to 01110 \tag{8}\] \[2^{3}3^{2}e^{-20\kappa^{2}/3}\,c_{p}^{\dagger}c_{p+1}^{\dagger}c_{p+4}^{\dagger}c_{p+3}c_{p+2}c_{p};\ \ \ 10110\to 11001 \tag{9}\] \[2^{8}e^{-8\kappa^{2}}\,n_{p+2}n_{p}n_{p-2};\ \ \ 10101 \tag{10}\] \[2^{3}5e^{-8\kappa^{2}}\,c_{p-2}^{\dagger}c_{p+2}^{\dagger}c_{p+3}^{\dagger}c_{p+2}c_{p+1}c_{p};\ \ \ 001110\to 100011 \tag{11}\] \[2^{4}3^{2}e^{-26\kappa^{2}/3}\,n_{p+4}n_{p+3}n_{p};\ \ \ 10011 \tag{12}\] \[2^{2}3^{2}5e^{-26\kappa^{2}/3}\,c_{p-1}^{\dagger}c_{p}^{\dagger}c_{p+2}^{\dagger}c_{p+3}c_{p}c_{p-2};\ \ \ 101001\to 011010. \tag{13}\] We have included a binary mnemonic to represent the type of process generated by each Hamiltonian term. A single pattern, e.g., 1011, represents a diagonal term in the Hamiltonian which assigns an energy penalty to the given local pattern of occupation numbers anywhere in the system. The terms containing an arrow, such as 10110\(\rightarrow\)11001, can be visualized as correlated hopping processes, Fig. 1. In such cases, the Hermitian conjugates of the processes, corresponding to reflected hoppings with the same amplitude, are implied.

### Tao-Thouless limit

The "extreme" thin-torus limit, also known as the Tao-Thouless limit, of the MR Hamiltonian was discussed in Refs. Kane and Meissner (1965); Wen and Zee (1965). In this limit, the only terms that survive are Eqs. 
(6) and (7), giving an energy penalty to the configurations \(\ldots 1111\ldots\) and \(\ldots 1011\ldots\). Hence, the ground states at filling \(\nu\)=\(1/2\) (with zero energy) are \[\ldots 110011001100\ldots\ \ \text{and}\ \ \ldots 1010101010\ldots, \tag{14}\] while any other Fock state will be higher in energy by at least an amount \(\sim\exp(-14\kappa^{2}/3)\), see Eq. (7). This gives the expected 6-fold ground state degeneracy of the MR state on the torus (Wen and Zee, 1965), since the first state in Eq. (14) is 4-fold and the second is 2-fold degenerate under translations. These ground states have different momenta on the torus, so they live in different sectors of the Hilbert space (footnote 1). Similarly, the ground states in Eq. (14) are the _densest_ zero-energy states one can construct, as increasing the filling factor would necessarily violate the terms in Eqs. (6)-(7). On the other hand, _decreasing_ the filling factor is allowed, leading to many more \(E\)=0 states. These correspond to quasihole excitations and can be interpreted as domain walls between two different types of ground state patterns in Eq. (14), see Refs. Kane and Meissner (1965); Wen and Zee (1965).

Footnote 1: The even-odd degeneracy of the MR state is also discussed in Ref. Wen and Zee (1965).

On a finite cylinder, the densest zero-energy ground state is found instead at \(N_{\phi}\)=\(2N_{e}\)\(-\)\(2\), as expected from the Wen-Zee shift. This coincides with the root partition of the Jack polynomial corresponding to the MR state (Wen and Zee, 1965), \(11001100\ldots 0011\), with 11 at each boundary. The other torus root state can be similarly adapted to a finite cylinder according to \(1010\ldots 101\). However, this requires an extra orbital, since the flux is now \(N_{\phi}\)=\(2N_{e}-1\). Thus, the second type of torus ground state becomes an excited state on a cylinder. In both cylinder and torus geometries, the Tao-Thouless ground states are trivial product states without any entanglement. 
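The prefactors and decay exponents quoted in Eqs. (6)-(13) can be cross-checked directly against the structure of Eq. (4). A minimal sketch (not from the paper; index patterns evaluated at \(p=1\) or \(p=0\), with \(g_{12}=0\) and \(g_{11}=1\)):

```python
def amp_and_exponent(j):
    """Prefactor |pref| and exponent a such that |V_j| = |pref| * exp(-a * kappa^2),
    read off from Eq. (4) for a momentum-conserving index set j = (j1, ..., j6)."""
    j1, j2, j3, j4, j5, j6 = j
    assert j1 + j2 + j3 == j4 + j5 + j6          # momentum conservation
    pref = (j1 - j2) * (j1 - j3) * (j2 - j3) * (j6 - j5) * (j6 - j4) * (j5 - j4)
    a = (sum(x * x for x in j) - sum(j) ** 2 / 6) / 2
    return abs(pref), a

# diagonal term of Eq. (6): n_{p+1} n_p n_{p-1} at p = 1  ->  2^2 e^{-2 kappa^2}
print(amp_and_exponent((2, 1, 0, 0, 1, 2)))
# diagonal term of Eq. (7): n_{p+2} n_{p+1} n_{p-1} at p = 1  ->  2^2 3^2 e^{-14 kappa^2 / 3}
print(amp_and_exponent((3, 2, 0, 0, 2, 3)))
# hopping of Eq. (9): c+_p c+_{p+1} c+_{p+4} c_{p+3} c_{p+2} c_p at p = 0  ->  2^3 3^2 e^{-20 kappa^2 / 3}
print(amp_and_exponent((0, 1, 4, 3, 2, 0)))
```

The same function reproduces the remaining entries of the list in the same way.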
We next discuss how to go beyond the extreme thin-cylinder limit and generate an entangled ground state.

### Frustration-free model beyond the Tao-Thouless limit

Beyond the extreme limit discussed above, we would like to retain a few more terms, with smaller powers of \(\exp(-\kappa^{2})\), and thereby generate a more accurate approximation of the MR state over a slightly larger range of \(L_{2}\). A natural way to do this would be to choose a magnitude cutoff and keep only the Hamiltonian matrix elements that are larger than this cutoff. However, we would also like to be able to analytically solve for the ground state of the resulting truncated Hamiltonian. In this sense, it is natural to look for a Hamiltonian which is frustration-free, i.e., has a ground state that is simultaneously annihilated by all individual terms in the Hamiltonian. In such cases it is often possible to find analytically exact ground states even though the Hamiltonian overall may not be solvable, e.g., as in the case of the Affleck-Kennedy-Lieb-Tasaki (AKLT) model [67]. Unfortunately, the program outlined above fails for our 3-body Hamiltonian: keeping the terms in the order they are listed in Eqs. (6)-(13) does not result in a positive semi-definite operator. This can be seen by considering the first two correlated hoppings, Eqs. (8) and (9). One would want to include these hoppings as they naturally act on the two types of ground states in the extreme thin-torus limit, Eq. (14), and would create some entanglement. However, the "dressed" ground states would no longer be zero modes and their degeneracy would be lifted. Inspired by the Laughlin construction [41], one could try to remedy this by including the terms Eqs. (10) and (12) to create a sum of _two_ positive semi-definite operators. One quickly realizes that the hopping term Eq. (13) now becomes a problem, spoiling the frustration-free property. In Ref. 
[64], an attempt was made to define a frustration-free model for a bosonic MR state by dropping the equivalent of the hopping Eq. (13) (as well as the hopping Eq. (11)). Unfortunately, upon further inspection, we have found the claim in Ref. [64] to be inaccurate because the model proposed there does not yield strictly zero-energy ground states. We now describe the simplest frustration-free truncation of the Hamiltonian in Eqs. (6)-(13) that we have found. We will focus on the cylinder root state \(|R_{0}\rangle=|11001100\ldots 0011\rangle\), which is nondegenerate. In order to obtain a unique "dressed" ground state on the cylinder, we consider the Hamiltonian terms that act nontrivially on this root state. The resulting states will be the first relevant corrections to the ground state. The effective Hamiltonian contains the terms in Eqs. (6), (7), (9) and (12): \[H^{\prime}_{\text{MR}}=\sum_{i=0}^{N_{\phi}-3}A_{i}^{\dagger}A_{i}+\sum_{i=0}^ {N_{\phi}-5}\big{(}B_{i}^{\dagger}B_{i}+C_{i}^{\dagger}C_{i}\big{)}, \tag{15}\] where the operators \(A\), \(B\) and \(C\) are given by \[A_{i} = \alpha c_{i}c_{i+1}c_{i+2}, \tag{16}\] \[B_{i} = \beta c_{i}c_{i+2}c_{i+3}+\gamma c_{i}c_{i+1}c_{i+4}, \tag{17}\] \[C_{i} = \beta c_{i+1}c_{i+2}c_{i+4}+\gamma c_{i}c_{i+3}c_{i+4}. \tag{18}\] For brevity, we have introduced the parameters \[\alpha=\sqrt{V_{012210}},\ \beta=\sqrt{V_{023320}},\ \gamma=e^{i\theta} \sqrt{V_{014410}}\, \tag{19}\] given in terms of the matrix elements (4) and \(\theta=2\kappa^{2}g_{12}/g_{11}\). Amongst the omitted terms, Eqs. (8), (10) and (13) do not act directly on the root state, bringing only subleading contributions. While the term in Eq. (11) can act directly on the extreme root state, its contribution is suppressed in the vicinity of the thin-cylinder limit because its prefactor is much smaller than that of the retained hopping term, Eq. (9). 
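Frustration-freeness of Eq. (15) can be checked by brute force for a very small system. The sketch below (not from the paper; \(N_{\phi}=6\) orbitals, so the root state is 110011 with \(N_e=4\)) builds \(H^{\prime}_{\text{MR}}\) in the full Fock space with Jordan-Wigner signs; the overall scales of \(\alpha\), \(\beta\) are set to 1 for simplicity, since only the ratio \(\gamma/\beta\) of Eq. (23) matters for the structure of the zero mode:

```python
import numpy as np

NPHI = 6                        # orbitals 0..5; root state 110011, N_e = 4
DIM = 1 << NPHI

def annihilate(state, sites):
    """Apply c_{s1} c_{s2} c_{s3} (rightmost first) to a Fock bitmask."""
    sign = 1
    for s in reversed(sites):
        if not (state >> s) & 1:
            return None, 0
        if bin(state & ((1 << s) - 1)).count("1") % 2:   # Jordan-Wigner string
            sign = -sign
        state &= ~(1 << s)
    return state, sign

def monomial(coeff, sites):
    """Dense matrix of coeff * c_{s1} c_{s2} c_{s3} in the full Fock space."""
    M = np.zeros((DIM, DIM))
    for v in range(DIM):
        w, sgn = annihilate(v, sites)
        if w is not None:
            M[w, v] = coeff * sgn
    return M

kappa = 1.0
alpha, beta = 1.0, 1.0                       # overall scales, irrelevant for the kernel
gamma = 2.0 * beta * np.exp(-2 * kappa**2)   # |gamma / beta| from Eq. (23), g12 = 0
tau = -gamma / beta

H = np.zeros((DIM, DIM))
for i in range(NPHI - 2):                    # A_i, Eq. (16)
    M = monomial(alpha, (i, i + 1, i + 2))
    H += M.T @ M
for i in range(NPHI - 4):                    # B_i, C_i, Eqs. (17)-(18); i + 4 must exist
    B = monomial(beta, (i, i + 2, i + 3)) + monomial(gamma, (i, i + 1, i + 4))
    C = monomial(beta, (i + 1, i + 2, i + 4)) + monomial(gamma, (i, i + 3, i + 4))
    H += B.T @ B + C.T @ C

evals, vecs = np.linalg.eigh(H)
ker = vecs[:, evals < 1e-10]                 # zero-energy subspace
root, squeezed = 0b110011, 0b101101          # bit j of the index = orbital j
w = ker @ ker.T[:, root]                     # project |110011> onto the kernel
support = np.flatnonzero(np.abs(w) > 1e-8)
print(sorted(support), w[squeezed] / w[root])
```

For this tiny size the kernel component reached from the root is supported on \(|110011\rangle\) and its single "squeezed" partner \(|101101\rangle\), with amplitude ratio \(\tau\), consistent with the construction of Sec. III.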
Therefore, to a first-order approximation, \(H^{\prime}_{\text{MR}}\) is the correct effective Hamiltonian that captures the departure of the Pfaffian state from the root \(|R_{0}\rangle\) in this geometry (in Sec. IV we will also investigate its dynamical properties to show that the model captures the properties of excited eigenstates). On the torus, our model preserves the 6-fold ground state degeneracy of the MR Hamiltonian. Four of those states, corresponding to the root unit cell 1001 and its translations, will become "dressed", in analogy to the ground state of the cylinder Hamiltonian. The other two will remain inert, i.e., equal to \(|101010\dots\rangle\) and \(|010101\dots\rangle\) for any value of \(L_{2}\) (since the hopping in Eq. (9) cannot produce new configurations). To confirm the validity of the model in Eq. (15) we performed several tests. First, we evaluated the overlap of the model's ground state with the full MR state, i.e., the ground state of the untruncated Hamiltonian. Fig. 2 shows that this overlap is very high close to the thin-cylinder regime, with overlaps of order 95% at \(L_{2}\)\(\approx\)7\(\ell_{B}\). As we are not exactly capturing the full state at a finite value of \(L_{2}\), the overlaps naturally decay with system size (and vanish in the thermodynamic limit). Nevertheless, the fact that they remain very high and weakly dependent on system size in the range \(L_{2}\lesssim 7\ell_{B}\) gives us confidence that the model captures the right physics, as will be further demonstrated below.

Figure 2: Overlaps between the ground state \(|\psi_{\text{MR}}\rangle\) of the full model in Eq. (3) (i.e., the Moore-Read state) and the ground state \(|\psi_{0}\rangle\) of the truncated model in Eq. (15), for different system sizes. For cylinder circumferences up to \(L_{2}\)\(\approx\)7\(\ell_{B}\) (shaded), the overlap is 95% or higher, indicating that this truncation returns a good approximation of the ground state in the thin-cylinder regime. 
An example of a physical quantity that can be meaningfully scaled with system size and is sensitive to both local and nonlocal correlations is the bipartite entanglement entropy, \(S_{A}\). We compute \(S_{A}\) by choosing a bipartition in orbital space, i.e., the subsystem \(A\) contains \(N_{\phi}^{A}\) orbitals and the complementary subsystem \(B\) contains the remaining \(N_{\phi}-N_{\phi}^{A}\) orbitals [68]. Due to the Gaussian localization of the magnetic orbitals (2), this roughly corresponds to the more conventional partitioning of the system in real space [69; 70]. The entanglement entropy is the von Neumann entropy, \(S_{A}=-\mathrm{tr}\rho_{A}\ln\rho_{A}\), of the reduced density matrix \(\rho_{A}=\mathrm{tr}_{B}|\psi\rangle\langle\psi|\) for the (truncated) MR ground state \(|\psi\rangle\). In Fig. 3 we plot \(S_{A}\) as a function of the cylinder circumference, contrasting the full MR state with the ground state of the truncated model (15). The entanglement entropy of the MR state has been shown to scale linearly with the circumference of the cylinder [71; 72], which is the "area law" scaling expected in ground states of gapped systems [73]. We observe that this linear scaling is obeyed also by the truncated model for \(L_{2}\lesssim 7\ell_{B}\). Furthermore, the subleading correction \(\gamma_{\mathrm{top}}\) to the area law, \(S_{A}=cL_{2}-\gamma_{\mathrm{top}}+O(e^{-L_{2}/\xi})\), where \(c\) is a constant and \(\xi\) is the correlation length, is known to be a sensitive indicator of topological order as it arises due to fractionalized anyon excitations [74; 75]. As shown in the inset of Fig. 3, in the range of validity of the truncated model we obtain \(\gamma_{\mathrm{top}}\) within 20% of the theoretically expected value \(\ln\sqrt{8}\) for the MR case [71]. Beyond the area-law regime, the entanglement entropy of the truncated model saturates, illustrating that the model is no longer able to capture the relevant correlations in the full MR state. 
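As a methodological aside, the orbital-cut entropy used here can be computed from any state vector by a singular-value (Schmidt) decomposition. The following short sketch (not from the paper; a two-configuration toy state stands in for the true MR ground state, and we work directly with occupation-basis amplitudes):

```python
import numpy as np

def entanglement_entropy(psi, nphi, na):
    """Von Neumann entropy of the first `na` orbitals, for a state given as
    amplitudes over occupation bitmasks (bit j of the index = orbital j)."""
    M = np.zeros((1 << na, 1 << (nphi - na)), dtype=complex)
    for v, amp in enumerate(psi):
        M[v & ((1 << na) - 1), v >> na] = amp      # row: A-config, col: B-config
    p = np.linalg.svd(M, compute_uv=False) ** 2    # Schmidt weights
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

tau = 0.3
psi = np.zeros(1 << 6, dtype=complex)
psi[0b110011], psi[0b101101] = 1.0, tau            # root plus one squeezed config
psi /= np.linalg.norm(psi)
S = entanglement_entropy(psi, 6, 3)                # cut between orbitals 2 and 3
print(S)
```

For this rank-2 toy state the result reduces to \(-p\ln p-q\ln q\) with \(p=1/(1+\tau^{2})\), since the two configurations differ on both sides of the cut.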
Conversely, near the Tao-Thouless limit \(L_{2}\)\(\lesssim\)\(4\ell_{B}\), there is practically no growth of the entropy with \(L_{2}\), as the ground state remains a product state. In the latter regime, \(\gamma_{\mathrm{top}}\)=0, illustrating that topological order is completely lost since the system is too narrow in the \(y\) direction. Finally, Fig. 3 illustrates the sensitivity of the entanglement scaling with respect to the location of the bipartition. This is due to the cylinder ground state being dominated by the pattern ...11001100.... Given this form of the root state unit cell, there are three distinct types of locations where we could place the partition. If we partition between two occupied orbitals (i.e., \(1|1\)), at first order there will be no correlated hoppings across this boundary, leading to a very slow growth in entanglement, as indeed seen in Fig. 3. Instead, if we partition the system next to an empty orbital (i.e., at \(0|0\) or \(1|0\)), then there will be correlated hopping across the boundary and the two subsystems can get entangled more easily. Note that this sensitivity to the location of the partition is not present in the Laughlin case [41]. Moreover, it is an artefact of being near the thin-cylinder limit, where the ground state still possesses a crystalline-like density modulation, which becomes strongly suppressed in the isotropic 2D limit where the fluid is spatially uniform. Nevertheless, Fig. 3 shows that our truncated model (15) successfully captures all the entanglement features of the full MR state in the regime \(L_{2}\lesssim 7\ell_{B}\). In the next section, we show that the model (15) can be expressed as a well-known spin-1/2 chain model.

Figure 3: Bipartite entanglement entropy \(S_{A}\) of the full MR state [i.e., the ground state of Eq. (3)] and the ground state of the truncated model, Eq. (15), as a function of the cylinder circumference, \(L_{2}\). Data is for \(N_{e}\)=14 electrons and \(N_{\phi}\)=26 magnetic orbitals. 
All types of bipartitions are considered: choosing subsystem \(A\) to contain \(N_{\phi}^{A}\)=11 orbitals, the boundary looks like \(0|0\); it can be seen that the entanglement entropy here starts growing early on. The case \(N_{\phi}^{A}\)=12, corresponding to \(1|0\), behaves similarly. By contrast, choosing \(N_{\phi}^{A}\)=13 corresponds to the bipartition type \(1|1\), where the entropy grows much more slowly. The truncated model accurately captures the behavior of the full model in the range of circumferences \(L_{2}\lesssim 7\ell_{B}\) (shaded). The inset shows the topological entanglement entropy \(\gamma_{\mathrm{top}}\), extracted numerically from \(S_{A}\) using a linear fit over the interval \([5.5\ell_{B},L_{\mathrm{max}}]\). Only the bipartitions \(0|0\) and \(1|0\) were used, as those scale correctly near the Tao-Thouless limit. Restricting ourselves to the range of validity of the truncated model, our \(\gamma_{\mathrm{top}}\) estimate is within 20% of the theoretical value of \(\ln\sqrt{8}\).

## III Mapping to a deformed Fredkin chain

Our effective Hamiltonian (15) is frustration-free and it has an exact zero-energy ground state, which is unique on a cylinder. To write down the ground state wave function and provide its intuitive representation, we map the model (15) to a deformed Fredkin chain [76, 77, 78]. The Fredkin model is a spin-1/2 analog of the Motzkin chain model [79, 80, 81, 82]. As shown in Appendix C, the Motzkin chain allows us to generalize the construction from Ref. [41] and describe the \(\nu\)=1/3 Laughlin state over a larger range of cylinder circumferences.

### Spin mapping

The mapping to spin-1/2 degrees of freedom is possible because we are only interested in the connected component of the ground state, i.e., the Krylov subspace spanned by the states \((H^{\prime}_{\rm MR})^{n}|11001100\ldots 110011\rangle\), for an arbitrary integer \(n\) [83]. 
For our truncated model, the dimension of such a subspace does not exhaust the full Hilbert space dimension, allowing the mapping onto a spin-1/2 chain. This is an example of Hilbert space fragmentation [84] and it has previously been used to perform mappings of 2-body FQH Hamiltonians onto spin models [85]. Thus, without any loss of generality, we can investigate both the ground state and dynamics by restricting to the Krylov subspace built from the Tao-Thouless root pattern. To perform the mapping, we start from the root state \(110011\ldots 0011\) and pad it with one fictitious 0 on each side. This allows us to group the sites into pairs, noticing that the only pairs present are 01 and 10. These are mapped to spins: \[|01\rangle\rightarrow|\!\uparrow\rangle\quad\text{and}\quad|10\rangle \rightarrow|\!\downarrow\rangle. \tag{20}\] Thus, the root state maps to the antiferromagnetic (Néel) state \(|(0)110011\ldots 0011(0)\rangle\rightarrow|\!\uparrow\downarrow\uparrow\downarrow \ldots\uparrow\downarrow\rangle\). Since \(H^{\prime}_{\rm MR}\) acts on a maximum of 5 consecutive sites at once, its equivalent acts on 3 consecutive spins. As discussed below, acting with \(H^{\prime}_{\rm MR}\) on any product state that can be mapped to spins, i.e., one that consists of a sequence of 01 and 10 pairs, generates another product state that can be similarly mapped to spins. An example is presented in Fig. 4, which shows the connected sector of the adjacency graph of \(H^{\prime}_{\rm MR}\) that contains the Néel state. This shows that in this connected component of the Hilbert space, our model (15) is equivalent to a spin-1/2 model. For simplicity, we will denote the number of spins by \(N\), although it should be kept in mind that this is equal to the number of electrons \(N_{e}\). The moves implemented by the \(B\) and \(C\) terms in Eq. 
(15) are \(|\!\uparrow\uparrow\downarrow\rangle\leftrightarrow|\!\uparrow\downarrow\uparrow\rangle\) and \(|\!\uparrow\downarrow\downarrow\rangle\leftrightarrow|\!\downarrow\uparrow\downarrow\rangle\), i.e., they are controlled swaps of spins, or Fredkin gates [86], as illustrated in Fig. 4. Note that the \(A\) term is redundant in our subspace, as this subspace does not contain any \(\ldots 111\ldots\) patterns.

Figure 4: Adjacency graph of the truncated model \(H^{\prime}_{\rm MR}\), Eq. (15), in the connected sector containing the root state (0)11001100110011(0) or, equivalently, \(\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\) in the spin representation (marked in red). The graph vertices are product states that can be reached by repeated application of the Hamiltonian. The edges denote the nonzero matrix elements of the Hamiltonian between the respective vertices. This example is for \(N_{e}\)=8 electrons, where the connected component of the root state contains only 14 out of the total 151 states with the same momentum quantum number. Any basis configurations not present here are dynamically disconnected and cannot be reached from the ground state under the dynamics generated by \(H^{\prime}_{\rm MR}\). The Fredkin moves (swaps with \(\uparrow\) on the first or \(\downarrow\) on the last site acting as a control qubit), as implemented by the Hamiltonian, are also shown in the bottom panels. Each move changes the area of the path by a constant value.

The resulting spin Hamiltonian is a sum of local projectors and can be written as \[H_{F}=\sum_{i=0}^{N-3}\Big(\mathcal{P}_{i}^{\uparrow}\mathcal{P}_{i+1,i+2}^{\varphi(\tau)}+ \mathcal{P}_{i,i+1}^{\varphi(\tau)}\mathcal{P}_{i+2}^{\downarrow}\Big), \tag{21}\] where the single-spin projector \(\mathcal{P}_{i}^{\sigma}\)=\(|\sigma_{i}\rangle\langle\sigma_{i}|\) projects onto a local spin pointing in the direction \(\sigma=\uparrow,\downarrow\). The two-spin projector \(\mathcal{P}_{i,i+1}^{\varphi(\tau)}=|\varphi(\tau)_{i,i+1}\rangle\langle \varphi(\tau)_{i,i+1}|\) projects onto the deformed superposition state \[|\varphi(\tau)_{i,i+1}\rangle = |\!\uparrow_{i}\!\downarrow_{i+1}\rangle-\tau\;|\!\downarrow_{i} \!\uparrow_{i+1}\rangle, \tag{22}\] \[\tau=-\gamma/\beta = -2\exp\left(-2\kappa^{2}\frac{1-ig_{12}}{g_{11}}\right). \tag{23}\] The model (21) is the central result of this section. It can be recognized that this model corresponds to a (colorless) deformed Fredkin chain from Ref. [77]. Note that the boundary Hamiltonian terms from Ref. [77], i.e., \(H_{\partial}=\mathcal{P}_{0}^{\uparrow}+\mathcal{P}_{N-1}^{\downarrow}\), have been omitted because our subspace has the first \(\uparrow\) spin and the last \(\downarrow\) spin frozen. For convenience, we note that \(H_{F}\) can be equivalently expressed in terms of the usual Pauli spin operators: \[H_{\text{F}}=\sum_{j=0}^{N-3}\biggl{(}\bigl{(}\frac{1}{2}\mathds{1}+S_ {j}^{z}\bigr{)}h_{j+1,j+2}(\tau)+h_{j,j+1}(\tau)\bigl{(}\frac{1}{2}\mathds{1}-S_ {j+2}^{z}\bigr{)}\biggr{)}, \tag{24}\] where \[h_{j,j+1}(\tau) =\frac{1+|\tau|^{2}}{4}(\mathds{1}-4S_{j}^{z}S_{j+1}^{z})+\frac{1- |\tau|^{2}}{2}(S_{j}^{z}-S_{j+1}^{z})\] \[-2\text{Re}(\tau)(S_{j}^{x}S_{j+1}^{x}+S_{j}^{y}S_{j+1}^{y})\] \[-2\text{Im}(\tau)(S_{j}^{x}S_{j+1}^{y}-S_{j}^{y}S_{j+1}^{x}). \tag{25}\] Note that outside the connected component of the ground state, the mapping can be extended by defining the additional composite degrees of freedom: \(|11\rangle\to|+\rangle\), \(|00\rangle\to|-\rangle\). In this mapping, the constrained dynamics of the model resembles that of fractonic models in Refs. [87; 88; 89; 90], which can lead to different thermalization properties in different Krylov fragments [83]. 
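The Hamiltonian above is easy to diagonalize for a small chain. The sketch below (not from the paper; \(N=6\) spins, a real negative \(\tau\) as in Eq. (23) with \(g_{12}=0\)) builds \(H_F\) from the projectors of Eq. (21) and verifies that it is positive semi-definite, with the area-weighted superposition of Dyck paths discussed below in Eq. (26) as an exact zero-energy state:

```python
import numpy as np
from itertools import product

N, tau = 6, -0.3                 # 6 spins; tau real and negative, cf. Eq. (23)
t = np.sqrt(complex(tau))        # branch choice only affects a global phase

def dyck_area(conf):
    """Area under the path for conf (tuple of +1 = up, -1 = down),
    or None if conf violates the Dyck condition."""
    h, area = 0, 0
    for s in conf:
        h += s
        if h < 0:
            return None
        area += h
    return area if h == 0 else None

# candidate ground state: weight tau^{A(p)/2} on each Dyck path, cf. Eq. (26)
dim = 2 ** N
psi = np.zeros(dim, dtype=complex)
n_dyck = 0
for conf in product([1, -1], repeat=N):
    a = dyck_area(conf)
    if a is not None:
        n_dyck += 1
        idx = int("".join("0" if s == 1 else "1" for s in conf), 2)
        psi[idx] = t ** a
psi /= np.linalg.norm(psi)

# H_F of Eq. (21), with |phi(tau)> = |ud> - tau |du> (projector left unnormalized)
Pup, Pdn = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
phi = np.array([0, 1, -tau, 0], dtype=complex)      # basis order (uu, ud, du, dd)
Pphi = np.outer(phi, phi.conj())
def embed(op, i):                                   # op acts on sites i, i+1, i+2
    return np.kron(np.kron(np.eye(2 ** i), op), np.eye(2 ** (N - i - 3)))
H = sum(embed(np.kron(Pup, Pphi), i) + embed(np.kron(Pphi, Pdn), i)
        for i in range(N - 2))

print(n_dyck, np.linalg.norm(H @ psi))              # 5 Dyck paths, zero energy
```

The count 5 is the Catalan number \(C_{3}\), i.e., the number of Dyck paths of length 6, and the vanishing norm confirms the frustration-free structure.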
### The ground state

The ground state of the Fredkin chain is a weighted superposition of "Dyck paths" of length \(N\) (the set of which we denote \(\mathcal{D}_{N}\)). These are product state configurations with \(S_{\text{total}}^{z}=0\) and \(\sum_{i=0}^{k}S_{i}^{z}\geq 0\) for all \(k\). The last condition is equivalent to the number of spin \(\uparrow\) sites always being greater than or equal to the number of spin \(\downarrow\) sites, as we go through the chain from left to right. The paths in \(\mathcal{D}_{N}\) can be interpreted graphically, as a "mountain range" where each \(\uparrow\) corresponds to an upward slope, while \(\downarrow\) corresponds to a downward slope. The Dyck constraint is equivalent to the height of these graphs starting and ending at zero, and never becoming negative. The weight of each path (configuration) \(p\) in the ground state \(|\psi_{0}\rangle\) is determined by the area \(A(p)\) under the mountain range: \[|\psi_{0}\rangle=\mathcal{N}^{-1}\sum_{p\in\mathcal{D}_{N}}\tau^{A(p)/2}\,|p \rangle. \tag{26}\] According to the phase diagram of the Fredkin spin model, for \(|\tau|<1\) the entanglement entropy of the ground state is bounded (obeys the area law), whereas \(|\tau|=1\) is a critical point where the scaling becomes logarithmic in system size, \(\sim\log N\). Ref. [76] also discusses the subtleties of the spin model with periodic boundary conditions. For \(|\tau|=1\), the ground state degeneracy scales linearly with \(N\), with zero modes in every \(S_{\text{total}}^{z}\) sector. However, upon decreasing the deformation away from the critical point we find that the extensive degeneracy disappears - only 4 zero modes survive. Two of them are in the sectors \(S_{\text{total}}^{z}=\pm N/2\), corresponding to the inert states \(|\uparrow\rangle^{\otimes N}\) and \(|\downarrow\rangle^{\otimes N}\) (or the Fock states \(|101010\dots\rangle\) and \(|010101\dots\rangle\)). 
The other two are in the sector \(S_{\text{total}}^{z}=0\) and will correspond to the root unit cells \(1001\) and \(0110\); the two remaining translations break our spin mapping but can be obtained by shifting every orbital by one position and then applying the mapping. All other zero modes disappear because the deformed Fredkin ground states are constructed using the Dyck path area as in Eq. (26), which is only well defined when \(S_{\text{total}}^{z}=0\). These results are in agreement with the 6-fold degeneracy found in the fermionic model. With this understanding, we can map back to the fermionic ground state. All Dyck paths can be obtained from the root configuration \(|\uparrow\downarrow\uparrow\dots\downarrow\uparrow\downarrow\rangle\) by exchanging a number of \(\downarrow\) with an equal number of \(\uparrow\) further along the chain. In the fermionic picture, this is equivalent to performing a number of "squeezes", i.e., applications of the operator \[\hat{S}_{k,d}=c_{2k}^{\dagger}c_{2(k+d)-1}^{\dagger}c_{2(k+d)}c_{2k-1}, \tag{27}\] with \(d,k>0\). A similar structure exists in the Laughlin state [45; 46]. The resulting states are in one-to-one correspondence with those in \(\mathcal{D}_{N}\), and we will denote their set by \(\mathcal{D}^{\prime}_{N}\). Every configuration \(s\) in \(\mathcal{D}^{\prime}_{N}\) can be obtained by applying some number \(n(s)\) of squeezes to the root: \[s\in\mathcal{D}^{\prime}_{N}\Longleftrightarrow|s\rangle=\prod_{i=1}^{n(s)} \hat{S}_{k_{i},d_{i}}\,|110011\dots 0011\rangle. \tag{28}\] The weight of such a basis state in the ground state is now determined by the total distance squeezed, \(D(s)=\sum_{i=1}^{n(s)}d_{i}\), which is equivalent to the previous definition (26) expressed in terms of the area under the Dyck path: \[|\psi_{0}\rangle=\mathcal{N}^{-1}\sum_{s\in\mathcal{D}^{\prime}_{N}}\tau^{D(s) /2}\,|s\rangle. \tag{29}\]

### Matrix-product state representation

For the undeformed chain (\(\tau\)=1), Ref. 
[76] introduced a matrix-product state (MPS) representation for its ground state. The associated MPS matrices have bond dimension \(\chi=N/2+1\), where \(N\) is the number of spins: \[M^{\uparrow}_{jk}=\delta_{j+1,k}\quad\text{and}\quad M^{\downarrow}=(M^{ \uparrow})^{T}. \tag{30}\] As we are working with open boundary conditions, we use the boundary vectors \(v_{L}=v_{R}^{T}\) with \((v_{L})_{j}=\delta_{j,0}\). This MPS can be directly extended to the deformed chain, where we need to introduce the deformation parameter in the following way: \[(M^{\uparrow}_{\tau})_{jk}=\tau^{j/2}\,\delta_{j+1,k}\quad\text{and}\quad M^{ \downarrow}_{\tau}=(M^{\uparrow}_{\tau})^{T} \tag{31}\] This holds for any \(\tau\in\mathbb{C}\) so it can be used for anisotropic states as well. Therefore the Fredkin ground state can be written as: \[|\psi_{0}\rangle=\mathcal{N}^{-1/2}\,v^{T}M_{\tau,0}M_{\tau,1}\dots M_{\tau,N- 1}\,v, \tag{32}\] where the MPS tensor is given by \[M_{\tau,j}=\begin{pmatrix}0&|\uparrow_{j}\rangle&0&0&\cdots\\ |\downarrow_{j}\rangle&0&\tau^{1/2}|\uparrow_{j}\rangle&0\\ 0&\tau^{1/2}|\downarrow_{j}\rangle&0&\tau|\uparrow_{j}\rangle\\ 0&0&\tau|\downarrow_{j}\rangle&0\\ \vdots&&&&\ddots\end{pmatrix}. \tag{33}\] For example, for \(N=6\) this expression gives: \[|\psi_{0}^{N=6}\rangle =|\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\rangle+\tau\big{(} |\uparrow\downarrow\uparrow\uparrow\downarrow\downarrow\rangle+|\uparrow \uparrow\downarrow\downarrow\uparrow\downarrow\rangle\big{)}+\tau^{2}|\uparrow\uparrow\downarrow\uparrow\downarrow\downarrow\rangle+ \tau^{3}|\uparrow\uparrow\uparrow\downarrow\downarrow\downarrow\rangle \tag{34}\] which indeed agrees with Eq. (29). We note that alternative tensor network representations of the Fredkin ground states have been discussed in the literature [91]. Furthermore, the MPS representation above is able to capture the critical point at \(|\tau|=1\), which is precisely the reason behind \(\chi\) increasing linearly with system size.
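The contraction in Eq. (32) can be cross-checked against a direct enumeration of area-weighted Dyck paths for small chains. The sketch below is a minimal illustration (NumPy; the function names are ours, and the boundary vectors are \(\delta_{j,0}\) as in the text):

```python
import numpy as np
from itertools import combinations

def dyck_weights(n, tau):
    """Direct enumeration of Eq. (26): weights tau^{A(p)/2} over Dyck paths,
    with A(p) the area under the height profile (trapezoidal sum)."""
    weights = {}
    for up_positions in combinations(range(n), n // 2):
        ups = set(up_positions)
        h, heights, valid = 0, [0], True
        for i in range(n):
            h += 1 if i in ups else -1
            if h < 0:          # height dipped below zero: not a Dyck path
                valid = False
                break
            heights.append(h)
        if valid and h == 0:
            path = ''.join('U' if i in ups else 'D' for i in range(n))
            area = sum((heights[k] + heights[k + 1]) / 2 for k in range(n))
            weights[path] = tau ** (area / 2)
    return weights

def mps_amplitude(config, tau):
    """Contract the deformed MPS of Eqs. (31)-(32) for a string of 'U'/'D';
    bond dimension chi = N/2 + 1, boundary vector delta_{j,0}."""
    chi = len(config) // 2 + 1
    up = np.zeros((chi, chi))
    for j in range(chi - 1):
        up[j, j + 1] = tau ** (j / 2)   # (M_up)_{j,j+1} = tau^{j/2}
    down = up.T                          # M_down = (M_up)^T
    vec = np.zeros(chi)
    vec[0] = 1.0
    for s in config:
        vec = vec @ (up if s == 'U' else down)
    return vec[0]
```

For \(N=6\) both routes give the five amplitudes of Eq. (34) up to a common factor: the MPS amplitudes equal the enumerated weights divided by the weight of the alternating path, and they vanish for any non-Dyck string.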
This limits our ability to extract thermodynamic limit behaviour in this phase using the MPS tensors from Eqs. (30) and (31). However, the regime of interest for this paper is \(|\tau|\lesssim 0.4\) (i.e., \(L_{2}\lesssim 7\,\ell_{B}\)), which is far from the critical point. Hence, it is possible to describe the ground state with high accuracy by truncating to a finite bond dimension. Consider the following truncated tensor: \[M_{\tau,j}^{(\chi=3)}=\begin{pmatrix}0&|\uparrow_{j}\rangle&0\\ |\downarrow_{j}\rangle&0&\tau^{1/2}|\uparrow_{j}\rangle\\ 0&\tau^{1/2}|\downarrow_{j}\rangle&0\end{pmatrix}. \tag{35}\] For a chain with an even number of spins, this MPS yields the following simple ground state: \[|\psi_{0}^{(\chi=3)}\rangle=|\uparrow\rangle\big{(}|\downarrow\uparrow\rangle +\tau|\uparrow\downarrow\rangle\big{)}^{\otimes\frac{N-2}{2}}|\downarrow\rangle. \tag{36}\] With a fixed \(\chi\), it is straightforward to analytically calculate the behavior of relevant quantities in the thermodynamic limit by using the MPS transfer matrix. The average orbital density takes the form: \[\langle\hat{n}_{4j/4j+1}\rangle=\frac{1}{1+\tau^{2}}\,,\quad\langle\hat{n}_{4 j+2/4j+3}\rangle=\frac{\tau^{2}}{1+\tau^{2}}. \tag{37}\] As expected, this resembles a CDW pattern, which in this approximation (and also in the full Fredkin ground state) is predicted to disappear at \(|\tau|=1\), corresponding to \(L_{2}\)\(\approx\)\(10.7\ell_{B}\) (outside the range of validity of the truncated model). Figure 5 shows a comparison of orbital density between the MR state, the Fredkin state and the \(\chi\)=3 approximation above. At \(L_{2}\)=\(7\ell_{B}\) the two truncated states still capture the CDW pattern, with the Fredkin state showing more accurate results. Since this approximate state can be written in the tensor product form above, the density-density correlations decay to zero with a finite correlation length.
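The densities in Eq. (37) follow directly from the normalized two-site factor \((|\!\downarrow\uparrow\rangle+\tau|\!\uparrow\downarrow\rangle)/\sqrt{1+\tau^{2}}\) of Eq. (36). A minimal numerical check (NumPy; the function name and basis ordering are our own):

```python
import numpy as np

def pair_up_probabilities(tau):
    """Probability of spin-up on each site of one normalized pair
    (|down,up> + tau |up,down>)/sqrt(1 + tau^2) from Eq. (36).
    Basis index is 2*s1 + s2 with s = 1 for spin-up."""
    psi = np.zeros(4)
    psi[0b01] = 1.0   # |down, up>
    psi[0b10] = tau   # |up, down>
    psi /= np.linalg.norm(psi)
    p_first = psi[0b10]**2 + psi[0b11]**2    # first site up
    p_second = psi[0b01]**2 + psi[0b11]**2   # second site up
    return p_first, p_second
```

This reproduces the \(\tau^{2}/(1+\tau^{2})\) and \(1/(1+\tau^{2})\) pattern of Eq. (37); the orbital densities then follow from the spin-to-orbital mapping.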
### Quantum algorithm for preparing the ground state and its implementation on IBM quantum processor The simple structure of the MPS wave function in the above approximation (for \(\chi\)=3) is amenable to implementation on noisy intermediate-scale quantum devices. Indeed, all states in the superposition can be obtained from a direct-product root pattern by only one layer of one- and two-qubit gates. Furthermore, the parameters of the circuit can be determined analytically without the need for any classical or hybrid optimization, which allows for direct implementation on a large number of qubits. The structure of the quantum circuit is shown in Fig. 6. If we choose the angle \(\theta\) to be equal to \[\theta=2\arctan(\tau), \tag{38}\] the \(y\)-rotation creates a superposition \(|\uparrow\rangle+\tau|\downarrow\rangle\) and the CNOT then changes the state of the two qubits to \(|\downarrow\uparrow\rangle+\tau|\uparrow\downarrow\rangle\). Figure 6: The structure of the quantum circuit for six qubits. The \(X\) gates create the root patterns, and the rotations and CNOTs implement the action of MPS matrices in Eq. (35). Figure 5: Comparison between the average orbital density of the MR state and different truncated states: the Fredkin state and the state obtained by truncating the Fredkin MPS at \(\chi\)=3. The system has \(N_{e}\)=14 electrons and the cylinder circumference is \(L_{2}\)=\(7\ell_{B}\). The truncated states deviate slightly from the charge density wave pattern of the MR state. As a quick check, we executed the circuit on the ibmq_mumbai device, a 27-qubit processor with quantum volume 128. This implementation was carried out using the Qiskit package. In this simulation, we used \(N=26\), with the initial and final qubits held in trivial up and down states. Notably, we refrained from employing any error mitigation techniques, and we deliberately included lower-quality qubits and couplings of the device.
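The action of one two-qubit block of the circuit in Fig. 6 can be verified with a direct statevector calculation. The sketch below uses plain NumPy rather than Qiskit to stay self-contained; the gate ordering and qubit labels are our own convention:

```python
import numpy as np

def prepare_pair(tau):
    """One two-qubit block of the Fig. 6 circuit: X on the second qubit,
    Ry(theta) with theta = 2*arctan(tau) on the first, then CNOT with the
    first qubit as control. Basis index is 2*q0 + q1 (q0 most significant)."""
    theta = 2 * np.arctan(tau)
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    x = np.array([[0.0, 1.0], [1.0, 0.0]])
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)
    psi = np.zeros(4)
    psi[0] = 1.0                  # start from |00>
    psi = np.kron(ry, x) @ psi    # Ry on qubit 0, X on qubit 1
    return cnot @ psi             # -> (|01> + tau |10>) / sqrt(1 + tau^2)
```

The output is the normalized two-site factor of Eq. (36), confirming the choice of \(\theta\) in Eq. (38).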
Our simulation utilized a mere couple of thousand shots to ascertain bitstring probabilities in the computational basis. We found very good agreement of the measured orbital densities with the analytical results of Eq. (37), save for a few instances where gate calibrations during the simulation were imperfect. The results for the orbital density are shown in Fig. 7. We used 2048 shots in both the quantum execution and the simulation of the circuit with IBM's Aer simulator. The Aer simulation is in excellent agreement with Eq. (37), with slight differences due to the finite number of shots. There is also good agreement with the quantum device, except for a few qubits. Despite its simplicity, the approximate ground state prepared above can serve as a valuable starting point for exploring the dynamics of the MR phase on quantum computers. ## IV Quench dynamics Given that our Fredkin model (21) was constructed by focusing on the root state and it does not represent a truncation of the MR Hamiltonian according to a decreasing order of magnitude, it is not obvious that the excited spectrum necessarily matches that of the full MR Hamiltonian. To demonstrate the correspondence of key physical properties between the two spectra, in this section we focus on the dynamical response of the Fredkin model. In particular, we study geometric quench [52] to probe the compatibility between the two models, which was previously used to a similar effect in the \(\nu\)=1/3 Laughlin case [46]. Geometric quench is designed to elicit the dynamical response of the Girvin-MacDonald-Platzman (GMP) collective mode [26, 27], which is present in all known gapped FQH states, including the MR state [92, 22, 23, 24, 93]. In the long-wavelength limit \(k\ell_{B}\)\(\rightarrow\)0, the GMP mode forms a quadrupole degree of freedom that carries angular momentum \(L\)=2 and can be represented by a quantum metric [51]. 
In this respect, the \(k\ell_{B}\)\(\rightarrow\)0 limit of the GMP mode has formal similarity with the fluctuating space-time metric in a theory of quantum gravity [94, 95] and it is sometimes referred to as "FQH graviton" [92, 96]. It was shown that the quantum metric fluctuations can be exposed by introducing anisotropy which breaks rotational symmetry of the system [52, 53]. Such geometric quenches induce coherent dynamics of the FQH graviton, even though the latter resides in the continuum of the energy spectrum, making it a useful probe of physics beyond the ground state considered thus far. ### Spectral function The GMP mode, to a high accuracy, can be generated by a simple ansatz called the "single-mode approximation" [26, 27]: the state belonging to the mode with momentum \(\mathbf{k}\) is obtained by acting with the projected density operator, \(\bar{\rho}_{\mathbf{k}}\), on the ground state, i.e., \(|\psi_{\mathbf{k}}^{\text{GMP}}\rangle=\bar{\rho}_{\mathbf{k}}|\psi_{0}\rangle\). Thus, the GMP states are automatically orthogonal to the ground state as they live in different momentum sectors. However, in practice, it is more convenient to study dynamics within the \(\mathbf{k}=0\) sector of the ground state. This is the case with the geometric quench setup, described in Sec. IV.2 below. Thus, in order to identify the relevant GMP state in the \(\mathbf{k}=0\) sector, possibly hidden in the continuum of the energy spectrum, we need a different tool. Figure 7: The orbital density from the quantum circuit. There is good agreement between the results of Eq. (37), classical simulation of the circuit using IBM Aer simulator and quantum implementation on the ibmq_mumbai device. The IBM data was obtained on 31/8/2023 at 2:02 PM. We identify the long-wavelength limit of the GMP mode using the following
spectral function [52; 54]: \[I(\omega)=\sum_{n}|\langle\psi_{n}|\hat{O}|\psi_{0}\rangle|^{2}\delta(\omega- \omega_{n}), \tag{39}\] where \(\hat{O}\) is a 3-body operator with quadrupolar \(x^{2}-y^{2}\) symmetry, given in Ref. [54], and the sum runs over (in principle, all) energy eigenstates \(|\psi_{n}\rangle\) with energies \(\omega_{n}\), measured relative to the ground state energy \(\omega_{0}\). In second quantization, the matrix element of \(\hat{O}\) is \[O_{j_{1}j_{2}j_{3}j_{4}j_{5}j_{6}} =\delta_{j_{1}+j_{2}+j_{3},j_{4}+j_{5}+j_{6}}\left(\sum j_{i}^{2} -\frac{1}{6}\left(\sum j_{i}\right)^{2}\right)\] \[\times(j_{1}-j_{2})(j_{2}-j_{3})(j_{1}-j_{3})\] \[\times(j_{4}-j_{5})(j_{5}-j_{6})(j_{4}-j_{6})\] \[\times\exp\left[-\frac{\kappa^{2}}{2}\left(\sum j_{i}^{2}-\frac{ 1}{6}\big{(}\sum j_{i}\big{)}^{2}\right)\right], \tag{40}\] which allows to readily evaluate Eq. (39). Note that in this section we consider the spectral function for an _isotropic_ system, hence there is no metric dependence in Eq. (40). As before, the matrix element given here is derived for cylinder geometry and appropriate modifications are needed to make it compatible with torus boundary conditions, as explained in Sec. II.1. In Fig. 8 we plot the evolution of the spectral function \(I(\omega)\) as the cylinder circumference is varied from the Tao-Thouless limit towards the isotropic 2D limit, in both the untruncated and Fredkin models. We see there is good agreement between the two models for \(L_{2}{\lesssim}7\ell_{B}\), i.e., across the same range where we previously established high overlap between the ground states of the two models. For larger circumferences, it becomes impossible to adiabatically track the evolution of the graviton peak in \(I(\omega)\) due to multiple avoided crossings in Fig. 8. 
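The matrix element in Eq. (40) is straightforward to transcribe. The sketch below (our own function name, overall normalization omitted as in the text) can be used to build \(\hat{O}\) and hence evaluate Eq. (39):

```python
import numpy as np

def o_element(js, kappa):
    """Matrix element O_{j1...j6} of the quadrupolar three-body operator,
    Eq. (40), up to overall normalization. js = (j1, j2, j3, j4, j5, j6)."""
    j1, j2, j3, j4, j5, j6 = js
    if j1 + j2 + j3 != j4 + j5 + j6:       # momentum conservation
        return 0.0
    quad = sum(j * j for j in js) - sum(js) ** 2 / 6.0
    poly = ((j1 - j2) * (j2 - j3) * (j1 - j3)
            * (j4 - j5) * (j5 - j6) * (j4 - j6))
    return quad * poly * np.exp(-kappa ** 2 * quad / 2.0)
```

The polynomial factor makes the element antisymmetric within each index triple, as required for fermions.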
The graviton resides in the continuum of the spectrum and it is not protected by a symmetry of the Hamiltonian, hence its support over energy eigenstates may undergo complicated "redistribution" as the geometry of the system is varied. In particular, away from the Tao-Thouless limit, there is also a clear splitting of spectral weight between several energy eigenstates, suggesting that the graviton degree of freedom may not correspond to a single eigenstate in this regime. ### Geometric quench Given the complex evolution of the spectral function in Fig. 8 when interpolating between the isotropic 2D limit and the thin-cylinder limit we are interested in, it is natural to inquire if the graviton oscillations observed in the Laughlin case in Refs. [52; 46] persist in the MR case and what their origin may be. In this section we analyze the geometric quench dynamics in the thin-cylinder limit and establish that it corresponds to a linearized bi-metric theory of Gromov and Son [97]. This shows that, despite the simplicity of our model (21), it is successful at capturing a nontrivial many-body effect of a 2D FQH system away from equilibrium. The geometric quench setup assumes that electrons are described by an arbitrary mass tensor \(g_{ab}\), with \(a,b=1,2\). The mass tensor must be symmetric and unimodular (det \(g=1\)) [51], hence we can generally write it as \(g=\exp(\hat{Q})\) where \(\hat{Q}=Q(2\hat{d}_{a}\hat{d}_{b}-\delta_{a,b})\) is a Landau-de Gennes order parameter and \(\hat{\mathbf{d}}=(\cos(\phi/2),\sin(\phi/2))\) is a unit vector [98]. Parameters \(Q\) and \(\phi\) intuitively represent the stretch and rotation of the metric, respectively, with \(Q{=}\phi{=}0\) corresponding to the isotropic case. Under Landau-level projection, the interaction matrix elements acquire explicit dependence on \(g\), as can be seen in Eq. (4). 
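The parametrization \(g=\exp(\hat{Q})\) can be made concrete: since \(\hat{Q}^{2}=Q^{2}\mathds{1}\) for a unit vector \(\hat{\mathbf{d}}\), the matrix exponential reduces to hyperbolic functions. A small sketch (NumPy; the function name is ours):

```python
import numpy as np

def mass_metric(Q, phi):
    """Unimodular mass tensor g = exp(Qhat) with
    Qhat = Q (2 d d^T - 1) and d = (cos(phi/2), sin(phi/2)).
    Since Qhat^2 = Q^2 * identity, the exponential reduces to cosh/sinh."""
    d = np.array([np.cos(phi / 2), np.sin(phi / 2)])
    direction = 2 * np.outer(d, d) - np.eye(2)
    return np.cosh(Q) * np.eye(2) + np.sinh(Q) * direction
```

One can check that \(\det g=1\) and \(g=g^{T}\) for any \(Q,\phi\), and that \(Q=0\) gives back the isotropic metric.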
Figure 8: Evolution of the spectral function \(I(\omega)\) in Eq. (39) in the Fredkin model (top) and the full MR model (bottom), as a function of cylinder circumference \(L_{2}\). The peak(s) in the spectral function are identified with the long-wavelength limit of the GMP mode, i.e., the FQH graviton. System size is \(N_{e}{=}10\) electrons, \(N_{\phi}{=}18\) flux quanta. For \(g\) close to the identity (i.e., at weak anisotropy), the topological gap is robust and the MR state remains a zero-energy ground state. We assume the initial state before the quench to be the isotropic MR state with \(g=\mathds{1}\). At time \(t=0\), the anisotropy in the Hamiltonian is instantaneously changed and, for simplicity, we assume the new metric to be diagonal, \(g=\text{diag}[g_{11},g_{22}]\), with \(g_{11}\neq g_{22}\). The deformed \(g_{11}\) (and, therefore, \(g_{22}\)) should be sufficiently close to unity such that the equilibrium system is still in the MR phase. From Eq. (29) we can directly extract the first order corrections to the root state, \(|R_{0}\rangle\equiv|11001100\ldots 0011\rangle\). These are given by states where only one squeezing, Eq. (27), is applied at a minimal distance: \[|\psi_{0}\rangle\approx\big{(}1-\tau\sum_{i}\hat{S}_{i,1}\big{)}|R_{0}\rangle. \tag{41}\] Substituting the deformation parameter in Eq. (22) and assuming \(\exp(-2\kappa^{2})\) and the metric anisotropy \(Q,\phi\) to be small, we get \[|\psi_{0}\rangle\approx|R_{0}\rangle-2\exp\big{[}-2\kappa^{2} \big{(}1-Qe^{i\phi}\big{)}\big{]}\sum_{i}\hat{S}_{i,1}|R_{0}\rangle, \tag{42}\] where we used \(g_{11}=\cosh Q\), \(g_{12}=\sinh Qe^{i\phi}\) and therefore \((1-ig_{12})/g_{11}\approx 1-iQe^{i\phi}\). On the other hand, the graviton state is approximated by: \[|\psi_{g}\rangle=\hat{O}|\psi_{0}\rangle\propto e^{-14\kappa^{2} /3}\bigg{(}\sum_{i}\hat{S}_{i,1}|R_{0}\rangle+O\big{(}e^{-2\kappa^{2}}\big{)} \bigg{)}. \tag{43}\] Note that \(|\psi_{g}\rangle\) is orthogonal to \(|\psi_{0}\rangle\).
From here, we deduce the graviton root state, \[|R_{g}\rangle =\sum_{i}\hat{S}_{i,1}|R_{0}\rangle\] \[=|1011010011\ldots\rangle+|1100101101\ldots\rangle+\ldots, \tag{44}\] which is proportional to the first order squeezes. This is identical to the MR ground state first-order correction to the root state and, in some sense, it is the simplest translationally invariant quadrupole structure that we can impose on top of it, creating quadrupoles of the form \(-++-\) in each unit cell. From the graviton root state, we can deduce the geometric quench dynamics up to first order in \(\exp(-2\kappa^{2})\). Assuming, for simplicity, that the post-quench Hamiltonian has the metric \(g_{11}=\exp(A)\approx 1+A\) and \(g_{12}=0\), the initial state is given by \[|\psi(t=0)\rangle=|\psi_{0}^{\text{iso}}\rangle\approx|R_{0}\rangle-2e^{-2 \kappa^{2}}|R_{g}\rangle. \tag{45}\] Denoting by \(|\psi_{0}^{\text{aniso}}\rangle\) the ground state of the post-quench Hamiltonian and using Eq. (42), we get \[|\psi(t=0)\rangle=|\psi_{0}^{\text{aniso}}\rangle-2e^{-2\kappa^{2} }(1-e^{2\kappa^{2}A})|R_{g}\rangle. \tag{46}\] Very close to the thin-cylinder limit, the graviton root state will be the correct \(O(1)\) approximation to an eigenstate of the Hamiltonian, as confirmed by the numerics. Thus, to first order, we can treat both \(|\psi_{0}^{\text{aniso}}\rangle\) and \(|R_{g}\rangle\) as eigenstates and write the time-evolved state as \[|\psi(t)\rangle=|\psi_{0}^{\text{aniso}}\rangle-2e^{-2\kappa^{2} }(1-e^{2\kappa^{2}A})e^{-iE_{\gamma}t}\left|R_{g}\right\rangle, \tag{47}\] with \(E_{\gamma}\) being the energy of the graviton state. Assuming that the combined anisotropy, coming from the metric deformation and the stretching of the cylinder, is still small, \(\kappa^{2}A\ll 1\), we can rewrite the above expression \[|\psi(t)\rangle\approx|R_{0}\rangle-2e^{-2\kappa^{2}}(1+2\kappa ^{2}A(1-e^{-iE_{\gamma}t}))|R_{g}\rangle. 
\tag{48}\] The expression in the bracket can be rewritten as \[1+2\kappa^{2}A(1-e^{-iE_{\gamma}t})\approx e^{2\kappa^{2}A(1-e^{-iE_{\gamma}t })}. \tag{49}\] Substituting into the previous equation, \[|\psi(t)\rangle\approx|R_{0}\rangle-2\exp\left(-2\kappa^{2}[1-A(1 -e^{-iE_{\gamma}t})]\right)|R_{g}\rangle. \tag{50}\] We recognize that this is of the same form as Eq. (42), as the expression in the square bracket can be written as \(1-\tilde{Q}\exp(i\tilde{\phi})\), with \[\tilde{Q}(t)=2A\sin(E_{\gamma}t/2),\quad\tilde{\phi}(t)=\pi/2-E_{ \gamma}t/2. \tag{51}\] These are nothing but the equations of motion of the linearized bimetric theory [52]. Thus, we have reproduced the graviton dynamics, which in the thin-cylinder limit reduces to the above two-level system dynamics. Figure 9: Geometric quench dynamics in the Fredkin model (21). The system size is \(N_{e}\)=8, \(N_{\phi}\)=14, and the circumference is \(L_{2}=3.6\ell_{B}\). The system is initialised in the isotropic ground state and then time-evolved by the anisotropic Hamiltonian with Q=0.01. The resulting dynamics is in excellent agreement with the linearized bimetric theory, shown by dashed lines. The slight disagreement between the two at late times (a slow decay of the oscillations) comes from the spectral weight in Fig. 8 being spread over more than a single energy eigenstate. Fig. 9 confirms the existence of very regular metric oscillations at \(L_{2}{=}3.6\ell_{B}\) and their agreement with the analytical expression in Eq. (51). From Eq. (44) we deduce that the energy of the graviton in the thin-cylinder limit is \(E_{\gamma}=2V_{023320}=72\,e^{-14\kappa^{2}/3}\,\), which agrees with the frequency of the oscillations seen in Fig. 9. Notably, our Fredkin model still accurately captures the dynamics beyond the regime where it can be analytically treated as a two-level system. For example, around circumference \(L_{2}{\sim}5\ell_{B}\), the graviton peak splits into a few smaller peaks close in energy.
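The two-level dynamics leading to Eq. (51) can be checked numerically: comparing Eq. (50) with Eq. (42) gives \(\tilde{Q}e^{i\tilde{\phi}}=A(1-e^{-iE_{\gamma}t})\), whose modulus and phase reproduce the bimetric equations of motion. A minimal sketch (our own function name):

```python
import numpy as np

def quench_metric(t, A, E_gamma):
    """Effective metric deformation from the two-level dynamics:
    Qtilde * exp(i * phitilde) = A * (1 - exp(-i * E_gamma * t))."""
    z = A * (1 - np.exp(-1j * E_gamma * t))
    return np.abs(z), np.angle(z)
```

For \(0<E_{\gamma}t<2\pi\) this reproduces \(\tilde{Q}=2A\sin(E_{\gamma}t/2)\) and \(\tilde{\phi}=\pi/2-E_{\gamma}t/2\) of Eq. (51).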
The resulting metric oscillations can be seen in Fig. 10. There is now a slowly varying envelope that cannot be accounted for within the simple linearized bimetric theory in Eqs. (51). At even larger circumferences, the structure of the graviton state becomes increasingly complicated, as there are many types of quadrupolar configurations of the root state. The spectrum undergoes dramatic transformations at intermediate values of \(L_{2}\), as the hierarchy of energy scales in the Hamiltonian changes. It is expected that close to the 2D limit and in the thermodynamic limit, the energy of the graviton stabilises, as the energy hierarchy stabilises too when \(\kappa\) is small. ## V Conclusions and discussion We have formulated a one-dimensional qubit model that captures the MR state and its out-of-equilibrium properties on sufficiently thin cylinders with circumferences \(L_{2}{\lesssim}7\ell_{B}\). This was demonstrated by computing the overlap with the MR wave function and scaling of entanglement entropy with the size of the subsystem, as well as the dynamics following a geometric quench. One advantage of the proposed model is that its ground state can be written down exactly and it is amenable to efficient preparation on the existing quantum hardware, as we have demonstrated using the IBM quantum processor. At the expense of noise-aware error mitigation schemes [46], these results can naturally be extended to probe the dynamics of the MR phase on quantum computers. This would also require an efficient optimally decomposed circuit to emulate trotterized evolution with our Hamiltonian (24), which is left to future work. There are some notable differences between the model presented here and previous studies of the \(\nu{=}1/3\) Laughlin case [46]. While in the latter case, the truncated model can be easily adapted to either open or periodic boundary condition, in our case the torus boundary condition leads to considerable complications. 
For example, the hopping term in Eq. (8) should no longer be neglected as it can act on the root state \(1010\cdots 1010\), which is one of the Tao-Thouless torus ground states. Keeping this hopping term, in combination with Eq. (9) we considered above, leads to more complicated models, none of which appears to be frustration-free (at least until we exhaust all the terms in the Hamiltonian for a given finite system size). For this reason, our truncated model in Eq. (15) applies primarily to the cylinder geometry. The previously mentioned caveats of boundary conditions highlight the fact that defining a truncated model is a nontrivial task. Unlike the Laughlin case, where the Hamiltonian can be naturally truncated according to the magnitude of the matrix elements, leading to a frustration-free model, such a truncation scheme was not possible for the MR case. The requirement of a frustration-free truncated Hamiltonian involves judiciously neglecting certain terms, which necessitates an independent demonstration of the model's validity. In fact, similar difficulties are encountered even in the Laughlin case in going beyond the first-order truncation in Ref. [41] (see Appendix C), and they become progressively more severe in higher members of the Read-Rezayi sequence [64]. It would be useful if a systematic approach could be developed for generating frustration-free models in all these examples, which would allow one to controllably approach the isotropic 2D limit. The frustration-free property of the truncated model at \(\nu{=}1/3\) has recently allowed a rigorous proof of the existence of a spectral gap in that case [99; 100; 101]. It would be interesting to see if such an approach could be generalized to the MR state and, potentially, to longer-range truncations. As mentioned in the Introduction, a unique feature of the MR state is the neutral fermion collective mode and the emergent supersymmetry relating that mode with the GMP mode we discussed above.
Figure 10: Comparison of the geometric quench dynamics between the Fredkin and full model at the cylinder circumference \(L_{2}{=}5\ell_{B}\). The system size is \(N_{e}{=}8\), \(N_{\phi}{=}14\). The system is initialized in the isotropic ground state and then time-evolved by an anisotropic Hamiltonian with \(Q{=}0.02\). In this case, the dynamics is beyond the simple two-level system dynamics described by linearized bimetric theory, as the graviton does not correspond to a single eigenstate. The contribution of additional eigenstates to the spectral function gives rise to the beating pattern seen here. Nevertheless, there is still good agreement between the Fredkin and full model. It would be worth investigating signatures of the neutral fermion mode in the proposed Fredkin model or other appropriate truncations of the MR Hamiltonian near the thin-cylinder limit. Unlike the GMP mode, which can be directly probed using the geometric quench, it is not known how to excite the neutral fermion mode. This is because the latter carries angular momentum \(L\)=3/2 in the long-wavelength limit. Therefore, it does not couple to the anisotropic deformations of the FQH fluid studied above. We leave the investigation of such dynamical probes and their implementation on quantum hardware to future work. Note added. During the completion of this work, we became aware of a work by Causer _et al._[102] which finds evidence of anomalous thermalization dynamics and quantum many-body scars in a similar type of deformed Fredkin model. However, the model studied by Causer _et al._[103] assumes a different parameter range \(|\tau|>1\), which is unphysical from the point of view of FQH realization considered here. ###### Acknowledgements. We would like to thank Zhao Liu and Ajit C. Balram for useful discussions. This work was supported by the Leverhulme Trust Research Leadership Award RL-2019-015. This research was supported in part by the National Science Foundation under Grant No.
NSF PHY-1748958. PG acknowledges support from NSF DMR-2130544 and infrastructural support from NSF HRD-2112550 (NSF CREST Center IDEALS). PG and AK acknowledge support from NSF DMR-2037996. AR acknowledges support from NSF DMR-2038028 and NSF DMR-1945395. We acknowledge the use of IBM Quantum services. We also thank the Brookhaven National Laboratory for providing access to IBM devices. ## Appendix A Anisotropic interaction matrix elements for the Moore-Read state We sketch the derivation of the interaction matrix elements when an anisotropic band mass is introduced. The interaction Hamiltonian can be written as: \[\hat{H}=\frac{1}{N_{\phi}}\sum_{\mathbf{p},\mathbf{q}}\bar{V}(\mathbf{p}, \mathbf{q},-\mathbf{p}-\mathbf{q})\,:\bar{\rho}(\mathbf{p})\,\bar{\rho}( \mathbf{q})\,\bar{\rho}(-\mathbf{p}-\mathbf{q}): \tag{10}\] where \(\bar{\rho}(\mathbf{q})=e^{-iq_{x}q_{y}/2}\sum_{j}e^{iq_{x}\kappa j}\,c_{j+q_{y }/\kappa}^{\dagger}c_{j}\) is the projected density operator, and \(\bar{V}(\mathbf{p},\mathbf{q},-\mathbf{p}-\mathbf{q})\) is the interaction potential multiplied by the corresponding form factor: \[\bar{V}(\mathbf{p},\mathbf{q},-\mathbf{p}-\mathbf{q})=F(\mathbf{p},\mathbf{q},-\mathbf{p}-\mathbf{q})\,v(\mathbf{p},\mathbf{q},-\mathbf{p}-\mathbf{q}), \tag{11}\] where the form factor is \[F(\mathbf{p},\mathbf{q},-\mathbf{p}-\mathbf{q})=e^{-\mathbf{p}^{2}/4-\mathbf{ q}^{2}/4-(\mathbf{p}+\mathbf{q})^{2}/4}, \tag{12}\] and the interaction potential \[v(\mathbf{p},\mathbf{q},-\mathbf{p}-\mathbf{q})=\mathbf{p}^{4} \mathbf{q}^{2}+\mathbf{q}^{4}\mathbf{p}^{2}+\mathbf{q}^{4}(\mathbf{q}+\mathbf{ p})^{2}+\] \[(\mathbf{q}+\mathbf{p})^{4}\mathbf{q}^{2}+\mathbf{p}^{4}( \mathbf{q}+\mathbf{p})^{2}+(\mathbf{q}+\mathbf{p})^{4}\mathbf{p}^{2} \tag{13}\] is the Fourier transform of Eq. (1). An anisotropic band mass tensor affects the single-electron wave functions - see, e.g., Ref. [104].
Thus, it also modifies the matrix elements: \[V_{j_{1}j_{2}j_{3}j_{4}j_{5}j_{6}} \propto P_{g}(\{j_{i}\})\,\exp\biggl{(}-\frac{\kappa^{2}}{2g_{11}}( \sum j_{i}^{2}-\frac{1}{6}(\sum j_{i})^{2})\] \[+\frac{i\kappa^{2}g_{12}}{2g_{11}}(j_{6}^{2}+j_{5}^{2}+j_{4}^{2}- j_{3}^{2}-j_{2}^{2}-j_{1}^{2})\biggr{)} \tag{14}\] Just as in the isotropic case, the polynomial \(P_{g}\) is tightly constrained: it has to be antisymmetric in the pairs \((j_{1},j_{2})\), \((j_{1},j_{3})\), \((j_{2},j_{3})\), \((j_{4},j_{5})\), \((j_{4},j_{6})\), \((j_{5},j_{6})\), and its maximum total degree is 6. The only such polynomial is the one that appears in the isotropic case. Therefore the only contribution of the metric in the prefactor is a constant. The final form will be: \[V_{j_{1}\dots j_{6}} \propto\frac{\kappa^{8}}{g_{11}^{4}}(j_{1}-j_{2})(j_{1}-j_{3})(j _{2}-j_{3})(j_{6}-j_{4})\] \[(j_{6}-j_{5})(j_{5}-j_{4})\,\exp\biggl{(}-\frac{\kappa^{2}}{2g _{11}}(\sum j_{i}^{2}-\frac{1}{6}(\sum j_{i})^{2})\] \[+\frac{i\kappa^{2}g_{12}}{2g_{11}}(j_{6}^{2}+j_{5}^{2}+j_{4}^{2}- j_{3}^{2}-j_{2}^{2}-j_{1}^{2})\biggr{)}. \tag{15}\] ## Appendix B Nonlocal string order in the Fredkin chain The nonlocal constraint that defines Dyck paths hints that the Fredkin ground state might have interesting behavior in certain nonlocal order parameters. This is reinforced by the fact that such nonlocal correlations were found in the spin-1 analog of our model, the deformed Motzkin chain [105]. The natural correlations to probe in the Fredkin chain are the string orders discussed in Ref. [106] in connection to spin-1/2 ladders and the Majumdar-Ghosh chain: \[O_{\text{even/odd}}=\lim_{|i-j|\to\infty}\biggl{\langle}\bigl{(}S_{i}^{z}+S_{ i+1}^{z}\bigr{)}e^{i\pi\sum_{l=i+2}^{j-1}S_{l}^{z}}\bigl{(}S_{j}^{z}+S_{j+1}^{z} \bigr{)}\biggr{\rangle} \tag{16}\] where for \(O_{\text{even/odd}}\) the sites \(i,j\) are both even/odd, respectively.
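For chains small enough to enumerate, the correlator of Eq. (16) can be evaluated at finite separation directly from the area-weighted Dyck superposition of Eq. (26). A minimal sketch (plain Python/NumPy; the function names are ours, and the MPS route used in the text is far more efficient):

```python
import numpy as np
from itertools import combinations

def fredkin_ground_state(n, tau):
    """Fredkin ground state as {config: amplitude} over Dyck paths.
    A config is a tuple of +1/-1 (i.e., 2*S^z); weights tau^{A(p)/2}."""
    state = {}
    for ups in combinations(range(n), n // 2):
        s = tuple(1 if i in set(ups) else -1 for i in range(n))
        h, heights, ok = 0, [0], True
        for step in s:
            h += step
            if h < 0:
                ok = False
                break
            heights.append(h)
        if ok and h == 0:
            area = sum((heights[i] + heights[i + 1]) / 2 for i in range(n))
            state[s] = tau ** (area / 2)
    norm = np.sqrt(sum(a * a for a in state.values()))
    return {s: a / norm for s, a in state.items()}

def string_order(n, tau, i, j):
    """Finite-separation version of Eq. (16):
    <(S_i^z + S_{i+1}^z) exp(i pi sum_{l=i+2}^{j-1} S_l^z) (S_j^z + S_{j+1}^z)>."""
    psi = fredkin_ground_state(n, tau)
    total = 0.0
    for s, a in psi.items():
        left = (s[i] + s[i + 1]) / 2          # S_i^z + S_{i+1}^z
        right = (s[j] + s[j + 1]) / 2
        string = np.exp(1j * np.pi * sum(s[l] / 2 for l in range(i + 2, j)))
        total += a * a * left * string.real * right
    return total
```

For \(N=4\), \(i=0\), \(j=2\) the string is empty and the correlator reduces to \(-\tau^{2}/(1+\tau^{2})\), a useful sanity check of the enumeration.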
Using the MPS representation (31), we test the Fredkin ground state for nonlocal order, shown in Fig. 11. First, note that nondecaying expectation values are only found inside the \(|\tau|<1\) phase (as the inset shows), whereas in the \(|\tau|>1\) "domain-wall" phase these nonlocal correlations decay. This suggests that in the \(|\tau|<1\) "antiferromagnetic" phase, short-range valence bonds form between consecutive spins. We also notice that generally \(O_{\rm even}\) is higher in magnitude compared to \(O_{\rm odd}\). Given that the spin chain always has even length, the favored arrangement is where all spins are paired (i.e. \((0,1),(2,3),\ldots,(N-2,N-1)\)), as opposed to the case where the first and the last spins remain unpaired. This implies the bonds starting on an even index will be stronger. ## Appendix C Motzkin chain as an effective truncated model of the \(\nu{=}1/3\) Laughlin state In this Appendix, for the sake of completeness, we show that the Motzkin chain [79, 80, 81, 82] - a closely related spin-1 cousin of the Fredkin chain - captures the properties of the \(\nu{=}1/3\) Laughlin state. This model contains more terms compared to the model derived in Ref. [41] and hence captures the physics of the Laughlin state over a slightly larger range of cylinder circumferences. The model in Ref. [41] can be derived via a similar method to the one presented above, but for a 2-body interaction given in terms of the \(V_{1}\) Haldane pseudopotential [63]. The corresponding matrix elements are now given by \[V_{j_{1}j_{2}j_{3}j_{4}}=(j_{1}-j_{2})(j_{4}-j_{3})e^{-\frac{\kappa^{2}}{4}\left[(j _{1}-j_{2})^{2}+(j_{3}-j_{4})^{2}\right]}. \tag{12}\] The minimal model beyond the extreme thin-cylinder limit from Ref.
[41] can be written in the positive-semidefinite form \[H_{\rm L}^{\prime}=\sum_{i}\left(Q_{i}^{\dagger}Q_{i}+P_{i}^{\dagger}P_{i}\right) \tag{13}\] where \[Q_{i}=\alpha_{i}c_{i+1}c_{i+2}+\gamma_{i}c_{i}c_{i+3}\,,\quad P_{i}=\beta_{i}c _{i}c_{i+2} \tag{14}\] and \[\alpha=\sqrt{V_{0110}},\quad\beta=\sqrt{V_{0220}},\quad\gamma=e^{2i\kappa^{2} \frac{g_{12}}{g_{11}}}\sqrt{V_{0330}} \tag{15}\] The only configurations which are dynamically connected to the root state are those that can be obtained from applying squeezing operators \(\hat{S}_{i}=c_{i+1}^{\dagger}c_{i+2}^{\dagger}c_{i+3}c_{i}\) to \(100100\ldots 001\). This connected component of the Hilbert space can be mapped to a spin-1 model by considering unit cells of three magnetic orbitals, whose occupations can only take the following patterns: \[\ket{010}\rightarrow\ket{\mathrm{o}},\quad\ket{001}\rightarrow\ket{+},\quad \ket{100}\rightarrow\ket{-}. \tag{16}\] Thus, we can write the model of Ref. [41] as \[H_{\rm L}^{\prime}=\sum_{i=0}^{N-2}\mathcal{P}_{i,i+1}^{\varphi_{\rm L}(v)} \tag{17}\] where \(\ket{\varphi_{\rm L}(v)}=\ket{+-}-v\ket{\mathrm{oo}}\) and \(v=-\sqrt{V_{0330}/V_{0110}}=-3\exp(-2\kappa^{2})\). It is important to notice that there are no boundary conditions - they are not necessary if the mapped Hilbert space is used (which entails constraints, e.g. configurations with the first spin \(\ket{-}\) and the last spin \(\ket{+}\) being disallowed). ### Extension to the Motzkin chain A natural attempt to improve the model in Eq. (17) would be to extend the truncation, \(P_{i}\rightarrow\beta_{i}c_{i}c_{i+2}+\delta_{i}c_{i}c_{i+4}\). The newly obtained Hamiltonian \(H_{\rm L}^{\prime\prime}\) would have the following off-diagonal actions: \[H_{\rm L}^{\prime\prime}|\ldots 100\,010\ldots\rangle =\beta\delta|\ldots 010\,100\ldots\rangle\] \[H_{\rm L}^{\prime\prime}|\ldots 010\,001\ldots\rangle =\beta\delta|\ldots 001\,010\ldots\rangle \tag{18}\] and the Hermitian conjugates.
In the spin-1 mapping, these mean \[H_{\rm L}^{\prime\prime}|\ldots-\mathrm{o}\ldots\rangle =\beta\delta|\ldots\mathrm{o}-\ldots\rangle\] \[H_{\rm L}^{\prime\prime}|\ldots\mathrm{o}+\ldots\rangle =\beta\delta|\ldots+\mathrm{o}\ldots\rangle \tag{19}\] However, the fermionic Hamiltonian also produces hoppings of the following kind: \[H_{\rm L}^{\prime\prime}|\ldots 100\,100\ldots\rangle =\beta\delta|\ldots 011\,000\ldots\rangle\] \[H_{\rm L}^{\prime\prime}|\ldots 001\,001\ldots\rangle =\beta\delta|\ldots 000\,110\ldots\rangle \tag{20}\] These break our spin mapping, and connect the entire Hilbert space. Even though we only keep 2 types of off-diagonal terms, we no longer obtain a significant reduction in complexity, and in fact we find numerically that the zero-mode property is also lost. Thus, we focus only on the spin model instead.

Figure 11: The behavior of \(O_{\rm even/odd}\) as a function of the deformation parameter \(\tau\). String order is not present at \(\tau=0\), i.e., when the ground state is a product state. For \(|\tau|<1\) the string order parameter increases but drops quickly at \(|\tau|=1\) where the gap closes. The difference in magnitude between \(O_{\rm even}\) and \(O_{\rm odd}\) is a result of stronger bonds that form between sites \(2i\) and \(2i+1\), such that all spins are paired up. The inset shows the behavior of \(O_{\rm even}\) as a function of \(|i-j|\), demonstrating that the nonlocal correlations quickly stabilize to a \(\tau\)-dependent value. All numerical results are obtained from a chain with \(N=100\) spins, where the values are already converged.

The extension of the Hamiltonian in Eq.
(101) therefore takes the form: \[H_{\rm M}=\sum_{i=0}^{N-2}\mathcal{P}_{i,i+1}^{\varphi_{\rm L}(v)}+\mathcal{P}_{i,i+1}^{U(w)}+\mathcal{P}_{i,i+1}^{D(w)}, \tag{102}\] where we introduced the projectors on the states \(|U(w)\rangle=|+\mathrm{o}\rangle-w|\mathrm{o}+\rangle\) and \(|D(w)\rangle=|\mathrm{o}-\rangle-w|-\mathrm{o}\rangle\), where \(w=-\sqrt{V_{0440}/V_{0220}}=-2\exp(-3\kappa^{2})\). These implement the additional terms in our truncation, while keeping the spin Hamiltonian 2-local. The Hamiltonian in Eq. (102) represents a particular deformation of the Motzkin spin chain introduced in [82]. It has a unique, zero-energy ground state which is equal to an area-weighted sum of Motzkin paths, \(p\in\mathcal{M}_{N}\): \[|\psi_{0}^{\rm M}\rangle=\mathcal{N}^{-1}\sum_{p\in\mathcal{M}_{N}}v^{A_{\square}(p)}w^{A_{\triangle}(p)}|p\rangle. \tag{103}\] Fig. 12 shows that this ground state has good overlap with the Laughlin state over a larger range of circumferences. Notice that in the Motzkin ground state with \(v=w\), the weights of a path \(p\) are \(v^{A(p)}\), i.e. only dependent on the total area and not its shape. Fig. 13 illustrates how this is different from Eq. (103). Similar to the Fredkin chain discussed in Section III.3, the ground state of the Motzkin chain also has an exact MPS representation in terms of matrices: \[A^{\rm o}_{jk}=w^{j-1}\delta_{j,k}\quad A^{+}_{jk}=v^{1/2}w^{j-1}\delta_{j+1,k}\quad A^{-}=(A^{+})^{T} \tag{104}\] The Motzkin chain with equal deformation parameters \(v=w\) has been studied in depth in the literature. Its gap for \(v<1\) has been proven [107], and based on our analogy with the Laughlin state we conjecture that the gap survives for \(w\leq v\). ### Laughlin graviton root state and geometric quench dynamics Here we analyze the graviton root state and dynamics following the geometric quench for the \(\nu\)=1/3 Laughlin state, following a similar approach to the one used for the MR state in Sec. IV.
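The ratios \(v\) and \(w\) quoted above follow directly from the pseudopotential matrix elements in Eq. (12). A short numerical check (our own sketch; we take the exponent parameter in Eq. (12) to be the same \(\kappa\) that appears in the closed forms):

```python
import math

def V(j1, j2, j3, j4, kappa):
    """Matrix element of Eq. (12): (j1-j2)(j4-j3) exp(-kappa^2/4 [...])."""
    return (j1 - j2) * (j4 - j3) * math.exp(
        -0.25 * kappa**2 * ((j1 - j2)**2 + (j3 - j4)**2))

kappa = 0.7  # arbitrary test value of the cylinder parameter

v = -math.sqrt(V(0, 3, 3, 0, kappa) / V(0, 1, 1, 0, kappa))
w = -math.sqrt(V(0, 4, 4, 0, kappa) / V(0, 2, 2, 0, kappa))

# Both reproduce the closed forms quoted in the text:
print(v, -3 * math.exp(-2 * kappa**2))  # v = -3 exp(-2 kappa^2)
print(w, -2 * math.exp(-3 * kappa**2))  # w = -2 exp(-3 kappa^2)
```

Repeating the check at other values of \(\kappa\) confirms that the identifications are exact, not accidental.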
As explained in the main text, we can use the SMA ansatz [26; 27] to identify the GMP state with nonzero momentum \(\mathbf{k}\): \[|\phi_{\mathbf{k}}\rangle=\overline{\rho}_{\mathbf{k}}|\psi_{0}\rangle=e^{\frac{ik_{x}k_{y}}{2}}\sum_{j}e^{ik_{x}\kappa j}c^{\dagger}_{j+k_{y}/\kappa}c_{j}|\psi_{0}\rangle \tag{105}\] In the thin-torus limit, the ground state is the root state \(|R_{0}\rangle=|1001001\dots\rangle\). Thus, the graviton root state with momentum \(\mathbf{k}=2\kappa j\) is given by \[|R_{g}^{(2)}\rangle\propto|1100001001\dots\rangle+|1001100001\dots\rangle+\dots \tag{106}\] In the extreme thin-cylinder limit, these states are degenerate in energy, and the first product state is the Jack root state [108], from which all states that follow can be obtained by applying a sequence of squeezes. However, as the geometric quench preserves momentum, to identify the long-wavelength limit of the graviton state we rely on the spectral function \(I(\omega)\) from Eq. (39). In the present case, \(\hat{O}\) is a 2-body operator [54] with matrix elements \[O_{j_{1}j_{2}j_{3}j_{4}} =\delta_{j_{1}+j_{2},j_{3}+j_{4}}(j_{1}-j_{2})(j_{3}-j_{4})\] \[\times\left(\sum j_{i}^{2}-\frac{1}{4}(\sum j_{i})^{2}\right)\] \[\times\exp\left[-\frac{\kappa^{2}}{2}\left(\sum j_{i}^{2}-\frac{1}{4}(\sum j_{i})^{2}\right)\right]. \tag{100}\]

Figure 12: Squared overlaps of the ground states of models in Eq. (101) (the minimal model \(H_{\rm L}^{\prime}\)) and Eq. (102) (the extended spin model \(H_{\rm M}\)) with the ground state of the untruncated \(V_{1}\) Hamiltonian. The extended spin model captures the properties of the Laughlin state up to cylinder circumferences of \(L_{2}\approx 8\,l_{B}\), where the overlaps are \(\gtrsim 95\%\).

Figure 13: (a): Two types of allowed moves in the Motzkin chain. The upper move corresponds to \(|\mathrm{o}\rangle\rightarrow|+-\rangle\), while the bottom one shows \(|\mathrm{o}+\rangle\rightarrow|+\mathrm{o}\rangle\) (\(|-\mathrm{o}\rangle\rightarrow|\mathrm{o}-\rangle\) is omitted for brevity). Although each step increases the area of the path by the same amount, the corresponding weight in Eq. (103) scales differently depending on the move. (b): One type of allowed configuration in the ground state, which was not present in the model Eq. (101). The sketch shows how the total area is divided into \(A_{\square}\) and \(A_{\triangle}\), determining its weight in the ground state.

The spectral function \(I(\omega)\) for the \(\nu\)=1/3 Laughlin state is plotted in Fig. 14. Similar to the MR case in Fig. 8, we see that the graviton undergoes a nontrivial evolution as the cylinder circumference is varied, with clear avoided crossings in the evolution. In the thin-cylinder limit, the gap of the graviton can be accurately estimated from the dominant matrix element in the Hamiltonian. The graviton state is given by acting on the ground state with the quadrupole operator (100). From the model in Eq. (101), we also know that the ground state is approximated by \[|\psi_{0}\rangle =\prod_{i}\big{(}1-\sqrt{V_{0330}/V_{0110}}\,e^{2i\kappa^{2}g_{12}/g_{11}}\hat{S}_{i}\big{)}|R_{0}\rangle\] \[\approx|R_{0}\rangle-3e^{-2\kappa^{2}\frac{1-g_{12}}{g_{11}}}\sum_{i}\hat{S}_{i}|R_{0}\rangle\] \[\approx|R_{0}\rangle-3\exp\big{[}-2\kappa^{2}\big{(}1-Qe^{i\phi}\big{)}\big{]}\sum_{i}\hat{S}_{i}|R_{0}\rangle, \tag{101}\] where we assumed \(e^{-2\kappa^{2}}\) and the metric anisotropy \(Q,\phi\) to be small. The graviton is then approximated by: \[|\psi_{g}\rangle=\hat{O}|\psi_{0}\rangle\propto e^{-\frac{\kappa^{2}}{2}}\bigg{(}\sum_{i}\hat{S}_{i}|\psi_{0}\rangle+\mathcal{O}\big{(}e^{-2\kappa^{2}}\big{)}\bigg{)}.
\tag{102}\] From here we deduce the graviton root state, \[|R_{g}\rangle=\sum_{i}\hat{S}_{i}|R_{0}\rangle=|01100010\dots\rangle+|10001100\dots\rangle+\dots \tag{103}\] Similar to the MR case, the graviton root state here is also proportional to the first-order squeezes, and it encodes the simplest quadrupole structure of the form \(-++-\) in each unit cell. Repeating the same steps as in Eqs. (45)-(51) of the main text, from the graviton root state we can determine the time-evolved state, showing that it takes the form (at first order) \[|\psi(t)\rangle\approx|R_{0}\rangle-3e^{-2\kappa^{2}}(1+2\kappa^{2}A(1-e^{-iE_{\gamma}t}))|R_{g}\rangle, \tag{104}\] which is identical in form to the linearized bimetric theory in Eq. (51). This agreement is confirmed in Fig. 15.
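The first-order squeezes that build the graviton root state in Eq. (103) are easy to enumerate on occupation strings. A minimal sketch (our own helper functions, acting on a finite piece of the root pattern):

```python
def squeeze(occ, i):
    """Apply the squeezing operator S_i = c†_{i+1} c†_{i+2} c_{i+3} c_i
    to an occupation string; return None if the move is not allowed."""
    s = list(occ)
    if i + 3 >= len(s):
        return None
    if not (s[i] == '1' and s[i + 3] == '1'
            and s[i + 1] == '0' and s[i + 2] == '0'):
        return None
    s[i], s[i + 1], s[i + 2], s[i + 3] = '0', '1', '1', '0'
    return ''.join(s)

root = '10010010'  # a finite piece of the thin-torus root |100100...>
graviton_root = [c for c in (squeeze(root, i) for i in range(len(root)))
                 if c is not None]
print(graviton_root)  # the first two patterns of Eq. (103)
```

Each allowed move reproduces a ket appearing in Eq. (103), with the \(-++-\) quadrupole pattern visible in the squeezed unit cells.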
2309.06330
Decentralized Constraint-Coupled Optimization with Inexact Oracle
We propose an inexact decentralized dual gradient tracking method (iDDGT) for decentralized optimization problems with a globally coupled equality constraint. Unlike existing algorithms that rely on either the exact dual gradient or an inexact one obtained through single-step gradient descent, iDDGT introduces a new approach: utilizing an inexact dual gradient with controllable levels of inexactness. Numerical experiments demonstrate that iDDGT achieves significantly higher computational efficiency compared to state-of-the-art methods. Furthermore, it is proved that iDDGT can achieve linear convergence over directed graphs without imposing any conditions on the constraint matrix. This expands its applicability beyond existing algorithms that require the constraint matrix to have full row rank and undirected graphs for achieving linear convergence.
Jingwang Li, Housheng Su
2023-09-12T15:42:43Z
http://arxiv.org/abs/2309.06330v3
# Decentralized Constraint-Coupled Optimization with Inexact Oracle ###### Abstract We propose an inexact decentralized dual gradient tracking method (iDDGT) for decentralized optimization problems with a globally coupled equality constraint. Unlike existing algorithms that rely on either the exact dual gradient or an inexact one obtained through single-step gradient descent, iDDGT introduces a new approach: utilizing an inexact dual gradient with controllable levels of inexactness. Numerical experiments demonstrate that iDDGT achieves significantly higher computational efficiency compared to state-of-the-art methods. Furthermore, it is proved that iDDGT can achieve linear convergence over directed graphs without imposing any conditions on the constraint matrix. This expands its applicability beyond existing algorithms that require the constraint matrix to have full row rank and undirected graphs for achieving linear convergence. Constraint-coupled optimization, dual gradient tracking, inexact oracle, linear convergence. ## I Introduction Recently, decentralized optimization has gained significant popularity in numerous fields due to its promising applications in areas such as large-scale machine learning, distributed control, decentralized estimation, smart grids, and more [1, 2, 3, 4, 5, 6, 7]. This work focuses on addressing the decentralized optimization problem \[\begin{split}\min_{x_{i}\in\mathbb{R}^{d_{i}}}&\sum_{i=1}^{n}f_{i}(x_{i})\\ \text{s.t.}&\sum_{i=1}^{n}A_{i}x_{i}=b\end{split}\] (P1) over a directed network consisting of \(n\) agents, where \(f_{i}:\mathbb{R}^{d_{i}}\rightarrow\mathbb{R}\) and \(A_{i}\in\mathbb{R}^{p\times d_{i}}\) are completely private for agent \(i\) and cannot be shared with its neighbors, while \(b\in\mathbb{R}^{p}\) is public for all agents. Without loss of generality, assume that there exists at least one finite solution of (P1).
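For concreteness, here is a minimal instance of (P1) (our own toy example, not from the paper): quadratic local objectives \(f_{i}(x_{i})=\frac{1}{2}(x_{i}-c_{i})^{2}\) with scalar \(A_{i}=1\), i.e. a single coupling constraint \(\sum_{i}x_{i}=b\), which the KKT conditions solve in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, b = 5, 3.0
c = rng.normal(size=n)      # f_i(x_i) = 0.5 * (x_i - c_i)^2

# Stationarity: x_i - c_i + lam = 0 for a common multiplier lam;
# feasibility sum(x) = b then fixes lam = (sum(c) - b) / n.
lam = (c.sum() - b) / n
x = c - lam

print(x.sum())              # equals b: the coupled constraint holds
```

Even in this trivial case, evaluating the multiplier requires the global quantity \(\sum_{i}c_{i}\), which is exactly the kind of coupling a decentralized algorithm must resolve through local communication.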
The constraint \(\sum_{i=1}^{n}A_{i}x_{i}=b\) couples the decision variables of all agents, making (P1) a decentralized constraint-coupled optimization problem [8, 9]. Notably, (P1) covers many practical optimization problems, such as distributed resource allocation [8, 10] and decentralized vertical federated learning [11, 12]. One can observe that the dual of (P1) has the same form as the classical decentralized unconstrained optimization (DUO) problem, leading to the natural idea of applying existing DUO algorithms to its dual. This approach has been adopted in numerous previous works [8, 9, 10, 11]. However, a key challenge lies in dealing with the dual gradient or dual subgradient. A straightforward approach is to use the exact dual gradient and follow the same steps as in DUO algorithms. This involves applying a suitable DUO algorithm to the dual of (P1), resulting in a decentralized algorithm for (P1) that is essentially a special case of the original DUO algorithm. One only needs to use a gradient-based DUO algorithm if the dual function is differentiable, and a subgradient-based one if the dual function is non-differentiable. Related works include [8, 10, 11, 14, 18, 20]. Despite its convenience, the aforementioned approach has a common drawback: the use of the exact dual gradient necessitates solving a subproblem exactly at each iteration. This can be computationally expensive and even infeasible in practice, particularly when dealing with nonlinear objective functions [21]. A simple and widely adopted solution to address the above limitation is to use an inexact dual gradient instead of the exact one, which has been extensively explored in existing works [15, 16, 17, 12, 13, 11, 9, 13]. In these works, a common approach to obtaining the inexact dual gradient is to employ single-step (proximal) gradient descent, which leads to an approximate solution of the subproblem.
This approach can be viewed as minimizing the first-order approximation of the objective function. By introducing this approximation, the resulting algorithms do not require solving the subproblem exactly at each iteration, making them computationally feasible and easy to implement. However, there are some concerns regarding the aforementioned approximate method: 1. The suboptimality of the approximate solution obtained through single-step gradient descent is uncontrollable1. This implies that we are unable to control the gap between the inexact dual gradient and the exact one, which is crucial for the algorithm's performance. As a result, the ability to control and optimize the overall performance of the algorithm may be significantly limited. Footnote 1: Certainly, we can control the suboptimality within a certain range by adjusting the step size of single-step gradient descent. However, when we say the suboptimality is “uncontrollable,” we mean that we cannot make the suboptimality arbitrarily small, regardless of the step size chosen. 2. There are multiple methods available for solving the subproblem, such as multi-step gradient descent, Nesterov's accelerated gradient descent (AGD) [22], Newton's method, and others. Relying solely on single-step gradient descent is overly inflexible. Intuitively, incorporating AGD or even second-order methods could potentially enhance the overall performance of the algorithm. 3. The computation cost and communication cost vary widely across different decentralized optimization scenarios. In some scenarios, computation is cheap while communication is costly, whereas the opposite is true for others. Intuitively, when computation is cheap but communication is expensive, utilizing a more accurate dual gradient (which requires more computation steps to solve the subproblem) may lead to a decrease in the total convergence time.
However, this strategy is impractical for algorithms based on single-step gradient descent because we cannot arbitrarily control the accuracy of the inexact dual gradient. Therefore, we aim to develop a new scheme that can address the aforementioned potential concerns. Besides, we are also interested in addressing another open problem: Can we design an algorithm that can linearly solve (P1) under a less restrictive condition on \(A_{i}\)? Currently, the weakest condition obtained in [9, 12, 16] is that \(A=[A_{1},\cdots,A_{n}]\) has full row rank. Although this condition is much weaker compared to its predecessors, such as requiring \(A_{i}\) to be the identity matrix [10] or \(A_{i}\) to have full row rank [11], it is still too strong to be satisfied by some practical optimization problems. For instance, in the vertical federated learning setting for regression problems [12], where \(A\) represents the feature matrix with each row corresponding to a sample and \(A_{i}\) represents the local feature matrix of agent \(i\), the number of samples is typically much larger than the number of features. As a result, \(A\) fails to satisfy the full row rank condition. Hence, there is a need for algorithms that can achieve linear convergence under a weaker condition on \(A_{i}\). The major contributions of this work are summarized as follows. 1. We propose iDDGT, a novel inexact decentralized dual gradient tracking method. Unlike existing algorithms that rely on either the exact dual gradient or an inexact one obtained through single-step gradient descent, iDDGT introduces a new approach: utilizing an inexact dual gradient with controllable levels of inexactness. Specifically, in iDDGT, the subproblem is approximately solved with a predefined accuracy during each iteration to regulate the level of inexactness in the dual gradient. It is proved that iDDGT can achieve linear convergence if the error in solving the subproblem decreases linearly. 2.
Thanks to the new approach for handling the dual gradient, iDDGT offers two significant advantages. Firstly, the inexactness of the dual gradient in each iteration can be controlled arbitrarily. This allows for adjusting the algorithm's overall performance by modifying the level of inexactness in different iterations. Secondly, the choice of the subproblem solver is flexible, enabling the utilization of accelerated or second-order methods to enhance the algorithm's overall performance. In numerical experiments, we compare the performances of iDDGT and NPGA, which is considered state-of-the-art [12]. The results demonstrate that iDDGT achieves a significantly faster convergence speed in terms of the number of gradient steps compared to NPGA. Therefore, when computation is expensive but communication is cheap, iDDGT would be a preferable choice. 3. A consequence of the above two advantages of iDDGT is that we can obtain multiple versions of iDDGT by choosing different subproblem solvers (such as single-step gradient descent, multi-step gradient descent, and AGD) and different strategies to control the level of inexactness in the dual gradient during different iterations. We compare the performances of different versions of iDDGT and observe some important facts. Firstly, it is a counterintuitive fact that using the exact gradient results in significantly lower computational and communication efficiencies, as measured by the number of gradient steps and communication rounds required to achieve a certain level of accuracy, compared to using an inexact dual gradient. Secondly, accelerating the reduction of subproblem solving errors within a certain range can enhance communication efficiency. However, it may also lead to a potential decrease in computational efficiency. Thirdly, employing single-step gradient descent as the subproblem solver can yield favorable communication efficiency.
However, the computational efficiency is significantly lower compared to the strategy of linearly reducing the error in solving the subproblem. 4. As mentioned earlier, some existing algorithms such as IDEA [12], DCPA [16], and NPGA can linearly solve (P1) under the condition that \(A\) has full row rank, which was the weakest condition prior to the introduction of iDDGT. However, iDDGT achieves linear convergence without imposing any conditions on \(A_{i}\) or \(A\), significantly expanding its applicability. Furthermore, the linear convergence of IDEA, DCPA, and NPGA (under the condition that \(A\) has full row rank) is dependent on undirected graphs, whereas iDDGT can work for directed graphs. ## II Preliminaries _Notations:_ \(\mathbf{1}_{n}\) and \(0_{n}\) represent the all-one vector and the all-zero vector, respectively, and \(\mathbf{I}_{n}\) denotes the \(n\times n\) identity matrix. When the dimensions of these vectors and the identity matrix can be inferred from the context, we do not indicate them explicitly. For \(B\in\mathbb{R}^{m\times n}\), \(\left[B\right]_{ij}\) denotes the element of \(B\) in the \(i\)-th row and the \(j\)-th column, and \(\underline{\sigma}(B)\) and \(\overline{\sigma}(B)\) denote the smallest non-zero and largest singular values of \(B\), respectively. \(\mathbf{Col}(B)\) represents the column space of \(B\). \(\left\|\cdot\right\|\) denotes the Euclidean norm, and \(\text{diag}(\cdot)\) denotes the (block) diagonal matrix. For a vector \(v\in\mathbb{R}^{n}\), we define \(\mathbf{1}v=\mathbf{1}\otimes v\), where the dimension of \(\mathbf{1}\) can be easily inferred from the context of \(\mathbf{1}v\). In the following, we provide several useful lemmas that will be utilized in the subsequent convergence analysis.
_In particular, if a lemma or theorem is not referenced and is not immediately followed by a proof, we assume that its proof is included in the appendix._ **Lemma 1**.: _[_22_, Theorem 2.1.10]_ _Let \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be continuously differentiable and \(\mu\)-strongly convex over \(\mathbb{R}^{n}\), then we have_ \[\mu\left\|x-y\right\|\leq\left\|\nabla f(x)-\nabla f(y)\right\|,\ \forall x,y\in \mathbb{R}^{n}.\] **Lemma 2**.: _For any \(B\in\mathbb{R}^{nm\times q}\) (\(B\) can be a vector with \(q=1\)), let it be partitioned as \(B=[B_{1}^{\top},\cdots,B_{n}^{\top}]^{\top}\), we have_ \[\left\|B-\mathbf{1}\frac{1}{n}\sum_{i=1}^{n}B_{i}\right\|\leq\left\|B\right\|.\] ## III Algorithm Design (P1) can be reformulated as \[\begin{split}&\min_{\mathbf{x}\in\mathbb{R}^{d}}\,f(\mathbf{x}) \\ &\text{s.t. }A\mathbf{x}=b,\end{split}\] (P2) where \(\mathbf{x}=[x_{1}^{\top},\cdots,x_{n}^{\top}]^{\top}\in\mathbb{R}^{d}\), \(d=\sum_{i=1}^{n}d_{i}\), \(f(\mathbf{x})=\sum_{i=1}^{n}f_{i}(x_{i})\), and \(A=[A_{1},\cdots,A_{n}]\in\mathbb{R}^{p\times d}\). In this work, we assume the following assumption holds. **Assumption 1**.: \(f_{i}\) _is \(\mu_{i}\)-strongly convex and \(l_{i}\)-smooth over \(\mathbb{R}^{d_{i}}\), where \(\mu_{i}\) and \(l_{i}\) are both positive constants, \(i=1,\cdots,n\)._ Since the strong duality holds for (P2), we can alternatively solve its dual. 
The dual function of (P2) can be decomposed as \[\phi(\lambda) =\inf_{\mathbf{x}\in\mathbb{R}^{d}}f(\mathbf{x})+\lambda^{\top}(A\mathbf{x}-b)\] \[=\sum_{i=1}^{n}\inf_{x_{i}\in\mathbb{R}^{d_{i}}}f_{i}(x_{i})+\lambda^{\top}\left(A_{i}x_{i}-\frac{1}{n}b\right)\] \[=\sum_{i=1}^{n}\phi_{i}(\lambda),\] then we can reformulate the dual of (P2) as \[\max_{\lambda\in\mathbb{R}^{p}}\,\phi(\lambda).\] (P3) An important fact is that if \(f\) is strictly convex, then \(\phi\) is differentiable and [23] \[\nabla\phi(\lambda)=A\mathbf{x}^{\star}(\lambda)-b, \tag{1}\] where \(\mathbf{x}^{\star}(\lambda)=\arg\min_{\mathbf{x}\in\mathbb{R}^{d}}\left\{f(\mathbf{x})+\lambda^{\top}(A\mathbf{x}-b)\right\}\). Given Assumption 1, obviously \(f\) is \(\mu\)-strongly convex and \(l\)-smooth, where \(\mu=\min_{i=1,\cdots,n}\mu_{i}\) and \(l=\max_{i=1,\cdots,n}l_{i}\). Then we have the following lemma. **Lemma 3**.: _Suppose Assumption 1 holds, then \(\phi\) is \(\frac{\overline{\sigma}^{2}(A)}{\mu}\)-smooth over \(\mathbb{R}^{p}\) and \(\frac{\underline{\sigma}^{2}(A)}{l}\)-strongly concave over \(\mathbf{Col}(A)\)._ **Remark 1**.: _An evident fact about Lemma 3 is that the solution of (P3), denoted as \(\lambda^{\star}\), is not unique unless \(A\) has full row rank, which ensures that \(\phi\) is strongly concave over \(\mathbb{R}^{p}\).
However, Lemma 3 also indicates that \(\phi\) is strongly concave over \(\mathbf{Col}(A)\), implying that the projection of \(\lambda^{\star}\) onto \(\mathbf{Col}(A)\), denoted as \(\lambda_{c}^{\star}\), is unique._ Applying the classical gradient method to (P3) gives the dual ascent method (DA) \[\lambda^{k+1}=\lambda^{k}+\alpha\nabla\phi(\lambda^{k}),\] which can be unfolded as \[\begin{split}\mathbf{x}^{k+1}&=\arg\min_{\mathbf{x}\in\mathbb{R}^{d}}\left\{f(\mathbf{x})+\lambda^{k^{\top}}(A\mathbf{x}-b)\right\},\\ \lambda^{k+1}&=\lambda^{k}+\alpha\left(A\mathbf{x}^{k+1}-b\right).\end{split} \tag{2}\] The following lemma illustrates the contraction property of DA, which also implies its linear convergence. **Lemma 4**.: _Suppose Assumption 1 holds, \(\lambda^{0}=0\), and the step-size satisfies \(0<\alpha<\frac{2\mu}{\overline{\sigma}^{2}(A)}\), then we have_ \[\left\|\lambda^{k+1}-\lambda_{c}^{\star}\right\|\leq\eta\left\|\lambda^{k}-\lambda_{c}^{\star}\right\|,\ \forall k\geq 0, \tag{3}\] _where \(\eta=\max\left\{\left|1-\frac{\alpha\overline{\sigma}^{2}(A)}{\mu}\right|,\left|1-\frac{\alpha\underline{\sigma}^{2}(A)}{l}\right|\right\}\in(0,1)\)._ While DA can achieve linear convergence in solving (P3), it cannot be implemented in a decentralized manner due to the requirement of global information for the dual gradient \(\sum_{i=1}^{n}A_{i}x_{i}-b\). Additionally, solving a subproblem exactly to obtain the dual gradient \(\nabla\phi(\lambda^{k})\) at each iteration of DA is computationally expensive and often impractical. To address these limitations, we propose iDDGT, which is a decentralized version of DA that eliminates the need for solving the subproblem exactly at each iteration.
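To make the dual ascent iteration (2) concrete, here is a small self-contained sketch on toy data (our own example): with \(f(\mathbf{x})=\frac{1}{2}\|\mathbf{x}-c\|^{2}\) (so \(\mu=l=1\)) the subproblem has the closed form \(\mathbf{x}^{\star}(\lambda)=c-A^{\top}\lambda\), and the dual gradient formula (1) can be verified by finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
p, d = 3, 6
A = rng.normal(size=(p, d))
c = rng.normal(size=d)
b = rng.normal(size=p)

def x_star(lam):                 # closed-form subproblem solution
    return c - A.T @ lam

def dual_grad(lam):              # eq. (1): grad phi(lam) = A x*(lam) - b
    return A @ x_star(lam) - b

def phi(lam):                    # dual function value at lam
    x = x_star(lam)
    return 0.5 * np.sum((x - c) ** 2) + lam @ (A @ x - b)

# finite-difference check of the dual gradient formula (1)
lam0, eps = rng.normal(size=p), 1e-6
g_fd = np.array([(phi(lam0 + eps * e) - phi(lam0 - eps * e)) / (2 * eps)
                 for e in np.eye(p)])
assert np.allclose(g_fd, dual_grad(lam0), atol=1e-5)

# dual ascent (2); step size below 2*mu / sigma_max(A)^2 with mu = 1
alpha = 1.0 / np.linalg.norm(A, 2) ** 2
lam = np.zeros(p)
for _ in range(5000):
    lam = lam + alpha * dual_grad(lam)

print(np.linalg.norm(A @ x_star(lam) - b))  # primal residual, close to 0
```

For this randomly drawn \(A\), which has full row rank almost surely, the dual is strongly concave and the residual decays linearly, as Lemma 4 predicts.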
Let \(\mathbf{\lambda}=[\lambda_{1}^{\top},\cdots,\lambda_{n}^{\top}]^{\top}\), \(\mathbf{z}=[z_{1}^{\top},\cdots,z_{n}^{\top}]^{\top}\), \(\mathbf{A}=\text{diag}(A_{1},\cdots,A_{n})\), \(\mathbf{b}=\mathbf{1}_{n}\otimes\frac{1}{n}b\), and \(\mathbf{W}=W\otimes\mathbf{I}_{p}\), then we can rewrite iDDGT in a compact form as follows. \[\mathbf{x}^{k+1} \approx\arg\min_{\mathbf{x}\in\mathbb{R}^{d}}\left\{f(\mathbf{x})+\mathbf{\lambda}^{k^{\top}}\left(\mathbf{A}\mathbf{x}-\mathbf{b}\right)\right\}, \tag{5a}\] \[\mathbf{z}^{k+1} =\mathbf{W}\mathbf{z}^{k}+\mathbf{A}(\mathbf{x}^{k+1}-\mathbf{x}^{k}), \tag{5b}\] \[\mathbf{\lambda}^{k+1} =\mathbf{W}\left(\mathbf{\lambda}^{k}+\beta\mathbf{z}^{k+1}\right). \tag{5c}\] **Remark 2**.: _The decentralized nature of iDDGT originates from the classical gradient tracking technique [24, 25, 26], which is utilized to track the global dual gradient in a decentralized manner. The key features of iDDGT lie in the flexibility to control the level of inexactness in the dual gradient and the freedom to choose the subproblem solver. Thanks to these features, iDDGT demonstrates significantly higher computational efficiency compared to state-of-the-art methods in numerical experiments. Moreover, it has been proven that iDDGT can achieve linear convergence over directed graphs without imposing any conditions on \(A_{i}\) or \(A\). In contrast, existing algorithms require \(A\) to have full row rank and the graphs to be undirected in order to achieve similar convergence guarantees._ **Remark 3**.: _A related work is [10], which considers a special case of (P1) where \(A_{i}=\mathbf{I}\), and proposes a similar algorithm called distributed dual gradient tracking (DDGT). The main distinctions between iDDGT and DDGT lie in the range of problem settings they can handle and the approach they employ for utilizing the dual gradient.
DDGT is only capable of solving (P1) when \(A_{i}=\mathbf{I}\), while iDDGT can handle general \(A_{i}\), giving it a much broader range of applications. It is worth noting that this generalization is non-trivial, since the convergence analysis of DDGT heavily relies on the property \(A_{i}=\mathbf{I}\), making it difficult to extend to general \(A_{i}\). Furthermore, DDGT utilizes the exact dual gradient and solves the subproblem exactly at each iteration, which could be computationally expensive and impractical. Moreover, as mentioned earlier, we have observed that using the exact gradient leads to significantly lower computational and communication efficiencies compared to using an inexact dual gradient. Consequently, iDDGT is much more efficient than DDGT. The advantage of DDGT lies in its ability to work over more general directed graphs compared to iDDGT._ ## IV Convergence Analysis In this section, we analyze the linear convergence of iDDGT. **Assumption 2**.: _The mixing matrix \(W\) associated with the network graph is assumed to be primitive, doubly stochastic, and with positive diagonal entries._ **Remark 4**.: _Assumption 2 can be satisfied by strongly-connected directed graphs that admit doubly-stochastic weights (see [27] for a more detailed discussion), which cover connected undirected graphs as special cases. Consequently, our network condition is more inclusive compared to [9, 12, 16], where only undirected graphs are considered._ Given Assumption 2, \(W\) possesses the following important property [25, 27]2 Footnote 2: Though the above property is derived under the assumption that \(\mathcal{G}\) is undirected and connected in [25], it can be trivially proven for our case using the Perron-Frobenius theory. \[\sigma=\left\|W-\frac{1}{n}\mathbf{1}\mathbf{1}^{\top}\right\|\in(0,1), \tag{6}\] then the following lemma immediately holds.
**Lemma 5** ([25]).: _Suppose Assumption 2 holds, then we have_ \[\left\|Wx-\frac{1}{n}\mathbf{1}\mathbf{1}^{\top}x\right\|\leq\sigma\left\|x-\frac{1}{n}\mathbf{1}\mathbf{1}^{\top}x\right\|,\ \forall x\in\mathbb{R}^{n}.\] Let \[\mathbf{x}^{*}(\boldsymbol{\lambda})=\arg\min_{\mathbf{x}\in\mathbb{R}^{d}}\left\{f(\mathbf{x})+\boldsymbol{\lambda}^{\top}\left(\mathbf{A}\mathbf{x}-\mathbf{b}\right)\right\}, \tag{7}\] and define \[\bar{\mathbf{z}}^{k}=\frac{1}{n}\sum_{i=1}^{n}z_{i}^{k},\ \bar{\mathbf{\lambda}}^{k}=\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}^{k},\ \bar{\mathbf{x}}^{k}=\mathbf{x}^{*}\left(\mathbf{1}\bar{\mathbf{\lambda}}^{k-1}\right), \tag{8}\] then we have the following lemma. **Lemma 6**.: _Given \(z_{i}^{0}=A_{i}x_{i}^{0}-\frac{1}{n}b\), then we have_ \[\bar{\mathbf{z}}^{k+1} =\bar{\mathbf{z}}^{k}+\frac{1}{n}A\left(\mathbf{x}^{k+1}-\mathbf{x}^{k}\right)=\frac{1}{n}(A\mathbf{x}^{k+1}-b),\] \[\bar{\mathbf{\lambda}}^{k+1} =\bar{\mathbf{\lambda}}^{k}+\beta\bar{\mathbf{z}}^{k+1}=\bar{\mathbf{\lambda}}^{k}+\frac{\beta}{n}(A\mathbf{x}^{k+1}-b).\]

**Algorithm 2** Nesterov's Accelerated Gradient Descent

The following lemma establishes a linear matrix inequality regarding the iterations of iDDGT, which is crucial for proving its linear convergence.
**Lemma 7**.: _Suppose Assumptions 1 and 2 hold, \(\boldsymbol{\lambda}^{0}=0\), the step-size satisfies \(0<\beta<\frac{2n\mu}{\overline{\sigma}^{2}(A)}\), and_ \[\left\|\mathbf{x}^{k+1}-\mathbf{x}^{*}(\boldsymbol{\lambda}^{k})\right\|\leq\delta^{k+1},\ \forall k\geq 0, \tag{9}\] _then we have_ \[\zeta^{k}\leq M^{k}\zeta^{0}+\sum_{i=0}^{k-1}M^{k-1-i}H\xi^{i},\ \forall k\geq 1,\] _where \(\zeta^{k}=\left[\begin{array}{c}\left\|\mathbf{z}^{k}-\mathbf{1}\bar{\mathbf{z}}^{k}\right\|\\ \left\|\boldsymbol{\lambda}^{k}-\mathbf{1}\bar{\mathbf{\lambda}}^{k}\right\|\\ \left\|\boldsymbol{\lambda}^{k}-\boldsymbol{\lambda}^{k-1}\right\|\\ \sqrt{n}\left\|\bar{\mathbf{\lambda}}^{k}-\lambda_{c}^{\star}\right\|\end{array}\right]\), \(\xi^{k}=\left[\delta^{k+1},\delta^{k}\right]^{\top}\),_ \[M=\left[\begin{array}{cccc}\sigma&0&\frac{\overline{\sigma}^{2}(\mathbf{A})}{\sigma}&0\\ \beta\sigma^{2}&\sigma&\frac{\beta\sigma^{2}}{\sigma^{2}(\mathbf{A})}&0\\ \beta\sigma&1+\sigma+\frac{\beta\overline{\sigma}(A)\overline{\sigma}(\mathbf{A})}{\sqrt{n}}&\frac{\beta\overline{\sigma}^{2}(\mathbf{A})}{\mu}&\frac{\beta\overline{\sigma}^{2}(\mathbf{A})}{n\mu}\\ 0&\frac{\beta\overline{\sigma}(A)\overline{\sigma}(\mathbf{A})}{\sqrt{n}\mu}&0&\eta\end{array}\right]\] _and_ \[H=\left[\begin{array}{cc}\overline{\sigma}(\mathbf{A})&\overline{\sigma}(\mathbf{A})\\ \beta\sigma\overline{\sigma}(\mathbf{A})&\beta\sigma\overline{\sigma}(\mathbf{A})\\ \beta\left(\overline{\sigma}(\mathbf{A})+\frac{\overline{\sigma}(A)}{\sqrt{n}}\right)&\beta\overline{\sigma}(\mathbf{A})\\ \frac{\beta\overline{\sigma}(A)}{\sqrt{n}}&0\end{array}\right].\] **Remark 5**.: _There are various strategies to solve the subproblem (5a) and obtain an inexact solution that satisfies the suboptimality condition (9). To ensure (9), we can simply require agent \(i\) to satisfy_ \[\left\|x_{i}^{k+1}-x_{i}^{*}(\lambda_{i}^{k})\right\|\leq\frac{\delta^{k+1}}{\sqrt{n}}.
\tag{10}\] _Typically, we employ unconstrained optimization algorithms such as gradient descent, AGD, or second-order methods to iteratively solve (10). Hence, a straightforward approach is to set a stopping condition that is sufficient for (10) and terminate the iteration once the condition is met. Let \(F_{i}^{k}(x_{i})=f_{i}(x_{i})+\lambda_{i}^{k^{\top}}\left(A_{i}x_{i}-\frac{1}{n}b\right)\); obviously \(F_{i}^{k}(x_{i})\) is \(\mu_{i}\)-strongly convex and \(l_{i}\)-smooth for \(k\geq 0\). Notice that \(x_{i}^{*}(\lambda_{i}^{k})\) represents the solution of \(\min_{x_{i}\in\mathbb{R}^{d_{i}}}F_{i}^{k}(x_{i})\), implying that \(\nabla F_{i}^{k}(x_{i}^{*}(\lambda_{i}^{k}))=0\). Consequently, we have_ \[\begin{split}\left\|x_{i}^{k+1}-x_{i}^{*}(\lambda_{i}^{k})\right\|&\leq\frac{1}{\mu}\left\|\nabla F_{i}^{k}(x_{i}^{k+1})\right\|\\ &=\frac{1}{\mu}\left\|\nabla f_{i}(x_{i}^{k+1})+A_{i}^{\top}\lambda_{i}^{k}\right\|.\end{split} \tag{11}\] _Thus, the stopping condition for agent \(i\) to ensure (10) can be expressed as_ \[\left\|\nabla f_{i}(x_{i}^{k+1})+A_{i}^{\top}\lambda_{i}^{k}\right\|\leq\frac{\mu\delta^{k+1}}{\sqrt{n}}. \tag{12}\] _Another approach is to predefine the number of inner iterations, which can be estimated based on the theoretical convergence rate of the selected algorithm. Lemma 8 provides a lower bound on the number of inner iterations for AGD._ **Lemma 8**.: _Suppose Assumption 1 holds, and AGD (i.e., Algorithm 2) is chosen as the solver for the inner problem (4), then agent \(i\) requires at least \(\sqrt{\frac{l_{i}}{\mu_{i}}}\ln\left(\frac{n(l_{i}+\mu_{i})\left\|\nabla f_{i}(x_{i}^{k+1,0})\right\|^{2}}{(\delta^{k+1})^{2}\mu^{3}}\right)\) inner iterations to ensure (9) at the \(k\)-th iteration, where \(x_{i}^{k+1,0}\) is the chosen initial value of AGD, \(i=1,\cdots,n\)._ In the following theorem, we show that iDDGT can achieve linear convergence if \(\delta^{k}\) decreases linearly.
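Since the body of Algorithm 2 is not reproduced here, the following is a generic Nesterov-type scheme for a \(\mu\)-strongly convex, \(l\)-smooth subproblem, terminated by a gradient-norm test in the spirit of the stopping condition (12) (our own sketch on a toy quadratic, not the paper's exact pseudocode):

```python
import numpy as np

def agd(grad, x0, mu, l, tol, max_iter=10_000):
    """Nesterov's accelerated gradient descent for a mu-strongly convex,
    l-smooth objective, stopped once ||grad(x)|| <= tol (cf. (12))."""
    momentum = (np.sqrt(l / mu) - 1) / (np.sqrt(l / mu) + 1)
    x = y = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = y - grad(y) / l            # gradient step at the extrapolated point
        y = x_next + momentum * (x_next - x)
        x = x_next
        if np.linalg.norm(grad(x)) <= tol:  # gradient-norm stopping rule
            break
    return x

# toy subproblem: F(x) = 0.5 x^T H x + q^T x with mu = 1, l = 10
rng = np.random.default_rng(2)
H = np.diag(rng.uniform(1.0, 10.0, size=4))
q = rng.normal(size=4)
sol = agd(lambda z: H @ z + q, np.zeros(4), mu=1.0, l=10.0, tol=1e-8)
print(np.linalg.norm(H @ sol + q))  # below the tolerance 1e-8
```

By the strong-convexity bound (11), a gradient norm below \(\mu\delta^{k+1}/\sqrt{n}\) certifies the distance bound (10) without knowing the exact minimizer.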
**Theorem 1**.: _Suppose Assumptions 1 and 2 hold, \(\boldsymbol{\lambda}^{0}=0\), the step-size satisfies_ \[0<\beta<\min\left\{\frac{\mu}{\overline{\sigma}^{2}(\mathbf{A})},\frac{(1-\sigma)^{2}\underline{\sigma}^{2}(A)\mu^{2}}{16\overline{\sigma}^{4}(\mathbf{A})nl}\right\}, \tag{13}\] _and \(\delta^{k}\) defined in (9) satisfies_ \[\delta^{k+1}=\gamma\delta^{k},\ \forall k\geq 0, \tag{14}\] _with \(\gamma\in(0,1)\); then we have_ \[\left\|\mathbf{x}^{k}-\mathbf{x}^{*}\right\|=\mathcal{O}\left(\theta^{k}\right),\] _where_ \[\theta=\max\left(1-\frac{\beta\underline{\sigma}^{2}(A)}{2nl},\ \sigma+4\sqrt{\beta\frac{\overline{\sigma}^{4}(\mathbf{A})nl}{\underline{\sigma}^{2}(A)\mu^{2}}},\ \gamma\right)\in(0,1).\] Proof.: Notice that \(M\) is a nonnegative and irreducible matrix; then we have \[\left[M^{k}\right]_{ij}=\mathcal{O}\left(\rho(M)^{k}\right),\ i,j=1,\cdots,4.\] Lemma 7 states that \[\zeta^{k}\leq M^{k}\zeta^{0}+\sum_{i=0}^{k-1}M^{k-1-i}H\xi^{i},\] and combining it with (14) gives \[\zeta^{k}\leq M^{k}\zeta^{0}+\frac{1}{\gamma}\sum_{i=0}^{k-1}\gamma^{i+1}M^{k-1-i}H\xi^{0}.\] It follows that \[\left\|\boldsymbol{\lambda}^{k}-\mathbf{1}\bar{\boldsymbol{\lambda}}^{k}\right\| =\mathcal{O}\left(\max\left(\rho(M),\gamma\right)^{k}\right),\] \[\left\|\bar{\boldsymbol{\lambda}}^{k}-\boldsymbol{\lambda}^{*}\right\| =\mathcal{O}\left(\max\left(\rho(M),\gamma\right)^{k}\right),\] then we have \[\begin{split}&\left\|\mathbf{x}^{k+1}-\mathbf{x}^{*}\right\|\\ \leq&\left\|\mathbf{x}^{k+1}-\mathbf{x}^{*}(\boldsymbol{\lambda}^{k})\right\|+\left\|\mathbf{x}^{*}(\boldsymbol{\lambda}^{k})-\mathbf{x}^{*}\right\|\\ \leq&\gamma^{k+1}\delta^{0}+\frac{\overline{\sigma}(\mathbf{A})}{\mu}\left(\left\|\boldsymbol{\lambda}^{k}-\mathbf{1}\bar{\boldsymbol{\lambda}}^{k}\right\|+\sqrt{n}\left\|\bar{\boldsymbol{\lambda}}^{k}-\lambda^{*}\right\|\right)\\ =&\mathcal{O}\left(\max\left(\rho(M),\gamma\right)^{k+1}\right).\end{split} \tag{15}\] Now we need to find an upper bound for \(\rho(M)\).
Let \(a_{1}=\frac{\overline{\sigma}(\mathbf{A})}{\sqrt{\mu}}\) and \(a_{2}=\frac{\overline{\sigma}(\mathbf{A})}{\sqrt{n\mu}}\); then the characteristic polynomial of \(M\) is given by \[p(x)=(x-\eta)\left[xp_{0}(x)-(a_{1}a_{2}\beta)^{3}\sigma\right]-(a_{1}a_{2}\beta)^{3}\sigma\eta,\] where \[p_{0}(x)=x^{2}-\left(a_{1}^{2}\beta+2\sigma\right)x-\left(a_{1}^{3}a_{2}\beta^{2}\sigma+a_{1}^{2}\beta\sigma^{2}-\sigma^{2}\right).\] Note that (13) guarantees that \(\beta<\frac{\mu}{\overline{\sigma}^{2}(\mathbf{A})}\), and it holds that \[\overline{\sigma}(A)=\left\|A\right\|=\left\|\left(\mathbf{1}_{n}\otimes\mathbf{I}_{p}\right)\mathbf{A}\right\|\leq\sqrt{n}\overline{\sigma}(\mathbf{A}).\] Therefore, we have \[a_{2}^{2}\beta\leq a_{1}a_{2}\beta\leq a_{1}^{2}\beta\leq 1, \tag{16}\]

Fig. 1: The result of Experiment I.

which implies that the two roots of \(p_{0}\) satisfy \[\frac{1}{2}\left(a_{1}^{2}\beta+2\sigma+\sqrt{(a_{1}^{2}\beta)^{2}+4(a_{1}^{2}\beta\sigma+a_{1}^{3}a_{2}\beta^{2}\sigma+a_{1}^{2}\beta\sigma^{2})}\right)<\sigma+3\sqrt{a_{1}^{2}\beta}.\] Consequently, it follows that \[p_{0}(x)\geq\left(x-\sigma-3\sqrt{a_{1}^{2}\beta}\right)^{2},\ \forall x\geq\sigma+3\sqrt{a_{1}^{2}\beta}. \tag{17}\] Let \[\hat{x}=\max\left\{1-\frac{\beta\underline{\sigma}^{2}(A)}{2nl},\sigma+4\sqrt{a_{1}^{2}\beta}\sqrt{\frac{\overline{\sigma}^{2}(\mathbf{A})nl}{\underline{\sigma}^{2}(A)\mu}}\right\}; \tag{18}\] note that \(\frac{\overline{\sigma}^{2}(\mathbf{A})nl}{\underline{\sigma}^{2}(A)\mu}\geq 1\), so we have \[\hat{x}\geq\sigma+4\sqrt{a_{1}^{2}\beta}\sqrt{\frac{\overline{\sigma}^{2}(\mathbf{A})nl}{\underline{\sigma}^{2}(A)\mu}}\geq\sigma+3\sqrt{a_{1}^{2}\beta}\geq 3a_{1}^{2}\beta, \tag{19}\] and \[p_{0}(\hat{x})\geq a_{1}^{2}\beta\frac{\overline{\sigma}^{2}(\mathbf{A})nl}{\underline{\sigma}^{2}(A)\mu}. \tag{20}\] Also note that \(\beta<\frac{n\mu}{\overline{\sigma}^{2}(A)}\), so we have \(\eta=1-\frac{\beta\underline{\sigma}^{2}(A)}{nl}\in(0,1)\).
It follows that \[\begin{split}p(\hat{x})\geq&\frac{\beta\underline{\sigma}^{2}(A)}{2nl}\left[\hat{x}p_{0}(\hat{x})-(a_{1}^{2}\beta)^{3}\right]-(a_{1}^{2}\beta)^{3}\\ \geq&\frac{\beta\underline{\sigma}^{2}(A)}{2nl}\left[\frac{3(a_{1}^{2}\beta)^{2}\overline{\sigma}^{2}(\mathbf{A})nl}{\underline{\sigma}^{2}(A)\mu}-(a_{1}^{2}\beta)^{3}\right]-(a_{1}^{2}\beta)^{3}\\ \geq&(a_{1}^{2}\beta)^{3}-(a_{1}^{2}\beta)^{3}\\ \geq&0,\end{split} \tag{21}\] which implies that \(p(x)\) is monotonically increasing on \([\hat{x},+\infty)\); hence all real roots of \(p(x)\) lie in \((-\infty,\hat{x})\). According to the Perron-Frobenius theorem, \(\rho(M)\) is an eigenvalue of \(M\); hence \(\rho(M)\leq\hat{x}=\max\left\{1-\frac{\beta\underline{\sigma}^{2}(A)}{2nl},\sigma+4\sqrt{a_{1}^{2}\beta}\sqrt{\frac{\overline{\sigma}^{2}(\mathbf{A})nl}{\underline{\sigma}^{2}(A)\mu}}\right\}\). Also note that \(\rho(M)<1\) if (13) holds, which completes the proof. 

**Remark 6**.: _As demonstrated in Theorem 1, iDDGT can achieve linear convergence over directed graphs without imposing any conditions on \(A_{i}\) or \(A\). In contrast, existing algorithms such as IDEA, DCPA, and NPGA require \(A\) to have full row rank and are limited to undirected graphs. Thus, iDDGT has a much broader scope of application than these algorithms. Furthermore, in numerical experiments where \(A\) has full row rank and the graph is undirected, iDDGT exhibits significantly faster convergence in terms of the number of gradient steps compared to NPGA, which is considered state-of-the-art._

## V Numerical Experiments

In this section, we conduct two numerical experiments to validate the theoretical results and compare the performance of iDDGT with existing algorithms.

### _Experiment I_

This experiment aims to validate Theorem 1, which states that iDDGT can achieve linear convergence for solving (P1) over directed graphs, even if the matrix \(A\) does not have full row rank.
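Before the concrete problem instance, the mechanism being validated can be sketched in a toy serial form: an inexactly solved primal subproblem followed by a dual gradient step, with the inexactness level shrunk by \(\gamma\) each outer iteration as in (14). This is a hedged illustration only; iDDGT itself runs decentralized over a directed graph with gradient tracking, which this sketch omits, and the quadratic \(f_i\) with \(A_i=\mathbf{I}\) is a made-up example.

```python
import numpy as np

def inexact_dual_ascent(cs, b, beta=0.3, gamma=0.9, delta0=1.0, iters=200):
    """Toy serial analogue of the inexact dual loop: agent i holds
    f_i(x_i) = 0.5 * ||x_i - c_i||^2 with A_i = I and coupling sum_i x_i = b.
    Each primal subproblem min_x f_i(x) + lam^T x is 'solved' only up to
    accuracy delta^k / sqrt(n), and delta^{k+1} = gamma * delta^k as in (14)."""
    n, p = len(cs), b.shape[0]
    lam = np.zeros(p)
    xs = [np.zeros(p) for _ in range(n)]
    delta = delta0
    for _ in range(iters):
        for i in range(n):
            # exact minimizer is c_i - lam; add a perturbation of norm delta/sqrt(n)
            # to mimic an inexact inner solve
            xs[i] = cs[i] - lam + (delta / np.sqrt(n)) * np.ones(p) / np.sqrt(p)
        lam = lam + beta * (sum(xs) - b)   # dual gradient step
        delta *= gamma
    return xs, lam
```

Because the inner-solve error is forced to decay geometrically, the dual iterate and the primal iterates converge linearly to the constrained optimum, mirroring the conclusion of Theorem 1 in this simplified setting.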
We consider the following instance of (P1): \[\begin{split}\min_{x_{i}\in\mathbb{R}^{d_{i}}}&\ \sum_{i=1}^{n}\frac{1}{2}x_{i}^{\top}P_{i}x_{i}+q_{i}^{\top}x_{i}\\ \text{s.t.}&\ \sum_{i=1}^{n}A_{i}x_{i}=b,\end{split} \tag{22}\] where \(P_{i}\in\mathbb{R}^{d_{i}\times d_{i}}\) is a positive definite matrix. In this experiment, we choose \(n=20\) and generate a directed exponential graph with \(20\) nodes using the parameter \(e=4\). In a directed exponential graph, node \(i\) can send messages to nodes \(\left((i+2^{j})\mod n\right)\) for \(j=0,1,\cdots,e\). Each matrix \(P_{i}\in\mathbb{R}^{2\times 2}\) is randomly generated by running a diagonal (eigenvalue) decomposition in reverse, ensuring that its eigenvalues belong to the interval \([1,10]\). Each element of the first \(20\) rows of \(A_{i}\in\mathbb{R}^{100\times 2}\) is independently sampled from a normal distribution with mean \(0\) and variance \(10\), while the remaining \(80\) rows are generated by linearly combining the first \(20\) rows. As a result, the row rank of the stacked matrix \(A=[A_{1},\cdots,A_{n}]\in\mathbb{R}^{100\times 40}\) is \(20\), indicating that it does not have full row rank. Each element of \(q_{i}\in\mathbb{R}^{2}\) and \(b\in\mathbb{R}^{100}\) is independently sampled from the standard normal distribution. We compare the performances of different versions of iDDGT, with AGD chosen as the subproblem solver. The versions differ in their strategies for solving the subproblems. One strategy controls the solving error and decreases it linearly with respect to the outer iterations, as described in Theorem 1. The other strategy uses a fixed number of inner iterations. For example, "iDDGT, \(\gamma=0.95\)" denotes a version of iDDGT that uses the first strategy with an error decrease rate of \(0.95\), and "iDDGT, \(s=1\)" denotes a version that uses the second strategy with the number of inner iterations fixed to \(1\). The experiment result is shown in Fig.
1, where the optimality gap is defined as \(\frac{\|\mathbf{x}^{k}-\mathbf{x}^{*}\|}{\|\mathbf{x}^{0}-\mathbf{x}^{*}\|}\). Several observations can be made:

1. The versions of iDDGT that adopt the first strategy demonstrate linear convergence, which confirms the validity of Theorem 1.
2. Within a certain range, accelerating the reduction of subproblem solving errors in the first strategy can enhance communication efficiency, although it may also reduce computational efficiency. An important and counterintuitive finding is that solving the subproblem exactly (i.e., \(\gamma=0\)) can result in both low computational efficiency and low communication efficiency.
3. Using single-step gradient descent as the subproblem solver yields favorable communication efficiency, but its computational efficiency is significantly inferior to that of the first strategy.

**Remark 7**.: _Although it is counterintuitive that solving the subproblem exactly can result in both low computational efficiency and low communication efficiency, the phenomenon can be explained. The direct reason is that, in experiments, a much larger value of \(\beta\) can be used when the subproblem is solved inexactly than when it is solved exactly. Intuitively, even if a larger \(\beta\) is used under inexact solves (a value that could lead to divergence under exact solves), \(\mathbf{x}^{k+1}\) is not pulled far from the convergent sequence, because only a few iterations are taken to solve the subproblem, which preserves the possibility of convergence._

### _Experiment II_

In this experiment, we continue to consider the optimization problem (22) but with slightly different settings.
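For reference, the random problem data of Experiment I can be generated along the following lines. This is a sketch under stated assumptions: the seed is ours, and since the paper does not say how the 80 dependent rows are combined, a single random mixing matrix shared by all agents is used so that the stacked \(A\) has rank exactly 20.

```python
import numpy as np

rng = np.random.default_rng(0)        # the seed is an assumption, not from the paper
n, d, m, r = 20, 2, 100, 20           # agents, local dim, constraint rows, independent rows

def random_pd(dim):
    """P_i = Q diag(eigs) Q^T: a diagonal decomposition run in reverse,
    with eigenvalues drawn from [1, 10] and a random orthogonal Q."""
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    eigs = rng.uniform(1.0, 10.0, size=dim)
    return Q @ np.diag(eigs) @ Q.T

# One mixing matrix C shared by all agents makes the stacked A have rank exactly r.
C = rng.standard_normal((m - r, r))

def random_Ai():
    top = rng.normal(0.0, np.sqrt(10.0), size=(r, d))   # mean 0, variance 10
    return np.vstack([top, C @ top])                    # 80 dependent rows

Ps = [random_pd(d) for _ in range(n)]
As = [random_Ai() for _ in range(n)]
A = np.hstack(As)                     # 100 x 40 stacked constraint matrix, rank 20
qs = [rng.standard_normal(d) for _ in range(n)]
b = rng.standard_normal(m)
```

With a per-agent mixing matrix instead of a shared one, the stacked \(A\) would generically have rank 40, so the shared combination is what makes the instance rank-deficient as described.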
As mentioned earlier, IDEA, DCPA, and NPGA can achieve linear convergence over undirected graphs under the condition that the matrix \(A\) has full row rank, with NPGA showing the best performance in numerical experiments [12]. Therefore, in this comparison, we focus on evaluating the performances of iDDGT and NPGA. Since NPGA is an algorithmic framework with various variants, we have selected some of its best-performing variants, namely NPGA-NIDS, NPGA-P2D2, NPGA-Aug-DGM, NPGA-I, and NPGA-II. To ensure a fair comparison between iDDGT and NPGA, we use a setting in which the graph is undirected and the matrix \(A\) has full row rank. Specifically, we adopt the same settings and data as in Experiment I, with the exception of the graph, \(A_{i}\), and \(b\). The undirected graph with \(20\) nodes is generated using the Erdős–Rényi model [28] with a connectivity probability of \(0.3\). The elements of \(A_{i}\in\mathbb{R}^{20\times 2}\) and \(b\in\mathbb{R}^{20}\) are randomly and independently sampled from a normal distribution with mean \(0\) and variance \(10\) for \(A_{i}\), and from the standard normal distribution for \(b\). The resulting matrix \(A\) is guaranteed to have full row rank. The experiment result is shown in Fig. 2. We can observe that the convergence speed of iDDGT in terms of the number of gradient steps is much faster than that of NPGA, whereas its convergence speed in terms of the number of communication rounds is slower. Therefore, iDDGT is the better choice when computation is expensive but communication is cheap.

## VI Conclusion

In this work, we have presented iDDGT, an inexact decentralized dual gradient tracking method for distributed optimization problems with a globally coupled equality constraint. By utilizing an inexact dual gradient with controllable inexactness, iDDGT offers significant computational efficiency advantages over existing algorithms.
Another key contribution of iDDGT is its ability to achieve linear convergence over directed graphs without imposing any conditions on the constraint matrix. This significantly broadens its scope of applicability compared to existing algorithms, which require the constraint matrix to have full row rank and the graph to be undirected for linear convergence. Overall, iDDGT offers a promising approach for solving constraint-coupled optimization problems. Its linear convergence, computational efficiency, and flexibility make it a valuable tool for a wide range of applications. Future research can focus on extending iDDGT to handle more complex constraints and on exploring adaptive inexactness control.
2309.09003
RingMo-lite: A Remote Sensing Multi-task Lightweight Network with CNN-Transformer Hybrid Framework
In recent years, remote sensing (RS) vision foundation models such as RingMo have emerged and achieved excellent performance in various downstream tasks. However, the high demand for computing resources limits the application of these models on edge devices. It is necessary to design a more lightweight foundation model to support on-orbit RS image interpretation. Existing methods face challenges in achieving lightweight solutions while retaining generalization in RS image interpretation. This is due to the complex high and low-frequency spectral components in RS images, which make traditional single CNN or Vision Transformer methods unsuitable for the task. Therefore, this paper proposes RingMo-lite, an RS multi-task lightweight network with a CNN-Transformer hybrid framework, which effectively exploits the frequency-domain properties of RS to optimize the interpretation process. It is combined by the Transformer module as a low-pass filter to extract global features of RS images through a dual-branch structure, and the CNN module as a stacked high-pass filter to extract fine-grained details effectively. Furthermore, in the pretraining stage, the designed frequency-domain masked image modeling (FD-MIM) combines each image patch's high-frequency and low-frequency characteristics, effectively capturing the latent feature representation in RS data. As shown in Fig. 1, compared with RingMo, the proposed RingMo-lite reduces the parameters over 60% in various RS image interpretation tasks, the average accuracy drops by less than 2% in most of the scenes and achieves SOTA performance compared to models of the similar size. In addition, our work will be integrated into the MindSpore computing platform in the near future.
Yuelei Wang, Ting Zhang, Liangjin Zhao, Lin Hu, Zhechao Wang, Ziqing Niu, Peirui Cheng, Kaiqiang Chen, Xuan Zeng, Zhirui Wang, Hongqi Wang, Xian Sun
2023-09-16T14:15:59Z
http://arxiv.org/abs/2309.09003v1
# RingMo-lite: A Remote Sensing Multi-task Lightweight Network with CNN-Transformer Hybrid Framework

###### Abstract

In recent years, remote sensing (RS) vision foundation models such as RingMo have emerged and achieved excellent performance in various downstream tasks. However, the high demand for computing resources limits the application of these models on edge devices. It is necessary to design a more lightweight foundation model to support on-orbit RS image interpretation. Existing methods face challenges in achieving lightweight solutions while retaining generalization in RS image interpretation, because the complex high- and low-frequency spectral components of RS images make traditional single-CNN or single-Vision-Transformer methods unsuitable for the task. Therefore, this paper proposes RingMo-lite, an RS multi-task lightweight network with a CNN-Transformer hybrid framework, which effectively exploits the frequency-domain properties of RS to optimize the interpretation process. In a dual-branch structure, it combines a Transformer module, acting as a low-pass filter, to extract global features of RS images, with a CNN module, acting as stacked high-pass filters, to extract fine-grained details effectively. Furthermore, in the pretraining stage, the designed frequency-domain masked image modeling (FD-MIM) combines each image patch's high-frequency and low-frequency characteristics, effectively capturing the latent feature representation in RS data. As shown in Fig. 1, compared with RingMo, the proposed RingMo-lite reduces the parameters by over 60% in various RS image interpretation tasks, the average accuracy drops by less than 2% in most scenes, and it achieves SOTA performance compared to models of similar size. In addition, our work will be integrated into the MindSpore computing platform in the near future.

Lightweight foundation model, remote sensing (RS) frequency domain features, CNN-Transformer hybrid framework, masked image modeling (MIM).
## I Introduction

In recent years, many Transformer-based methods [1, 2, 3] have emerged and achieved great success owing to their excellent feature extraction and representation capabilities, especially as vision foundation models [4]. Unlike unsupervised methods [5], foundation models show a strong capacity for generalization. In the field of remote sensing (RS), RingMo [6], an RS foundation model, has been proposed, which effectively addresses the inadequate generalization ability of existing methods. However, foundation models are not flexible and efficient due to their high demand for computing and storage resources [7, 8, 9], making them difficult to adapt to edge servers or terminals and unable to support on-orbit RS image interpretation in practical applications. Therefore, designing a lightweight foundation model that achieves high-precision, multi-task, on-orbit interpretation of RS images is a future development trend. Many methods have been proposed to realize lightweight vision foundation models in the field of general visual processing [10, 11, 12]. These methods can be categorized into three groups: knowledge distillation [13, 14], neural architecture search (NAS) [15, 16, 17], and network structure design [18, 19]. Among them, knowledge distillation transfers knowledge from a large-scale foundation model to a student model, compressing the size of the model while maintaining its performance. However, this approach needs to train an additional teacher model and requires separate distillations for various downstream tasks.

Fig. 1: Comparison of RingMo and our proposed RingMo-lite along four dimensions: data (85% smaller image datasets for pretraining MIM), pretraining methods (use of the frequency domain for more RS features), encoder (over 60% fewer parameters), and decoder (similar computer vision tasks).
NAS is another approach to achieving a lightweight model, which automatically explores different neural network structures to find a model with good performance and fewer parameters. However, it demands significant computational resources and processing time. Network structure design methods achieve a significant parameter reduction by changing the Transformer block structure of foundation models, without training additional models, thereby saving computing resources. Although network structure design offers a viable option, two challenges in the field of RS may still affect its performance. Firstly, as shown in Fig. 2, RS images span various resolution and orientation ranges, and the distribution of objects is complex [20, 21, 22, 23]. Therefore, RS images usually contain specific target areas and large-scale ground objects simultaneously, with large scale differences between them. The pixels of dense small objects change drastically in the spatial dimension, while the pixels of large-scale ground objects change relatively uniformly and slowly. The multi-scale differences among these objects pose great challenges to the generalization ability of the model. Secondly, different RS interpretation tasks focus on different target areas. For example, the scene classification task [24, 25, 26] involves a wide range of spatial scales and thus requires more attention to global information. In contrast, downstream tasks such as RS object detection [27, 28] require more attention to the local details of targets such as aircraft, ships, and vehicles. The pixel changes of critical objects in RS images have corresponding representations in the frequency domain, where different frequencies reflect the intensity of feature changes. These differences between high-frequency and low-frequency information affect the interpretation accuracy of different downstream tasks to a certain extent.
Although many network structure design methods adopt a combination of CNN [29, 30] and Transformer [31], they mainly focus on using CNNs to replace parts of the Transformer block to reduce computation. Most existing methods do not exploit the respective advantages of CNNs and Transformers in extracting high-frequency and low-frequency information from RS images. Considering the above problems, this paper proposes RingMo-lite, a novel lightweight foundation model suitable for various RS image interpretation tasks. Firstly, in order to fully extract both the detailed features of specific target areas and the global features of large-scale scenes, this paper designs a lightweight CNN-Transformer dual-branch hybrid architecture. Specifically, the Transformer structure establishes global relationships and long-distance dependencies through the self-attention mechanism, enabling a deeper comprehension of both the structural and semantic aspects of the image. Therefore, in the frequency domain of the input image, the Transformer can be regarded as a low-pass filter that extracts low-frequency information, which better captures large-scale surface feature elements. On the contrary, the CNN architecture attends to local details within the convolutional sliding window through matrix computation. Thus, the CNN branch aims to further alleviate spatial position bias and capture local features such as texture and details. In the frequency domain, a CNN can be regarded as the superposition of multiple high-pass filters, which is more suitable for extracting high-frequency information and processing specific target information.

Fig. 2: Examples of frequency-domain comparison between specific target areas and large-scale scene areas in different RS scenes. The 3D frequency-domain diagrams in the second row are calculated from the spectral components, where positions closer to the center represent the low-frequency part and positions closer to the periphery represent the high-frequency part. The third and fourth rows are the results of applying a high-pass filter and a low-pass filter to the image, respectively.

Combining the advantages of the two different structures, the proposed dual-branch block decouples the hybrid structure in the channel dimension, comprehensively utilizing the high-frequency and low-frequency information in RS images and effectively improving interpretation accuracy. Secondly, this paper designs a frequency-domain masked image modeling (FD-MIM) strategy adapted to the high-frequency and low-frequency information of RS images, which improves the pretraining of lightweight foundation models in combination with self-supervised learning [32, 33]. FD-MIM, which corresponds to the proposed CNN-Transformer hybrid framework, contributes to better reconstruction of image details during masking and promotes the proposed lightweight model to learn rich feature representations suitable for different downstream tasks. This paper's contributions can be summarized as follows:

1. In order to achieve lightweight on-orbit interpretation, this paper proposes RingMo-lite, a dual-branch CNN-Transformer hybrid framework suitable for various RS image interpretation tasks. The proposed method fully considers the high-frequency and low-frequency information of RS images and tasks and effectively improves interpretation accuracy.
2. Considering the frequency-domain characteristics of RS object areas, this paper designs an FD-MIM self-supervised pretraining strategy, which facilitates the proposed framework in learning richer feature representations and effectively improves generalization in downstream tasks.
3.
Compared with RingMo, RingMo-lite reduces the parameters by over 60% in various RS image interpretation tasks, the average accuracy drops by less than 2%, and RingMo-lite achieves SOTA performance in four downstream tasks compared to models of the same scale, including RS image classification, object detection, semantic segmentation, and change detection. In addition, we plan to integrate RingMo-lite into Huawei AI chips, which run the MindSpore computing platform, to enable deployment on edge devices.

The rest of the paper is organized as follows. Section II introduces related work. Section III presents the RingMo-lite algorithm, and Section IV discusses experiments evaluating the performance of RingMo-lite. Section V gives the conclusion.

## II Related Work

### _Lightweight Foundation Models in General Vision Domain_

As the two dominant architectural paradigms in the field of computer vision, CNNs and ViTs have garnered significant attention and research in recent years due to their remarkable performance in tasks such as image classification, object detection, semantic segmentation, and change detection. CNNs excel at capturing local features and high-frequency information owing to their local receptive fields, parameter sharing, and hierarchical feature extraction, establishing them as the classical method for image processing. In contrast, ViTs leverage the self-attention mechanism of the Transformer [31], which is good at capturing global relationships, and exhibit immense potential in the image domain. However, the exceptional performance of both architectures critically depends on substantial parameter counts and computational consumption, limiting their deployment on resource-constrained devices. To address this issue, a multitude of research efforts focusing on model lightweighting have emerged, aiming to maintain high performance while reducing computational resources and memory requirements.
In the realm of lightweight CNNs, pioneering achievements like MobileNet [34], EfficientNet [35], and ShuffleNet [36], have harnessed innovations such as depth-wise separable convolutions and channel attention. In the domain of lightweight ViTs, methodologies like MobileViT [7], TinyViT [13], and EfficientViT [37] have emerged, which mainly concentrate on techniques such as knowledge distillation and model pruning to compress ViT models. This paper will introduce existing lightweight strategies around knowledge distillation, quantization and pruning, and Neural Architecture Search (NAS). These methods collectively propel the efficient deployment of deep learning models in edge devices. **Knowledge Distillation-Based Lightweight Method:** This approach involves transferring rich knowledge (including logits, intermediate features, etc.) from large pretrained models to small student models. Wu _et al._[13] introduce TinyViT, achieving fast distillation by sparsifying logits of the large teacher model in advance and storing them, which saves forward computations and memory usage. Liu _et al._[14] propose a cross-architecture knowledge distillation method to distill complementary knowledge from Transformer to guide the training of CNNs. Yang _et al._[38] introduce ViTKD, which utilizes the nature of feature maps in ViT to design a knowledge distillation method suitable for the ViT structure. Liang _et al._[39] propose TED, a method that aligns the hidden representations of the student and teacher model at each layer by designing task-aware filters, selecting knowledge beneficial for the target task, and reducing the knowledge gap between the two models to help the student model better adapt to the target task. 
**Quantization and Pruning-Based Lightweight Method:** Yu _et al._[40] propose an end-to-end lightweight framework that combines pruning, skip connections, and distillation, significantly reducing computational complexity while nearly maintaining model performance. Yin _et al._[41] introduce GOHSP, which leverages graph-based ranking methods to measure the importance of attention heads and then integrates the extracted importance information into an optimization-based scheme to induce heterogeneous structured sparsity in ViT models. Wei _et al._[42] propose TPS, using one-way nearest-neighbor matching and similarity-based fusion to combine pruned-token information with retained tokens, which mitigates the performance degradation caused by pruning. **NAS-based lightweight method:** Among current mainstream lightweight methods, NAS generates mobile models by searching over and stacking small basic units, automating network model design, which has obvious advantages. Google proposes MnasNet [43], which incorporates real model latency into the neural architecture search process, proposes a factorized hierarchical search space, and obtains a deep neural model that achieves an optimal balance between model accuracy and latency. In the exploration of lightweight large models, serious conflicts between the gradients of different subnets and the supernet make training prone to early saturation and poor convergence, so the direct application of supernet-based NAS leads to performance degradation. To this end, NASVIT [44] proposes a series of techniques, including a gradient projection algorithm, a switchable layer scaling design, and a simplified data augmentation and regularization training recipe, which effectively improve the convergence and performance of the subnetworks. The aforementioned lightweight methods have been extensively validated in the general vision domain.
However, unlike natural images, RS images exhibit characteristics such as complex object distributions and diverse spatial scales, which leads to the limited applicability of the above general methods in RS tasks. Therefore, it is necessary to design a more suitable method for the characteristics of RS images. ### _Lightweight Networks in Remote Sensing Domain_ Different from natural scene images, due to the essential differences in the scale and direction of the objects generated by the bird's-eye view, RS images have complex backgrounds and dense targets, which makes intelligent interpretation more difficult. RS scene classification is widely used in the RS community, but traditional CNN-based methods lack long-range dependencies and cannot fully capture contextual information, while fine-tuning pretrained large models, such as ViT, is costly. To this end, LTNet [45] proposes a lightweight transformer network, which captures global dependencies with low computing resources through the multi-level group convolution (MLGC) module, improves the diversity of local features, and enhances classification performance. In order to solve the problem of limited links between ViT adjacent windows and huge computational load, LDBST [46] designs a dual-branch structure combining ViT branches and CNN branches based on the hierarchical Swin Transformer model. The ViT branch promotes the connection of adjacent windows through the dual multi-layer perceptron structure with deep convolutional layers, and the CNN branch with maximum pooling not only retains the discriminative ability of scene features but also avoids the huge computation caused by complex multi-head attention. However, it does not consider distinguishing high-frequency information from low-frequency information, which affects the interpretation accuracy to a certain extent. As a more difficult task, object detection suffers from stronger interference in RS scenarios. 
Aiming at the problems of complex RS image backgrounds and the difficulty of detecting small targets in dense scenes, Chen _et al._[47] propose MDCT, a single-stage object detection model based on a multi-kernel dilated convolution (MDC) block and a Transformer block, in which the MDC module enhances the features of small targets and enlarges the receptive field, while the Transformer block is integrated into the neck network of the detection model to prevent the loss of object information in complex backgrounds. Considering that introducing the attention mechanism increases computational complexity as image resolution grows, Gong _et al._[48] propose replacing the convolutional prediction head with Swin Transformer prediction heads (SPH). The shifted-window design effectively reduces computational complexity, while a feature fusion layer is introduced to preserve feature information to the maximum extent. In RS semantic segmentation, accuracy is likewise limited by the restricted ability of CNNs to capture global context and has reached a bottleneck. To address this issue, Wang _et al._[49] propose UNetFormer, a hybrid architecture consisting of a CNN-based encoder and a Transformer-based decoder for RS semantic segmentation, where the decoder consists of global-local Transformer blocks (GLTB) that efficiently model global and local information. However, Transformer-based architectures usually suffer from high computational load and low-precision edge classification when applied to RS semantic segmentation. To this end, Xu _et al._[50] propose an efficient lightweight model with an MLP head based on the Swin Transformer to speed up inference, and address the edge problem through explicit and implicit edge enhancement methods.
Although the aforementioned lightweight foundation models in the RS field have achieved certain results, they do not distinguish high-frequency from low-frequency information in RS images, and thus do not realize comprehensive information utilization. Furthermore, these methods are all trained exclusively on single tasks, lacking multi-task generalization.

## III Methodology

### _Overview_

The overall framework of the proposed method is illustrated in Fig. 3. The input image is initially partitioned into non-overlapping patches using the Patch Partition module, with each patch of size 4\(\times\)4 treated as a token. These patches are then stacked together as the input to the subsequent linear embedding layer. Subsequently, processed image representations are obtained through four stages. Each stage comprises a varying number of High-Low Frequency Information Fusion Blocks (FIFBs), with the specific quantities (2, 2, 6, 2) following the configuration of Swin Tiny [1]. As the network deepens, a Patch Merging layer is introduced between stages to reduce the number of tokens and build a hierarchical representation. Each FIFB is subdivided into a Low-Frequency (L-F) Branch and a High-Frequency (H-F) Branch. To optimally exploit the feature extraction capabilities of both CNNs and Transformers, the input features of the FIFB are sent to the two branches to capture low-frequency and high-frequency information respectively, and are then fused and fed to the next block or Patch Merging layer. The L-F Branch follows the main structure of the Swin Transformer and obtains global features. The H-F Branch further divides the input features into two parts and uses CNNs to extract detailed features.

### _High-Low Frequency Information Fusion Block (FIFB)_

**Revisiting Vision Transformers and CNNs:** ViT leverages Multi-head Self-Attention (MSA) for information exchange among non-overlapping tokens.
As a low-pass filter [51], MSA excels in modeling long dependencies and capturing low-frequency information. Nevertheless, MSA's spatial smoothing operations on feature maps tend to attenuate high-frequency signals, leading to feature representations dominated by low-frequency information. Conversely, CNNs employ local convolutions (Convs) within receptive fields to obtain local information. In contrast to MSA, Convs are high-pass filters [51] and effectively extract high-frequency representations of images. As a result, MSA and Convs exhibit complementary characteristics: MSA excels in capturing global dependencies and low-frequency information, while Convs excel in preserving local details and high-frequency information.

Fig. 3: The overall architecture of the proposed RingMo-lite. During pretraining, the FD-MIM strategy is employed to efficiently extract feature representations of RS images. Given an optical RS image as input, the features are first extracted by the encoder composed of several high-low frequency dual-branch blocks (FIFB). Then, four different downstream tasks are addressed through different decoder heads: classification, object detection, semantic segmentation and change detection, from top to bottom.

**Frequency Characteristics in Remote Sensing Tasks:** Typically, the global structures of scenes and objects convey low-frequency information in images, while local spatial details such as edges and textures manifest as high-frequency information. RS images inherently encompass both small targets and extensive geographical features. The pixels of densely distributed, small-scale targets vary drastically in space, while those of large-scale features are relatively uniform and slowly varying. Among RS image interpretation tasks, scene classification emphasizes the extraction of comprehensive global information, while object detection concentrates on capturing details. Furthermore, more fine-grained tasks require more local details.
In light of these considerations, we propose the FIFB, which combines high-frequency and low-frequency information, thereby promoting the model's multi-task generalization on RS images.

**FIFB:** As shown in Fig. 4, the input feature \(F\in\mathbb{R}^{N\times N\times C}\) of the FIFB is separately fed into two distinct branches: the L-F Branch and the H-F Branch. The L-F Branch is based on the architecture of the Swin Transformer to capture long-distance dependencies. The input feature \(F\) first undergoes a Layer Normalization (LN) layer and then an alternating sequence of Windowed Multi-Head Self-Attention (W-MSA) and Shifted Windowed Multi-Head Self-Attention (SW-MSA) modules, after which a residual link is applied. An additional LN layer and two Multi-Layer Perceptron (MLP) layers follow. After that, another residual connection is employed to obtain the output of the low-frequency branch, denoted \(L\): \[\hat{L}=\mathrm{MSA}_{W/SW}\left(\mathrm{LN}\left(F\right)\right)+F \tag{1}\] \[L=\mathrm{MLP}\left(\mathrm{LN}\left(\hat{L}\right)\right)+\hat{L} \tag{2}\] In contrast, the H-F Branch divides the input features into two partitions, \(F_{1}\in\mathbb{R}^{N\times N\times\frac{C}{2}}\) and \(F_{2}\in\mathbb{R}^{N\times N\times\frac{C}{2}}\), to extract high-frequency information through a parallel architecture, respectively utilizing the sharp sensitivity of maximum filters and the detailed perception of Convs [52]. \(F_{1}\) successively passes through 1\(\times\)1 and 3\(\times\)3 Conv layers to obtain \(\hat{F}_{1}\). A combination of max pooling layers and 1\(\times\)1 Conv layers is employed to appropriately compress the receptive field, yielding features \(\hat{F}_{2}\).
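The shape bookkeeping implied by the two branches can be sketched as follows (illustrative only: the text does not specify strides or padding, so spatial-size-preserving operations are assumed, and the 56\(\times\)56 token grid with \(C=96\) is a Swin-Tiny-style value, not stated here):

```python
# Shape bookkeeping for the two FIFB branches (a sketch under assumptions:
# 'same'-padded convolutions and pooling that preserve the N x N token grid;
# n=56, c=96 are illustrative Swin-Tiny-style values).
def branch_shapes(n=56, c=96):
    F = (n, n, c)                 # input feature of the FIFB
    L = F                         # L-F branch: W-MSA/SW-MSA + MLP keep the shape
    F1 = F2 = (n, n, c // 2)      # H-F branch: split along the channel axis
    F1_hat = F1                   # 1x1 conv then 3x3 conv, channels unchanged
    F2_hat = F2                   # max pool then 1x1 conv, channels unchanged
    return L, F1_hat, F2_hat

print(branch_shapes())  # -> ((56, 56, 96), (56, 56, 48), (56, 56, 48))
```

With these assumptions, both half-channel maps keep the token grid, so they can later be concatenated back to \(C\) channels and summed element-wise with the L-F output.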
Finally, the concatenation of \(\hat{F}_{1}\) and \(\hat{F}_{2}\) generates a comprehensive feature map \(H\) with rich high-frequency information: \[\hat{F}_{1}=\mathrm{Conv}^{3\times 3}\left(\mathrm{Conv}^{1\times 1}\left(F_{1} \right)\right) \tag{3}\] \[\hat{F}_{2}=\mathrm{Conv}^{1\times 1}\left(\mathrm{MaxPool}\left(F_{2} \right)\right) \tag{4}\] \[H=\mathrm{Concat}\left(\hat{F}_{1},\hat{F}_{2}\right) \tag{5}\] The end of the FIFB process fuses the low-frequency feature \(L\) and the high-frequency feature \(H\): \[Z=L\oplus H \tag{6}\] where \(\oplus\) represents element-wise addition of feature maps.

### _Frequency Domain Masked Image Modeling_

In computer vision tasks, it is common practice to design a pretraining strategy that captures both local and global image features to enhance the efficiency and generalization ability of the model. One promising approach uses masking techniques to emphasize specific features in the images. Masked image modeling (MIM) [32, 33] can exploit intrinsic data relationships to guide the model to better understand complex RS images. By exploiting the structure of input images and the correlations among neighboring pixels, it enables the model to learn meaningful representations without explicit labeling. Many MIM methods commonly employ a random masking strategy, which selects a certain proportion of image patches and subjects them to complete masking.

Fig. 4: The structure of the H-F Branch and the L-F Branch, and illustration of the variation of the feature dimensions. The H-F Branch uses CNNs to extract local information, while the L-F Branch uses Swin Transformer blocks for global information. The features then follow residual computation to the next block.

Fig. 5: Illustration of the FD-MIM self-supervised pretraining method. First, 50% of the image patches are selected for frequency-domain analysis by algorithms such as the DFT (top).
Then, critical frequency-domain characteristics are preserved for decomposition (bottom). Finally, random pixel masking (PIMask strategy) is applied to enhance robustness and generalization.

Although this random masking strategy is widely used for natural images, it faces challenges in the practical interpretation of RS images. RS images have unique imaging mechanisms and contain more complex backgrounds and many smaller-scale objects, which limits many random masking strategies in RS image interpretation. In this context, we introduce high- and low-frequency domain masked image modeling (FD-MIM). FD-MIM corresponds to the proposed CNN-Transformer hybrid framework. Like other autoencoders, the proposed method extracts latent representations of masked images and uses them to reconstruct the original signal of the masked regions. By properly retaining the high- and low-frequency information in complex RS images, it contributes to better reconstruction of image details under masking. The learned encoder is useful for various optical RS downstream tasks, and an L1 regression loss measures the difference between the reconstruction result and the pixel values. The proposed FD-MIM strategy is shown in Fig. 5. First, FD-MIM randomly selects 50% of the image patches from each RS image in the dataset. These patches undergo frequency-domain analysis, usually with techniques such as the Discrete Fourier Transform (DFT), producing frequency-domain coefficients that describe the distribution of image energy over different spatial frequencies. Subsequently, the selected patches are classified into high-frequency or low-frequency categories. This classification compares the proportion of high-frequency content to low-frequency content within each patch.
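The classification step just described can be sketched with a naive 2D DFT (a toy example: the 8\(\times\)8 patch size, the cutoff radius, and the energy-comparison rule are illustrative assumptions, not the paper's exact criterion):

```python
import cmath

def dft2(patch):
    """Naive 2D DFT of a small square patch (list of lists of floats)."""
    n = len(patch)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += patch[x][y] * cmath.exp(-2j * cmath.pi * (u * x + v * y) / n)
            out[u][v] = s
    return out

def classify_patch(patch, cutoff=2):
    """Label a patch 'high' or 'low' by where its spectral energy sits:
    coefficients within `cutoff` of the DC component count as low-frequency."""
    n = len(patch)
    spec = dft2(patch)
    hi = lo = 0.0
    for u in range(n):
        for v in range(n):
            du, dv = min(u, n - u), min(v, n - v)  # wrap-around distance to DC
            e = abs(spec[u][v]) ** 2
            if max(du, dv) <= cutoff:
                lo += e
            else:
                hi += e
    return 'high' if hi > lo else 'low'

flat = [[1.0] * 8 for _ in range(8)]                               # uniform region
edges = [[(-1.0) ** (x + y) for y in range(8)] for x in range(8)]  # fine texture
print(classify_patch(flat), classify_patch(edges))  # -> low high
```

A uniform patch puts all its spectral energy at the DC component and is labeled low-frequency, while a rapidly alternating patch concentrates energy at the highest spatial frequency and is labeled high-frequency.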
Patches with a higher proportion of high-frequency content are designated as high-frequency patches, while those dominated by low-frequency content are classified as low-frequency patches. To further emphasize high-frequency and low-frequency information, we perform high-pass and low-pass filtering on these classified patches, respectively. The former enhances the unique characteristics of the high-frequency portion, while the latter preserves essential low-frequency information. This step facilitates a better separation of frequency components while preserving critical frequency-domain characteristics. Finally, to enhance the robustness and generalization ability of the model, we introduce random pixel masking, which randomly selects pixels from the frequency-separated patches and applies a masking operation. This strategy increases the complexity of image reconstruction during training, encouraging the model to focus on learning the most relevant and discriminative features.

## IV Experiments

### _Remote Sensing Scene Classification_

#### IV-A1 Dataset Introduction

We use three classic RS scene classification datasets to comprehensively evaluate the interpretation ability of the proposed lightweight foundation model.

**AID [53].** This comprehensive dataset is curated from multi-sensor data collected via Google Earth. The spatial resolution ranges from 8 m to 0.5 m, and the image dimensions are 600 x 600 pixels. The AID dataset comprises 10,000 images covering 30 categories, with 220 to 400 images per category. The AID dataset presents a combination of small inter-class variability, large intra-class variability, and an imbalanced distribution of categories, collectively posing a significant challenge to contemporary scene classification methods.
**NWPU-RESISC45 [54].** This dataset serves as a substantial benchmark sourced from Google Earth, covering a broad spectrum of regions across the globe. It consists of 45 categories, each containing 700 images, for a total of 31,500 images. The spatial resolution of each RS image ranges from approximately 0.2 to 30 m, standardized at a dimension of 256 x 256 pixels.

**UCM [55].** The UCM dataset serves as a representative collection for scene classification, consisting of 2,100 RS images exclusively obtained from the USGS National Map. These images possess a uniform spatial resolution of 1 foot and a dimension of 256 x 256 pixels. The images are allocated across 21 distinct categories, with a balanced distribution of 100 images per category. The dataset derives from diverse urban landscapes across the United States, introducing a formidable level of complexity and challenge.

#### IV-A2 Detailed Experimental Settings

All experiments are conducted using Docker images and the PyTorch framework. The computational hardware is an NVIDIA Tesla A40 GPU, with a batch size of 64. Training spans an average of 300 epochs. The model is optimized with the Adam optimizer until convergence, with hyperparameters \(\beta 1=0.9\), \(\beta 2=0.999\) and \(\epsilon=10^{-8}\). We load ImageNet pretrained weights for the ordinary methods, while our method loads weights learned through MIM on 150,000 visible images. The training sample proportions for the NWPU-RESISC45 dataset are TR=10% and TR=20%, respectively. For the AID dataset, the training ratios are TR=20% and TR=50%. The UCM dataset employs a training ratio of TR=80%. Each experiment is repeated five times to ensure robustness.
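For reference, the settings listed above can be collected into a single configuration sketch (hypothetical: the dictionary keys are illustrative and not from the released code):

```python
# Hypothetical consolidation of the scene-classification training settings
# described above; the key names are illustrative, not the project's API.
TRAIN_CFG = {
    "batch_size": 64,
    "epochs": 300,
    "optimizer": {"type": "Adam", "betas": (0.9, 0.999), "eps": 1e-8},
    # training-sample proportions (TR) per dataset
    "train_ratio": {"NWPU-RESISC45": (0.10, 0.20),
                    "AID": (0.20, 0.50),
                    "UCM": (0.80,)},
    "repeats": 5,  # each experiment repeated five times
}
print(TRAIN_CFG["optimizer"]["type"])  # -> Adam
```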
Throughout these experiments, the evaluation metric is Overall Accuracy (OA), a widely adopted criterion in scene classification research.

#### IV-A3 Experimental Result Analysis

Table II shows the results of our method on different datasets, compared with the conventional lightweight algorithms mentioned above. "Swin Tiny (MIM)" and "RingMo-lite" represent the baseline of our proposed method and the version after adding the FIFB module, respectively. Compared with conventional lightweight methods, our method achieves the best results on all datasets. Especially in the scene classification tasks on the difficult AID and NWPU datasets, the performance of the Swin-based method is significantly improved, which demonstrates the superiority of our approach. To further illustrate the effect of the proposed modules, a comparison with the baseline shows that the FIFB module improves the OA by 1.36%-5.48% across datasets and training sample proportions, indicating that the proposed module provides effective feature extraction and high robustness in scene classification. To demonstrate the superiority of our method from other perspectives, we also compare the parameter counts and computation of different methods; the results are presented in Table III. Note that we use floating point operations (FLOPs) to represent the amount of model computation, and the OA results are obtained on the UCM dataset. From a comprehensive analysis of the results, we draw the following conclusions. First, compared with the ResNet-50 method [29] at the same parameter level, our method improves the OA by 5.48%. Second, the accuracy of the RingMo method [6] in the table is obtained by pretraining for 100 epochs on 1.5 million images and then fine-tuning.
In comparison, our method has one-third the number of parameters and one-fourth the computation. Despite such a large gap, the OA of our method is only 0.01% lower. Finally, MIM self-supervised pretraining does not change the size of the original network parameters, while the FIFB module adds only 0.006M parameters. Nevertheless, compared with the supervised Swin Tiny method and the MIM self-supervised pretrained Swin Tiny method, our method improves the OA by 6.19% and 3.34%, respectively. In summary, our method largely matches the effect of RingMo while significantly reducing parameters and computation. Figs. 6-8 show the classification confusion matrices on the AID, NWPU, and UCM datasets, presenting the prediction results of the proposed RingMo-lite across categories. The experimental results show that our method achieves excellent performance on almost all scene categories.

Fig. 6: Classification confusion matrix for RingMo-lite on 50% AID training data.

### _Remote Sensing Object Detection_

#### IV-B1 Dataset Introduction

We validate the performance of our model on the DIOR dataset covering complex composite objects (horizontal object detection) and on the fine-grained RS dataset FAIR1M (oriented object detection).

**DIOR [64].** DIOR is an expansive, publicly accessible RS dataset that includes 20 object categories. The dataset contains 23,463 images, totaling 190,288 instances. Each image is annotated with horizontal bounding boxes, which facilitate both model training and testing. Beyond its diverse array of RS scenes, DIOR also encapsulates objects of different geometric shapes. Notably, in the RS context, some objects have a wide coverage range and are often composed of multiple regular components.
Examples of these complex composite objects in DIOR include infrastructure such as airports, expressway service areas, golf courses, and train stations, each posing a heightened challenge to detection algorithms. Our experiments place particular emphasis on the performance for these complex composite object categories.

**FAIR1M [65].** The dataset encompasses over 1 million instances. It comprises more than 15,000 images captured across various platforms, with resolutions spanning from 0.3 m to 0.8 m. The objects are categorized into 5 primary classes and further subdivided into 37 fine-grained sub-categories annotated with oriented bounding boxes. These primary classes encompass airplanes, vehicles, ships, roads, and courts, each comprising a diverse array of fine-grained sub-categories. To streamline the experimental framework, we exclude three categories, resulting in a focused assessment of the remaining 34 categories. For clarity and coherence throughout our experiments, we label these categories C1 to C34.

#### IV-B2 Detailed Experimental Settings

The GPU and Docker image are consistent with the prior classification experiments. Training parameters are tuned to encourage model convergence, with a learning rate of 0.0005, momentum of 0.99 and weight decay of 0.0001. We evaluate our models on the test splits of the DIOR and FAIR1M datasets, following the open-source projects MMDetection and MMRotate, respectively.

#### IV-B3 Experimental Result Analysis

Table IV shows the Horizontal Bounding Box (HBB) detection accuracy of various methods on the DIOR dataset, which can be divided into three groups according to the backbone: ResNet-18 [29], ResNet-50 [29] and Swin Tiny [1].
Faster R-CNN [58], RetinaNet [59], and RepPoints [60] represent different object detection algorithms, each combined with the two ResNet-based backbones. "Reppoints (Swin-Tiny)" denotes the combination of the Swin Tiny backbone with the RepPoints detection head. Analysis of the experimental results shows that the RepPoints-based method performs best among the three. In addition, the proposed RingMo-lite improves the detection accuracy by 1.5%. RepPoints uses corner point regression to locate key points, which may better represent RS objects. Table VI shows the Oriented Bounding Box (OBB) detection accuracy of various methods on the FAIR1M dataset, which can be divided into two groups: CNN-based methods such as RetinaNet [59], Cascade R-CNN [61], Faster R-CNN [58] and FCOS [63], and the Transformer-based RoI Transformer [62]. Analysis of the experimental results shows that FCOS achieves the best accuracy. Meanwhile, RingMo-lite, which adds the FIFB module and MIM, improves the detection accuracy by 1.6%, showing that our method has an advantage in rotated object detection.

Fig. 7: Classification confusion matrix for RingMo-lite on 20% NWPU-RESISC45 training data.

Fig. 8: Classification confusion matrix for RingMo-lite on 80% UCM training data.

Fig. 9: Visualization results of the proposed RingMo-lite on the DIOR dataset.

Fig. 10: Visualization results of the proposed RingMo-lite on the FAIR1M dataset.

We also compare the parameter counts and computation of different methods; the results are presented in Tables V and VII. We again measure computation via FLOPs and report mAP on both the DIOR and FAIR1M datasets. For HBB, the parameters of RingMo-lite are reduced to one-third of the RingMo foundation model, while the detection accuracy (mAP) is reduced by only 1.3%.
For OBB, the parameters are decreased to 33%, while the accuracy drops by only 1.7%. Second, for HBB, compared with the ResNet-50 and Swin Tiny baselines, the parameters of RingMo-lite increase slightly, by 0.76M and 0.04M respectively, while the accuracy improves by 3.6% and 0.7%. For OBB, the parameter increase over Swin Tiny is 3.1M, and the mAP improvements are 1.6% and 0.3%, respectively. Finally, for HBB, although the number of parameters is equivalent, the computation of RingMo-lite is reduced by 7.5% compared with RetinaNet (r50). For OBB, FCOS has 40% fewer parameters compared with RoI Transformer (r50). We show visualization results on samples from DIOR and FAIR1M in Figs. 9 and 10. RingMo-lite detects common RS targets well, from small cars to large buildings, airplanes, ships and infrastructure.

### _Remote Sensing Semantic Segmentation_

#### IV-C1 Dataset Introduction

We conduct experiments on the iSAID and ISPRS Potsdam datasets, corresponding to the two segmentation subtasks above, demonstrating the effectiveness and generalization ability of the proposed model.

**iSAID [69].** The iSAID dataset is a semantic segmentation dataset derived from DOTA, comprising 2,806 aerial images with resolutions ranging from 800 x 800 to 4000 x 13000 pixels. The data are primarily sourced from Google Earth, supplemented by contributions from the JL-1 and GF-2 satellites. In total, the dataset encompasses 655,451 instances from 15 object categories, with the remaining non-object pixels designated as background, yielding 16 categories in all. Of the 2,806 high-resolution images, 1,411 are allocated for training, 458 for validation, and the remaining images for testing. Notably, since annotations for the test set are unavailable, testing in this study is conducted on the validation set.
**ISPRS Potsdam.** The ISPRS Potsdam dataset, made available by ISPRS Commission WG II/4, offers 38 high-resolution aerial images with a ground sampling distance of 0.05 m. These images have a size of 6000 x 6000 pixels, with 24 images assigned to the training subset and the remaining 14 to the test subset. Fine-grained annotations span 6 categories: impervious surfaces, building, low vegetation, tree, car, and clutter. For the experiments, this section employs images composed of the Near-Infrared, Red and Green spectral bands as provided by the dataset.

#### IV-C2 Detailed Experimental Settings

The GPU and Docker image are consistent with the prior experiments. Training parameters are tuned to encourage model convergence, with a learning rate of 0.0005, momentum of 0.99 and weight decay of 0.0001. We evaluate our models on the test splits of the iSAID and Potsdam datasets, following the open-source project MMSegmentation.

#### IV-C3 Experimental Result Analysis

Table VIII presents the segmentation accuracy of our method and other state-of-the-art methods on the iSAID dataset. We use two backbone families, the ResNet series and the Swin Tiny series, and train for 80,000 iterations on the iSAID dataset. The analysis shows that, compared with the ResNet series, Swin Tiny as a backbone yields better segmentation with the DeeplabV3+ method. In addition, we verify the effect of the FIFB module: the high-low-frequency-based method surpasses the current base method (DeeplabV3+ (r50)) by 0.29%. Table X shows the segmentation accuracy of our method on the Potsdam dataset in comparison with other state-of-the-art methods. As with iSAID, we use two backbones and train on the Potsdam dataset for 80,000 iterations.
Compared with the ResNet series backbones, Swin Tiny achieves a higher OA with the DeeplabV3+ method. Furthermore, the FIFB module enables our method to outperform the current base method (DeeplabV3+ (r50)) by 0.25%. We compare the parameter counts and computation of different methods; the results for iSAID and Potsdam are presented in Tables IX and XI. The parameters of RingMo-lite are only 34% of those of the RingMo foundation model, while the OA is reduced by only 0.47% and 0.19%, respectively. Compared with the ResNet-50 and Swin Tiny baselines, the parameters of RingMo-lite increase slightly, by 0.7M and 0.03M respectively, while the accuracy improves by 0.17% and 0.12% for iSAID, and 0.21% and 0.04% for Potsdam. Finally, this proves that RingMo-lite has an extremely low computation cost, only 12% of that of RingMo. We show visualization results on samples from Potsdam in Fig. 11. RingMo-lite segments common RS scenes well, from small cars to large constructions.

### _Remote Sensing Change Detection_

Change detection is a critical task in the RS field, with significant implications for many applications. Its main objective is to discern changes occurring within the same geographic area at different points in time. As a key technology for updating geographic data, change detection plays an important role in assessing hazards and monitoring land cover.

#### IV-D1 Dataset Introduction

To validate the efficacy of the proposed methodology, we conduct experiments on the LEVIR-CD [70] dataset.

**LEVIR-CD** is a large-scale binary change detection dataset for RS scenarios. Captured in Texas, USA, the dataset comprises bi-temporal images spanning 2002 to 2018.
The dataset consists of 637 pairs of very-high-resolution (VHR) images with dimensions of 1024 \(\times\) 1024 pixels. The annotations provided in the LEVIR-CD dataset are binary, focusing on areas of change related to building growth and building decline.

#### IV-D2 Detailed Experimental Settings

The GPU and Docker image are consistent with the prior experiments. Training parameters are adjusted to promote model convergence, including a learning rate of 0.0001, momentum of 0.99, and weight decay of 0.01. We evaluate our models on the test split of the LEVIR-CD dataset, following the settings of RingMo.

#### IV-D3 Experimental Result Analysis

Table XII presents the change detection accuracy of our method and other state-of-the-art methods on the LEVIR-CD dataset. We train CNN-based methods, including the FC series, DTCDSCN and STANet, and the ViT-based ChangeFormer, for 200 epochs on the LEVIR-CD dataset. The analysis reveals that Swin Tiny as a backbone outperforms both the ResNet series and the ViT-based methods in change detection effectiveness. In addition, we verify the effectiveness of the FIFB module: the high-low-frequency-based method surpasses the current baseline methods, improving OA by 0.06%. We compare the parameter counts and computation; the results are presented in Table XIII. Our method is based on DiFormer [79], which is built on the Swin Transformer and has a differential feature enhancement module for the RS change detection task. The parameters of RingMo-lite are only 40% of those of the RingMo foundation model, while the OA is reduced by only 0.03%. Compared with the Swin Tiny baseline, the parameters of RingMo-lite increase slightly, by 0.12M, while the accuracy improves by 0.03%.
Finally, this proves that RingMo-lite has an extremely low computation cost, only 17% of that of RingMo.

### _Discussion_

Through a series of experiments and comparisons, we have demonstrated that RingMo-lite, with over 60% fewer parameters than the original RingMo, exhibits only a marginal decrease (less than 2%) in accuracy. Table XIV summarizes these advantages of RingMo-lite over the original RingMo. This achievement is attributed to the FD-MIM pretraining method and the FIFB frequency-domain module, which allow RingMo-lite to approach the performance of large models more closely. Meanwhile, the performance across various tasks also demonstrates the generalization capability of RingMo-lite, which effectively extracts sample characteristics in multiple tasks. RingMo-lite will be highly relevant for lightweight foundation models in future RS tasks. However, RingMo-lite still faces several challenges. _Its performance is not uniformly superior._ RingMo-lite does not surpass RingMo in all tasks; we plan to continue exploring the model structure to achieve higher accuracy. _The device load may still be high._ While RingMo-lite can run on a single GPU in our experiments, it still consumes a substantial amount of graphics memory, and its suitability for on-orbit scenarios requires further validation. _RingMo-lite is not unified enough._ RingMo-lite serves as a foundation model for pretraining and has not achieved unification in the field of lightweight models; future research on lightweight multi-task RS models will be crucial.

## V Conclusion

This paper proposes RingMo-lite, a novel lightweight CNN-Transformer hybrid framework to address the challenge of efficient and accurate RS image interpretation on lightweight platforms.
The framework considers the high-frequency and low-frequency domain characteristics of the major target areas in different RS interpretation tasks and effectively balances global and local feature extraction. Self-supervised pretraining combined with pixel-level FD-MIM learns enriched feature representations from large amounts of data. The proposed method achieves SOTA performance in various RS tasks compared to models of similar size. The proposed RingMo-lite reduces the parameters by over 60% compared to RingMo, with an average accuracy drop of less than 2%. In addition, our work will be integrated into MindSpore in the near future. Moreover, we aim to incorporate richer patterns from various RS image modalities such as optical, SAR, infrared, and hyperspectral data. We will also expand our lightweight framework to cover a wider range of downstream applications, allowing the foundation model to capture more comprehensive and essential feature representations. Furthermore, we plan to design a lightweight foundation model suitable for multiple RS platforms and to solve RS image interpretation tasks more efficiently and accurately.

Fig. 11: Visualization results of the proposed RingMo-lite and the compared methods on the Potsdam dataset. (a) Images; (b) Labels; (c) PSPNet (r50); (d) DeeplabV3+ (r50); (e) DeeplabV3+ (Swin Tiny); (f) RingMo-lite. Our method achieves more accurate segmentation results compared to other methods.
2301.13679
Density of states and spectral function of a superconductor out of a quantum-critical metal
We analyze the validity of a quasiparticle description of a superconducting state at a metallic quantum-critical point (QCP). A normal state at a QCP is a non-Fermi liquid with no coherent quasiparticles. A superconducting order gaps out low-energy excitations, except for a sliver of states for non-s-wave gap symmetry, and at a first glance, should restore a coherent quasiparticle behavior. We argue that this does not necessarily hold as in some cases the fermionic self-energy remains singular slightly above the gap edge. This singularity gives rise to markedly non-BCS behavior of the density of states and to broadening and eventual vanishing of the quasiparticle peak in the spectral function. We analyze the set of quantum-critical models with an effective dynamical 4-fermion interaction, mediated by a gapless boson at a QCP, $V(\Omega) \propto 1/\Omega^\gamma$. We show that coherent quasiparticle behavior in a superconducting state holds for $\gamma <1/2$, but breaks down for larger $\gamma$. We discuss signatures of quasiparticle breakdown and compare our results with the data.
Shang-Shun Zhang, Andrey V. Chubukov
2023-01-31T14:52:41Z
http://arxiv.org/abs/2301.13679v1
# Density of states and spectral function of a superconductor out of a quantum-critical metal ###### Abstract We analyze the validity of a quasiparticle description of a superconducting state at a metallic quantum-critical point (QCP). A normal state at a QCP is a non-Fermi liquid with no coherent quasiparticles. A superconducting order gaps out low-energy excitations, except for a sliver of states for non-s-wave gap symmetry, and at a first glance, should restore a coherent quasiparticle behavior. We argue that this does not necessarily hold as in some cases the fermionic self-energy remains singular slightly above the gap edge. This singularity gives rise to markedly non-BCS behavior of the density of states and to broadening and eventual vanishing of the quasiparticle peak in the spectral function. We analyze the set of quantum-critical models with an effective dynamical 4-fermion interaction, mediated by a gapless boson at a QCP, \(V(\Omega)\propto 1/\Omega^{\gamma}\). We show that coherent quasiparticle behavior in a superconducting state holds for \(\gamma<1/2\), but breaks down for larger \(\gamma\). We discuss signatures of quasiparticle breakdown and compare our results with the data. **Introduction.** Metals near a quantum critical point (QCP) display a number of non-Fermi liquid properties like linear-in-\(T\) resistivity, a broad peak in the spectral function near \(k_{F}\) with linear-in-\(\omega\) width, singular behavior of optical conductivity, etc [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. These properties are often thought to be caused by the coupling of fermions to near-gapless fluctuations of an order parameter, which condenses at a QCP [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. The same fermion-boson interaction gives rise to superconductivity near a QCP [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. 
A superconducting order gaps out low-energy excitations, leaving at most a tiny subset of gapless states for a non-\(s-\)wave order parameter. A general belief has been that this restores fermionic coherence. A frequently cited piece of experimental evidence is the observed re-emergence of a quasiparticle peak below \(T_{c}\) in near-optimally doped cuprates (see, e.g., Ref. [43]). From the theory side, the argument is that the fermionic self-energy in a superconductor has a conventional Fermi-liquid form \(\Sigma(\omega)\sim\omega\) at the lowest \(\omega\), in distinction from a non-Fermi-liquid \(\Sigma(\omega)\propto\omega^{a}\) with \(a<1\) in the normal state [44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. In this paper, we analyze theoretically whether fermions in a superconducting state at a QCP can be viewed as well-defined coherent quasiparticles. We argue that this is not necessarily the case, as the fermionic self-energy can still be singular on the real frequency axis immediately above the gap edge. This singularity gives rise to markedly non-BCS behavior of the density of states (DoS) and to broadening and eventual vanishing of the quasiparticle peak. For superconductivity away from a QCP, mediated by a massive boson, numerous earlier studies have found that the spectral function \(A(\mathbf{k},\omega)\) at \(T=0\) has a \(\delta\)-functional peak at \(\omega=(\Delta^{2}+(\xi_{\mathbf{k}}/Z)^{2})^{1/2}\), where \(\xi_{\mathbf{k}}=v_{F}(k-k_{F})\) is a fermionic dispersion (\(v_{F}\) is a Fermi velocity), \(\Delta\) is a superconducting gap, and \(Z\) is an inverse quasiparticle residue. A \(\delta\)-functional peak holds for momenta near the Fermi surface, as long as \(\omega<\Delta+\omega_{0}\), where \(\omega_{0}\) is a mass of a pairing boson in energy units. At larger \(\omega\), fermionic damping kicks in, and the peak broadens. 
The same physics leads to peak-dip-hump behavior of \(A(\mathbf{k},\omega)\) as a function of \(\omega\), observed most spectacularly in the near-optimally doped cuprate Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) (see, e.g., Refs. [54; 55]). At a QCP, the pairing boson becomes massless and \(\omega_{0}\) vanishes. This creates a singular behavior near the gap edge at \(\omega=\Delta\), which holds even when \(\xi_{\mathbf{k}}\) is finite. Simple experimentation shows that there are three possible forms of \(A(\mathbf{k},\omega)\), which we present in Fig. 1: it (i) either vanishes at \(\omega=\Delta\) and has a well-defined peak at \(\omega>\Delta\) whose width at small \(\xi_{\mathbf{k}}\) is parametrically smaller than its energy; or (ii) diverges at \(\omega=\Delta\) but is non-monotonic at larger \(\omega\) and displays a broad maximum at some \(\omega>\Delta\); or (iii) diverges at \(\omega=\Delta\) and monotonically decreases at larger \(\omega\).

Figure 1: Three possible forms of the electronic spectral function \(A(\mathbf{k},\omega)\) at \(T=0\) in a quantum critical superconductor at a small but finite \(k-k_{F}\) and in the absence of impurity broadening. (a): \(A(\mathbf{k},\omega)\) vanishes at \(|\omega|=\Delta\) and has a well-defined peak at \(\omega>\Delta\). (b): \(A(\mathbf{k},\omega)\) diverges at \(|\omega|=\Delta\), but is non-monotonic at larger \(\omega\); the peak in \(A(\mathbf{k},\omega)\) at \(|\omega|>\Delta\) broadens, but still exists. (c): \(A(\mathbf{k},\omega)\) diverges at \(|\omega|=\Delta\), and monotonically decreases at larger \(\omega\). In case (a) fermions can be viewed as well-defined quasiparticles; in case (c) the quasiparticle picture completely breaks down; case (b) is the intermediate one between (a) and (c). 
In the first case, fermions in a quantum-critical superconductor can be viewed as well-defined quasiparticles; in the last case the quasiparticle picture completely breaks down; the second case is the intermediate one between the other two. Our goal is to understand under what circumstances \(A(\mathbf{k},\omega)\) of a quantum-critical superconductor has one of these forms. **Model.** For our study, we consider dispersion-full fermions, Yukawa-coupled to a massless boson. We assume, like in earlier works (see, e.g., Ref. [56]), that the boson is Landau overdamped, and its effective velocity is far smaller than \(v_{F}\). In this situation, the interaction that gives rise to a non-Fermi liquid in the normal state and to superconductivity is a purely dynamical \(V(\Omega)\). The fermionic self-energy and the pairing gap, tuned into a proper spatial pairing channel, are then determined by two coupled equations in the frequency domain. At a QCP, \(V(\Omega)\) is singular at vanishing \(\Omega\) in spatial dimension \(D\leq 3\), and behaves as \(V(\Omega)\propto(\bar{g}/\Omega)^{\gamma}\), where \(\bar{g}\) is the effective fermion-boson coupling, and the exponent \(\gamma\) is determined by the underlying microscopic model. The most studied models of this kind are of fermions near an Ising-nematic or Ising/ferromagnetic QCP (\(\gamma=1/3\)) and near an antiferromagnetic or charge density wave QCP (\(\gamma=1/2\)). The same effective interaction emerges for dispersion-less fermions in a quantum dot coupled to Einstein bosons (the Yukawa-SYK model) [57; 58; 59; 60]. For this last case, the exponent \(\gamma\) is a continuous variable \(\gamma\in(0,1)\), depending on the ratio of fermion and boson flavors. An extension of the Yukawa-SYK model to \(\gamma\in(1,2)\) has recently been proposed [61]. We follow these works and consider \(\gamma\) as a continuous variable. 
We note that the value of \(\gamma\) is generally larger deep in a superconducting state because of feedback from superconductivity on the bosonic polarization. For simplicity, we neglect potential in-gap states associated with non-\(s\)-wave pairing symmetry and focus on the spectral function of fermions away from the nodal points and on features in the density of states (DoS) above the gap edge. An extension to models with in-gap states is straightforward. In previous studies of the \(\gamma\)-model, we focused on the novel superconducting behavior at \(\gamma>1\), when the pairing interaction is attractive on the Matsubara axis, while on the real axis \(\mathrm{Re}V(\Omega)\) is repulsive [62; 63]. We argued that this dichotomy gives rise to phase slips of the gap function on the real axis. Here, we restrict ourselves to \(\gamma\leq 1\), when this physics is not present and, hence, does not interfere with the analysis of the validity of a quasiparticle description in a superconducting state. **Pairing gap and quasiparticle residue.** For superconductivity mediated by a dynamical interaction, the pairing gap \(\Delta(\omega)\) and the inverse quasiparticle residue \(Z(\omega)\) are functions of the running real fermionic frequency \(\omega\). We define the gap edge \(\Delta\) (often called the gap) from the condition \(\Delta(\omega)=\omega\) at \(\omega=\Delta\). For our purposes, it is convenient to introduce \(D(\omega)=\Delta(\omega)/\omega\). The gap edge is at \(|D|=1\). The equation for \(D(\omega)\) that we need to solve is \[\omega B(\omega)D(\omega)=A(\omega)+C(\omega), \tag{1}\] where \(B(\omega)\) and \(A(\omega)\) are regular functions of \(\omega\) (see [64; 65]). The \(C(\omega)\) term depends on the running \(D(\omega)\), \[C(\omega)=\bar{g}^{\gamma}\sin\frac{\pi\gamma}{2}\int_{0}^{\omega}\frac{d\Omega}{\Omega^{\gamma}}\frac{D(\omega-\Omega)-D(\omega)}{\sqrt{D^{2}(\omega-\Omega)-1}}. \tag{2}\] Its presence makes Eq. (S3) an integral equation. 
The inverse residue \(Z(\omega)\) is expressed via \(D(\omega^{\prime})\) as \[Z(\omega)=B(\omega)+\frac{\bar{g}^{\gamma}\sin\frac{\pi\gamma}{2}}{\omega}\int_{0}^{\omega}\frac{d\Omega}{\Omega^{\gamma}}\frac{1}{\sqrt{D^{2}(\omega-\Omega)-1}} \tag{3}\] and is readily obtained once \(D(\omega)\) is known. At \(\gamma=0\), which models a BCS superconductor, \(C(\omega)=0\) and \(D(\omega)=A(\omega)/(\omega B(\omega))\) is a regular function of frequency. Near the gap edge at \(\omega>0\), \(D(\omega)-1\sim\omega-\Delta\) and \(Z(\omega)\approx Z(\Delta)\equiv Z\). We assume and then verify that \(D(\omega)\) remains regular in some range of \(\gamma>0\). Substituting \(D(\omega)-1\sim\omega-\Delta\) into (S7) for \(\gamma>0\), we obtain \(C(\omega)-C(\Delta)\sim(\omega-\Delta)^{3/2-\gamma}\). We see that \(C(\omega)\) is non-analytic near the gap edge, but for \(\gamma<1/2\), the exponent \(3/2-\gamma\) is larger than one. In this situation, the non-analytic term in \(C(\omega)\) generates a non-analytic term in \(D(\omega)\) of order \((\omega-\Delta)^{3/2-\gamma}\), which is smaller than the regular \(\omega-\Delta\) term. Evaluating the prefactors, we obtain slightly above the gap edge, at \(\omega=\Delta+\delta\), \[D^{\prime}(\Delta+\delta)=1+\alpha\delta+A\cos[\pi(3/2-\gamma)]\delta^{3/2-\gamma},\] \[D^{\prime\prime}(\Delta+\delta)=-A\sin[\pi(3/2-\gamma)]\delta^{3/2-\gamma}, \tag{4}\] where \(\alpha\sim 1/\bar{g}\), \(A=\sqrt{\frac{\pi}{2}}\frac{\bar{g}^{\gamma}\sin\frac{\pi\gamma}{2}}{\Delta B(\Delta)}J(\gamma,1)\) and \(J(\gamma,\nu)\) is expressed via Beta functions: \[J(\gamma,\nu)=B(1-\gamma,\gamma-1-\frac{\nu}{2})-B(1-\gamma,\gamma-1+\frac{\nu}{2}). \tag{5}\] For \(\gamma>1/2\), \(3/2-\gamma<1\), and the calculation of \(D(\omega)\) has to be done differently. 
We find after straightforward analysis that the leading \(\delta\)-dependent term in \(D(\Delta+\delta)\) is non-analytic and of order \(\delta^{\nu}\), where \(\nu\) is the solution of \(J(\gamma,\nu)=0\). The exponent \(\nu\approx 1+0.67(\gamma-1/2)\) for \(\gamma\approx 1/2\) and \(\nu\approx 1.3\) for \(\gamma=1\). The subleading term in \(D(\Delta+\delta)\) scales as \(\delta^{\nu+c}\), where \(c>0\) is approximately linear in \(\gamma-1/2\). In Fig. 2, we plot \(\nu(\gamma)\) and \(c(\gamma)\) along with the numerical results of \(D(\omega)\) for a representative \(\gamma=0.8\). The exponent \(\nu\) extracted from this numerical \(D(\omega)\) is \(1.18\), which matches perfectly with the analytical result. The behavior at \(\gamma=1/2\) is special, and we discuss it in Ref. [64]. Substituting \(D(\Delta+\delta)\) into the formula for \(Z(\omega)\), Eq. (S8), we obtain \[Z^{\prime}(\Delta+\delta)=Z(\Delta)+B\cos(\pi(\gamma+\nu/2-1))\delta^{1-\gamma-\nu/2}, \tag{6}\] \[Z^{\prime\prime}(\Delta+\delta)=B\sin(\pi(\gamma+\nu/2-1))\delta^{1-\gamma-\nu/2}, \tag{7}\] where \(B=\frac{\bar{g}^{\gamma}\sin\frac{\pi\gamma}{2}}{\Delta\sqrt{2\alpha}}B(1-\gamma,\frac{\nu}{2}+\gamma-1)\). For \(\gamma<1/2\), \(Z(\omega)=Z(\Delta)+O(\delta^{1/2-\gamma})\) is approximately a constant near the gap edge. For \(\gamma>1/2\), the inverse residue diverges at the gap edge, indicating a qualitative change in the system behavior. **Spectral function and DoS.** The spectral function and the DoS per unit volume are given by \[A(\mathbf{k},\omega)=-\frac{1}{\pi}\text{Im}G_{R}(\mathbf{k},\omega),\] \[N(\omega)=\frac{1}{V}\sum_{\mathbf{k}}A(\mathbf{k},\omega)=N_{F}\omega\text{Im}\sqrt{\frac{1}{\Delta^{2}(\omega)-\omega^{2}}}, \tag{8}\] where the retarded Green's function \(G_{R}(\mathbf{k},\omega)=-(\omega Z(\omega)+\xi_{\mathbf{k}})/(\xi_{\mathbf{k}}^{2}+(\Delta^{2}(\omega)-\omega^{2})Z^{2}(\omega))\). 
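The condition \(J(\gamma,\nu)=0\), with \(J\) built from Beta functions as in Eq. (5), can be checked numerically. A minimal sketch (function names are illustrative; it uses the fact that for \(1/2<\gamma<1\) the root lies between \(\nu=1\) and the pole of the first Beta function at \(\nu=2\gamma\), and that Python's `math.gamma` accepts negative non-integer arguments):

```python
from math import gamma


def beta(a, b):
    # Euler Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b).
    return gamma(a) * gamma(b) / gamma(a + b)


def J(g, nu):
    # Eq. (5): J(gamma, nu) = B(1-g, g-1-nu/2) - B(1-g, g-1+nu/2).
    return beta(1.0 - g, g - 1.0 - nu / 2.0) - beta(1.0 - g, g - 1.0 + nu / 2.0)


def solve_nu(g, tol=1e-10):
    # For 1/2 < g < 1, J < 0 just above nu = 1 and diverges to +infinity as
    # nu -> 2g (where the first Beta argument hits the pole at -1), so the
    # root can be bracketed on (1, 2g) and found by bisection.
    lo, hi = 1.0 + 1e-9, 2.0 * g - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if J(g, mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For \(\gamma=0.8\) this bisection returns \(\nu\approx 1.18\), in line with the exponent quoted in the text, and \(\nu\) grows with \(\gamma\) across \((1/2,1)\).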
ARPES intensity is proportional to \(A(\mathbf{k},\omega)n_{F}(\omega)\), which at \(T=0\) selects negative \(\omega\). At \(\gamma=0\) (BCS limit), \(N(\omega)\sim 1/(\omega-\Delta)^{1/2}\), and the spectral function has a \(\delta\)-functional peak at \(\omega=(\Delta^{2}+(\xi_{\mathbf{k}}/Z)^{2})^{1/2}\). In Fig. 2 (c,d), we show the DoS \(N(\omega)\), obtained from the numerical solution of the full gap equation (S3) for representative \(\gamma=0.35\) and \(0.8\). We see that in both cases the DoS describes a gapped continuum, but there is a qualitative difference in the behavior near the gap edge: for \(\gamma=0.35\), \(N(\omega)\) has the same \(1/\delta^{1/2}\) singularity as for \(\gamma=0\), while for \(\gamma=0.8\) the DoS behaves as \(1/\delta^{0.59}\), which perfectly matches the analytical form \(\delta^{-\nu/2}\), given that \(\nu=1.18\) for \(\gamma=0.8\). The spectral function \(A(\mathbf{k},\omega)\) is shown in Fig. 3. For comparison with ARPES, we set \(\omega\) to be negative: \(\omega=-(\Delta+\delta)\). For any \(\gamma\), there is no frequency range where \(A(\mathbf{k},\omega)\) is a \(\delta\)-function, simply because the bosonic mass vanishes at a QCP. Still, for \(\gamma<1/2\), \(D(-(\Delta+\delta))-1\propto\delta\) and \(Z(-(\Delta+\delta))\approx Z(-\Delta)=Z(\Delta)\). In this situation, the spectral weight on the Fermi surface, integrated over an infinitesimally small range around \(\omega=-\Delta\) immediately above the real axis, is finite, like in the BCS case. Away from the Fermi surface, the spectral function vanishes as \(|\omega+\Delta|^{1/2-\gamma}\) at the gap edge and displays a quasiparticle peak at \(\omega\approx-(\Delta^{2}+(\xi_{\mathbf{k}}/Z(\Delta))^{2})^{1/2}\). The peak is well defined at small \(\delta\) as its width \(O(\delta^{1/2-\gamma})\) is parametrically smaller than its frequency. This is the same behavior as in Fig. 1 (a). For \(\gamma>1/2\), the situation is qualitatively different. 
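In the BCS limit quoted above, Eq. (8) with a frequency-independent gap \(\Delta(\omega)=\Delta\) reduces to \(N(\omega)=N_F\,\omega\,\mathrm{Im}[1/\sqrt{\Delta^2-\omega^2}]\), which can be evaluated with the retarded prescription \(\omega\to\omega+i0\). A small numerical sketch (normalization \(N_F=1\) and the infinitesimal `eta` are assumptions of the illustration):

```python
import cmath


def dos_bcs(omega, delta, nf=1.0, eta=1e-12):
    # Eq. (8) with Delta(omega) = Delta = const. The retarded prescription
    # omega -> omega + i*0 pushes Delta^2 - omega^2 infinitesimally below the
    # real axis, selecting the branch with Im[1/sqrt(...)] > 0 above the gap:
    # N(omega)/N_F = omega / sqrt(omega^2 - Delta^2) for omega > Delta,
    # and 0 inside the gap, reproducing the 1/delta^{1/2} edge singularity.
    return nf * (omega / cmath.sqrt(complex(delta**2 - omega**2, -eta))).imag
```

Inside the gap (`omega < delta`) the result is zero up to numerical noise, and just above the edge the characteristic inverse-square-root divergence appears.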
Now \(Z(-\Delta-\delta)\) diverges at \(\delta\to 0\) and \(D(-\Delta-\delta)-1\sim|\delta|^{\nu}\ll|\delta|\). In this case, the integral of \(A(\mathbf{k}_{F},\omega)\) over an infinitesimally small range around \(\omega=-\Delta\) vanishes, which can be interpreted as a vanishing of the quasiparticle peak. At finite \(\xi_{\mathbf{k}}\), the spectral function diverges at the gap edge as \(1/|\omega+\Delta|^{\nu/2+\gamma-1}\). For \(\gamma\) slightly above \(1/2\), \(A(\mathbf{k},\omega)\) is non-monotonic and possesses a broad maximum at \(|\omega+\Delta|\sim\left(\xi_{\mathbf{k}}/\bar{g}^{\gamma}\right)^{\frac{1}{1-\gamma}}\). This is the same behavior as in Fig. 1 (b). For larger \(\gamma\), the maximum disappears, and \(A(\mathbf{k},\omega)\) monotonically decreases at \(|\omega|>\Delta\). This is the same behavior as in Fig. 1 (c). For small \(\xi_{\mathbf{k}}\), the maximum disappears at \(\gamma\sim 0.9\). For larger \(\xi_{\mathbf{k}}\), it disappears at smaller \(\gamma\), first for positive \(\xi_{\mathbf{k}}\) (see Fig. 4). **Comparison with ARPES.** The behavior shown in Fig. 4 is our result in some range of \(\gamma>1/2\). For positive \(\xi_{\mathbf{k}}\) (i.e., outside the Fermi surface), the spectral function has a single non-dispersing maximum at the gap edge, except for the smallest \(\xi_{\mathbf{k}}\), while for negative \(\xi_{\mathbf{k}}\), \(A(\mathbf{k},\omega)\) has a kink at the gap edge \(\omega=-\Delta\) and a dispersing maximum at \(\omega=-\Delta-O\left(|\xi_{\mathbf{k}}|^{1/(1-\gamma)}\right)\). This behavior is consistent with the ARPES data for Bi2201, Ref. [66]. The data shows that the spectral function near the antinode, where our analysis is valid, displays an almost non-dispersing maximum at positive \(\xi_{\mathbf{k}}\), while for negative \(\xi_{\mathbf{k}}\) it displays a non-dispersing kink at the same energy and a dispersing maximum at larger \(|\omega|\). 
We associate the non-dispersing feature at both positive and negative \(\xi_{\mathbf{k}}\) with the gap edge \(\Delta\), and associate the dispersing maximum, observed in [66] at \(\xi_{\mathbf{k}}<0\), with the dispersing maximum in Fig. 4. **Discussion and summary.** In this work, we analyzed the applicability of a quasiparticle description of a superconducting state which emerges out of a non-Fermi liquid at a metallic QCP. We considered the model with an effective dynamical 4-fermion interaction \(V(\Omega)\propto 1/\Omega^{\gamma}\), mediated by a gapless boson at a QCP, and analyzed the spectral function and the DoS for \(\gamma\in(0,1)\). The interaction \(V(\Omega)\) gives rise to a non-Fermi liquid in the normal state with self-energy \(\Sigma(\omega)\propto\omega^{1-\gamma}\) and to pairing below some finite \(T_{c}\). A superconducting order gaps out low-energy excitations and, at a first glance, should restore fermionic coherence. We found, however, that this holds only for \(\gamma<1/2\). For larger \(\gamma\), the spectral function and the DoS exhibit qualitatively different behavior (different power laws) than in a superconductor with coherent quasiparticles. We argued that the quasiparticle peak broadens and completely disappears for \(\gamma\) close to one. Away from a QCP, a pairing boson is massive, and at the lowest energies a Fermi-liquid description holds already in the normal state and continues to hold in a superconductor. In particular, in the immediate vicinity of the gap edge, the system displays a BCS-like behavior for all \(\gamma\). Still, the system behavior over a broad frequency range is governed by the physics at a QCP, as numerous experiments on the cuprates and other correlated systems indicate. We argued that our results are quite consistent with the ARPES data for Bi2201 [66; 11]. We acknowledge with thanks useful conversations with a number of our colleagues. This work was supported by the U.S. 
Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0014402.
2309.00027
A Sequential Framework for Detection and Classification of Abnormal Teeth in Panoramic X-rays
This paper describes our solution for the Dental Enumeration and Diagnosis on Panoramic X-rays Challenge at MICCAI 2023. Our approach consists of a multi-step framework tailored to the task of detecting and classifying abnormal teeth. The solution includes three sequential stages: dental instance detection, healthy instance filtering, and abnormal instance classification. In the first stage, we employed a Faster-RCNN model for detecting and identifying teeth. In subsequent stages, we designed a model that merged the encoding pathway of a pretrained U-net, optimized for dental lesion detection, with the Vgg16 architecture. The resulting model was first used for filtering out healthy teeth. Then, any identified abnormal teeth were categorized, potentially falling into one or more of the following conditions: embedded, periapical lesion, caries, deep caries. The model performing dental instance detection achieved an AP score of 0.49. The model responsible for identifying healthy teeth attained an F1 score of 0.71. Meanwhile, the model trained for multi-label dental disease classification achieved an F1 score of 0.76. The code is available at https://github.com/tudordascalu/2d-teeth-detection-challenge.
Tudor Dascalu, Shaqayeq Ramezanzade, Azam Bakhshandeh, Lars Bjorndal, Bulat Ibragimov
2023-08-31T13:47:01Z
http://arxiv.org/abs/2309.00027v2
# A Sequential Framework for Detection and Classification of Abnormal Teeth in Panoramic X-rays ###### Abstract This paper describes our solution for the Dental Enumeration and Diagnosis on Panoramic X-rays Challenge at MICCAI 2023. Our approach consists of a multi-step framework tailored to the task of detecting and classifying abnormal teeth. The solution includes three sequential stages: dental instance detection, healthy instance filtering, and abnormal instance classification. In the first stage, we employed a Faster-RCNN model for detecting and identifying teeth. In subsequent stages, we designed a model that merged the encoding pathway of a pretrained U-net, optimized for dental lesion detection, with the Vgg16 architecture. The resulting model was first used for filtering out healthy teeth. Then, any identified abnormal teeth were categorized, potentially falling into one or more of the following conditions: embedded, periapical lesion, caries, deep caries. The model performing dental instance detection achieved an AP score of 0.49. The model responsible for identifying healthy teeth attained an F1 score of 0.71. Meanwhile, the model trained for multi-label dental disease classification achieved an F1 score of 0.76. The code is available at [https://github.com/tudordascalu/2d-teeth-detection-challenge](https://github.com/tudordascalu/2d-teeth-detection-challenge). Keywords: Multi-label object detection, Panoramic X-ray ## 1 Introduction This article provides an overview of our solution submitted to the Dental Enumeration and Diagnosis on Panoramic X-rays Challenge held at MICCAI 2023 [1]. ## 2 Method We proposed a multi-step framework designed for the detection and classification of abnormal teeth. This system consisted of three sequential stages: detection of dental instances, filtering of healthy instances, and classification of abnormal instances (Figure 1). 
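The three sequential stages can be sketched as a simple driver loop. This is an illustrative skeleton only: the three callables stand in for the trained Faster-RCNN detector, the binary healthy/abnormal filter, and the multi-label condition classifier described in the text.

```python
def detect_and_classify(xray, detect_teeth, is_healthy, classify_conditions):
    """Three-stage pipeline sketch.

    Stage 1: detect and number tooth instances (Faster-RCNN stand-in).
    Stage 2: discard teeth that the binary filter deems healthy.
    Stage 3: multi-label classification of the remaining abnormal teeth.
    All callables are hypothetical stand-ins for the trained models.
    """
    findings = {}
    for tooth_id, crop in detect_teeth(xray):
        if is_healthy(crop):
            continue  # healthy instances are filtered out
        findings[tooth_id] = classify_conditions(crop)  # e.g. ["caries"]
    return findings
```

With stub models, a tooth labeled healthy is dropped while abnormal teeth propagate to the classifier, mirroring the data flow of Figure 1.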
In the initial phase, we employed a Faster-RCNN to identify dental instances from 2D panoramic X-rays, classifying them by quadrant and tooth number [2]. The model's outputs were bounding boxes, each paired with a confidence score and an encoded value representing the tooth and quadrant numbers. In the following stages, we utilized a U-net model to segment both caries and periapical lesions from cropped tooth images [3]. The model was trained to minimize the mean of the Binary Cross Entropy loss and Dice loss. The encoding path of the trained U-net was integrated into the models used for (2) filtering out healthy instances and (3) distinguishing abnormal instances. These models presented a unified architecture, combining the U-net's encoding path with the Vgg16's feature extraction path [4]. Subsequently, the classification path of the Vgg16 classifier was adapted to handle the combined feature set sourced from both the U-net and Vgg16. The objective function used for training the models was the Binary Cross Entropy. The model designed to differentiate between healthy and abnormal teeth required cropped teeth images as input. It produced a binary label indicating the presence or absence of abnormalities. Teeth identified as abnormal were then processed by another model focused on their classification. This classification model was trained for multi-label categorization to accommodate teeth with multiple conditions. Figure 1: The multi-step pipeline employed for detecting abnormal teeth. Initially, the Faster-RCNN (A) detects all teeth. Subsequently, a hybrid model incorporating features from both U-net and Vgg16 architectures (B) filters out healthy teeth. Finally, a model of the same architecture (B) classifies the abnormal teeth. ## 3 Experiment and results ### Data The dataset used in the present work was introduced in the Dental Enumeration and Diagnosis on Panoramic X-rays Challenge. 
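The U-net's objective, described above as the mean of the Binary Cross Entropy loss and the Dice loss, can be sketched in pure Python over flattened probability maps. The equal 50/50 weighting and the smoothing constant `eps` are assumptions of this sketch; the paper's exact implementation may differ.

```python
from math import log


def bce_dice_loss(pred, target, eps=1e-7):
    # Binary cross-entropy, averaged over pixels; eps guards log(0).
    bce = -sum(t * log(p + eps) + (1 - t) * log(1 - p + eps)
               for p, t in zip(pred, target)) / len(pred)
    # Soft Dice loss: 1 - 2|P ∩ T| / (|P| + |T|), with eps for stability.
    inter = sum(p * t for p, t in zip(pred, target))
    dice = 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)
    # Mean of the two terms, as stated in the text.
    return 0.5 * (bce + dice)
```

A perfect segmentation drives both terms toward zero, while a fully inverted prediction is penalized heavily by both.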
We leveraged a subset of 1039 panoramic X-rays, with 634 X-rays annotated with bounding boxes, tooth numbers, and quadrant numbers for all teeth, and 705 X-rays annotated with bounding boxes, tooth numbers, quadrant numbers, and disease types for abnormal teeth. The images were subjected to on-the-fly augmentation techniques such as horizontal flipping, brightness and contrast modifications, affine transformations, and random cutouts with dimensions reaching up to 80x80 pixels. ### Results In this section, the metrics presented include Average Precision at varying IoU thresholds for object detection and the F1 score for classification tasks. The results of our experiment are presented in Table 1. The Faster-RCNN model achieved an AP score of 0.49. The model differentiating healthy from unhealthy teeth obtained an F1 score of 0.71, while the classifier for abnormal teeth registered an F1 score of 0.76.
2309.07990
Leveraging Contextual Information for Effective Entity Salience Detection
In text documents such as news articles, the content and key events usually revolve around a subset of all the entities mentioned in a document. These entities, often deemed as salient entities, provide useful cues of the aboutness of a document to a reader. Identifying the salience of entities was found helpful in several downstream applications such as search, ranking, and entity-centric summarization, among others. Prior work on salient entity detection mainly focused on machine learning models that require heavy feature engineering. We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches. To this end, we conduct a comprehensive benchmarking of four publicly available datasets using models representative of the medium-sized pre-trained language model family. Additionally, we show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task's uniqueness and complexity.
Rajarshi Bhowmik, Marco Ponza, Atharva Tendle, Anant Gupta, Rebecca Jiang, Xingyu Lu, Qian Zhao, Daniel Preotiuc-Pietro
2023-09-14T19:04:40Z
http://arxiv.org/abs/2309.07990v2
# Leveraging Contextual Information for Effective Entity Salience Detection ###### Abstract In text documents such as news articles, the content and key events usually revolve around a subset of all the entities mentioned in a document. These entities, often deemed as salient entities, provide useful cues of the aboutness of a document to a reader. Identifying the salience of entities was found helpful in several downstream applications such as search, ranking, and entity-centric summarization, among others. Prior work on salient entity detection mainly focused on machine learning models that require heavy feature engineering. We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches. To this end, we conduct a comprehensive benchmarking of four publicly available datasets using models representative of the medium-sized pre-trained language model family. Additionally, we show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task's uniqueness and complexity. ## 1 Introduction Many NLP studies have highlighted the importance of entities to understanding the semantics of a document (Wu et al., 2020; Meij et al., 2012). Automatically identifying entities in unstructured text documents and linking them to an underlying knowledge base, such as Wikipedia, is one of the core NLP tasks, with multiple shared tasks (Tjong Kim Sang and De Meulder, 2003; Strauss et al., 2016), benchmarks (Hoffart et al., 2011; Hovy et al., 2006; Pradhan et al., 2013; Rijhwani and Preotiuc-Pietro, 2020; Derczynski et al., 2016), and studies (Kolitsas et al., 2018; Nguyen et al., 2014) dedicated to solving them. Although an entity may play a crucial semantic role in document understanding, not all entities in a text document play equal roles. 
Some entities are the central subjects or actors within a document, around which the content and the key events revolve. Others are mentioned only to provide additional context to the main event. For example, some entities may be actors in peripheral events, while others are deemed uninformative to the understanding of the document. Thus, _entity salience_ in a text is defined as a binary or ordinal rating to quantify the extent to which a target entity is central to a given piece of text (Gamon et al., 2013; Dunietz and Gillick, 2014). Figure 1 provides an example text along with the mentioned entities and their salience. We note that the salience of an entity to a text is independent of the user's interest when reading or searching the document (Gamon et al., 2013), which is usually referred to as _entity relevance_. It is also distinct from _entity importance_, which quantifies the overall importance of the entity independent of the document. Automatically inferring entity salience was shown to aid search (Gamon et al., 2013), improve ranking results (Xiong et al., 2018), entity detection (Trani et al., 2018), and enable entity-centric applications such as entity-centric summarization (Maddela et al., 2022). Figure 1: An example of a document with salient and non-salient entities. Entity mentions are highlighted in text. In this paper, we study the effectiveness of Transformer-based Pre-trained Language Models (PLMs) in the task of entity salience detection. Prior work on determining entity salience relied on heavy feature engineering to craft features explicitly covering relevant aspects, such as entity frequency Dunietz and Gillick (2014); Dojchinovski et al. (2016), position of entity mentions within a document Dunietz and Gillick (2014); Trani et al. (2018), relations to other entities Trani et al. (2018), document features, such as its length Gamon et al. (2013) and lexical features, such as the name of the entity or its context. 
Only a single recent work attempted to use PLMs in a pipeline which included key entity detection, although the scope of the evaluation was limited to a single high-performing dataset Zhao et al. (2021). In contrast, our proposed method uses a cross-encoder architecture where a target entity's name or alias and its contextual mentions in a text document are encoded by a PLM encoder. The classifier uses the contextual representation and, optionally, positional information about the entity encoded through the decile position embedding vector of mentions to determine the salience score of a target entity. We conduct experiments on four publicly available datasets, two of which were human annotated and two that were curated semi-automatically. We fine-tune several cross-encoders using PLMs and demonstrate that these yield consistent and significant improvements over feature-based methods, as well as prompting instruction-tuned PLMs. The latter shows the novelty and complexity of the task of entity salience detection, which requires the model to learn significant task-specific semantic knowledge for this natural language understanding task. Our contributions in this paper are the following: * Fine-tuned cross-encoder models that yield improvements of up to 24.4 F1 score over previous feature engineering approaches; * We establish a uniform benchmark of two human annotated and two semi-automatically curated datasets for the task of entity salience detection that we expect to be beneficial to future study of this task; * A faceted analysis of the models' predictive behaviour. ## 2 Related Work Understanding the aboutness of a document is one of the long-standing goals of research in both Information Retrieval and Natural Language Processing Gamon et al. (2013). Several types of approaches have been proposed, including extracting key-terms Hulth (2003); Mihalcea and Tarau (2004), identifying latent topics Blei et al. (2003), or generating text summaries Erkan and Radev (2004). 
There has been a recent focus on using entities to understand the content of a document. Towards this goal, the task of entity salience was first described for web pages in Gamon et al. (2013) and for news content in Dunietz and Gillick (2014). This task can be viewed as a restricted form of keyword or keyphrase extraction Alami Merrouni et al. (2020) if salience is binary. For the rest of this study, we will use the concept of salience as described in Gamon et al. (2013). The salience labels for entities were obtained either by crowdsourcing labels from multiple raters to identify salient entities Gamon et al. (2013); Dojchinovski et al. (2016); Trani et al. (2018); Maddela et al. (2022) or by using proxies. For example, Dunietz and Gillick (2014) hypothesize that salient entities are those that appear in the article's abstract. Wu et al. (2020) identify an entity as salient if the Wikinews category that corresponds to the entity is also labeled as a category of the article. Past studies mostly proposed machine learning methods that rely on hand-crafted features to infer the salience of a given entity. Features that can be computed from the target entity mentions and the document alone can be categorized into the following: positional (e.g., position in the document, whether the entity is in the abstract) Dunietz and Gillick (2014), count-based (e.g., number of references to the entity) Dunietz and Gillick (2014); Wu et al. (2020), local context Trani et al. (2018), or global context Ponza et al. (2019). Further, joint entity salience resolution can be performed by creating features using the entity graph (e.g., centrality in the entity graph) Dunietz and Gillick (2014); Trani et al. (2018). Finally, past work also showed that incorporating external knowledge about entities from knowledge bases can boost predictive performance Dojchinovski et al. (2016).
Automatically inferring salience for entities can directly benefit multiple downstream applications, such as improving ranking results for queries containing entities Xiong et al. (2018) or improving the performance of entity detection by joint modelling (Trani et al., 2018). Moreover, by inferring salience, new entity-centric applications can be built, such as highlighting salient entities in search (Gamon et al., 2013), improving the interpretability of news trends through salient entities (Ponza et al., 2021), or identifying entities for creating entity-centric summaries of news stories (Maddela et al., 2022; Hofmann-Coyle et al., 2022).

## 3 Problem Definition

We use the concept of salience as introduced in (Gamon et al., 2013): salient entities are entities explicitly mentioned in the document that are objectively important as a function of the structure of the text. The goal of the salience model is to produce a single salience score \(\psi(e)\) for the entity \(e\) using only the document \(D\) and the explicit entity mentions \(\mathcal{M}_{e}\). We consider using external knowledge, such as information about entities from knowledge bases, to be outside the scope and leave integration of such knowledge for future work.

## 4 Methods

Pre-trained Language Models (PLMs) have shown a remarkable ability to encode syntactic and semantic knowledge in their parameters (Tenney et al., 2018; Tenney et al., 2019) that can be leveraged when fine-tuned on downstream natural language understanding (NLU) tasks. We postulate that PLMs can be harnessed for entity salience detection, a target-based document-level NLU task. In this section, we present an architecture based on the cross-encoder setup adapted to the task of entity salience detection.

### Cross-encoder

**Encoding** Given a document \(D\) and a target entity \(e\), which is mentioned in the document, we concatenate the target entity's name and the document using a special [SEP] token.
We then encode the text using a Transformer-based pre-trained encoder. Figure 2 shows the graphical representation of the cross-encoder model. This setup allows the model to have deep cross attention between the target entity and the entire document. Note that we do not explicitly use the mention information \(\mathcal{M}_{e}\) in this modeling approach and rely on cross-attention to use mention information implicitly.

**Position Encoding** We compute the decile position for each entity mention (\(m\in\mathcal{M}_{e}\)) in the document \(D\) by taking a positional index \(p_{m}\in\{0,1,\dots,9\}\), indicating which part of the document the mention belongs to if the document is partitioned into \(10\) equal chunks. Depending on the number and positions of the mentions, multiple decile indices can be active for a single entity. To obtain positional embeddings, we use an embedding layer that maps positional indices to a dense vector of dimension \(d_{model}\), formally \(\mathbf{h}_{pe}(m)=\texttt{Embedding}(p_{m})\).

**Scoring** The output representation of the [CLS] token is concatenated with the mean position embedding vector \(\mathbf{h}_{pe}\) and fed to a scorer module that produces a salience score \(\psi(e)\in[0,1]\) for entity \(e\). The salience scorer is a feed-forward network with a sigmoid scoring function head. Formally, \[\psi(e)=\sigma(\texttt{FFN}(\mathbf{h}_{\texttt{[CLS]}}||\mathbf{h}_{pe}))\]

### Optimization

We fine-tune the model described above by minimizing the binary cross entropy loss that is calculated using the ground truth binary salience labels and the predicted salience score \(\psi(e)\).

## 5 Datasets

In this section, we describe our entity salience benchmark, which consists of four datasets: two datasets were curated using semi-automated methods and two used human annotations. We provide summary statistics of these datasets and label collection methods in Table 1.
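As an illustration of the input construction and decile position indexing described above, here is a minimal Python sketch (function and variable names are ours, not taken from the paper's released code; the trainable embedding lookup and mean pooling into \(\mathbf{h}_{pe}\) are omitted):

```python
def mention_deciles(mention_offsets, doc_length):
    """Map each mention's start character offset to a decile index
    p_m in {0, ..., 9}, i.e., which tenth of the document the mention
    falls into when the document is split into 10 equal chunks."""
    assert doc_length > 0
    return [min(9, (10 * start) // doc_length) for start in mention_offsets]

def build_input(entity_name, document):
    """Cross-encoder input: target entity name and document joined by a
    separator token, to be tokenized by the PLM."""
    return f"{entity_name} [SEP] {document}"
```

In the full model, the decile indices would each be looked up in an embedding layer and mean-pooled; here we only show the index computation.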
**NYT-Salience** This dataset is introduced in (Dunietz and Gillick, 2014) and is the largest dataset to date for entity salience detection. The dataset is curated with an assumption that salient entities are mentioned in the abstract of a news article in the NYT Corpus Sandhaus (2008). Entities and their mentions are identified using a classical NLP pipeline involving POS tagging, dependency parsing, and noun phrase extraction. Despite being large-scale, the automatic dataset creation process could introduce noise, as corroborated by moderate agreement numbers with human annotators on a subset of the data. The dataset contains a binary salience label for each entity.

Figure 2: Graphical representation of the cross-encoder architecture w/ decile position encoding.

**WN-Salience** Introduced in Wu et al. (2020), this is another automatically curated dataset consisting of Wikinews articles. These are annotated with Wikinews categories by their authors. WN-Salience identifies salient entities by using the hypothesis that an entity is salient if the Wikinews category that corresponds to the entity is also labeled as a category of the article. Similar to NYT-Salience, this dataset has binary salience labels.

**SEL** This is another dataset based on Wikinews, released by Trani et al. (2018). However, unlike WN-Salience, this dataset is human annotated, where multiple human annotators ranked the salience of entities into one of four categories. To conform with the binary labels of the other datasets, we map the 4 categories into binary labels of \(\{0,1\}\) by mapping the bottom two classes to not salient and the top two classes to salient.

**EntSUM** This dataset was introduced in Maddela et al. (2022). To construct this dataset, a randomly selected set of entities spanning a subset of \(693\) articles from the NYT corpus were assigned salience labels by human annotators on a four-point scale, ranging between \([0,3]\).
For each document-entity pair, two independent annotations were collected, which were increased up to \(5\) in case of disagreements. If the average annotation score is greater than \(1.5\) for an entity, it is assigned a positive salience label.

### Data Enrichment with Inferred Mentions

Except for EntSUM, the other datasets do not have explicit entity mention offsets as annotations, which are necessary for many feature-based approaches and to compute positional embeddings. While SEL contains only the mention surface texts per entity, NYT-Salience and WN-Salience only provide the start and end character indices (aka mention offsets) of the very first mention of an entity. To this end, we infer additional mentions of an entity within the text using a combination of Flair NER Akbik et al. (2019) and pattern matching. For SEL, since the mention surface texts are available, we use a pattern matching approach to infer mention offsets. For NYT-Salience and WN-Salience, we first use Flair NER to identify mentions of named entities in the text. We attempt to match these mentions to the first mention of each entity in the document provided in the respective datasets. Since the surface text of other mentions may differ from the first mention, we additionally use the overlap between a mention's surface text and the entity name to treat it as a candidate mention for that entity. Applying this approach, we infer additional mentions of an entity in the text and their offsets. While this process could introduce some noise, it enhances the overall quality of the datasets.

## 6 Experiments

We experiment on our entity salience benchmark with our proposed PLM-based method, other ML and heuristic-based approaches used in past research, as well as an instruction-tuned PLM.

### Data Splits

Prior works Dunietz and Gillick (2014); Trani et al. (2018); Wu et al. (2020) use inconsistent (or not reported) train/validation/test splits.
The NYT-Salience and WN-Salience datasets are provided with train/test splits (but no validation), whereas the SEL dataset is provided without any splits. This makes it hard to benchmark previous works with a fair comparison across models. To overcome this issue, we do a temporal split of NYT-Salience's and WN-Salience's original training sets into new train/validation sets based on the publication time of the news stories, which provides a more realistic testing setup Huang and Paul (2018); Rijhwani and Preotiuc-Pietro (2020). We also perform a temporal split of the SEL and EntSUM datasets into train/validation/test sets.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Dataset** & **NYT-Salience** & **WN-Salience** & **SEL** & **EntSUM** \\ \hline \# Docs & 110,463 & 6,956 & 365 & 693 \\ Doc Length (avg chars) & 5,079 & 2,106 & 1,660 & 4,995 \\ \# Unique entities & 179,341 & 23,205 & 6,779 & 7,854 \\ \# Mentions & 4,405,066 & 145,081 & 19,729 & 20,784 \\ \% Salient entities & 14\% & 27\% & 10\% & 39\% \\ Ground-truth & Abstract Alignment & Category Alignment & Human & Human \\ \hline \hline \end{tabular} \end{table} Table 1: Summary statistics and label collection methods for the datasets used in our experiments.

### Baselines

First, we list all methods used in past research, for which we report the results from the original papers.

* _First Sentence_. Classifies an entity as salient if it appears in the first sentence of the document's body; used in both Dunietz and Gillick (2014) and Wu et al. (2020).
* _Position & Frequency (Dunietz and Gillick, 2014)_. Feeds the first sentence index and the frequency features of an entity into a logistic regression model.
* _All Features (Dunietz and Gillick, 2014)_. Uses a series of features based on position, frequency, and PageRank signals fed into a logistic regression model.
* _SEL (Trani et al., 2018)_.
Uses a combination of features based on position, frequency, and Wikipedia graph statistics fed into a Gradient Boosted Decision Tree (GBDT) algorithm implemented in sklearn Pedregosa et al. (2011).
* _SWAT (Ponza et al., 2019)_. Uses a set of features similar to the SEL method described above, with the addition of features based on entity embeddings. All features are fed into a GBDT algorithm implemented in XGBoost Chen et al. (2015).
* _Positional Feature (Wu et al., 2020)_. Uses the index of the first sentence in which the entity is mentioned as a feature in a logistic regression model. This method provides the best results on the WN-Salience dataset in Wu et al. (2020).

Next, we re-implement a set of common methods based on the above baselines in order to be able to test them on all four datasets. This ensures the evaluation is performed on the same experimental setup.

* _Positional Headline_. Classifies an entity as salient if it appears in the headline of the input document.
* _Positional Headline & Lead_. Classifies an entity as salient if it appears in the headline of the document or in the first sentence (lead sentence) of the document.
* _Entity Frequency_. Classifies an entity as salient if it is mentioned more frequently than a given threshold. For each dataset, we calculated different thresholds and report the best results. Thresholds can be found in the Appendix.
* _Features & GBDT_. This method uses the most common features from past works Dunietz and Gillick (2014); Wu et al. (2020); Trani et al. (2018); Ponza et al. (2019) -- i.e., the entity's first sentence index and entity frequency -- and feeds them into a GBDT model implemented using LightGBM Ke et al. (2017).
* _SEL GBDT_. Follows the method from Trani et al. (2018) and uses sklearn's GBDT Pedregosa et al. (2011) to train a model on the features provided with the SEL dataset.
* _Target entity masking_.
This method feeds the input to a Transformer-based encoder (RoBERTa-base) with the target entity mentions represented through a special mask token. The salience prediction is obtained by mean pooling the mask token representations and passing this through a feed-forward network.
* _Zero-shot prompting_. We test an instruction-tuned PLM Wei et al. (2021) by zero-shot prompting. The prompt introduces the task description followed by the input text and a target entity, and formulates the output as a yes/no question. The PLM, already instruction-tuned on a large collection of NLU tasks, attempts to provide an answer based on the prompt, input text, and target entity. This family of models has been demonstrated to be robust and versatile on multiple benchmarks Wei et al. (2021). We use _Flan-T5-Large_ Chung et al. (2022), which is comparable in size to the other PLMs we use as base models for the cross-encoders.

### Experimental Setup

We use RoBERTa-base Liu et al. (2019) and DeBERTa-v3-base He et al. (2023) as the base PLMs for our experiments. For each of these base models, we train both a cross-encoder model and a cross-encoder model augmented with decile positional embeddings. For training our proposed models, we use AdamW (Loshchilov and Hutter, 2019) as the optimizer. We perform a hyperparameter search for the learning rate over the following set of values: \(\{0.001,0.0005,0.0002,0.0001,0.00005\}\). We train our models for a maximum of \(10\) epochs with early stopping based on validation set performance, and pick the best performing model checkpoint for each dataset based on the validation set. In Tables 2 and 3, we report the performance of our models and the baselines using the standard classification metrics (i.e., Precision, Recall, and F1) on the positive (salient) class, following previous research on entity salience.
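To make the evaluation setup concrete, a minimal sketch of the two simplest re-implemented heuristics and the positive-class metrics follows (illustrative code of ours, not the paper's implementation):

```python
def headline_or_lead_baseline(first_mention_offset, lead_end_offset):
    """Positional Headline & Lead: salient iff the entity's first mention
    starts before the end of the headline / lead sentence."""
    return first_mention_offset < lead_end_offset

def frequency_baseline(num_mentions, threshold):
    """Entity Frequency: salient iff the entity is mentioned at least
    `threshold` times (the threshold is tuned per dataset)."""
    return num_mentions >= threshold

def positive_prf1(gold, pred):
    """Precision, Recall, and F1 computed on the positive (salient) class."""
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum((not g) and p for g, p in zip(gold, pred))
    fn = sum(g and (not p) for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```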
For training and inference of each Transformer-based model, we use a single NVIDIA V100 GPU with 32GiB GPU memory, 4 CPUs, and 128 GiB of main memory.

### Results

In Tables 2 and 3, we present the experimental results of the baselines and our proposed models on the four datasets described in Section 5.

\begin{table} \begin{tabular}{l|l|l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Source**} & \multirow{2}{*}{**Type**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{NYT-Salience} & \multicolumn{3}{c}{WN-Salience} \\ \cline{4-9} & & & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline (Dunietz and Gillick, 2014) & Heuristic & First Sentence & 59.5 & 37.8 & 46.2 & – & – & – \\ (Dunietz and Gillick, 2014) & ML & Position \& Frequency & 59.3 & 61.3 & 60.3 & – & – & – \\ (Dunietz and Gillick, 2014) & ML & All Features & 60.5 & 63.5 & 62.0 & – & – & – \\ (Ponza et al., 2019) & ML & SWAT & 62.4 & 66.0 & 64.1 & – & – & – \\ (Wu et al., 2020a) & Heuristic & First Sentence & 56.0 & 41.0 & 47.3 & 47.9 & 53.2 & 50.4 \\ (Wu et al., 2020a) & ML & Positional Feature & 19.0 & 41.3 & 26.0 & 29.1 & **78.9** & 42.5 \\ (Wu et al., 2020a) & ML & Features \& GBDT & 39.2 & 59.7 & 47.3 & 29.2 & 48.1 & 36.3 \\ \hline \multirow{6}{*}{Our Implementations} & Heuristic & Positional Headline & 57.5 & 42.0 & 48.5 & 46.1 & 51.5 & 48.7 \\ & Heuristic & Positional Headline \& Lead & 49.8 & 55.4 & 52.5 & 41.0 & 60.0 & 48.7 \\ & Heuristic & Entity Frequency & 53.7 & 53.3 & 53.6 & 37.3 & 61.9 & 46.6 \\ & ML & Features \& GBDT & 61.0 & 57.4 & 59.2 & 46.2 & 53.3 & 49.5 \\ & PLM (RoBERTa) & Target Entity Masking & 64.6 & 50.2 & 56.5 & 57.0 & 65.4 & 60.9 \\ & PLM (Flan-T5) & Zero-shot prompting & 43.0 & 60.6 & 50.3 & 38.8 & 53.2 & 44.9 \\ \hline \multirow{4}{*}{Our Models} & PLM (RoBERTa) & cross-encoder & 75.9 & 87.1 & 81.1 & 71.8 & 73.6 & 72.7 \\ & PLM (DeBERTa) & cross-encoder & 77.5 & 87.4 & **82.1** & 71.5 & 78.3 & **74.8** \\ \cline{1-1} & PLM (RoBERTa) & cross-encoder w/ position emb. & **78.7** & 84.2 & 81.4 & 71.2 & 76.7 & 73.8 \\ \cline{1-1} & PLM (DeBERTa) & cross-encoder w/ position emb. & 75.9 & **88.4** & 81.7 & **73.3** & 76.1 & 74.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Results on the NYT-Salience and WN-Salience datasets. The ground-truth of these datasets was generated via abstract/category alignment. The top section presents results as originally reported in the source papers.

\begin{table} \begin{tabular}{l|l|l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Source**} & \multirow{2}{*}{**Type**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{SEL} & \multicolumn{3}{c}{EntSUM} \\ \cline{4-9} & & & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline (Trani et al., 2018) & ML & SEL (w/ 5-fold cross val.) & 50.0 & 61.0 & 52.0 & – & – & – \\ (Ponza et al., 2019) & ML & SWAT (w/ 5-fold cross val.) & 58.0 & 64.9 & 61.2 & – & – & – \\ \hline \multirow{7}{*}{Our Implementations} & Heuristic & Positional Headline & 26.6 & 78.4 & 39.7 & 60.7 & 18.5 & 28.4 \\ & Heuristic & Positional Headline \& Lead & 22.1 & **87.1** & 35.3 & 51.2 & 31.6 & 39.1 \\ & Heuristic & Entity Frequency & 13.5 & 57.8 & 21.9 & 48.4 & 54.0 & 51.0 \\ & ML & Features \& GBDT & 26.6 & 78.4 & 39.7 & 60.7 & 52.0 & 56.0 \\ & ML & SEL GBDT & **71.1** & 47.8 & 57.1 & – & – & – \\ & PLM (RoBERTa) & Target Entity Masking & 36.3 & 13.8 & 20.0 & 63.0 & 41.7 & 50.2 \\ & PLM (Flan-T5) & Zero-shot prompting & 27.0 & 81.7 & 40.6 & 50.7 & 54.5 & 52.5 \\ \hline \multirow{4}{*}{Our Models} & PLM (RoBERTa) & cross-encoder & 51.6 & 73.6 & 60.6 & 65.5 & 60.6 & **63.0** \\ & PLM (DeBERTa) & cross-encoder & 64.1 & 73.6 & **68.5** & 64.9 & 59.2 & 61.9 \\ \cline{1-1} & PLM (RoBERTa) & cross-encoder w/ position emb. & 63.0 & 69.9 & 66.3 & 67.5 & 57.0 & 61.8 \\ \cline{1-1} & PLM (DeBERTa) & cross-encoder w/ position emb. & 67.3 & 62.4 & 64.7 & **72.1** & 51.5 & 60.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on the SEL and EntSUM datasets. The ground-truth of these datasets was generated via human annotation. The top section presents results as originally reported in the source papers.

**Comparison with feature-based methods.** We observe that the cross-encoder model significantly outperforms all baseline models in F1 score. It also yields better precision compared to the baselines for three of the four datasets. Only for the SEL dataset does the SEL GBDT model, trained on publicly available pre-computed features, produce a model with better precision than the cross-encoder. We observe that adding the decile positional embedding to the cross-encoder improves precision across all datasets, but also degrades recall in every dataset except NYT-Salience. The Target Entity Masking approach, which also leverages contextual information with a Transformer-based model, yields mixed results. Overall, this model is able to obtain better precision than the feature-based models for all datasets except SEL, but it suffers from poor recall across all datasets, resulting in significantly worse F1 scores, especially when compared to the cross-encoder models. Our re-implementations of the positional and GBDT methods are consistent with the performance reported in prior works. The variance in numbers can be attributed to the enrichment of datasets with inferred mentions (Section 5.1) and the explicit train/dev/test data split used in our experiments (Section 6.1).

**Comparison with zero-shot prompting of a large language model.** To the best of our knowledge, this is the first evaluation of zero-shot prompting of instruction-tuned models (here, _Flan-T5-Large_) for the entity salience detection task. We observe that the model is able to provide comparable or better performance than the heuristic-based methods, with the exception of the WN-Salience dataset. However, it falls short of the dedicated cross-encoder model by double-digit margins, showing that this is a distinctive task and requires a dedicated model.
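For illustration, a zero-shot prompt along the lines described above might be constructed as follows. The exact task description and wording used in the experiments are not reproduced here, so this template is an assumption:

```python
def build_salience_prompt(document: str, entity: str) -> str:
    """Hypothetical prompt: a task description, then the input text and a
    target entity, framed as a yes/no question."""
    return (
        "Entity salience detection: a salient entity is one that is central "
        "to the text rather than mentioned in passing.\n\n"
        f"Text: {document}\n\n"
        f"Question: Is \"{entity}\" a salient entity in the text above? "
        "Answer yes or no."
    )

def parse_yes_no(generation: str) -> bool:
    """Map the model's free-text answer to a binary salience label."""
    return generation.strip().lower().startswith("yes")
```

The prompt string would be fed to the instruction-tuned PLM, and the generated answer mapped back to a binary label.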
We further discuss causes for this performance in the Appendix (Section 9.1), along with the implementation details.

## 7 Analysis

In this section, we perform an analysis of model predictions in order to gain more insights into model behavior and understand potential avenues for further improvement. We thus break down performance by different factors including: the importance of inferring all entity mentions, the position of the first entity mention, and entity mention frequency.

### Impact of Inferred Mentions

In Section 5.1, we inferred additional mentions of an entity for the NYT-Salience and WN-Salience datasets. We compare the performance of our best model that leverages multiple mentions of an entity to its version trained with only the first mentions of entities in a document. The results in Table 4 show that doing so consistently improves the performance of our models across all datasets. In particular, for the largest dataset, NYT-Salience, our model achieves a substantial gain of 27.3 F1 points. This experiment showcases the importance of augmenting our datasets with additional mentions and the importance of explicitly modelling contextual information present around all entity mentions.

### Stratified Analysis on First Mention Position

We compare our cross-encoder models against the Features & GBDT model, our re-implemented baseline that relies on the most popular features used in prior works Dunietz and Gillick (2014); Wu et al. (2020); Trani et al. (2018). As shown in the results from Tables 2 and 3, among other features, positional features are most informative for salience. Intuitively, if an entity is mentioned in the headline or in the first sentence of a news article, there is a high probability of that entity being salient. Figure 3 shows that all models perform well when the first mention falls in the headline or the first sentence of the document.
We notice that the cross-encoder models consistently outperform the Features & GBDT model, and the largest gains are observed on the SEL and WN-Salience datasets. This observation indicates that the cross-encoder models are able to use the context to identify that mentions occurring in the headline or the first parts of the document are often salient, without explicitly using this information as a feature. We also investigate the performance of the models when the first mention falls inside or outside the context window of the PLM (here, 512 tokens). When mentions fall inside the context window, we observe that the cross-encoder models consistently outperform the Features & GBDT model. When the mention falls outside the context window, the model predictions become close to random, which is expected, as the model does not have immediate contextual information around the mention. Using models that can deal with longer inputs would be a promising direction for improvement for these samples Beltagy et al. (2020). Interestingly, for WN-Salience, the Features & GBDT model also performs considerably worse outside the first 512 tokens.

### Stratified Analysis on Mention Frequency

Similar to the mention position analysis, we compare our cross-encoder models against the Features & GBDT model, which uses mention frequency as one of its input features. Figure 3 shows how the cross-encoder models and the Features & GBDT model compare with varying frequency of entity mentions. For salient entities with single mentions, the cross-encoder model performs significantly better than the Features & GBDT model. In particular, for the NYT-Salience dataset, the Features & GBDT model fails to predict any of the single-mention entities as salient. This observation indicates that the cross-encoder models do not simply model the mention frequency, but potentially leverage other contextual information to determine the salience of entities with a single mention.
The performance of the Features & GBDT model improves with more mentions per entity. In fact, for the frequency range of 6-10 mentions per entity, the Features & GBDT model performs better than the cross-encoder models on the EntSUM and SEL datasets. This observation indicates an over-reliance of the Features & GBDT model on mention frequency to determine salience, but also that the cross-encoder cannot fully use this heuristic.

## 8 Conclusion

This paper aims to leverage the semantic knowledge encoded in pre-trained language models for entity salience detection. We propose a cross-encoder method based on Transformer-based PLMs with positional representations and compare its performance to several ML-based and heuristic methods, as well as instruction-tuned PLMs, across four different datasets, two human-annotated and two automatically curated. Across all our experiments, the cross-encoder model based on pre-trained language models outperforms all other methods, often with double-digit gains in F1 score. Analyses of model behavior illustrate the important effects of mention frequency, mention position, and document length on performance, highlighting areas of future work.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{NYT-Salience} & \multicolumn{3}{c|}{WN-Salience} & \multicolumn{3}{c|}{SEL} & \multicolumn{3}{c}{EntSUM} \\ \cline{2-13} & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline Cross-encoder w/ first mention & 54.2 & 57.5 & 55.8 & 69.6 & 80.4 & 74.6 & 59.8 & 76.1 & 67.0 & 69.1 & 53.2 & 60.2 \\ Cross-encoder w/ all mentions & 77.5 & 84.4 & 82.1 & 71.5 & 78.3 & 74.8 & 64.1 & 73.6 & 68.5 & 64.9 & 59.2 & 61.9 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison of cross-encoder models with only the first mention vs. all inferred mentions.

Figure 3: Stratified analysis across models and datasets.
2309.12423
Event Prediction using Case-Based Reasoning over Knowledge Graphs
Applying link prediction (LP) methods over knowledge graphs (KG) for tasks such as causal event prediction presents an exciting opportunity. However, typical LP models are ill-suited for this task as they are incapable of performing inductive link prediction for new, unseen event entities and they require retraining as knowledge is added or changed in the underlying KG. We introduce a case-based reasoning model, EvCBR, to predict properties about new consequent events based on similar cause-effect events present in the KG. EvCBR uses statistical measures to identify similar events and performs path-based predictions, requiring no training step. To generalize our methods beyond the domain of event prediction, we frame our task as a 2-hop LP task, where the first hop is a causal relation connecting a cause event to a new effect event and the second hop is a property about the new event which we wish to predict. The effectiveness of our method is demonstrated using a novel dataset of newsworthy events with causal relations curated from Wikidata, where EvCBR outperforms baselines including translational-distance-based, GNN-based, and rule-based LP models.
Sola Shirai, Debarun Bhattacharjya, Oktie Hassanzadeh
2023-09-21T18:46:29Z
http://arxiv.org/abs/2309.12423v1
# Event Prediction using Case-Based Reasoning over Knowledge Graphs

###### Abstract.

Applying link prediction (LP) methods over knowledge graphs (KG) for tasks such as causal event prediction presents an exciting opportunity. However, typical LP models are ill-suited for this task as they are incapable of performing inductive link prediction for new, unseen event entities and they require retraining as knowledge is added or changed in the underlying KG. We introduce a case-based reasoning model, EvCBR, to predict properties about new consequent events based on similar cause-effect events present in the KG. EvCBR uses statistical measures to identify similar events and performs path-based predictions, requiring no training step. To generalize our methods beyond the domain of event prediction, we frame our task as a 2-hop LP task, where the first hop is a causal relation connecting a cause event to a new effect event and the second hop is a property about the new event which we wish to predict. The effectiveness of our method is demonstrated using a novel dataset of newsworthy events with causal relations curated from Wikidata, where EvCBR outperforms baselines including translational-distance-based, GNN-based, and rule-based LP models.
Knowledge Graph Completion, Case-Based Reasoning, Event Prediction

However, a major limitation of most LP methods is that relations can only be predicted among entities that already exist in the KG. A more practical and challenging perspective for this task is to consider LP for entirely new entities (i.e., inductive link prediction). This perspective also better reflects the needs of an event prediction system, which would aim to make predictions about entirely new events. The inductive setting introduces an additional level of difficulty for LP models, as many of them (especially embedding-based ones) operate under the closed world assumption - that is, they can only perform predictions for entities that have been seen during training, and facts that are not seen during training are assumed to be false. Additionally, even for models that are capable of this task, some KGs (such as Wikidata) constantly undergo updates and changes, which can negatively affect a model's performance as its training data becomes outdated. Towards our goal of applying KGs to perform event prediction, we develop a case-based reasoning approach, EvCBR, which leverages examples of past events in the KG to perform LP for the properties of unseen event entities. We frame our problem as a 2-hop link prediction task - e.g., starting from a cause event, we assume the existence of a causal relation to a new effect event and predict its properties. This approach allows us to perform inductive link prediction for the properties of the unseen effect without the need for external data. Importantly, EvCBR requires no training, relying instead on statistical measures and subclass hierarchies to identify similar entities and follow paths through the KG to perform predictions.
This makes EvCBR well-suited to KGs such as Wikidata that frequently experience changes in their structure and content. More generally, we can also apply EvCBR to perform 2-hop LP for properties of any type of unseen entity that is connected to a known entity by some relation. Our contributions are as follows:

1. We introduce a case-based reasoning model, EvCBR,2 for event prediction. Our model considers this task as an LP task in a 2-hop setting, leveraging knowledge about similar cause-effect events to make predictions about the unseen effect. EvCBR requires no training, making it well-suited to handle new events and changes to the underlying KG.
2. Compared to similar work, we introduce novel similarity metrics to identify similar cause-effect event cases as well as a refinement step to improve the precision of predictions.
3. We curate and release a novel dataset surrounding causal events in Wikidata, extracting news events that are connected by causal relations as well as their local connections.3
4. In our 2-hop inductive link prediction task, our model shows superior performance on our event dataset as well as competitive performance on a modified evaluation dataset based on the FB15k-237 dataset.

Footnote 2: We publicly release our code at [https://github.com/solashirai/WWW-EvCBR](https://github.com/solashirai/WWW-EvCBR)

Footnote 3: Dataset available at [https://doi.org/10.5281/zenodo.7196049](https://doi.org/10.5281/zenodo.7196049).

## 2. Related Work

Zhao (Zhao, 2018) presents a comprehensive survey of different kinds of event prediction methods across different domains. Under their taxonomy of event prediction methods (Zhao, 2018, Fig. 3), our method is most closely related to the "causality-based" methods under "semantic prediction" techniques. Other notable work that falls under this class of methods includes Radinsky et al.'s Pundit algorithm Radinsky et al.
(2017) which is based on automated construction of a causality graph through extraction of causal patterns and generalization. Zhao et al. (Zhao et al., 2018) also construct a causal graph from text documents and build embeddings to perform a simple (one-hop) prediction. While prior work has explored applying LP methods to KGs of causal events (Rang et al., 2018; Zhao et al., 2018), to our knowledge, our work is the first to apply 2-hop LP over a knowledge graph for event prediction. In the space of LP in KGs, a significant number of embedding-based methods have been developed as detailed in recent surveys (Zhao et al., 2018; Zhao et al., 2018). Such methods often show trade-offs in performance based on the dataset and how well the model hyperparameters are tuned (Zhao et al., 2018). While GNN-based models (Zhao et al., 2018) have often shown recent state-of-the-art performance, there has been discussion surrounding what semantics are actually captured in such models (Zhao et al., 2018) as well as re-evaluations of _how_ one evaluates such LP models (Bahdan et al., 2017; Zhang et al., 2018; Zhang et al., 2018). Furthermore, most embedding-based models are trained and evaluated in the closed world setting, with a relatively sparse number of recent works such as (Bahdan et al., 2017; Zhang et al., 2018) considering inductive link prediction using additional knowledge sources such as text or hyper-relational facts. While less prevalent than embedding-based models, there are also a variety of rule-based LP models (Bahdan et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhao et al., 2018; Zhao et al., 2018). Methods such as AnyBURL (Zhao et al., 2018) have shown similar performance to state-of-the-art embedding models while requiring significantly less training time (Zhao et al., 2018). Works such as (Bahdan et al., 2017; Zhang et al., 2018) present case-based reasoning models which, like our method, require no training and make path-based predictions. 
Our method differs from these similar works in how we compute entity similarity, how we apply and refine prediction paths, and in our task formulation of performing 2-hop LP.

## 3. Problem Formulation

Our work considers the application of LP methods over a knowledge graph for the task of event prediction. We define a knowledge graph \(G=(E,R,T)\) as consisting of a set of entities \(E\), a set of relations \(R\), and a set of triples \(T\). \(T\) consists of triples of the form \((h,r,t)\), where \(h,t\in E\) and \(r\in R\). The task of LP is then to correctly predict the missing tail entity given a partial triple \((h,r,?t)\).4 The standard assumption in this task is that the head entity \(h\) and the missing tail entity \(?t\) are known entities in \(E\). To perform inductive link prediction, it is necessary to predict the missing \(?t\) where \(?t\notin E\). Since LP models cannot predict such an entity \(?t\) solely from the content of \(G\), inductive link prediction methods often also rely on additional background data (such as textual descriptions) to extract and represent new entities.

Footnote 4: We note the related LP tasks of predicting a missing head \(?h\) or relation \(?r\), but in this paper we focus on LP for predicting \(?t\) as it aligns best with our event prediction task.

### Event Prediction Task

Extending the task of LP to event prediction, our goal is to make predictions about new events based on a causal relation between two events. We represent our task as a query triple \((c,r,e)\), where \(c\), \(r\), and \(e\) denote the cause event, causal relation, and effect event, respectively. Furthermore, the goal of the event prediction task is to predict properties about the new effect event - i.e., predicting outgoing properties of \(e\) in the form of \((e,r_{e},?z)\in Out_{e}\), where \(r_{e}\in R\) and \(?z\in E\).
In this task, we assume that \(c\in E\) while \(e\notin E\) - if we wish to make predictions about a cause \(c\) which was not in the original KG, we assume that triples indicating the properties of \(c\) are supplied as input and added to the KG. Additionally, we assume that we are given the relations \(r_{e}\) to predict.

Example. In Figure 1, our task is to perform predictions for New Effect based on the triple (New Cause, hasEffect, New Effect). Further, we make predictions about each of New Effect's properties, as (New Effect, instanceOf, \(?z\)) and (New Effect, country, \(?z\)).

To make predictions under these assumptions, we must consider two points: first, we must make a prediction for an unseen effect event \(e\) given a cause \(c\) of interest and a relation \(r\); second, from the unseen event \(e\), we must make predictions about its properties by performing LP to tail entities \(?z\in E\). Given this problem formulation for event prediction, we can then perform predictions for properties of \(e\) without the need for any background knowledge about \(e\) by performing a 2-hop LP task starting from \(c\) - that is, predicting the tail entity for the 2-hop link \((c,r,r_{e},?z)\) for each triple in \(Out_{e}\). By omitting explicit consideration of \(e\) in the prediction task, we are able to overcome the limitation that LP models must have a way of extracting or representing the entity. From a practical perspective, when performing event predictions, one need not explicitly represent \(e\), as the primary focus is on predicting properties about the effect event. We additionally denote the set of outgoing triples from \(c\) as \(Out_{c}\) - we assume that outgoing properties of the cause event \(c\) are provided or already present in the KG, and our method only makes use of such outgoing triples5 from \(c\). A graphical depiction of this problem setup for our running example is shown in Figure 2.
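The 2-hop setup above can be sketched with a toy KG of (head, relation, tail) triples; the entity and relation names below are illustrative only, not part of the paper's dataset. For a known effect the 2-hop link is directly observable, while for an unseen effect EvCBR must instead reason over paths learned from similar cases.

```python
from collections import defaultdict

# Toy KG as a set of (head, relation, tail) triples; names are invented.
triples = {
    ("TohokuEarthquake", "instanceOf", "MegathrustEarthquake"),
    ("TohokuEarthquake", "country", "Japan"),
    ("TohokuEarthquake", "hasEffect", "TohokuTsunami"),
    ("TohokuTsunami", "instanceOf", "Tsunami"),
    ("TohokuTsunami", "country", "Japan"),
}

# Index outgoing edges: out[h] = {(r, t), ...}
out = defaultdict(set)
for h, r, t in triples:
    out[h].add((r, t))

def two_hop_tails(c, r, r_e):
    """All tails ?z of the 2-hop link (c, r, ?e, r_e, ?z)."""
    effects = {t for (rel, t) in out[c] if rel == r}
    return {z for e in effects for (rel, z) in out[e] if rel == r_e}

print(two_hop_tails("TohokuEarthquake", "hasEffect", "instanceOf"))
# {'Tsunami'}
```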
Footnote 5: We are motivated to only consider outgoing edges because we can expect a new event entered into the KG to have little to no incoming relations present.

### A General 2-Hop Prediction Task

We pose a generalization of the event prediction task, which could be useful in other applications: to predict properties about a novel entity based on a known relation between that entity and some entity in \(E\). Given some representative relation \(r\in R\) that connects an entity \(h\in E\) to a novel entity \(e_{n}\notin E\), we can make predictions about its properties \((e_{n},r_{n},t_{n})\) using the 2-hop link \((h,r,r_{n},?t)\), where \(?t\in E\). This generalization operates under the assumption that the relation connecting \(h\) and \(e_{n}\) is semantically meaningful such that we can make useful predictions about \(e_{n}\) based on knowledge about \(h\). The validity of this assumption may vary for different types of relations.

## 4. Methods

EvCBR is a case-based reasoning approach which aims to make predictions about new events based on background knowledge about similar cause-effect events. While our methods can be generalized to make predictions about new entities in other domains (conceptually replacing the "cause event" and "effect event" with different types of entities and connecting them by different relations, as mentioned in the previous section), we describe our approach from the perspective of event prediction in this section. At a high level, case-based reasoning aims to solve a new _problem_ based on experience and knowledge about similar _cases_ in the system's case base. In our task, the target problem refers to a new cause-effect event pair query for which we wish to make predictions, and a case refers to an event pair present in the KG.
This approach can then be broken down as (1) retrieve cases from the KG which are similar to the new problem query, (2) identify and score paths through the KG that can be used to predict properties of the effect events in each case, and (3) reuse the learned paths to make predictions for our new problem.

### Case Retrieval

We define a case \(s\) as a triple \((c_{s},r,e_{s})\) connecting the cause \(c_{s}\) to the effect \(e_{s}\) by the causal relation \(r\), where \((c_{s},r,e_{s})\in T\). In our running example KG, there are three example cases of cause and effect events connected by the hasEffect relation. The first step of EvCBR is to retrieve cases from the KG that are similar to our problem, which we break down into two subtasks. First, we compute a similarity measure among entities in \(E\) based on their subclass hierarchy and outgoing connections in the KG. Second, we compute the similarity of cases to the query \((c,r,e)\).

#### 4.1.1. Entity Similarity

To compute entity similarity, we apply a relatively simple similarity metric which can be computed based on count statistics and vector multiplication. Our goal in this step is to acquire a very rough sense of **similarity between individual entities**, which will be utilized in our subsequent steps. For each entity \(h_{i}\in E\), we form a vector representation \(v_{h_{i}}=[a_{1},a_{2},...,a_{N_{h}}]\), where \(N_{h}\) is the total number of entities in \(E\). Each value \(a_{j}=1\) if there is a triple \((h_{i},r,t_{j})\in T\) for any relation \(r\) or if \(h_{i}\) is a subclass of \(t_{j}\), and otherwise \(a_{j}=0\). Intuitively, this vector allows us to capture the idea that similar entities are likely to be connected to the same entities.

Example. The Kanto Earthquake and Tohoku Earthquake both have an outgoing connection to Japan, which might indicate that they are more similar than other earthquake events that occurred in different countries.
Additionally, while these two earthquakes do not have any other outgoing connections that are identical, the subclassOf relation between Megathrust Earthquake and Earthquake indicates that both the Kanto and Tohoku Earthquakes are a type of Earthquake.

We also weight the importance of each entity in this vector based on how frequently it occurs as an outgoing neighbor or superclass. Our intuition here is to ensure that sharing outgoing connections to a less common entity is more meaningful than sharing connections to a common one. We compute this weight as an inverse document frequency (IDF) measure, \(IDF(h)=\log(N_{h}/count(h))\), where \(count(h)\) is the number of times \(h\) has an incoming edge or is a superclass of an entity, and form the vector containing the IDF weighting of all entities as \(\upsilon_{IDF}=[IDF(h_{1}),...,IDF(h_{N_{h}})]\). Additionally, we apply normalization to \(\upsilon_{IDF}\). Lastly, we apply the IDF weighting to each vector by elementwise multiplication, and compute the similarity between two entities based on a weighted Jaccard similarity between those vectors. The entity similarity \(ES\) is then computed as:

\[wJaccard(x,y)=\frac{x\cdot y}{\sum_{i=1}^{N_{h}}x_{i}+\sum_{i=1}^{N_{h}}y_{i}-x\cdot y} \tag{1}\]

\[ES(h_{i},h_{j})=wJaccard(\upsilon_{h_{i}}\odot\upsilon_{IDF},\upsilon_{h_{j}}\odot\upsilon_{IDF}) \tag{2}\]

Figure 2. The task setup for our running example.

Example. The entity that is the most similar to the Tohoku Earthquake would be the Indian Ocean Earthquake, since both of them have outgoing connections to Megathrust Earthquake, Earthquake (through the subclass hierarchy), and Tsunami. While an entity like the Kanto Earthquake shares the outgoing connection to Japan, this link has a lower weighting since Japan has more connections in the KG compared to Megathrust Earthquake and Earthquake.

#### 4.1.2. Case Head Similarity

Next, we begin to determine the similarity of a case \((c_{s},r,e_{s})\) to our query \((c,r,e)\). Our goal for case head similarity is to **determine how similar the case's cause is to \(c\)**. Rather than directly using entity similarity from Equation 2, we instead compute similarity based on the set of \(c\)'s outgoing triples, denoted as \((c,r_{c},t_{c})\in Out_{c}\). For each triple in \(Out_{c}\), we define the importance of that triple based on (1) the probability that any triple in \(T\) containing the tail entity \(t_{c}\) also contains the relation \(r_{c}\), denoted \(P(r_{c}|t_{c})\), and (2) the probability that the tail entity \(t_{c}\) occurs in any triple in \(T\), denoted \(P(t_{c})\). We posit that triples containing uncommon relations or that lead to uncommon entities should be considered more important. The importance \(I\) of a triple is then computed as:

\[I(c,r_{c},t_{c})=\log\left(\frac{P(r_{c}|t_{c})}{P(t_{c})}\right) \tag{3}\]

Example. The importance of (\(c\), instanceOf, Megathrust Earthquake) will be greater than the importance of (\(c\), country, Japan). The \(P(r_{c}|t_{c})\) terms for both triples will be equal to 1 in our KG snippet, while \(P(\text{Japan})\) is greater than \(P(\text{Megathrust Earthquake})\).

After computing the importance of each triple in \(Out_{c}\), we also normalize the values (denoted \(nI(c,r_{c},t_{c})\)) by dividing the importance of each triple by the sum of all importance values. Case head similarity of \(c_{s}\) to the cause \(c\) is then calculated as the weighted sum of similarities between each outgoing triple from \(c_{s}\) (denoted \(cOut_{s}\)) and \(Out_{c}\), using the most similar triple from each set for each unique relation.
Denoting \(ROut_{c}\) as the set of relations that occur in \(Out_{c}\), we compute the case head similarity \(CS_{h}\) as:

\[CS_{h}(c,c_{s})=\sum_{r_{c}\in ROut_{c}}\max_{\begin{subarray}{c}(c,r_{c},t_{c})\in Out_{c}\\ (c_{s},r_{c},t_{s})\in cOut_{s}\end{subarray}}nI(c,r_{c},t_{c})\cdot ES(t_{c},t_{s}) \tag{4}\]

Example. Tohoku Earthquake has the greatest \(CS_{h}\) to our cause \(c\), given that both the instanceOf and country relations have an exact match. Kanto Earthquake has the second highest \(CS_{h}\), because it has an exact match with \(c\) for the country relation leading to Japan, and for its instanceOf relation, the entity similarity \(ES\) between Earthquake and Megathrust Earthquake is high.

#### 4.1.3. Case Tail Similarity

Besides judging how similar a case's cause event is to \(c\), we also want some notion of **how similar the case's effect is to the effect \(e\)**. Since \(e\) is unseen in the KG and we do not actually know what the tail entities are for triples in \(Out_{e}\), we determine similarity based on the types of outgoing relations from each entity. This similarity metric closely resembles those of works such as (Barbani, 2017; Barbani, 2017), which form a one-hot vector for each outgoing relation of entities to determine similarity. We compute case tail similarity, \(CS_{t}\), as a simple Jaccard similarity between the prediction relations, denoted \(ROut_{e}\), and the outgoing relations for the case's effect, denoted \(ROut_{es}\).

\[CS_{t}(e_{s})=\frac{|ROut_{e}\cap ROut_{es}|}{|ROut_{e}\cup ROut_{es}|} \tag{5}\]

Lastly, we compute the coverage of the outgoing relations for the case tail, denoted \(CC_{t}\). While \(CS_{t}\) identifies case effects with similar relations to our effect event \(e\), it also penalizes effects with a large number of relations.
This second measure aims to ensure that we can identify cases whose tail entity can be used to make predictions about \(e\) without applying such a penalty - this measure might result in some entities that are less "similar" but more capable of performing reasoning through the KG. \[CC_{t}(e_{s})=\frac{|ROut_{e}\cap ROut_{es}|}{|ROut_{e}|} \tag{6}\] #### 4.1.4. Case Selection Finally, we **select a number of similar cases to retrieve from the KG** which we will use for our case-based reasoning. After identifying the initial set of candidate cases, we score the similarity of the case to our query using Equation 7 below. The cases are then sorted and we select the top \(N_{h}\) cases. When selecting these top \(N_{h}\) cases, we only select cases with a unique cause entity \(c_{s}\) - if another case with the same cause ranks among the top \(N_{h}\) cases, it is disregarded. We follow this procedure to ensure a level of diversity in the cases over which we perform our reasoning. \[CaseScore(c_{s},r,e_{s})=CS_{h}(c,c_{s})*CS_{t}(e_{s}) \tag{7}\] We select an additional \(N_{t}\) cases, where \(N_{t}<N_{h}\), in which we prioritize the coverage of outgoing edges from the case's effect \(e_{s}\). The motivation of this second set of cases is to ensure that we select some cases in which the case's effect contains as many relations in \(ROut_{e}\) as possible. Therefore, we formulate the scoring equation to give \(CC_{t}\) a greater influence than \(CS_{h}\). We follow the same ranking and selection procedure as for the first \(N_{h}\) cases, using Equation 8 to rank the cases. \[CaseScore_{cov}(c_{s},r,e_{s})=(1+CS_{h}(c,c_{s}))*CC_{t}(e_{s}) \tag{8}\] We denote the set of cases selected through this procedure as \(S\), containing \(N_{s}=N_{h}+N_{t}\) total cases. 
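The retrieval scores above (Equations 1, 2, 5, and 7) can be sketched as follows. The toy vocabulary, IDF weights, and the stand-in \(CS_{h}\) value are assumptions for illustration, and the normalization of \(\upsilon_{IDF}\) is omitted.

```python
def weighted_jaccard(x, y):
    """Equation 1: weighted Jaccard between nonnegative vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sum(x) + sum(y) - dot)

def entity_similarity(v_i, v_j, idf):
    """Equation 2: weighted Jaccard of IDF-weighted neighbor vectors."""
    return weighted_jaccard([a * w for a, w in zip(v_i, idf)],
                            [a * w for a, w in zip(v_j, idf)])

def case_tail_similarity(pred_rels, case_rels):
    """Equation 5: Jaccard similarity over outgoing relation types."""
    return len(pred_rels & case_rels) / len(pred_rels | case_rels)

# Toy vocabulary: [Japan, Earthquake, Tsunami, Chile]; rarer neighbors
# (higher IDF weight) contribute more to similarity.
idf = [0.2, 0.5, 1.0, 1.0]
v_a = [1, 1, 1, 0]  # binary neighbor/superclass vector of event a
v_b = [1, 1, 0, 0]  # event b shares only Japan and Earthquake

es = entity_similarity(v_a, v_b, idf)
cs_t = case_tail_similarity({"instanceOf", "country"},
                            {"instanceOf", "country", "date"})
# Equation 7 multiplies head and tail similarity; with a hypothetical
# CS_h of 0.8, the case score would be 0.8 * cs_t.
case_score = 0.8 * cs_t
```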
Example. In our KG snippet, we find that all of the cases' effect events have the same properties as those we wish to predict for \(e\), and so our selection of the best cases will be judged by their head similarity \(CS_{h}\) - out of our three example cases, the Tohoku Earthquake case will have the highest \(CaseScore\).

### Prediction Path Enumeration and Scoring

We next use the retrieved cases to enumerate paths through the KG that can be used to make predictions about \(e\).

#### 4.2.1. Path Enumeration

Our first step is to **enumerate a set of paths, using cases retrieved from the KG, which can be used to connect cause events to properties of their effect events**. We define a path as a sequence of triples through the KG that can connect two entities. Given a start entity \(x\) and end entity \(y\), we can express a path of length \(n\) connecting them as a sequence of triples, \(p=[(x,r_{1},e_{1}),...,(e_{n-1},r_{n},y)]\). We define the _relation path_ as the sequence of relations used in each triple of a path, denoted \(rel(p)=(r_{1},...,r_{n})\). We denote the list of entities that can be reached starting from entity \(x\) following the relation path \(rel(p)\) as \(\mathbb{E}_{p,x}\). Note that \(\mathbb{E}_{p,x}\) is not a set, and may contain repeated entries of an entity that can be reached by different paths through the KG. For each case \((c_{s},r,e_{s})\) in our set of retrieved cases \(S\) and relation \(r_{e}\in ROut_{e}\), let \(E_{r_{e},e_{s}}\) denote the set of entities connected to \(e_{s}\) by the outgoing relation \(r_{e}\), i.e., \((e_{s},r_{e},t)\in T\) for all \(t\in E_{r_{e},e_{s}}\). We then randomly sample up to \(N_{p}\) unique paths of length \(\leq 3\) connecting \(c_{s}\) to any entity in \(E_{r_{e},e_{s}}\). Here, we restrict the paths that are sampled such that (1) the first relation of \(rel(p)\) must be an outgoing relation of our cause event \(c\), and (2) the path does not traverse through the case's effect event \(e_{s}\).
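A simplified sketch of this sampling procedure follows, assuming a toy edge index and invented entity names; the released implementation may sample paths differently.

```python
import random
from collections import defaultdict

# Toy outgoing-edge index for one retrieved case; names are invented.
out = defaultdict(list, {
    "KantoEarthquake": [("country", "Japan"), ("instanceOf", "Earthquake")],
    "Japan": [("significantEvent", "KantoFires")],
    "Earthquake": [("subclassOf", "Disaster")],
})

def sample_paths(start, targets, first_rels, forbidden, n_paths,
                 max_len=3, tries=200, seed=0):
    """Randomly walk from `start`, keeping relation paths that reach
    `targets`. Restriction (1): the first relation must appear among the
    query cause's outgoing relations. Restriction (2): never step onto
    the case's effect entity (`forbidden`)."""
    rng = random.Random(seed)
    found = set()
    for _ in range(tries):
        node, rel_path = start, []
        for _ in range(max_len):
            edges = [(r, t) for r, t in out[node] if t != forbidden]
            if not edges:
                break
            r, t = rng.choice(edges)
            if not rel_path and r not in first_rels:
                break  # violates restriction (1)
            rel_path.append(r)
            node = t
            if node in targets:
                found.add(tuple(rel_path))
                break
        if len(found) >= n_paths:
            break
    return found

paths = sample_paths("KantoEarthquake", {"KantoFires"},
                     first_rels={"country", "instanceOf"},
                     forbidden="AftermathEvent", n_paths=5)
# typically finds {('country', 'significantEvent')}
```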
Our first restriction ensures that we identify paths that are relevant to our query event - since our goal is to follow relation paths starting from the event \(c\), if the first relation is not an outgoing relation of \(c\) it will not be possible to utilize the path. Also note that the triple \((c,r,e)\) is not present in the KG, so a path starting with \(r\) would only be valid if \(c\) has some other outgoing triple with the relation \(r\). Similarly, our second restriction aims to prevent us from sampling paths that traverse through the causal relation connecting the cause and effect event. Again, since \(e\) is unseen in the KG, such a path would not provide us with any useful information to apply to our event query.

Example. Figure 3 shows two example paths, highlighted in red (light) and blue (dark), connecting the Tohoku Earthquake event to its effect's instanceOf property. \(rel(p_{red})\) consists of the three relations [instanceOf, subclassOf, hasCause\({}^{-1}\)] while \(rel(p_{blue})\) is the single relation [instanceOf]. We do not sample paths that traverse through the Aftermath event.

The path sampling steps are repeated for each case in \(S\), and the set of all paths sampled from each case, connecting the case's cause to its effect's \(r_{e}\) properties, is denoted as \(\mathcal{P}_{S,r_{e}}\).

#### 4.2.2. Path Scoring

Next, we **compute confidence scores for each unique relation path** present in \(\mathcal{P}_{S,r_{e}}\). This confidence score corresponds to the notion of how confident we are that a given path leads to the correct entity prediction. We base our scoring on a simple precision measure, aiming to score how well a given relation path leads to the correct entities for the target relation.
Additionally, following the path confidence measure implemented in (Hardt et al., 2017), we add a smoothing constant \(\epsilon\) (set to \(\epsilon=5\) in our experiments) to the denominator when calculating the precision - this allows relation paths with the same precision and a greater number of samples to have a higher score than relation paths with fewer samples. The relation path score for a target relation to predict, \(r_{e}\), and relation path \(rel(p)\) is given as:

\[PScore(r_{e},rel(p))=\frac{\sum_{(c_{s},r,e_{s})\in S}\sum_{t^{\prime}\in\mathbb{E}_{p,c_{s}}}\mathbb{1}\left[t^{\prime}\in E_{r_{e},e_{s}}\right]}{\epsilon+\sum_{(c_{s},r,e_{s})\in S}\left|\mathbb{E}_{p,c_{s}}\right|} \tag{9}\]

where \(\mathbb{1}\left[t^{\prime}\in E_{r_{e},e_{s}}\right]=1\) if \(t^{\prime}\in E_{r_{e},e_{s}}\) is true and \(0\) otherwise. The \(PScore\) for a particular \(rel(p)\) may be influenced by sampling multiple different paths that have the same relation path. This allows us to implicitly add a weighting to each relation path based on how frequently they occur, which corresponds to how likely they are to be randomly sampled.

Example. To make a prediction about the effect's country in our three cases, we can sample the relation path \(rel(p)=[\text{country}]\). For all three of our cases, following the relation path [country] from the cause leads to the correct country property of its effect. The \(PScore\) of this path would then be \(3/(\epsilon+3)\). On the other hand, for a relation path like \(rel(p)=[\text{instanceOf}]\) from our example in Figure 3, which aims to predict the instanceOf relation of the effect, the correct entity is only reached in 1 out of 2 results for the Tohoku Earthquake, and 0 out of 3 total results for the other two cases. The \(PScore\) for this path would then be \(1/(\epsilon+5)\).
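The precision score of Equation 9 can be sketched as follows, with a toy `follow` traversal function standing in for actual KG path-following; the numbers reproduce the [country] path from the running example.

```python
def p_score(cases, follow, correct_tails, rel_path, epsilon=5):
    """Precision of `rel_path` over all retrieved cases (cf. Equation 9).
    follow(entity, rel_path) lists every reached entity (with repeats);
    correct_tails[e_s] is the set of true tails of the target relation."""
    hits = total = 0
    for c_s, r, e_s in cases:
        reached = follow(c_s, rel_path)
        hits += sum(1 for t in reached if t in correct_tails[e_s])
        total += len(reached)
    return hits / (epsilon + total)

# Running example: the [country] path reaches the single correct tail
# in all three cases, giving 3 / (epsilon + 3).
cases = [("c1", "hasEffect", "e1"), ("c2", "hasEffect", "e2"),
         ("c3", "hasEffect", "e3")]
follow = lambda c_s, p: ["Japan"]  # toy traversal result
correct = {"e1": {"Japan"}, "e2": {"Japan"}, "e3": {"Japan"}}
score = p_score(cases, follow, correct, ("country",))
# score == 3 / (5 + 3) == 0.375
```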
Based on these path scores, produced for the set of distinct relation paths in \(\mathcal{P}_{S,r_{e}}\), we select \(N_{p}\) relation paths with the highest scores, denoted \(\mathcal{P}_{r_{e}}\), to make predictions for \((e,r_{e},?z)\).

### Applying Prediction Paths

Given our set of relation paths \(\mathcal{P}_{r_{e}}\), selected and scored using our set of retrieved cases, we can now **apply these paths to our cause event \(c\) to make predictions about property \(r_{e}\) of the effect event**. We perform these predictions by following each relation path, starting from our cause \(c\), and using the \(PScore\) for each path to produce a total confidence score for each predicted entity. Formally, for our query event \((c,r,e)\) and a single property for which we want to predict \((e,r_{e},?z)\), we score each candidate prediction entity \(z\) as follows:

\[EScore(c,r,r_{e},z)=\sum_{\begin{subarray}{c}rel(p)\in\mathcal{P}_{r_{e}}\\ z^{\prime}\in\mathbb{E}_{p,c}\end{subarray}}PScore(r_{e},rel(p))\,\mathbb{1}\left[z^{\prime}=z\right] \tag{10}\]

In Equation 10, we choose to calculate the score for a prediction \(z\) as a summation of path \(PScore\) values so that the score is increased for (1) entities that are frequently reached by a given relation path, and (2) entities that are reached by a variety of different relation paths.

Figure 3. Example paths for the Tohoku Earthquake case.

For the target relation \(r_{e}\) we compute \(EScore\) values for all entities for which a path \(rel(p)\in\mathcal{P}_{r_{e}}\) exists between it and \(c\), denoted \(E_{\mathcal{P}_{r_{e}},c}\). We use these scores to sort and rank our predictions for the given 2-hop link. To make predictions for all properties \(Out_{e}\) that we wish to predict, we repeat the procedures in Section 4.2 and the aforementioned scoring procedure for each \(r_{e}\in ROut_{e}\).

Example. Let us consider applying paths to make predictions for \(e\)'s instanceOf relation.
There are only two valid paths that may be sampled to make this prediction, \(rel(p_{1})=[\text{instanceOf}]\) and \(rel(p_{2})=[\text{instanceOf},\text{subclassOf},\text{hasCause}^{-1}]\), both coming from the Tohoku Earthquake case. Setting \(\epsilon=0\) for simplicity, the \(PScore\) values for these two paths are \(PScore(p_{1})=1/5\) and \(PScore(p_{2})=1\). Applying these two paths to our cause event \(c\) results in predicting \(e\) to be an instance of Megathrust Earthquake with a score of \(1/5\) and Tsunami with a score of \(1\). The best prediction in our minimal running example is that the new effect event will be a tsunami.

Algorithm 1 provides a high-level overview of our model's inputs and the steps that are performed to produce predictions for each property of the unseen effect event up to this point.

```
Inputs: (c, r, e), ROut_e, N_h, N_t, N_p
  (c, r, e) -- the cause c, causal relation r, and unseen effect e
  ROut_e    -- the set of relations about the effect to predict
  N_h       -- number of cases to retrieve using CaseScore
  N_t       -- number of cases to retrieve using CaseScore_cov
  N_p       -- number of paths to sample

1: Retrieve cases from the KG
2: for r_e in ROut_e do            # repeat for each prediction relation
3:   Sample N_p paths from each case
4:   Compute each path's score using PScore
5:   Follow each path starting from c to generate predictions
6:   Score each prediction using EScore
7:   Rank and report top predictions for r_e
```
**Algorithm 1** EvCBR Event Prediction Overview

### Prediction Score Refinement

Optionally, after making predictions for all of our event prediction relations \(ROut_{e}\), we introduce an additional step to **refine our prediction rankings** produced by Equation 10.
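For reference, the base prediction scoring of Equation 10, which this refinement step adjusts, can be sketched as follows; the path scores and reachable entities below are toy stand-ins echoing the running example.

```python
from collections import Counter

def e_scores(cause, path_scores, follow):
    """Rank candidate tails by summing PScore over every scored relation
    path that reaches them from the cause (cf. Equation 10).
    path_scores: {rel_path: PScore}; follow(entity, rel_path) returns
    the reached entities, with repeats."""
    scores = Counter()
    for rel_path, p in path_scores.items():
        for z in follow(cause, rel_path):
            scores[z] += p
    return scores.most_common()  # ranked predictions

# Toy numbers echoing the running example (epsilon = 0): the short
# [instanceOf] path scores 1/5, the longer 3-hop path scores 1.
path_scores = {("instanceOf",): 0.2,
               ("instanceOf", "subclassOf", "hasCauseInv"): 1.0}
reach = {("instanceOf",): ["MegathrustEarthquake"],
         ("instanceOf", "subclassOf", "hasCauseInv"): ["Tsunami"]}
ranked = e_scores("c", path_scores, lambda c, p: reach[p])
# ranked[0] == ("Tsunami", 1.0)
```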
Given our task setup, in which we perform a prediction based on some properties about a cause event, our refinement step aims to apply our prediction methods in the opposite direction to **make "predictions" about the _cause_ event's properties starting from the _effect_ event**. The key intuition behind this step is that if we have chosen the correct entity \(z\) for the prediction triple \((e,r_{e},?z)\), we would expect that applying our prediction methods starting from the event \(e\) to predict properties of \(c\) should yield good results. Furthermore, because we know the true properties of \(c\), we can refine the score of the prediction \((c,r,r_{e},?z)\) based on how accurately it can be used to perform this reverse prediction.

Example. A visualization of our refinement step is shown in Figure 4, for the case of the Tohoku Earthquake event. Two paths connecting the effect to the cause's instanceOf property are highlighted in orange (light) and blue (dark). Similar to sampling paths from this case to make predictions about the effect, we now aim to sample paths to make predictions about the cause.

To perform our refinement step we reuse our set of previously retrieved cases \(S\) and proceed with producing refined scores for one prediction relation \(r_{e}\) at a time. Following the path enumeration and scoring methods of Section 4.2, we now produce paths for each unique outgoing relation of the _cause_ event. Let \(ROut_{c}\) denote the set of outgoing relations from our cause event \(c\). For each \(r_{c}\in ROut_{c}\), we sample a set of paths \(\mathcal{P}_{S^{-1},r_{c}}\) indicating paths connecting \(e_{s}\) to any entity in \(E_{r_{c},c_{s}}\) for each case \((c_{s},r,e_{s})\in S\). We apply similar restrictions on our path sampling as in Section 4.2.1, sampling up to \(N_{p}\) paths, limiting paths' length to 3 relations, and only selecting paths whose first relation is in the set of prediction relations \(ROut_{e}\).
For each relation path \(rel(p)\in\mathcal{P}_{S^{-1},r_{c}}\), we produce a path score \(PScoreR\), as:

\[PScoreR(r_{c},rel(p))=\frac{\sum_{(c_{s},r,e_{s})\in S}\sum_{t^{\prime}\in\mathbb{E}_{p,e_{s}}}\mathbb{1}\left[t^{\prime}\in E_{r_{c},c_{s}}\right]}{\epsilon+\sum_{(c_{s},r,e_{s})\in S}\left|\mathbb{E}_{p,e_{s}}\right|} \tag{11}\]

We note that \(PScoreR\) is identical to \(PScore\) from Equation 9, except that the starting points of paths are switched to reflect our swap to following paths from the effect to the cause event. For each predicted entity \(z\in E_{\mathcal{P}_{r_{e}},c}\) which we produced using the methods up to Section 4.3, we produce a refinement score, \(RS(z,r_{e},(c,r_{c},t_{c}))\), corresponding to how accurately paths that contain the entity \(z\) can predict outgoing triples of \(c\), where \((c,r_{c},t_{c})\in Out_{c}\). Additionally, we temporarily treat the prediction triple \((e,r_{e},z)\) as being present in the KG so that we can follow paths starting from our unseen effect event \(e\). Since we only temporarily add this single triple, only paths that traverse through the prediction triple \((e,r_{e},z)\) will be considered in calculating \(RS\).
\[RS(z,r_{e},(c,r_{c},t_{c}))=\sum_{\begin{subarray}{c}rel(p)\in\mathcal{P}_{S^{-1},r_{c}}\\ t^{\prime}\in\mathbb{E}_{p,e}\end{subarray}}\frac{PScoreR(r_{c},rel(p))\,\mathbb{1}\left[t^{\prime}=t_{c}\right]}{\left|\mathbb{E}_{p,e}\right|} \tag{12}\]

We normalize these values by dividing each \(RS\) by the maximum \(RS\) score produced for each triple - i.e., for each triple in \(Out_{c}\) and entity \(z\in E_{\mathcal{P}_{r_{e}},c}\), the normalized refinement score \(nRS\) is calculated:

\[nRS(z,r_{e},(c,r_{c},t_{c}))=\frac{RS(z,r_{e},(c,r_{c},t_{c}))}{\max_{z_{i}\in E_{\mathcal{P}_{r_{e}},c}}\left(RS(z_{i},r_{e},(c,r_{c},t_{c}))\right)} \tag{13}\]

\(nRS\) now provides us with a score in the range [0,1] which indicates how well a given entity \(z\) can "predict" a particular cause event property, where \(nRS=1\) for the entity \(z\) that produces the highest \(RS\) score for a particular triple \((c,r_{c},t_{c})\). We then compute our final refinement score as:

\[\begin{split} nRS_{max}(z,r_{e})&=\max_{(c,r_{c},t_{c})\in Out_{c}}\left(nRS(z,r_{e},(c,r_{c},t_{c}))\right)\\ nRS_{avg}(z,r_{e})&=\frac{1}{|Out_{c}|}\sum_{(c,r_{c},t_{c})\in Out_{c}}nRS(z,r_{e},(c,r_{c},t_{c}))\\ ReScore(c,r,r_{e},z)&=EScore(c,r,r_{e},z)*(nRS_{max}(z,r_{e})+nRS_{avg}(z,r_{e}))\end{split} \tag{14}\]

Figure 4. Example paths connecting the effect event (Aftermath of...) to the Tohoku Earthquake’s instanceOf property.

In Equation 14, a high value of \(nRS_{max}\) provides us with evidence that \(z\) can be used to predict _some_ triple in \(Out_{c}\) well, while a high value of \(nRS_{avg}\) indicates that \(z\) can predict many triples in \(Out_{c}\) well. We combine these scores so that we can reward prediction entities that can predict all of the cause's properties while not over-penalizing entities that are still able to reach some of the cause's properties with high accuracy.

Example. Our initial predictions for \(e\)'s instanceOf relation were Tsunami and Megathrust Earthquake.
To refine the scores of these two predictions, our method determines how well we can reach (\(c\), country, Japan) and (\(c\), instanceOf, Megathrust Earthquake). Using paths such as the two shown in Figure 4, we find that the Tsunami prediction can more accurately reach \(c\)'s instanceOf property compared to the Megathrust Earthquake prediction. This leads to Tsunami receiving a higher \(nRS_{max}(z,instanceOf)\) score, which subsequently leads to a higher \(ReScore\) value. After producing the refined scores for each entity, we re-rank them to produce our refined predictions. For each prediction relation \(r_{e}\), the above process is repeated. ## 5. Experiments To evaluate our work, we perform a modified evaluation over a commonly used benchmark dataset, FB15k-237 (Zhou et al., 2017), as well as a novel dataset curated from Wikidata. Model performance is measured in terms of the predicted tail entity for a given 2-hop link. ### Datasets Given our problem setup as a 2-hop LP task for properties of unseen entities, we split our test data based on entities rather than individual triples. We split both datasets into training triples, validation connections, validation triples, test connections, and test triples. We refer to any entity contained in the training triples as part of the training set, while head entities of triples in the test and validation triples are part of the test and validation sets, respectively. The test connections indicate relations from an entity in the training set to an entity in the test set. The test triples then indicate outgoing triples from an entity in the test set to one in the training set. #### 5.1.1.
FB15k-237 For the FB15k-237 dataset, we iteratively select random entities to place in the test set based on the following conditions: for a candidate test entity \(x\), (1) there must be no triple connecting \(x\) to an entity in the test set, (2) \(x\) must have at least one incoming and one outgoing triple, and (3) any entity connected to \(x\) must have at least one other triple connecting it to a different entity in the training set. These conditions ensure that test triples can plausibly be predicted by models and that all training entities have training data available. After selecting entities to place in the test set, we randomly select half of those for our validation set. From the 14,541 entities in FB15k-237, we select 500 entities each for the test and validation sets. The connecting and test triples together combine to form a total of 170,000 2-hop links over which we perform evaluation. #### 5.1.2. Wikidata Events To curate our Wikidata causal event dataset, we use an approach similar to (Kang et al., 2017) to identify 307 event-related classes based on links between entries in Wikidata and news articles in Wikinews. Following Wikidata's guidelines on modeling causes, we select 6 causal relations\({}^{6}\), which we then use to query Wikidata for pairs of event entities that are connected by a causal relation. Footnote 6: [https://www.wikidata.org/wiki/Wikidata:List_of_properties/causality](https://www.wikidata.org/wiki/Wikidata:List_of_properties/causality) This query yielded a set of 1,953 pairs of events, encompassing 157 unique event classes. These triples connect **284** unique "cause" events to **311** unique "effect" events. We then collect the 3-hop neighborhood of outgoing connections surrounding each of these event entities, removing all literals (e.g., strings, integers) and any entity for which only one triple existed in the dataset. Our final dataset consists of **758,857** triples for **123,166** entities.
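The three FB15k-237 selection conditions above can be sketched as a filter over candidate entities. The following is a minimal illustration (function and variable names are hypothetical, not the authors' code); `triples` is a list of `(head, relation, tail)` tuples:

```python
# Sketch of the three test-entity conditions from Section 5.1.1.
# Illustrative only, not the authors' implementation.

def valid_test_candidate(x, triples, test_set):
    out_triples = [tr for tr in triples if tr[0] == x]
    in_triples = [tr for tr in triples if tr[2] == x]
    neighbors = ({t for _, _, t in out_triples}
                 | {h for h, _, _ in in_triples})
    # (1) no triple may connect x to an entity already in the test set
    if neighbors & test_set:
        return False
    # (2) x needs at least one incoming and one outgoing triple
    if not out_triples or not in_triples:
        return False
    # (3) every neighbor must keep at least one other triple linking
    #     it to a different (training) entity once x is held out
    for n in neighbors:
        if not any((h == n or t == n) and x not in (h, t)
                   for h, _, t in triples):
            return False
    return True
```

Entities accepted by this filter can be moved into the test set one at a time, which mirrors the iterative selection described above.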
From our set of cause-effect event pairs, we randomly select 100 effect events to serve as the test set. After filtering to remove test events related to each other, we end up with a test set consisting of 89 unique effect events, with 104 connections to causal events to test over, corresponding to a total of 1,365 2-hop links to evaluate. ### Baseline Models To compare EvCBR's performance against baselines, we modify each model's scoring function to incorporate the 2-hop link. For instance, for TransE (Chen et al., 2017), which aims to learn embeddings for \((h,r,t)\) by optimizing for \(h+r=t\), we can score a 2-hop prediction as \(h+r_{1}=t_{1}\), \(t_{1}+r_{2}=t_{2}\xrightarrow{}h+r_{1}+r_{2}=t_{2}\). We follow a similar procedure for each of our baseline models which rely on learning embeddings. This modified scoring is only used for testing, and model training is performed normally using the training triples. For our baselines, we choose three embedding-based models which have seen widespread use in recent years - TransE (Chen et al., 2017), ComplEx (Zhou et al., 2017), and RotatE (Zhou et al., 2017). For each model, we test embedding sizes of {100, 200, 300} dimensions and train the model for 100 iterations\({}^{7}\) using self-adversarial negative sampling (Zhou et al., 2017). Footnote 7: We select these hyperparameters to explore based on prior experience and related work, and consider further fine-tuning (Zhou et al., 2017) to be beyond the scope of this research. We also compare against NoGE (Ran et al., 2017), a graph neural network (GNN) model which models both entities and relations as nodes in the graph. NoGE was developed and evaluated on a Wikidata-based dataset, CoDEx (Ran et al., 2017), which we believe makes it a strong candidate for showing good performance on our own causal event dataset. We train NoGE for 100 iterations using default configurations, and test using embedding sizes of {100, 200, 300}.
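The modified 2-hop TransE scoring described above can be sketched in a few lines. The embeddings below are toy values rather than trained ones, and scoring a candidate tail by the negative Euclidean distance is one common TransE convention (an assumption here, since the text does not fix the norm):

```python
import math

# Sketch of the modified 2-hop TransE scoring: the translations
# h + r1 = t1 and t1 + r2 = t2 collapse into h + r1 + r2 = t2, so a
# candidate tail t2 is scored by the negative distance
# ||h + r1 + r2 - t2||. Toy embeddings, not trained ones.

def two_hop_score(h, r1, r2, t2):
    diff = [a + b + c - d for a, b, c, d in zip(h, r1, r2, t2)]
    return -math.sqrt(sum(x * x for x in diff))

h = [0.1, -0.2, 0.3]
r1 = [0.5, 0.0, -0.1]
r2 = [-0.2, 0.4, 0.0]
exact_tail = [a + b + c for a, b, c in zip(h, r1, r2)]   # satisfies h + r1 + r2 = t2
perturbed_tail = [v + 1.0 for v in exact_tail]           # worse candidate
```

A tail that exactly satisfies the composed translation receives the maximal score, and any perturbation lowers it, which is all the 2-hop ranking needs.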
Lastly, we compare against two rule-based baselines: ProbCBR (Chen et al., 2017) and AnyBURL (Ran et al., 2017). ProbCBR is a similar model to ours, which leverages clustering of entities to better estimate scores of reasoning paths through the KG. We perform experiments with ProbCBR using parameter settings of retrieving {5, 10, 20} cases and sampling {60, 80, 100} paths. AnyBURL on the other hand is a rule learning model which can efficiently sample the KG to learn and generalize logical rules. Following from the original publication, we train AnyBURL for 1,000 seconds with default parameters. For our own models, we report our results for three variations of our entity scoring. EvCBR\({}_{base}\) is our basic case-based reasoning approach, following the path sampling and scoring methods up to Section 4.3. EvCBR\({}_{re}\) denotes the use of our score refinement method, from Section 4.4. Lastly, EvCBR\({}_{re+base}\) indicates the addition of the scores produced by the base and refinement methods. We evaluate our methods by retrieving \(N_{h}=\{5,10,20\}\) cases using \(CaseScore\), \(N_{t}=\{1,3,5\}\) cases using \(CaseScore_{cov}\), and \(N_{p}=\{60,80,100\}\) sample paths. ### Results and Discussion We apply each model to rank predictions for each tail entity in the test set of 2-hop links and report the mean reciprocal rank (MRR) and Hits@K metrics for each model. Experimental results are shown in Table 1; the best performing model for each dataset is highlighted in bold and the second best is underlined (for EvCBR's results we only highlight the best performance of any one variation). In the causal event data, we find that variations of EvCBR show superior performance over baselines, while for FB15k-237 EvCBR shows second-best performance.
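The ranking metrics reported in Table 1 are computed from the 1-based rank of each gold tail entity among a model's predictions; a minimal sketch with toy ranks (not actual experiment output):

```python
# MRR and Hits@K from a list of 1-based ranks, one per test 2-hop link.
# The ranks below are toy values for illustration only.

def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 10, 50]
```

Both metrics only depend on where the gold entity lands in the ranking, so they are directly comparable across embedding, GNN, and rule-based models.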
In particular, we observe that applying our refinement step EvCBR\({}_{re}\) leads to the best Hits@1 performance on the event dataset, while EvCBR\({}_{re+base}\) shows the highest MRR and Hits@10. EvCBR\({}_{re}\)'s lower Hits@10 may be attributed to situations where refinement fails to sample paths connecting a prediction entity to the input cause's properties (i.e., leading to \(nRS_{max}(z,r_{e})=0\), which subsequently makes its \(ReScore\) equal 0). Our results suggest that EvCBR shows strong performance for the task of event prediction, and more generally for the task of 2-hop LP for properties about unseen entities. Our model does not require any training, which further bolsters its applicability to this task in the open-world setting, where we might see frequent changes to the underlying KG. In all of our experiments, the performance of EvCBR is reflective of making predictions for the effect event as if the cause event's properties were newly added to the KG. In contrast, all baselines except ProbCBR perform training while including the cause events in the training set. To compare the impact of this fact, for our AnyBURL baseline, if we remove all learned rules that explicitly refer to the cause entity when performing predictions, MRR decreases to 0.363 and 0.127 for the FB15k-237 and causal event datasets, respectively - under this condition, EvCBR now outperforms AnyBURL on the FB15k-237 dataset. As an example of applying EvCBR to ongoing events, we performed a prediction for the effect of a Protest event in Iran. Similar cases retrieved from the KG included the Bahraini Protests of 2011, the Iranian Revolution, and the 2019-21 Chilean Protests. The top instanceOf predictions for the effect were Resignation, Demonstration, and Civil Resistance, while the top country predictions were Iran, Iraq, and Azerbaijan.
We find that our method shows promise in terms of factors such as predicting the possibility of events in Iraq and Azerbaijan, which are both geographically and politically intertwined with Iran and are already dealing with consequences of the events in Iran. Our method also performs well in retrieving cases of past events that have similar causes and likely similar consequences. On the other hand, we also observe some situations in which EvCBR struggles. One failure pattern was when no particularly similar event pairs were present in the KG - this was not uncommon due to Wikidata's variable coverage and level of detail. Another noticeable issue was the overabundance of COVID-19 related events in Wikidata, which were frequently retrieved by EvCBR due to matching the target country of a query event. ## 6. Conclusion We introduce EvCBR, a case-based reasoning model developed to perform event prediction between a causal event and its unseen effect event. Framing event prediction as a 2-hop link prediction task, EvCBR retrieves cases of cause-effect events from the KG to sample and apply paths through the KG that connect causes to their respective effect's properties. A novel refinement step helps improve the accuracy of predictions. EvCBR does not require training, making it well suited to our intended application to KGs under the open-world assumption. We evaluate the effectiveness of EvCBR over the FB15k-237 dataset as well as a newly curated dataset of causal events from Wikidata, showing strong performance compared to baselines consisting of three embedding-based models, a GNN model, and two rule-based models which use similar scoring and sampling methods as ours. Future work should continue to explore the practically important application of event prediction using KGs, as well as make further advances in reasoning techniques over constantly evolving KGs such as Wikidata. ###### Acknowledgements.
This work was supported by the Rensselaer-IBM AI Research Collaboration, part of the IBM AI Horizons Network.

\begin{table}
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{4}{|c|}{Dataset: FB15k-237} \\
\hline
Model & MRR & Hits@1 & Hits@10 \\
\hline
TransE & 0.355 & 0.253 & 0.553 \\
ComplEx & 0.151 & 0.101 & 0.249 \\
RotatE & 0.290 & 0.191 & 0.477 \\
NoGE & 0.324 & 0.238 & 0.491 \\
AnyBURL & **0.381** & **0.289** & **0.560** \\
ProbCBR & 0.289 & 0.204 & 0.452 \\
\hline
EvCBR\({}_{base}\) & 0.364 & 0.273 & 0.537 \\
EvCBR\({}_{re}\) & 0.349 & 0.256 & 0.524 \\
EvCBR\({}_{re+base}\) & 0.368 & 0.277 & 0.543 \\
\hline
\hline
\multicolumn{4}{|c|}{Dataset: Wikidata Causal Events} \\
\hline
Model & MRR & Hits@1 & Hits@10 \\
\hline
TransE & 0.139 & 0.096 & 0.207 \\
ComplEx & 0.030 & 0.024 & 0.050 \\
RotatE & 0.097 & 0.081 & 0.125 \\
NoGE & 0.149 & 0.118 & 0.210 \\
AnyBURL & 0.149 & 0.121 & 0.202 \\
ProbCBR & 0.122 & 0.107 & 0.154 \\
\hline
EvCBR\({}_{base}\) & 0.156 & 0.118 & 0.222 \\
EvCBR\({}_{re}\) & 0.158 & **0.130** & 0.212 \\
EvCBR\({}_{re+base}\) & **0.159** & 0.125 & **0.227** \\
\hline
\end{tabular}
\end{table}
Table 1. Results for our 2-hop LP experiments.
2310.20294
Robust nonparametric regression based on deep ReLU neural networks
In this paper, we consider robust nonparametric regression using deep neural networks with ReLU activation function. While several existing theoretically justified methods are geared towards robustness against identical heavy-tailed noise distributions, the rise of adversarial attacks has emphasized the importance of safeguarding estimation procedures against systematic contamination. We approach this statistical issue by shifting our focus towards estimating conditional distributions. To address it robustly, we introduce a novel estimation procedure based on $\ell$-estimation. Under a mild model assumption, we establish general non-asymptotic risk bounds for the resulting estimators, showcasing their robustness against contamination, outliers, and model misspecification. We then delve into the application of our approach using deep ReLU neural networks. When the model is well-specified and the regression function belongs to an $\alpha$-H\"older class, employing $\ell$-type estimation on suitable networks enables the resulting estimators to achieve the minimax optimal rate of convergence. Additionally, we demonstrate that deep $\ell$-type estimators can circumvent the curse of dimensionality by assuming the regression function closely resembles the composition of several H\"older functions. To attain this, new deep fully-connected ReLU neural networks have been designed to approximate this composition class. This approximation result can be of independent interest.
Juntong Chen
2023-10-31T09:05:09Z
http://arxiv.org/abs/2310.20294v1
# Robust nonparametric regression based on deep ReLU neural networks ###### Abstract. In this paper, we consider robust nonparametric regression using deep neural networks with ReLU activation function. While several existing theoretically justified methods are geared towards robustness against identical heavy-tailed noise distributions, the rise of adversarial attacks has emphasized the importance of safeguarding estimation procedures against systematic contamination. We approach this statistical issue by shifting our focus towards estimating conditional distributions. To address it robustly, we introduce a novel estimation procedure based on \(\ell\)-estimation. Under a mild model assumption, we establish general non-asymptotic risk bounds for the resulting estimators, showcasing their robustness against contamination, outliers, and model misspecification. We then delve into the application of our approach using deep ReLU neural networks. When the model is well-specified and the regression function belongs to an \(\alpha\)-Holder class, employing \(\ell\)-type estimation on suitable networks enables the resulting estimators to achieve the minimax optimal rate of convergence. Additionally, we demonstrate that deep \(\ell\)-type estimators can circumvent the curse of dimensionality by assuming the regression function closely resembles the composition of several Holder functions. To attain this, new deep fully-connected ReLU neural networks have been designed to approximate this composition class. This approximation result can be of independent interest. Key words and phrases: Nonparametric regression, robust estimation, deep neural networks, circumventing the curse of dimensionality, supremum of an empirical process. 2010 Mathematics Subject Classification: Primary 62G35, 62G05; Secondary 68T01. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N\({}^{\text{o}}\) 811017.
and \(f^{\star}:\mathscr{W}\to\mathbb{R}\) is an unknown regression function that we want to estimate. A substantial body of literature addresses this problem through the minimization of empirical least squares loss functions. By integrating such a classical estimation approach with various approximation models, several methods have been developed and investigated. These include kernel regression (e.g., Nadaraya (1964) and Watson (1964)), local polynomial regression (e.g., Fan (1992, 1993)), spline-based regression (e.g., Wahba (1990) and Friedman (1991)), and wavelet-based regression (e.g., Donoho et al. (1995) and Donoho and Johnstone (1998)), among others. In-depth discussions on different methods and theories related to nonparametric regression can also be found in books such as Gyorfi et al. (2002) and Tsybakov (2009). Particularly, when \(f^{\star}:[0,1]^{d}\to\mathbb{R}\) is of \(\alpha\)-smoothness, Stone (1982) demonstrated that the minimax optimal convergence rate is of the order \(n^{-2\alpha/(2\alpha+d)}\) with respect to some squared \(\mathbb{L}_{2}\)-loss. As the value of \(d\) becomes large, the convergence rate can become extremely slow, which is a well-known phenomenon called the curse of dimensionality. One possible way to overcome this difficulty is to make additional structural assumptions on the regression function \(f^{\star}\), namely to assume that the unknown function \(f^{\star}\) is of the form \(f_{1}\circ f_{2}\), where \(f_{1}\) and \(f_{2}\) have some specific structures (e.g., Stone (1985), Horowitz and Mammen (2007) and Baraud and Birge (2014)). For instance, under the generalized additive structure of \(f^{\star}\), Horowitz and Mammen (2007) showed that one can estimate the regression function \(f^{\star}\) with rate \(n^{-2\alpha/(2\alpha+1)}\), which is independent of the dimension \(d\). Recently, estimation based on neural networks has demonstrated remarkable success in both experimental and practical domains.
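The dimension dependence of Stone's rate \(n^{-2\alpha/(2\alpha+d)}\) quoted above can be made concrete with a quick numerical illustration (the values of \(n\) and \(\alpha\) below are chosen purely for illustration):

```python
# Numerical illustration of Stone's minimax rate n^(-2a/(2a+d)) for an
# a-smooth regression function: with a = 2 and n = 10^6 (illustrative
# values), the achievable accuracy deteriorates rapidly as d grows.

def stone_rate(n, alpha, d):
    return n ** (-2 * alpha / (2 * alpha + d))

n, alpha = 10**6, 2.0
rates = {d: stone_rate(n, alpha, d) for d in (1, 5, 20)}
```

With these values the rate degrades from roughly \(1.6\times 10^{-5}\) at \(d=1\) to \(0.1\) at \(d=20\), which is the deterioration the structural assumptions discussed here are designed to avoid.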
Inspiring work has been carried out to systematically analyze the theoretical properties of least squares estimators implemented by various structured neural networks, particularly those employing a ReLU activation function. We mention the work of Schmidt-Hieber (2020), Kohler and Langer (2021), Suzuki and Nitanda (2021) and Jiao et al. (2023), among others. Based on the established approximation results, these studies have revealed that least squares estimators implemented using appropriate neural network architectures achieve the same minimax convergence rate as that obtained in Stone (1982) when considering a regression function \(f^{\star}\) with \(\alpha\)-smoothness. However, these findings also indicate that without further assumptions on the underlying model, nonparametric regression using deep neural networks is not immune to the curse of dimensionality. Much effort has been devoted to mitigating this issue through network-based estimation approaches (e.g., Schmidt-Hieber (2019), Chen et al. (2022) and Nakada and Imaizumi (2020) where they assume that the distribution of \(W\) is supported on a low-dimensional manifold, or the covariates exhibit a low intrinsic dimension, and Bauer and Kohler (2019), Suzuki (2019), where structural assumptions are imposed on \(f^{\star}\)). In particular, it is worth mentioning that, as shown in Schmidt-Hieber (2020), neural networks, especially deep ones, exhibit a natural advantage in approximating functions with a compositional structure compared to classical approximation methods. Given a collection of candidate estimators for \(f^{\star}\), most of the aforementioned approaches derive their estimators by minimizing a least-squares-based objective function. While possessing several desirable properties, least squares estimators are highly susceptible to data contamination and the presence of outliers, which are common scenarios encountered in practical applications. 
To address this issue of instability, several alternative approaches have been proposed in the context of linear regression, such as Huber regression (Huber (1973)), Tukey's biweight regression (Beaton and Tukey (1974)) and the least absolute deviation regression (Bassett and Koenker (1978)). In the realm of deep learning, a prevailing characteristic is the presence of data abundant in quantity but often deficient in quality. As a result, robustness becomes an essential property to consider when implementing estimation procedures based on deep neural networks (Barron (2019)). However, there has been significantly less research conducted in this field. In Lederer (2020), upper bounds for the expected excess risks of a specific class of estimators were established. These estimators are obtained by minimizing empirical risk using unbounded, Lipschitz-continuous loss functions on feed-forward neural networks, covering cases such as the least absolute deviation loss, Huber loss, Cauchy loss, and Tukey's biweight loss. Jiao et al. (2023) investigated a similar class of estimators. They relaxed several assumptions required in Lederer (2020), which led to the establishment of their non-asymptotic expected excess risk bounds under milder conditions. They also considered the approximation error introduced by the ReLU neural network and demonstrated that the curse of dimensionality can be mitigated for such class of estimators if the distribution of \(W\) is assumed to be supported on an approximately low-dimensional manifold. Drawing upon the approximation results established in Schmidt-Hieber (2020) and Suzuki (2019), Padilla et al. (2022) examined the properties of quantile regression using deep ReLU neural networks. When the underlying quantile function can be represented as a composition of Holder functions or when it belongs to a Besov space, they derived convergence rates for the resulting estimators in terms of the mean squared error at the design points. 
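Of the robust losses mentioned above, Huber's is the prototypical example: quadratic near the origin and linear in the tails, so large residuals influence the fit far less than under the squared loss. A minimal sketch (the threshold `delta` is a free tuning parameter):

```python
# Minimal sketch of the Huber loss: quadratic for |u| <= delta and
# linear beyond, which caps the influence of outlying residuals
# compared to the squared loss.

def huber(u, delta=1.0):
    a = abs(u)
    if a <= delta:
        return 0.5 * u * u
    return delta * (a - 0.5 * delta)
```

The two branches agree at \(|u|=\delta\) (both give \(\delta^{2}/2\)), and for \(|u|>\delta\) the loss grows only linearly, interpolating between least squares and least absolute deviation.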
All the previously mentioned work that addresses robust nonparametric regression using deep neural networks assumes the existence of the regression function \(f^{\star}\). The approaches they considered and analyzed focus on the robustness under the scenarios where there is a departure from Gaussian distributions to heavy-tailed distributions. When it comes to the case of adversarial attacks, where the statistical model is misspecified from a distributional perspective, their results are unable to provide a theoretical guarantee for the performance of the resulting estimators. In this paper, we approach the nonparametric regression problem from a novel perspective that acknowledges the possibility of misspecification at the distributional level. We propose a general procedure under mild assumptions to address this problem in a robust manner and investigate its application to ReLU neural networks. Specifically, our primary contributions are as follows. 1. We consider this estimation problem from the perspective of estimating the conditional distributions \(Q_{i}^{\star}(W_{i})\) of \(Y_{i}\) given \(W_{i}\). To handle this statistical issue, we propose an \(\ell\)-type estimation procedure based on a development of \(\ell\)-estimation methodology proposed in Baraud (2021). Our approach is based on the presumption that there exists an underlying function \(f^{\star}\) on \(\mathscr{W}\) belonging to some collection \(\overline{\mathcal{F}}\) such that \(Q_{i}^{\star}(W_{i})\) is of the form \(Q_{f^{\star}(W_{i})}\sim\mathcal{N}(f^{\star}(W_{i}),\sigma^{2})\) for all \(i\in\{1,\ldots,n\}\). However, our method is not confined to this assumption. In other words, we allow our statistical models to be slightly misspecified: \(Q_{i}^{\star}(W_{i})\) may not be exactly of the form \(Q_{f^{\star}(W_{i})}\) and even if they were, \(f^{\star}\) may not belong to the class \(\overline{\mathcal{F}}\). 2. 
Assuming that \(\overline{\mathcal{F}}\) is a VC-subgraph class on \(\mathscr{W}\), we derive a non-asymptotic risk bound for the resulting estimators, measured in terms of the total-variation type distance. Building upon this general result, we offer a comprehensive elucidation of the robustness of our estimators with regard to model misspecification at the distributional level. We also provide a quantitative comparison between the \(\ell\)-type estimators and another type of robust estimators known as \(\rho\)-estimators, which were introduced in Baraud and Chen (2020). 3. We showcase the application of our \(\ell\)-type estimation procedure using ReLU neural network models. In the case of a well-specified model, we derive uniform risk bounds over Holder classes for our estimators. By incorporating the lower bounds that we established, we demonstrate that the resulting estimators achieve the minimax optimal rate of convergence. 4. We consider the problem of circumventing the curse of dimensionality by imposing structural assumptions on the underlying regression function \(f^{\star}\). More precisely, we assume the function \(f^{\star}\) can be expressed as a composition of several Holder functions, following the consideration in Schmidt-Hieber (2020). In contrast to using sparsity-based ReLU neural networks as in Schmidt-Hieber (2020), we develop new deep fully-connected ReLU neural networks to approximate composite Holder functions, enhancing the informativeness of the architectural design. This approximation result can be of independent interest. By leveraging the derived approximation theory, we demonstrate that the \(\ell\)-type estimators implemented based on appropriate network models can alleviate the curse of dimensionality while converging to the truth at a minimax optimal rate. The paper is organized as follows. In Section 2, we describe our specific statistical framework and set notation. 
In Section 3, we introduce our estimation procedure based on \(\ell\)-estimation and present our main result regarding the risk bounds for the resulting estimators. We also provide an explanation of why the deviation inequality we establish ensures the desired robustness property of the estimators and compare them with the \(\rho\)-estimators in that section. In Section 4, we delve into the implementation of our \(\ell\)-type estimation approach on ReLU neural networks. We establish uniform risk bounds over Holder classes when the data are truly i.i.d. and the regression function exists. By combining the lower bounds we have derived, we demonstrate the minimax optimality of our estimators under the well-specified scenario. The problem of circumventing the curse of dimensionality is addressed in Section 5, where we impose structural assumptions on the regression function \(f^{\star}\). Section 6 is devoted to most of the proofs in this paper. ## 2. The statistical setting Let \(X_{i}=(W_{i},Y_{i})\), for \(i\in\{1,\ldots,n\}\) be \(n\) pairs of independent, but not necessarily i.i.d., random variables with values in a measurable product space \((\mathscr{X},\mathcal{X})=(\mathscr{W}\times\mathscr{Y},\mathcal{W}\otimes \mathcal{Y})\). Denote the set of all probabilities on \((\mathscr{Y},\mathcal{Y})\) as \(\mathscr{T}\). We assume that the conditional distribution of \(Y_{i}\) given \(W_{i}=w_{i}\) exists and is given by the value at \(w_{i}\) of a measurable function \(Q_{i}^{\star}\) from \((\mathscr{W},\mathcal{W})\) to \(\mathscr{T}\). We endow \(\mathscr{T}\) with the Borel \(\sigma\)-algebra \(\mathcal{T}\) associated with the total variation distance. 
Recall that when given two probabilities \(P_{1}\) and \(P_{2}\) on a measurable space \((A,\mathscr{A})\), the total variation distance \(\|P_{1}-P_{2}\|_{TV}\) between \(P_{1}\) and \(P_{2}\) is defined as \[\|P_{1}-P_{2}\|_{TV}=\sup_{\mathcal{A}\in\mathscr{A}}\left[P_{1}(\mathcal{A}) -P_{2}(\mathcal{A})\right]=\frac{1}{2}\int_{A}\left|\frac{dP_{1}}{d\mu}-\frac {dP_{2}}{d\mu}\right|d\mu,\] where \(\mu\) is any reference measure that dominates both \(P_{1}\) and \(P_{2}\). With this chosen \(\mathcal{T}\), for any \(i\in\{1,\ldots,n\}\), the mapping \(w\mapsto\|Q_{i}^{\star}(w)-R\|_{TV}\) on \((\mathscr{W},\mathcal{W})\) is measurable for any probability \(R\in\mathscr{T}\). Given a class of real-valued measurable functions \(\overline{\mathcal{F}}\) on \(\mathscr{W}\), we presume that, there exists a function \(f^{\star}\in\overline{\mathcal{F}}\) for which the conditional distributions \(Q_{i}^{\star}(W_{i})\) have the structure of \(Q_{f^{\star}(W_{i})}=\mathcal{N}(f^{\star}(W_{i}),\sigma^{2})\) or are at least in close proximity to it. The function \(f^{\star}\) is what we refer to as the regression function. It is worth emphasizing, as we mentioned in Section 1, that our statistical model could potentially be misspecified: the conditional distributions \(Q_{i}^{\star}(W_{i})\) might not precisely take the form \(Q_{f^{\star}(W_{i})}\), or even if they did, the regression function \(f^{\star}\) might not belong to the class \(\overline{\mathcal{F}}\). What we are truly assuming is that the collection \(\{Q_{f},\ f\in\overline{\mathcal{F}}\}\) provides a suitable approximation of the actual conditional distributions \(Q_{i}^{\star}\), for \(i\in\{1,\ldots,n\}\). Let \(\mathscr{Q}_{\mathscr{W}}\) represent the collection of all conditional probabilities from \((\mathscr{W},\mathcal{W})\) to \((\mathscr{T},\mathcal{T})\), and define \(\boldsymbol{\mathscr{Q}}_{\mathscr{W}}=\mathscr{Q}_{\mathscr{W}}^{n}\). 
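For the equal-variance Gaussian conditional distributions used throughout this paper, the total variation distance just recalled admits a closed form, \(\|\mathcal{N}(\mu_{1},\sigma^{2})-\mathcal{N}(\mu_{2},\sigma^{2})\|_{TV}=2\Phi\left(|\mu_{1}-\mu_{2}|/(2\sigma)\right)-1\), since the set where one density exceeds the other is the half-line cut at the midpoint \((\mu_{1}+\mu_{2})/2\). This is a standard fact (not stated in the text) and can be checked numerically:

```python
import math

# Closed-form TV distance between N(mu1, s^2) and N(mu2, s^2):
#   2 * Phi(|mu1 - mu2| / (2 s)) - 1,
# where Phi is the standard normal CDF. Illustrative sketch only.

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tv_gauss(mu1, mu2, sigma):
    return 2.0 * phi(abs(mu1 - mu2) / (2.0 * sigma)) - 1.0
```

As a cross-check against the supremum definition, taking \(\mathcal{A}=\{y<(\mu_{1}+\mu_{2})/2\}\) with \(\mu_{1}<\mu_{2}\) gives \(P_{1}(\mathcal{A})-P_{2}(\mathcal{A})=\Phi(\Delta/(2\sigma))-\Phi(-\Delta/(2\sigma))\) with \(\Delta=\mu_{2}-\mu_{1}\), the same value.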
As a direct result, we obtain the \(n\)-tuple \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\in\boldsymbol{ \mathscr{Q}}_{\mathscr{W}}\). We equip the space \(\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\) with a distance metric resembling the total variation distance. More precisely, for \(\mathbf{Q}=(Q_{1},\ldots,Q_{n})\) and \(\mathbf{Q}^{\prime}=(Q_{1}^{\prime},\ldots,Q_{n}^{\prime})\) in \(\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\), \[\ell(\mathbf{Q},\mathbf{Q}^{\prime}) =\frac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}\|Q_{i}(W_{i})-Q_{i}^{ \prime}(W_{i})\|_{TV}\right]\] \[=\frac{1}{n}\sum_{i=1}^{n}\int_{\mathscr{W}}\|Q_{i}(w)-Q_{i}^{ \prime}(w)\|_{TV}dP_{W_{i}}(w). \tag{1}\] Particularly, when \(\ell(\mathbf{Q},\mathbf{Q}^{\prime})=0\), it signifies that \(Q_{i}=Q_{i}^{\prime}\)\(P_{W_{i}}\)-a.s., for all \(i\). Building on the \(n\) observations \(\boldsymbol{X}=(X_{1},\ldots,X_{n})\), we will introduce an estimation approach in the later section to develop an estimator \(\widehat{f}(\boldsymbol{X})\in\overline{\mathcal{F}}\) for the potential regression function \(f^{\star}\) (which may not exist). Furthermore, we aim to estimate the \(n\)-tuple \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\) by means of the structure \(\mathbf{Q}_{\widehat{f}}=(Q_{\widehat{f}},\ldots,Q_{\widehat{f}})\). We assess the performance of the estimator \(\mathbf{Q}_{\widehat{f}}\) for \(\mathbf{Q}^{\star}\) through the measure \(\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\). We denote \(P=Q\cdot P_{W}\) when \(P\) represents the distribution of a random variable \((W,Y)\in\mathscr{W}\times\mathscr{Y}\), where the marginal distribution of \(W\) is \(P_{W}\) and the conditional distribution of \(Y\) given \(W\) is \(Q\). 
One can observe that when \(P_{1}=Q_{1}\cdot P_{W}\) and \(P_{2}=Q_{2}\cdot P_{W}\), the total variation distance between \(P_{1}\) and \(P_{2}\) can be represented as \[\|P_{1}-P_{2}\|_{TV}=\int_{\mathscr{W}}\|Q_{1}(w)-Q_{2}(w)\|_{TV}dP_{W}(w).\] By defining \(P_{i}^{\star}=Q_{i}^{\star}\cdot P_{W_{i}}\) and \(P_{i,f}=Q_{f}\cdot P_{W_{i}}\) for a measurable function \(f\) that maps \(\mathscr{W}\) to \(\mathbb{R}\), we can represent \(\ell(\mathbf{Q}^{\star},\mathbf{Q}_{f})\) as the average total variation distance over \(n\) samples: \[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{f})=\frac{1}{n}\sum_{i=1}^{n}\|P_{i}^{ \star}-P_{i,f}\|_{TV}. \tag{2}\] In the case where \(W_{i}\) are i.i.d. with the common distribution \(P_{W}\) and \(Q_{i}^{\star}=Q^{\star}\) for all \(i\in\{1,\ldots,n\}\), we may slightly abuse the notation \(\ell(Q^{\star},Q_{\widehat{f}})\) to measure the distance between \(Q^{\star}\) and \(Q_{\widehat{f}}\) defined as \[\ell(Q^{\star},Q_{\widehat{f}})=\int_{\mathscr{W}}\|Q^{\star}(w)-Q_{\widehat {f}(w)}\|_{TV}dP_{W}(w). \tag{3}\] We conclude this section by introducing some notations that will be useful later. We denote \(\mathbb{N}^{*}\) the set of all positive natural numbers and \(\mathbb{R}^{*}_{+}\) the set of all positive real numbers. For any \(x\in\mathbb{R}\), we use the notation \(\lfloor x\rfloor\) to represent the largest integer strictly smaller than \(x\), and the notation \(\lceil x\rceil\) to represent the least integer greater than or equal to \(x\). Given any set \(J\), we denote its cardinality by \(|J|\). For a \(\mathbf{R}\in\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\) and a set \(\mathbf{A}\subset\boldsymbol{\mathscr{Q}}_{\mathscr{W}}\), we define \(\ell(\mathbf{R},\mathbf{A})=\inf_{\mathbf{R}^{\prime}\in\mathbf{A}}\ell(\mathbf{R},\mathbf{R}^{\prime})\). Unless otherwise specified, log denotes the logarithm function with base \(e\). 
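Returning to the loss in (3): under the well-specified Gaussian model \(Q_{f(w)}=\mathcal{N}(f(w),\sigma^{2})\), the pointwise total variation distance has the standard closed form \(2\Phi(|f^{\star}(w)-f(w)|/(2\sigma))-1\) (a known fact, not stated in the text), so \(\ell(Q^{\star},Q_{f})\) can be approximated by Monte Carlo over draws of \(W\). A sketch with purely illustrative functions:

```python
import math
import random

# Monte Carlo sketch of the loss in (3) for the Gaussian model:
# Q*(w) = N(f_star(w), s^2), Q_f(w) = N(f(w), s^2), pointwise TV
# distance 2 * Phi(|f_star(w) - f(w)| / (2 s)) - 1, averaged over
# draws of W. All concrete functions below are illustrative.

def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ell_loss(f_star, f, sigma, draw_w, n_mc=5000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_mc):
        w = draw_w(rng)
        total += 2.0 * phi(abs(f_star(w) - f(w)) / (2.0 * sigma)) - 1.0
    return total / n_mc

# W uniform on [0, 1]; the candidate is off by a constant 0.5.
loss = ell_loss(lambda w: w, lambda w: w + 0.5, 1.0,
                lambda rng: rng.random())
```

Because the discrepancy is constant here, the average equals \(2\Phi(0.25)-1\) exactly; for \(f=f^{\star}\) the loss is 0, and it is always bounded by 1, which is the sense in which this total-variation risk remains stable under a few arbitrarily corrupted observations.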
Let \((E,\mathcal{E})\) be a measurable space and \(\mu\) be a \(\sigma\)-finite measure on \((E,\mathcal{E})\). For \(k\in[1,+\infty]\), we define \(\mathcal{L}_{k}(E,\mu)\) the collection of all the measurable functions \(f\) on \((E,\mathcal{E},\mu)\) such that \(\|f\|_{k,\mu}<+\infty\), where \[\|f\|_{k,\mu}=\begin{cases}\left(\int_{E}|f|^{k}d\mu\right)^{1/k},&\text{for $k \in[1,+\infty)$},\\ \inf\{K>0,\;|f|\leq K\;\mu-\text{a.e.}\},&\text{for $k=\infty$}.\end{cases}\] We denote the associated equivalent classes as \(\mathbb{L}_{k}(E,\mu)\) where any two functions coincide for \(\mu\)-a.e. can not be distinguished. In particular, we write the norm \(\|\cdot\|_{k}\) with \(k\in[1,+\infty]\) when \(\mu=\lambda\) is the Lebesgue measure. Throughout the paper, \(c\) or \(C\) denotes positive numerical constant which may vary from line to line. ## 3. \(\ell\)-Type estimation under regression setting We employ an \(\ell\)-type estimator, drawing inspiration from the concepts outlined in a series of papers presented in Baraud (2021) within a general framework, as well as from the content of Baraud et al. (2022), which is specifically dedicated to density estimation. Consider a set of \(n\) independent random variables denoted as \(X_{1},\ldots,X_{n}\), where their values are drawn from a measured space \((\mathscr{X},\mathcal{X})\). In essence, \(\ell\)-estimation offers a versatile approach to acquiring a robust estimator for the actual joint distribution \(\mathbf{P}^{\star}\) of \(X_{1},\ldots,X_{n}\). The established \(\ell\)-estimation approach begins by introducing a set of potential probabilities \(\overline{\mathscr{P}}\), intended to offer a suitable approximation of \(\mathbf{P}^{\star}\). The primary challenge in implementing \(\ell\)-estimation within a regression framework lies in the absence of information concerning the marginal distributions \(P_{W_{i}}\) required for constructing candidate probabilities and designing the estimation procedure. 
Moreover, our objective does not encompass the task of estimating these marginal distributions. In this scenario, further effort is necessary to implement \(\ell\)-type estimation and establish a risk bound for the resulting estimator. ### Constructing the \(\ell\)-type estimator Let \(\overline{\mathcal{F}}\) be a collection of real-valued measurable functions on \(\mathscr{W}\), which we call a model. For any \(f\in\overline{\mathcal{F}}\), we denote by \(Q_{f}\) the conditional Gaussian distribution induced by the function \(f\), i.e., given any \(w\in\mathscr{W}\), \(Q_{f(w)}\) is a normal distribution centered around \(f(w)\), with a variance of \(\sigma^{2}\), and by \(q_{f(w)}\) the density function of the Gaussian distribution \(Q_{f(w)}\) with respect to the Lebesgue measure. To prevent any measurability issue, we introduce the notation \(\mathcal{F}\), representing either a finite or, at most, a countable subset of \(\overline{\mathcal{F}}\). Subsequently, the majority of our discussion will be focused on the set \(\mathcal{F}\). Nevertheless, as we delve into further details, it turns out that through a careful choice of \(\mathcal{F}\), no approximation power is sacrificed in comparison to estimation based on \(\overline{\mathcal{F}}\). Given \(f_{1},f_{2}\in\mathcal{F}\), we define for any \((w,y)\in\mathscr{W}\times\mathscr{Y}\), \[t_{(f_{1},f_{2})}(w,y)=\mathbbm{1}_{q_{f_{2}(w)}(y)>q_{f_{1}(w)}(y)}-Q_{f_{1}(w) }\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right).\] The function \(t_{(f_{1},f_{2})}(\cdot,\cdot)\) satisfies the following inequalities. **Lemma 1**.: _Let \(P^{\star}=Q^{\star}\cdot P_{W}\) represent the distribution of a pair of random variables \((W,Y)\in\mathscr{W}\times\mathscr{Y}\), where the first marginal distribution is \(P_{W}\), and the conditional distribution of \(Y\) given \(W\) is denoted by \(Q^{\star}\). 
For any \(f_{1},f_{2}\in\mathcal{F}\), any \(P_{W}\) and any \(Q^{\star}\in\mathscr{Q}_{\mathscr{W}}\), we have_ \[\ell(Q_{f_{1}},Q_{f_{2}})-\ell(Q^{\star},Q_{f_{2}})\leq\mathbb{E}_{P^{\star}} \left[t_{(f_{1},f_{2})}(W,Y)\right]\leq\ell(Q^{\star},Q_{f_{1}}). \tag{4}\] The proof of Lemma 1 is deferred to Section 6.1. Lemma 1 implies that the family of test statistics \(t_{(f_{1},f_{2})}\) carries information about the \(\ell\)-type distances among \(Q_{f_{1}}\), \(Q_{f_{2}}\), and \(Q^{\star}\), which is an essential property for constructing our final estimator. For any \(f_{1},f_{2}\in\mathcal{F}\) and \(n\) pairs of observations \(\mathbf{X}=(X_{1},\ldots,X_{n})\) with \(X_{i}=(W_{i},Y_{i})\), \(i\in\{1,\ldots,n\}\), we define the statistic \[\mathbf{T}_{l}(\mathbf{X},f_{1},f_{2})=\sum_{i=1}^{n}t_{(f_{1},f_{2})}(W_{i},Y_{i})\] and set \[\mathbf{T}_{l}(\mathbf{X},f_{1})=\sup_{f_{2}\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X},f _{1},f_{2}).\] Our final estimator of \(\mathbf{Q}^{\star}=(Q^{\star}_{1},\ldots,Q^{\star}_{n})\) is defined as \(\mathbf{Q}_{\widehat{f}}=(Q_{\widehat{f}},\ldots,Q_{\widehat{f}})\), where \(\widehat{f}(\mathbf{X})\) is an \(\epsilon\)-minimizer over \(\mathcal{F}\) of the map \(f_{1}\mapsto\mathbf{T}_{l}(\mathbf{X},f_{1})\). More precisely, given \(\epsilon>0\), the \(\ell\)-type estimator within the set \(\mathcal{F}\) is defined as any measurable element \(\widehat{f}(\mathbf{X})\) of the random (and non-void) set \[\mathscr{E}(\mathbf{X},\epsilon)=\left\{f\in\mathcal{F},\ \mathbf{T}_{l}(\mathbf{X},f) \leq\inf_{f^{\prime}\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X},f^{\prime})+\epsilon \right\}.\] **Remark 1**.: The parameter \(\epsilon\) is devised to ensure the existence of the estimator \(\widehat{f}\). As we will explore in Section 3.2, it is prudent to choose a relatively small value for \(\epsilon\), specifically not significantly greater than \(1\), as this choice improves the risk bound of an \(\ell\)-type estimator. 
Specifically, when a function \(f\in\mathcal{F}\) exists such that \(\mathbf{T}_{l}(\mathbf{X},f)=\inf_{f^{\prime}\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X}, f^{\prime})\), it is advisable to prioritize this \(f\) as the estimator \(\widehat{f}\). Furthermore, considering that \(\mathbf{T}_{l}(\mathbf{X},f)\geq\mathbf{T}_{l}(\mathbf{X},f,f)=0\) for all \(f\in\mathcal{F}\), any function \(\widehat{f}\in\mathcal{F}\) meeting the condition \(0\leq\mathbf{T}_{l}(\mathbf{X},\widehat{f})\leq\epsilon\) qualifies as an \(\ell\)-type estimator. ### The performance of the \(\ell\)-type estimator Before delving into the theoretical performance of our \(\ell\)-type estimator, we lay the foundation by stating our main assumption on the model \(\overline{\mathcal{F}}\). To facilitate this, we introduce the following definition: **Definition 1** (VC-subgraph).: _An (open) subgraph of a function \(f\) in \(\overline{\mathcal{F}}\) is the subset of \(\mathscr{W}\times\mathbb{R}\) given by_ \[\mathscr{C}_{f}=\left\{(w,u)\in\mathscr{W}\times\mathbb{R},\,f(w)>u\right\}.\] _A collection \(\overline{\mathcal{F}}\) of real-valued measurable functions on \(\mathscr{W}\) is VC-subgraph with dimension not larger than \(V\) if, for any finite subset \(\mathcal{S}\subset\mathscr{W}\times\mathbb{R}\) with \(|\mathcal{S}|=V+1\), there exists at least one subset \(S\) of \(\mathcal{S}\) such that for any \(f\in\overline{\mathcal{F}}\), \(S\) is not the intersection of \(\mathcal{S}\) with \(\mathscr{C}_{f}\), i.e._ \[S\neq\mathcal{S}\cap\mathscr{C}_{f}\quad\text{whatever }f\in\overline{ \mathcal{F}}.\] Herein, we proceed to introduce our primary assumption concerning the model \(\overline{\mathcal{F}}\). **Assumption 1**.: _The class of functions \(\overline{\mathcal{F}}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\geq 1\)._ Encompassing a range of widely employed examples, Assumption 1 is formulated under a considerably broad scope. 
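Before turning to examples satisfying Assumption 1 and to the risk bounds, the procedure of Section 3.1 can be sketched numerically. For equal-variance Gaussians, the event \(q_{f_{2}(w)}>q_{f_{1}(w)}\) is the half-line on \(f_{2}(w)\)'s side of the midpoint \((f_{1}(w)+f_{2}(w))/2\), so \(t_{(f_{1},f_{2})}\) has a closed form. A toy illustration over a hypothetical finite class of linear candidates (here \(\epsilon=0\), which is attainable over a finite class; the data-generating slope and all sizes are invented for the example):

```python
import random
from statistics import NormalDist

def t_stat(f1, f2, w, y, sigma):
    """t_{(f1,f2)}(w, y): indicator of {q_{f2(w)}(y) > q_{f1(w)}(y)} minus the
    Q_{f1(w)}-probability of that event; for equal-variance Gaussians the event
    is the half-line on f2(w)'s side of the midpoint (f1(w) + f2(w)) / 2."""
    a, b = f1(w), f2(w)
    if a == b:
        return 0.0
    midpoint = 0.5 * (a + b)
    indicator = 1.0 if (y > midpoint) == (b > a) else 0.0
    return indicator - NormalDist().cdf(-abs(b - a) / (2.0 * sigma))

def ell_estimator(data, candidates, sigma):
    """Minimizer over the finite class of f1 -> T_l(X, f1) = sup_{f2} sum_i t."""
    def T(f1):
        return max(sum(t_stat(f1, f2, w, y, sigma) for w, y in data)
                   for f2 in candidates)
    return min(candidates, key=T)

# Data from f*(w) = 2w with Gaussian noise; candidates are the slopes 0..3.
rng = random.Random(1)
sigma = 0.5
data = [(w, 2.0 * w + rng.gauss(0.0, sigma))
        for w in (rng.random() for _ in range(500))]
candidates = [lambda w, c=c: c * w for c in (0.0, 1.0, 2.0, 3.0)]
f_hat = ell_estimator(data, candidates, sigma)
slope_hat = f_hat(1.0)  # slope of the selected candidate
```

With 500 well-separated observations the criterion reliably selects the true slope 2; note that \(\mathbf{T}_{l}(\mathbf{X},f_{1})\geq 0\) since the supremum includes \(f_{2}=f_{1}\).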
For instance, when \(\overline{\mathcal{F}}\) is contained in a linear space with finite dimension \(D\), Assumption 1 is fulfilled with \(V=D+1\) according to Lemma 2.6.15 of van der Vaart and Wellner (1996). Moreover, when \(\overline{\mathcal{F}}\) represents a fully connected ReLU neural network, it has been demonstrated in Bartlett et al. (2019) [Theorem 7] that the VC-dimension of \(\overline{\mathcal{F}}\) is linked to the depth and width of the network. Further elaboration on \(\ell\)-estimation based on neural networks will be provided in Section 4 and Section 5. Building upon Assumption 1, we can establish the following non-asymptotic exponential inequalities for the upper deviations of a total variation type distance between the true distribution of the data and the estimated one based on \(\widehat{f}(\mathbf{X})\). **Theorem 1**.: _Under Assumption 1, whatever the conditional distributions \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\) of the \(Y_{i}\) given \(W_{i}\) and the distributions of \(W_{i}\), any \(\ell\)-type estimator \(\widehat{f}\) based on the class \(\mathcal{F}\) satisfies that for any \(\overline{f}\in\mathcal{F}\) and any \(\xi>0\), with a probability at least \(1-e^{-\xi}\),_ \[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q} ^{\star},\mathbf{Q}_{\overline{f}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n }+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}. \tag{5}\] _In particular, with the triangle inequality,_ \[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\leq 3\ell(\mathbf{Q}^{ \star},\boldsymbol{\mathscr{Q}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n}+ \sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}, \tag{6}\] _where \(\boldsymbol{\mathscr{Q}}=\{\mathbf{Q}_{f},\ f\in\mathcal{F}\}\). 
As a consequence of (6), for any \(n\geq V\), integration with respect to \(\xi>0\) yields the following risk bound for the resulting estimator \(\mathbf{Q}_{\widehat{f}}=(Q_{\widehat{f}},\ldots,Q_{\widehat{f}})\)_ \[\mathbb{E}\left[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\right]\leq C _{\epsilon}\left[\ell(\mathbf{Q}^{\star},\boldsymbol{\mathscr{Q}})+\sqrt{ \frac{V}{n}}\right], \tag{7}\] _where \(C_{\epsilon}>0\) is a numerical constant depending on \(\epsilon\) only._ The proof of Theorem 1 is deferred to Section 6.2. Let us now provide some remarks regarding this result. **Remark 2**.: Consider the set \(\overline{\boldsymbol{\mathscr{Q}}}=\{\mathbf{Q}_{f},\ f\in\overline{ \mathcal{F}}\}\). It is clear that if \(\boldsymbol{\mathscr{Q}}\) is dense in \(\overline{\boldsymbol{\mathscr{Q}}}\) with respect to the (pseudo) distance \(\ell\), both (6) and (7) also remain valid when replacing \(\boldsymbol{\mathscr{Q}}\) with \(\overline{\boldsymbol{\mathscr{Q}}}\). This is the situation in which the subset \(\mathcal{F}\) is dense in \(\overline{\mathcal{F}}\) with respect to the topology of pointwise convergence. For further insights in this direction, we refer to Section 4.2 of Baraud and Birge (2018). For the sake of simplicity in our explanation, let us temporarily assume in this section that \(\boldsymbol{\mathscr{Q}}\) is dense in \(\overline{\boldsymbol{\mathscr{Q}}}\) with respect to \(\ell\). **Remark 3**.: According to (7), the risk of the resulting estimator is bounded, up to a numerical constant, by the sum of two terms. The term \(\ell(\mathbf{Q}^{\star},\overline{\boldsymbol{\mathscr{Q}}})\) corresponds to the approximation error incurred by employing the model \(\overline{\mathcal{F}}\), while \(\sqrt{V/n}\) illustrates the complexity of the considered model \(\overline{\mathcal{F}}\). 
Hence, a suitable model \(\overline{\mathcal{F}}\) should strike a balance between these two factors, namely, a model that is not excessively complex yet offers a good approximation of the underlying regression function. **Remark 4**.: In the favourable situation where the data \(X_{i}=(W_{i},Y_{i})\) are truly i.i.d. with \(Q_{i}^{\star}=Q_{f^{\star}}\), \(i\in\{1,\ldots,n\}\) for some \(f^{\star}\in\overline{\mathcal{F}}\), we can deduce from (7) that \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon} \sqrt{\frac{V}{n}}.\] In typical situations, the value of \(V\) aligns with the magnitude of parameters necessary to parametrize \(\overline{\mathcal{F}}\), which cannot be improved in general. As we shall observe in Section 4.2, the above risk bound will lead to an optimal rate of convergence in the minimax sense when the regression function \(f^{\star}\) is assumed to be a smooth function of regularity \(\alpha\). **Remark 5**.: The term \(\ell(\mathbf{Q}^{\star},\overline{\boldsymbol{\mathscr{Q}}})\) elucidates the robustness property of the resulting estimator concerning model misspecification. To illustrate, let us consider the general scenario where the data are only independent and the true joint distribution is given by \[\mathbf{P}^{\star}=\bigotimes_{i=1}^{n}P_{i}^{\star}=\bigotimes_{i=1}^{n}\left[ (1-\beta_{i})P_{i,\overline{f}}+\beta_{i}R_{i}\right],\qquad\sum_{i=1}^{n} \beta_{i}\leq\frac{n}{2}, \tag{8}\] with some \(\overline{f}\in\overline{\mathcal{F}}\), \(P_{i,\overline{f}}=Q_{\overline{f}}\cdot P_{W_{i}}\), \(R_{i}\) being an arbitrary distribution on \(\mathscr{X}=\mathscr{W}\times\mathscr{Y}\) and \(\beta_{i}\) taking values in \([0,1]\) for all \(i\in\{1,\ldots,n\}\). 
With the connection (2) between the pseudo distance \(\ell\) and \(\|\cdot\|_{TV}\), we can deduce from (7) that \[\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\|P_{i}^{\star}-P_{i,\widehat {f}}\|_{TV}\right] \leq C_{\epsilon}\left[\frac{1}{n}\sum_{i=1}^{n}\|P_{i}^{\star}-P_ {i,\overline{f}}\|_{TV}+\sqrt{\frac{V}{n}}\right] \tag{9}\] \[\leq C_{\epsilon}\left[\frac{1}{n}\sum_{i=1}^{n}\beta_{i}+\sqrt{ \frac{V}{n}}\right],\] where the second inequality comes from the fact that \(\|\cdot\|_{TV}\) is bounded by \(1\). The above result implies that as long as the quantity \((\sum_{i=1}^{n}\beta_{i})/n\) remains small compared to the term \(\sqrt{V/n}\), the performance of the resulting estimator will not deteriorate significantly in comparison to the ideal situation presented in Remark 4. The formulation (8) can be utilized to give a more detailed account of the stability of the \(\ell\)-type estimation procedure. More precisely, suppose the observations include several outliers, whose indices form a non-empty subset \(J\) of \(\{1,\ldots,n\}\): we take \(R_{i}=\delta_{a_{i}}\) for some point \(a_{i}\) when \(i\in J\), and \(\beta_{i}=\mathbbm{1}_{i\in J}\) for all \(i\in\{1,\ldots,n\}\). The bound (9) indicates that our estimation procedure remains stable as long as \(|J|/n\) remains small compared to \(\sqrt{V/n}\). This accounts for the robustness when outliers are present. Under another scenario, where the data are contaminated, \((W_{i},Y_{i})\) are i.i.d. and \(P_{W_{i}}=P_{W}\). A proportion \(\beta\in(0,1/2]\) of the \(n\) samples is drawn according to an arbitrary distribution \(R_{i}=R\) (where \(R\) is not equal to \(P_{i,\overline{f}}\)), while the remaining part follows the distribution \(P_{i,\overline{f}}\). 
In this case, as an immediate consequence of (9), the performance of our estimator remains stable as long as the contamination proportion \(\beta\) remains small compared to the value of \(\sqrt{V/n}\). ### Connection to \(\mathbb{L}_{1}\)-distance between the regression functions As we have seen in Section 3.2, we establish non-asymptotic inequalities for the upper deviations of a total variation type distance between the true conditional distributions and the estimated one based on \(\widehat{f}\). In the context of a regression setting where the data are truly i.i.d. and follow the common marginal distribution \(P_{W}\), the function \(f^{\star}\) exists, such that \(Q_{i}^{\star}=Q_{f^{\star}}\). It would be interesting to investigate the performance of the \(\ell\)-type estimator \(\widehat{f}(\mathbf{X})\) in relation to the regression function \(f^{\star}\), utilizing a suitable distance metric, as typically considered in the literature. Given two real-valued functions \(f\) and \(f^{\prime}\) on \(\mathscr{W}\), it turns out that \(\ell(Q_{f},Q_{f^{\prime}})\) can be related to the \(\mathbb{L}_{1}(P_{W})\)-distance between \(f\) and \(f^{\prime}\). We present the result as follows. **Lemma 2**.: _For any two measurable real-valued functions \(f,f^{\prime}\) on \(\mathscr{W}\), and any \(w\in\mathscr{W}\), we have_ \[\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{TV}=1-2\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{ 2\sigma}\right), \tag{10}\] _where the notation \(\Phi\) stands for the cumulative distribution function of the standard normal distribution. Consequently,_ \[0.78\min\left\{\frac{\|f-f^{\prime}\|_{1,P_{W}}}{\sqrt{2\pi}\sigma},1\right\} \leq\ell(Q_{f},Q_{f^{\prime}})\leq\min\left\{\frac{\|f-f^{\prime}\|_{1,P_{W}}}{ \sqrt{2\pi}\sigma},1\right\}\,. 
\tag{11}\] Proof.: For any two probabilities \(P\) and \(R\) on the measured space \((\mathscr{X},\mathcal{X})\), it is well known that the total variation distance can equivalently be written as \(\|P-R\|_{TV}=R(r>p)-P(r>p)\), where \(p\) and \(r\) stand for the respective densities of \(P\) and \(R\) with respect to some common dominating measure \(\mu\). Therefore, a fundamental calculation reveals that for any \(w\in\mathscr{W}\), \[\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{TV} =Q_{f^{\prime}(w)}\left(q_{f^{\prime}(w)}>q_{f(w)}\right)-Q_{f(w) }\left(q_{f^{\prime}(w)}>q_{f(w)}\right)\] \[=\left[1-\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{2\sigma}\right) \right]-\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{2\sigma}\right) \tag{12}\] \[=1-2\Phi\left(-\frac{|f^{\prime}(w)-f(w)|}{2\sigma}\right),\] which concludes the equality (10). We also note from (12) that \[\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{TV}=\mathbb{P}\left[|Z|\leq\frac{1}{2}\left( \frac{|f^{\prime}(w)-f(w)|}{\sigma}\right)\right],\] where \(Z\) is a standard real-valued Gaussian random variable. Recall that \[\ell(Q_{f},Q_{f^{\prime}})=\int_{\mathscr{W}}\|Q_{f(w)}-Q_{f^{\prime}(w)}\|_{ TV}dP_{W}(w). \tag{13}\] Based on (13), the conclusion of (11) follows by applying Lemma 1 in Baraud (2021) with \(d=1\) and replacing \(|m-m^{\prime}|\) with \((|f^{\prime}(w)-f(w)|)/\sigma\). The above result indicates that when the two functions \(f\) and \(f^{\prime}\) are sufficiently close to each other with respect to the \(\mathbb{L}_{1}(P_{W})\)-distance, the quantity \(\ell(Q_{f},Q_{f^{\prime}})\) is of order \(\|f-f^{\prime}\|_{1,P_{W}}/(\sqrt{2\pi}\sigma)\). Conversely, when \(f\) and \(f^{\prime}\) are far apart, the value of \(\ell(Q_{f},Q_{f^{\prime}})\) remains approximately of the order of \(1\). 
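Both the identity (10) and the bounds (11) are straightforward to verify numerically; a quick sketch (the integration grid and test values are arbitrary):

```python
import math
from statistics import NormalDist

def tv_closed_form(a, b, sigma):
    # ||N(a, s^2) - N(b, s^2)||_TV = 1 - 2 * Phi(-|a - b| / (2s)),  formula (10)
    return 1.0 - 2.0 * NormalDist().cdf(-abs(a - b) / (2.0 * sigma))

def tv_numeric(a, b, sigma, lo=-25.0, hi=25.0, n=100_000):
    # (1/2) * integral of |q_a - q_b|, approximated by the midpoint rule
    qa, qb = NormalDist(a, sigma).pdf, NormalDist(b, sigma).pdf
    h = (hi - lo) / n
    return 0.5 * h * sum(abs(qa(lo + (k + 0.5) * h) - qb(lo + (k + 0.5) * h))
                         for k in range(n))

# For the constant functions f = a and f' = b, the bounds (11) read
# 0.78 * min(|a-b| / (sqrt(2*pi)*sigma), 1) <= TV <= min(|a-b| / (sqrt(2*pi)*sigma), 1).
a, b, sigma = 0.0, 1.0, 1.0
tv = tv_closed_form(a, b, sigma)          # 1 - 2*Phi(-1/2) ~ 0.3829
lower = 0.78 * min(abs(a - b) / (math.sqrt(2 * math.pi) * sigma), 1.0)
upper = min(abs(a - b) / (math.sqrt(2 * math.pi) * sigma), 1.0)
```

The closed form matches the numerical integral, and the value sits inside the sandwich (11).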
Combining Lemma 2 with (7), we can deduce that \[\min\left\{\frac{\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]}{ \sqrt{2\pi}\sigma},1\right\}\leq C_{\epsilon}\left[\inf_{f\in\mathcal{F}}\ell (Q_{f^{\star}},Q_{f})+\sqrt{\frac{V}{n}}\right], \tag{14}\] where \(C_{\epsilon}>0\) is a numerical constant depending on \(\epsilon\) only. As we shall see later, in typical applications, if we can find a suitable model to approximate the regression function \(f^{\star}\), in the sense that the right-hand side of (14) is smaller than \(1\), then we obtain a risk bound for \(\widehat{f}(\mathbf{X})\) with respect to the \(\mathbb{L}_{1}\)-distance: \[\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]\leq C_{\epsilon, \sigma}\left[\inf_{f\in\mathcal{F}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V}{n} }\right],\] where \(C_{\epsilon,\sigma}\) is a numerical constant depending on \(\epsilon,\sigma\) only. ### Comparison with \(\rho\)-estimation As mentioned in Section 3.2, one notable feature of the \(\ell\)-type estimators is their robustness under misspecification. Interestingly, the \(\rho\)-estimators also exhibit robustness properties, but they are quantified using a Hellinger-type distance, rather than one based on the total variation distance. For a more comprehensive understanding of the \(\rho\)-estimation methodology, one can refer to Baraud and Birge (2018) and Baraud and Chen (2020), with the latter primarily focusing on the regression setting. It is worth noting that while there exists some connection between the Hellinger distance and the total variation distance, they are not equivalent in general. The main distinctions between these two types of estimators have been examined in Section 7.1 of Baraud (2021), which includes an illustration of regression under a fixed design. 
Leveraging the results we have established in Section 3.2 and 3.3, we are therefore able to delve deeper in this direction, especially under a random regression design setting. To illustrate simply, we assume the data are truly i.i.d. with \(P_{W_{i}}=P_{W}\) and \(Q_{i}^{\star}=Q^{\star}\), for all \(i\in\{1,\ldots,n\}\). We write \(P^{\star}=Q^{\star}\cdot P_{W}\) the true distribution of \((W,Y)\in\mathscr{W}\times\mathscr{Y}\). For some \(f\in\mathcal{F}\), provided the term \(\ell(Q^{\star},Q_{f})\) and \(1/n\) are both sufficiently small, employing Lemma 2, one can deduce from (5) that the \(\ell\)-type estimator \(\widehat{f}_{\ell}(\mathbf{X})\) satisfies \[\mathbb{E}\left[\|f-\widehat{f}_{\ell}\|_{1,P_{W}}\right]\leq C_{\sigma, \epsilon}\left[\|P^{\star}-P_{f}\|_{TV}+\sqrt{\frac{V}{n}}\right], \tag{15}\] where \(P_{f}=Q_{f}\cdot P_{W}\). From another point of view, we can deduce, through a slight modification of Theorem 1 in Baraud and Chen (2020), that for any \(f\in\mathcal{F}\), the \(\rho\)-estimator \(\widehat{f}_{\rho}(\mathbf{X})\) complies with the following \[\mathbb{E}\left[h^{2}(P_{f},P_{\widehat{f}_{\rho}})\right]\leq C\left[h^{2}(P ^{\star},P_{f})+\frac{V(1+\log n)}{n}\right],\] where \(C>0\) is some numerical constant and \(h\) stands for the Hellinger distance. Considering the fact that \[h^{2}(P_{f},P_{\widehat{f}_{\rho}}) =\int_{\mathscr{W}}1-\exp\left[-\frac{|f(w)-\widehat{f}_{\rho}(w )|^{2}}{8\sigma^{2}}\right]dP_{W}(w)\] \[\geq(1-e^{-1})\left(\frac{\|f-\widehat{f}_{\rho}\|_{2,P_{W}}^{2} }{8\sigma^{2}}\wedge 1\right),\] we can deduce, using Holder's inequality and a similar argument to that used in obtaining (15), that for some \(f\in\mathcal{F}\), given the term \(h^{2}(P^{\star},P_{f})\) and \(1/n\) are both sufficiently small \[\mathbb{E}\left[\|f-\widehat{f}_{\ell}\|_{1,P_{W}}\right]\leq C_{\sigma}\left[ h(P^{\star},P_{f})+\sqrt{\frac{V(1+\log n)}{n}}\right]. 
\tag{16}\] If we put the numerical constants \(C_{\sigma,\epsilon}\), \(C_{\sigma}\) aside, the main difference between the two risk bounds lies in the fact that they express the robustness of the two estimators through the approximation terms \(\|P^{\star}-P_{f}\|_{TV}\) and \(h(P^{\star},P_{f})\) respectively. With the connection that for any two probabilities \(P_{1},P_{2}\), \[\|P_{1}-P_{2}\|_{TV}\leq\sqrt{2}h(P_{1},P_{2}),\] we can conclude that the stability of the \(\ell\)-type estimators will not be significantly worse than that of the \(\rho\)-estimators. In fact, the \(\ell\)-type estimators can possess much more robustness than the \(\rho\)-estimators. To explain this in detail, consider the misspecified formulation: \[P^{\star}=(1-\beta)P_{f^{\star}}+\beta R,\quad\text{for some small $\beta\in(0,1)$},\] where \(f^{\star}\in\mathcal{F}\) and \(R\neq P_{f^{\star}}\) is an arbitrary distribution on \(\mathscr{X}=\mathscr{W}\times\mathscr{Y}\). On the one hand, we can calculate that \[\|P^{\star}-P_{f^{\star}}\|_{TV}=\beta\|P_{f^{\star}}-R\|_{TV}, \tag{17}\] which is of the order of magnitude \(\beta\). On the other hand, we have \[h(P^{\star},P_{f^{\star}})\leq\sqrt{1-\sqrt{1-\beta}}, \tag{18}\] which is at most of the order of magnitude \(\sqrt{\beta/2}\). Therefore, for small values of \(\beta\), the above computation indicates that the term \(h(P^{\star},P_{f^{\star}})\) is much larger than \(\|P^{\star}-P_{f^{\star}}\|_{TV}\). Combining (15) with (17), we deduce that the \(\ell\)-type estimators remain stable as long as \(\beta\) is small compared to \(1/\sqrt{n}\). Combining (16) with (18), we see that the performance of the \(\rho\)-estimators deteriorates as soon as \(\beta\) becomes large compared to \((\log n)/n\). This analysis implies that the \(\ell\)-type estimators possess more robustness than those obtained from \(\rho\)-estimation. ## 4. 
Applications of \(\ell\)-type Estimation Using Neural Networks In recent years, experimental findings have demonstrated the significant success of neural networks modeling in various applications. From a theoretical perspective, it has been observed that neural networks, especially deep ones (see, for example, Schmidt-Hieber (2020) and Suzuki and Nitanda (2021)), possess a natural advantage over classical methods when approximating functions with specific characteristics. In this section, we will discuss \(\ell\)-type estimation for models based on neural networks. The covariates \(W_{i}\) are assumed to be i.i.d. on \(\mathscr{W}=\left[0,1\right]^{d}\), following the common distribution \(P_{W}\), while \(Q_{i}^{\star}=Q^{\star}\) holds for all \(i\in\{1,\ldots,n\}\). ### ReLU feedforward neural networks We start with introducing some preliminaries of the ReLU feedforward neural networks. Recall the Rectifier Linear Unit (ReLU) activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), which is defined as \[\sigma(x)=\max\{0,x\}.\] For any vector \(\mathbf{x}=(x_{1},\ldots,x_{p})^{\top}\in\mathbb{R}^{p}\), where \(p\in\mathbb{N}^{*}\), the notation \(\sigma(\mathbf{x})\) represents the activation function applied component-wise, defined as follows: \[\sigma(\mathbf{x})=(\max\{0,x_{1}\},\ldots,\max\{0,x_{p}\})^{\top}.\] A fundamental and extensively employed type of feedforward neural networks in practice is the multi-layer perceptrons, where the neurons in consecutive layers are fully connected through linear transformation matrices. In our later discussion on applying the \(\ell\)-type estimation, we will focus on the multi-layer perceptrons with ReLU activation function. To begin, let's introduce the expression of the multi-layer perceptrons under consideration. 
For any vector \(\mathbf{p}=(p_{0},\ldots,p_{L+1})\in(\mathbb{N}^{*})^{L+2}\) with \(p_{0}=d\) and \(p_{L+1}=1\) and \(L\in\mathbb{N}^{*}\), we denote the multi-layer perceptron \(\overline{\mathcal{F}}_{(L,\mathbf{p})}\) as a collection of functions of the form: \[f:\mathbb{R}^{d}\to\mathbb{R},\quad\mathbf{w}\mapsto f(\mathbf{w})=M_{L}\circ\sigma \circ M_{L-1}\circ\cdots\circ\sigma\circ M_{0}(\mathbf{w}),\] where \[M_{l}(\mathbf{y})=A_{l}(\mathbf{y})+b_{l},\quad\text{for $l=0,\ldots,L$},\] \(A_{l}\) is a \(p_{l+1}\times p_{l}\) weight matrix and the shift vector \(b_{l}\) is of size \(p_{l+1}\) for any \(l\in\{0,\ldots,L\}\). In the first layer, the input data consists of the values of the predictor \(W\), whereas the last layer represents the output. With the expression given above, we say that the network \(\overline{\mathcal{F}}_{(L,\mathbf{p})}\) comprises \(L\) hidden layers and a total of \((L+2)\) layers. For \(l\in\{1,\ldots,L\}\), we refer to \(p_{l}\) as the width of the \(l\)-th hidden layer. The entries in these weight matrices and vectors typically vary in \(\mathbb{R}\) or a subinterval of \(\mathbb{R}\), which is what we refer to as parameters. In the latter scenario, we employ the notation \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\subset\overline{\mathcal{F}}_{(L,\mathbf{ p})}\), denoting the set of all functions with parameters ranging within the interval \([-K,K]\). Furthermore, we use the notation \(\mathcal{F}_{(L,\mathbf{p})}\) (or \(\mathcal{F}_{(L,\mathbf{p},K)}\)) for the multi-layer perceptron, which shares the same architecture as \(\overline{\mathcal{F}}_{(L,\mathbf{p})}\) (or \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\) respectively), but with the distinction that all the parameters take values in \(\mathbb{Q}\). In some of our application scenarios, it suffices to consider a multi-layer perceptron with a rectangular design, where \(p_{l}=p\) for all \(l\in\{1,\ldots,L\}\). 
In this case, we may use the simplified notation \(\mathcal{F}_{(L,p)}\) (or \(\mathcal{F}_{(L,p,K)}\)) to represent the class \(\mathcal{F}_{(L,\mathbf{p})}\) (or \(\mathcal{F}_{(L,\mathbf{p},K)}\) respectively) for \(\mathbf{p}=(d,p,\ldots,p,1)\). We discuss the implementation of the \(\ell\)-type estimation on ReLU neural networks \(\mathcal{F}_{(L,\mathbf{p},K)}\). To implement the procedure introduced in Section 3, we work on the countable subset \(\mathcal{F}_{(L,\mathbf{p},K)}\) of the model \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\). We can establish the following result. **Lemma 3**.: _For any \(L\in\mathbb{N}^{*}\), \(\mathbf{p}=(p_{0},\ldots,p_{L+1})\in(\mathbb{N}^{*})^{L+2}\) with \(p_{0}=d\), \(p_{L+1}=1\) and a finite positive constant \(K\), the class of functions \(\mathcal{F}_{(L,\mathbf{p},K)}\) is dense in \(\overline{\mathcal{F}}_{(L,\mathbf{p},K)}\) with respect to the supremum norm \(\|\cdot\|_{\infty}\)._ The proof of Lemma 3 is postponed to Section 6.3. Lemma 3 ensures that our estimation approach applied to the countable model \(\mathcal{F}_{(L,\boldsymbol{p},K)}\) does not compromise approximation power compared to \(\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\). The following proposition establishes VC-dimensional bounds for rectangular multi-layer perceptrons employing a ReLU activation function. This result can be derived from Proposition 5 in Chen (2022), which also aligns with those stated in Theorem 7 of Bartlett et al. (2019). **Proposition 1**.: _For any \(L\in\mathbb{N}^{*}\), \(p\in\mathbb{N}^{*}\), the class of functions \(\overline{\mathcal{F}}_{(L,p)}\) is a VC-subgraph on \(\mathscr{W}\) with dimension_ \[V(\overline{\mathcal{F}}_{(L,p)})\leq(L+1)\left(s+1\right)\log_{2}\left[2 \left(2e(L+1)\left(\frac{pL}{2}+1\right)\right)^{2}\right], \tag{19}\] _where \(s=p^{2}(L-1)+p(L+d+1)+1\)._ This result shows the connection between the VC-dimensional bounds and the depth and width of ReLU rectangular multi-layer perceptrons. 
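The objects in this subsection are concrete enough to compute. The sketch below evaluates a small rectangular ReLU perceptron and the right-hand side of (19); the example network and the sizes \(L=2\), \(p=3\), \(d=2\) are arbitrary choices for illustration:

```python
import math

def relu_mlp(weights, biases, w):
    """f = M_L o sigma o M_{L-1} o ... o sigma o M_0 with component-wise ReLU;
    weights[l] is the p_{l+1} x p_l matrix A_l and biases[l] the shift b_l."""
    x = list(w)
    for l, (A, b) in enumerate(zip(weights, biases)):
        x = [sum(aij * xj for aij, xj in zip(row, x)) + bi
             for row, bi in zip(A, b)]
        if l < len(weights) - 1:  # no activation after the output layer M_L
            x = [max(0.0, v) for v in x]
    return x[0]

def vc_bound(L, p, d):
    """Right-hand side of the VC-dimension bound (19) for the class F_(L,p)."""
    s = p * p * (L - 1) + p * (L + d + 1) + 1
    return (L + 1) * (s + 1) * math.log2(
        2 * (2 * math.e * (L + 1) * (p * L / 2 + 1)) ** 2)

# A one-hidden-layer network computing w -> max(0, w1 + w2).
weights = [[[1.0, 1.0]], [[1.0]]]
biases = [[0.0], [0.0]]
out = relu_mlp(weights, biases, (1.0, 2.0))   # max(0, 1 + 2) = 3
bound = vc_bound(L=2, p=3, d=2)               # a finite, explicit VC bound
```

As the formula shows, the bound grows roughly like \(L^{2}p^{2}\) up to logarithmic factors, reflecting the dependence on depth and width.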
Specifically, for any finite constant \(K>0\), as \(\overline{\mathcal{F}}_{(L,p,K)}\subset\overline{\mathcal{F}}_{(L,p)}\), the dimensional bounds (19) also apply to the class \(\overline{\mathcal{F}}_{(L,p,K)}\). We will use Proposition 1 along with other results to derive the risk bounds for the \(\ell\)-type estimators when applying our approach to ReLU feedforward neural networks. ### Approximating functions in Holder space In this section, we examine the performance of the \(\ell\)-type estimators implemented on the ReLU feedforward neural networks. We consider the regression setting, where the regression function \(f^{\star}\) exists, and we assume that it belongs to an \(\alpha\)-smoothness Holder class. Given \(t\in\mathbb{N}^{*}\) and \(\alpha\in\mathbb{R}^{*}_{+}\), we define \(\mathcal{H}^{\alpha}(D,B)\) an \(\alpha\)-Holder ball with radius \(B\) as the collection of functions \(f:D\subset\mathbb{R}^{t}\rightarrow\mathbb{R}\) such that \[\max_{\begin{subarray}{c}\beta=(\beta_{1},\ldots,\beta_{t})^{\top}\in\mathbb{ N}^{t}\\ \sum_{j=1}^{t}\beta_{j}\leq\lfloor\alpha\rfloor\end{subarray}}\|\partial^{ \boldsymbol{\beta}}f\|_{\infty}\leq B\quad\text{and}\quad\max_{\begin{subarray} {c}\beta\in\mathbb{N}^{t}\\ \sum_{j=1}^{t}\beta_{j}=\lfloor\alpha\rfloor\end{subarray}}\sup_{\begin{subarray} {c}\boldsymbol{x},\boldsymbol{y}\in D\\ x\neq\boldsymbol{y}\end{subarray}}\frac{\left|\partial^{\boldsymbol{\beta}}f( \boldsymbol{x})-\partial^{\boldsymbol{\beta}}f(\boldsymbol{y})\right|}{\| \boldsymbol{x}-\boldsymbol{y}\|_{2}^{\alpha-\lfloor\alpha\rfloor}}\leq B,\] where for any \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t})^{\top}\in\mathbb{N}^{t}\), \(\partial^{\boldsymbol{\beta}}=\partial^{\beta_{1}}\cdots\partial^{\beta_{t}}\). 
Based on the notation introduced above, in this section, we assume that \(Q^{\star}=Q_{f^{\star}}\), where \(f^{\star}\in\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\), with a specified smoothness index \(\alpha\in\mathbb{R}^{*}_{+}\) and a finite constant \(B>0\). For any \(\alpha\in\mathbb{R}^{*}_{+}\), the following result demonstrates the error introduced by various ReLU neural networks when approximating the class \(\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\), as deduced from Corollary 3.1 of Jiao et al. (2023). **Proposition 2**.: _Assume that \(f\in\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\) with \(\alpha\in\mathbb{R}^{*}_{+}\) and a finite constant \(B>0\). For any \(M,N\in\mathbb{N}^{*}\), there exists a function \(\overline{f}\) implemented by a ReLU neural network \(\overline{\mathcal{F}}_{(L,p)}\) with a width of_ \[p=38(\lfloor\alpha\rfloor+1)^{2}3^{d}d^{\lfloor\alpha\rfloor+1}N\lceil\log_{2}(8 N)\rceil\] _and a depth of_ \[L=21(\lfloor\alpha\rfloor+1)^{2}M\lceil\log_{2}(8M)\rceil+2d\] _such that_ \[\left|f(\boldsymbol{w})-\overline{f}(\boldsymbol{w})\right|\leq 19B(\lfloor \alpha\rfloor+1)^{2}d^{\lfloor\alpha\rfloor+\frac{\alpha\lor 1}{2}}(NM)^{-\frac{2 \alpha}{d}},\] _for all \(\boldsymbol{w}\in[0,1]^{d}\)._ In fact, several approximation results have been established regarding the Holder class of smoothness functions, for instance in Chen et al. (2019), Schmidt-Hieber (2020), and Nakada and Imaizumi (2020), among others. The reason why we consider using Proposition 2 is mainly due to two aspects. Firstly, unlike most existing results where the prefactor in the error bound depends exponentially on the dimension \(d\), the prefactor in this error bound depends only polynomially on the dimension \(d\). Secondly, it offers specific structures of the neural networks to be considered, thus making the result more informative. 
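For concreteness, the width, depth and sup-norm approximation error of Proposition 2 can be computed for given \((\alpha,d,N,M)\); recall that \(\lfloor\alpha\rfloor\) here denotes the largest integer strictly smaller than \(\alpha\). A small helper, with purely illustrative values:

```python
import math

def strict_floor(alpha):
    # the paper's notation: largest integer strictly smaller than alpha
    return math.ceil(alpha) - 1

def prop2_network(alpha, d, N, M, B=1.0):
    """Width p, depth L and sup-norm approximation error from Proposition 2."""
    fl = strict_floor(alpha)
    k2 = (fl + 1) ** 2
    p = 38 * k2 * 3 ** d * d ** (fl + 1) * N * math.ceil(math.log2(8 * N))
    L = 21 * k2 * M * math.ceil(math.log2(8 * M)) + 2 * d
    err = 19 * B * k2 * d ** (fl + max(alpha, 1.0) / 2) * (N * M) ** (-2 * alpha / d)
    return p, L, err

# Example: a Lipschitz target (alpha = 1) in dimension d = 2 with N = 1, M = 4.
p, L, err = prop2_network(alpha=1.0, d=2, N=1, M=4)
```

Note how the prefactor in `err` depends only polynomially on \(d\), while the width still carries the \(3^{d}\) factor.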
Building upon the results of Theorem 1, Propositions 1 and 2, and Lemma 3, we derive the risk bounds for the \(\ell\)-type estimators implemented by different networks as follows.

**Corollary 1**.: _For any \(N,M\in\mathbb{N}^{*}\), no matter what the distribution of \(W\) is, the \(\ell\)-type estimator \(\widehat{f}(\boldsymbol{X})\) taking values in the network class \(\mathcal{F}_{(L,p,K)}\) with_

\[p=38(\lfloor\alpha\rfloor+1)^{2}3^{d}d^{\lfloor\alpha\rfloor+1}N\lceil\log_{2}(8N)\rceil,\]

\[L=21(\lfloor\alpha\rfloor+1)^{2}M\lceil\log_{2}(8M)\rceil+2d\]

_and a sufficiently large \(K\), satisfies that for any \(f^{\star}\in\mathcal{H}^{\alpha}(\left[0,1\right]^{d},B)\) and \(n\geq V(\overline{\mathcal{F}}_{(L,p)})\),_

\[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon,\sigma,\alpha,d,B}\left[(NM)^{-2\alpha/d}+\frac{NM}{\sqrt{n}}\left(\log_{2}(2N)\log_{2}(2M)\right)^{3/2}\right], \tag{20}\]

_where \(C_{\epsilon,\sigma,\alpha,d,B}\) is a constant depending on \(\epsilon,\sigma,\alpha,d\) and \(B\) only._

_In particular, if we take \(N=1\) and \(M=\lceil n^{d/2(d+2\alpha)}\rceil\), combining Lemma 2 with (20) allows us to deduce that_

\[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon,\sigma,\alpha,d,B}n^{-\frac{\alpha}{d+2\alpha}}\left(\log n\right)^{3/2}. \tag{21}\]

_For \(n\) sufficiently large such that the right-hand side of (21) is smaller than 0.78, according to Lemma 2, (21) is equivalent to_

\[\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]\leq C_{\epsilon,\sigma,\alpha,d,B}n^{-\frac{\alpha}{d+2\alpha}}\left(\log n\right)^{3/2},\]

_which is the typical rate of convergence with respect to the \(\mathbb{L}_{1}(P_{W})\)-norm._

The proof of Corollary 1 is deferred to Section 6.4. Our remarks are provided below.
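Corollary 1's parameter choice \(N=1\), \(M=\lceil n^{d/2(d+2\alpha)}\rceil\) and the resulting rate in (21) are easy to evaluate numerically; a small sketch, where the helper name is my own and \(d/2(d+2\alpha)\) is read as \(d/\big(2(d+2\alpha)\big)\):

```python
import math

def corollary1_choice(n, alpha, d):
    """Depth parameter M and the rate n^{-alpha/(d+2*alpha)} (log n)^{3/2},
    ignoring the constant C, as in Corollary 1 above."""
    M = math.ceil(n ** (d / (2 * (d + 2 * alpha))))
    rate = n ** (-alpha / (d + 2 * alpha)) * math.log(n) ** 1.5
    return M, rate

# The depth parameter grows with n while the (constant-free) risk bound shrinks:
M1, r1 = corollary1_choice(10**4, 1.0, 2)
M2, r2 = corollary1_choice(10**6, 1.0, 2)
```

This balances the two terms in (20): the approximation part \((NM)^{-2\alpha/d}\) and the stochastic part \(NM/\sqrt{n}\) are then of the same order \(n^{-\alpha/(d+2\alpha)}\) up to logarithms.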
**Remark 6**.: As we will see later in Theorem 2, the convergence rate \(n^{-\alpha/(d+2\alpha)}\) is minimax optimal with respect to the distance \(\ell(\cdot,\cdot)\), at least when \(W\) is uniformly distributed on \(\left[0,1\right]^{d}\). Therefore, the risk bound (21) we obtained is optimal up to a logarithmic factor. As shown in Section 7 of Baraud (2021), \(\ell\)-estimators are not always optimal when addressing various estimation problems, which differs from \(\rho\)-estimators. However, by combining the upper bound (21) and the subsequent lower bound stated in Theorem 2, we demonstrate that implementing the \(\ell\)-type estimation procedure is optimal within our framework and offers more robustness compared to \(\rho\)-estimators.

**Remark 7**.: A noteworthy aspect of the presented result is that the stochastic error does not depend on the upper bound of the sup-norms of the functions within the class \(\overline{\mathcal{F}}_{(L,p,K)}\). This is not the case, for example, in the results established in Lemma 4 of Schmidt-Hieber (2020) and Theorem 4.2 of Jiao et al. (2023), both of which analyze the performance of the least squares estimator. As a consequence, the final risk bounds they established deteriorate as the model grows, due to the inclusion of such an upper bound in their stochastic error terms. From this perspective, our estimation method does not suffer from this drawback. Therefore, we can accommodate a sufficiently large value of \(K\) without compromising the risk bound for the resulting estimator.

## 5. Circumventing the curse of dimensionality

As we observed in Section 4, the minimax optimal rate over an \(\alpha\)-Holder class on \(\mathscr{W}=\left[0,1\right]^{d}\) is of order \(n^{-\alpha/(d+2\alpha)}\). This rate slows down significantly as the dimensionality \(d\) increases, a phenomenon known as the curse of dimensionality.
To overcome this issue, in this section, we introduce structural assumptions on \(f^{\star}\) and construct specific models using deep ReLU neural networks to implement our procedure. One natural structure of the regression function \(f^{\star}\) under which neural networks exhibit advantages is a composition of multiple functions, which was previously explored by Schmidt-Hieber (2020). More precisely, for any \(k\in\mathbb{N}^{*}\), \(\mathbf{d}=(d_{0},\ldots,d_{k})\in(\mathbb{N}^{*})^{k+1}\), \(\mathbf{t}=(t_{0},\ldots,t_{k})\in(\mathbb{N}^{*})^{k+1}\), \(\boldsymbol{\alpha}=(\alpha_{0},\ldots,\alpha_{k})\in(\mathbb{R}_{+}^{*})^{k+1}\) and a finite constant \(B\geq 0\), we denote by \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) the class of functions

\[\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)= \left\{f_{k}\circ\cdots\circ f_{0},\;f_{i}=(f_{ij})_{j}:[a_{i},b_{i}]^{d_{i}}\to[a_{i+1},b_{i+1}]^{d_{i+1}}\,,\right. \tag{22}\]

\[\left.f_{ij}\in\mathcal{H}^{\alpha_{i}}(\left[a_{i},b_{i}\right]^{t_{i}},B)\text{ and }(|a_{i}|\vee|b_{i}|)\leq B\right\},\]

where \(a_{0}=0\), \(b_{0}=1\), \(d_{0}=d\) and \(d_{k+1}=1\). In what follows, we assume the existence of an underlying regression function \(f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) such that \(Q^{\star}=Q_{f^{\star}}\) (or at least \(Q^{\star}\) is close to \(Q_{f^{\star}}\) with respect to \(\ell\)), where the values of \(k\), \(\mathbf{d}\), \(\mathbf{t}\) and \(\boldsymbol{\alpha}\) are considered to be known. We will then proceed to construct suitable networks for approximating the class \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) and implement the \(\ell\)-type estimation procedure to derive the final estimator \(Q_{\widehat{f}}\) of \(Q^{\star}\).
In such a composition structure, the approximation power of the neural network relies on the so-called effective smoothness indices, which are defined as

\[\alpha_{i}^{*}=\alpha_{i}\prod_{l=i+1}^{k}\left(\alpha_{l}\wedge 1\right),\quad\text{for}\ \ i\in\{0,\ldots,k-1\}\]

and \(\alpha_{k}^{*}=\alpha_{k}\). Based on Proposition 2 and the basic operation rules of neural networks, we establish the following result for approximating any function belonging to \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\).

**Proposition 3**.: _Assume that \(f\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) with \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) defined by (22). For all \(i\in\{0,\ldots,k\}\), denote_

\[p_{i}=114(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1}\]

_and_

\[L_{i}=21(\lfloor\alpha_{i}\rfloor+1)^{2}\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\lceil\log_{2}(8\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)\rceil+2t_{i}.\]

_There exists a function \(\overline{f}\) implemented by a ReLU network with a width of \(\overline{p}=\max_{i\in\{0,\ldots,k-1\}}d_{i+1}p_{i}\) and a depth of \(\overline{L}=k+\sum_{i=0}^{k}L_{i}\) such that_

\[\|f-\overline{f}\|_{\infty}\leq(2B)^{1+\sum_{i=1}^{k}\alpha_{i}}\left(\prod_{i=0}^{k}\sqrt{d_{i}}\right)\left[\sum_{i=0}^{k}C_{\alpha_{i},t_{i},B}^{\prod_{l=i+1}^{k}(\alpha_{l}\wedge 1)}(\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)^{-2\alpha_{i}^{*}/t_{i}}\right],\]

_where_

\[C_{\alpha_{i},t_{i},B}=19(2B)^{\alpha_{i}+1}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}.\]

The proof of Proposition 3 is postponed to Section 6.5. The presented approximation result is notable for offering a well-defined structure for neural networks to effectively implement diverse estimation approaches, as compared to the sparsity-based networks considered in Schmidt-Hieber (2020).
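The effective smoothness indices are simple to compute directly from \(\boldsymbol{\alpha}\); a minimal sketch, with the helper name my own:

```python
def effective_smoothness(alphas):
    """alpha_i^* = alpha_i * prod_{l=i+1..k} min(alpha_l, 1), with alpha_k^* = alpha_k."""
    k = len(alphas) - 1
    out = []
    for i, a in enumerate(alphas):
        prod = 1.0
        for l in range(i + 1, k + 1):
            prod *= min(alphas[l], 1.0)
        out.append(a * prod)
    return out

# For alpha = (2, 0.5, 3): the rough middle layer (alpha_1 = 0.5 < 1) caps the
# effective smoothness of the inner layer at alpha_0^* = 2 * 0.5 * 1 = 1.
stars = effective_smoothness([2.0, 0.5, 3.0])
```

This illustrates why only the layers composed *after* \(f_i\) with smoothness below \(1\) degrade \(\alpha_i^{*}\): smoother outer layers contribute a factor \(\alpha_l\wedge 1=1\).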
From this point of view, Proposition 3 is more informative. Building upon the results of Theorem 1, Propositions 1 and 3, and Lemma 3, we can derive the following risk bound for the \(\ell\)-type estimators.

**Corollary 2**.: _Assume that \(f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) with \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) defined by (22). For all \(i\in\{0,\ldots,k\}\), we set_

\[p_{i}=114(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1}\]

_and_

\[L_{i}=21(\lfloor\alpha_{i}\rfloor+1)^{2}\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\lceil\log_{2}(8\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)\rceil+2t_{i}.\]

_Whatever the distribution of \(W\), any \(\ell\)-type estimator \(\widehat{f}(\mathbf{X})\) implemented by a ReLU neural network \(\mathcal{F}_{(\overline{L},\overline{p},K)}\) with_

\[\overline{L}=k+\sum_{i=0}^{k}L_{i},\qquad\overline{p}=\max_{i=0,\ldots,k}d_{i+1}p_{i}\]

_and a sufficiently large \(K\), satisfies that for all \(n\geq V(\overline{\mathcal{F}}_{(\overline{L},\overline{p})})\),_

\[\mathbb{E}\left[\ell\left(Q_{f^{\star}},Q_{\widehat{f}}\right)\right]\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B}\left(\sum_{i=0}^{k}n^{-\frac{\alpha_{i}^{\star}}{t_{i}+2\alpha_{i}^{\star}}}\right)(\log n)^{3/2}, \tag{23}\]

_where \(C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B}\) is a numerical constant depending on \(\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B\) only._

We leave the proof of Corollary 2 to Section 6.6. Our comments are presented as follows.

**Remark 8**.: Denoting

\[\phi_{n}=\max_{i=0,\ldots,k}n^{-\alpha_{i}^{\star}/(2\alpha_{i}^{\star}+t_{i})},\]

the result (23) we have established indicates that, up to a logarithmic term, the \(\ell\)-type estimator \(\widehat{f}\) based on the class \(\mathcal{F}_{(\overline{L},\overline{p},K)}\) converges to the regression function \(f^{\star}\) at the rate \(\phi_{n}\).
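The rate \(\phi_{n}\) of Remark 8 only involves the effective input dimensions \(t_i\), not the ambient dimension; a minimal numerical sketch (helper name and example values are my own) comparing it with the unstructured rate \(n^{-\alpha/(d+2\alpha)}\):

```python
def phi_n(n, t, alpha_star):
    """phi_n = max_i n^{-alpha_i^* / (2*alpha_i^* + t_i)}, as in Remark 8."""
    return max(n ** (-a / (2 * a + ti)) for ti, a in zip(t, alpha_star))

# Composite structure with t_i = 1 and alpha_i^* = 1 gives n^{-1/3}, far faster
# than the unstructured rate n^{-alpha/(d+2*alpha)} in ambient dimension d = 10:
n = 10 ** 6
composite = phi_n(n, [1, 1], [1.0, 1.0])   # n^{-1/3}
unstructured = n ** (-1.0 / 12.0)          # alpha = 1, d = 10
```

With \(n=10^{6}\), the composite rate is \(10^{-2}\) while the unstructured one is only about \(0.32\), which is the sense in which the composition assumption circumvents the curse of dimensionality.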
Furthermore, for sufficiently large \(n\) such that the right-hand side of (23) is smaller than \(0.78\), upon applying Lemma 2, we obtain

\[\mathbb{E}\left[\|f^{\star}-\widehat{f}\|_{1,P_{W}}\right]\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B}\phi_{n}(\log n)^{3/2}.\]

This aligns with the risk bound established in Theorem 1 of Schmidt-Hieber (2020) for the least squares estimator with respect to the \(\mathbb{L}_{2}(P_{W})\)-norm.

**Remark 9**.: If the situation deviates from the ideal scenario where \(Q^{\star}=Q_{f^{\star}}\) and \(f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)\), a bias term \(\inf_{f\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)}\ell(Q^{\star},Q_{f})\) will be included in the final risk bound (23). However, as long as the bias term is not significantly larger than the quantity on the right-hand side of (23), the accuracy of the resulting estimator \(\widehat{f}\) remains of the same order of magnitude as in the ideal case. This follows from the robustness property of the \(\ell\)-type estimator, as explained in Section 3. The following lower bound demonstrates that the convergence rate \(\phi_{n}\) is minimax optimal, at least when \(W\) is uniformly distributed on \([0,1]^{d_{0}}\).

**Theorem 2**.: _Let \(P_{W}\) be the uniform distribution on \([0,1]^{d_{0}}\). For any \(k\in\mathbb{N}^{*}\), \(\mathbf{d}\in(\mathbb{N}^{*})^{k+1}\), \(\mathbf{t}\in(\mathbb{N}^{*})^{k+1}\) such that \(t_{j}\leq\min(d_{0},\ldots,d_{j-1})\) for all \(j\), any \(\mathbf{\alpha}\in(\mathbb{R}^{*}_{+})^{k+1}\) and \(B>0\) large enough, there exists a positive constant \(c\) such that_

\[\inf_{\widehat{f}}\sup_{f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\mathbf{\alpha},B)}\mathbb{E}\left[\ell\left(Q_{f^{\star}},Q_{\widehat{f}}\right)\right]\geq c\phi_{n},\]

_where the infimum runs over all possible estimators of \(f^{\star}\)._

The proof of Theorem 2 is deferred to Section 6.7.

## 6. Proofs
### Proof of Lemma 1

Proof.: Drawing on the formulation of \(t_{(f_{1},f_{2})}\) and the definition of \(\ell(\cdot,\cdot)\) as provided in (3), we can deduce that

\[\mathbb{E}_{P^{\star}}\left[t_{(f_{1},f_{2})}(W,Y)\right]\] \[= \int_{\mathscr{W}}\left[Q_{(w)}^{\star}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)-Q_{f_{1}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)\right]dP_{W}(w)\] \[\leq \int_{\mathscr{W}}\|Q_{(w)}^{\star}-Q_{f_{1}(w)}\|_{TV}dP_{W}(w)\] \[= \ell(Q^{\star},Q_{f_{1}}),\]

which gives the second inequality in (4). Furthermore, for any two probabilities \(P\) and \(R\) on the measurable space \((\mathscr{X},\mathcal{X})\), it is well known that the total variation distance can equivalently be written as

\[\|P-R\|_{TV}=R(r>p)-P(r>p),\]

where \(p\) and \(r\) stand for the respective densities of \(P\) and \(R\) with respect to some common dominating measure \(\mu\). Given this fact, we can calculate

\[\mathbb{E}_{P^{\star}}\left[t_{(f_{1},f_{2})}(W,Y)\right]\] \[= \int_{\mathscr{W}}\left[Q_{(w)}^{\star}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)-Q_{f_{2}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)\right]dP_{W}(w)\] \[+\int_{\mathscr{W}}\left[Q_{f_{2}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)-Q_{f_{1}(w)}\left(q_{f_{2}(w)}>q_{f_{1}(w)}\right)\right]dP_{W}(w)\] \[\geq \int_{\mathscr{W}}\left[\|Q_{f_{1}(w)}-Q_{f_{2}(w)}\|_{TV}-\|Q_{(w)}^{\star}-Q_{f_{2}(w)}\|_{TV}\right]dP_{W}(w)\] \[= \ell(Q_{f_{1}},Q_{f_{2}})-\ell(Q^{\star},Q_{f_{2}}),\]

which yields the first inequality in (4).

### Proof of Theorem 1

Prior to proving Theorem 1, we first establish several auxiliary results that will serve as its foundation.
**Proposition 4**.: _For any \(\overline{f}\in\mathcal{F}\), we define_ \[\mathscr{C}_{+}(\mathcal{F},\overline{f})=\left\{\left\{(w,y)\in\mathscr{W} \times\mathscr{Y}\ s.t.\ q_{f(w)}(y)>q_{\overline{f}(w)}(y)\right\},\ f\in \mathcal{F}\right\}\] _and_ \[\mathscr{C}_{-}(\mathcal{F},\overline{f})=\left\{\left\{(w,y)\in\mathscr{W} \times\mathscr{Y}\ s.t.\ q_{f(w)}(y)<q_{\overline{f}(w)}(y)\right\},\ f\in \mathcal{F}\right\}.\] _Under Assumption 1, the classes of subsets \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\) and \(\mathscr{C}_{-}(\mathcal{F},\overline{f})\) are both VC with dimensions not larger than \(9.41V\)._ Proof.: We first prove the result holds for the class \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\). For any \(f\in\mathcal{F}\), we define the function \(\widetilde{q}_{f}\) on \(\mathscr{W}\times\mathscr{Y}\) as \[\widetilde{q}_{f(w)}(y)=\exp\left[\frac{yf(w)}{\sigma^{2}}-\frac{f^{2}(w)}{2 \sigma^{2}}\right].\] Then the class of subsets \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\) can be rewritten as \[\mathscr{C}_{+}(\mathcal{F},\overline{f})=\left\{\left\{(w,y)\in\mathscr{W} \times\mathscr{Y}\ s.t.\ \widetilde{q}_{f(w)}(y)>\widetilde{q}_{\overline{f}(w)}(y)\right\},\ f\in \mathcal{F}\right\}.\] We introduce the result of Proposition 5 in Baraud and Chen (2020) as follows. **Proposition 5**.: _Let \(I\subset\mathbb{R}\) be a non-trivial interval and \(\mathcal{F}\) a class of functions from \(\mathscr{W}\) into \(I\). 
_If \(\mathcal{F}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\), the class of functions_

\[\left\{h_{f}:(w,y)\mapsto e^{S(y)f(w)-A(f(w))},\ f\in\mathcal{F}\right\}\]

_is VC-subgraph on \(\mathscr{W}\times\mathscr{Y}\) with dimension not larger than \(9.41V\), where \(S\) is a real-valued measurable function on \(\mathscr{Y}\) and \(A\) is convex and continuous on \(I\)._

Note that the function \(\widetilde{q}_{f(w)}(y)\) takes the particular form described in Proposition 5 with \(S(y)=y/\sigma^{2}\), for all \(y\in\mathbb{R}\), and \(A(u)=u^{2}/(2\sigma^{2})\), for all \(u\in\mathbb{R}\). Therefore, under Assumption 1, the class of functions \(\{\widetilde{q}_{f},\ f\in\mathcal{F}\}\) on \(\mathscr{W}\times\mathscr{Y}\) is VC-subgraph with dimension not larger than \(9.41V\). Moreover, since for any given function \(\overline{f}\in\mathcal{F}\), \(\widetilde{q}_{\overline{f}}\) is a fixed function taking its values in \(\mathbb{R}\), applying Lemma 2.6.18 (v) of van der Vaart and Wellner (1996) (see also Proposition 42 (i) in Baraud et al. (2017)), we obtain that the class of functions \(\left\{\widetilde{q}_{f}-\widetilde{q}_{\overline{f}},\ f\in\mathcal{F}\right\}\) on \(\mathscr{W}\times\mathscr{Y}\) is VC-subgraph with dimension not larger than \(9.41V\). According to Proposition 2.1 of Baraud (2016), \(\left\{\widetilde{q}_{f}-\widetilde{q}_{\overline{f}},\ f\in\mathcal{F}\right\}\) is weak VC-major with dimension not larger than \(9.41V\), which implies that the class of subsets

\[\left\{\left\{(w,y)\in\mathscr{W}\times\mathscr{Y}\ s.t.\ \widetilde{q}_{f(w)}(y)-\widetilde{q}_{\overline{f}(w)}(y)>0\right\},\ f\in\mathcal{F}\right\}\]

is a VC-class of subsets of \(\mathscr{W}\times\mathscr{Y}\) with dimension not larger than \(9.41V\). Hence, the conclusion holds for \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\). Now we show that the conclusion also holds for the class of subsets \(\mathscr{C}_{-}(\mathcal{F},\overline{f})\).
As we have seen, under Assumption 1, \(\{\widetilde{q}_{f},\ f\in\mathcal{F}\}\) on \(\mathscr{W}\times\mathscr{Y}\) is VC-subgraph with dimension not larger than \(9.41V\). By applying Proposition 42 (iii) of Baraud et al. (2017), we can establish that \(\{-\widetilde{q}_{f},\ f\in\mathcal{F}\}\) on \(\mathscr{W}\times\mathscr{Y}\) is a VC-subgraph with dimension not exceeding \(9.41V\). As a result of Lemma 2.6.18 (v) of van der Vaart and Wellner (1996), this property also holds for the class \(\{\widetilde{q}_{\overline{f}}-\widetilde{q}_{f},\ f\in\mathcal{F}\}\) for any fixed \(\overline{f}\in\mathcal{F}\). Finally, we can conclude using a similar argument as we did for \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\). **Proposition 6**.: _For any \(\overline{f}\in\mathcal{F}\), we define_ \[\mathscr{C}_{=}(\mathcal{F},\overline{f})=\left\{\{w\in\mathscr{W}\ s.t.\ f(w)= \overline{f}(w)\},\ f\in\mathcal{F}\right\}.\] _Under Assumption 1, the class of subsets \(\mathscr{C}_{=}(\mathcal{F},\overline{f})\) is a VC-class of sets on \(\mathscr{W}\) with dimension not larger than \(9.41V\)._ Proof.: We set \[\mathscr{C}_{\geq}(\mathcal{F},\overline{f})=\left\{\{w\in\mathscr{W}\ s.t.\ f(w)- \overline{f}(w)\geq 0\},\ f\in\mathcal{F}\right\}\] and \[\mathscr{C}_{\leq}(\mathcal{F},\overline{f})=\left\{\{w\in\mathscr{W}\ s.t.\ f(w)- \overline{f}(w)\leq 0\},\ f\in\mathcal{F}\right\}.\] Since \(\mathcal{F}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\) and \(\overline{f}\) is a fixed function, \(\{f-\overline{f},\ f\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\) as a consequence of applying Proposition 42 (i) in Baraud et al. (2017). 
According to Proposition 2.1 of Baraud (2016), \(\{f-\overline{f},\ f\in\mathcal{F}\}\) is weak VC-major with dimension not larger than \(V\), which implies that the class of subsets

\[\left\{\left\{w\in\mathscr{W}\ s.t.\ f(w)>\overline{f}(w)\right\},\ f\in\mathcal{F}\right\}\]

is a VC-class of subsets of \(\mathscr{W}\) with dimension not larger than \(V\). Then Lemma 2.6.17 (i) of van der Vaart and Wellner (1996) implies that \(\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\) is a VC-class of subsets of \(\mathscr{W}\) with dimension not larger than \(V\). Following a similar argument, we can show that the same conclusion also holds for the class \(\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\). Writing

\[\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\bigwedge\mathscr{C}_{\leq}(\mathcal{F},\overline{f})=\left\{C_{\geq}\cap C_{\leq},\ C_{\geq}\in\mathscr{C}_{\geq}(\mathcal{F},\overline{f}),\ C_{\leq}\in\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\right\},\]

we can deduce that \(\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\bigwedge\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\) is a VC-class of subsets of \(\mathscr{W}\) with dimension not larger than \(9.41V\) according to Theorem 1.1 of van der Vaart and Wellner (2009). It is easy to note that \(\mathscr{C}_{=}(\mathcal{F},\overline{f})\subset\mathscr{C}_{\geq}(\mathcal{F},\overline{f})\bigwedge\mathscr{C}_{\leq}(\mathcal{F},\overline{f})\), which completes the proof.
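The intersection step above (a VC class built from pairwise intersections of two VC classes still has finite dimension) can be seen on a toy example; the classes below, half-lines on \(\mathbb{R}\) and their pairwise intersections (intervals), are my own illustration, not the classes of the proof:

```python
def shatters(points, concepts):
    """True if the family of sets (membership predicates) realizes all labelings of `points`."""
    labelings = {tuple(c(p) for p in points) for c in concepts}
    return len(labelings) == 2 ** len(points)

# Half-lines {w : w >= a}: VC dimension 1 on the reals.
half_lines = [lambda w, a=a: w >= a for a in [i / 10 for i in range(-5, 26)]]

# Pairwise intersections of an upper and a lower half-line, i.e. intervals
# {w : a <= w <= b}: still a VC class, with dimension 2.
intervals = [lambda w, a=a, b=b: a <= w <= b
             for a in [i / 10 for i in range(0, 21)]
             for b in [i / 10 for i in range(0, 21)] if a <= b]
```

Half-lines shatter one point but never two (the labeling "left in, right out" is impossible), and intervals shatter two points but never three (the labeling "in, out, in" is impossible), matching the bounded growth guaranteed by Theorem 1.1 of van der Vaart and Wellner (2009).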
**Lemma 4**.: _Under Assumption 1, whatever the conditional distributions \(\mathbf{Q}^{\star}=(Q_{1}^{\star},\ldots,Q_{n}^{\star})\) of the \(Y_{i}\) given \(W_{i}\) and the distributions of \(W_{i}\), any \(\ell\)-type estimator \(\widehat{f}\) based on the set \(\mathcal{F}\) satisfies that for any \(\overline{f}\in\mathcal{F}\) and any \(\xi>0\), with a probability at least \(1-e^{-\xi}\),_ \[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q}^{ \star},\mathbf{Q}_{\overline{f}})+\frac{2\boldsymbol{\vartheta}(\overline{f}) }{n}+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}, \tag{24}\] _where_ \[\boldsymbol{\vartheta}(\overline{f})= \mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\mathbf{T}_{ l}(\boldsymbol{X},\overline{f},f^{\prime})-\mathbb{E}\left[\mathbf{T}_{l}( \boldsymbol{X},\overline{f},f^{\prime})\right]\right]\right]\] \[\vee\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[ \mathbb{E}\left[\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right] -\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right]\right].\] Proof.: The proof of Lemma 4 builds upon the idea presented in the proof of Theorem 1 in Baraud et al. (2022), but with certain modifications to adapt it to the regression setting. 
For any \(f_{1},f_{2}\in\mathcal{F}\), define \[\mathbf{Z}_{+}(\mathbf{X},f_{1}) =\sup_{f_{2}\in\mathcal{F}}\left[\mathbf{T}_{l}(\mathbf{X},f_{1},f_{2} )-\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},f_{1},f_{2})\right]\right]\] \[\mathbf{Z}_{-}(\mathbf{X},f_{1}) =\sup_{f_{2}\in\mathcal{F}}\left[\mathbb{E}\left[\mathbf{T}_{l}( \mathbf{X},f_{2},f_{1})\right]-\mathbf{T}_{l}(\mathbf{X},f_{2},f_{1})\right]\] and set \[\mathbf{Z}(\mathbf{X},f_{1})=\mathbf{Z}_{+}(\mathbf{X},f_{1})\vee\mathbf{Z}_{-}(\mathbf{X},f_{1}).\] As per Lemma 1, for any \(f,\overline{f}\in\mathcal{F}\), it holds that \[n\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{f}) \leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbb{ E}\left[\mathbf{T}_{l}(\mathbf{X},f,\overline{f})\right]\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbb{ E}\left[\mathbf{T}_{l}(\mathbf{X},f,\overline{f})\right]-\mathbf{T}_{l}(\mathbf{X},f, \overline{f})+\mathbf{T}_{l}(\mathbf{X},f,\overline{f})\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},f,\overline{f}) \tag{25}\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},f).\] By utilizing (25), substituting \(f\) with \(\widehat{f}(\mathbf{X})\in\mathscr{E}(\mathbf{X},\epsilon)\), and employing the definition of \(\widehat{f}(\mathbf{X})\), we can derive that \[n\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}}) \leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},\widehat{f}) \tag{26}\] \[\leq n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\mathbf{ Z}(\mathbf{X},\overline{f})+\mathbf{T}_{l}(\mathbf{X},\overline{f})+\epsilon.\] Moreover, we can compute that \[\mathbf{T}_{l}(\mathbf{X},\overline{f}) =\sup_{f\in\mathcal{F}}\mathbf{T}_{l}(\mathbf{X},\overline{f},f) \tag{27}\] 
\[\leq\sup_{f\in\mathcal{F}}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)-\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)\right]\right]+\sup_{f\in\mathcal{F}}\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)\right]\] \[\leq\mathbf{Z}(\mathbf{X},\overline{f})+n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}}),\]

where the second inequality is obtained by applying Lemma 1. Combining (26) and (27), we obtain that for any \(\overline{f}\in\mathcal{F}\),

\[n\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\mathbf{Z}(\mathbf{X},\overline{f})+2n\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\epsilon. \tag{28}\]

In what follows, we study the term \(\mathbf{Z}(\mathbf{X},\overline{f})\) to gain further insight into the risk bound for the estimator \(\widehat{f}\). It is worth noting that for any \(\overline{f},f\in\mathcal{F}\) and \((w,y),(w^{\prime},y^{\prime})\in\mathscr{W}\times\mathscr{Y}\), the following inequality holds:

\[\big{|}t_{(\overline{f},f)}(w,y)-t_{(\overline{f},f)}(w^{\prime},y^{\prime})\big{|}\leq 2.\]

Writing \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathscr{X}^{n}\) and \(\mathbf{x}^{\prime}_{(i)}=(x_{1},\ldots,x^{\prime}_{i},\ldots,x_{n})\in\mathscr{X}^{n}\), as an immediate consequence, we can derive that

\[\frac{1}{2}\big{|}\mathbf{Z}_{+}(\mathbf{x},\overline{f})-\mathbf{Z}_{+}(\mathbf{x}^{\prime}_{(i)},\overline{f})\big{|}\leq 1.\]

By following a similar approach as in the proof of Lemma 2 in Baraud (2021), with the term \(\xi\) replaced by \(\xi+\log 2\), one can conclude that with a probability of at least \(1-(1/2)e^{-\xi}\),

\[\mathbf{Z}_{+}(\mathbf{X},\overline{f}) \leq\mathbb{E}\left[\mathbf{Z}_{+}(\mathbf{X},\overline{f})\right]+\sqrt{2n(\xi+\log 2)}\] \[=\mathbb{E}\left[\sup_{f\in\mathcal{F}}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)-\mathbb{E}\left[\mathbf{T}_{l}(\mathbf{X},\overline{f},f)\right]\right]\right]+\sqrt{2n(\xi+\log 2)} \tag{29}\]
\[\leq\mathbf{\vartheta}(\overline{f})+\sqrt{2n(\xi+\log 2)}.\] A similar argument gives that with a probability at least \(1-(1/2)e^{-\xi}\), \[\mathbf{Z}_{-}(\mathbf{X},\overline{f})\leq\mathbf{\vartheta}(\overline{f})+\sqrt{2n (\xi+\log 2)}. \tag{30}\] By combining (29) and (30), we can derive that with a probability at least \(1-e^{-\xi}\), \[\mathbf{Z}(\mathbf{X},\overline{f})=\mathbf{Z}_{+}(\mathbf{X},\overline{f})\lor \mathbf{Z}_{-}(\mathbf{X},\overline{f})\leq\mathbf{\vartheta}(\overline{f})+\sqrt{2n (\xi+\log 2)}. \tag{31}\] Finally, plugging (31) into (28) gives the upper bound \[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q }^{*},\mathbf{Q}_{\overline{f}})+\frac{2\mathbf{\vartheta}(\overline{f})}{n}+ \sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}. \tag{32}\] **Proposition 7**.: _Let \(f_{1}\) and \(f_{2}\) be two functions belonging to \(\mathcal{F}\). For all \(w\in\mathscr{W}\), the following equality holds_ \[Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)})=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{ 2\sigma}\right)-\frac{1}{2}\mathds{1}_{f_{1}(w)=f_{2}(w)},\] _where \(\Phi\) stands for the cumulative distribution function of the standard normal distribution._ Proof.: For all \(w\in\mathscr{W}\) satisfying \(f_{1}(w)=f_{2}(w)\), it is easy to see that \(Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)})=0\). 
The equality naturally holds since \[\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right)-\frac{1}{2}\mathds{1}_ {f_{1}(w)=f_{2}(w)}=\Phi(0)-\frac{1}{2}=0.\] For all \(w\in\mathscr{W}\) satisfying \(f_{1}(w)>f_{2}(w)\), \[Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)}) =\int_{-\infty}^{[f_{1}(w)+f_{2}(w)]/2}q_{f_{1}(w)}(y)dy\] \[=\int_{-\infty}^{[f_{2}(w)-f_{1}(w)]/2\sigma}\frac{1}{\sqrt{2\pi }}e^{-\frac{t^{2}}{2}}dt\] \[=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right).\] For all \(w\in\mathscr{W}\) satisfying \(f_{1}(w)<f_{2}(w)\), \[Q_{f_{1}(w)}(q_{f_{2}(w)}>q_{f_{1}(w)}) =\int_{[f_{1}(w)+f_{2}(w)]/2}^{+\infty}q_{f_{1}(w)}(y)dy\] \[=\int_{[f_{2}(w)-f_{1}(w)]/2\sigma}^{+\infty}\frac{1}{\sqrt{2 \pi}}e^{-\frac{t^{2}}{2}}dt\] \[=1-\Phi\left(\frac{f_{2}(w)-f_{1}(w)}{2\sigma}\right)\] \[=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right).\] Therefore, we can conclude the equality. The following result comes from the Proposition 3.1 in Baraud (2016), and we shall repeatedly use it in our proof. **Lemma 5**.: _Let \(X_{1},\ldots,X_{n}\) be independent random variables with values in \((E,\mathcal{E})\) and \(\mathcal{C}\) a \(VC\)-class of subsets of \(E\) with \(VC\)-dimension not larger than \(V\geq 1\) that satisfies for \(\sigma\in(0,1]\), \(\sum_{i=1}^{n}\mathbb{P}(X_{i}\in C)\leq n\sigma^{2}\) for all \(C\in\mathcal{C}\). Then,_ \[\mathbb{E}\left[\sup_{C\in\mathcal{C}}\Big{|}\sum_{i=1}^{n}\left(\mathds{1}_{C }(X_{i})-\mathbb{P}(X_{i}\in C)\right)\Big{|}\right]\leq 10(\sigma\lor a) \sqrt{nV\left[5+\log\left(\frac{1}{\sigma\lor a}\right)\right]}\] _where_ \[a=\left[32\sqrt{\frac{(V\wedge n)}{n}\log\left(\frac{2en}{V\wedge n}\right)} \right]\wedge 1.\] To prove Theorem 1, we also need the following result, which can be obtained by making a modification to the proof of Theorem 2 in Baraud and Chen (2020). 
**Lemma 6**.: _Let \(W_{1},\ldots,W_{n}\) be \(n\) independent random variables with values in \((\mathscr{W},\mathcal{W})\) and \(\mathcal{F}\) an at most countable VC-subgraph class of functions with values in \([0,1]\) and VC-dimension not larger than \(V\geq 1\). If_

\[Z(\mathcal{F})=\sup_{f\in\mathcal{F}}\left|\sum_{i=1}^{n}(f(W_{i})-\mathbb{E}\left[f(W_{i})\right])\right|\ \ \text{and}\ \ \sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[f^{2}(W_{i})\right]\leq\sigma^{2}\leq 1,\]

_then_

\[\mathbb{E}\left[Z(\mathcal{F})\right]\leq 4.61\sqrt{nV\sigma^{2}\mathcal{L}(\sigma)}+85V\mathcal{L}(\sigma),\]

_with \(\mathcal{L}(\sigma)=9.11+\log(1/\sigma^{2})\)._

Proof of Theorem 1.: Now we will proceed to prove Theorem 1. Utilizing the result from Lemma 4, we only need to establish an upper bound for the term \(\boldsymbol{\vartheta}(\overline{f})\). Let us express \(\boldsymbol{\vartheta}(\overline{f})\) as \(\boldsymbol{\vartheta}(\overline{f})=\boldsymbol{\vartheta}_{1}(\overline{f})\vee\boldsymbol{\vartheta}_{2}(\overline{f})\), where

\[\boldsymbol{\vartheta}_{1}(\overline{f})=\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\mathbf{T}_{l}(\boldsymbol{X},\overline{f},f^{\prime})-\mathbb{E}\left[\mathbf{T}_{l}(\boldsymbol{X},\overline{f},f^{\prime})\right]\right]\right]\]

and

\[\boldsymbol{\vartheta}_{2}(\overline{f})=\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\mathbb{E}\left[\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right]-\mathbf{T}_{l}(\boldsymbol{X},f^{\prime},\overline{f})\right]\right].\]

In what follows, we will derive an upper bound for the term \(\boldsymbol{\vartheta}_{1}(\overline{f})\). For any \(f_{1},f_{2}\in\mathcal{F}\), define

\[g_{(f_{1},f_{2})}(w,y)=\mathds{1}_{q_{f_{2}(w)}(y)>q_{f_{1}(w)}(y)},\quad\text{for all }(w,y)\in\mathscr{W}\times\mathscr{Y}.\]

Let \(\Phi\) be the cumulative distribution function of the standard normal distribution.
For any \(f_{1},f_{2}\in\mathcal{F}\), define

\[h_{(f_{1},f_{2})}(w)=\Phi\left(-\frac{|f_{1}(w)-f_{2}(w)|}{2\sigma}\right),\quad\text{for all }w\in\mathscr{W}\]

and

\[k_{(f_{1},f_{2})}(w)=\frac{1}{2}\mathds{1}_{f_{1}(w)=f_{2}(w)},\quad\text{for all }w\in\mathscr{W}.\]

Given any \(f_{1},f_{2}\in\mathcal{F}\), according to Proposition 7, we have that for all \((w,y)\in\mathscr{W}\times\mathscr{Y}\),

\[t_{(f_{1},f_{2})}(w,y) =g_{(f_{1},f_{2})}(w,y)-\left[h_{(f_{1},f_{2})}(w)-k_{(f_{1},f_{2})}(w)\right] \tag{33}\] \[=g_{(f_{1},f_{2})}(w,y)-h_{(f_{1},f_{2})}(w)+k_{(f_{1},f_{2})}(w).\]

By the definition of \(\mathbf{T}_{l}\) and the equality (33), we deduce that

\[\boldsymbol{\vartheta}_{1}(\overline{f}) =\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left[\sum_{i=1}^{n}\left(t_{(\overline{f},f^{\prime})}(W_{i},Y_{i})-\mathbb{E}\left[t_{(\overline{f},f^{\prime})}(W_{i},Y_{i})\right]\right)\right]\right]\] \[\leq\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(g_{(\overline{f},f^{\prime})}(W_{i},Y_{i})-\mathbb{E}\left[g_{(\overline{f},f^{\prime})}(W_{i},Y_{i})\right]\right)\right|\right]\] \[\quad+\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(h_{(\overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[h_{(\overline{f},f^{\prime})}(W_{i})\right]\right)\right|\right]\] \[\quad+\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(k_{(\overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[k_{(\overline{f},f^{\prime})}(W_{i})\right]\right)\right|\right].\]

As shown in Proposition 4, under Assumption 1 the class of subsets \(\mathscr{C}_{+}(\mathcal{F},\overline{f})\) is VC with dimension not larger than \(9.41V\).
Hence, applying Lemma 5 with \(\sigma=1\), we can obtain that \[\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(g_{( \overline{f},f^{\prime})}(W_{i},Y_{i})-\mathbb{E}\left[g_{(\overline{f},f^{ \prime})}(W_{i},Y_{i})\right]\right)\right|\right]\leq 68.6\sqrt{nV}. \tag{34}\] According to Proposition 6, under Assumption 1, the class of subsets \(\mathscr{C}_{=}(\mathcal{F},\overline{f})\) is VC on \(\mathscr{W}\) with dimension not larger than \(9.41V\). Applying Lemma 5 again, we derive that \[\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(k_{( \overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[k_{(\overline{f},f^{\prime})} (W_{i})\right]\right)\right|\right]\leq 34.3\sqrt{nV}. \tag{35}\] Moreover, under Assumption 1, the class of functions \(\{f^{\prime}-\overline{f},\ f^{\prime}\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(V\). Given the value of \(\sigma>0\), since the function \(\psi(z)=-|z|/2\sigma\), for all \(z\in\mathbb{R}\) is unimodal, the class \(\{\psi\circ(f^{\prime}-\overline{f}),\ f^{\prime}\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(9.41V\), as stated in Proposition 42 (vi) of Baraud et al. (2017). Then according to Proposition 42 (ii) of Baraud et al. (2017), \(\{h_{(\overline{f},f^{\prime})},\ f^{\prime}\in\mathcal{F}\}=\{\Phi\circ\left[ \psi\circ(f^{\prime}-\overline{f})\right],\ f^{\prime}\in\mathcal{F}\}\) is VC-subgraph on \(\mathscr{W}\) with dimension not larger than \(9.41V\). 
It is easy to note that for any \(f^{\prime}\in\mathcal{F}\), \[\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[h_{(\overline{f},f^{\prime})}^{2}(W_{i})\right]\leq 1.\] Applying Lemma 6 to the class \(\{h_{(\overline{f},f^{\prime})},\ f^{\prime}\in\mathcal{F}\}\) gives \[\mathbb{E}\left[\sup_{f^{\prime}\in\mathcal{F}}\left|\sum_{i=1}^{n}\left(h_{(\overline{f},f^{\prime})}(W_{i})-\mathbb{E}\left[h_{(\overline{f},f^{\prime})}(W_{i})\right]\right)\right|\right]\leq 42.7\sqrt{nV}+7286.7V. \tag{36}\] Combining (34), (35) and (36), we can conclude that for any \(\overline{f}\in\mathcal{F}\), \[\boldsymbol{\vartheta}_{1}(\overline{f})\leq 145.6\sqrt{nV}+7286.7V. \tag{37}\] By following a similar line of proof, one can also derive that \[\boldsymbol{\vartheta}_{2}(\overline{f})\leq 145.6\sqrt{nV}+7286.7V. \tag{38}\] Therefore, (37) and (38) together imply that for any \(\overline{f}\in\mathcal{F}\), \[\boldsymbol{\vartheta}(\overline{f})=\boldsymbol{\vartheta}_{1}(\overline{f})\vee\boldsymbol{\vartheta}_{2}(\overline{f})\leq 145.6\sqrt{nV}+7286.7V. \tag{39}\] By substituting the bound (39) into equation (24), we infer that for any \(\overline{f}\in\mathcal{F}\) and any \(\xi>0\), with probability at least \(1-e^{-\xi}\), \[\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}})\leq 2\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n}+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}, \tag{40}\] which establishes inequality (5). 
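As a quick arithmetic check, the constant \(145.6\) in (37) is simply the sum of the three constants in (34)–(36), and the constants in (40) double those of (39). A few lines of Python confirm the totals:

```python
# constants from the three maximal-inequality bounds (34), (35), (36)
g_term, k_term, h_sqrt, h_lin = 68.6, 34.3, 42.7, 7286.7

theta_sqrt = g_term + k_term + h_sqrt   # coefficient of sqrt(nV) in (37)-(39)
assert abs(theta_sqrt - 145.6) < 1e-9

# (40) doubles the coefficients of (39)
assert abs(2 * theta_sqrt - 291.2) < 1e-9
assert abs(2 * h_lin - 14573.4) < 1e-9
```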
Using the triangle inequality, \[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\leq\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\overline{f}})+\ell(\mathbf{Q}_{\overline{f}},\mathbf{Q}_{\widehat{f}}),\] we derive that any \(\ell\)-type estimator \(\widehat{f}\) on the set \(\mathcal{F}\) satisfies, for all \(\xi>0\), with probability at least \(1-e^{-\xi}\), \[\ell(\mathbf{Q}^{\star},\mathbf{Q}_{\widehat{f}})\leq 3\ell(\mathbf{Q}^{\star},\boldsymbol{\mathscr{Q}})+291.2\sqrt{\frac{V}{n}}+14573.4\frac{V}{n}+\sqrt{\frac{8(\xi+\log 2)}{n}}+\frac{\epsilon}{n}.\]

### Proof of Lemma 3

Proof.: Lemma 3 can be proven using a similar argument as in the proof of Lemma 11 in Chen (2022), where the main idea is inspired by the proof of Lemma 5 of Schmidt-Hieber (2020). We only need to show that for any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\), there exists a sequence of functions \(f_{i}\in\mathcal{F}_{(L,\boldsymbol{p},K)}\), \(i\in\mathbb{N}^{*}\), such that \[\lim_{i\to+\infty}\|f-f_{i}\|_{\infty}=0.\] For any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\), recall that it can be written as \[f(\boldsymbol{w})=M_{L}\circ\sigma\circ M_{L-1}\circ\cdots\circ\sigma\circ M_{0}(\boldsymbol{w})\quad\text{for any }\boldsymbol{w}\in\left[0,1\right]^{d},\] where \[M_{l}(\boldsymbol{y})=A_{l}(\boldsymbol{y})+b_{l},\quad\text{for }l=0,\ldots,L,\] \(A_{l}\) is a \(p_{l+1}\times p_{l}\) weight matrix and the shift vector \(b_{l}\) is of size \(p_{l+1}\), for any \(l\in\{0,\ldots,L\}\). 
For \(l\in\{1,\ldots,L\}\), we define the function \(f_{l}^{+}:\left[0,1\right]^{d}\to\mathbb{R}^{p_{l}}\), \[f_{l}^{+}(\boldsymbol{w})=\sigma\circ M_{l-1}\circ\cdots\circ\sigma\circ M_{0}(\boldsymbol{w})\] and for \(l\in\{1,\ldots,L+1\}\), we define \(f_{l}^{-}:\mathbb{R}^{p_{l-1}}\to\mathbb{R}\), \[f_{l}^{-}(\boldsymbol{x})=M_{L}\circ\sigma\circ\cdots\circ\sigma\circ M_{l-1}(\boldsymbol{x}).\] We adopt the conventions \(f_{0}^{+}(\boldsymbol{x})=f_{L+2}^{-}(\boldsymbol{x})=\boldsymbol{x}\). Given a vector \(\boldsymbol{v}=(v_{1},\ldots,v_{p})^{\top}\) of any size \(p\in\mathbb{N}^{*}\), we denote \(|\boldsymbol{v}|_{\infty}=\max_{i=1,\ldots,p}|v_{i}|\). For any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\), since the absolute values of all the parameters are bounded by \(K\) and \(\boldsymbol{w}\in\left[0,1\right]^{d}\), we have for all \(l\in\{1,\ldots,L\}\) \[\left|f_{l}^{+}(\boldsymbol{w})\right|_{\infty}\leq K_{+}^{l}\prod_{k=0}^{l-1}(p_{k}+1),\] where \(K_{+}=\max\{K,1\}\), and \(f_{l}^{-}\), \(l\in\{1,\ldots,L+1\}\), is a multivariate Lipschitz function with Lipschitz constant bounded by \(\prod_{k=l-1}^{L}(K_{+}p_{k})\). For any \(f\in\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\) with weight matrices and shift vectors \(\{M_{l}=(A_{l},b_{l})\}_{l=0}^{L}\) and for all \(\epsilon>0\), since \(\mathbb{Q}\) is dense in \(\mathbb{R}\), there exist a sequence \(f_{i}\in\mathcal{F}_{(L,\boldsymbol{p},K)}\), \(i\in\mathbb{N}^{*}\), and an \(N_{\epsilon}>0\) such that for all \(i\geq N_{\epsilon}\), each non-zero parameter of \(f_{i}\) differs from the corresponding parameter of \(f\) by less than \[\frac{\epsilon}{(L+1)\prod_{k=0}^{L+1}\left[K_{+}(p_{k}+1)\right]}.\] We denote the weight matrices and shift vectors of the function \(f_{i}\) as \(\{M_{l}^{i}=(A_{l}^{i},b_{l}^{i})\}_{l=0}^{L}\). 
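The stability estimate behind this density argument can be seen numerically. The following pure-Python sketch (an illustration only; the small architecture, seed, and perturbation sizes are arbitrary choices, not part of the proof) evaluates a ReLU network and a copy whose parameters are each moved by at most \(\delta\), mimicking a rational approximation, and records the sup-norm gap on a grid:

```python
import random

def relu_vec(v):
    return [max(x, 0.0) for x in v]

def affine(M, b, x):
    return [sum(m * xj for m, xj in zip(row, x)) + bi for row, bi in zip(M, b)]

def net(layers, w):
    # layers: list of (matrix, bias) pairs; ReLU after every layer but the last
    x = list(w)
    for i, (M, b) in enumerate(layers):
        x = affine(M, b, x)
        if i < len(layers) - 1:
            x = relu_vec(x)
    return x[0]

random.seed(0)
rand = lambda: random.uniform(-1.0, 1.0)  # parameters bounded by K = 1
layers = [([[rand() for _ in range(2)] for _ in range(3)],
           [rand() for _ in range(3)]),
          ([[rand() for _ in range(3)]], [rand()])]

def perturb(layers, delta):
    # move every parameter by at most delta
    return [([[a + random.uniform(-delta, delta) for a in row] for row in M],
             [bi + random.uniform(-delta, delta) for bi in b])
            for M, b in layers]

grid = [(i / 20.0, j / 20.0) for i in range(21) for j in range(21)]
gaps = []
for delta in (1e-2, 1e-4, 1e-6):
    gaps.append(max(abs(net(layers, w) - net(perturb(layers, delta), w))
                    for w in grid))
print(gaps)  # the sup-norm gap shrinks linearly with the perturbation size
```

The gap is controlled, uniformly over the grid, by a constant times \(\delta\), in line with the Lipschitz constant \((L+1)\prod_{k}[K_{+}(p_{k}+1)]\) used in the displayed bound.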
We note that \[f_{i}(\boldsymbol{w})=f_{i,2}^{-}\circ\sigma\circ M_{0}^{i}\circ f_{0}^{+}(\boldsymbol{w})\] and \[f(\boldsymbol{w})=f_{i,L+2}^{-}\circ M_{L}\circ f_{L}^{+}(\boldsymbol{w}).\] Therefore, for all \(i\geq N_{\epsilon}\) and all \(\boldsymbol{w}\in[0,1]^{d}\), \[|f_{i}(\boldsymbol{w})-f(\boldsymbol{w})|\leq \sum_{l=1}^{L}\left|f_{i,l+1}^{-}\circ\sigma\circ M_{l-1}^{i}\circ f_{l-1}^{+}(\boldsymbol{w})-f_{i,l+1}^{-}\circ\sigma\circ M_{l-1}\circ f_{l-1}^{+}(\boldsymbol{w})\right|\] \[+\left|M_{L}^{i}\circ f_{L}^{+}(\boldsymbol{w})-M_{L}\circ f_{L}^{+}(\boldsymbol{w})\right|\] \[\leq \sum_{l=1}^{L}\left[\prod_{k=l}^{L}K_{+}p_{k}\right]\left|M_{l-1}^{i}\circ f_{l-1}^{+}(\boldsymbol{w})-M_{l-1}\circ f_{l-1}^{+}(\boldsymbol{w})\right|_{\infty}\] \[+\left|M_{L}^{i}\circ f_{L}^{+}(\boldsymbol{w})-M_{L}\circ f_{L}^{+}(\boldsymbol{w})\right|\] \[\leq \sum_{l=1}^{L+1}\left[\prod_{k=l}^{L+1}K_{+}p_{k}\right]\left|M_{l-1}^{i}\circ f_{l-1}^{+}(\boldsymbol{w})-M_{l-1}\circ f_{l-1}^{+}(\boldsymbol{w})\right|_{\infty}\] \[\leq \sum_{l=1}^{L+1}\left[\prod_{k=l}^{L+1}K_{+}p_{k}\right]\left[\left|\left(A_{l-1}^{i}-A_{l-1}\right)\circ f_{l-1}^{+}(\boldsymbol{w})\right|_{\infty}+|b_{l-1}^{i}-b_{l-1}|_{\infty}\right]\] \[< \frac{\sum_{l=1}^{L+1}\left[\prod_{k=l}^{L+1}K_{+}p_{k}\right]\left(p_{l-1}\left|f_{l-1}^{+}(\boldsymbol{w})\right|_{\infty}+1\right)}{(L+1)\prod_{k=0}^{L+1}\left[K_{+}(p_{k}+1)\right]}\epsilon\] \[< \epsilon.\] Hence, by the definition, we can conclude that \(\mathcal{F}_{(L,\boldsymbol{p},K)}\) is dense in \(\overline{\mathcal{F}}_{(L,\boldsymbol{p},K)}\) with respect to the supremum norm \(\|\cdot\|_{\infty}\). 
### Proof of Corollary 1

Proof.: Recall that, in accordance with the general result (7), for any \(n\geq V(\overline{\mathcal{F}}_{(L,p)})\geq V(\overline{\mathcal{F}}_{(L,p,K)})\), we can obtain \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon}\left[\inf_{f\in\mathcal{F}_{(L,p,K)}}\ell(Q_{f^{\star}},Q_{f})+\sqrt{\frac{V(\overline{\mathcal{F}}_{(L,p,K)})}{n}}\right], \tag{41}\] where \(C_{\epsilon}>0\) is a numerical constant depending on \(\epsilon\) only. Then, applying Lemma 3 and inequality (11), we derive from (41) that \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right] \leq C_{\epsilon,\sigma}\left[\inf_{f\in\mathcal{F}_{(L,p,K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(L,p,K)})}{n}}\right]\] \[\leq C_{\epsilon,\sigma}\left[\inf_{f\in\overline{\mathcal{F}}_{(L,p,K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(L,p,K)})}{n}}\right], \tag{42}\] where \(C_{\epsilon,\sigma}\) is a numerical constant depending only on \(\epsilon\) and \(\sigma\). On the one hand, as a consequence of Proposition 2, we have that for the network \(\overline{\mathcal{F}}_{(L,p,K)}\) with \[p=38(\lfloor\alpha\rfloor+1)^{2}3^{d}d^{\lfloor\alpha\rfloor+1}N\lceil\log_{2}(8N)\rceil, \tag{43}\] \[L=21(\lfloor\alpha\rfloor+1)^{2}M\lceil\log_{2}(8M)\rceil+2d \tag{44}\] and \(K\) being large enough, \[\inf_{f\in\overline{\mathcal{F}}_{(L,p,K)}}\|f^{\star}-f\|_{1,P_{W}} =\inf_{f\in\overline{\mathcal{F}}_{(L,p,K)}}\int_{\mathscr{W}}|f^{\star}(w)-f(w)|dP_{W}(w) \tag{45}\] \[\leq 19B(\lfloor\alpha\rfloor+1)^{2}d^{\lfloor\alpha\rfloor+(\alpha\lor 1)/2}(NM)^{-2\alpha/d}.\] On the other hand, given the equalities (43) and (44), we have \(p\geq 342\) and \(L\geq 65\), for any \(\alpha\in\mathbb{R}_{+}^{*}\). 
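Since (43) and (44) are explicit integer-valued expressions, the claims \(p\geq 342\) and \(L\geq 65\) can be checked directly. A small Python sketch (the formulas are transcribed from (43)–(44); the sample values of \(\alpha\) are arbitrary, and the minimum is attained at \(d=N=M=1\)):

```python
import math

def width_depth(alpha, d, N, M):
    # p and L as in (43) and (44)
    fa = math.floor(alpha)
    p = 38 * (fa + 1) ** 2 * 3 ** d * d ** (fa + 1) * N * math.ceil(math.log2(8 * N))
    L = 21 * (fa + 1) ** 2 * M * math.ceil(math.log2(8 * M)) + 2 * d
    return p, L

# the smallest admissible case d = N = M = 1 already satisfies the bounds
for alpha in (0.3, 1.0, 2.5):
    p, L = width_depth(alpha, d=1, N=1, M=1)
    print(alpha, p, L)
    assert p >= 342 and L >= 65
```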
By applying Proposition 1, we can derive through a basic computation that \[V(\overline{\mathcal{F}}_{(L,p,K)}) \leq(L+1)\left(s+1\right)\log_{2}\left[2\left(2e(L+1)\left(\frac{pL}{2}+1\right)\right)^{2}\right]\] \[\leq C_{d}p^{2}L^{2}\log_{2}\left(pL^{2}\right)\] \[\leq C_{\alpha,d}(NM)^{2}\left[\log_{2}(2N)\log_{2}(2M)\right]^{3}, \tag{46}\] where \(C_{d}\) only depends on \(d\) and \(C_{\alpha,d}\) only depends on \(d\) and \(\alpha\). Plugging (45) and (46) into (42), we can conclude that \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\leq C_{\epsilon,\sigma,\alpha,d,B}\left[(NM)^{-2\alpha/d}+\frac{NM}{\sqrt{n}}\left(\log_{2}(2N)\log_{2}(2M)\right)^{3/2}\right],\] where \(C_{\epsilon,\sigma,\alpha,d,B}>0\) only depends on \(\epsilon,\sigma,\alpha,d\) and \(B\).

### Proof of Proposition 3

Proof.: Prior to proving Proposition 3, we will first introduce the following rules for network combination, which are extensively detailed in Section 7.1 of Schmidt-Hieber (2020).

_Composition_: Let \(f_{1}\in\overline{\mathcal{F}}(L,\mathbf{p})\) and \(f_{2}\in\overline{\mathcal{F}}(L^{\prime},\mathbf{p}^{\prime})\) be such that \(p_{L+1}=p_{0}^{\prime}\). Let \(\mathbf{v}\in\mathbb{R}^{p_{L+1}}\) be a vector. We define the composed network \(f_{2}\circ\sigma_{\mathbf{v}}(f_{1})\), where \[\sigma_{\mathbf{v}}\begin{pmatrix}y_{1}\\ \vdots\\ y_{p_{L+1}}\end{pmatrix}=\begin{pmatrix}\sigma(y_{1}-v_{1})\\ \vdots\\ \sigma(y_{p_{L+1}}-v_{p_{L+1}})\end{pmatrix},\] for any vector \(\mathbf{y}=(y_{1},\ldots,y_{p_{L+1}})^{\top}\in\mathbb{R}^{p_{L+1}}\). Then \(f_{2}\circ\sigma_{\mathbf{v}}(f_{1})\) belongs to the space \(\overline{\mathcal{F}}(L+L^{\prime}+1,(\mathbf{p},p_{1}^{\prime},\ldots,p_{L^{\prime}+1}^{\prime}))\).

_Parallelization_: Let \(f_{1}\) and \(f_{2}\) be two networks with an equal number of hidden layers and identical input dimensions. 
Specifically, let \(f_{1}\in\overline{\mathcal{F}}(L,\mathbf{p})\) and \(f_{2}\in\overline{\mathcal{F}}(L,\mathbf{p}^{\prime})\), where \(p_{0}=p_{0}^{\prime}\). The parallelized network \((f_{1},f_{2})\) concurrently computes \(f_{1}\) and \(f_{2}\) within a joint network belonging to the class \(\overline{\mathcal{F}}(L,(p_{0},p_{1}+p_{1}^{\prime},\ldots,p_{L+1}+p_{L+1}^{\prime}))\). We will also use the following inequality later in the proof. It can be derived through a minor modification of the proof of Lemma 3 in Schmidt-Hieber (2020).

**Lemma 7**.: _Let \(k\in\mathbb{N}^{*}\), \(\mathbf{d}=(d_{0},\ldots,d_{k})\in(\mathbb{N}^{*})^{k+1}\), \(\mathbf{t}=(t_{0},\ldots,t_{k})\in(\mathbb{N}^{*})^{k+1}\) with \(t_{i}\leq d_{i}\) and \(\boldsymbol{\alpha}=(\alpha_{0},\ldots,\alpha_{k})\in(\mathbb{R}^{*}_{+})^{k+1}\). For any \(i\in\{0,\ldots,k\}\) and \(j\in\{1,\ldots,d_{i+1}\}\) with \(d_{k+1}=1\), let \(h_{ij}\in\mathcal{H}^{\alpha_{i}}([0,1]^{t_{i}}\,,Q_{i})\) taking values in \([0,1]\) for some \(Q_{i}\geq 1\) and \(h_{i}=(h_{i1},\ldots,h_{id_{i+1}})^{\top}\). Then for any function \(\widetilde{h}_{i}=(\widetilde{h}_{i1},\ldots,\widetilde{h}_{id_{i+1}})^{\top}\) with \(\widetilde{h}_{ij}:[0,1]^{t_{i}}\to[0,1]\),_ \[\|h_{k}\circ\cdots\circ h_{0}-\widetilde{h}_{k}\circ\cdots\circ\widetilde{h}_{0}\|_{\infty}\leq\left(\prod_{i=0}^{k}Q_{i}\sqrt{d_{i}}\right)\sum_{i=0}^{k}|||h_{i}-\widetilde{h}_{i}|||_{\infty}^{\prod_{l=i+1}^{k}(\alpha_{l}\wedge 1)},\] _where \(|||f|||_{\infty}\) denotes the sup-norm of the function \(\boldsymbol{x}\mapsto|f(\boldsymbol{x})|_{\infty}\)._ The essential strategy for establishing Proposition 3 is derived from a section of the proof of Theorem 1 in Schmidt-Hieber (2020). However, we employ distinct fundamental networks as suggested by Proposition 2 to approximate functions with Hölder smoothness. 
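The composition and parallelization rules act on the width vectors \((p_{0},\ldots,p_{L+1})\) in a purely mechanical way; a minimal sketch of that bookkeeping (the example dimensions are arbitrary):

```python
def compose_dims(p, q):
    # composition rule: f1 in F(L, p), f2 in F(L', q) with p_{L+1} = q_0;
    # f2 ∘ σ_v(f1) lives in F(L + L' + 1, (p, q_1, ..., q_{L'+1}))
    assert p[-1] == q[0], "output width of f1 must equal input width of f2"
    return tuple(p) + tuple(q[1:])

def parallel_dims(p, q):
    # parallelization rule: equal depth and input dimension,
    # hidden and output widths add coordinate-wise
    assert len(p) == len(q) and p[0] == q[0]
    return (p[0],) + tuple(a + b for a, b in zip(p[1:], q[1:]))

p = (2, 5, 5, 1)   # L = 2 hidden layers, input dim 2, output dim 1
q = (1, 4, 1)      # L' = 1 hidden layer
c = compose_dims(p, q)
print(c, len(c) - 2)          # depth of the composition is L + L' + 1 = 4
print(parallel_dims((2, 5, 5, 1), (2, 3, 3, 2)))
```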
This, in turn, leads to more specific neural network structures for approximating \(f^{\star}=f_{k}\circ\cdots\circ f_{0}\) compared to the sparsity-based networks considered in Theorem 1 of Schmidt-Hieber (2020). To begin with, we rewrite \[f^{\star}=f_{k}\circ\cdots\circ f_{0}=g_{k}\circ\cdots\circ g_{0},\] where \[g_{0}:=\frac{f_{0}}{2B}+\frac{1}{2},\quad g_{k}:=f_{k}(2B\cdot-B)\] and \[g_{i}:=\frac{f_{i}(2B\cdot-B)}{2B}+\frac{1}{2}\quad\text{for all }i\in\{1,\ldots,k-1\}.\] Given the condition \(B\geq 1\), we can readily confirm that \(g_{0j}\in\mathcal{H}^{\alpha_{0}}([0,1]^{t_{0}}\,,Q_{0})\), \(g_{ij}\in\mathcal{H}^{\alpha_{i}}([0,1]^{t_{i}}\,,Q_{i})\), for \(i\in\{1,\ldots,k-1\}\) and \(g_{kj}\in\mathcal{H}^{\alpha_{k}}([0,1]^{t_{k}}\,,Q_{k})\), with \(Q_{0}=1\), \(Q_{i}=(2B)^{\alpha_{i}}\), for \(i\in\{1,\ldots,k-1\}\) and \(Q_{k}=2^{\alpha_{k}}B^{\alpha_{k}+1}\). We apply Proposition 2 to approximate each function \(g_{ij}\), for all \(j\in\{1,\ldots,d_{i+1}\}\), \(i\in\{0,\ldots,k\}\). 
In particular, for all the functions \(g_{i1},\ldots,g_{id_{i+1}}\), we take \(N_{i}=1\), \(M_{i}=\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\) and consider a ReLU network \(\overline{\mathcal{F}}_{(L_{i},(t_{i},p_{i},\ldots,p_{i},1))}\) with \[p_{i}=114(\lfloor\alpha_{i}\rfloor+1)^{2}3^{t_{i}}t_{i}^{\lfloor\alpha_{i}\rfloor+1},\] \[L_{i}=21(\lfloor\alpha_{i}\rfloor+1)^{2}\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil\lceil\log_{2}(8\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)\rceil+2t_{i}.\] According to Proposition 2, there exists a function \(\overline{g}_{ij}\in\overline{\mathcal{F}}_{(L_{i},(t_{i},p_{i},\ldots,p_{i},1))}\) such that \[\|\overline{g}_{ij}-g_{ij}\|_{\infty} \leq 19Q_{i}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}(N_{i}M_{i})^{-2\alpha_{i}/t_{i}}, \tag{47}\] \[\leq 19Q_{i}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}(\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)^{-2\alpha_{i}/t_{i}}.\] Let \(\widetilde{g}_{ij}=1-(1-\overline{g}_{ij})_{+}\), so that \(\sigma(\widetilde{g}_{ij})=(\overline{g}_{ij}\lor 0)\wedge 1\) assumes values in the interval \([0,1]\). Recall that since \(\overline{g}_{ij}\in\overline{\mathcal{F}}_{(L_{i},(t_{i},p_{i},\ldots,p_{i},1))}\), it can be written as \[\overline{g}_{ij}=\overline{M}_{L_{i}}^{(i)}\circ\sigma\circ\cdots\circ\sigma\circ\overline{M}_{0}^{(i)},\] for some linear transformations \(\overline{M}_{0}^{(i)},\ldots,\overline{M}_{L_{i}}^{(i)}\). Let \(M_{L_{i}+2}(x)=M_{L_{i}+1}(x)=1-x\), for any \(x\in\mathbb{R}\). 
Then we have \[\widetilde{g}_{ij} =M_{L_{i}+2}\circ\sigma\circ M_{L_{i}+1}\circ\overline{g}_{ij}\] \[=M_{L_{i}+2}\circ\sigma\circ M_{L_{i}+1}\circ\overline{M}_{L_{i}}^{(i)}\circ\sigma\circ\cdots\circ\sigma\circ\overline{M}_{0}^{(i)}\] \[=M_{L_{i}+2}\circ\sigma\circ\widetilde{M}_{L_{i}}^{(i)}\circ\sigma\circ\cdots\circ\sigma\circ\overline{M}_{0}^{(i)},\] where \(\widetilde{M}_{L_{i}}^{(i)}=M_{L_{i}+1}\circ\overline{M}_{L_{i}}^{(i)}\). Hence, we deduce that \(\widetilde{g}_{ij}\in\overline{\mathcal{F}}_{(L_{i}+1,(t_{i},p_{i},\ldots,p_{i},1,1))}\). Furthermore, as each function \(g_{ij}\) assumes values in the interval \([0,1]\) due to the transformation, this implies that \[\|\sigma(\widetilde{g}_{ij})-g_{ij}\|_{\infty}\leq\|\widetilde{g}_{ij}-g_{ij}\|_{\infty}\leq\|\overline{g}_{ij}-g_{ij}\|_{\infty}. \tag{48}\] Next, we amalgamate these individual small networks by employing the fundamental operations of neural networks introduced at the outset of this proof. Note that \(\overline{\mathcal{F}}_{(L_{i}+1,(t_{i},p_{i},\ldots,p_{i},1,1))}\subset\overline{\mathcal{F}}_{(L_{i}+1,(d_{i},p_{i},\ldots,p_{i},1,1))}\), for \(t_{i}\leq d_{i}\). By the parallelization rule, the function \(\widetilde{g}_{i}=(\widetilde{g}_{i1},\ldots,\widetilde{g}_{id_{i+1}})\) can be implemented by the ReLU neural network \(\overline{\mathcal{F}}_{(L_{i}+1,(d_{i},d_{i+1}p_{i},\ldots,d_{i+1}p_{i},d_{i+1},d_{i+1}))}\). A similar analysis implies that \(\overline{g}_{k}\) can be implemented by the ReLU neural network \(\overline{\mathcal{F}}_{(L_{k},(d_{k},d_{k+1}p_{k},\ldots,d_{k+1}p_{k},d_{k+1}))}\). To construct the function \(\widetilde{f}=\overline{g}_{k}\circ\widetilde{g}_{k-1}\circ\cdots\circ\widetilde{g}_{0}\) that approximates the function \(f^{\star}=g_{k}\circ\cdots\circ g_{0}\), we apply the composition rule to amalgamate the networks we have considered earlier. 
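The clipping realized by the two extra layers \(M_{L_{i}+1}(x)=M_{L_{i}+2}(x)=1-x\), followed by the ReLU of the next layer, is the standard identity \(\sigma(1-\sigma(1-x))=(x\lor 0)\wedge 1\); a minimal numeric check:

```python
import math

def relu(x):
    return max(x, 0.0)

def clipped(x):
    # two affine maps M(x) = 1 - x with ReLUs in between and after:
    # sigma(1 - sigma(1 - x)) clips x to [0, 1]
    return relu(1.0 - relu(1.0 - x))

for x in (-0.7, 0.0, 0.3, 1.0, 2.4):
    assert math.isclose(clipped(x), min(max(x, 0.0), 1.0), abs_tol=1e-12)
```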
It can be shown with a similar argument as before that for any \(k\in\mathbb{N}^{*}\), \(\widetilde{g}_{k-1}\circ\cdots\circ\widetilde{g}_{0}\) can be implemented by the ReLU neural network \[\overline{\mathcal{F}}\left(\sum_{i=0}^{k-1}(L_{i}+1),\left(d_{0},\underbrace{d_{1}p_{0},\ldots,d_{1}p_{0}}_{L_{0}\text{ times}},\ldots,d_{k-1},\underbrace{d_{k}p_{k-1},\ldots,d_{k}p_{k-1}}_{L_{k-1}\text{ times}},d_{k},d_{k}\right)\right).\] Note that \(d_{i}\geq 1\) for all \(0\leq i\leq k+1\) and \(p_{i}\geq 1\) for \(0\leq i\leq k\). Denote \[\overline{p}=\max_{i=0,\ldots,k}d_{i+1}p_{i}.\] Finally, we can conclude that the function \(\widetilde{f}=\overline{g}_{k}\circ\widetilde{g}_{k-1}\circ\cdots\circ\widetilde{g}_{0}\) can be implemented by the ReLU neural network \(\overline{\mathcal{F}}_{(\overline{L},(d_{0},\overline{p},\ldots,\overline{p},d_{k+1}))}\) with \(\overline{L}=k+\sum_{i=0}^{k}L_{i}\). Recall that \(Q_{0}=1\), \(Q_{i}=(2B)^{\alpha_{i}}\), for \(i\in\{1,\ldots,k-1\}\) and \(Q_{k}=2^{\alpha_{k}}B^{\alpha_{k}+1}\). Combining Lemma 7 with (47) and (48) yields the following upper bound for the approximation error, \[\inf_{f\in\overline{\mathcal{F}}_{(\overline{L},(d_{0},\overline{p},\ldots,\overline{p},d_{k+1}))}}\|f^{\star}-f\|_{\infty}\] \[\leq(2B)^{1+\sum_{i=1}^{k}\alpha_{i}}\left(\prod_{i=0}^{k}\sqrt{d_{i}}\right)\left[\sum_{i=0}^{k}C_{\alpha_{i},t_{i},B}^{\prod_{l=i+1}^{k}(\alpha_{l}\wedge 1)}(\lceil n^{t_{i}/2(t_{i}+2\alpha_{i}^{*})}\rceil)^{-2\alpha_{i}^{*}/t_{i}}\right],\] where \[C_{\alpha_{i},t_{i},B}=19(2B)^{\alpha_{i}+1}(\lfloor\alpha_{i}\rfloor+1)^{2}t_{i}^{\lfloor\alpha_{i}\rfloor+(\alpha_{i}\lor 1)/2}.\]

### Proof of Corollary 2

Proof.: Firstly, we establish an upper bound for the VC-dimension of the ReLU neural network \(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}\). 
Using the fact that for any \(k\), \(\mathbf{d}\), \(\mathbf{t}\), \(\boldsymbol{\alpha}\) and any \(n\geq 1\), \(\overline{L}\geq L_{0}\geq 65\) and \(\overline{p}\geq p_{0}\geq 342\), we can deduce, through the application of Proposition 1, that \[V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}) \leq C_{d_{0}}\overline{p}^{2}\overline{L}^{2}\log_{2}\left(\overline{p}\overline{L}^{2}\right) \tag{49}\] \[\leq C_{k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha}}\left(\sum_{i=0}^{k}L_{i}\right)^{2}\log_{2}\left(\sum_{i=0}^{k}L_{i}\right),\] where \(C_{k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha}}>0\) is a numerical constant depending only on \(k,\mathbf{d},\mathbf{t}\) and \(\boldsymbol{\alpha}\). Combining (7) with the inequality (11), we obtain that for any \(n\geq V(\overline{\mathcal{F}}_{(\overline{L},\overline{p})})\geq V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)})\), \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right] \leq C_{\epsilon,\sigma}\left[\inf_{f\in\mathcal{F}_{(\overline{L},\overline{p},K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)})}{n}}\right] \tag{50}\] \[\leq C_{\epsilon,\sigma}\left[\inf_{f\in\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}}\|f^{\star}-f\|_{1,P_{W}}+\sqrt{\frac{V(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)})}{n}}\right],\] where the second inequality arises from the fact that \(\mathcal{F}_{(\overline{L},\overline{p},K)}\) is dense in \(\overline{\mathcal{F}}_{(\overline{L},\overline{p},K)}\) with respect to the sup-norm according to Lemma 3. 
Finally, plugging the result provided in Proposition 3 and (49) into (50), we can conclude that \[\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right] \leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B}\left[\left(\sum_{i=0}^{k}n^{-\frac{\alpha_{i}^{*}}{t_{i}+2\alpha_{i}^{*}}}\right)+\left(\sum_{i=0}^{k}L_{i}\right)\sqrt{\frac{\log_{2}\left(\sum_{i=0}^{k}L_{i}\right)}{n}}\right]\] \[\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B}\left[\sum_{i=0}^{k}\left(n^{-\frac{\alpha_{i}^{*}}{t_{i}+2\alpha_{i}^{*}}}+\frac{L_{i}}{\sqrt{n}}\right)\right]\sqrt{\log_{2}\left(\sum_{i=0}^{k}L_{i}\right)}\] \[\leq C_{\epsilon,\sigma,k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B}\left(\sum_{i=0}^{k}n^{-\frac{\alpha_{i}^{*}}{t_{i}+2\alpha_{i}^{*}}}\right)(\log n)^{3/2}.\]

### Proof of Theorem 2

To establish lower bounds, we initially prove the following variant of Assouad's lemma.

**Lemma 8**.: _Let \(\mathcal{P}\) be a family of probabilities on a measurable space \((\mathscr{X},\mathcal{X})\). If for some integer \(D\geq 1\), there is a subset of \(\mathcal{P}\) of the form \(\left\{P_{\boldsymbol{\varepsilon}},\ \boldsymbol{\varepsilon}\in\{0,1\}^{D}\right\}\) satisfying_ 1. _there exists_ \(\eta>0\) _such that for all_ \(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime}\in\{0,1\}^{D}\)_,_ \[\|P_{\boldsymbol{\varepsilon}}-P_{\boldsymbol{\varepsilon}^{\prime}}\|_{TV}\geq\eta\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})\quad\text{with}\quad\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})=\sum_{j=1}^{D}\mathbb{1}_{\varepsilon_{j}\neq\varepsilon_{j}^{\prime}}\] 2. 
_there exists a constant_ \(a\in[0,1/2]\) _such that_ \[h^{2}\left(P_{\boldsymbol{\varepsilon}},P_{\boldsymbol{\varepsilon}^{\prime}}\right)\leq\frac{a}{n}\quad\text{for all }\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime}\in\{0,1\}^{D}\text{ satisfying }\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})=1.\] _Then for all measurable mappings \(\widehat{P}:\mathscr{X}^{n}\to\mathcal{P}\),_ \[\sup_{P\in\mathcal{P}}\mathbb{E}_{\mathbf{P}}\left[\|P-\widehat{P}(\boldsymbol{X})\|_{TV}\right]\geq\frac{\eta D}{4}\max\left\{1-\sqrt{2a},\ \frac{1}{2}\left(1-\frac{a}{n}\right)^{2n}\right\}, \tag{51}\] _where \(\mathbb{E}_{\mathbf{P}}\) denotes the expectation with respect to a random variable \(\boldsymbol{X}=(X_{1},\ldots,X_{n})\) with distribution \(\mathbf{P}=P^{\otimes n}\)._ Proof.: Let \(\overline{\boldsymbol{\varepsilon}}\) minimize \(\boldsymbol{\varepsilon}\mapsto\|P-P_{\boldsymbol{\varepsilon}}\|_{TV}\) over \(\{0,1\}^{D}\) for a given probability \(P\) on \((\mathscr{X},\mathcal{X})\). Note that for all \(\boldsymbol{\varepsilon}\in\{0,1\}^{D}\), \[\|P_{\boldsymbol{\varepsilon}}-P_{\overline{\boldsymbol{\varepsilon}}}\|_{TV}\leq\|P-P_{\boldsymbol{\varepsilon}}\|_{TV}+\|P-P_{\overline{\boldsymbol{\varepsilon}}}\|_{TV}\leq 2\|P-P_{\boldsymbol{\varepsilon}}\|_{TV}.\] Thus, using property (i), we have for all \(\boldsymbol{\varepsilon}\in\{0,1\}^{D}\): \[\|P_{\boldsymbol{\varepsilon}}-P\|_{TV}\geq\frac{\eta}{2}\delta(\boldsymbol{\varepsilon},\overline{\boldsymbol{\varepsilon}})=\sum_{i=1}^{D}\left[\varepsilon_{i}\ell_{i}(P)+(1-\varepsilon_{i})\ell_{i}^{\prime}(P)\right],\] where \(\ell_{i}(P)=(\eta/2)\mathbb{1}_{\overline{\varepsilon}_{i}=0}\) and \(\ell_{i}^{\prime}(P)=(\eta/2)\mathbb{1}_{\overline{\varepsilon}_{i}=1}\), for \(i\in\{1,\ldots,D\}\). Finally, the conclusion follows by applying a version of Assouad's lemma from Birgé (1986) with \(\beta_{i}=a/n\), for all \(i\in\{1,\ldots,D\}\) and \(\alpha=\eta/2\). Now we prove Theorem 2. 
The roadmap is to first find a suitable collection of probabilities \(\mathcal{P}\) and then apply Lemma 8 to derive the lower bound. The construction idea is inspired by the proof of Theorem 3 of Schmidt-Hieber (2020). Denote \(i^{*}\in\operatorname*{argmin}_{i=0,\ldots,k}\alpha_{i}^{*}/(2\alpha_{i}^{*}+t_{i})\). For simplicity, we write \(t^{*}=t_{i^{*}}\), \(\alpha^{*}=\alpha_{i^{*}}\) and \(\alpha^{**}=\alpha_{i^{*}}^{*}\). We define \(N_{n}=\lfloor\rho n^{1/(2\alpha^{**}+t^{*})}\rfloor\), \(h_{n}=1/N_{n}\) and \(\Lambda=\{0,h_{n},\ldots,(N_{n}-1)h_{n}\}\). Following the construction outlined on page 93 of Tsybakov (2009), we consider the function \[\mathcal{K}(x)=a\exp\left(-\frac{1}{1-(2x-1)^{2}}\right)\mathbb{1}_{|2x-1|\leq 1}\] with \(a>0\). Provided that \(a\) is sufficiently small, we have \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\) with support on \([0,1]\). Moreover, for any \(\beta\in\mathbb{N}\) satisfying \(\beta\leq\lfloor\alpha^{*}\rfloor\), the \(\beta\)-th derivative of \(\mathcal{K}\) is zero at both \(x=0\) and \(x=1\), i.e., \(\mathcal{K}^{(\beta)}(0)=\mathcal{K}^{(\beta)}(1)=0\). We define the function \(\psi_{\mathbf{u}}\) on \([0,1]^{t^{*}}\) as \[\psi_{\mathbf{u}}(w_{1},\ldots,w_{t^{*}})=h_{n}^{\alpha^{*}}\prod_{j=1}^{t^{*}}\mathcal{K}\left(\frac{w_{j}-u_{j}}{h_{n}}\right),\] where \(\mathbf{u}=(u_{1},\ldots,u_{t^{*}})\in\mathcal{U}_{n}=\{(u_{1},\ldots,u_{t^{*}}),\;u_{i}\in\Lambda\}\). Note that for any \(\mathbf{u},\mathbf{u}^{\prime}\in\mathcal{U}_{n}\), \(\mathbf{u}\neq\mathbf{u}^{\prime}\), the supports of \(\psi_{\mathbf{u}}\) and \(\psi_{\mathbf{u}^{\prime}}\) are disjoint. For any \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t^{*}})\in\mathbb{N}^{t^{*}}\) satisfying \(\sum_{j=1}^{t^{*}}\beta_{j}\leq\lfloor\alpha^{*}\rfloor\), it holds that \(\|\partial^{\boldsymbol{\beta}}\psi_{\mathbf{u}}\|_{\infty}\leq 1\) due to the fact that \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\). 
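The key properties of the bump \(\mathcal{K}\) — compact support in \([0,1]\) and vanishing to all orders at the endpoints — are easy to see numerically; a small sketch (the height \(a=0.01\) is an arbitrary illustrative choice):

```python
import math

def K(x, a=0.01):
    # the bump a * exp(-1 / (1 - (2x-1)^2)) for |2x-1| < 1, zero elsewhere
    z = 2.0 * x - 1.0
    return a * math.exp(-1.0 / (1.0 - z * z)) if abs(z) < 1.0 else 0.0

assert K(-0.1) == 0.0 and K(1.1) == 0.0      # support inside [0, 1]
assert math.isclose(K(0.5), 0.01 * math.exp(-1.0))  # maximum at the midpoint
# all derivatives vanish at 0 and 1: the exponent behaves like -1/(4x) near 0,
# so close to the boundary the value underflows to exactly 0.0 in floats
assert K(1e-4) == 0.0
```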
Set \(\mathcal{I}_{\mathbf{u}}=[u_{1},u_{1}+h_{n}]\times\cdots\times[u_{t^{*}},u_{t^ {*}}+h_{n}]\). Moreover, for any \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{t^{*}})\) with \(\sum_{j=1}^{t^{*}}\beta_{j}=\lfloor\alpha^{*}\rfloor\), with the fact that \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\) and triangle inequality, we obtain that for any \(\boldsymbol{x},\boldsymbol{y}\in\mathcal{I}_{\mathbf{u}}\), \[\frac{\left|\partial^{\boldsymbol{\beta}}\psi_{\mathbf{u}}(\boldsymbol{x})- \partial^{\boldsymbol{\beta}}\psi_{\mathbf{u}}(\boldsymbol{y})\right|}{\| \boldsymbol{x}-\boldsymbol{y}\|_{2}^{\alpha^{*}-\lfloor\alpha^{*}\rfloor}} \leq t^{*}.\] Therefore, we have \(\psi_{\mathbf{u}}\in\mathcal{H}^{\alpha^{*}}(\mathcal{I}_{\mathbf{u}},t^{*})\). For any vector \(\boldsymbol{\varepsilon}=(\varepsilon_{\mathbf{u}})_{\mathbf{u}\in\mathcal{U} _{n}}\in\{0,1\}^{\left|\mathcal{U}_{n}\right|}\), define the function \(\phi_{\boldsymbol{\varepsilon}}\) on \([0,1]^{t^{*}}\) as \[\phi_{\boldsymbol{\varepsilon}}(w_{1},\ldots,w_{t^{*}})=\sum_{\mathbf{u}\in \mathcal{U}_{n}}\varepsilon_{\mathbf{u}}\psi_{\mathbf{u}}(w_{1},\ldots,w_{t^ {*}}).\] Given that \(\mathcal{K}\in\mathcal{H}^{\alpha^{*}}(\mathbb{R},1)\) and \(\mathcal{K}^{(\beta)}(0)=\mathcal{K}^{(\beta)}(1)=0\), for any \(\beta\leq\lfloor\alpha^{*}\rfloor\), it is not difficult to verify that \(\phi_{\boldsymbol{\varepsilon}}\in\mathcal{H}^{\alpha^{*}}(\left[0,1\right]^{t^ {*}},2t^{*})\). Let \(d_{i}^{\prime}=\min\{d_{0},\ldots,d_{i}\}\), for all \(i\in\{0,\ldots,k\}\). For \(0\leq i<i^{*}\), we denote \(f_{i}(\boldsymbol{w})=(w_{1},\ldots,w_{d_{i+1}})^{\top}\), if \(d_{i+1}=d_{i+1}^{\prime}\); otherwise, we set \(f_{i}(\boldsymbol{w})=(w_{1},\ldots,w_{d_{i}^{\prime}},0,\ldots,0)^{\top}\). 
We denote \(f_{\boldsymbol{\varepsilon},i^{*}}(\boldsymbol{w})=(\phi_{\boldsymbol{\varepsilon}}(w_{1},\ldots,w_{t^{*}}),0,\ldots,0)^{\top}\), \(f_{i}(\boldsymbol{w})=(w_{1}^{\alpha_{i}\wedge 1},0,\ldots,0)^{\top}\) for \(i^{*}<i\leq k-1\), and \(f_{k}(\boldsymbol{w})=w_{1}^{\alpha_{k}\wedge 1}\). Let \(\mathcal{A}=\prod_{l=i^{*}+1}^{k}(\alpha_{l}\wedge 1)\). Since \(t_{j}\leq\min(d_{0},\ldots,d_{j-1})\), we can set \[f_{\boldsymbol{\varepsilon}}(\boldsymbol{w}) =f_{k}\circ\cdots\circ f_{i^{*}+1}\circ f_{\boldsymbol{\varepsilon},i^{*}}\circ f_{i^{*}-1}\circ\cdots\circ f_{0}(\boldsymbol{w})\] \[=\sum_{\mathbf{u}\in\mathcal{U}_{n}}\varepsilon_{\mathbf{u}}\left[\psi_{\mathbf{u}}(w_{1},\ldots,w_{t^{*}})\right]^{\mathcal{A}}.\] Consequently, we can observe that the resulting function \(f_{\boldsymbol{\varepsilon}}\) belongs to the class \(\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)\) when \(B\) is sufficiently large. Since \(W\) is uniformly distributed on \([0,1]^{d_{0}}\), we can compute \[\|f_{\boldsymbol{\varepsilon}}-f_{\boldsymbol{\varepsilon}^{\prime}}\|_{2}^{2}=\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})h_{n}^{2\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}} \tag{52}\] and \[\|f_{\boldsymbol{\varepsilon}}-f_{\boldsymbol{\varepsilon}^{\prime}}\|_{1}=\delta(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime})h_{n}^{\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}, \tag{53}\] where \(\delta(\cdot,\cdot)\) denotes the Hamming distance. 
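The scaling in (52)–(53) comes from the change of variables \(w\mapsto w/h_{n}\) on each cell. A numeric sketch for the one-dimensional case \(t^{*}=1\) (the values of \(a\), \(\alpha^{*}\) and \(\mathcal{A}\) are arbitrary illustrative choices) verifies that the per-cell \(L_{1}\) mass scales as \(h^{\alpha^{**}+1}\) and the squared \(L_{2}\) mass as \(h^{2\alpha^{**}+1}\), with \(\alpha^{**}=\alpha^{*}\mathcal{A}\):

```python
import math

a, alpha, A = 0.01, 0.6, 0.7   # bump height, alpha*, product of (alpha_l ∧ 1)

def K(x):
    z = 2.0 * x - 1.0
    return a * math.exp(-1.0 / (1.0 - z * z)) if abs(z) < 1.0 else 0.0

def norms(h, n_grid=20000):
    # midpoint-rule L1 and squared-L2 norms of (h^alpha * K(x/h))^A over [0, h]
    s1 = s2 = 0.0
    for i in range(n_grid):
        x = (i + 0.5) / n_grid * h
        v = (h ** alpha * K(x / h)) ** A
        s1 += v
        s2 += v * v
    dx = h / n_grid
    return s1 * dx, s2 * dx

l1_a, l2_a = norms(0.2)
l1_b, l2_b = norms(0.1)
# halving h divides the L1 mass by 2^(alpha*A + 1), the squared-L2
# mass by 2^(2*alpha*A + 1)
assert abs(l1_a / l1_b - 2 ** (alpha * A + 1)) < 1e-6
assert abs(l2_a / l2_b - 2 ** (2 * alpha * A + 1)) < 1e-6
```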
For any \(P_{f_{\mathbf{\varepsilon}}}=Q_{f_{\mathbf{\varepsilon}}}\cdot P_{W}\) and \(P_{f_{\mathbf{\varepsilon}^{\prime}}}=Q_{f_{\mathbf{\varepsilon}^{\prime}}}\cdot P_{W}\), where \(P_{W}\) is the uniform distribution on \([0,1]^{d_{0}}\), we can derive that \[h^{2}(P_{f_{\mathbf{\varepsilon}}},P_{f_{\mathbf{\varepsilon}^{\prime}}}) =\int_{\mathscr{W}}\left(1-\exp\left[-\frac{|f_{\mathbf{ \varepsilon}}(w)-f_{\mathbf{\varepsilon}^{\prime}}(w)|^{2}}{8\sigma^{2}}\right] \right)dP_{W}(w)\] \[\leq\int_{\mathscr{W}}\frac{|f_{\mathbf{\varepsilon}}(w)-f_{\mathbf{ \varepsilon}^{\prime}}(w)|^{2}}{8\sigma^{2}}dP_{W}(w) \tag{54}\] \[=\frac{\|f_{\mathbf{\varepsilon}}-f_{\mathbf{\varepsilon}^{\prime}}\|_{2 }^{2}}{8\sigma^{2}}.\] According to Lemma 2, we can deduce that for \(W\) uniformly distributed on \([0,1]^{d_{0}}\), \[\ell(Q_{f_{\mathbf{\varepsilon}}},Q_{f_{\mathbf{\varepsilon}^{\prime}}})=\|P_{f_{\mathbf{ \varepsilon}}}-P_{f_{\mathbf{\varepsilon}^{\prime}}}\|_{TV}\geq\frac{0.78}{\sqrt{2 \pi}\sigma}\|f_{\mathbf{\varepsilon}}-f_{\mathbf{\varepsilon}^{\prime}}\|_{1}, \tag{55}\] provided \(\rho\geq 1+\left[\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}/(\sqrt{2\pi}\sigma )\right]^{1/\alpha^{**}}\) such that \(h_{n}^{{\alpha^{**}}}\leq\sqrt{2\pi}\sigma/\|\mathcal{K}^{\mathcal{A}}\|_{1}^{ t^{*}}\). 
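The inequality in (54) is just \(1-e^{-x}\leq x\) applied under the integral. A Monte-Carlo sketch (the value of \(\sigma\) and the simulated pointwise differences \(f_{\boldsymbol{\varepsilon}}(w)-f_{\boldsymbol{\varepsilon}^{\prime}}(w)\) are arbitrary illustrative choices):

```python
import math, random

random.seed(1)
sigma = 0.8
# simulate pointwise differences and compare the two sides of (54)
diffs = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
h2 = sum(1.0 - math.exp(-d * d / (8 * sigma * sigma)) for d in diffs) / len(diffs)
l2 = sum(d * d for d in diffs) / len(diffs) / (8 * sigma * sigma)
print(h2, l2)
assert h2 <= l2   # 1 - e^{-x} <= x holds term by term, hence in the mean
```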
Putting (52), (53), (54) and (55) together, we observe that the family of probabilities \(\mathcal{P}=\{P_{f_{\boldsymbol{\varepsilon}}},\ \boldsymbol{\varepsilon}\in\{0,1\}^{|\mathcal{U}_{n}|}\}\) satisfies the assumptions of Lemma 8 with \(D=N_{n}^{t^{*}}\), \[\eta=\frac{0.78}{\sqrt{2\pi}\sigma}h_{n}^{\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}\quad\text{and}\quad a=\frac{1}{8\sigma^{2}}nh_{n}^{2\alpha^{**}+t^{*}}\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}}.\] Finally, taking the constant \[\rho\geq\left[1+\left(\frac{\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}}{\sqrt{2\pi}\sigma}\right)^{\frac{1}{\alpha^{**}}}\right]\vee\left[1+\left(\frac{\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}}}{\sigma^{2}}\right)^{\frac{1}{2\alpha^{**}+t^{*}}}\right]\] such that \(h_{n}^{\alpha^{**}}\leq\left(n\|\mathcal{K}^{\mathcal{A}}\|_{2}^{2t^{*}}/\sigma^{2}\right)^{-\frac{\alpha^{**}}{2\alpha^{**}+t^{*}}}\wedge\left(\sqrt{2\pi}\sigma/\|\mathcal{K}^{\mathcal{A}}\|_{1}^{t^{*}}\right)\), we derive by Lemma 8 that there exists some constant \(c>0\) such that \[\inf_{\widehat{f}}\sup_{f^{\star}\in\mathcal{F}(k,\mathbf{d},\mathbf{t},\boldsymbol{\alpha},B)}\mathbb{E}\left[\ell(Q_{f^{\star}},Q_{\widehat{f}})\right]\geq cn^{-\frac{\alpha^{**}}{2\alpha^{**}+t^{*}}}.\]
arXiv: 2302.10152
Author: Mayeul Arminjon
Published: 2023-01-31T15:18:40Z
Link: http://arxiv.org/abs/2302.10152v1
Interaction energy between a charged medium and its electromagnetic field as a dark matter candidate ###### Abstract In the scalar theory of gravitation with a preferred reference frame, a consistent formulation of electrodynamics in the presence of gravitation needs to introduce an additional energy tensor: the interaction energy tensor. This energy is gravitationally active and might contribute to the dark matter, because it has an exotic character and it is not localized inside matter. In order to check if that energy might form representative dark halos, one has to model the interstellar radiation field in a galaxy as a complete electromagnetic field obeying the Maxwell equations. A model has been built for this purpose, based on assuming axial symmetry and on recent results about axisymmetric Maxwell fields. Its predictions for the variation of the spectral energy density inside our Galaxy are relatively close to those of a recent radiation transfer model, except on the symmetry axis of the Galaxy, where the present model predicts extremely high values of the energy density. ## 1 Introduction Our initial motivation for the present work was independent of the problem of dark matter. It was to develop a consistent electrodynamics in an alternative theory of gravity: "the scalar ether theory", or SET. This is a preferred-frame theory based on a scalar field only, [1, 2] that reduces to special relativity (SR) when the gravitational field vanishes. In general relativity (GR), the modification of the equations of electrodynamics in the presence of a gravitational field consists simply in rewriting the equations that are valid in SR, by using the "comma goes to semicolon" rule: \(\ {}_{,\nu}\ \rightarrow\ _{;\nu}\), i.e.: partial derivatives are replaced by covariant derivatives based on the metric connection. (See Ref. [3] for an interesting discussion.) 
In particular, the dynamical equation for the energy(-momentum-stress) tensor \(\boldsymbol{T}\) that is valid in SR is: \(\ T^{\lambda\nu}_{\ \ ,\nu}=0\). Using the rule mentioned above, that equation is modified to: \(T^{\lambda\nu}_{\ \ ;\nu}=0\), which is indeed the dynamical equation in GR and in many of its extensions or modifications. However, in the general situation, the latter equation is not equivalent to the dynamical equation of SET, [1] hence the foregoing rule cannot be used in SET. Therefore, in that alternative theory, a different and less obvious path has to be taken for the purpose of adapting classical electrodynamics in the presence of a gravitational field. It turns out that this leads to the introduction of an exotic form of energy, and that this new form is a possible candidate for dark matter. In this conference paper, we quickly follow that path. We then summarize the Maxwell model of the interstellar radiation field, which we built to prepare the test of this candidate. ## 2 Necessity of an interaction tensor in SET In SET, we assume classically that the electromagnetic field tensor \(\boldsymbol{F}\) derives from a 4-potential \(A_{\mu}\): \[F_{\mu\nu}:=A_{\nu,\mu}-A_{\mu,\nu}=A_{\nu;\mu}-A_{\mu;\nu}. \tag{1}\] This is (locally) equivalent to assuming that (i) \(\boldsymbol{F}\) is antisymmetric (\(F_{\mu\nu}=-F_{\nu\mu}\) ) and (ii) the first group of the Maxwell equations is satisfied: \[F_{\lambda\mu\,,\nu}+F_{\mu\nu,\lambda}+F_{\nu\lambda,\mu}\equiv F_{\lambda \mu\,;\nu}+F_{\mu\nu;\lambda}+F_{\nu\lambda;\mu}=0. \tag{2}\] (The first equality in (2) is indeed an identity due to the antisymmetry of the field tensor and to the symmetry of the metric connection.) Therefore, in SET, the first group of the Maxwell equations is left unchanged. 
In a first version of electrodynamics in the presence of a gravitational field in SET, the second group of the Maxwell equations was obtained by applying the dynamical equation of SET to a charged medium in the presence of the Lorentz force, assuming that the following holds for the energy tensors, as is the case in SR and still in GR: \[\mbox{(A) Total energy tensor }\mathbf{T}=\mathbf{T}_{\mbox{\scriptsize charged medium}}+\mathbf{T}_{\mbox{\scriptsize field}}. \tag{3}\] (The total energy tensor \(\mathbf{T}\) is the source of the gravitational field -- more precisely, in SET, that source is the component \(T^{00}\) in the preferred reference frame of the theory; see Ref. [1] for details.) The additivity (3) leads to a form of Maxwell's second group of equations in SET. [4] But that form of Maxwell's second group in SET predicts charge production/destruction at untenable rates, therefore it has to be _discarded_. [5] The additivity assumption (3) is contingent and may be abandoned. This means introducing an "interaction" energy tensor \(\mathbf{T}_{\mbox{\scriptsize inter}}\), such that \[\mathbf{T}=\mathbf{T}_{\mbox{\scriptsize charged medium}}+\mathbf{T}_{\mbox{\scriptsize field}}+\mathbf{T}_{\mbox{\scriptsize inter}}. \tag{4}\] One then has to constrain the form of \(\mathbf{T}_{\mbox{\scriptsize inter}}\) and to derive equations for it. ## 3 Form of the interaction tensor In SR, the additivity (3) of the energy tensors does apply, thus \(\mathbf{T}_{\mbox{\scriptsize inter}}=\mathbf{0}\). In SET we may impose that \(\mathbf{T}_{\mbox{\scriptsize inter}}\) should be Lorentz-invariant in the situation of SR, i.e. when the metric \(\gamma\) is Minkowski's metric \(\mathbf{\gamma}^{0}\) (\(\gamma^{0}_{\mu\nu}=\eta_{\mu\nu}\) in Cartesian coordinates). This is true if and, as one can prove, [6] _only if_ we have: \[T_{\mbox{\scriptsize inter }\mu\nu}=p\,\gamma^{0}_{\mu\nu}\qquad\mbox{(situation of SR)}, \tag{5}\] with some scalar field \(p\). 
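The Lorentz invariance of a tensor of the form (5) is easy to verify numerically: any matrix \(L\) of the Lorentz group satisfies \(L^{T}\eta L=\eta\), so the covariant components \(p\,\eta_{\mu\nu}\) are mapped onto themselves. A minimal sketch, where the boost velocity and the value of \(p\) are arbitrary illustrative numbers:

```python
import numpy as np

# Minkowski metric in Cartesian coordinates, signature (+,-,-,-)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A Lorentz boost along x with illustrative velocity beta = 0.6
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0, 0],
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# L belongs to the Lorentz group: L^T eta L = eta
assert np.allclose(L.T @ eta @ L, eta)

# Hence the tensor p * eta keeps exactly the same form after the boost
p = 2.5  # arbitrary value of the scalar field at the event X
T_inter = p * eta
assert np.allclose(L.T @ T_inter @ L, T_inter)
```

The same check passes for any product of boosts and spatial rotations, which is the content of the pointwise invariance stated in the text.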
This is equivalent to: \[T^{\mu}_{\mbox{\scriptsize inter }\nu}=p\,\delta^{\mu}_{\nu}\qquad\mbox{(situation of SR)}. \tag{6}\] The definition \[T^{\mu}_{\mbox{\scriptsize inter }\nu}:=p\,\delta^{\mu}_{\nu},\qquad\mbox{or}\quad(T_{\mbox{\scriptsize inter}})_{\mu\nu}:=p\,\gamma_{\mu\nu}, \tag{7}\] thus obtained in a Minkowski spacetime, is in fact generally covariant. Hence, we adopt (7) for the general case. With a general metric \(\gamma\), the tensor (7) is still pointwise Lorentz-invariant -- in the sense that we have \((T_{\mbox{\scriptsize inter}})_{\mu\nu}\,(X)=p(X)\,\eta_{\mu\nu}\) in any coordinates that are Cartesian at a given event \(X\), and this form remains invariant after any coordinate transformation that is Lorentz at \(X\), i.e., such that the matrix \(\left(\frac{\partial x^{\prime\mu}}{\partial x^{\nu}}(X)\right)\) belongs to the Lorentz group. ## 4 SET electrodynamics with the interaction tensor With the additivity assumption (3) of the energy tensors, i.e., \(\mathbf{T}_{\rm inter}={\bf 0}\), the system of equations of electrodynamics of SET is closed, but violates charge conservation. With the interaction energy tensor (7) we have just one unknown more: the scalar field \(p\). So we need just one scalar equation more. It turns out to be consistent to add _charge conservation_ as the new scalar equation. [7] Then the system of equations of electrodynamics of SET is again closed, and now it satisfies charge conservation. Based on that closed system, equations were derived that _determine the field \(p\) in a given general electromagnetic (EM) field \(({\bf E},{\bf B})\) and in a given weak gravitational field with Newtonian potential \(U\)_: [7] the scalar field \(p\) (or more exactly, its first approximation \(p_{1}\)) obeys an advection equation: \[\partial_{T}\,p_{1}+u^{j}\partial_{j}\,p_{1}=S. 
\tag{8}\] That equation has given source \(S\) and given characteristic curves, the latter being the integral curves \({\cal C}(T_{0},{\bf x}_{0})\) of the spatial vector field \({\bf u}\) in Eq. (8). Here, "given" means that the source field \(S\), as also the vector field \({\bf u}\) and hence the characteristic curves \({\cal C}(T_{0},{\bf x}_{0})\), do not depend on the unknown field \(p_{1}\). It follows that \(p_{1}\) can be obtained by integrating the source field \(S\) along those curves. [7] The "medium" defined by the corresponding interaction energy tensor field \(\mathbf{T}_{\rm inter}=p\mathbf{\gamma}\) can be counted as "dark matter", for * it is not localized inside (usual) matter: indeed, the equations for the field \(p_{1}\) show that its source \(S\) is, in general, non-zero as soon as there is a general EM field: \({\bf E}\neq 0,\ {\bf B}\neq 0,\ {\bf E}.{\bf B}\neq 0\), and a variable gravitational field with \(\partial_{T}U\neq 0\), where the time derivative \(\partial_{T}U\) of the Newtonian potential is taken in the preferred frame; [7] * it is gravitationally active, since, from its definition (4), it contributes to the source of the gravitational field in SET, that is the component \(T^{00}\) in the preferred frame; * it is "exotic", i.e., it is not usual matter -- as shown by the form (7) of its energy tensor, which is very different from the possible energy tensors of any fluid, solid, or EM field. The fact that it is Lorentz-invariant means that no velocity can be defined for that medium. The energy tensor (7) depends only on one scalar field (\(p\)), hence no equation of state is needed. The foregoing considerations are at the classical level, hence do not tell if the "matter" with the energy tensor (7) is made of quantum particles. 
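Since the source \(S\) and the vector field \({\bf u}\) in Eq. (8) do not depend on \(p_{1}\), the solution can be obtained by standard numerical integration along the characteristics. A minimal 1D sketch; the fields `u` and `S` below are illustrative placeholders, not those of SET:

```python
import numpy as np

def integrate_along_characteristic(u, S, x0, T0, T1, n=10000):
    """Solve dp/dT = S(T, x(T)) along the characteristic dx/dT = u(x),
    starting from p = 0 at (T0, x0), with explicit Euler steps."""
    dT = (T1 - T0) / n
    x, p, T = x0, 0.0, T0
    for _ in range(n):
        p += S(T, x) * dT   # accumulate the source along the curve
        x += u(x) * dT      # advance along the characteristic
        T += dT
    return x, p

# Illustrative example: constant advection speed and constant source,
# for which p1(T) = S0 * (T - T0) exactly.
x_end, p_end = integrate_along_characteristic(
    u=lambda x: 0.5, S=lambda T, x: 2.0, x0=0.0, T0=0.0, T1=3.0)
print(x_end, p_end)  # approximately 1.5 and 6.0
```

For the constant fields chosen here the Euler scheme is exact up to rounding, which makes the sketch easy to check; the actual computation in the paper integrates the given \(S\) along the given curves \({\cal C}(T_{0},{\bf x}_{0})\) in the same way.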
## 5 Maxwell model of the ISRF In order to check if the interaction energy \(E_{\rm inter}\) might be distributed in the form of dark halos and contribute significantly to the dark matter distribution, we have to compute the field \(p\) for a model of a galaxy. This requires a model of the Interstellar Radiation Field in a galaxy (ISRF) that provides that field as a solution of the Maxwell equations. However, the existing models of the ISRF (e.g. Refs. [8, 9, 10, 11, 12, 13]) focus on the radiation transfer (mainly via absorption, reemission or scattering by dust particles). They follow the paths of light rays or photons. To the best of our knowledge, no previous model of the ISRF considered the full EM field with its six interacting components subjected to the Maxwell equations. Therefore, we had to build a model entirely from scratch, which involved both theoretical and numerical difficulties. [17] ### Maxwell model of the ISRF: Main assumptions i) _Axial symmetry_ is a relevant approximation for many galaxies, and is in fact often used in the existing models of the ISRF (see e.g. Refs. [13, 14, 15, 16]). We adopt cylindrical coordinates \((\rho,\phi,z)\) whose \(z\) axis is the symmetry axis. The primary source of the ISRF is made of stars and other bright astrophysical objects. We want to describe the ISRF as a smoothed-out field at the galactic scale, not the field in the stars or in their neighborhood. Therefore: ii) we consider the _source-free_ Maxwell equations. We proved the following result: [18] _Theorem._ Any time-harmonic axisymmetric source-free Maxwell field is the sum of two simple fields of that same kind: * **1**) one deriving from a vector potential **A** having just \(A_{z}\neq 0\), with \(A_{z}\) a time-harmonic axisymmetric solution of the scalar wave equation; * **2**) one deduced from a field of the form (**1**) by EM duality, i.e. 
\[{\bf E}^{\prime}=c{\bf B},\quad{\bf B}^{\prime}=-{\bf E}/c.\] (9) ### Maxwell model of the ISRF: Form of the model We consider an EM field having a finite set of frequencies \((\omega_{j})\)\((j=1,...,N_{\omega})\). That EM field is thus the sum of \(N_{\omega}\) time-harmonic EM fields. Using the Theorem above, each of them is generated by potentials \(A_{jz},\,A^{\prime}_{jz}\). The scalar potential \(A_{jz}\), for the field of the form (1) above with frequency \(\omega_{j}\), can be a priori any time-harmonic axisymmetric solution of the scalar wave equation having frequency \(\omega_{j}\). [18] However, in the relevant "totally propagating" case, such a solution can be written explicitly in terms of a spectrum function \(S_{j}=S_{j}(k)\quad(-K_{j}\leq k\leq K_{j},\;\;\;K_{j}:=\frac{\omega_{j}}{c})\): \(A_{jz}=\psi_{\omega_{j}\,S_{j}}\), with [19] \[\psi_{\omega_{j}\,\,S_{j}}\,(t,\rho,z):=e^{-\,{\rm i}\,\omega_{j}t}\int_{-K_{j }}^{K_{j}}\,J_{0}\left(\rho\sqrt{K_{j}^{2}-k^{2}}\right)\,\,e^{{\rm i}\,k\,z} \,S_{j}(k)\,\,{\rm d}\,k, \tag{10}\] where \(J_{0}\) is the Bessel function of the first kind and of order \(0\). The "dual" potential \(A^{\prime}_{jz}\), for the field of the form (2) above with frequency \(\omega_{j}\), has just the same form (10), with, in the general case, another spectrum function, say \(S^{\prime}_{j}\). ### Maxwell model of the ISRF: Model of a galaxy We model an axisymmetric galaxy as a finite set \(\{{\bf x}_{i}\}\) of point-like "stars", the azimuthal distribution of which is uniform. That set of points is obtained by pseudo-random generation of their cylindrical coordinates \(\rho,\phi,z\) with specific probability laws, ensuring that [17] * the distribution of \(\rho\) and \(z\) is approximately that valid for the star distribution in the galaxy considered (in the numerical application, we took our Galaxy); * the set \(\{{\bf x}_{i}\}\) is approximately invariant under azimuthal rotations of any angle \(\phi\). 
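The pseudo-random generation of the point set \(\{{\bf x}_{i}\}\) can be sketched as follows. The exponential radial and vertical laws and their scale parameters are illustrative assumptions of ours; the paper only requires probability laws approximating the star distribution of the galaxy considered, together with a uniform azimuthal distribution:

```python
import numpy as np

def generate_axisymmetric_stars(n_stars, rho_scale=2.5, z_scale=0.3, seed=0):
    """Draw cylindrical coordinates (rho, phi, z) of point-like 'stars'.

    Assumed laws (illustrative): exponential in rho, Laplace in z, and
    uniform azimuth phi, which makes the set approximately invariant
    under azimuthal rotations of any angle. Lengths in kpc.
    """
    rng = np.random.default_rng(seed)
    rho = rng.exponential(rho_scale, n_stars)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_stars)
    z = rng.laplace(0.0, z_scale, n_stars)
    return rho, phi, z

rho, phi, z = generate_axisymmetric_stars(100_000)
# Approximate azimuthal invariance: the mean of exp(i*phi) is close to 0
print(abs(np.mean(np.exp(1j * phi))))  # small, of order 1/sqrt(n_stars)
```

Any other radial and vertical laws (e.g. an exponential disc surface density with a sech-squared vertical profile) fit into the same scheme by swapping the samplers.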
### Maxwell model of the ISRF: Determining the potentials To determine the potentials \(A_{j\,z}\) and \(A^{\prime}_{j\,z}\) (\(j=1,...,N_{\omega}\)) that generate the model ISRF (Subsect. 5.2), we are fitting to the form (10) a sum of spherical potentials emanating from the "stars" at points \({\bf x}_{i}\), thus determining the unknown spectrum functions \(S_{j}\) and \(S^{\prime}_{j}\). [17] For the purpose of this fitting, every point-like "star" is indeed assumed to contribute spherical scalar waves \(\psi_{{\bf x}_{i}\,\omega_{j}}\) having the same frequencies \(\omega_{j}\) as has the model ISRF, and whose emission center is the spatial position \({\bf x}_{i}\) of the star: \[\psi_{{\bf x}_{i}\,\omega_{j}}\,\,(t,{\bf x}):=\psi_{\omega_{j}}\,\,(t,{\bf x} -{\bf x}_{i})=\frac{e^{{\rm i}(K_{j}\,r_{i}-\omega_{j}t)}}{K_{j}\,r_{i}}. \tag{11}\] Here \(r_{i}:=|{\bf x}-{\bf x}_{i}|\), \(\ K_{j}:=\frac{\omega_{j}}{c}\), and the function \[\psi_{\omega_{j}}\,\,(t,{\bf x})=\frac{e^{{\rm i}(K_{j}\,r-\omega_{j}t)}}{K_{j }\,r},\qquad r:=|{\bf x}| \tag{12}\] is (up to an amplitude factor) the unique time-harmonic solution of the scalar wave equation, with frequency \(\omega_{j}\), that has spherical symmetry around \({\bf x}={\bf 0}\) and that is an outgoing wave. Spherical symmetry is assumed in order to ensure that all of the directions starting from the star are equivalent, of course. Of course also, the uniqueness of the solution (12) means the uniqueness of the solution translated from \({\bf x}={\bf 0}\) to \({\bf x}={\bf x}_{i}\): the function \(\psi_{{\bf x}_{i}\,\omega_{j}}\) given by Eq. (11). This implies that we cannot define different contributions of the "star" at \({\bf x}_{i}\) to the \(A_{j\,z}\) potential and to the "dual" potential \(A^{\prime}_{j\,z}\), other than through multiplying \(\psi_{{\bf x}_{i}\,\omega_{j}}\) by two different amplitude factors -- for which there is no apparent reason. 
Therefore, we actually must assume that \(A_{j\,z}=A^{\prime}_{j\,z}\), thus \(S_{j}=S^{\prime}_{j}\), and our fitting problem writes \[\sum_{i=1}^{i_{\rm max}}\psi_{{\bf x}_{i}\,\omega_{j}}\cong\psi_{\omega_{j}\, \,\,S_{j}}\quad{\rm on}\,\,G\qquad(j=1,...,N_{\omega}). \tag{13}\] Here the symbol \(\cong\) indicates that the equality is in the sense of the least-squares, the two sides being evaluated on some spatio-temporal grid \(G\). The unknown spectrum function \(S_{j}\) is defined (approximately) by its values \(S_{nj}:=S_{j}(k_{nj})\) at a regular discretization \(k_{nj}=-K_{j}+n\frac{2K_{j}}{N}\) (\(n=0,...,N\)) of the integration interval \([-K_{j},+K_{j}]\) for \(k\) in the integral (10). [17] With this discretization, (13) becomes the explicit least-squares system \[\sum_{i=1}^{i_{\rm max}}\psi_{{\bf x}_{i}\,\omega_{j}}\cong\sum_{n=0}^{N}f_{nj }\,S_{nj}\quad\mbox{on $G$}\qquad(j=1,...,N_{\omega}), \tag{14}\] with \(f_{nj}(t,\rho,z)=\exp(-\,{\rm i}\,\omega_{j}t)\,g_{nj}(\rho,z)\) a specific time-harmonic function. [20] The complex numbers \(S_{nj}\) (\(n=0,...,N\); \(j=1,...,N_{\omega}\)) are the solved-for parameters. Note that (14) defines \(N_{\omega}\) fitting problems. Previously, a unique "grouped fitting" was done: solving the least-squares system obtained by summing on the frequency index \(j\) on both sides of (14). [17] The "separate fitting" (14) is more precise -- and also less time-consuming, since actually the common harmonic time dependence can be removed from both sides of (14), thus eliminating the time variable, and hence considering only a spatial grid \(G^{\prime}\) instead of a spatio-temporal grid \(G\). [20] The computer time is indeed an important point to be considered, because a precision better than quadruple must be implemented. 
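Each of the \(N_{\omega}\) problems (14) is linear in the unknown values \(S_{nj}\), so once both sides are evaluated on the grid it reduces to an ordinary complex least-squares system. A toy sketch with synthetic data, where the matrix `F` plays the role of the functions \(f_{nj}\) evaluated on the grid (its entries here are random placeholders, not the actual \(g_{nj}\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_spec = 200, 11   # grid points and discretized spectrum values

# Synthetic complex design matrix: column n stands for f_n on the grid
F = (rng.standard_normal((n_grid, n_spec))
     + 1j * rng.standard_normal((n_grid, n_spec)))

# Synthetic 'true' spectrum values S_n and the resulting left-hand side
S_true = rng.standard_normal(n_spec) + 1j * rng.standard_normal(n_spec)
b = F @ S_true

# Least-squares solution of F S ~ b, one such system per frequency j
S_fit, *_ = np.linalg.lstsq(F, b, rcond=None)
print(np.max(np.abs(S_fit - S_true)))  # tiny: exact data, well-posed system
```

`np.linalg.lstsq` handles complex matrices directly, which matches the complex unknowns \(S_{nj}\) of the text; the precision issue mentioned above arises because the actual functions \(g_{nj}\) make the system far worse conditioned than this random toy example.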
[17] ### Application to the spatial variation of the spectral energy density in the Galaxy Because we consider an EM field with a finite frequency spectrum (\(\omega_{j}\)) (\(j=1,...,N_{\omega}\)), each among its six components has the following form: \[F^{(q)}(t,{\bf x})={\cal R}e\left(\sum_{j=1}^{N_{\omega}}C_{j}^{(q)}({\bf x})e ^{-\,{\rm i}\,\omega_{j}t}\right)\qquad(q=1,...,6). \tag{15}\] It follows that the time-averaged volumic energy density of the field is given by: [21] \[\overline{U}({\bf x}):=\frac{\overline{\delta W}}{\delta V}({\bf x})=\sum_{j =1}^{N_{\omega}}u_{j}({\bf x}),\qquad u_{j}({\bf x}):=\frac{1}{4}\sum_{q=1}^{ 6}\alpha_{q}\,\Big{|}C_{j}^{(q)}({\bf x})\Big{|}^{2}, \tag{16}\] where \(\alpha_{q}=\epsilon_{0}\) for an electric field component, whereas \(\alpha_{q}=\epsilon_{0}c^{2}\) for a magnetic field component (here \(\epsilon_{0}\) is the vacuum permittivity, with \(\epsilon_{0}=1/(4\pi\times 9\times 10^{9})\) in SI units). Thus, the spectral energy density (SED) has a discrete form. Specializing to the present axisymmetric model, we thus have \(C_{j}^{(q)}=C_{j}^{(q)}(\rho,z)\) and \(u_{j}=u_{j}(\rho,z)\). The potentials \(A_{jz}=A_{jz}^{\prime}=\psi_{\omega_{j}\,S_{j}}\) are determined by the spectrum functions \(S_{j}\) in Eq. (10), which are given in the numerical model by the values \(S_{nj}:=S_{j}(k_{nj})\), that are the output of the fitting. These potentials generate the EM field, hence the \(C_{j}^{(q)}(\rho,z)\) coefficients in Eq. (15) are expressed uniquely in terms of the \(S_{nj}\)'s. [21] However, in the least-squares problem (14), the scalar radiations emitted by every point-like "star" are taken to be exactly \(\psi_{{\bf x}_{i}\,\omega_{j}}\). Clearly, we may multiply the l.h.s. of (14) by some number \(\xi_{j}>0\), thus obtaining now new values \(S_{nj}^{\prime}=\xi_{j}S_{nj}\) (\(n=0,...,N\)) as the solution of (14). 
We determine the numbers \(\xi_{j}>0\) so that the SED measured at our local position \({\bf x}_{\rm loc}\) in the Galaxy coincides with the calculated values \(u_{j}({\bf x}_{\rm loc})\). This then allows us to make predictions: in particular, of the spatial variation of the SED in the Galaxy, which we may compare with the predictions of the existing models of the ISRF. Figures 1-2 show this comparison for the four positions in the Galaxy for which the predicted SED is shown in Ref. [13]. Figure 2: SEDs at (\(\rho=8\,\)kpc, \(z=0\)) and at (\(\rho=8\,\)kpc, \(z=1\,\)kpc). The predictions of the two models are quite reasonably close, although the SED predicted by the present model has rather marked oscillations as a function of the wavelength. Note that the different wavelengths are fully uncoupled due to the "separate fitting" defined by the \(N_{\omega}\) least-squares problems (14). A surprising prediction of this model concerns the values of the maximum of the energy density, \[u_{j{\rm max}}={\rm Max}\{u_{j}(\rho_{m},z_{p});\ m=1,...,N_{\rho},\ p=1,...,N_{z}\}, \tag{17}\] found for the different spatial grids (\(N_{\rho}\times N_{z}\)) investigated, all having \(\rho\) varying regularly from \(\rho_{0}=0\) to \(\rho_{\rm max}\simeq 10\,{\rm kpc}\) and \(z\) varying regularly from \(z_{0}=0\) or \(z_{0}=-z_{\rm max}\) to \(z_{\rm max}\leq 1\,{\rm kpc}\). Figure 3 compares the curves \(u_{j{\rm max}}=f(\lambda_{j})\) found with two spatial grids. It is seen that the two curves are quite close to one another, and both show extremely high levels of \(u_{j{\rm max}}\), from \(10^{27}{\rm eV/cm}^{3}\) to \(10^{21}{\rm eV/cm}^{3}\). This is confirmed by a rather detailed investigation of the effects of the settings of the calculation (the spatial grid, and also the fineness of the frequency mesh: \(N_{\omega}\), and that of the discretization: \(N\)). 
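Formula (16) and the calibration factors \(\xi_{j}\) can be sketched together: \(u_{j}\) follows directly from the complex amplitudes, and since the coefficients \(C_{j}^{(q)}\) depend linearly on the spectrum values, rescaling the \(S_{nj}\) by \(\xi_{j}\) multiplies \(u_{j}\) by \(\xi_{j}^{2}\), whence \(\xi_{j}=\sqrt{u_{\rm measured}/u_{\rm computed}}\) under that linearity. A sketch with illustrative numerical values:

```python
import numpy as np

EPS0 = 1.0 / (4e9 * np.pi * 9.0)  # vacuum permittivity, SI (as in the text)
C_LIGHT = 3.0e8                    # m/s, consistent with that value of eps0

def mean_energy_density(C_E, C_B):
    """Time-averaged volume energy density (16) for one frequency at one
    point: C_E, C_B are complex amplitudes of (Ex,Ey,Ez) and (Bx,By,Bz)."""
    return 0.25 * (EPS0 * np.sum(np.abs(C_E)**2)
                   + EPS0 * C_LIGHT**2 * np.sum(np.abs(C_B)**2))

# Plane-wave check: E along x with amplitude E0 and B = E0/c along y,
# for which the time average is eps0 * E0**2 / 2.
E0 = 100.0  # V/m, illustrative
u = mean_energy_density(np.array([E0, 0, 0]), np.array([0, E0 / C_LIGHT, 0]))
assert np.isclose(u, 0.5 * EPS0 * E0**2)

# Calibration: u_j scales as xi_j**2, so xi_j = sqrt(u_measured / u_computed)
u_meas, u_model = 0.30, 1.20   # illustrative local SED values, eV/cm^3
xi = np.sqrt(u_meas / u_model)
assert np.isclose(xi**2 * u_model, u_meas)
```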
[20] The values of the maximum of \(u_{j}\) are always found on the axis of the Galaxy (\(\rho=0\)); moreover, the level of \(u_{j}\) decreases very rapidly when one departs from the axis. [20] This prediction of the model may be described as a kind of self-focusing effect of the ISRF in an axisymmetric galaxy. Figure 3: Maximum of energy density; comparison between two spatial grids. ## 6 Conclusion In the "scalar ether theory" of gravity (SET), a consistent electrodynamics in a gravitational field needs the introduction of an additional energy tensor: \(\mathbf{T}_{\rm inter}\), with \(T^{\mu}_{{\rm inter}\ \nu}:=p\,\delta^{\mu}_{\nu}\). Thus, this energy tensor was not designed to account for missing mass. However, it turns out that the corresponding "medium" could contribute to dark matter, for it is not localized inside matter, it is gravitationally active, and it is "exotic". Moreover, the scalar field \(p\), which determines \(\mathbf{T}_{\rm inter}\), can in principle be calculated from the data of the EM field and the gravitational field, through explicit equations. This, however, demands being able to model the EM field in a galaxy, which is essentially in the form of the interstellar radiation field (ISRF). Therefore, we built a Maxwell model of the ISRF. This was motivated by the foregoing, but it is also interesting independently of that, as the ISRF is a very important physical characteristic of a galaxy and interacts strongly with the cosmic rays. In any case, this model in itself is totally independent of the theory of gravitation and the assumption about the interaction tensor. It is based on an explicit representation of any time-harmonic axisymmetric source-free Maxwell field through a pair of scalar potentials, and on determining these potentials by fitting contributions emanating from a set of point-like "stars" schematizing a galaxy. 
The predictions of the model for the variation of the spectral energy distribution in the Galaxy are currently being checked. They are relatively close to the predictions of a recent radiation transfer model -- except for the fact that the Maxwell model of the ISRF predicts extremely high values of the energy density on the axis of the Galaxy, that however decrease very rapidly when departing from that axis. We hope to be able in a future work to apply the model to calculate the interaction energy and to check if its distribution resembles a dark halo.
2310.00150
Zero temperature phase transitions and their anomalous influence on thermodynamic behavior in the q-state Potts model on a diamond chain
The q-state Potts model on a diamond chain has mathematical significance in analyzing phase transitions and critical behaviors in diverse fields, including statistical physics, condensed matter physics, and materials science. By focusing on the 3-state Potts model on a diamond chain, we reveal rich and analytically solvable behaviors without phase transitions at finite temperatures. Upon investigating thermodynamic properties such as internal energy, entropy, specific heat, and correlation length, we observe sharp changes near zero temperature. Magnetic properties, including magnetization and magnetic susceptibility, display distinct behaviors that provide insights into spin configurations in different phases. However, the Potts model lacks genuine phase transitions at finite temperatures, in line with the Peierls argument for one-dimensional systems. Nonetheless, in the general case of an arbitrary $q$-state, magnetic properties such as correlation length, magnetization, and magnetic susceptibility exhibit intriguing remnants of a zero-temperature phase transition at finite temperatures. Furthermore, residual entropy uncovers unusual frustrated regions at zero-temperature phase transitions. This feature leads to the peculiar thermodynamic properties of phase boundaries, including a sharp entropy change resembling a first-order discontinuity without an entropy jump, and pronounced peaks in second-order derivatives of free energy, suggestive of a second-order phase transition divergence, but without singularities. This unusual behavior is also observed in the correlation length at the pseudo-critical temperature, which could potentially be misleading as a divergence.
Yury Panov, Onofre Rojas
2023-09-29T21:15:31Z
http://arxiv.org/abs/2310.00150v1
Zero temperature phase transitions and their anomalous influence on thermodynamic behavior in the \(q\)-state Potts model on a diamond chain ###### Abstract The \(q\)-state Potts model on a diamond chain has mathematical significance in analyzing phase transitions and critical behaviors in diverse fields, including statistical physics, condensed matter physics, and materials science. By focusing on the 3-state Potts model on a diamond chain, we reveal rich and analytically solvable behaviors without phase transitions at finite temperatures. Upon investigating thermodynamic properties such as internal energy, entropy, specific heat, and correlation length, we observe sharp changes near zero temperature. Magnetic properties, including magnetization and magnetic susceptibility, display distinct behaviors that provide insights into spin configurations in different phases. However, the Potts model lacks genuine phase transitions at finite temperatures, in line with the Peierls argument for one-dimensional systems. Nonetheless, in the general case of an arbitrary \(q\)-state, magnetic properties such as correlation length, magnetization, and magnetic susceptibility exhibit intriguing remnants of a zero-temperature phase transition at finite temperatures. Furthermore, residual entropy uncovers unusual frustrated regions at zero-temperature phase transitions. This feature leads to the peculiar thermodynamic properties of phase boundaries, including a sharp entropy change resembling a first-order discontinuity without an entropy jump, and pronounced peaks in second-order derivatives of free energy, suggestive of a second-order phase transition divergence, but without singularities. This unusual behavior is also observed in the correlation length at the pseudo-critical temperature, which could potentially be misleading as a divergence. 
## I Introduction The one-dimensional Potts model, while simpler than higher-dimensional models, exhibits a range of intriguing properties making it a focus of study. It can often be solved exactly, offering valuable insight into statistical systems without the need for approximations or numerical methods [1]. These models lay the groundwork for understanding more complex behaviors in higher dimensions and are central to the study of phenomena like phase transitions in statistical physics [2]. They can represent a variety of physical and mathematical systems, such as counting colored planar maps problems [3]. Additionally, they provide a practical platform for testing new computational methods, including Monte Carlo algorithms and machine learning techniques applied to statistical physics [4]. Even though a finite-temperature phase transition is absent in one-dimensional models with short-range interactions, it is still feasible to define and study a "pseudo-critical" temperature. This is commonly perceived as the temperature at which a system's fluctuations reach a peak, often associated with the system's specific heat, which generally exhibits a peak at the pseudo-critical temperature. In this sense, recent research has unveiled a series of decorated one-dimensional models, notably the Ising and Heisenberg models, each exhibiting a range of structures. Among these are the Ising-Heisenberg diamond chain [5; 6], the one-dimensional double-tetrahedral model with a nodal site comprising a localized Ising spin alternating with a pair of mobile electrons delocalized within a triangular plaquette [7], the ladder model with an Ising-Heisenberg coupling in alternation [8], and the triangular tube model with Ising-Heisenberg coupling [9]. Pseudo-transition phenomena were detected in all these models. 
While the first derivative of the free energy, like entropy, internal energy, or magnetization demonstrates a jump akin to an abrupt change when the temperature varies, the function remains continuous. This pattern mimics a first-order phase transition. Nevertheless, a second-order derivative of free energy, such as the specific heat and magnetic susceptibility, showcases behavior typical of a second-order phase transition at a finite temperature. This peculiar behavior has drawn focus for a more meticulous study, as discussed in reference [10]. More recently, reference [11] has provided additional dialogue on this property and an exhaustive study of the correlation function for arbitrarily distant spins surrounding the pseudo-transition. Furthermore, certain conditions were proposed to observe the pseudo-transition, which is associated with residual entropy [12; 13]. Recent discoveries have positioned azurite [Cu\({}_{3}\)(CO\({}_{3}\))\({}_{2}\)(OH)\({}_{2}\)] as an intriguing quantum antiferromagnetic model, as described by the Heisenberg model on a diamond chain. This has led to numerous riveting theoretical investigations into diamond chain models. Notably, Honecker et al. [14] probed the dynamic and thermodynamic traits of this model, while comprehensive analysis was conducted on the thermodynamic attributes of the Ising-Heisenberg model on diamond-like chains [15; 16; 17; 18; 19]. Additional studies into the Ising-XYZ diamond chain model were inspired by current research, including experimental explorations of the natural mineral azurite and theoretical calculations of the Ising-XXZ model. Particular attention was drawn by the appearance of a 1/3 magnetization plateau and a double peak in both magnetic susceptibility and specific heat in experimental measurements [20; 21; 22]. 
It is relevant to note that the dimer interactions (interstitial sites) exhibit considerably stronger exchange interaction than the nodal sites in the \(xy\)-axes, especially in the \(z\)-component. Consequently, this model can be accurately represented as an exactly solvable Ising-Heisenberg model. Further supporting this, experimental data regarding the magnetization plateau align with the approximated Ising-Heisenberg model [15; 23; 24]. In the context of one-dimensional Potts models, Sarkanych et al. [25] introduced a variation featuring invisible states and short-range coupling. The notion of "invisible" in this context refers to an additional level of energy degeneracy that contributes solely to entropy without affecting interaction energy, thus catalyzing the first-order phase transition. This proposal was inspired by low-dimensional systems such as the simple zipper model [26], a descriptor of long-chain DNA nucleotides. To account for narrow helix-coil transitions within these systems, Zimm and Bragg [26] put forth a largely phenomenological cooperative parameter. This innovative approach has since sparked numerous inquiries [27; 28; 29; 30]. In one-dimensional cooperative systems, Potts-like models [27; 29] serve as an effective representation, enabling the study of helix-coil transitions in polypeptides [28] -- a classic application of theoretical physics to macromolecular systems, yielding insightful comprehension of helix-coil transition properties. The reversible adsorption demonstrated by polycyclic aromatic surface elements in carbon nanotubes (CNTs) and aromatic DNA further enriches these studies. To account for DNA-CNT interactions, Tonoyan et al. [30] adjusted the Hamiltonian of the zipper model [26]. Similarly, our earlier work [31] proposed a one-dimensional Potts model combined with the Zimm-Bragg model, which we here call simply the Potts-Zimm-Bragg model, leading to the observation of several distinctive properties. 
The paper is structured as follows: Section 2 presents our proposal for a \(q\)-state Potts model on a diamond chain structure. Section 3 analyzes the zero-temperature phase transition, residual entropy, and corresponding magnetizations. Section 4 discusses the thermodynamic solution for finite \(q\)-states and explores physical quantities such as entropy, magnetization, specific heat, magnetic susceptibility, and correlation length. This section also highlights the presence of pseudo-critical temperatures. Finally, Section 5 summarizes our findings and draws conclusions. Some details of the methods used, such as the decoration transformation and the application of Markov chain theory, are given in the Appendices. ## II Potts model on a diamond chain Despite the simplicity of the one-dimensional Potts model, it possesses several intriguing properties that render it a worthy subject of study. With this in mind, consider a \(q\)-state Potts model on a diamond chain structure, as depicted in Fig.1. The unit cell in this model is composed of three types of spins: two dimer spins, \(\sigma_{a}\) and \(\sigma_{b}\), interconnected by the coupling parameter \(J_{ab}\), and a nodal spin \(\sigma_{c}\), interacting with the dimer spins through the parameter \(J_{1}\). The corresponding Potts Hamiltonian, based on this setup, can be articulated as follows: \[H= -\sum_{i=1}^{N}\Big{\{}J_{ab}\delta_{\sigma_{i}^{a},\sigma_{i}^{b}}+h_{1}\delta_{\sigma_{i}^{c},1}+h_{2}\big{(}\delta_{\sigma_{i}^{a},1}+\delta_{\sigma_{i}^{b},1}\big{)}\] \[+J_{1}\big{(}\delta_{\sigma_{i}^{c},\sigma_{i}^{a}}+\delta_{\sigma_{i}^{c},\sigma_{i}^{b}}+\delta_{\sigma_{i}^{a},\sigma_{i+1}^{c}}+\delta_{\sigma_{i}^{b},\sigma_{i+1}^{c}}\big{)}\Big{\}} \tag{1}\] where \(\sigma=\{1,\ldots,q\}\), so that each dimer spin couples to the nodal spins of its own cell and of the following cell, giving four \(J_{1}\) bonds per unit cell. It is noteworthy that the Hamiltonian (1) can be mapped onto an effective one-dimensional Potts-Zimm-Bragg model [31], as detailed in Appendix A. 
This suggests that the Potts model on a diamond chain can be equated to a bona fide one-dimensional Potts-Zimm-Bragg model as studied in reference [31]. It should be noted, however, that the parameters of the effective Potts-Zimm-Bragg model now depend on temperature. ### Transfer Matrix of \(q\)-state Potts model In what follows, we dedicate our attention to the thermodynamic properties; to obtain the partition function we will use the standard transfer matrix technique. After obtaining each element of the transfer matrix, the \(q\)-dimensional transfer matrix has the following structure \[V=\left(\begin{array}{cccccc}d_{1}&t_{1}&t_{1}&\cdots&t_{1}&t_{1}\\ t_{1}&d_{2}&t_{2}&\cdots&t_{2}&t_{2}\\ t_{1}&t_{2}&d_{2}&\cdots&t_{2}&t_{2}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ t_{1}&t_{2}&t_{2}&\cdots&d_{2}&t_{2}\\ t_{1}&t_{2}&t_{2}&\cdots&t_{2}&d_{2}\end{array}\right). \tag{2}\] Figure 1: Schematic representation of the Potts model on the diamond chain structure. Therefore, let us write the transfer matrix eigenvalues similarly to those defined in reference [10]; the eigenvalues become \[\lambda_{1} = \frac{1}{2}\left(w_{1}+w_{-1}+\sqrt{(w_{1}-w_{-1})^{2}+4w_{0}^{2}}\right), \tag{3}\] \[\lambda_{2} = \frac{1}{2}\left(w_{1}+w_{-1}-\sqrt{(w_{1}-w_{-1})^{2}+4w_{0}^{2}}\right), \tag{4}\] \[\lambda_{j} = d_{2}-t_{2}\,,\quad j=\{3,4,\ldots,q\}, \tag{5}\] where the elements are expressed as follows \[w_{1} = d_{1}, \tag{6}\] \[w_{-1} = d_{2}+(q-2)t_{2}, \tag{7}\] \[w_{0} = \sqrt{q-1}\;t_{1}, \tag{8}\] considering the following notation \[d_{1}= z_{1}\left[\left(q-1+x^{2}z_{2}\right)^{2}+(y-1)\left(q-1+x^{4}z_{2}^{2}\right)\right], \tag{9}\] \[d_{2}= \left(q-2+x^{2}+z_{2}\right)^{2}+\left(y-1\right)\left(q-2+x^{4}+z_{2}^{2}\right), \tag{10}\] \[t_{1}= \sqrt{z_{1}}\left[q-2+x\left(z_{2}+1\right)\right]^{2}+\sqrt{z_{1}}(y-1)\left[q-2+x^{2}\left(z_{2}^{2}+1\right)\right], \tag{11}\] \[t_{2}= \left(q-3+2x+z_{2}\right)^{2}\] 
\[+\left(y-1\right)\left(q-3+2x^{2}+z_{2}^{2}\right). \tag{12}\] Here we used the following notation: \(x=\mathrm{e}^{\beta J_{1}}\), \(y=\mathrm{e}^{\beta J_{ab}}\), \(z_{1}=\mathrm{e}^{\beta h_{1}}\) and \(z_{2}=\mathrm{e}^{\beta h_{2}}\). We can also obtain the corresponding transfer matrix eigenvectors, which are given by \[|u_{1}\rangle= \cos(\phi)|1\rangle+\tfrac{\sin(\phi)}{\sqrt{q-1}}\sum_{\mu=2}^{q}|\mu\rangle, \tag{13}\] \[|u_{2}\rangle= -\sin(\phi)|1\rangle+\tfrac{\cos(\phi)}{\sqrt{q-1}}\sum_{\mu=2}^{q}|\mu\rangle, \tag{14}\] \[|u_{j}\rangle= \sqrt{\tfrac{j-2}{j-1}}\Big{(}\tfrac{1}{j-2}\sum_{\mu=2}^{j-1}|\mu\rangle-|j\rangle\Big{)},\quad j=\{3,\cdots,q\}, \tag{15}\] where \(\phi=\tfrac{1}{2}\cot^{-1}\left(\tfrac{w_{1}-w_{-1}}{2w_{0}}\right)\), with \(-\tfrac{\pi}{4}\leqslant\phi\leqslant\tfrac{\pi}{4}\). By using the transfer matrix eigenvalues, we express the partition function as follows \[Z_{N}= \lambda_{1}^{N}+\lambda_{2}^{N}+(q-2)\lambda_{3}^{N} = \lambda_{1}^{N}\left\{1+\left(\tfrac{\lambda_{2}}{\lambda_{1}}\right)^{N}+(q-2)\Big{(}\tfrac{\lambda_{3}}{\lambda_{1}}\Big{)}^{N}\right\}. \tag{16}\] It is evident that the eigenvalues satisfy the relation \(\lambda_{1}>\lambda_{2}\geqslant\lambda_{3}\). Hence, assuming \(q\) finite, the free energy in the thermodynamic limit (\(N\to\infty\)) reduces to \[f=-T\ln\left(\lambda_{1}\right). \tag{17}\] It is important to acknowledge that the free energy for any finite \(q\) is a continuous function, without any singularities or discontinuities. As a result, we should not anticipate any genuine phase transition at a finite temperature. Furthermore, we can also compute the free energy (17) from the effective one-dimensional Potts-Zimm-Bragg model [31]. The specifics of this mapping are outlined in Appendix A. Note that the effective parameters of the Potts-Zimm-Bragg model are temperature-dependent. 
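As a quick consistency check of the structure (2) and of the closed-form spectrum (3)-(12) -- a minimal numerical sketch, not part of the original derivation, with arbitrary illustrative parameter values -- the transfer matrix can also be assembled directly by summing the Boltzmann weight of a single unit cell of Hamiltonian (1) over the dimer spins, and its spectrum compared with the closed-form eigenvalues:

```python
import numpy as np

def transfer_matrix(q, J1, Jab, h1, h2, T):
    """Symmetrized transfer matrix of Eq. (2), built directly from
    Hamiltonian (1) by summing the cell Boltzmann weight over the dimer
    spins a, b; the nodal field term h1*delta(c,1) is split evenly
    between the two nodal spins bounding the cell."""
    b = 1.0 / T
    d = lambda s, t: 1.0 if s == t else 0.0
    V = np.zeros((q, q))
    for c in range(1, q + 1):
        for cp in range(1, q + 1):
            s = sum(np.exp(b * (Jab * d(a, bb) + h2 * (d(a, 1) + d(bb, 1))
                                + J1 * (d(c, a) + d(c, bb)
                                        + d(a, cp) + d(bb, cp))))
                    for a in range(1, q + 1) for bb in range(1, q + 1))
            V[c - 1, cp - 1] = np.exp(b * h1 * (d(c, 1) + d(cp, 1)) / 2) * s
    return V

def closed_form_eigenvalues(q, J1, Jab, h1, h2, T):
    """Eigenvalues (3)-(5) assembled from the entries (6)-(12)."""
    b = 1.0 / T
    x, y = np.exp(b * J1), np.exp(b * Jab)
    z1, z2 = np.exp(b * h1), np.exp(b * h2)
    d1 = z1 * ((q - 1 + x**2 * z2)**2 + (y - 1) * (q - 1 + x**4 * z2**2))
    d2 = (q - 2 + x**2 + z2)**2 + (y - 1) * (q - 2 + x**4 + z2**2)
    t1 = np.sqrt(z1) * ((q - 2 + x * (z2 + 1))**2
                        + (y - 1) * (q - 2 + x**2 * (z2**2 + 1)))
    t2 = (q - 3 + 2 * x + z2)**2 + (y - 1) * (q - 3 + 2 * x**2 + z2**2)
    w1, wm1, w0 = d1, d2 + (q - 2) * t2, np.sqrt(q - 1) * t1
    r = np.sqrt((w1 - wm1)**2 + 4 * w0**2)
    return np.sort([(w1 + wm1 + r) / 2, (w1 + wm1 - r) / 2]
                   + [d2 - t2] * (q - 2))
```

Both routes agree to machine precision, which also exhibits the \((q-2)\)-fold degenerate eigenvalue \(d_{2}-t_{2}\).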
## III Zero-temperature phase diagram In order to describe the ground state of the \(q\)-state Potts model on a diamond chain we use the following notation for the state of the \(i\)th unit cell: \[\Big{|}\begin{bmatrix}\mu_{i}\\ \nu_{i}\end{bmatrix}\alpha_{i}\Big{\rangle}_{i}=\left\{\Big{|}\begin{matrix}\mu_{i}\\ \nu_{i}\end{matrix}\;\alpha_{i}\Big{\rangle}_{i}\quad\text{or}\quad\Big{|}\begin{matrix}\nu_{i}\\ \mu_{i}\end{matrix}\;\alpha_{i}\Big{\rangle}_{i}\right\}. \tag{18}\] Here, \(\mu_{i}\), \(\nu_{i}\) and \(\alpha_{i}\) stand for the states of sites \(a\), \(b\) and \(c\) in the \(i\)th unit cell, and the square brackets inside a ket-vector denote the two equivalent configurations for the values of the Potts spins on the \(a\) and \(b\) sites. Assuming \(q\geqslant 3\) in Hamiltonian (1), we identify the following ground states \[|FM_{1}\rangle=\prod_{i}\Big{|}\begin{bmatrix}1\\ 1\end{bmatrix}1\Big{\rangle}_{i}\,,\qquad|FM_{2}\rangle=\prod_{i}\Big{|}\begin{bmatrix}\mu\\ \mu\end{bmatrix}\mu\Big{\rangle}_{i}\,, \tag{19}\] \[|FR_{1}\rangle=\prod_{i}\Big{|}\begin{bmatrix}1\\ 1\end{bmatrix}\mu_{i}\Big{\rangle}_{i}\,,\qquad|FR_{2}\rangle=\prod_{i}\Big{|}\begin{bmatrix}\mu_{i}\\ 1\end{bmatrix}1\Big{\rangle}_{i}\,, \tag{20}\] \[|FR_{3}\rangle=\prod_{i}\Big{|}\begin{bmatrix}\mu_{i}\\ \nu_{i}\end{bmatrix}\mu_{i}\Big{\rangle}_{i}\,,\qquad|FR_{4}\rangle=\prod_{i}\Big{|}\begin{bmatrix}\nu_{i}\\ \nu_{i}\end{bmatrix}\mu_{i}\Big{\rangle}_{i}\,, \tag{21}\] \[|FR_{5}\rangle=\prod_{i}\Big{|}\begin{bmatrix}\mu_{i}\\ \nu_{i}\end{bmatrix}1\Big{\rangle}_{i}\,,\qquad|FR_{6}\rangle=\prod_{i}\Big{|}\begin{bmatrix}\mu_{i}\\ 1\end{bmatrix}\nu_{i}\Big{\rangle}_{i}\,, \tag{22}\] \[|FR_{7}\rangle=\prod_{i}\Big{|}\begin{bmatrix}\mu_{i}\\ \nu_{i}\end{bmatrix}\xi_{i}\Big{\rangle}_{i}\,. \tag{23}\] Here, the state indexes \(\mu\), \(\nu\), and \(\xi\) take values in the range \(2,\ldots,q\), and are not equal to each other if they are written in the same ket-vector. 
The cell index \(i\) indicates that the site states in neighboring cells can differ, so for the frustrated phases the ground state consists of all relevant combinations and has a non-zero residual entropy. Expressions for the energy and entropy per unit cell for the ground states (19\(-\)23) are given in Table 1. It is important to note that the internal energy at zero temperature does not depend on \(q\), while the residual entropy is completely determined by \(q\). The frustrated phases are numbered in order of increasing residual entropy for \(q\geqslant 7\). The dependence on \(q\) of the residual entropy for the different phases is shown in Fig.2. The ground state phase diagrams assuming that \(h_{1}=h_{2}=h\) are shown in Fig.3(a)\(-\)(f) in different planes. The states FM\({}_{1,2}\) are of the pure ferromagnetic type. The FM\({}_{1}\) (FM\({}_{2}\)) phase is realized if \(h>0\) (\(h<0\)). In general, the phase FM\({}_{2}\) is a multi-domain state, and it consists of \(q-1\) kinds of equivalent macroscopic domains having all spins of the diamond chain in the \(\mu\) state. The state FR\({}_{1}\) is the first of the frustrated-type states. The \(a\) and \(b\) spins are in the state \(1\), while the \(c\)-spins may be in any of the \(\mu_{i}=2,\ldots q\) states, so the FR\({}_{1}\) phase is realized only if \(h>0\) and \(J_{1}<0\). The number of states of the \(c\)-spins determines the entropy of the phase, \(\mathcal{S}_{0}=\ln{(q-1)}\) per unit cell. In the second frustrated phase FR\({}_{2}\), the spin \(a\) equals \(\mu=2,\ldots q\), and the two remaining spins in the unit cell equal \(1\), so this phase exists at \(J_{ab}<0\). Due to the equivalence of the sites \(a\) and \(b\), the entropy of the FR\({}_{2}\) phase is greater than that of FR\({}_{1}\) by \(\ln{2}\). Frustrated phases FR\({}_{3,4,7}\) exist only if \(h<0\). In the FR\({}_{3}\) phase, the spin states in the unit cell are not equal to \(1\). 
The state of the \(c\)-spin and the state of one of the spins \(a\) or \(b\) are the same, \(\sigma_{i}^{c}=\sigma_{i}^{(a,b)}\), but the states of spins \(a\) and \(b\) in the same unit cell are different, \(\sigma_{i}^{a}\neq\sigma_{i}^{b}\), so this phase appears as a ground state only if \(J_{ab}<0\). Formally, the number of states of an elementary cell is \(2(q-1)(q-2)\). But for the given phase, the state of the chain should look the same when moving along the chain from left to right or in the opposite direction. This mirror symmetry generates the restriction \(\sigma_{i-1}^{(a,b)}=\sigma_{i}^{c}=\sigma_{i}^{(a,b)}\), so the total number of states per unit cell in the FR\({}_{3}\) phase is \(4(q-2)\). In turn, for the FR\({}_{4}\) phase, the conditions \(\sigma_{i-1}^{(a,b)}\neq\sigma_{i}^{c}\) and \(\sigma_{i}^{c}\neq\sigma_{i}^{(a,b)}\) must be met, so the total number of states per unit cell reduces from \((q-1)(q-2)\) to \((q-2)^{2}\). Under the assumption \(h_{1}=h_{2}\), the energies of the FR\({}_{5}\) and FR\({}_{6}\) phases are equal, and these states do not mix at the microscopic level, that is, the unit cells of these states cannot alternate in the chain. Formally, the chain state in the FR\({}_{6}\) phase region in Fig.3 should be a phase separation consisting of macroscopic domains of the FR\({}_{5}\) and FR\({}_{6}\) phases. The entropy of the FR\({}_{5}\) phase is determined by the total number of states in the unit cell, that is \((q-1)(q-2)\). In the FR\({}_{6}\) phase, the conditions \(\sigma_{i-1}^{(a,b)}\neq\sigma_{i}^{c}\neq\sigma_{i}^{(a,b)}\) give \(2(q-2)^{2}\) states instead of the formally possible \(2(q-1)(q-2)\) states in the unit cell. 
Nevertheless, at \(q>3\) the entropy of the FR\({}_{5}\) phase is less than the entropy of the FR\({}_{6}\) phase; therefore, the free energy of the FR\({}_{6}\) phase at any finite temperature is the lowest, and in the limit \(T\to 0\) we will have the FR\({}_{6}\) phase as the ground state. Figure 2: Dependence on \(q\) of the residual entropy for the different phases of the ground state (see Table 1). However, the FR\({}_{5}\) phase contributes to the state at the FR\({}_{2}\)-FR\({}_{6}\) phase boundary. In the frustrated phase FR\({}_{7}\), all the spins in the unit cell are pairwise unequal and are not equal to 1, so, formally, the number of states of an elementary cell is \((q-1)(q-2)(q-3)\). The restrictions \(\sigma_{i-1}^{c}\neq\sigma_{i}^{(a,b)}\) reduce the total number of states per unit cell to the value \((q-2)(q-3)^{2}\). If \(q\geqslant 7\), the entropy of the FR\({}_{7}\) phase has the highest value among the ground state phases. The case \(q=3\) is special, and the corresponding phase diagram is shown in Fig.4. If \(q=3\), then the states of the FR\({}_{7}\) phase cannot be realized, since in this case there are only 2 different Potts spin values other than \(1\) available for the 3 sites in the unit cell. The phase FR\({}_{7}\) is absent and its region on the phase diagram is taken by other phases. Also, the phase diagram contains both the FR\({}_{6}\) phase and the phase-separated state \({\rm FR}_{5}+{\rm FR}_{6}\), which consists of equal fractions of the macroscopic domains of the phases FR\({}_{5}\) and FR\({}_{6}\). Both the FR\({}_{6}\) and FR\({}_{5}+{\rm FR}_{6}\) phases and the boundary phase have the same entropy \({\cal S}=\ln 2\). The structure of the ground state here can be explored using the methods of the theory of Markov chains (see Appendix B). 
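The counting arguments above can be verified for small chains by exhaustive enumeration. The sketch below (illustrative parameter values, \(q=3\) and \(N=3\) cells with periodic boundaries; not the method used in the paper) reproduces the FM\({}_{1}\) and FR\({}_{1}\) entries of Table 1, and lists the Table 1 residual entropies of the frustrated phases to confirm the ordering claimed above for \(q\geqslant 7\):

```python
import itertools, math

def d(s, t):
    return 1 if s == t else 0

def chain_energy(cells, J1, Jab, h1, h2):
    """Energy of a periodic chain of unit cells (a, b, c), per Hamiltonian (1)."""
    E = 0.0
    N = len(cells)
    for i in range(N):
        a, b, c = cells[i]
        cn = cells[(i + 1) % N][2]        # nodal spin of the next cell
        E -= (Jab * d(a, b) + h1 * d(c, 1) + h2 * (d(a, 1) + d(b, 1))
              + J1 * (d(c, a) + d(c, b) + d(a, cn) + d(b, cn)))
    return E

def ground_state(q, N, J1, Jab, h1, h2):
    """Exhaustive search: ground-state energy per cell and residual entropy
    per cell, ln(degeneracy)/N, for a chain of N cells."""
    cell_states = list(itertools.product(range(1, q + 1), repeat=3))
    best, count = None, 0
    for conf in itertools.product(cell_states, repeat=N):
        E = chain_energy(conf, J1, Jab, h1, h2)
        if best is None or E < best - 1e-9:
            best, count = E, 1
        elif abs(E - best) < 1e-9:
            count += 1
    return best / N, math.log(count) / N

def residual_entropies(q):
    """S0 of the frustrated phases FR1..FR7 as listed in Table 1."""
    return [math.log(q - 1), math.log(2 * (q - 1)), math.log(4 * (q - 2)),
            2 * math.log(q - 2), math.log((q - 1) * (q - 2)),
            math.log(2 * (q - 2) ** 2), math.log((q - 2) * (q - 3) ** 2)]
```

For example, at \(J_{1}=-1\), \(J_{ab}=1\), \(h=1\) the enumeration finds \(\varepsilon_{0}=-(J_{ab}+2h)=-3\) per cell with degeneracy \((q-1)^{N}\), i.e. the FR\({}_{1}\) entry of Table 1.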
To obtain the residual entropy \({\cal S}_{0}\) as a limit at zero temperature, we use equations for the internal energy \(\varepsilon\) and the free energy \(f\): \[\varepsilon=f+T{\cal S},\qquad f=-T\ln\left(\lambda_{1}\right), \tag{24}\] where \(\lambda_{1}\) is the maximum eigenvalue of the transfer matrix. Then \[{\cal S}=\ln\left(\frac{\lambda_{1}}{{\rm e}^{-\beta\varepsilon}}\right). \tag{25}\] An explicit expression of \(\lambda_{1}\) is given by Eq. (3), and we can write it in the following form \[\lambda_{1}={\rm e}^{-\beta\varepsilon_{0}}\varphi\left({\rm e}^{-\beta( \varepsilon_{1}-\varepsilon_{0})},{\rm e}^{-\beta(\varepsilon_{2}-\varepsilon _{0})},\ldots\right). \tag{26}\] Here \(\varepsilon_{0}\) is the ground state energy for given parameters of the Hamiltonian, so relations \(\varepsilon_{k}>\varepsilon_{0}\) are fulfilled for all \(k\). The form of \(\varphi\) depends on the ground state. Since \(\varepsilon\) tends to \(\varepsilon_{0}\) at zero temperature, we obtain \[{\cal S}_{0}=\ln\left(\varphi(0)\right). \tag{27}\] To find \(\varphi(0)\), it is enough to zero out all exponential terms having \(\varepsilon_{k}\neq\varepsilon_{0}\) and replace \({\rm e}^{-\beta\varepsilon_{0}}\) by unity in \(\lambda_{1}\). A similar procedure can be defined for the magnetizations \(m_{c}\) and \(m_{ab}\) in the ground state. So, for the magnetizations \(m_{c}\) and \(m_{ab}\) we have the equations \[m_{c}=\frac{1}{\lambda_{1}}\frac{\partial\lambda_{1}}{\partial(\beta h_{1})}, \quad m_{ab}=\frac{1}{\lambda_{1}}\frac{\partial\lambda_{1}}{\partial(\beta h _{2})}. 
\tag{28}\] If we define \[\frac{\partial\lambda_{1}}{\partial(\beta h_{1})} ={\rm e}^{-\beta\varepsilon_{0}}\psi_{c}\left({\rm e}^{-\beta(\varepsilon_{1}-\varepsilon_{0})},{\rm e}^{-\beta(\varepsilon_{2}-\varepsilon_{0})},\ldots\right), \tag{29}\] \[\frac{\partial\lambda_{1}}{\partial(\beta h_{2})} ={\rm e}^{-\beta\varepsilon_{0}}\psi_{ab}\left({\rm e}^{-\beta(\varepsilon_{1}-\varepsilon_{0})},{\rm e}^{-\beta(\varepsilon_{2}-\varepsilon_{0})},\ldots\right), \tag{30}\] then in the ground state we get \[m_{c}=\frac{\psi_{c}(0)}{\varphi(0)},\quad m_{ab}=\frac{\psi_{ab}(0)}{\varphi(0)}. \tag{31}\] The ground state energy \(\varepsilon_{0}\), the residual entropy \({\cal S}_{0}\), and the magnetizations \(m_{c}\) and \(m_{ab}\), which were found using Equations (27) and (31), are given in Table 1 for all phases and phase boundaries. There is another way to get the values given in Table 1 and to study the properties of the ground state in detail. This method, based on the theory of Markov chains, is described in Appendix B. Figure 4: The ground state phase diagrams for the case \(q=3\) in the plane \(J_{ab}-J_{1}\) for (a) \(h>0\) and (b) \(h<0\), in the plane \(J_{ab}-h\) for (c) \(J_{1}>0\) and (d) \(J_{1}<0\), in the plane \(J_{1}-h\) for (e) \(J_{ab}>0\) and (f) \(J_{ab}<0\). The green lines show the FM\({}_{1}\)-FM\({}_{2}\) boundaries where both adjacent phases and the boundary phase have zero entropy. The blue lines show the new boundaries between the FR\({}_{6}\) phase and the phase-separated state FR\({}_{5}\)+FR\({}_{6}\). The red lines are the FR\({}_{1}\)-FR\({}_{2}\) boundaries, where \({\cal S}_{{\rm FR}_{1}}<{\cal S}_{{\rm FR}_{1}-{\rm FR}_{2}}={\cal S}_{{\rm FR}_{2}}\). The values in Table 1 show that the entropy of all phase boundaries is greater than the entropy of the adjacent phases. The exceptions are two phase boundaries. The first is the FM\({}_{1}\)-FM\({}_{2}\) boundary, where the entropy of both adjacent phases and the boundary state is zero. 
The second is the FR\({}_{1}\)-FR\({}_{2}\) boundary, where the boundary state is such that \(\mathcal{S}_{\text{FR}_{1}}<\mathcal{S}_{\text{FR}_{1}-\text{FR}_{2}}=\mathcal{S }_{\text{FR}_{2}}\). This phase boundary is truly an anomalous property, leading to a peculiar phase pseudo-transition at finite temperature, which we will explore in the next section. ## IV Thermodynamics of \(q\)-state Potts model In what follows, we will analyze the thermodynamic properties of the model in detail. First, we will examine the 3-state models (\(q=3\)), which exhibit some peculiar properties, distinct from the behavior for \(q>3\). Later, we will explore the case when \(q>3\). It's worth noting that the behavior for any \(q>3\) tends to be rather consistent across finite values of \(q\). For the purpose of this discussion, we will focus specifically on \(q=5\), without losing its core properties. \begin{table} \begin{tabular}{c l l l l} \hline \hline Ground state & \(\varepsilon_{0}\) & \(\mathcal{S}_{0}\) & \(m_{c}\) & \(m_{ab}\) \\ \hline FM\({}_{1}\) & \(-\left(4J_{1}+J_{ab}+3h\right)\) & \(0\) & \(1\) & \(2\) \\ FM\({}_{2}\) & \(-\left(4J_{1}+J_{ab}\right)\) & \(0\) & \(0\) & \(0\) \\ FR\({}_{1}\) & \(-\left(J_{ab}+2h\right)\) & \(\ln\left(q-1\right)\) & \(0\) & \(2\) \\ FR\({}_{2}\) & \(-\left(2J_{1}+2h\right)\) & \(\ln\left[2(q-1)\right]\) & \(1\) & \(1\) \\ FR\({}_{3}\) & \(-2J_{1}\) & \(\ln\left[4(q-2)\right]\) & \(0\) & \(0\) \\ FR\({}_{4}\) & \(-J_{ab}\) & \(2\ln(q-2)\) & \(0\) & \(0\) \\ FR\({}_{5}\) & \(-h\) & \(\ln\left[(q-1)(q-2)\right]\) & \(1\) & \(0\) \\ FR\({}_{6}\) & \(-h\) & \(\ln\left[2(q-2)^{2}\right]\) & \(0\) & \(1\) \\ FR\({}_{7}\) & \(0\) & \(\ln\left[(q-2)(q-3)^{2}\right]\) & \(0\) & \(0\) \\ \hline FM\({}_{1}\)-FM\({}_{2}\) & \(-\left(4J_{1}+J_{ab}\right)\) & \(0\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ FM\({}_{1}\)-FR\({}_{1}\) & \(8J_{1}-J_{ab}\) & \(\ln(q)\) & \(\frac{1}{q}\) & \(2\) \\ FR\({}_{1}\)-FR\({}_{2}\) & \(-2\left(J_{1}+h\right)\) & 
\(\ln[2(q-1)]\) & \(1\) & \(1\) \\ FM\({}_{1}\)-FR\({}_{2}\) & \(2\left(J_{1}+J_{ab}\right)\) & \(\ln(2q-1)\) & \(1\) & \(\frac{2q}{2q-1}\) \\ FM\({}_{2}\)-FR\({}_{3}\) & \(-2J_{1}\) & \(\ln(4q-7)\) & \(0\) & \(0\) \\ FR\({}_{2}\)-FR\({}_{3}\) & \(-2J_{1}\) & \(\ln[4(q-1)]\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ FM\({}_{2}\)-FR\({}_{4}\) & \(-2J_{ab}\) & \(2\ln(q-1)\) & \(0\) & \(0\) \\ FR\({}_{1}\)-FR\({}_{4}\) & \(-J_{ab}\) & \(2\ln(q-1)\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ FR\({}_{1}\)-FR\({}_{6}\) & \(J_{ab}\) & \(\ln(2q^{2}-7q+7)\) & \(0\) & \(\frac{2(q^{2}-3q+3)}{2q^{2}-7q+7}\) \\ FR\({}_{2}\)-FR\({}_{6}\) & \(2J_{1}\) & \(\ln\left[\frac{1}{2}\left(3q^{2}-9q+8+\phi_{1}(q)\right)\right]^{a}\) & \(\frac{\phi_{1}(q)-q^{2}+7q-8}{2\phi_{1}(q)}\) & \(\frac{\phi_{1}(q)+q^{2}-3q+4}{2\phi_{1}(q)}\) \\ FR\({}_{4}\)-FR\({}_{7}\) & \(0\) & \(\ln[(q-2)(q^{2}-5q+7)]\) & \(0\) & \(0\) \\ FR\({}_{6}\)-FR\({}_{7}\) & \(0\) & \(\ln[(q-1)(q-2)^{2}]\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ FR\({}_{3}\)-FR\({}_{7}\) & \(0\) & \(\ln[(q-1)^{2}(q-2)]\) & \(0\) & \(0\) \\ \hline FM\({}_{1}\)-FR\({}_{1}\)-FR\({}_{2}\) & \(6J_{1}\) & \(\ln\left[\frac{1}{2}\left(3q-2+\phi_{2}(q)\right)\right]^{b}\) & \(\frac{q+\phi_{2}(q)}{2\phi_{2}(q)}\) & \(\frac{3\phi_{2}(q)-q+2}{2\phi_{2}(q)}\) \\ FR\({}_{1}\)-FR\({}_{2}\)-FR\({}_{6}\) & \(2J_{1}\) & \(\ln\left[\frac{1}{2}\left(3q^{2}-8q+7+\phi_{3}(q)\right)\right]^{c}\) & \(\frac{\phi_{3}(q)-q^{2}+6q-7}{2\phi_{3}(q)}\) & \(\frac{2\left[(q^{2}-2q+2)\left(q^{2}-7+\phi_{3}(q)\right)+10-2q\right]}{\phi_{3}(q)\left(3q^{2}-8q+7+\phi_{3}(q)\right)}\) \\ FM\({}_{1}\)-FM\({}_{2}\)-FR\({}_{3}\) & \(-2J_{1}\) & \(\ln(4q-3)\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ FR\({}_{1}\)-FR\({}_{4}\)-FR\({}_{6}\)-FR\({}_{7}\) & \(0\) & \(\ln\left[(q-1)\left(q^{2}-3q+3\right)\right]\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ FM\({}_{2}\)-FR\({}_{3}\)-FR\({}_{4}\)-FR\({}_{7}\) & \(0\) & \(3\ln(q-1)\) & \(0\) & \(0\) \\ FR\({}_{2}\)-FR\({}_{3}\)-FR\({}_{6}\)-FR\({}_{7}\) 
& \(0\) & \(\ln[q^{2}(q-1)]\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ O & \(0\) & \(3\ln(q)\) & \(\frac{1}{q}\) & \(\frac{2}{q}\) \\ \hline \hline \end{tabular} \({}^{a}\)\(\phi_{1}(q)=\sqrt{q\left(q^{3}+2q^{2}-15q+16\right)}\) \({}^{b}\)\(\phi_{2}(q)=\sqrt{q^{2}+4q-4}\) \({}^{c}\)\(\phi_{3}(q)=\sqrt{q^{4}+4q^{3}-30q^{2}+44q-15}\) \end{table} Table 1: The ground state energy, residual entropy and magnetizations of the diamond Potts chain. ### 3-state Potts Model Indeed, the 2-state Potts model is equivalent to the Ising model, which differs significantly from the \(q>2\) state Potts model. A primary feature to highlight in the latter is the emergence of frustration. The 3-state Potts model, being the first to exhibit this frustration behavior, is expected to display peculiar characteristics. In contrast, all higher \(q\)-state Potts models tend to behave similarly. The one-dimensional 3-state Potts model provides a richer set of behaviors than the 2-state Ising model, yet it is still analytically solvable. Certainly, in the one-dimensional case, there is no phase transition at finite temperature for the Potts model with \(q>2\) states, which can be proven via the Peierls argument [2]. This property makes the 1D 3-state Potts model a tractable system to study, aiding investigations into more intricate systems and behaviors within statistical physics. Its study contributes to the broader field of statistical physics and has implications in several scientific disciplines. In this sense, here we will consider the special case of the 3-state Potts model on a diamond chain. Initially, we will explore the thermodynamics and magnetization properties in the vicinity of the zero-temperature phase boundary that separates \(\mathrm{FM}_{1}\) and \(\mathrm{FM}_{2}\). Figure 5a illustrates the internal energy \(U\) as a function of the external magnetic field. We assume the same magnetic field for both nodal and dimer sites (\(h_{1}=h_{2}=h\)). 
Three different temperature values are considered to demonstrate the behavior of the internal energy. At \(h=0\), corresponding to the zero-temperature phase transition between \(\mathrm{FM}_{1}\) and \(\mathrm{FM}_{2}\) (refer to Fig.4), an evident change is observed. As the temperature increases, a small peak emerges at \(h=0\), which grows with higher temperatures. In panel (b), the entropy \(\mathcal{S}\) is shown as a function of the external magnetic field, under the same conditions as panel (a). Here, we notice the absence of residual entropy at zero temperature, which, in accordance with the argument of references [12; 13], suggests the presence of a pseudo-critical temperature at this boundary. Panel (c) displays the correlation length \(\xi=1/\ln\left(\frac{\lambda_{1}}{\lambda_{2}}\right)\) as a function of \(h\), using the same parameter set as the previous panels. Once again, a sharp peak at \(h=0\) confirms the phase transition at zero temperature. Interestingly, in this case there is not only one pseudo-critical temperature but infinitely many. For any temperature \(T_{p}\lesssim 0.2\), we observe a sharp peak in the correlation length at a null magnetic field. Lastly, panel (d) presents the specific heat under the same conditions. In contrast to a typical pseudo-critical peak, an intense peak appears, with a small minimum at \(h=0\). As the temperature decreases, the specific heat tends to zero, as expected. In the following analysis, we explore the magnetic properties of the system, specifically the magnetization and magnetic susceptibility. Fig. 6a illustrates the magnetization \(m_{c}\) of the nodal site as a function of the external magnetic field \(h\) for temperatures \(T=\{0.1,0.2,0.3\}\). The parameters \(J_{ab}=-1\) and \(J_{1}=1\) remain fixed throughout. 
Figure 5: (a) Internal energy \(U\) as a function of the external magnetic field, for three different temperature values \(T=\{0.1,0.2,0.3\}\), assuming the fixed parameters \(J_{ab}=-1\), \(J_{1}=1\) and \(q=3\); (b) the entropy \(\mathcal{S}\) for the same conditions; (c) the correlation length, and (d) the specific heat. Figure 6: (a) Magnetization \(m_{c}\) as a function of the external magnetic field \(h\), for three different temperature values \(T=\{0.1,0.2,0.3\}\), assuming the fixed parameters \(J_{ab}=-1\), \(J_{1}=1\) and \(q=3\). (b) Magnetization \(m_{ab}\) as a function of the external magnetic field \(h\), for the same set of fixed parameters as in panel (a). (c-d) Magnetic susceptibility \(\chi_{c}\) and \(\chi_{ab}\) assuming the same conditions as the above panels. In the low-temperature region, we observe the saturated phase (\(\mathrm{FM}_{1}\)) and a phase transition at \(h=0\), where the magnetization drops to zero, corresponding to FM\({}_{2}\). It is important to note that FM\({}_{2}\) exhibits null magnetization since, according to the definition in Eq. (19), it aligns in any state other than 1. Moving on to panel (b), we present the dimer magnetization \(m_{ab}\), which exhibits a behavior similar to that of panel (a). Panel (c) exhibits the magnetic susceptibility \(\chi_{c}\) as a function of the external magnetic field \(h\), under the same aforementioned conditions. Notably, it displays sharp peaks reminiscent of pseudo-critical phase transitions, particularly for temperatures \(T_{p}\lesssim 0.2\). Similarly, panel (d) illustrates the dimer magnetic susceptibility \(\chi_{ab}\) as a function of \(h\), employing the same conditions as the previous panels. These last two panels exhibit characteristic sharp peaks distinct from the double peak observed in the specific heat plot depicted in Fig.5d, which occurs around \(h=0\). 
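The saturated values seen in Fig. 6 can be illustrated numerically from Eq. (28): approximating the field derivatives of \(\ln\lambda_{1}\) by central finite differences reproduces \(m_{c}=1\), \(m_{ab}=2\) in the FM\({}_{1}\) region and vanishing magnetizations in the FM\({}_{2}\) region at low temperature. The sketch below is a minimal check with illustrative parameter values (\(J_{ab}=J_{1}=1\), \(h_{1}=h_{2}=h=\pm 1\)), not the procedure used for the figures:

```python
import numpy as np

def log_lambda1(q, J1, Jab, h1, h2, T):
    """ln(lambda_1) from Eqs. (3) and (6)-(12)."""
    b = 1.0 / T
    x, y = np.exp(b * J1), np.exp(b * Jab)
    z1, z2 = np.exp(b * h1), np.exp(b * h2)
    d1 = z1 * ((q - 1 + x**2 * z2)**2 + (y - 1) * (q - 1 + x**4 * z2**2))
    d2 = (q - 2 + x**2 + z2)**2 + (y - 1) * (q - 2 + x**4 + z2**2)
    t1 = np.sqrt(z1) * ((q - 2 + x * (z2 + 1))**2
                        + (y - 1) * (q - 2 + x**2 * (z2**2 + 1)))
    t2 = (q - 3 + 2 * x + z2)**2 + (y - 1) * (q - 3 + 2 * x**2 + z2**2)
    w1, wm1, w0 = d1, d2 + (q - 2) * t2, np.sqrt(q - 1) * t1
    return np.log(0.5 * (w1 + wm1 + np.sqrt((w1 - wm1)**2 + 4 * w0**2)))

def magnetizations(q, J1, Jab, h1, h2, T, de=1e-5):
    """m_c and m_ab of Eq. (28) via central finite differences in the fields."""
    b = 1.0 / T
    mc = (log_lambda1(q, J1, Jab, h1 + de, h2, T)
          - log_lambda1(q, J1, Jab, h1 - de, h2, T)) / (2 * de * b)
    mab = (log_lambda1(q, J1, Jab, h1, h2 + de, T)
           - log_lambda1(q, J1, Jab, h1, h2 - de, T)) / (2 * de * b)
    return mc, mab
```

At \(T=0.05\) the results agree with the Table 1 entries for FM\({}_{1}\) and FM\({}_{2}\) to better than \(10^{-3}\).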
There is a peculiar behavior for \(q=3\), so we will now investigate the anomalous interface between FR\({}_{5}\) and FR\({}_{5}\)+FR\({}_{6}\), which represents another phase boundary that requires analysis. Figure 7a illustrates the internal energy (\(U\)) as a function of the external magnetic field (\(h\)), with the parameters \(J_{ab}=-1.4\) and \(J_{1}=-1\) held constant. The graph uses three distinct temperatures for illustrative purposes. The internal energy for these temperatures appears almost identical, with minor variations around \(h=\pm 1.4\), where a zero-temperature phase transition occurs. Conversely, panel (b) depicts the entropy (\(\mathcal{S}\)) as a function of the magnetic field for the same set of temperatures. Unlike the internal energy, the entropies for these temperatures are distinctly different under the same set of parameters. A peak is noticeable at the same magnetic field \(h=\pm 1.4\), underscoring the impact of the zero-temperature phase transition. On the other hand, the correlation length (\(\xi\)) indicates a curvature change at a different location, approximately \(h\approx\pm 0.8\), but no evidence of phase transition influence at \(h=\pm 1.4\) is discernible. Lastly, panel (d) demonstrates the specific heat as a function of the magnetic field, maintaining the same parameters as in panel (a). A double peak is observable around \(h=\pm 1.4\), but no signs of unusual behavior are evident at \(h\approx\pm 0.8\). Although there is no anomalous behavior for \(U\), \(\mathcal{S}\), and \(C\) at \(h=0\), the correlation length \(\xi\) illustrates a maximum at \(h=0\). This anomalous behavior will be discussed further later. Figure 8a depicts the magnetization of the nodal spin, assuming the parameters set in Fig.7. It reveals that the magnetization, denoted as \(m_{c}\), alters its behavior notably at \(h\approx\pm 0.8\). 
However, there are no traces of a phase transition at \(h=\pm 1.4\), even though the magnetization exhibits symmetry under the exchange of the magnetic field sign. Additionally, panel (b) reports the magnetic susceptibility \(\chi_{c}\) of the nodal spin as a function of the magnetic field, with the temperatures designated in the same panel. Figure 7: (a) Internal energy \(U\) as a function of the external magnetic field \(h\), for three temperature values (\(T=\{0.01,0.05,0.1\}\)), assuming the fixed parameters \(J_{ab}=-1.4\), \(J_{1}=-1\) and \(q=3\); (b) the entropy \(\mathcal{S}\) under the same conditions; (c) the correlation length \(\xi\); and (d) the specific heat \(C\). Figure 8: (a) Magnetization \(m_{c}\) as a function of the external magnetic field \(h\), considering three different temperature values \(T=\{0.01,0.05,0.1\}\). The fixed parameters \(J_{ab}=-1.4\), \(J_{1}=-1\) and \(q=3\) are assumed. (b) Magnetization \(m_{ab}\) as a function of the external magnetic field \(h\), with the same set of fixed parameters as in panel (a). (c) Magnetic susceptibility \(\chi_{c}\) and (d) \(\chi_{ab}\) under the same conditions as the previous panels. Note that the magnetic susceptibility grows rapidly at lower temperatures and remains substantial at \(h=\pm 1\). It also displays a significant alteration of the curve around \(h\approx\pm 0.8\) and a change in curvature at about \(h\approx\pm 1.4\). In contrast, panel (c) illustrates the dimer magnetization \(m_{ab}\) as a function of the external magnetic field \(h\). Here, we observe the impact of the zero-temperature phase transition at \(h=\pm 1.4\) and \(h=\pm 0.8\), despite the magnetization no longer maintaining symmetry under the exchange of the magnetic field. Similarly, panel (d) features the magnetic susceptibility \(\chi_{ab}\), using the same parameters presented in panel (a). 
Comparable to the observations in panel (c), we detect a significant change of curvature around \(h\approx\pm 0.8\), while a local maximum of the magnetic susceptibility emerges at \(h\approx\pm 1.4\). As previously identified in the correlation length \(\xi\), there is an anomalous behavior observed at \(h=0\). The magnetization \(m_{c}\) exhibits a peculiar value of \(1/3\) at null magnetic field, while similarly, \(m_{ab}\) yields \(2/3\) at \(h=0\), and obviously the total magnetization becomes \(1\). This anomalous behavior is also manifested in the magnetic susceptibilities \(\chi_{c}\) and \(\chi_{ab}\), which exhibit a maximum value at \(h=0\) when the magnetic field is varied. Furthermore, in panel (e), we report the total magnetization \(m_{t}=m_{c}+m_{ab}\). Interestingly, based on our observations, there is no evidence of any anomalous behavior; instead, a long plateau is evident. For this analysis, we assumed the same set of parameters as those used for the previous partial magnetizations. Panel (f) shows the total magnetic susceptibility \(\chi_{t}=\chi_{c}+\chi_{ab}+2\chi_{abc}\), where \(\chi_{abc}=-\frac{\partial^{2}f}{\partial h_{1}\partial h_{2}}\) (not depicted). Again, we relied on the parameters established for the partial magnetic susceptibilities. It is noteworthy that this analysis does not reveal significant insights around the anomalous regions. Additionally, the total magnetic susceptibility presents a markedly smaller magnitude compared to the partial magnetic susceptibilities displayed in panels (b) and (d). This reduced magnitude arises because the magnetic susceptibility \(\chi_{abc}\) counterbalances the positive contributions from \(\chi_{c}\) and \(\chi_{ab}\), due to its comparable magnitude. As an alternative approach, one can determine \(\chi_{t}\) for the current case by setting \(h_{1}=h_{2}=h\) and taking the second derivative of the negative free energy with respect to \(h\). 
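Since \(\chi_{t}=\chi_{c}+\chi_{ab}+2\chi_{abc}\) is simply the chain rule applied to \(-f(h_{1},h_{2})\) along \(h_{1}=h_{2}=h\), the decomposition can be cross-checked numerically: the sum of the partial susceptibilities, computed as second finite-difference derivatives of \(-f\) with independent fields, must coincide with the direct second derivative taken along the diagonal. The sketch below is an illustration with assumed parameter values, not the procedure used for the figures:

```python
import numpy as np

def free_energy(q, J1, Jab, h1, h2, T):
    """f = -T ln(lambda_1), Eq. (17), keeping the nodal field h1 and the
    dimer field h2 independent."""
    b = 1.0 / T
    x, y = np.exp(b * J1), np.exp(b * Jab)
    z1, z2 = np.exp(b * h1), np.exp(b * h2)
    d1 = z1 * ((q - 1 + x**2 * z2)**2 + (y - 1) * (q - 1 + x**4 * z2**2))
    d2 = (q - 2 + x**2 + z2)**2 + (y - 1) * (q - 2 + x**4 + z2**2)
    t1 = np.sqrt(z1) * ((q - 2 + x * (z2 + 1))**2
                        + (y - 1) * (q - 2 + x**2 * (z2**2 + 1)))
    t2 = (q - 3 + 2 * x + z2)**2 + (y - 1) * (q - 3 + 2 * x**2 + z2**2)
    w1, wm1, w0 = d1, d2 + (q - 2) * t2, np.sqrt(q - 1) * t1
    return -T * np.log(0.5 * (w1 + wm1 + np.sqrt((w1 - wm1)**2 + 4 * w0**2)))

def susceptibilities(q, J1, Jab, h, T, dh=1e-3):
    """chi_c, chi_ab and the cross term chi_abc as second derivatives of -f,
    evaluated at h1 = h2 = h by central finite differences."""
    f = lambda a, c: free_energy(q, J1, Jab, a, c, T)
    chi_c = -(f(h + dh, h) - 2 * f(h, h) + f(h - dh, h)) / dh**2
    chi_ab = -(f(h, h + dh) - 2 * f(h, h) + f(h, h - dh)) / dh**2
    chi_abc = -(f(h + dh, h + dh) - f(h + dh, h - dh)
                - f(h - dh, h + dh) + f(h - dh, h - dh)) / (4 * dh**2)
    return chi_c, chi_ab, chi_abc
```

The two evaluations agree up to the discretization error of the finite differences.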
### Pseudo-critical temperature around the \(\text{FR}_{1}-\text{FR}_{2}\) phase boundary

We will now investigate the properties of the Potts model on a diamond structure, which displays anomalous behavior near the phase boundary \(\text{FR}_{1}-\text{FR}_{2}\) as the temperature is varied. This region displays a pseudo-critical transition, akin to a first- or second-order phase transition. Notably, the anomalous properties observed in the low-temperature regime are largely independent of the particular value of \(q\): for \(q>3\), the behavior of the physical quantities is rather similar. Therefore, we will consider \(q=5\) solely for illustrative purposes, without losing any relevant properties. Examining this transition is crucial for comprehending the physical properties of the Potts model and predicting its behavior under diverse conditions.

#### iv.2.1 Entropy

Fig.9(top) shows the entropy (\(\mathcal{S}\)) as a function of temperature (\(T/T_{p}\)), where \(T_{p}\) is the pseudo-critical temperature, with \(J_{ab}=-0.75\), \(J_{1}=-0.38\) and \(q=5\) as fixed parameters. We consider several magnetic fields, i.e., \(h=\{0.8,0.9,1.0,1.3,1.4,1.45\}\), whose corresponding pseudo-critical temperatures are \(T_{p}=\{0.0100651534871,\ 0.0143473088244,\ 0.0144268117883,\ 0.0144268970579,\ 0.0143982497686,\ 0.0139830161395\}\), respectively. For magnetic fields in the range \(1\lesssim h\lesssim 1.3\), we observe a robust change of curvature at \(T_{p}\), which resembles a typical first-order phase transition. However, there is no sudden jump in entropy at \(T_{p}\): when we magnify the entropy plot around \(T_{p}\), we can see that the curve is a continuous smooth function.
On the other hand, for the other magnetic field values, the change of curvature is clearly smooth (not shown).

Figure 9: Entropy \(\mathcal{S}\) (top), specific heat \(C\) (middle), and correlation length \(\xi\) (bottom) as functions of temperature \(T/T_{p}\), in units of the pseudo-critical temperature \(T_{p}\), assuming fixed parameters \(J_{ab}=-0.75\), \(J_{1}=-0.38\) and \(q=5\), for several magnetic fields \(h=\{0.8,0.9,1.0,1.3,1.4,1.45\}\), with corresponding pseudo-critical temperatures \(T_{p}=\{0.0100651534871,\ 0.0143473088244,\ 0.0144268117883,\ 0.0144268970579,\ 0.0143982497686,\ 0.0139830161395\}\), respectively.

It is worth mentioning that for \(T<T_{p}\), the system mostly resembles the FR\({}_{1}\) phase with residual entropy \(\mathcal{S}=\ln(q-1)=\ln(4)\), while for \(T>T_{p}\), the system behaves somewhat similarly to the FR\({}_{2}\) phase, with residual entropy \(\mathcal{S}\approx\ln(2(q-1))=\ln(8)\). This effect is more evident for magnetic fields in the range \(1\lesssim h\lesssim 1.3\).

#### iv.2.2 Specific heat

In Fig.9(middle), we plot the specific heat (\(C\)) as a function of temperature (\(T/T_{p}\)), i.e., in units of the pseudo-critical temperature \(T_{p}\). We consider the same set of parameters as in the previous plot, and each colored curve corresponds to the caption specified in the top panel. The anomalous behavior manifests clearly for magnetic fields in the range \(1\lesssim h\lesssim 1.3\), where we observe a very intense sharp peak around \(T_{p}\), or \(T/T_{p}=1\), which looks like a second-order phase transition. However, there is no divergence at \(T_{p}\). For other values of the magnetic field, this peak becomes broader and less intense. The sharp peak around \(T_{p}\) evidently signals the limit between the FR\({}_{1}\) phase and the FR\({}_{2}\) phase, as discussed earlier. Therefore, the plots in Fig.9 provide valuable insights into the magnetic field-induced phase transition in the system.
#### iv.2.3 Correlation Length

In Fig.9(bottom), we plot the correlation length (\(\xi\)) as a function of temperature (\(T/T_{p}\)), in units of the pseudo-critical temperature \(T_{p}\). For simplicity and consistency with the previous figures, we consider the same set of parameters as in the top panel. Again, we observe the anomalous behavior of the correlation length around \(T_{p}\), confirming the evidence of a pseudo-transition at \(T_{p}\). The correlation length peak is more intense when we consider an external magnetic field in the range \(1\lesssim h\lesssim 1.3\). This peak originates when the second-largest eigenvalue becomes as important as the largest eigenvalue, although it can never attain the magnitude of the largest eigenvalue. For other values of the magnetic field, the peak becomes less intense. These results further support the evidence of a magnetic field-induced phase transition in the system, as seen in the previous plots of specific heat and entropy. The behavior of the correlation length also provides valuable insights into the nature of this transition. The power-law behavior of the correlation length may be derived analytically using the formula proposed in reference [32]. This can be achieved by manipulating the relation \(\xi=1/\ln\left(\frac{\lambda_{1}}{\lambda_{2}}\right)\). Utilizing the effective Boltzmann factors from (6) and (7), we can express the correlation length as \[\xi(\tau)=c_{\xi}|\tau|^{-1}+\mathcal{O}(\tau^{2}), \tag{32}\] where \[c_{\xi}=\frac{1}{\tilde{w}_{1}T_{p}}\left|\frac{\partial[w_{1}(\beta)-w_{-1}( \beta)]}{\partial\beta}\right|_{\beta=\beta_{p}}, \tag{33}\] and \(\tau=(T_{p}-T)/T_{p}\) with \(\tilde{w}_{1}=w_{1}(\beta_{p})\).

#### iv.2.4 Magnetization

In Fig.10a (top), we plot the magnetization \(m_{c}\) as a function of temperature \(T/T_{p}\), in units of the pseudo-critical temperature \(T_{p}\). We consider the same fixed parameter set as in Fig.9 for comparison purposes.
It is evident that for \(T<T_{p}\), the magnetization \(m_{c}\) is almost negligible, indicating that almost none of the nodal spins occupy the first spin state. The magnetization then increases rapidly and reaches a saturated value at \(T/T_{p}\approx 4\), indicating that the nodal spins are almost fully ordered. For higher temperatures, the spins gradually become randomly oriented. This behavior is more pronounced for the magnetic field range \(1\lesssim h\lesssim 1.3\), while for other values of the magnetic field, the magnetization \(m_{c}\) shows a smooth curve with an enhanced magnetization slightly above \(T_{p}\). Similarly, in Fig.10b (top), we present the magnetization \(m_{ab}\) as a function of temperature in units of \(T_{p}\). In this case, the magnetization \(m_{ab}\) is well behaved, and for \(T<T_{p}\) most of the dimer spins are configured as in the FR\({}_{1}\) phase. For \(1\lesssim T/T_{p}\lesssim 4\), the system is roughly configured in the FR\({}_{2}\) phase, and the magnetization then increases slightly. However, this peak disappears when the magnetic field satisfies the condition \(h\lesssim 1\) or \(h\gtrsim 1.3\). As the temperature increases further, the magnetization decreases asymptotically.

Figure 10: Magnetization and magnetic susceptibility as functions of temperature \(T/T_{p}\), in units of the pseudo-critical temperature \(T_{p}\), assuming the values considered in Fig.9. (a) Magnetization of the nodal spins \(m_{c}\) (top) and the corresponding magnetic susceptibility \(\chi_{c}\) (bottom); (b) dimer magnetization \(m_{ab}\) (top) and the corresponding magnetic susceptibility \(\chi_{ab}\) (bottom).

#### iv.2.5 Magnetic Susceptibility

In Fig.10a (bottom), we present the nodal spin magnetic susceptibility \(\chi_{c}\) as a function of temperature \(T/T_{p}\), where we use the same set of parameters as in the above panels for ease of comparison.
In the range of magnetic field \(1\lesssim h\lesssim 1.3\), the \(\chi_{c}\) peak is very sharp around \(T/T_{p}=1\), and a second broader peak appears at higher temperatures, which vanishes when the peak at \(T_{p}\) decreases. For other intervals of the magnetic field, the magnetic susceptibility exhibits less intense and broader peaks around \(T/T_{p}\approx 1\), and when this peak becomes less pronounced, the second peak disappears as well. Similarly, in Fig.10b (bottom), we report the magnetic susceptibility \(\chi_{ab}\) as a function of temperature \(T/T_{p}\). For magnetic fields \(1\lesssim h\lesssim 1.3\), the intense sharp peak delimits the boundary between the quasi-phases \(q\text{FR}_{1}\) and \(q\text{FR}_{2}\), accompanied by a second broader peak at higher temperatures. However, for other intervals of the magnetic field, the intense sharp peak decreases and gradually disappears, and at the same time the second broad peak vanishes as well. To summarize our results, it is important to note that while the transfer matrix of most models exhibiting pseudo-transitions is typically reduced to a \(2\times 2\) matrix, as shown in reference [32], our transfer matrix can, in principle, be significantly larger, depending on the value of \(q\). This contrasts with what was previously discussed in [32]. However, both the largest and the second-largest eigenvalues share the same structure as those of a typical \(2\times 2\) transfer matrix. In the thermodynamic limit, all other eigenvalues become irrelevant. Therefore, it is worth mentioning that pseudo-transitions adhere to the same universality properties outlined in reference [32].

## V Conclusions

Here we explored the \(q\)-state Potts model on a diamond chain in order to study its zero-temperature phase transitions and thermodynamic properties.
The \(q\)-state Potts model on a diamond chain exhibits intriguing behavior, due to discrete states assembled on a diamond chain structure, which admits several unusual features, such as the various possible alignments of the magnetic moments. The 3-state Potts model on a diamond chain presents peculiar characteristics around the zero-temperature phase transitions FM\({}_{1}-\)FM\({}_{2}\) and FR\({}_{5}-(\)FR\({}_{5}+\)FR\({}_{6})\), such as the absence of residual entropy at the phase boundary. Thermodynamic quantities such as entropy, internal energy, and specific heat remain unaffected by this phase transition, even at significantly low temperatures, while quantities like the correlation length, magnetization, and magnetic susceptibility offer evidence of the zero-temperature phase transition at finite temperatures when the magnetic field is varied. These findings highlight the intricate nature of the \(q\)-state Potts model on a diamond chain and contribute to our understanding of complex systems in diverse scientific disciplines. Furthermore, we conducted an analysis of the \(q\)-state Potts model that is largely independent of the specific value of \(q\); for illustrative purposes, we chose \(q=5\). Our exploration centered around the phase boundary between FR\({}_{1}\) and FR\({}_{2}\), where certain anomalous properties become more pronounced in the low-temperature region. This is due to the residual entropy, which unveils unusual frustrated regions at zero-temperature phase transitions. Phase boundaries featuring non-trivial phase transitions demonstrate anomalous thermodynamic properties, including a sharp entropy alteration as a function of temperature, resembling a first-order jump of entropy without an actual discontinuity. Similarly, second-order derivatives of the free energy, such as the specific heat and magnetic susceptibility, present distinct peaks akin to second-order phase transition divergences, but without any singularities.
The correlation length also exhibits analogous behavior at the pseudo-critical temperature, marked by a sharp and robust peak that could easily be misinterpreted as a true divergence. It is worth noting that, although the ground-state phase diagram shows several frustrated phases and many boundaries, only for states near the FR\({}_{1}\)-FR\({}_{2}\) boundary is there a pseudo-transition at finite temperature. This is a good demonstration of the predictive power of the criterion for pseudo-transitions formulated earlier [12; 13]. The pseudo-critical transitions observed at the phase boundaries offer valuable insights into the interplay between temperature and magnetic field in inducing phase transitions. These findings contribute to a deeper understanding of statistical physics and phase transitions and have implications in various scientific disciplines. Further investigations into this model can open up new avenues for exploring the dynamics of complex systems and phase transitions, enriching the field of condensed matter physics.

###### Acknowledgements.

The work was partly supported by the Ministry of Science and Higher Education of the Russian Federation (Ural Federal University Program of Development within the Priority-2030 Program) and by the Brazilian agencies CNPq and FAPEMIG.

## Appendix A Decoration transformation for \(q\)-state Potts model

The decoration transformation [33; 34; 35; 36] has been widely used in Ising models and Ising-Heisenberg models. In this appendix, we apply the decoration transformation to map the \(q\)-state Potts model on a diamond chain onto an effective one-dimensional Potts-Zimm-Bragg model, as considered in reference [31]. To study the thermodynamics of the Hamiltonian (1), we need to obtain the partition function using transfer matrix techniques. The elements of the transfer matrix are commonly known as Boltzmann factors.
\[w(\sigma_{1}^{c},\sigma_{2}^{c})= \mathrm{e}^{\frac{\beta h_{1}}{2}\left(\delta_{\sigma_{1}^{c},1}+\delta_{\sigma_{2}^{c},1}\right)}\] \[\sum_{\sigma^{a},\sigma^{b}=1}^{q}\Bigl{\{}\mathrm{e}^{\beta J_{ab}\delta_{\sigma^{a},\sigma^{b}}+\beta h_{2}\left(\delta_{\sigma^{a},1}+\delta_{\sigma^{b},1}\right)}\] \[\times\mathrm{e}^{\beta J_{1}\left(\delta_{\sigma_{1}^{c},\sigma^{a}}+\delta_{\sigma_{1}^{c},\sigma^{b}}+\delta_{\sigma^{a},\sigma^{c}_{2}}+\delta_{\sigma^{b},\sigma^{c}_{2}}\right)}\Bigr{\}}. \tag{25}\] The summation in (25) can be expressed as \[\sum_{\sigma^{a},\sigma^{b}}^{q}\cdots= \left(\sum_{\sigma^{a}=1}^{q}\mathrm{e}^{\beta\left[J_{1}\left(\delta_{\sigma_{1}^{c},\sigma^{a}}+\delta_{\sigma^{a},\sigma^{c}_{2}}\right)+h_{2}\delta_{\sigma^{a},1}\right]}\right)^{2}\] \[+(y-1)\sum_{\sigma^{a}=1}^{q}\mathrm{e}^{2\beta\left[J_{1}\left(\delta_{\sigma_{1}^{c},\sigma^{a}}+\delta_{\sigma^{a},\sigma^{c}_{2}}\right)+h_{2}\delta_{\sigma^{a},1}\right]}, \tag{26}\] where we use the following notation \[\mathrm{e}^{\beta J_{ab}\delta_{\sigma_{1}^{a},\sigma_{1}^{b}}}=1+(y-1)\delta_{\sigma_{1}^{a},\sigma_{1}^{b}}, \tag{27}\] with \(y=\mathrm{e}^{\beta J_{ab}}\). Thus, the Boltzmann factor (25) can be simplified, after some algebraic manipulation, to \[w(\sigma_{1}^{c},\sigma_{2}^{c})=\nu_{0}+\nu_{1}\delta_{\sigma_{1}^{c},\sigma_{2}^{c}}+\nu_{2}\delta_{\sigma_{1}^{c},1}\delta_{1,\sigma_{2}^{c}}+\nu_{3}\left(\delta_{\sigma_{1}^{c},1}+\delta_{1,\sigma_{2}^{c}}\right), \tag{28}\] where we introduced the shorthand notation \[\nu_{0} = t_{2} \tag{29}\] \[\nu_{1} = d_{2}-t_{2}\] (30) \[\nu_{2} = d_{1}-d_{2}-2\left(t_{1}-t_{2}\right)\] (31) \[\nu_{3} = t_{1}-t_{2}, \tag{32}\] with the Boltzmann factors denoted by \(w(1,1)=d_{1}\), \(w(\mu,\mu)=d_{2}\), \(w(1,\mu)=t_{1}\) and \(w(\mu,\mu^{\prime})=t_{2}\), where \(\mu\) and \(\mu^{\prime}\neq\mu\) take values in \(\{2,3,\ldots,q\}\).
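The reduction of the Boltzmann factor to the four values \(d_{1},d_{2},t_{1},t_{2}\) and the \(\nu\)-decomposition (28) can be checked by brute force. The sketch below performs the dimer trace of Eq. (25) directly; all parameter values, and the label \(h_{1}\) for the nodal-spin field, are illustrative assumptions:

```python
import math

# Illustrative parameters (assumptions): h1 acts on the nodal spins,
# h2 on the dimer spins, as in Eq. (25).
beta, Jab, J1, h1, h2, q = 0.7, -1.4, -1.0, 0.3, 0.2, 4

def w(s1, s2):
    """Boltzmann factor of Eq. (25): explicit trace over the dimer spins."""
    pref = math.exp(beta*h1/2 * ((s1 == 1) + (s2 == 1)))
    tot = 0.0
    for sa in range(1, q + 1):
        for sb in range(1, q + 1):
            tot += math.exp(beta*Jab*(sa == sb)
                            + beta*h2*((sa == 1) + (sb == 1))
                            + beta*J1*((s1 == sa) + (s1 == sb)
                                       + (sa == s2) + (sb == s2)))
    return pref * tot

d1, d2, t1, t2 = w(1, 1), w(2, 2), w(1, 2), w(2, 3)
nu0, nu1, nu2, nu3 = t2, d2 - t2, d1 - d2 - 2*(t1 - t2), t1 - t2

# Eq. (28) reproduces the directly computed Boltzmann factor for every pair
for s1 in range(1, q + 1):
    for s2 in range(1, q + 1):
        rhs = (nu0 + nu1*(s1 == s2) + nu2*(s1 == 1)*(s2 == 1)
               + nu3*((s1 == 1) + (s2 == 1)))
        assert math.isclose(w(s1, s2), rhs, rel_tol=1e-9)
```

The loop confirms that \(w\) depends only on whether each nodal argument equals \(1\) and whether the two arguments coincide, which is what makes the four-value parametrization exact.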
On the other hand, based on the Hamiltonian considered in reference [31], let us write the effective one-dimensional Potts-Zimm-Bragg model, whose Hamiltonian has the following form \[\mathsf{H}=-\sum_{i=1}^{N}\left\{K_{0}+K\delta_{\sigma_{i}^{c},\sigma_{i+1}^{c}}+K_{1}\delta_{\sigma_{i}^{c},1}\delta_{1,\sigma_{i+1}^{c}}+h\delta_{\sigma_{i}^{c},1}\right\}, \tag{33}\] where \(K_{0}\), \(K\), \(K_{1}\) and \(h\) are the effective parameters. Therefore, the corresponding Boltzmann factor of the effective model becomes \[\mathsf{w}(\sigma_{1}^{c},\sigma_{2}^{c})=\mathrm{e}^{\beta\left\{K_{0}+K\delta_{\sigma_{1}^{c},\sigma_{2}^{c}}+K_{1}\delta_{\sigma_{1}^{c},1}\delta_{1,\sigma_{2}^{c}}+\frac{h}{2}\left(\delta_{\sigma_{1}^{c},1}+\delta_{1,\sigma_{2}^{c}}\right)\right\}}. \tag{34}\] Using the decoration transformation, we impose the condition \(w(\sigma_{1}^{c},\sigma_{2}^{c})=\mathsf{w}(\sigma_{1}^{c},\sigma_{2}^{c})\). This results in four non-equivalent algebraic equations that determine the four unknown effective parameters of the Hamiltonian (33). Solving this system of equations yields \[K_{0} = \frac{1}{\beta}\ln\left[w(\mu,\mu^{\prime})\right]=\frac{1}{\beta}\ln\left(t_{2}\right) \tag{35}\] \[K = \frac{1}{\beta}\ln\left[\frac{w(\mu,\mu)}{w(\mu,\mu^{\prime})}\right]=\frac{1}{\beta}\ln\left(\frac{d_{2}}{t_{2}}\right)\] (36) \[K_{1} = \frac{1}{\beta}\ln\left[\frac{w(1,1)\,w(\mu,\mu^{\prime})^{2}}{w(\mu,\mu)\,w(1,\mu)^{2}}\right]=\frac{1}{\beta}\ln\left(\frac{d_{1}t_{2}^{2}}{d_{2}t_{1}^{2}}\right)\] (37) \[h = \frac{2}{\beta}\ln\left[\frac{w(1,\mu)}{w(\mu,\mu^{\prime})}\right]=\frac{2}{\beta}\ln\left(\frac{t_{1}}{t_{2}}\right). \tag{38}\] This transformation maps the diamond chain Potts model (1) onto the effective one-dimensional Potts-Zimm-Bragg model [31].
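As a quick consistency check of the decoration transformation, one can pick arbitrary positive Boltzmann factors, solve the four matching conditions \(w=\mathsf{w}\) for \(K_{0},K,K_{1},h\), and verify that the effective bond factor (34) reproduces all four values. The numerical values below are arbitrary assumptions:

```python
import math

beta = 0.9
d1, d2, t1, t2 = 5.0, 2.0, 3.0, 1.5     # arbitrary positive Boltzmann factors

# Effective parameters obtained from the four matching conditions w = w_eff
K0 = math.log(t2) / beta
K  = math.log(d2 / t2) / beta
K1 = math.log(d1 * t2**2 / (d2 * t1**2)) / beta
h  = 2 * math.log(t1 / t2) / beta

def w_eff(s1, s2):
    """Effective Potts-Zimm-Bragg bond factor, Eq. (34)."""
    return math.exp(beta * (K0 + K*(s1 == s2) + K1*(s1 == 1)*(s2 == 1)
                            + h/2*((s1 == 1) + (s2 == 1))))

assert math.isclose(w_eff(1, 1), d1)    # w(1,1)    = d1
assert math.isclose(w_eff(2, 2), d2)    # w(mu,mu)  = d2
assert math.isclose(w_eff(1, 2), t1)    # w(1,mu)   = t1
assert math.isclose(w_eff(2, 3), t2)    # w(mu,mu') = t2
```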
## Appendix B Application of Markov chain theory

It is possible to construct a mapping of our one-dimensional model to a Markov chain if we take as the entries of a transition matrix \(P_{\alpha\gamma}\) the conditional probabilities \(P(\gamma|\alpha)\) of the state \(\gamma=\left|\begin{smallmatrix}\xi_{i+1}\\ \eta_{i+1}\end{smallmatrix}\zeta_{i+1}\right\rangle\) in the \((i+1)\)th cell, given that the \(i\)th cell is in the state \(\alpha=\left|\begin{smallmatrix}\xi_{i}\\ \eta_{i}\end{smallmatrix}\zeta_{i}\right\rangle\). The conditional probabilities are determined from the Bayes formula \(P(\alpha\gamma)=P(\alpha)P(\gamma|\alpha)\), where, in turn, \[P(\alpha) = \left\langle\Delta_{i,\alpha}\right\rangle, \tag{39}\] \[P(\alpha\gamma) = \left\langle\Delta_{i,\alpha}\Delta_{i+1,\gamma}\right\rangle, \tag{40}\] and \(\Delta_{i,\alpha}\) is the projector on the state \(\alpha\) for the \(i\)th cell. Using the transfer matrix \(V\), built on the states \(\alpha\), we find \[\left\langle\Delta_{i,\alpha}\right\rangle=\lim_{N\rightarrow\infty}\frac{\mathrm{Tr}\left(V^{i-1}\Delta_{i,\alpha}V^{N-i+1}\right)}{\mathrm{Tr}\left(V^{N}\right)}=\] \[=\lim_{N\rightarrow\infty}\frac{\sum_{k}\left\langle\alpha|\lambda_{k}\right\rangle\lambda_{k}^{N}\left\langle\lambda_{k}|\alpha\right\rangle}{\sum_{k}\lambda_{k}^{N}}=\left\langle\alpha|\lambda_{1}\right\rangle\left\langle\lambda_{1}|\alpha\right\rangle, \tag{41}\] \[\left\langle\Delta_{i,\alpha}\Delta_{i+1,\gamma}\right\rangle=\frac{V_{\alpha\gamma}}{\lambda_{1}}\left\langle\gamma|\lambda_{1}\right\rangle\left\langle\lambda_{1}|\alpha\right\rangle. \tag{42}\] Here \(\lambda_{1}\) is the maximum eigenvalue of the transfer matrix \(V\). For a positive matrix, the coefficients \(v_{\alpha}=\left\langle\alpha|\lambda_{1}\right\rangle\) can be chosen positive, according to Perron's theorem [37].
Assuming that \[P_{\alpha\gamma}=P(\gamma|\alpha)=\frac{\langle\Delta_{i,\alpha}\Delta_{i+1,\gamma}\rangle}{\langle\Delta_{i,\alpha}\rangle}, \tag{100}\] we obtain \[P_{\alpha\gamma}=\frac{V_{\alpha\gamma}v_{\gamma}}{\lambda_{1}\,v_{\alpha}}. \tag{101}\] The stochastic properties of the matrix \(P_{\alpha\gamma}\) can be checked directly: \[\sum_{\gamma}P_{\alpha\gamma}=\frac{1}{\lambda_{1}\,v_{\alpha}}\sum_{\gamma}V_{\alpha\gamma}v_{\gamma}=1. \tag{102}\] Equation (101) for constructing a transition matrix is known in the theory of non-negative matrices [37], but the expression (100) reveals its physical content for our model. This allows us to use the results of a very advanced field of mathematics, the theory of Markov chains. The state of the system is determined by the stationary probability vector \(\mathbf{w}\) of the Markov chain, which can be found from the following equations \[\sum_{\alpha}w_{\alpha}P_{\alpha\gamma}=w_{\gamma},\quad\sum_{\alpha}w_{\alpha}=1. \tag{103}\] Using (101), one can check that \(w_{\alpha}=P(\alpha)\), and if the transfer matrix \(V\) is chosen symmetric, then \(w_{\alpha}=v_{\alpha}^{2}\). For the magnetizations, we obtain the following expressions \[m_{c}=\mathbf{w}\mathbf{m}_{c},\quad m_{ab}=\mathbf{w}\mathbf{m}_{ab}, \tag{104}\] where the \(\alpha\)th component of the vector \(\mathbf{m}\) equals the corresponding magnetization of the state \(\alpha\). The calculation of the transition matrix \(P\) involves finding the maximum eigenvalue \(\lambda_{1}\) of the transfer matrix \(V\), whose dimension for our model is \(q^{3}\). However, the dimension of the matrices can be reduced using the lumpability method for reducing the size of the state space of a Markov chain [38]. We divide the original set of \(m\) states into \(M\) groups and find the lumped transition matrix.
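Before lumping, the construction of Eqs. (101)-(103) can be verified on a small toy transfer matrix (random and symmetric by assumption, so that \(w_{\alpha}=v_{\alpha}^{2}\) applies):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
V = rng.random((n, n)) + 0.1              # positive entries (toy example)
V = (V + V.T) / 2                         # symmetric toy transfer matrix

lam, vec = np.linalg.eigh(V)
lam1, v = lam[-1], np.abs(vec[:, -1])     # Perron eigenvalue, positive vector

P = V * v[None, :] / (lam1 * v[:, None])  # Eq. (101): P_ag = V_ag v_g/(lam1 v_a)
assert np.allclose(P.sum(axis=1), 1.0)    # row-stochastic, Eq. (102)

w = v**2 / np.sum(v**2)                   # for symmetric V, w_a = v_a^2
assert np.allclose(w @ P, w)              # stationary vector, Eq. (103)
```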
Formally, this can be done using matrices \(L_{A\alpha}\) and \(R_{\gamma G}\), \(\alpha,\gamma=1,\ldots m\), \(A,G=1,\ldots M\): \[P_{AG}=\sum_{\alpha\gamma}L_{A\alpha}P_{\alpha\gamma}R_{\gamma G}, \tag{105}\] where \(R_{\gamma G}=1\) if the state \(\gamma\) belongs to the group with number \(G\), and \(R_{\gamma G}=0\) otherwise; \(L_{A\alpha}=1/\dim(A)\) if the state \(\alpha\) belongs to the group with number \(A\), consisting of \(\dim(A)\) elements, and \(L_{A\alpha}=0\) otherwise. In our problem, it is natural to divide the \(m=q^{3}\) states into the phases (19-23); also, to complete the set of states, it is necessary to supplement this list with two phases, \(\mathrm{FR}^{\prime}\) and \(\mathrm{FR}^{\prime\prime}\), whose states have the following form \[\left|FR^{\prime}\right\rangle=\prod_{i}\left|\begin{smallmatrix}1\\ \mu\end{smallmatrix}\mu\right\rangle_{i},\quad\left|FR^{\prime\prime}\right\rangle=\prod_{i}\left|\begin{smallmatrix}\mu_{i}\\ \mu_{i}\end{smallmatrix}1\right\rangle_{i}. \tag{106}\] Their energy and entropy at zero temperature have the values \[\mathrm{FR}^{\prime}: \varepsilon_{0}=-\left(2J_{1}+h\right), \mathcal{S}_{0}=\ln 2, \tag{107}\] \[\mathrm{FR}^{\prime\prime}: \varepsilon_{0}=-\left(J_{ab}+h\right), \mathcal{S}_{0}=\ln\left(q-1\right). \tag{108}\] The states \(\mathrm{FR}^{\prime}\) and \(\mathrm{FR}^{\prime\prime}\) are not represented in the ground-state phase diagrams of Fig.3 and 4 by domains of their own, but appear as impurities in mixed states at the phase boundaries. Thus, for any \(q>3\) the matrix \(P_{AG}\) will have dimension \(M=11\), and \(M=10\) for \(q=3\). The equilibrium state of the system will correspond to the stationary probability vector of the lumped Markov chain, \[\sum_{A}w_{A}P_{AG}=w_{G},\quad\sum_{A}w_{A}=1, \tag{109}\] and the expressions for the magnetizations formally do not change. The lumpability (105) can also be applied to the transfer matrix \(V\), if we are only interested in its maximum eigenvalue.
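A toy block-structured matrix illustrates the lumping (105) applied to a transfer matrix; the grouping and the group-level weights below are artificial assumptions chosen so that the matrix is exactly lumpable:

```python
import numpy as np

# Toy lumpable example: V entries depend only on the groups of the two states.
groups = [0, 0, 1, 1, 1]                  # m = 5 states -> M = 2 groups
B = np.array([[1.0, 2.0],
              [3.0, 0.5]])                # assumed group-level weights
m, M = len(groups), 2
V = np.array([[B[groups[a], groups[g]] for g in range(m)] for a in range(m)])

# R[g, G] = 1 if state g belongs to group G; L[A, a] = 1/dim(A) if a is in A
R = np.zeros((m, M))
L = np.zeros((M, m))
for s, G in enumerate(groups):
    R[s, G] = 1.0
    L[G, s] = 1.0 / groups.count(G)

V_lumped = L @ V @ R                      # Eq. (105) applied to V
lam1 = np.linalg.eigvals(V).real.max()
lam_l = np.linalg.eigvals(V_lumped).real.max()
assert np.isclose(lam1, lam_l)            # the maximum eigenvalue is preserved
```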
Indeed, the matrix \(V\) is non-negative, so according to the Perron-Frobenius theorem [37] \[\lambda_{1}=\max_{(v\geqslant 0)}\min_{1\leqslant\alpha\leqslant m}\frac{\sum_{\gamma}V_{\alpha\gamma}v_{\gamma}}{v_{\alpha}}. \tag{110}\] Since the matrix \(R\) sums the matrix elements over the states of a group, and the matrix \(L\) removes duplicate rows, the value of \(\lambda_{1}\) in Eq. (110) does not change after lumping. This computational scheme is greatly simplified for the ground state. We will count the energy of the system from the energy of the ground state \(E_{0}=N\varepsilon_{0}\). If \(T=0\), then for all states with energy higher than \(\varepsilon_{0}\) we get \(V_{\alpha\gamma}=0\). A pair of states \(\alpha=\left|\begin{smallmatrix}\xi_{\alpha}\\ \eta_{\alpha}\end{smallmatrix}\zeta_{\alpha}\right\rangle\) and \(\gamma=\left|\begin{smallmatrix}\xi_{\gamma}\\ \eta_{\gamma}\end{smallmatrix}\zeta_{\gamma}\right\rangle\) with energy equal to \(\varepsilon_{0}\) will be called _allowable_ if the state of the interface \(\left|\begin{smallmatrix}\xi_{\gamma}\\ \eta_{\gamma}\end{smallmatrix}\zeta_{\alpha}\right\rangle\) also has energy \(\varepsilon_{0}\) (see Fig.11). For an allowable pair of states, we get \(V_{\alpha\gamma}=1\), otherwise \(V_{\alpha\gamma}=0\). For the lumped matrix, the nonzero matrix elements \(V_{AG}\) are equal to the number of allowable pairs formed by any state \(\alpha\) from the group \(A\) and all states from the group \(G\). As a result, the dimension of a block with nonzero matrix elements will in most cases be less than \(M\). It can also be shown that for the ground state the entropy per cell can be calculated as \[\mathcal{S}_{0}=\ln\lambda_{1}, \tag{111}\] where \(\lambda_{1}\) is the maximum eigenvalue of the matrix \(V_{\alpha\gamma}\) (or \(V_{AG}\)) at \(T=0\).
For a cyclic closed sequence, the probability of the state (\(\alpha_{1}\ldots\alpha_{N}\alpha_{1}\)) has the following form [39]: \[P\left(\alpha_{1}\ldots\alpha_{N}\alpha_{1}\right) = P(\alpha_{1}|\alpha_{2})P(\alpha_{2}|\alpha_{3})\ldots P(\alpha_{N}|\alpha_{1}) \tag{112}\] \[= \prod_{\alpha\gamma}P(\alpha|\gamma)^{NP(\alpha\gamma)}=p_{0}^{N}.\] In the ground state, the equation \(\mathcal{S}_{0}=-\ln p_{0}\) is valid [40]. Consider the limit of \(\ln p_{0}\) at zero temperature. We have \[\ln p_{0}=\sum_{\alpha}P(\alpha\alpha)\ln P(\alpha|\alpha)\,+\\ +\sum_{\alpha<\gamma}P(\alpha\gamma)\ln\left[P(\alpha|\gamma)P(\gamma|\alpha)\right]. \tag{101}\] Using Equations (100) and (101), we obtain: \[P(\alpha|\alpha)=\frac{V_{\alpha\alpha}}{\lambda_{1}},\quad P(\alpha|\gamma)P(\gamma|\alpha)=\frac{V_{\alpha\gamma}V_{\gamma\alpha}}{\lambda_{1}^{2}}. \tag{102}\] The expression for \(\ln p_{0}\) then takes the form \[\ln p_{0}=\sum_{\alpha\gamma}P(\alpha\gamma)\ln V_{\alpha\gamma}-\ln\lambda_{1}. \tag{103}\] For allowable pairs \(P(\alpha\gamma)\neq 0\) and \(V_{\alpha\gamma}\to 1\) at \(T\to 0\), hence \(P(\alpha\gamma)\ln V_{\alpha\gamma}\to 0\). For the other pairs \(V_{\alpha\gamma}\to 0\), and hence \(P(\alpha\gamma)\ln V_{\alpha\gamma}\propto V_{\alpha\gamma}\ln V_{\alpha\gamma}\to 0\) at \(T\to 0\). As a result, \(\mathcal{S}\to\ln\lambda_{1}\) at \(T\to 0\). _Example 1._ Consider the ground state at the boundary between the phases \(\mathrm{FR}_{1}\) and \(\mathrm{FR}_{4}\). The energies of these phases, \(\varepsilon_{FR_{1}}=-\left(J_{ab}+2h\right)\) and \(\varepsilon_{FR_{4}}=-J_{ab}\), become equal to \(\varepsilon_{0}=-J_{ab}\) if \(h=0\). The \(\mathrm{FR}^{\prime\prime}\) phase has the same energy.
These phase states for the \(i\)th cell have the form \[\left|FR_{1}\right\rangle_{i} = \left|\begin{matrix}1\\ 1\end{matrix}\mu_{i}\right\rangle, \tag{104}\] \[\left|FR_{4}\right\rangle_{i} = \left|\begin{matrix}\nu_{i}\\ \nu_{i}\end{matrix}\mu_{i}\right\rangle,\] (105) \[\left|FR^{\prime\prime}\right\rangle_{i} = \left|\begin{matrix}\mu_{i}\\ \mu_{i}\end{matrix}1\right\rangle, \tag{106}\] so that \[\mathbf{m}_{c}=\begin{pmatrix}0\\ 0\\ 1\end{pmatrix},\quad\mathbf{m}_{ab}=\begin{pmatrix}2\\ 0\\ 0\end{pmatrix}. \tag{107}\] The nonzero block of the matrix \(V_{AB}\) has the following form: \[V=\begin{pmatrix}q-1&(q-2)^{2}&q-2\\ q-1&(q-2)^{2}&q-2\\ 0&(q-1)(q-2)&q-1\end{pmatrix}. \tag{108}\] Here it is taken into account that the interface state for the pair \(\mathrm{FR}^{\prime\prime}\)-\(\mathrm{FR}_{1}\) has the form \(\mathrm{FM}_{1}\), with energy higher than \(\varepsilon_{0}\), while for pairs such as \(\mathrm{FR}_{1}\)-\(\mathrm{FR}^{\prime\prime}\) an invalid state \(\mathrm{FM}_{2}\) may also occur. Finding the maximum eigenvalue \(\lambda_{1}=(q-1)^{2}\), we get \[\mathcal{S}_{0}=2\ln\left(q-1\right),\quad\mathbf{v}=C\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}, \tag{109}\] and calculate the transition matrix \[P_{AG}=\frac{V_{AG}}{\lambda_{1}}\frac{v_{G}}{v_{A}}:\ P=\frac{1}{q-1}\begin{pmatrix}1&\frac{(q-2)^{2}}{q-1}&\frac{q-2}{q-1}\\ 1&\frac{(q-2)^{2}}{q-1}&\frac{q-2}{q-1}\\ 0&q-2&1\end{pmatrix}. \tag{110}\] Finding the stationary distribution of the lumped Markov chain \[P^{T}\mathbf{w}=\mathbf{w}:\quad\mathbf{w}=\frac{1}{q}\begin{pmatrix}1\\ q-2\\ 1\end{pmatrix}, \tag{111}\] we calculate the magnetizations: \[m_{c}=\mathbf{w}\mathbf{m}_{c}=\frac{1}{q},\quad m_{ab}=\mathbf{w}\mathbf{m}_{ab}=\frac{2}{q}.
\tag{112}\] Note that without taking into account the \(\mathrm{FR}^{\prime\prime}\) states, the nonzero magnetization \(m_{c}\) at the boundary between the phases \(\mathrm{FR}_{1}\) and \(\mathrm{FR}_{4}\) looks mysterious, since for these phases themselves \(m_{c}=0\). Similarly, the \(\mathrm{FR}^{\prime}\) states contribute to the state at the \(\mathrm{FR}_{2}\)-\(\mathrm{FR}_{3}\) phase boundary, where the energies of these three states are equal.

Figure 11: An illustration of the interface between the \(i\)th and \((i+1)\)th cells of the diamond chain.

_Example 2._ For the ground state at the boundary between the phases \(\mathrm{FR}_{2}\) and \(\mathrm{FR}_{6}\), the energies of these phases, \(\varepsilon_{FR_{2}}=-2\left(J_{1}+h\right)\) and \(\varepsilon_{FR_{6}}=-h\), become equal to \(\varepsilon_{0}=2J_{1}\) if \(h=-2J_{1}\). The phase \(\mathrm{FR}_{5}\) has the same energy. These phase states for the \(i\)th cell have the form \[\left|FR_{2}\right\rangle_{i} = \left|\begin{matrix}1\\ \mu_{i}\end{matrix}1\right\rangle, \tag{113}\] \[\left|FR_{6}\right\rangle_{i} = \left|\begin{matrix}1\\ \mu_{i}\end{matrix}\nu_{i}\right\rangle,\] (114) \[\left|FR_{5}\right\rangle_{i} = \left|\begin{matrix}\mu_{i}\\ \nu_{i}\end{matrix}1\right\rangle, \tag{115}\] and \[\mathbf{m}_{c}=\begin{pmatrix}1\\ 0\\ 1\end{pmatrix},\quad\mathbf{m}_{ab}=\begin{pmatrix}1\\ 1\\ 0\end{pmatrix}. \tag{116}\] The lumped transfer matrix \[V=\begin{pmatrix}2(q-1)&2(q-1)(q-2)&(q-1)(q-2)\\ 2(q-2)&2(q-2)^{2}&0\\ 2(q-1)&2(q-1)(q-2)&(q-1)(q-2)\end{pmatrix} \tag{117}\] has the maximum eigenvalue \[\lambda_{1}=\frac{1}{2}\left[3q^{2}-9q+8+\phi_{1}(q)\right], \tag{101}\] where \[\phi_{1}(q)=\sqrt{q^{4}+2q^{3}-15q^{2}+16q}.
\tag{102}\] We write the corresponding eigenvector \[\mathbf{v}=C\left(\begin{array}{c}1\\ \dfrac{\left(q-2\right)\left(q^{2}-3q+4+\phi_{1}(q)\right)}{\left(q-1\right)\left(3q^{2}-9q+8+\phi_{1}(q)\right)}\\ 1\end{array}\right), \tag{103}\] and find the transition matrix: \[P=\left(\begin{array}{ccc}\frac{2(q-1)}{\lambda_{1}}&\frac{2(q-2)^{2}\left(\lambda_{1}-q^{2}+3q-2\right)}{\lambda_{1}^{2}}&\frac{(q-2)(q-1)}{\lambda_{1}}\\ \frac{2(q-1)}{\lambda_{1}-q^{2}+3q-2}&\frac{2(q-2)^{2}}{\lambda_{1}}&0\\ \frac{2(q-1)}{\lambda_{1}}&\frac{2(q-2)^{2}\left(\lambda_{1}-q^{2}+3q-2\right)}{\lambda_{1}^{2}}&\frac{(q-2)(q-1)}{\lambda_{1}}\end{array}\right). \tag{104}\] The stationary probabilities can be reduced to the form \[\mathbf{w}=\frac{1}{2\phi_{1}(q)}\begin{pmatrix}4(q-1)\\ \phi_{1}(q)+q^{2}-7q+8\\ \phi_{1}(q)-q^{2}+3q-4\end{pmatrix}, \tag{105}\] which allows us to find the magnetizations: \[m_{c}=\frac{\phi_{1}(q)-q^{2}+7q-8}{2\phi_{1}(q)}, \tag{106}\] \[m_{ab}=\frac{\phi_{1}(q)+q^{2}-3q+4}{2\phi_{1}(q)}. \tag{107}\] In this way, it is possible to obtain all the values given in Table 1. A special situation occurs at \(q=3\), when the ground-state energy has the value \(\varepsilon_{0}=-h\). This is the energy of the FR\({}_{5}\) and FR\({}_{6}\) phases. At \(q>3\), the entropy of the FR\({}_{6}\) phase is greater than that of the FR\({}_{5}\) phase, so for \(T>0\) the free energy of the FR\({}_{6}\) phase is lower than that of the FR\({}_{5}\) phase, and in the limit \(T\to 0\) the ground state is the FR\({}_{6}\) phase. If \(q=3\), the entropies of the FR\({}_{5}\) and FR\({}_{6}\) phases are equal, \(\mathcal{S}_{0}=\ln 2\). The nature of the ground state in this case can be investigated using the Markov chain method proposed above. At a sufficiently low temperature, the state of the system is formed by the phases whose energies are closest to the energy of the ground state.
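Before turning to the special case \(q=3\), the two examples above can be checked numerically; \(q=5\) below is an illustrative choice:

```python
import numpy as np

q = 5  # illustrative; any q > 3 behaves analogously

# Example 1: lumped transfer matrix at the FR1-FR4 boundary (h = 0), Eq. (108)
V1 = np.array([[q-1, (q-2)**2,    q-2],
               [q-1, (q-2)**2,    q-2],
               [0,   (q-1)*(q-2), q-1]], dtype=float)
lam1 = np.linalg.eigvals(V1).real.max()
assert np.isclose(lam1, (q-1)**2)          # so S0 = 2 ln(q-1)

P1 = V1 / lam1                             # uniform Perron vector -> P = V/lam1
w1 = np.array([1, q-2, 1]) / q             # Eq. (111)
assert np.allclose(w1 @ P1, w1)            # stationary distribution
assert np.isclose(w1 @ np.array([0, 0, 1]), 1/q)    # m_c  = 1/q
assert np.isclose(w1 @ np.array([2, 0, 0]), 2/q)    # m_ab = 2/q

# Example 2: FR2-FR6 boundary (h = -2 J1), Eq. (117)
V2 = np.array([[2*(q-1), 2*(q-1)*(q-2), (q-1)*(q-2)],
               [2*(q-2), 2*(q-2)**2,    0],
               [2*(q-1), 2*(q-1)*(q-2), (q-1)*(q-2)]], dtype=float)
vals, vecs = np.linalg.eig(V2)
k = int(np.argmax(vals.real))
lam2, v2 = vals.real[k], np.abs(vecs[:, k].real)
phi1 = np.sqrt(q**4 + 2*q**3 - 15*q**2 + 16*q)
assert np.isclose(lam2, (3*q**2 - 9*q + 8 + phi1) / 2)

P2 = V2 * v2[None, :] / (lam2 * v2[:, None])
assert np.allclose(P2.sum(axis=1), 1.0)
w2 = np.array([4*(q-1), phi1 + q**2 - 7*q + 8, phi1 - q**2 + 3*q - 4]) / (2*phi1)
assert np.allclose(w2 @ P2, w2)            # stationary probabilities, Eq. (105)
assert np.isclose(w2 @ np.array([1, 0, 1]), (phi1 - q**2 + 7*q - 8) / (2*phi1))
```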
In the parameter range \(h>0\), \(J_{1}<-h/2\), and \(J_{ab}<-h\) (see Fig.4a), it is natural to take into account, in addition to the FR\({}_{5}\) and FR\({}_{6}\) phases, also the neighboring phases FR\({}_{1}\) and FR\({}_{2}\). The transfer matrix with entries \(V_{AB}\), where \(A,B=\text{FR}_{1}\), FR\({}_{2}\), FR\({}_{5}\), FR\({}_{6}\), has the form \[V=2z\begin{pmatrix}yz&\sqrt{xy}z\left(1+x\right)&x\sqrt{yz}&\sqrt{yz}\left(1+x\right)\\ x^{5/2}\sqrt{y}z&2x^{2}z&\sqrt{xz}&2x^{3/2}\sqrt{z}\\ x^{2}\sqrt{yz}&2x^{3/2}\sqrt{z}&1&2x\\ \sqrt{yz}&\sqrt{xz}\left(1+x\right)&x&1+x\end{pmatrix}. \tag{108}\] Here \(x=\text{e}^{\beta J_{1}}\), \(y=\text{e}^{\beta J_{ab}}\), \(z=\text{e}^{\beta h}\), and it is taken into account that \(q=3\). Explicit expressions for the maximum eigenvalue \(\lambda_{1}\) and its eigenvector \(\mathbf{v}\) have a rather cumbersome form; however, for the parameters under consideration and \(\beta\gg 1\), their approximate expressions can be used: \[\lambda_{1}=2z\left(1+u\right),\quad\mathbf{v}=C\begin{pmatrix}\sqrt{yz}\\ 2x^{3/2}\sqrt{z}/u\\ 2x/u\\ 1\end{pmatrix}, \tag{109}\] where \[u=\frac{1}{2}\left(yz+\sqrt{y^{2}z^{2}+8x^{2}z}\right). \tag{110}\] Using these expressions in Eq. (101) and keeping only the leading terms for \(\beta\gg 1\) in the entries of the matrix \(V\), we obtain the transition matrix: \[P=\frac{1}{1+u}\begin{pmatrix}yz&\frac{2x^{2}z}{u}&\frac{2x^{2}}{u}&1\\ \frac{1}{2}uxyz&2x^{2}z&1&u\\ \frac{1}{2}uxyz&2x^{2}z&1&u\\ yz&\frac{2x^{2}z}{u}&\frac{2x^{2}}{u}&1\end{pmatrix}. \tag{111}\] In the parameter domain under consideration, the stochastic properties of this matrix hold quite accurately at \(\beta\gg 1\).
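The accuracy of the approximation (109) is easy to probe numerically; the parameter values below are an illustrative choice satisfying \(h>0\), \(J_{1}<-h/2\), \(J_{ab}<-h\):

```python
import numpy as np

# Illustrative low-temperature parameters (assumptions) in the required range
beta, J1, Jab, h = 30.0, -0.6, -1.2, 1.0
x, y, z = np.exp(beta*J1), np.exp(beta*Jab), np.exp(beta*h)

# Transfer matrix of Eq. (108) for q = 3
V = 2*z*np.array(
    [[y*z,                 np.sqrt(x*y)*z*(1+x), x*np.sqrt(y*z), np.sqrt(y*z)*(1+x)],
     [x**2.5*np.sqrt(y)*z, 2*x**2*z,             np.sqrt(x*z),   2*x**1.5*np.sqrt(z)],
     [x**2*np.sqrt(y*z),   2*x**1.5*np.sqrt(z),  1.0,            2*x],
     [np.sqrt(y*z),        np.sqrt(x*z)*(1+x),   x,              1+x]])

lam1 = np.linalg.eigvals(V).real.max()
u = 0.5*(y*z + np.sqrt((y*z)**2 + 8*x**2*z))     # Eq. (110)
assert abs(lam1 / (2*z*(1 + u)) - 1) < 0.02      # Eq. (109) holds at beta >> 1
```

At these parameters the relative deviation from \(\lambda_{1}=2z(1+u)\) is at the sub-percent level and shrinks further as \(\beta\) grows.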
The qualitative difference of Markov chains generated by \(P\) at low temperature depends on the asymptotic behavior of the parameter \(u\): \[u\xrightarrow[\beta\gg 1]{}\begin{cases}yz,&2J_{ab}+h>2J_{1},\\ 2yz=2x\sqrt{z},&2J_{ab}+h=2J_{1},\\ x\sqrt{2z},&2J_{ab}+h<2J_{1}.\end{cases} \tag{112}\] Consider the case of \(2J_{ab}+h>2J_{1}\). Under this condition, the following inequalities hold: \(x^{2}\ll x^{2}z\ll yz\), \(uxyz\ll yz\). For clarity, we keep in the matrix \(P\) only entries of order \(1\) and of the first order of smallness, and replace matrix elements of higher orders of smallness with zeros. As a result, the transition matrix takes the form \[P=\begin{pmatrix}a_{1}&0&0&1-a_{1}\\ 0&0&1-a_{1}&a_{1}\\ 0&0&1-a_{1}&a_{1}\\ a_{1}&0&0&1-a_{1}\end{pmatrix},\quad a_{1}=yz. \tag{113}\] The transition graph of this Markov chain is shown in Fig.12a. The state \(\text{FR}_{2}\) in this case is transient and is omitted for simplicity. Thin and thick lines correspond to the transition probabilities \(a_{1}\) and \(1-a_{1}\). The stationary state contains an exponentially small admixture of the \(\text{FR}_{1}\) phase and in the limit \(T\to 0\) becomes the pure \(\text{FR}_{6}\) phase: \[\mathbf{w}=\begin{pmatrix}a_{1}\\ 0\\ 0\\ 1-a_{1}\end{pmatrix}\xrightarrow[T\to 0]{}\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix}. \tag{114}\] If \(2J_{ab}+h<2J_{1}\), when \(u\approx x\sqrt{2z}\), the estimates \(uxyz\ll yz\ll u\), \(x^{2}\ll u\), and \(2x^{2}z/u\approx u\) hold. Replacing exponentially small entries with zeros, we get the transition matrix \[P=\begin{pmatrix}0&a_{2}&0&1-a_{2}\\ 0&0&1-a_{2}&a_{2}\\ 0&0&1-a_{2}&a_{2}\\ 0&a_{2}&0&1-a_{2}\end{pmatrix},\quad a_{2}=x\sqrt{2z}. \tag{115}\] The corresponding graph without the transient state \(\text{FR}_{1}\) is shown in Fig.12b. Thin and thick lines correspond to the transition probabilities \(a_{2}\) and \(1-a_{2}\).
The stationary state in this case contains an exponentially small admixture of the \(\text{FR}_{2}\) phase and at \(T\to 0\) transforms into a mixture of independent phases \(\text{FR}_{5}\) and \(\text{FR}_{6}\) having equal fractions: \[\mathbf{w}=\frac{1}{2}\begin{pmatrix}0\\ a_{2}\\ 1-a_{2}\\ 1\end{pmatrix}\xrightarrow[T\to 0]{}\frac{1}{2}\begin{pmatrix}0\\ 0\\ 1\\ 1\end{pmatrix}. \tag{116}\] It is this state that is designated as \(\text{FR}_{5}+\text{FR}_{6}\) in Fig.4a. On the boundary \(2J_{ab}+h=2J_{1}\), similar considerations give the transition matrix \[P=\begin{pmatrix}a_{3}&a_{3}&0&1-2a_{3}\\ 0&0&1-2a_{3}&a_{3}\\ 0&0&1-2a_{3}&a_{3}\\ a_{3}&a_{3}&0&1-2a_{3}\end{pmatrix},\;a_{3}=x\sqrt{z}=yz. \tag{117}\] The stationary state in this case transforms at \(T\to 0\) into a mixture of independent phases \(\text{FR}_{5}\) and \(\text{FR}_{6}\) with a ratio of fractions of \(1/2\): \[\mathbf{w}=\frac{1}{3}\begin{pmatrix}2a_{3}\\ 2a_{3}\\ 1-2a_{3}\\ 2-2a_{3}\end{pmatrix}\xrightarrow[T\to 0]{}\frac{1}{3}\begin{pmatrix}0\\ 0\\ 1\\ 2\end{pmatrix}. \tag{118}\] At \(h<0\), similar results for the composition of the ground-state phases, shown in Fig.4b, can be obtained by taking into account the admixture to the \(\text{FR}_{5}\) and \(\text{FR}_{6}\) phases of states of the neighboring phases \(\text{FR}_{3}\) and \(\text{FR}_{4}\). A special composition of the ground state also occurs at \(h=0\) in the region of the phases \(\text{FR}_{6}\) and \(\text{FR}_{5}+\text{FR}_{6}\) in Figures 4d and 4f. The stationary state at \(T=0\) on the line \(h=0\) is a mixture of \(\text{FR}_{5}\) and \(\text{FR}_{6}\) with a ratio of fractions of \(1/2\).
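The low-temperature stationary states above can be checked numerically. The sketch below (plain Python, power iteration; the value of \(a_{2}\) is purely illustrative) builds the transition matrix for the case \(2J_{ab}+h<2J_{1}\) and confirms that the stationary vector tends to a mixture of the \(\text{FR}_{5}\) and \(\text{FR}_{6}\) phases with equal fractions:

```python
def stationary(P, iters=20000):
    """Left power iteration w <- w P for a row-stochastic matrix P."""
    n = len(P)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(w[i] * P[i][j] for i in range(n)) for j in range(n)]
    return w

# Low-temperature transition matrix for 2*J_ab + h < 2*J_1;
# a2 = x*sqrt(2 z) is exponentially small at beta >> 1 (illustrative value).
a2 = 1e-3
P = [[0.0, a2,  0.0, 1 - a2],   # states ordered FR1, FR2, FR5, FR6
     [0.0, 0.0, 1 - a2, a2],
     [0.0, 0.0, 1 - a2, a2],
     [0.0, a2,  0.0, 1 - a2]]

w = stationary(P)
# w -> (0, a2/2, (1-a2)/2, 1/2): an equal FR5/FR6 mixture as T -> 0.
```

The FR\({}_{1}\) component vanishes after a single step, and the FR\({}_{2}\) admixture stays of order \(a_{2}\), vanishing in the \(T\to 0\) limit.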
2309.04255
LLMCad: Fast and Scalable On-device Large Language Model Inference
Generative tasks, such as text generation and question answering, hold a crucial position in the realm of mobile applications. Due to their sensitivity to privacy concerns, there is a growing demand for their execution directly on mobile devices. Currently, the execution of these generative tasks heavily depends on Large Language Models (LLMs). Nevertheless, the limited memory capacity of these devices presents a formidable challenge to the scalability of such models. In our research, we introduce LLMCad, an innovative on-device inference engine specifically designed for efficient generative Natural Language Processing (NLP) tasks. The core idea behind LLMCad revolves around model collaboration: a compact LLM, residing in memory, takes charge of generating the most straightforward tokens, while a high-precision LLM steps in to validate these tokens and rectify any identified errors. LLMCad incorporates three novel techniques: (1) Instead of generating candidate tokens in a sequential manner, LLMCad employs the smaller LLM to construct a token tree, encompassing a wider range of plausible token pathways. Subsequently, the larger LLM can efficiently validate all of these pathways simultaneously. (2) It employs a self-adjusting fallback strategy, swiftly initiating the verification process whenever the smaller LLM generates an erroneous token. (3) To ensure a continuous flow of token generation, LLMCad speculatively generates tokens during the verification process by implementing a compute-IO pipeline. Through an extensive series of experiments, LLMCad showcases an impressive token generation speed, achieving rates up to 9.3x faster than existing inference engines.
Daliang Xu, Wangsong Yin, Xin Jin, Ying Zhang, Shiyun Wei, Mengwei Xu, Xuanzhe Liu
2023-09-08T10:44:19Z
http://arxiv.org/abs/2309.04255v1
# LLMCad: Fast and Scalable On-device Large Language Model Inference

###### Abstract.

Generative tasks, such as text generation and question answering, hold a crucial position in the realm of mobile applications. Due to their sensitivity to privacy concerns, there is a growing demand for their execution directly on mobile devices. Currently, the execution of these generative tasks heavily depends on Large Language Models (LLMs). Nevertheless, the limited memory capacity of these devices presents a formidable challenge to the scalability of such models. In our research, we introduce LLMCad, an innovative on-device inference engine specifically designed for efficient generative Natural Language Processing (NLP) tasks. The core idea behind LLMCad revolves around model collaboration: a compact LLM, residing in memory, takes charge of generating the most straightforward tokens, while a high-precision LLM steps in to validate these tokens and rectify any identified errors. LLMCad incorporates three novel techniques: (1) Instead of generating candidate tokens in a sequential manner, LLMCad employs the smaller LLM to construct a token tree, encompassing a wider range of plausible token pathways. Subsequently, the larger LLM can efficiently validate all of these pathways simultaneously. (2) It employs a self-adjusting fallback strategy, swiftly initiating the verification process whenever the smaller LLM generates an erroneous token. (3) To ensure a continuous flow of token generation, LLMCad speculatively generates tokens during the verification process by implementing a compute-IO pipeline. Through an extensive series of experiments, LLMCad showcases an impressive token generation speed, achieving rates up to 9.3\(\times\) faster than existing inference engines.

## 1. Introduction

Generative tasks like text generation, question answering, and translation play a crucial role on mobile devices, as numerous applications rely on them to deliver key functionalities.
For instance, input method applications like Google GBoard heavily leverage text generation capabilities, while private assistants like Apple Siri use question answering. Such tasks are often privacy-sensitive and heavily rely on users' private data, thereby necessitating on-device local inference. Large language models (LLMs), especially those built atop the transformer decoder (Wang et al., 2019) such as GPT-3 (Gupta et al., 2019) and LLaMA (Wang et al., 2019), have become the de-facto approach to solving NLP generative tasks. Recent research in the machine learning community has demonstrated that scaling up such LLMs' parameter size brings accuracy improvements and emergent abilities (Gupta et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), as shown in Figure 1(a). In general, an LLM necessitates more than 1B parameters to learn meaningful representations (Wang et al., 2019), over 10B parameters to exhibit certain arithmetic reasoning abilities (Zhou et al., 2019), and more than 30B parameters to achieve multi-task comprehension capabilities [30]. This phenomenon is well-recognized in the machine learning community as the _scaling law_ [11; 12; 21; 34].

Figure 1. The memory wall hinders LLM's "scaling law" on mobile devices. "-Math", "-NLU", "-Mode", and "-GM" denote LLMs' emergent abilities: math reasoning, multi-task comprehension, modular arithmetic, and learning meaningful representations.

**Key challenge: memory wall.** However, our preliminary experiments in Figure 1(b) reveal that this scaling ability is challenged on mobile devices. Specifically, when LLMs are too large to fit into device memory, mobile DNN engines like MNN [4] and llama.cpp [45] need to repetitively release and load model weights. This results in 59-224\(\times\) longer inference latency. Such a memory wall severely hinders the scaling law: users have to choose between real-time generation and emergent ability.
For instance, 10B parameters represent the minimum size required for LLaMA to possess arithmetic reasoning capabilities, yet it also represents the maximum parameter size for achieving real-time inference on smartphones (e.g., Xiaomi 10).

**LLMCad: breaking the memory wall through model collaboration.** In this paper, we propose LLMCad, the first efficient inference engine for on-device generative NLP tasks. LLMCad delivers LLMs' scaling ability to mobile devices with a tolerable generation speed through _model collaboration_. The main idea is to delegate most tokens to a smaller real-time LLM that can be entirely hosted in device memory (namely, the memory-resident LLM). The design is based on a key observation: while a smaller LLM is inadequate to deliver satisfactory end-to-end sentences, it can correctly generate most easy tokens (e.g., determiners, pronouns, and punctuation). Furthermore, LLMs are often trained as a series of model variants, e.g., T5-Small/Base/Large [55] and LLaMA-7B/13B/33B [64], and a smaller counterpart (e.g., LLaMA-7B and T5-Small, dubbed the _memory-resident model_ in this paper) can often be hosted in memory easily [17; 55; 64; 75]. LLMCad employs a unique form of model collaboration, namely "generate-then-verify" [20; 41]. In this approach, the memory-resident LLM serves as a token generator, while a target LLM acts as a verifier, using its output as the ground truth to inspect and rectify any errors introduced during the token generation process. This approach provides two significant advantages: (1) _No accuracy compromise._ Each token is verified by the target model, so its accuracy is guaranteed. This is crucial, as a wrong token could propagate its error to the subsequent tokens due to the autoregressive nature. (2) _Fast verification_.
As will be detailed in §2.3, the verification of \(N\) tokens can be accomplished within a one-shot inference of the target model, and is therefore much faster than using it to generate \(N\) tokens sequentially. Despite these advantages, applying model collaboration to on-device LLMs introduces three distinctive challenges:

* _Overlooked correct tokens with sub-optimal confidence._ Typically, state-of-the-art LLM engines and studies always use the token with the highest probability as the output. Nevertheless, our observation reveals that some of the generation errors of the memory-resident LLM can be rectified by sub-optimal tokens. Figure 4 gives a real-world example of this phenomenon. Given the significant performance overhead associated with on-device verification, LLMCad must capitalize on these often-overlooked tokens to reduce the frequency of verification.

* _Verification timing._ Another crucial aspect is determining when to initiate the verification process. On-device verification is time-consuming, e.g., taking 7.1s on Jetson TX2. Too early or too late verification wastes mobile devices' scarce computing resources on invalid verifications (i.e., no errors detected) or useless tokens. Prior works have typically relied on either a single token's confidence or the token sequence length, which may not accurately pinpoint the optimal verification timing.

* _IO vs. compute asymmetry._ With an LLM cascade, the large LLM's execution blocks the small model's inference due to the cross-token dependency, and the processor is underutilized because I/O becomes the bottleneck during weight loading. Such a situation severely hampers the inference speed, as the target model must unavoidably be invoked to guarantee correct token generation.

In response, LLMCad designs three novel techniques:

**(1) Token tree generation and verification (§3.2).** Instead of generating and verifying a linear token sequence, LLMCad employs a different approach by constructing and validating a "token tree."
This token tree permits each token to have multiple potential succeeding tokens. To accomplish this efficiently, LLMCad employs three novel modules: (1) a confidence-based branch pacer paces the progress of different branches to prevent the wasteful allocation of computing resources to wrong branches; (2) a tree decoder generates tokens from various branches without incurring the overhead of context switching between them; (3) a non-autoregressive token tree verifier examines and rectifies all errors within a token tree in a batch manner, at the cost of a single iteration.

**(2) Self-adaptive fallback strategy (§3.3).** This strategy is devised to initiate the verification process promptly when the memory-resident LLM generates an incorrect token. It is inspired by two key observations: (1) Typically, each token generated by the memory-resident LLM introduces some "uncertainty" (an imperfect confidence score). LLMCad uses a more accurate metric referred to as the _cumulative uncertainty_ within the token tree. Compared to prior works, this metric better reflects the error probability associated with memory-resident LLM generation, especially considering the accumulative nature of autoregressive models. (2) Historical data pertaining to the accuracy of verified tokens is harnessed to assess the memory-resident LLM's generation capability. A stronger generation ability necessitates a lower frequency of verification.

**(3) Speculative generation pipeline (§3.4).** To break the cross-token dependency and enhance parallelism, we propose _speculative generation_, i.e., continuing to generate tokens through the memory-resident LLM during the verification process. This is founded on the insight that sometimes the verification process may not detect errors, rendering the speculatively generated tokens usable. However, running speculative generation simultaneously with verification can lead to processor and memory contention.
To further tackle this issue, LLMCad incorporates a fine-grained pipeline, ensuring that speculative generation only runs when the loading of target LLM parameters is below the memory upper bound, to avoid interfering with the regular verification process.

**Implementation and evaluation.** We have fully implemented LLMCad on top of two SOTA LLM engines: PyTorch (Cordord et al., 2017) and llama.cpp (Lian et al., 2018). Extensive evaluation of the system was conducted across four platforms: two IoT devices (Jetson TX2 and Jetson Orin NX) and two smartphones (Xiaomi 10 and Xiaomi 11). This evaluation encompassed six widely utilized LLMs (GPT2 (Cordord et al., 2017), T5 (Cordord et al., 2017), mT5 (Cord et al., 2017), Bart (Bart, 2017), Vicuna, and LLaMA2 (Vicuna et al., 2018)) and seven datasets (CNN/Daily (Vicuna et al., 2018), Wikitext (Wikitext, 2017), iwslt2017, wmt14/22 (Cord et al., 2017), SQuAD, ...).

**Autoregressive inference.** Generative LLMs employ an _autoregressive_ inference procedure that generates one token at a time and takes that token as input to generate the next one. For instance, Figure 2(b) illustrates a three-iteration autoregressive inference procedure. In the 1st iteration, the model takes all existing tokens ("You should") as input and generates the output "wear." In the next iteration, the newly generated "wear" is fed into the model, which then predicts "shoes." This process continues until the model generates the end-of-sequence token (\(<\textit{EOS}>\)), indicating the end of the generation procedure. The nature of autoregressive inference introduces unique challenges for optimizing on-device LLMs, as will be described later.

### On-device LLM is Memory-bounded

In this section, we perform pilot experiments to reveal the performance issue of on-device LLM inference.
The experiments are performed on typical LLMs (GPT2, T5, and LLaMa), datasets (SQuAD and TriviaQA), and mobile devices (Jetson TX2 and Xiaomi 10) using state-of-the-art DL engines (PyTorch (2016) and llama.cpp (2017)). We summarize our key findings below.

**Scaling up parameter size brings accuracy improvement.** The transformer-based LLM architecture is highly flexible and scalable by simply adjusting the encoder/decoder layers, sequence length, and other hyper-parameters. Consequently, a popular LLM is often developed as a series of model variants, such as T5-Small/Base/Large (Zhu et al., 2017) and LLaMa-7B/13B/33B/65B (Zhu et al., 2017). With the parameter size scaling up, the model exhibits stronger abilities. As shown in Table 1, T5-Large outperformed T5-Small by a significant margin, achieving a 7.6% improvement in accuracy on the SQuAD dataset. Similarly, LLaMa-13B demonstrated a 6.6% higher QA accuracy on TriviaQA than LLaMa-7B. Indeed, such a phenomenon is well known in the ML community as the _scaling law_. [...] The proposal of this work is at the system level and is compatible with quantization.

### Opportunities and Challenges of LLM collaboration

This work focuses on the _model collaboration_ approach (Kumar et al., 2017; Kumar et al., 2018; Kumar et al., 2019), which leverages multiple models with an accuracy-cost tradeoff to speed up inference. In the case of generative NLP tasks, we delegate most computations (i.e., tokens) to a smaller model that entirely fits into the memory budget. The key rationale is that smaller models can exhibit performance close to the large one's, especially on easier data points (Kumar et al., 2019; Kumar et al., 2019). Our empirical experiments have confirmed this assumption: on the iwslt2017 de-en translation dataset (Kumar et al., 2019), mT5-Small correctly generates more than 80% of the tokens that mT5-Large does. Figure 4 gives one concrete example of translating a sentence with the smaller and larger LLMs, together with the corresponding ground truth.
It shows that most of the tokens (in green) generated by the small model are correct. However, employing model collaboration for LLMs faces one critical challenge: wrong token delegation could be fatal. A traditional model cascade relies either on internal (Kumar et al., 2018) or external knowledge (Kumar et al., 2018; Kumar et al., 2019) to select a portion of the data (often the easier instances) to be processed by the smaller model. In such circumstances, the accuracy cannot be guaranteed. For generative NLP tasks, however, a token wrongly generated by the small model could propagate the error to the subsequent ones and finally cause catastrophic results due to the autoregressive nature (Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019). For example, in Figure 4, the second "ice" in red is incorrect, resulting in two additional "ice" generations and wrong translation information in the subsequent tokens. Note that generative NLP tasks are often accuracy-sensitive, e.g., translation and Q/A, as a wrongly generated result could misinform users and cause unexpected behaviors. To tackle this issue, LLMCad employs a unique form of model collaboration, namely "generate-then-verify" (Kumar et al., 2018; Kumar et al., 2019). In this approach, the memory-resident LLM serves as a token generator, while a target LLM acts as a verifier, using its output as the ground truth to inspect and rectify any errors introduced during the token generation process. By doing so, LLMCad prevents errors from propagating to subsequent tokens due to the autoregressive nature and ensures no accuracy compromise.

## 3. Design

### Overview

LLMCad is built on two LLMs: a target model that is accurate but heavy (cannot fit into device memory), like mT5-Large; and a memory-resident model that is less accurate but lightweight, like mT5-Small. The design goal of LLMCad is to generate text with the speed of the memory-resident model _without compromising the accuracy_ of the target (larger) model.
**Simplified workflow and an illustrative example.** Figure 5 illustrates the workflow of LLMCad. Figure 6 also provides an illustrative example based on the case of Figure 4 to exemplify the workflow. Essentially, LLMCad is a generation-and-verification framework using the memory-resident LLM as a generator and the target LLM as a verifier. First, LLMCad feeds the input text to the memory-resident model and generates a _token tree_. A _token tree_ is the intermediate result generated by the memory-resident model (details in _tree generation_, §3.2). Unlike a token sequence, where each token has only a single succeeding token, a token in a token tree can have multiple candidate succeeding tokens, as shown in Figure 6(1). Each of the candidate tokens represents a candidate token sequence (referred to as a _branch_). This is based on the observation that sometimes a "sub-optimal" token generated by the memory-resident LLM is the true output of the target LLM, e.g., the alternative token "cap". In practice, any candidate token with a confidence higher than a threshold (e.g., 30%) generates a branch. Each token generated by the memory-resident LLM introduces some "uncertainty" (an imperfect confidence score). Once such uncertainty accumulates to a certain level in the output sentence, the target LLM is used to verify all the branches generated since the last verification, as shown in Figure 6(2). Notably, the verification of \(N\) tokens can be done within a one-shot inference of the target LLM, and is therefore much faster than using it to generate one token \(N\) times. Such a verification process is therefore termed "non-autoregressive". Once an error is detected, LLMCad rolls back the token tree and rectifies it. The details of the verification and rollback strategy are discussed in §3.2. The verification process involves target LLM inference and is therefore I/O-bound, as previously shown in §2.3.
LLMCad further proposes _speculative generation_ to exploit the under-utilized hardware resources by generating tokens _speculatively_ (§3.4), i.e., continuing to generate tokens through the memory-resident model during the verification process, as shown by the dashed boxes on the left side of the red dashed line in Figure 6(3). This approach is based on the insight that sometimes the verification detects no error, so the speculatively generated tokens can be used afterwards. Therefore, it can effectively hide execution latency under the I/O, e.g., the second branch in Figure 6(3). LLMCad repeats the above generation-and-verification process until it encounters \(<\)EOS\(>\), the end token.

### Token tree generation and verification

This subsection discusses how the token tree is generated and verified in LLMCad.

**Token tree generation.** To generate useful token trees, LLMCad needs to answer two crucial questions:

\(\bullet\) Branches compete for computing resources (e.g., GPU) to generate subsequent tokens by running the memory-resident model. At any given time, which branch should receive the resources to lengthen its token sequence? The decision is crucial, as generating tokens for a wrong branch (as verified by the target model later) wastes computing resources and delays the generation of true tokens.

\(\bullet\) Generating tokens from different branches requires switching between branch contexts. How can tokens from different branches be generated efficiently? The design is crucial, as LLMCad needs to frequently switch between up to tens of branches.

In response, LLMCad incorporates two novel techniques:

\(\bullet\) _Confidence-based branch pacer._ To properly pace the progress of different branches, LLMCad relies on the fact that a branch with a higher probability is more likely to be the correct result and should have a longer sequence length. Here, LLMCad models the probability with the cumulative confidence scores given by the memory-resident model for each generated token.
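A minimal sketch of such confidence-proportional pacing, under one simplified reading of the max-min fairness allocation formalized next (the confidences and budget are illustrative): at each step, grow the branch whose length falls furthest below its confidence-proportional share of the token budget.

```python
def pick_branch(confidences, lengths, budget):
    """Return the branch whose current length falls furthest below its
    confidence-proportional share of the total token budget."""
    total = sum(confidences)
    deficit = [budget * c / total - l for c, l in zip(confidences, lengths)]
    return max(range(len(deficit)), key=deficit.__getitem__)

# Two branches with cumulative confidences 0.75 and 0.25, budget of 8 tokens.
lengths = [0, 0]
for _ in range(8):
    lengths[pick_branch([0.75, 0.25], lengths, 8)] += 1
print(lengths)  # [6, 2]: the confident branch receives 3x the tokens
```

The allocation converges to lengths proportional to the branches' cumulative confidences, which is the intended effect of the pacer.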
To control the branch lengths dynamically, LLMCad uses max-min fairness (Zhu et al., 2017). Assuming that there are \(N\) branches and \(M\) tokens, and that the \(i\)-th branch includes \(T_{i}^{B}\) tokens, the \(i\)-th branch's cumulative confidence \(C_{i}\) is the product of the confidences of its tokens. Thus, the branch length problem can be addressed by solving the following problem: \[f(x)=M*\frac{C_{x}}{\sum_{i=0}^{N}C_{i}}-T_{x}^{B} \tag{1}\] \[Obj=min_{x=0}^{N}f(x) \tag{2}\] Under max-min fairness, LLMCad tries to allocate more hardware resources to the branch that is more likely to be the ground truth.

\(\bullet\) _Tree decoder._ We commence our study by conducting an exhaustive analysis of the fundamental reasons behind the substantial performance overhead incurred by branch context switching (e.g., 25% overhead for mT5 models on Jetson TX2). In Figure 7(a), we illustrate the implementation of branch context switching within state-of-the-art LLM engines, such as PyTorch, using the scenario depicted in Figure 6(1) as a case study. In this illustration, iterations 1-4 take the previous output token as the new input. However, generating token "7" in iteration 5 necessitates a branch switch from b1 to b2, which involves the removal of token \(T_{4}\), the omission of the new token \(T_{6}\), and the utilization of the sub-optimal output \(T_{5}\) from iteration 3 as input. Consequently, LLMCad must deviate from the autoregressive rule and modify each iteration's input, incurring a substantial amount of metadata (e.g., key-value cache (Cheng et al., 2017; Wang et al., 2017; Wang et al., 2017) and position ids (Wang et al., 2017)) maintenance operations and CPU-GPU interactions. To tackle this issue, LLMCad incorporates the _masking_ technique (Wang et al., 2017). This technique is employed to ensure that the prediction for position \(i\) depends only on the known outputs at positions less than \(i\) in decoder-based LLMs.
The masking technique relies on a table in which only positions with a value of one are taken into account in the calculation. Crucially, the tree decoder retains the autoregressive procedure while only modifying its masking table to isolate the effects of different branches, as demonstrated in Figure 7(b). During each iteration, LLMCad treats the newly generated token as input, just as in regular generation. However, it assigns a value of one only to the previous positions on the same branch. For example, when generating token "7" for branch b2 in iteration 5, the input remains \(T_{6}\), the output of iteration 4, but only the positions "1, 2, 3, and 5" are set to one; all other positions are set to zero. This approach ensures that tokens "4" and "6" do not affect the calculation, enabling LLMCad to generate token "7" without interference.

Figure 4. A translation generation example from the mT5-Small and mT5-Large models as well as its ground truth label. Green: correct parts of the small model's generation; Red: error propagation parts of the small model's generation; Blue: the sub-optimal token which is the correct answer. Noticeably, on the iwslt2017 de-en translation dataset (Kang et al., 2017), the mT5-Small model correctly generates nearly 69.3% of tokens, while the number for the mT5-Large model is 73.1%.

Figure 5. The workflow of LLMCad.

**Token tree verification and rectification.** To achieve the goal of not sacrificing accuracy, LLMCad has to verify every token generated by the memory-resident LLM. An intuitive approach is to use the target LLM to run verification after each token generation by the memory-resident LLM. However, such an approach (called _autoregressive verification (AV)_) is even slower than generating every token by the target model directly, since AV does not reduce the number of target LLM inferences and additionally runs the memory-resident LLM.
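The branch-isolating mask table just described can be sketched as follows. This is a simplified stand-alone illustration (0-based indices, tree shape borrowed from the running example): each token may attend only to itself and its ancestors in the token tree, so branches sharing one sequence cannot see each other.

```python
def build_tree_mask(parents):
    """parents[i] is the index of token i's parent in the token tree
    (-1 for the root).  Row i of the mask sets one only at i and its
    ancestors, isolating each branch inside the shared sequence."""
    n = len(parents)
    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        j = i
        while j != -1:
            mask[i][j] = 1
            j = parents[j]
    return mask

# Tokens 1,2,3 shared; token 5 is a sub-optimal sibling of token 4,
# and token 6 follows token 4.  Indices 0..5 stand for tokens 1..6.
parents = [-1, 0, 1, 2, 2, 3]
mask = build_tree_mask(parents)
# The row for token "5" (index 4) attends to positions 1,2,3,5 only,
# so tokens "4" and "6" are masked out: [1, 1, 1, 0, 1, 0].
```

In a real decoder this table would be fed to the attention operator as an additive or boolean attention mask; here it is just built explicitly to show the branch isolation.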
To tackle the slowness of autoregressive verification, LLMCad builds on two opportunities: (1) The target LLM can examine a sequence of tokens in parallel by visiting its parameters only once (called _non-autoregressive verification (NAV)_), and the verification results are the same as when examining them sequentially (Zhou et al., 2017; Zhang et al., 2018). (2) The NAV process is far faster than AV. Our pilot experiments in Figure 9 on the LLaMa-13B and mT5-Large models using the Xiaomi 10 and TX2 show that NAV outperforms AV in examination time across 2-10 input tokens, with its benefits more pronounced as the token count increases. NAV significantly reduces verification time for the mT5-Large and LLaMa-13B models by 8.5-9.9\(\times\) at 10 tokens, attributed to NAV's single weight swapping versus AV's multiple swappings per token, which reduces I/O overhead. To sum up, NAV can correctly verify multiple tokens in parallel at the cost of only one target LLM inference.

Figure 6. An illustrative example of LLMCad.

Figure 7. Examples of the tree generation procedure in state-of-the-art LLM engines and the tree decoder procedure of LLMCad based on the case of Figure 6.

Figure 8. The illustration of token tree verification.

Figure 9. The latency of verification with increasing input sequence length.

LLMCad incorporates NAV and extends it to support token tree verification, as shown in Figure 8, including two crucial steps:

* _Batched non-autoregressive verification._ As shown in Figure 8, to support token tree verification, LLMCad first divides the token tree into several branch sequences and combines them into a mini-batch as the input of the target LLM. After NAV, LLMCad obtains the correct result for every position in each branch sequence. By comparing with the original branch sequences, LLMCad can detect all errors in a token tree, e.g., \(T_{7}^{\prime}\) in the branch2 results. A branch's correct sequence is the sub-sequence leading up to the first error position, plus the rectified token, to avoid error propagation.
For example, the branch1 correct sequence stops at \(T_{2}\), plus \(T_{5}\), i.e., tokens "1", "2", and "5". Note that, since NAV is I/O-bound, increasing the batch size (e.g., <10) has negligible effects on verification time.

* _Depth-first verified sequence searching._ Based on the correct sequences, LLMCad can build a correct token tree, as shown in Figure 8. Its leaf nodes are either the first rectified token or the last token of the original branch sequence. LLMCad leverages a depth-first search algorithm to find the longest correct path in the correct token tree as the _verified sequence_. If the verified sequence contains a rectified token, e.g., \(T_{7}^{\prime}\), LLMCad rolls back the token tree to the error position, fixes the error, and uses it as the new input for future generation.

### Self-adaptive fallback strategy

This strategy is devised to initiate the verification process promptly when the memory-resident LLM generates an incorrect token. To achieve this goal, LLMCad needs to answer two crucial questions:

* **Selection of Decision Metric.** The decision metric should effectively evaluate the probability of errors within the token tree.

* **Threshold Values for Different Tasks.** Recognizing that a universal threshold may not be suitable for all tasks, LLMCad must establish appropriate threshold values tailored to specific tasks.

To tackle these issues, LLMCad introduces two innovative techniques:

* _Tree-cumulative confidence (\(T_{c}\))._ We propose using the _tree-cumulative confidence_ as the decision variable for initiating fallback. Unlike prior studies (Wang et al., 2017; Wang et al., 2017) that rely on a single token's confidence or the token sequence length, \(T_{c}\) provides a comprehensive assessment of global uncertainty. It captures errors more accurately due to the autoregressive nature of token generation.
The _tree-cumulative confidence_ is formulated as \(T_{c}=\max_{i=1}^{N_{c}}C_{i}\), where \(N_{c}\) represents the number of branches in a token tree and \(C_{i}\) denotes the cumulative confidence of the \(i\)-th branch. We select the maximum cumulative confidence over the minimum/average confidence because the most confident branch is the most likely to yield the correct result after verification, and the verification process can only identify errors when the most confident branch is wrong. * _Self-adaptive threshold (\(\alpha\))_ is used to determine when the target LLM shall verify. It operates on the principle that a memory-resident LLM whose outputs closely resemble those of the target LLM should be trusted more, i.e., verified less frequently, by setting a lower threshold. To assess the similarity of the outputs, LLMCad relies on historical data regarding the accuracy of verified tokens. Users can either select an initial \(\alpha\) value or use the default value (0.01) provided by the system. After verification, LLMCad updates the _self-adaptive threshold (\(\alpha\))_ using the following rule: \[\alpha_{i+1}=\left\{\begin{array}{ll}\alpha_{i}\times 0.5&\text{if }N_{correct}=N_{all}\\ \alpha_{i}/T_{c}^{N_{all}}&\text{if }N_{correct}<N_{all}\end{array}\right. \tag{3}\] where \(N_{all}\) and \(N_{correct}\) are the numbers of total tokens and correct tokens, respectively, in the most matching branch during one verification. Specifically, when the verification process detects no error, LLMCad lowers \(\alpha\) by multiplying the current value by 0.5, which roughly corresponds to the cumulative confidence of 3-5 tokens in our empirical observations. In contrast, if verification identifies errors, the threshold is increased by dividing \(\alpha\) by the average cumulative confidence of all tokens subsequent to the incorrectly generated one. The rationale behind the use of an exponential function is that the _tree-cumulative confidence_ is the product of every token's confidence, which accumulates exponentially.
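The threshold machinery above, \(T_c\) as the maximum over branches of the product of per-token confidences and the update rule of Eq. (3), can be sketched in a few lines. This is a minimal restatement of the formulas as printed, with illustrative names.

```python
import math

def tree_cumulative_confidence(branch_confidences):
    """T_c: maximum over branches of the product of per-token confidences."""
    return max(math.prod(confs) for confs in branch_confidences)

def update_alpha(alpha, t_c, n_correct, n_all):
    """Self-adaptive threshold update, following Eq. (3) as printed:
    halve alpha when verification found no error; otherwise divide alpha
    by T_c ** n_all, which raises the threshold (T_c < 1) so that
    verification triggers more often afterwards."""
    if n_correct == n_all:
        return alpha * 0.5
    return alpha / (t_c ** n_all)
```

For example, a tree with branch confidences `[0.9, 0.9]` and `[0.8, 0.95]` has \(T_c \approx 0.81\), since the first branch's cumulative confidence dominates.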
In summary, after each token generated by the memory-resident LLM, LLMCad calculates \(T_{c}\). If \(T_{c}\) falls below \(\alpha\), a fallback occurs and the target model begins verification. After verification, \(\alpha\) is updated based on the latest generation accuracy history. ### Speculative Generation Pipeline As elaborated in SS2.3, GPU utilization undergoes cyclical upswings because SOTA LLM engines resort to the _swapping_ technique. To harvest the free cycles, LLMCad proposes a _speculative generation_ technique that allows the memory-resident LLM to continue generating tokens during the verification process. This approach is based on the insight that the verification often detects no error, so the speculatively generated tokens can be used afterwards. **The impact of speculative generation on the target LLM.** Our preliminary experiments on TX2 using the mT5-Large and GPT2-Large models show that running the memory-resident and target LLMs in parallel increases the target LLM's computing and loading time by 2.2-2.3\(\times\) and 1.05-1.09\(\times\), respectively. The computing delay is attributed to GPU core contention, while the loading delay is unexpected and is caused by memory contention. Specifically, the memory-resident LLM continually allocates memory regions for speculatively generated tokens, while the target LLM dynamically allocates memory regions for loading parameters. Typically, negligible effects are exerted on each other unless memory usage exceeds 90%. However, it is common for memory usage to exceed 90% or even reach 95% in the speculative generation scenario, because existing state-of-the-art LLM engines are designed to load as many parameters as possible from disk into memory to reduce inference time. **Computing-loading pipeline.** To tackle the above issue, LLMCad carefully plans the parallel execution, as shown in Figure 11.
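The fallback loop summarized above can be sketched minimally as follows, reduced to a single branch so that \(T_c\) is just the running product of token confidences; `small_step` and `verify_with_target` are hypothetical stand-ins for the memory-resident and target LLM interfaces, and speculative generation and rollback are omitted.

```python
import math

def generate(prompt, small_step, verify_with_target, alpha=0.01, max_tokens=32):
    """Fallback-driven generation: the memory-resident LLM drafts tokens
    until the cumulative confidence T_c drops below alpha, at which point
    the target LLM verifies and alpha is updated."""
    tokens, confs = list(prompt), []
    while len(tokens) - len(prompt) < max_tokens:
        tok, conf = small_step(tokens)       # memory-resident LLM drafts
        tokens.append(tok)
        confs.append(conf)
        t_c = math.prod(confs)               # single-branch cumulative confidence
        if t_c < alpha:                      # fallback: target LLM verifies
            tokens, alpha = verify_with_target(tokens, t_c, alpha)
            confs = []                       # restart confidence accumulation
    return tokens

# toy stand-ins: the draft model emits incrementing tokens with 0.5 confidence,
# and the verifier accepts everything while halving the threshold
out = generate([0], lambda ts: (ts[-1] + 1, 0.5),
               lambda ts, t_c, a: (ts, a * 0.5), max_tokens=8)
assert out == [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

With these toys, the first fallback triggers after seven drafted tokens (when \(0.5^7 < 0.01\)), after which drafting resumes with a lowered threshold.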
The main principle is that the normal verification process must not be influenced by the speculative execution. Thus, there is a memory upper bound, determined by profiling or user configuration, to avoid memory contention between the two LLMs, and the computing of the two LLMs cannot be executed in parallel. After the input sequence is fed, the parameters of both the memory-resident and the target LLMs are loaded into memory. Once the memory-resident LLM finishes loading, it begins to generate tokens, and the loading of the target LLM's parameters (in yellow) stops before the memory upper bound is exceeded, to avoid influencing normal memory-resident LLM generation. When the fallback condition is met, the rest of the parameters of the target model (in orange) are loaded into memory, and then the computing of the target LLM begins. The speculative execution (in blue) runs only while the verification process is loading parameters below the memory budget (in yellow), avoiding processor and memory contention. ## 4. Evaluation ### Implementation and Setups We have fully implemented LLMCad in 4.5k SLoC (Python: 3,500 and C/C++: 1,000). The prototype is a standalone framework supporting LLMs exported from TensorFlow ((2016)) and PyTorch ((2016)). LLMCad leverages llama.cpp ((2016)) (one of the most lightweight on-device LLM engines) as the smartphone backend and PyTorch ((2016)) as the IoT device backend. **Hardware setup.** We test the performance of LLMCad on four devices: 2 smartphones (Xiaomi 10 and Xiaomi 12) and 2 IoT devices (Jetson TX2 and Jetson Orin), as summarized in Table 2. We run LLMs on the Jetson GPUs and smartphone CPUs, since existing LLM engines [refs] have immature support for smartphone GPUs/NPUs. Nevertheless, LLMCad's design is orthogonal to hardware types. **Models and datasets.** We test with a range of typical LLM models on various generative task datasets across different devices, as summarized in Table 3.
On the IoT devices, we evaluate LLMCad on two translation tasks, two question answering tasks, one language modeling task, and one summarization task with the mT5, T5, GPT2, and Bart models. All the models are fine-tuned by ourselves as (Xiaomi et al., 2017) does. For the smartphone devices, we use the Vicuna-1.5 and LLaMA2 models with three translation tasks, one question answering task, and one summarization task. All of these models are downloaded from the Hugging Face repository (Dosovosov et al., 2017) and have been quantized by AutoGPTQ (Dosovosov et al., 2017) into 4-bit format to save memory and improve inference speed.

Figure 10. The loading and computing time of the target model execution with the memory-resident model running in parallel. Figure 11. LLMCad's speculative generation.

Table 2. Summary of the tested platforms (processor, software, and memory for Jetson TX2, Jetson Orin, Xiaomi 10, and Xiaomi 12). Table 3. The memory-resident and target model pairs, tasks, speedups, and datasets evaluated on each device.

**Baselines.** We mainly compare LLMCad with 5 state-of-the-art baselines, which can be divided into two categories: \(\bullet\) 3x _Single-LLM baselines_. (1) Standard (Std): always utilizes the target LLM to generate outputs, with PyTorch for IoT and llama.cpp for smartphones. (2) Standard pipeline (StdPL): executes a layer-wise pipeline, overlapping I/O and computation, as used by existing SOTA LLM inference engines. (3) Speedy Transformer Inference (STI) (Zhou et al., 2017): an edge NLP inference framework with quantized parameter shards and a fine-grained computing-loading pipeline. \(\bullet\) 2x _LLM collaboration baselines_.
(1) Speculative Decoding (SP) (Zhou et al., 2017): a state-of-the-art framework that also uses "generator and verifier" LLM collaboration. (2) Big Little Transformer Decoder (BLD) (Zhou et al., 2017): an algorithm for determining verification timing and a rollback mechanism for "generator and verifier" LLM collaboration. **Metrics and configurations.** We mainly report generation accuracy and per-token generation time. For clarity, LLMCad's goal is to align the memory-resident LLM's outputs with those of the target LLM. Thus, we regard the text produced by the target LLM as the ground truth and calculate the Rouge-L score (Zhou et al., 2017), a similarity between two sequences based on their longest common subsequence, as the _generation accuracy_. ### Generation Speed **Overall performance.** We first comprehensively investigate the generation performance of LLMCad on the four tested devices. The generation accuracy and per-token generation time results are illustrated in Table 4, Figure 12, and Figure 13, respectively. Our key observation is that LLMCad consistently and remarkably outperforms the other baselines on per-token generation time without compromising accuracy across all tested devices. \(\bullet\)_Generation time of LLMCad vs. Single-LLM baselines_. Compared with Std, LLMCad achieves a 2.9-9.3\(\times\) and 3.47-4.67\(\times\) speedup in per-token average generation time on IoT and smartphone devices, respectively, without compromising accuracy. Specifically, LLMCad can generate question-answering outputs on Xiaomi 11 at a fastest speed of 0.86 s/token. This achievement enables real-time token generation with an over-10B-parameter LLM on a COTS device for the first time. This is attributed to the fact that LLMCad delegates most token generations to the memory-resident LLM and ensures correctness by non-autoregressive verification.
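The Rouge-L generation-accuracy metric described in the metrics paragraph above can be sketched with a textbook longest-common-subsequence computation; this is a simplified stand-in for the official Rouge implementation (plain whitespace tokenization, F1 score only).

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of sequences a and b,
    via standard dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """Rouge-L F1: harmonic mean of LCS-based precision and recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)
```

Here the target LLM's text plays the role of `reference`, so a memory-resident LLM whose output matches it token-for-token scores 1.0.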
When compared with the more competitive baselines StdPL and STI, LLMCad reduces per-token average generation time by 2.9-9.3\(\times\) and 1.83-2.45\(\times\), respectively. These benefits are attributed to the fact that employing memory-resident LLMs for text generation consistently outpaces any pipeline or quantization approach applied to the target LLM on a mobile device, where the memory-resident LLM can yield an over-hundredfold speed improvement compared to the target LLM. Besides, LLMCad also improves generation accuracy by 11.1-19.0 percentage points compared to STI. This benefit comes from our tree non-autoregressive verification, which can examine and correct all errors of the memory-resident LLM efficiently. \(\bullet\)_Generation time of LLMCad vs. LLM collaboration baselines_. Compared with BLD, LLMCad achieves a 4.5-94.5 and 9.8-96.7 percentage point generation accuracy improvement with a 1.1-1.4\(\times\) and 1.1-1.3\(\times\) speedup in per-token average generation time on IoT and smartphone devices, respectively. That is because, unlike BLD, which speeds up the generation process by reducing the number of corrections (sacrificing accuracy), our self-adaptive fallback strategy aims to minimize verification time while ensuring verification for each token. Such an approach enhances generation speed without sacrificing accuracy. Furthermore, speculative execution enables the memory-resident LLM to generate text earlier, without waiting for the verification results, when no errors are detected by the verification process, further reducing generation latency. Similarly, compared with SP, LLMCad reduces per-token average generation time by 1.93-2.00\(\times\) and 1.34-1.77\(\times\) on IoT and smartphone devices, respectively. That is because, unlike SP, which uses the token sequence length, our self-adaptive fallback strategy accurately finds when the memory-resident LLM generates errors and can reduce the verification frequency.
### Memory Sensitivity Analysis This subsection investigates the impact of different memory budgets on our approach. We further conduct experiments with the mT5 and T5 models on TX2 and LLaMA2 on Xiaomi 10 under different memory budgets (e.g., from 4GB to 8GB on Xiaomi 10). The speedup of the different baselines is shown in Figure 14. LLMCad consistently exhibits the highest speedup among all baselines from 8GB down to 4GB, and its benefits are more prominent as the memory budget decreases. LLMCad reduces generation time under a 6GB memory budget on Jetson TX2 by 5.54\(\times\), 1.76\(\times\), and 1.12\(\times\) on average relative to StdPL, SP, and BLD, respectively; the speedups under a 4GB memory budget are 6.12\(\times\), 1.91\(\times\), and 1.25\(\times\), correspondingly, which are 1.29\(\times\), 1.08\(\times\), and 2.1\(\times\) larger than those under 6GB on average. Similarly, LLMCad achieves a generation time speedup under a 4GB memory budget on Xiaomi 10 of 4.72\(\times\) and 1.34\(\times\) on average relative to StdPL and BLD, respectively. That is because, when the memory budget is stricter, the inference speed gap between the memory-resident and target LLMs is more significant, and delegating tokens to the memory-resident LLM gains more benefits.

\begin{table} \begin{tabular}{l l c c c c} \hline \hline **Models** & **Datasets** & **StdPL** & **SP** & **BLD** & **Ours** \\ \hline \multirow{2}{*}{mT5-Large} & T: IWSLT17-de-en & 100 & 100 & 96.1 & 100 \\ & QA: SQuAD & 100 & 100 & 52.9 & 100 \\ \hline \multirow{2}{*}{T5-Large} & S: CNN/Daily & 100 & 100 & 5.5 & 100 \\ & QA: SQuAD & 100 & 100 & 51.4 & 100 \\ \hline Bart-Large & T: WMT14-de-en & 100 & 100 & 96.5 & 100 \\ \hline GPT2-Large & LM: Wikitext & 100 & 100 & 12.9 & 100 \\ \hline \hline \end{tabular} \end{table} Table 4. The summary of the generation accuracy of LLMCad and the baselines on the tested devices. T:*, S:*, QA:*, and LM:* represent the generative tasks of translation, summarization, question-answering, and language modeling.

Figure 12. Average per-token generation latency of LLMCad and the baselines under different tasks on IoT devices. Figure 13. Average per-token inference latency of LLMCad and the baselines under different tasks on smartphones.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Models-tasks-datasets** & **Vanilla** & **StdPL** & **SP** & **BLD** & **Ours** \\ \hline mT5-translation IWSLT17-de-en & 36.9 & 36.2 & 12.0 & 7.7 & **7.7 (4.8\(\times\))** \\ \hline T5-summary CNN/Daily & 36.4 & 36.0 & 7.6 & 10.3 & **8.4 (4.3\(\times\))** \\ \hline T5-QA & 36.9 & 36.5 & 15.4 & 9.9 & **4.6 (8.0\(\times\))** \\ \hline \hline \end{tabular} \end{table} Table 5. The summary of the energy consumption (J) of different models across different devices.

### Energy Consumption Analysis We then evaluate the energy consumption of LLMCad with the mT5 and T5 models on IoT devices and the Vicuna and LLaMA2 models on smartphones. As shown in Table 5, compared with Std, StdPL, SP, and BLD, LLMCad reduces per-token energy consumption by 4.35-7.96\(\times\), 4.34-7.92\(\times\), 1.56-3.33\(\times\), and 1.05-2.15\(\times\) on Jetson Orin NX; compared with Std, StdPL, SP, BLD, and STI, LLMCad achieves an energy consumption reduction of 3.22-3.59\(\times\), 3.18-3.56\(\times\), 1.24-1.66\(\times\), 1.07-1.31\(\times\), and 2.01-2.56\(\times\), correspondingly, on Xiaomi 11. This is because LLMCad's techniques delegate as many tokens as possible to the memory-resident LLM while not sacrificing accuracy. Compared with the latency speedup, LLMCad's energy reduction is relatively smaller. This is because our speculative generation runs the memory-resident and target LLM executions in parallel, resulting in more energy consumption. ### Ablation Study **Overall techniques.** We further conduct a breakdown analysis of the benefit brought by each of LLMCad's techniques. The experiments are performed with the mT5 and T5 models on TX2 and LLaMA2 on Xiaomi 10.
The results are illustrated in Figure 15. The leftmost bar is the same as the Vanilla baseline, while the rightmost one is LLMCad. The three crucial techniques, _token tree generation and verification_ in SS3.2, the _self-adaptive fallback strategy_ in SS3.3, and _speculative generation_ in SS3.4, are represented by TGV, SF, and SPP, respectively. We observe that **all techniques make a non-trivial contribution to the improvement.** First, _tree non-autoregressive generation and verification_ delegates most token generations to the memory-resident LLM, leading to a 2.6-4.3\(\times\) speedup for the mT5, T5, and LLaMA2 models, respectively. The larger benefit for the mT5 model on the IWSLT translation dataset is because the mT5-Small model can generate more correct tokens than the other two models, so more tokens can be delegated to the memory-resident LLM. Besides, _speculative execution_ can reduce per-token generation time by up to 1.51\(\times\). That is because LLMCad can directly use the speculative results when the verification detects no error, which is especially true for the LLaMA2 model. Lastly, the _self-adaptive fallback strategy_ achieves a 1.08-1.20\(\times\) speedup. This is achieved by leveraging tree-cumulative confidence to assess error probabilities and dynamically adjusting verification timings in response to variations in task complexity. ## 5. Related Work **Model collaboration** is a common optimization technique used to reduce inference latency (Zhou et al., 2017; Zhang et al., 2018). Its key idea is to delegate most of the workload to lightweight models to reduce inference latency while maintaining relatively high accuracy. Tabi (Zhou et al., 2017) is a multi-level inference engine that serves queries using various small models according to query difficulty. MobiSR (Zhou et al., 2017) and NestDNN (Zhou et al., 2017) employ a similar idea but depend on either resolutions or available resources.
Other "early exit" works (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018), which propose adaptive exit timing depending on input difficulty, can also be regarded as a form of collaboration. However, they either focus on CNN/encoder-based model architectures or must modify and retrain the model, and thus hardly fit on-device LLM inference. The most closely related works are on _speculative decoding_ (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018), which also employs smaller LLMs for text generation and larger LLMs for text verification. LLMCad is motivated by these works and is the first inference engine for on-device generative NLP tasks that considers mobile devices' unique challenges, such as the memory-bound situation. **Mobile ML optimization.** Machine learning optimization approaches have been extensively researched to reduce generation latency, such as _model compression_ (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; 76, 77), reducing model size by quantization and knowledge distillation; _caching_ [69, 74, 80], reducing computation by reusing existing results; and _token pruning_ [16, 18, 38, 57, 66], reducing computation by pruning useless tokens. LLMCad is orthogonal to and compatible with these algorithm-level optimizations. Besides, some researchers focus on generating text in a non-autoregressive manner [35, 39]. However, these works apply only to \(\sim\)1B models, suffer from accuracy degradation, and are not the mainstream research direction. **Pipeline optimization for ML.** Pipeline optimization has been extensively used to accelerate ML [13, 27, 49, 67, 82].

Figure 14. The speedup of different baselines under different memory budgets. Figure 15. Ablation study of LLMCad.
Most of them, such as PipeDream [49], are used to scale ML out to multiple machines by pipelining forward/backward computation with activation/gradient synchronization to minimize I/O and network communication bubbles. Still, some studies focus on single-machine/task optimization. For instance, PipeSwitch [13] introduces pipelining of model transmission over PCIe with task execution on the GPU to reduce context-switching overhead, while STI [27] pipelines the loading of model shards with their computation to reduce inference latency. LLMCad is inspired by these efforts and proposes an efficient _speculative generation pipeline_ to address the challenge of I/O blocking and limited parallelism. ## 6. Conclusions This work has proposed LLMCad, the first efficient inference engine for on-device generative NLP tasks. It breaks the memory wall and delivers LLMs' scaling ability to mobile devices. It incorporates three novel techniques, token tree generation and verification, a self-adaptive fallback strategy, and a speculative generation pipeline, that exploit the hardware resources wasted during the verification process. Our experiments have demonstrated that, compared to state-of-the-art LLM engines, LLMCad can reduce average per-token generation time by 2.9-9.3\(\times\) on IoT devices and 3.5-4.7\(\times\) on smartphones, without compromising accuracy.
2309.10220
Comparing effects of price limit and circuit breaker in stock exchanges by an agent-based model
The prevention of rapidly and steeply falling market prices is vital to avoid financial crisis. To this end, some stock exchanges implement a price limit or a circuit breaker, and there has been intensive investigation into which regulation best prevents rapid and large variations in price. In this study, we examine this question using an artificial market model, which is an agent-based model for a financial market. Our findings show that the price limit and the circuit breaker basically have the same effect when their parameters, the limit price range and the limit time range, are the same. However, the price limit is less effective when the limit time range is smaller than the cancel time range. With the price limit, many sell orders accumulate around the lower limit price, and when the lower limit price is changed before the accumulated sell orders are cancelled, this leads to the accumulation of sell orders at various prices. These accumulated sell orders essentially act as a wall against buy orders, thereby preventing prices from rising. Caution should be taken in the sense that these results pertain to a limited situation. Specifically, our finding that the circuit breaker is better than the price limit should be applied only in cases where the reason for falling prices is erroneous orders and where individual stocks are regulated.
Takanobu Mizuta, Isao Yagi
2023-09-19T00:23:25Z
http://arxiv.org/abs/2309.10220v1
# Comparing effects of price limit and circuit breaker in stock exchanges by an agent-based model ###### Abstract The prevention of rapidly and steeply falling market prices is vital to avoid financial crisis. To this end, some stock exchanges implement a price limit or a circuit breaker, and there has been intensive investigation into which regulation best prevents rapid and large variations in price. In this study, we examine this question using an artificial market model, which is an agent-based model for a financial market. Our findings show that the price limit and the circuit breaker basically have the same effect when their parameters, the limit price range and the limit time range, are the same. However, the price limit is less effective when the limit time range is smaller than the cancel time range. With the price limit, many sell orders accumulate around the lower limit price, and when the lower limit price is changed before the accumulated sell orders are cancelled, this leads to the accumulation of sell orders at various prices. These accumulated sell orders essentially act as a wall against buy orders, thereby preventing prices from rising. Caution should be taken in the sense that these results pertain to a limited situation. Specifically, our finding that the circuit breaker is better than the price limit should be applied only in cases where the reason for falling prices is erroneous orders and where individual stocks are regulated. Price limit, Circuit breaker, Financial crisis, Agent-based model, ABM, Multi-agent simulation, Artificial market model ## I Introduction The prevention of rapidly and steeply falling market prices is vital to avoid financial crisis. To this end, some stock exchanges implement a price limit, which refuses orders that are priced significantly away from the current market price, or a circuit breaker, which halts order placement for a while when market prices vary greatly.
Stock exchanges in Japan, South Korea, China, and other Asian countries tend to implement the price limit, while those in the USA and Europe lean more toward the circuit breaker. While there has been intensive investigation into which regulation best prevents rapid and large variations in price [1], this question remains unanswered. Empirical studies cannot isolate the direct effect of implementing a trading regulation due to the many diverse factors affecting price formation in actual markets. In contrast, an artificial market model, which is an agent-based model for a financial market1, can isolate the pure contribution of a trading regulation. Many previous artificial market studies have contributed to explaining the nature of financial market phenomena such as bubbles and crashes. Recent artificial market studies have also contributed to discussions about appropriate financial regulations and rules [2, 3, 4]. The JPX2 Working Paper series includes various studies that have contributed to such discussions3. Footnote 1: The basic concept for constructing and validating the artificial market model utilized in the current work is explained in Appendices V-B and V-C and in the articles [2, 3] Footnote 2: The Japan Exchange Group (JPX) is a Japanese financial services corporation operating the Tokyo Stock Exchange. Footnote 3: [https://www.jpx.co.jp/english/corporate/research-study/working-paper/index.html](https://www.jpx.co.jp/english/corporate/research-study/working-paper/index.html) There have been many investigations using artificial market models for both the price limit [5, 6, 7] and the circuit breaker [8, 9]. Mizuta et al.
[6] investigated the price limit and showed that the following conditions prevent large price variations: \[Pr/tr<S_{fall}, \tag{1}\] \[Pr>Vol_{tr}, \tag{2}\] \[tr<t_{m}, \tag{3}\] \[Pr<P_{DD}, \tag{4}\] where \(Pr\) and \(tr\) are parameters of the price limit (a limit price range and a limit time range, respectively), \(S_{fall}\) is the falling speed of market prices, \(Vol_{tr}\) is the standard deviation of the every-\(tr\) return (volatility), \(t_{m}\) is the erroneous orders period, and \(P_{DD}\) is the falling depth of market prices. Their findings showed that these conditions on the \(Pr\) and \(tr\) parameters should be satisfied and that stock exchanges should implement several price limits, one for every time scale of price variation. However, no previous research has compared the effectiveness of the price limit versus the circuit breaker using the \(Pr\) and \(tr\) parameters, mostly because the model needs to be improved before the price limit and circuit breaker can be compared under equal conditions. For example, when a circuit breaker is active, time simply passes without orders being placed, but Mizuta et al.'s model [6] cannot treat such passing of time because, in that model, time advances only when an order is placed. Therefore, in this study, we expanded Mizuta et al.'s artificial market model [6] by adding stop-loss behavior of the agents and a circuit breaker, and then used it to investigate whether the price limit or the circuit breaker is more effective in preventing falling market prices. Each agent estimates a fair price and then re-estimates it when market prices fall significantly below that price. The agents need a long time for the re-estimation, so they place stop-loss orders during the re-estimation, which enables the model to let time pass without placing ordinary orders. ## II Model Chiarella and Iori [10] built a model that, while very simple, could replicate long-term statistical characteristics observed in actual financial markets, e.g.
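Conditions (1)-(4) can be read directly as a single predicate; the sketch below merely restates the inequalities with illustrative argument names.

```python
def price_limit_effective(pr, tr, s_fall, vol_tr, t_m, p_dd):
    """True when a price limit with range pr and time range tr satisfies
    conditions (1)-(4) of Mizuta et al. [6]."""
    return (pr / tr < s_fall    # (1) allowed change rate slower than the fall speed
            and pr > vol_tr     # (2) wider than normal every-tr volatility
            and tr < t_m        # (3) time range shorter than the erroneous-orders period
            and pr < p_dd)      # (4) range narrower than the falling depth
```

For instance, with illustrative values `pr=10, tr=5, s_fall=3, vol_tr=8, t_m=6, p_dd=12`, all four inequalities hold, whereas stretching the time range to `tr=7` violates condition (3).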
fat-tail and volatility clustering. Mizuta et al. [6] later expanded this model so that it can treat large fluctuations of prices, such as turmoil caused by erroneous orders. Only fundamental and technical analysis strategies, which exist generally in any market at any time4, are implemented in the agent model. Footnote 4: Many questionnaire-based empirical studies have found these strategies to be the majority of investment strategies, as comprehensively reviewed by Menkhoff and Taylor [11]. The empirical study using market data by Yamamoto [12] showed that investors switch between fundamental and technical analysis strategies. The simplicity of the model is very important for this study because unnecessary replication of macro phenomena leads to models that are overfitted and too complex. Such models hamper the understanding and discovery of the mechanisms affecting price formation because of the increase in related factors. Gilbert said "the aim of abstract models is to demonstrate some basic social process that may lie behind many areas of social life" [13]. Axelrod argued that, to understand a mechanism, abstract models should be as simple as possible, because a needlessly complex model prevents understanding of the mechanism, and called this principle KISS (keep it simple, stupid) [14]. Because our model is an abstract model for understanding and discovering mechanisms, it should obey the KISS principle. The basic concept for constructing our artificial market model and an explanation of its validation are given in Appendices V-B and V-C and in the articles [2, 3]. In the current study, we extend Mizuta et al.'s artificial market model [6] by adding stop-loss behavior of agents and a circuit breaker. This model contains one stock, and the stock exchange utilizes a continuous double auction to determine the market price [15].
In the auction mechanism, multiple buyers and sellers compete to buy and sell stocks in the market, and transactions can occur whenever an offer to buy and an offer to sell match. The minimum unit of a price change is \(\delta P\). The buy-order and sell-order prices are respectively rounded down and up to the nearest fraction. ### _Agents_ We introduce agents modeling a general investor so as to replicate the nature of price formation in actual financial markets. The number of agents is \(n\), and they can short sell freely. The holding positions are not limited, so agents can take an infinite number of shares for both long and short positions. Time \(t\) increases by one when an agent places an order or when a circuit breaker in operation causes an agent's order to be skipped. Agents always order only one share at a time. First, agent \(1\) places an order to buy or sell a stock, then agent \(2\) places an order to buy or sell, and after that, agents \(3,4,\ldots,n\) each place orders to buy or sell. After the final agent \(n\) places an order, the cycle returns to agent \(1\), then agents \(2,3,\ldots,n\) each place orders in turn, and this cycle is repeated. An agent determines the order price and whether to buy or sell using a combination of fundamental and technical analysis strategies to form an expectation of the stock return. The expected return of agent \(j\) at \(t\) is calculated as \[r_{e,j}^{t}=(w_{1,j}\ln\frac{P_{f}}{P^{t-1}}+w_{2,j}\ln\frac{P^{t-1}}{P^{t-\tau_{j}-1}}+w_{3,j}\epsilon_{j}^{t})/\Sigma_{i=1}^{3}w_{i,j}, \tag{5}\] where \(w_{i,j}\) is the weight of term \(i\) for agent \(j\) and is independently determined by random variables uniformly distributed on the interval \((0,w_{i,max})\) at the start of the simulation for each agent. \(\ln\) is the natural logarithm. \(P_{f}\) is a fundamental value and is constant.
\(P^{t}\) is the mid-price (the average of the highest buy-order price and the lowest sell-order price) at \(t\), and \(\epsilon_{j}^{t}\) is drawn at \(t\) from a normal distribution with mean \(0\) and variance \(\sigma_{e}\). \(\tau_{j}\) is independently drawn from a uniform distribution on \((1,\tau_{max})\) at the start of the simulation for each agent5.

Fig. 1: Illustrations of (left) the erroneous-orders period and falling depth and (right) stop-loss orders.

Fig. 2: Illustrations of (left) the price limit and (right) the circuit breaker.

The first term in Eq. (5) represents a fundamental strategy: the agent expects a positive return when the market price is lower than the fundamental value, and vice versa. The second term represents a technical analysis strategy using a historical return: the agent expects a positive return when the historical market return is positive, and vice versa. The third term represents noise.

After the expected return has been determined, the expected price is \[P_{e,j}^{t}=P^{t-1}\exp{(r_{e,j}^{t})}. \tag{6}\] Order prices are scattered around the expected price \(P_{e,j}^{t}\) to replicate many waiting limit orders. An order price \(P_{o,j}^{t}\) is \[P_{o,j}^{t}=P_{e,j}^{t}+P_{d}(2\rho_{j}^{t}-1), \tag{7}\] where \(\rho_{j}^{t}\) is drawn at \(t\) from a uniform distribution on \((0,1)\) and \(P_{d}\) is constant. This means that \(P_{o,j}^{t}\) is uniformly distributed on \((P_{e,j}^{t}-P_{d},P_{e,j}^{t}+P_{d})\). Whether the agent buys or sells is determined by the relative magnitudes of \(P_{e,j}^{t}\) and \(P_{o,j}^{t}\). Specifically6,

Footnote 6: When \(t<t_{c}\), to generate enough waiting orders, the agent places an order to buy one share when \(P_{f}>P_{o,j}^{t}\), or to sell one share when \(P_{f}<P_{o,j}^{t}\).
when \(P_{e,j}^{t}>P_{o,j}^{t}\), the agent places a buy order, and when \(P_{e,j}^{t}<P_{o,j}^{t}\), the agent places a sell order. Remaining orders are canceled \(t_{c}\) after the order time, except while a circuit breaker is in operation.

### _Erroneous orders_

We use the same model of erroneous orders as Mizuta et al. [6]. Erroneous orders start at time \(t_{ms}\) and finish at time \(t_{me}\) (see also Fig. 1 (left)). Within that period, with constant probability \(p_{m}\), each agent order is changed to a one-share sell at the highest buy-order price listed in the order book. The changed sell order is immediately executed against the highest buy order. The increase in such erroneous sell orders is what makes market prices fall.

### _Stop-loss orders_

Agent \(j\) starts to place stop-loss orders when \(P^{t}\) falls below \(P_{f}\exp{\epsilon_{j}^{t=0}}-P_{l,j}\), where \(P_{l,j}\) is independently drawn from a uniform distribution on \((P_{lmin},P_{lmax})\) at the start of the simulation for each agent \(j\). Let \(t_{ls,j}\) be the time at which agent \(j\) starts placing stop-loss orders. As shown in Fig. 1 (right), within the stop-loss period, with probability \(p_{l}(t_{ls,j}+t_{l,j}-t)/t_{l,j}\), each order of the agent is changed to a one-share sell at the highest buy-order price, where \(t_{l,j}\) is independently drawn from a uniform distribution on \((t_{lmin},t_{lmax})\) at the start of the simulation for each agent \(j\) and \(p_{l}\) is constant.

Each agent initially estimates \(P_{f}\exp{\epsilon_{j}^{t=0}}\) as a fair price and has to re-estimate this fair price when market prices fall significantly below it. The agent needs a long time for the re-estimation and places stop-loss orders during this process. In this study, because erroneous orders are what cause prices to fall significantly, the fair price does not actually change; the agent learns this fact after the re-estimation.
Therefore, the probability of a stop-loss order decreases as time progresses in this model, and the agent stops placing stop-loss orders after the re-estimation.

### _Price limit and circuit breaker_

We use the same model of a price limit as Mizuta et al. [6]. When a price limit is adopted, as shown in Fig. 2 (left), any sell order priced below \(P^{t-tr}-Pr\) (or buy order priced above \(P^{t-tr}+Pr\)) is changed to \(P^{t-tr}-Pr\) (respectively \(P^{t-tr}+Pr\)), where \(tr\) and \(Pr\) are constant parameters that determine the nature of the price limit. We simulated the model to investigate the nature of price formation as these parameters were changed. As Mizuta et al. [6] demonstrated, the price limit prevents large price variations because agents cannot place orders far from \(P^{t-tr}\) when the limit is in place.

When a circuit breaker is adopted, as shown in Fig. 2 (right), when \(P^{t}\) falls below \(P^{t-tr}-Pr\) or rises above \(P^{t-tr}+Pr\), the circuit breaker starts; after starting, the placing and cancelling of orders are suspended for \(t_{2}\). Unlike the price limit, which constrains order prices, the circuit breaker suspends orders and simply waits for time to pass; however, the erroneous-order period and stop-loss period continue while the circuit breaker is activated. This reduces the number of erroneous orders and stop-loss orders and, like the price limit, prevents large price variations.

We also introduce a price limit "version two" in which any sell order priced below \(P^{t-tr}-Pr\) or buy order priced above \(P^{t-tr}+Pr\) is canceled. In an actual financial market, investors would place orders exactly at \(P^{t-tr}\pm Pr\) under such a price limit because they know that orders outside \(P^{t-tr}\pm Pr\) will be canceled. Therefore, in real financial markets version two would not cause any change, and the result would be exactly the same as the normal version.
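To make the contrast concrete, the two regulation mechanisms can be sketched as follows. This is our own illustrative code, not the authors' implementation; the function names are hypothetical. The price limit clamps each order price into the band around \(P^{t-tr}\), while the circuit breaker leaves prices untouched but halts order placement for \(t_{2}\) steps once the band is breached.

```python
def clip_order_price(order_price: float, p_ref: float, pr: float) -> float:
    """Price limit: clamp an order price into [p_ref - pr, p_ref + pr],
    where p_ref stands for the market price tr steps earlier (P^{t-tr})."""
    return max(p_ref - pr, min(p_ref + pr, order_price))


def circuit_breaker_deadline(p_now: float, p_ref: float, pr: float,
                             halt_until: int, t: int, t2: int) -> int:
    """Circuit breaker: if P^t breaches the band [p_ref - pr, p_ref + pr],
    halt all order placement and cancellation for t2 steps.
    Returns the (possibly updated) time until which trading is halted."""
    if t < halt_until:
        return halt_until          # a halt is already in effect
    if p_now < p_ref - pr or p_now > p_ref + pr:
        return t + t2              # trigger a new halt
    return halt_until              # no breach: nothing changes


# Example: with p_ref = 10000 and pr = 100, a sell order at 9850
# is moved up to the lower limit price 9900 under the price limit.
print(clip_order_price(9850.0, 10000.0, 100.0))
```

Version two of the price limit would simply drop (rather than clip) orders falling outside the band, which is why no orders accumulate at the limit price under it.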
The reason we introduce version two is that we want to investigate the effect of multiple orders existing at \(P^{t-tr}\pm Pr\); we therefore use agents that do not care whether their orders are cancelled, even though this is unrealistic in actual financial markets.

Fig. 3: Example of the time evolution of market prices with the price limit, with the circuit breaker (\(tr=10000\) and \(Pr=100\)), and with no regulation.

## III Simulation results

In this study, we determined and validated the parameters of the model so as to replicate the fat-tail and volatility clustering that are stably observed as stylized facts for any asset and in any period, as in Mizuta et al. [6]. We deliberately did not replicate other, unstably observed stylized facts because, as mentioned, the simplicity of the model is very important for this study: unnecessary replication of macro phenomena leads to models that are overfitted and too complex. For more details, see Appendix V-B and V-C.

We set \(\delta P=0.01\) and \(P_{f}=10000\), and for the agents, \(n=1000\), \(w_{1,max}=1\), \(w_{2,max}=10\), \(w_{3,max}=1\), \(\tau_{max}=10000\), \(\sigma_{e}=0.03\), \(P_{d}=1000\), and \(t_{c}=10000\). For the erroneous orders, we set \(t_{ms}=30000\), \(t_{me}=60000\), and \(p_{m}=0.15\); for the stop-loss orders, \(P_{lmin}=1000\), \(P_{lmax}=3000\), \(t_{lmin}=10000\), \(t_{lmax}=100000\), and \(p_{l}=0.35\). The simulations ran to \(t=t_{e}=150000\).

The price limit and the circuit breaker share the same parameters (\(tr\) and \(Pr\)). We simulated \(35\;(=5\times 7)\) cases for \(tr=1000, 2000, 5000, 10000,\) and \(20000\), and for \(Pr=10, 20, 50, 100, 200, 500,\) and \(1000\), with the other parameters fixed and the same random number table used. We then repeated these runs 100 times, changing the random number table each time, and computed the average falling depth (\(P_{f}-\) lowest \(P^{t}\); see also Fig. 1 (left)).
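As an illustration, an agent's order decision, Eqs. (5)-(7), can be sketched with the parameter values above. This is a minimal sketch under our own naming conventions, not the authors' code; in particular, drawing the integer lag \(\tau_{j}\) with `randint` and interpreting \(\sigma_{e}\) as a variance (so the standard deviation is its square root) are our reading of the text.

```python
import math
import random

# Parameter values taken from the text (Section III).
P_f, sigma_e, P_d = 10000.0, 0.03, 1000.0
w_max = (1.0, 10.0, 1.0)   # (w_{1,max}, w_{2,max}, w_{3,max})
tau_max = 10000

rng = random.Random(0)     # fixed seed for reproducibility of the sketch
# Drawn once per agent at the start of the simulation:
w = [rng.uniform(0.0, wm) for wm in w_max]
tau = rng.randint(1, tau_max)

def order(prices, t):
    """Return (order_price, side) for one agent order at time t,
    given the mid-price history `prices` (prices[t-1] is P^{t-1})."""
    eps = rng.gauss(0.0, math.sqrt(sigma_e))        # noise; sigma_e is a variance
    r_e = (w[0] * math.log(P_f / prices[t - 1])     # fundamental term of Eq. (5)
           + w[1] * math.log(prices[t - 1] / prices[t - tau - 1])  # technical term
           + w[2] * eps) / sum(w)
    p_e = prices[t - 1] * math.exp(r_e)             # expected price, Eq. (6)
    p_o = p_e + P_d * (2.0 * rng.random() - 1.0)    # order price, Eq. (7)
    return p_o, ("buy" if p_e > p_o else "sell")

# Usage: with a flat price history at the fundamental value, both log terms
# vanish and the order price lands within P_d of roughly 10000.
price, side = order([10000.0] * 20002, 20001)
```

The buy/sell rule in the last line reflects the main-text condition; the bootstrap rule for \(t<t_{c}\) (footnote 6) is omitted for brevity.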
In the price limit case, sharply falling prices cause many sell orders to accumulate around the lower limit price, \(P^{t-tr}-Pr\). The threshold \(tr=10000\) is exactly the cancel period \(t_{c}=10000\). When \(tr<t_{c}\), \(P^{t-tr}-Pr\) changes before the accumulated sell orders are cancelled, which leads to the accumulation of even more sell orders at various prices, as shown in Fig. 5. This accumulation then acts like a wall against buy orders, which prevents prices from rising. Therefore, it is harder for the price limit to prevent falling prices when \(tr<t_{c}\) than when \(tr>t_{c}\). Indeed, with price limit version two, which does not accumulate sell orders around the lower limit price, falling is prevented roughly as effectively as with the circuit breaker when \(tr<t_{c}\), as Table IV indicates.

Figure 4 shows an example of the time evolution of market prices (mid-prices) with price limit version two, the price limit, and the circuit breaker for \(tr=2000\), \(Pr=20\). As we can see, the time evolution of market prices with price limit version two was almost the same as that with the circuit breaker. Table V shows the order book at \(t=t_{me}=60000\) for the example in Fig. 4; the numbers are orders aggregated over price ranges in increments of 20. With the price limit, many sell orders accumulated and prevented the prices from rising.

To summarize, the price limit and the circuit breaker prevent falling prices with essentially the same effectiveness when the \(Pr\) and \(tr\) parameters are the same. However, the price limit is less effective when \(tr<t_{c}\): many sell orders accumulate around the lower limit price, and when \(tr<t_{c}\), \(P^{t-tr}-Pr\) changes before the accumulated sell orders are cancelled, leading to the accumulation of even more sell orders at various prices.
This accumulation acts like a wall against buy orders, and this wall is what prevents the prices from rising.

## IV Summary

In this study, we expanded Mizuta et al.'s artificial market model [6] by adding stop-loss behavior of the agents and a circuit breaker, and used it to investigate whether the price limit or the circuit breaker is more effective at preventing falling market prices. Our findings show that the price limit and the circuit breaker are essentially equally effective when the limit price range (\(Pr\)) and limit time range (\(tr\)) parameters are the same. However, the price limit is less effective when \(tr\) is smaller than the cancel time range (\(t_{c}\)). With the price limit, many sell orders accumulate around the lower limit price; when \(tr<t_{c}\), the lower limit price changes before the accumulated sell orders are cancelled, leading to the accumulation of even more sell orders at various prices. This accumulation then acts like a wall against buy orders, preventing the prices from rising.

Caution is warranted because these results pertain only to a limited situation. Specifically, the conclusion that the circuit breaker is better than the price limit applies only when the reason for falling prices is erroneous orders and when individual stocks are regulated. In cases where other causes make market prices fall (for example, large variations in the fundamental price) or where individual stocks are not regulated (for example, where stock indexes are regulated instead), the results might differ. In fact, in real financial markets, market prices frequently fall for reasons other than erroneous orders, and there are many cases where a stock index is regulated by circuit breakers while individual stocks are not. Wang et al.
[16] indicated that if a stock index is regulated and the index approaches the limit price of the circuit breaker, some traders hurry to sell, and their sell orders make the index fall; that is, a circuit breaker sometimes induces a sharp fall rather than preventing it. In the current work, we did not examine such trader behavior, so this and the other caveats mentioned above will be the focus of our future work.

## V Appendix

### _Brief history and contributions of artificial market models_

An artificial market model is an agent-based model of a financial market. There are thorough reviews of such models [17, 18, 19, 20, 21, 22]. Artificial market models have been used to investigate the mechanism by which stylized facts7 (fat-tails, volatility clustering, and so on) emerge [23, 26].

Footnote 7: A stylized fact is a term used in economics to refer to empirical findings that are so consistent (for example, across a wide range of instruments, markets and time periods) that they are accepted as truth [27].

Here, let us briefly examine the nature of financial market phenomena such as bubbles and crashes, as described in [24, 25]. Micro-macro feedback loops are said to have played very important roles in bubbles and crashes, and artificial market models can treat such loops directly. A number of projects have built generic artificial market models, such as the U-mart project in Japan in the 2000s. (Kita et al. provide a comprehensive review of the U-mart project [28].)
These projects have helped to explain the nature of financial market phenomena and the mechanism by which stylized facts emerge. Artificial market models, however, have rarely been used to investigate the rules and regulations of financial markets. After the bankruptcy of Lehman Brothers in 2008, some researchers argued that traditional economics had not found ways to design markets that work well, and suggested that artificial market models could do so. Indeed, in Nature, Farmer and Foley [29] explained that "such (agent based) economic models should be able to provide an alternative tool to give insight into how government policies could affect the broad characteristics of economic performance, by quantitatively exploring how the economy is likely to react under different scenarios." Richard Bookstaber, an expert on risk management who has worked for investment banks and hedge funds, wrote a book [30] that "provides a nontechnical introduction to agent-based modeling, an alternative to neoclassical economics that shows great promise in predicting crises, averting them, and helping us recover from them." In 2010, Jean-Claude Trichet, then President of the European Central Bank (ECB) [31], stated that "agent-based modelling dispenses with the optimization assumption and allows for more complex interactions between agents. Such approaches are worthy of our attention."

Financial regulators and exchanges, who decide rules and regulations, are especially interested in using artificial market models to design markets that work well. Indeed, the Japan Exchange Group (JPX), the parent company of the Tokyo Stock Exchange, has published 40 JPX working papers, including 12 that use artificial market models, as of December 20228.
Footnote 8: [https://www.jpx.co.jp/english/corporate/research-study/working-paper/index.html](https://www.jpx.co.jp/english/corporate/research-study/working-paper/index.html)

Mizuta [4] reviewed other previous agent-based models for designing a financial market that works well.

### _Basic concept for constructing a model_

An artificial market model, which is an agent-based model of financial markets, can be used to investigate situations that have never occurred, handle regulation changes that have never been made, and isolate the pure contribution of these changes to price formation and liquidity [2, 3]. These are the advantages of artificial market simulation. However, the outputs of such a simulation are not accurate or credible forecasts of the actual future. Instead, the simulation should reveal possible mechanisms that affect price formation through many simulation runs, e.g., by searching over parameters or by purely comparing the states before and after a change. The possible mechanisms revealed by these runs provide new intelligence and insight into the effects of such changes on price formation in actual financial markets. Other methods of study, e.g., empirical studies, cannot reveal such possible mechanisms.

Artificial markets should replicate the macro phenomena that exist generally for any asset at any time. Price variation, which is a kind of macro phenomenon, is not explicitly modeled in artificial markets. Only the micro processes, the agents (general investors) and the price determination mechanism (the financial exchange), are explicitly modeled; macro phenomena emerge as the outcome of interactions among these micro processes. Therefore, the simulation outputs should replicate existing macro phenomena in order to show that the simulation models are plausible for actual markets. However, it is not the primary purpose of an artificial market to replicate specific macro phenomena only for a specific asset or period.
Unnecessary replication of macro phenomena leads to models that are overfitted and too complex. Such models would prevent us from understanding and discovering the mechanisms that affect price formation because the number of related factors would increase. In addition, artificial market models that are too complex are often criticized because they are very difficult to evaluate [19]. A model that is too complex not only prevents us from understanding mechanisms but can also output arbitrary results by overfitting its many parameters. It is more difficult for simpler models to produce arbitrary results, and such models are easier to evaluate. Therefore, we constructed an artificial market model that is as simple as possible, and we deliberately do not implement agents covering all the kinds of investors who exist in actual financial markets.

Such simplicity is very important not only for artificial market models but for agent-based models in general. Gilbert argued that there are three types of agent-based models: abstract models, middle-range models, and facsimile models [13]. Gilbert said that "the aim of abstract models is to demonstrate some basic social process that may lie behind many areas of social life." Axelrod argued that, to reveal a mechanism, abstract models should be as simple as possible, because a needlessly complex model prevents understanding of the mechanism; he called this principle KISS (keep it simple, stupid) [14]. For example, Thomas Schelling, who received the Nobel Prize in economics, used an agent-based model to discuss the mechanism of racial segregation. The model was built very simply compared with an actual town in order to focus on the mechanism [32]. While it could not predict the segregation situation in an actual town, it could explain the mechanism of segregation as a phenomenon. Naturally, the model was not calibrated to empirical data, because such calibration would have made it more complex and hindered understanding.
For such abstract models, we should be cautious about complicating the model through calibration to empirical data; this is fundamental to modeling [33]. Indeed, many artificial market models have been validated by replicating fat-tails and volatility clustering, which are very famous stylized facts in financial markets [2, 3, 17, 19]. The aim of facsimile models, on the other hand, is to replicate a specific situation and "predict" the future exactly; facsimile models therefore need at least to be calibrated to empirical data and should replicate existing social phenomena exactly.

As Michael Weisberg put it [33], "Modeling, (is) the indirect study of real-world systems via the construction and analysis of models." "Modeling is not always aimed at purely veridical representation. Rather, they worked hard to identify the features of these systems that were most salient to their investigations." Effective models therefore differ depending on the phenomena they focus on. Thus, our model is effective only for the purpose of this study and not for others. The aim of our study is to understand how important properties (behaviors, algorithms) affect macro phenomena and play a role in the financial system, rather than to represent actual financial markets precisely.

The discussion above holds not only for artificial markets but also for agent-based models used in fields other than financial markets; Schelling's segregation model, mentioned above, is one such example [32]. Michael Weisberg studied what mathematical and simulation models are in the first place and cited the example of a map [33].
Needless to say, a map models the geographical features on the way to a destination. With a simple map, we can easily understand the way to the destination; a satellite photo, while replicating the actual geographical features very well, does not let us easily find the way. The title page of Michael Weisberg's book [33] quotes a passage from a short story by Jorge Luis Borges [34]: "In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it...In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars." The story, in which a map is enlarged to the same size as the real Empire to become the most detailed map possible, is an analogy for the fact that too detailed a model is not useful. It offers one of the most important lessons for building and using any model.

### _Validation of the model_

In many previous artificial market studies, the models were validated by determining whether they could explain stylized facts, such as a fat-tail or volatility clustering [2, 3, 17, 19]. A fat-tail means that the kurtosis of price returns is positive. Volatility clustering means that squared returns have a positive autocorrelation that decays slowly as the lag becomes longer. Many empirical studies, e.g., that of Sewell [27], have shown that both stylized facts (fat-tail and volatility clustering) exist statistically in almost all financial markets. Conversely, they have also shown that only the fat-tail and volatility clustering are stably observed for any asset and in any period, because financial markets are generally unstable. This leads to the conclusion that an artificial market should replicate the macro phenomena that exist generally for any asset at any time: fat-tails and volatility clustering.
Other stylized facts should be replicated only when the purpose of the study relates to them, to avoid making the model needlessly complex. In this study, our model should replicate only the universally established stylized facts about the time evolution of market prices, namely fat-tails and volatility clustering, because we focus on the general, universal impacts of the rules on market prices. We deliberately did not replicate other, unstably observed stylized facts because, as mentioned, the simplicity of the model is very important for this study: unnecessary replication of macro phenomena leads to models that are overfitted and too complex. This is an example of how empirical studies can inform an artificial market model.

The kurtosis of price returns and the autocorrelation of squared returns are stably and significantly positive, but the magnitudes of these values are unstable and differ greatly depending on the asset and/or period; very broad magnitudes of about \(1\sim 100\) and about \(0\sim 0.2\), respectively, have been observed [27]. For these reasons, an artificial market model should replicate these values as significantly positive and within a reasonable range. It is not essential for the model to replicate specific values of the stylized facts, because those values are unstable in actual financial markets.

Table VI lists the statistics showing the stylized facts: the kurtosis of price returns over \(100\) tick times (\(\ln(P^{t}/P^{t-100})\)) and the autocorrelation coefficient of squared returns over \(100\) tick times, without the erroneous orders. Note that without the erroneous orders, no stop-loss is triggered and no regulation is activated. These results show that the model replicates the statistical characteristics, fat-tails and volatility clustering, observed in real financial markets.
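The validation statistics above can be computed from a simulated price series as follows. This is a minimal sketch in our own code (the function names are ours, not the authors'): positive excess kurtosis of the 100-tick log returns indicates fat-tails, and a positive, slowly decaying autocorrelation of the squared returns indicates volatility clustering.

```python
import math

def log_returns(prices, k=100):
    """k-tick log returns ln(P^t / P^{t-k}) of a price series."""
    return [math.log(prices[i] / prices[i - k]) for i in range(k, len(prices))]

def excess_kurtosis(x):
    """Sample excess kurtosis (0 for a normal distribution; >0 = fat-tailed)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * var ** 2) - 3.0

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / (n - lag)
    return cov / var

# Usage on a simulated mid-price series `prices`:
#   rets = log_returns(prices, k=100)
#   fat_tail   = excess_kurtosis(rets) > 0
#   clustering = autocorr([r * r for r in rets], lag=1) > 0
```

Checking these quantities against the broad empirical ranges cited in the text (roughly \(1\sim 100\) for kurtosis and \(0\sim 0.2\) for the squared-return autocorrelation) is the validation criterion described above.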